Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'tty-3.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty

Pull tty/serial driver patches from Greg KH:
"Here's the big tty/serial driver pull request for 3.12-rc1.

Lots of n_tty reworks to resolve some very long-standing issues,
removing the 3-4 different locks that were taken for every character.
This code has been beaten on for a long time in linux-next with no
reported regressions.

Other than that, a range of serial and tty driver updates and
revisions. Full details in the shortlog"

* tag 'tty-3.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (226 commits)
hvc_xen: Remove unnecessary __GFP_ZERO from kzalloc
serial: imx: initialize the local variable
tty: ar933x_uart: add device tree support and binding documentation
tty: ar933x_uart: allow to build the driver as a module
ARM: dts: msm: Update uartdm compatible strings
devicetree: serial: Document msm_serial bindings
serial: unify serial bindings into a single dir
serial: fsl-imx-uart: Cleanup duplicate device tree binding
tty: ar933x_uart: use config_enabled() macro to clean up ifdefs
tty: ar933x_uart: remove superfluous assignment of ar933x_uart_driver.nr
tty: ar933x_uart: use the clk API to get the uart clock
tty: serial: cpm_uart: Adding proper request of GPIO used by cpm_uart driver
serial: sirf: fix the amount of serial ports
serial: sirf: define macro for some magic numbers of USP
serial: icom: move array overflow checks earlier
TTY: amiserial, remove unnecessary platform_set_drvdata()
serial: st-asc: remove unnecessary platform_set_drvdata()
msm_serial: Send more than 1 character on the console w/ UARTDM
msm_serial: Add support for non-GSBI UARTDM devices
msm_serial: Switch clock consumer strings and simplify code
...

+6782 -3049
+8 -14
Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
··· 1 - * Freescale i.MX UART controller 1 + * Freescale i.MX Universal Asynchronous Receiver/Transmitter (UART) 2 2 3 3 Required properties: 4 - - compatible : should be "fsl,imx21-uart" 4 + - compatible : Should be "fsl,<soc>-uart" 5 5 - reg : Address and length of the register set for the device 6 - - interrupts : Should contain UART interrupt number 6 + - interrupts : Should contain uart interrupt 7 7 8 8 Optional properties: 9 - - fsl,uart-has-rtscts: indicate that RTS/CTS signals are used 9 + - fsl,uart-has-rtscts : Indicate the uart has rts and cts 10 + - fsl,irda-mode : Indicate the uart supports irda mode 11 + - fsl,dte-mode : Indicate the uart works in DTE mode. The uart works 12 + is DCE mode by default. 10 13 11 14 Note: Each uart controller should have an alias correctly numbered 12 15 in "aliases" node. 13 16 14 17 Example: 15 18 16 - - From imx51.dtsi: 17 19 aliases { 18 20 serial0 = &uart1; 19 - serial1 = &uart2; 20 - serial2 = &uart3; 21 21 }; 22 22 23 23 uart1: serial@73fbc000 { 24 24 compatible = "fsl,imx51-uart", "fsl,imx21-uart"; 25 25 reg = <0x73fbc000 0x4000>; 26 26 interrupts = <31>; 27 - status = "disabled"; 28 - } 29 - 30 - - From imx51-babbage.dts: 31 - uart1: serial@73fbc000 { 32 27 fsl,uart-has-rtscts; 33 - status = "okay"; 28 + fsl,dte-mode; 34 29 }; 35 -
+25
Documentation/devicetree/bindings/serial/qcom,msm-uart.txt
··· 1 + * MSM Serial UART 2 + 3 + The MSM serial UART hardware is designed for low-speed use cases where a 4 + dma-engine isn't needed. From a software perspective it's mostly compatible 5 + with the MSM serial UARTDM except that it only supports reading and writing one 6 + character at a time. 7 + 8 + Required properties: 9 + - compatible: Should contain "qcom,msm-uart" 10 + - reg: Should contain UART register location and length. 11 + - interrupts: Should contain UART interrupt. 12 + - clocks: Should contain the core clock. 13 + - clock-names: Should be "core". 14 + 15 + Example: 16 + 17 + A uart device at 0xa9c00000 with interrupt 11. 18 + 19 + serial@a9c00000 { 20 + compatible = "qcom,msm-uart"; 21 + reg = <0xa9c00000 0x1000>; 22 + interrupts = <11>; 23 + clocks = <&uart_cxc>; 24 + clock-names = "core"; 25 + };
+53
Documentation/devicetree/bindings/serial/qcom,msm-uartdm.txt
··· 1 + * MSM Serial UARTDM 2 + 3 + The MSM serial UARTDM hardware is designed for high-speed use cases where the 4 + transmit and/or receive channels can be offloaded to a dma-engine. From a 5 + software perspective it's mostly compatible with the MSM serial UART except 6 + that it supports reading and writing multiple characters at a time. 7 + 8 + Required properties: 9 + - compatible: Should contain at least "qcom,msm-uartdm". 10 + A more specific property should be specified as follows depending 11 + on the version: 12 + "qcom,msm-uartdm-v1.1" 13 + "qcom,msm-uartdm-v1.2" 14 + "qcom,msm-uartdm-v1.3" 15 + "qcom,msm-uartdm-v1.4" 16 + - reg: Should contain UART register locations and lengths. The first 17 + register shall specify the main control registers. An optional second 18 + register location shall specify the GSBI control region. 19 + "qcom,msm-uartdm-v1.3" is the only compatible value that might 20 + need the GSBI control region. 21 + - interrupts: Should contain UART interrupt. 22 + - clocks: Should contain the core clock and the AHB clock. 23 + - clock-names: Should be "core" for the core clock and "iface" for the 24 + AHB clock. 25 + 26 + Optional properties: 27 + - dmas: Should contain dma specifiers for transmit and receive channels 28 + - dma-names: Should contain "tx" for transmit and "rx" for receive channels 29 + 30 + Examples: 31 + 32 + A uartdm v1.4 device with dma capabilities. 33 + 34 + serial@f991e000 { 35 + compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm"; 36 + reg = <0xf991e000 0x1000>; 37 + interrupts = <0 108 0x0>; 38 + clocks = <&blsp1_uart2_apps_cxc>, <&blsp1_ahb_cxc>; 39 + clock-names = "core", "iface"; 40 + dmas = <&dma0 0>, <&dma0 1>; 41 + dma-names = "tx", "rx"; 42 + }; 43 + 44 + A uartdm v1.3 device without dma capabilities and part of a GSBI complex. 
45 + 46 + serial@19c40000 { 47 + compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; 48 + reg = <0x19c40000 0x1000>, 49 + <0x19c00000 0x1000>; 50 + interrupts = <0 195 0x0>; 51 + clocks = <&gsbi5_uart_cxc>, <&gsbi5_ahb_cxc>; 52 + clock-names = "core", "iface"; 53 + };
+33
Documentation/devicetree/bindings/serial/sirf-uart.txt
··· 1 + * CSR SiRFprimaII/atlasVI Universal Synchronous Asynchronous Receiver/Transmitter * 2 + 3 + Required properties: 4 + - compatible : Should be "sirf,prima2-uart" or "sirf, prima2-usp-uart" 5 + - reg : Offset and length of the register set for the device 6 + - interrupts : Should contain uart interrupt 7 + - fifosize : Should define hardware rx/tx fifo size 8 + - clocks : Should contain uart clock number 9 + 10 + Optional properties: 11 + - sirf,uart-has-rtscts: we have hardware flow controller pins in hardware 12 + - rts-gpios: RTS pin for USP-based UART if sirf,uart-has-rtscts is true 13 + - cts-gpios: CTS pin for USP-based UART if sirf,uart-has-rtscts is true 14 + 15 + Example: 16 + 17 + uart0: uart@b0050000 { 18 + cell-index = <0>; 19 + compatible = "sirf,prima2-uart"; 20 + reg = <0xb0050000 0x1000>; 21 + interrupts = <17>; 22 + fifosize = <128>; 23 + clocks = <&clks 13>; 24 + }; 25 + 26 + On the board-specific dts, we can put rts-gpios and cts-gpios like 27 + 28 + usp@b0090000 { 29 + compatible = "sirf,prima2-usp-uart"; 30 + sirf,uart-has-rtscts; 31 + rts-gpios = <&gpio 15 0>; 32 + cts-gpios = <&gpio 46 0>; 33 + };
+18
Documentation/devicetree/bindings/serial/st-asc.txt
··· 1 + *st-asc(Serial Port) 2 + 3 + Required properties: 4 + - compatible : Should be "st,asc". 5 + - reg, reg-names, interrupts, interrupt-names : Standard way to define device 6 + resources with names. look in 7 + Documentation/devicetree/bindings/resource-names.txt 8 + 9 + Optional properties: 10 + - st,hw-flow-ctrl bool flag to enable hardware flow control. 11 + - st,force-m1 bool flat to force asc to be in Mode-1 recommeded 12 + for high bit rates (above 19.2K) 13 + Example: 14 + serial@fe440000{ 15 + compatible = "st,asc"; 16 + reg = <0xfe440000 0x2c>; 17 + interrupts = <0 209 0>; 18 + };
Documentation/devicetree/bindings/tty/serial/arc-uart.txt → Documentation/devicetree/bindings/serial/arc-uart.txt
+17 -1
Documentation/devicetree/bindings/tty/serial/atmel-usart.txt → Documentation/devicetree/bindings/serial/atmel-usart.txt
··· 10 10 Optional properties: 11 11 - atmel,use-dma-rx: use of PDC or DMA for receiving data 12 12 - atmel,use-dma-tx: use of PDC or DMA for transmitting data 13 + - add dma bindings for dma transfer: 14 + - dmas: DMA specifier, consisting of a phandle to DMA controller node, 15 + memory peripheral interface and USART DMA channel ID, FIFO configuration. 16 + Refer to dma.txt and atmel-dma.txt for details. 17 + - dma-names: "rx" for RX channel, "tx" for TX channel. 13 18 14 19 <chip> compatible description: 15 20 - at91rm9200: legacy USART support 16 21 - at91sam9260: generic USART implementation for SAM9 SoCs 17 22 18 23 Example: 19 - 24 + - use PDC: 20 25 usart0: serial@fff8c000 { 21 26 compatible = "atmel,at91sam9260-usart"; 22 27 reg = <0xfff8c000 0x4000>; ··· 30 25 atmel,use-dma-tx; 31 26 }; 32 27 28 + - use DMA: 29 + usart0: serial@f001c000 { 30 + compatible = "atmel,at91sam9260-usart"; 31 + reg = <0xf001c000 0x100>; 32 + interrupts = <12 4 5>; 33 + atmel,use-dma-rx; 34 + atmel,use-dma-tx; 35 + dmas = <&dma0 2 0x3>, 36 + <&dma0 2 0x204>; 37 + dma-names = "tx", "rx"; 38 + };
Documentation/devicetree/bindings/tty/serial/efm32-uart.txt → Documentation/devicetree/bindings/serial/efm32-uart.txt
-22
Documentation/devicetree/bindings/tty/serial/fsl-imx-uart.txt
··· 1 - * Freescale i.MX Universal Asynchronous Receiver/Transmitter (UART) 2 - 3 - Required properties: 4 - - compatible : Should be "fsl,<soc>-uart" 5 - - reg : Address and length of the register set for the device 6 - - interrupts : Should contain uart interrupt 7 - 8 - Optional properties: 9 - - fsl,uart-has-rtscts : Indicate the uart has rts and cts 10 - - fsl,irda-mode : Indicate the uart supports irda mode 11 - - fsl,dte-mode : Indicate the uart works in DTE mode. The uart works 12 - is DCE mode by default. 13 - 14 - Example: 15 - 16 - serial@73fbc000 { 17 - compatible = "fsl,imx51-uart", "fsl,imx21-uart"; 18 - reg = <0x73fbc000 0x4000>; 19 - interrupts = <31>; 20 - fsl,uart-has-rtscts; 21 - fsl,dte-mode; 22 - };
Documentation/devicetree/bindings/tty/serial/fsl-lpuart.txt → Documentation/devicetree/bindings/serial/fsl-lpuart.txt
+4
Documentation/devicetree/bindings/tty/serial/fsl-mxs-auart.txt → Documentation/devicetree/bindings/serial/fsl-mxs-auart.txt
··· 10 10 Refer to dma.txt and fsl-mxs-dma.txt for details. 11 11 - dma-names: "rx" for RX channel, "tx" for TX channel. 12 12 13 + Optional properties: 14 + - fsl,uart-has-rtscts : Indicate the UART has RTS and CTS lines, 15 + it also means you enable the DMA support for this UART. 16 + 13 17 Example: 14 18 auart0: serial@8006a000 { 15 19 compatible = "fsl,imx28-auart", "fsl,imx23-auart";
-27
Documentation/devicetree/bindings/tty/serial/msm_serial.txt
··· 1 - * Qualcomm MSM UART 2 - 3 - Required properties: 4 - - compatible : 5 - - "qcom,msm-uart", and one of "qcom,msm-hsuart" or 6 - "qcom,msm-lsuart". 7 - - reg : offset and length of the register set for the device 8 - for the hsuart operating in compatible mode, there should be a 9 - second pair describing the gsbi registers. 10 - - interrupts : should contain the uart interrupt. 11 - 12 - There are two different UART blocks used in MSM devices, 13 - "qcom,msm-hsuart" and "qcom,msm-lsuart". The msm-serial driver is 14 - able to handle both of these, and matches against the "qcom,msm-uart" 15 - as the compatibility. 16 - 17 - The registers for the "qcom,msm-hsuart" device need to specify both 18 - register blocks, even for the common driver. 19 - 20 - Example: 21 - 22 - uart@19c400000 { 23 - compatible = "qcom,msm-hsuart", "qcom,msm-uart"; 24 - reg = <0x19c40000 0x1000>, 25 - <0x19c00000 0x1000>; 26 - interrupts = <195>; 27 - };
Documentation/devicetree/bindings/tty/serial/nxp-lpc32xx-hsuart.txt → Documentation/devicetree/bindings/serial/nxp-lpc32xx-hsuart.txt
Documentation/devicetree/bindings/tty/serial/of-serial.txt → Documentation/devicetree/bindings/serial/of-serial.txt
+34
Documentation/devicetree/bindings/tty/serial/qca,ar9330-uart.txt
··· 1 + * Qualcomm Atheros AR9330 High-Speed UART 2 + 3 + Required properties: 4 + 5 + - compatible: Must be "qca,ar9330-uart" 6 + 7 + - reg: Specifies the physical base address of the controller and 8 + the length of the memory mapped region. 9 + 10 + - interrupt-parent: The phandle for the interrupt controller that 11 + services interrupts for this device. 12 + 13 + - interrupts: Specifies the interrupt source of the parent interrupt 14 + controller. The format of the interrupt specifier depends on the 15 + parent interrupt controller. 16 + 17 + Additional requirements: 18 + 19 + Each UART port must have an alias correctly numbered in "aliases" 20 + node. 21 + 22 + Example: 23 + 24 + aliases { 25 + serial0 = &uart0; 26 + }; 27 + 28 + uart0: uart@18020000 { 29 + compatible = "qca,ar9330-uart"; 30 + reg = <0x18020000 0x14>; 31 + 32 + interrupt-parent = <&intc>; 33 + interrupts = <3>; 34 + };
Documentation/devicetree/bindings/tty/serial/snps-dw-apb-uart.txt → Documentation/devicetree/bindings/serial/snps-dw-apb-uart.txt
Documentation/devicetree/bindings/tty/serial/via,vt8500-uart.txt → Documentation/devicetree/bindings/serial/via,vt8500-uart.txt
+10
Documentation/kernel-parameters.txt
··· 3322 3322 them quite hard to use for exploits but 3323 3323 might break your system. 3324 3324 3325 + vt.color= [VT] Default text color. 3326 + Format: 0xYX, X = foreground, Y = background. 3327 + Default: 0x07 = light gray on black. 3328 + 3325 3329 vt.cur_default= [VT] Default cursor shape. 3326 3330 Format: 0xCCBBAA, where AA, BB, and CC are the same as 3327 3331 the parameters of the <Esc>[?A;B;Cc escape sequence; ··· 3364 3360 i.e. cursors will be created by default unless 3365 3361 overridden by individual drivers. 0 will hide 3366 3362 cursors, 1 will display them. 3363 + 3364 + vt.italic= [VT] Default color for italic text; 0-15. 3365 + Default: 2 = green. 3366 + 3367 + vt.underline= [VT] Default color for underlined text; 0-15. 3368 + Default: 3 = cyan. 3367 3369 3368 3370 watchdog timers [HW,WDT] For information on watchdog timers, 3369 3371 see Documentation/watchdog/watchdog-parameters.txt
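The 0xYX format above packs the background color in the high nibble and the foreground in the low nibble, using the standard 16-color VGA palette. An illustrative kernel command line fragment (values chosen as an example, not defaults):

```
vt.color=0x17 vt.italic=10 vt.underline=11
```

Here 0x17 selects Y=1 (blue background) with X=7 (light-gray foreground), while italic and underline text would use bright green (10) and bright cyan (11).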
+1
arch/arm/boot/dts/imx28-evk.dts
··· 220 220 auart0: serial@8006a000 { 221 221 pinctrl-names = "default"; 222 222 pinctrl-0 = <&auart0_pins_a>; 223 + fsl,uart-has-rtscts; 223 224 status = "okay"; 224 225 }; 225 226
+1 -1
arch/arm/boot/dts/msm8660-surf.dts
··· 38 38 }; 39 39 40 40 serial@19c40000 { 41 - compatible = "qcom,msm-hsuart", "qcom,msm-uart"; 41 + compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; 42 42 reg = <0x19c40000 0x1000>, 43 43 <0x19c00000 0x1000>; 44 44 interrupts = <0 195 0x0>;
+1 -1
arch/arm/boot/dts/msm8960-cdp.dts
··· 38 38 }; 39 39 40 40 serial@16440000 { 41 - compatible = "qcom,msm-hsuart", "qcom,msm-uart"; 41 + compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; 42 42 reg = <0x16440000 0x1000>, 43 43 <0x16400000 0x1000>; 44 44 interrupts = <0 154 0x0>;
+3 -3
arch/arm/mach-msm/devices-msm7x00.c
··· 456 456 CLK_PCOM("tsif_ref_clk", TSIF_REF_CLK, NULL, 0), 457 457 CLK_PCOM("tv_dac_clk", TV_DAC_CLK, NULL, 0), 458 458 CLK_PCOM("tv_enc_clk", TV_ENC_CLK, NULL, 0), 459 - CLK_PCOM("uart_clk", UART1_CLK, "msm_serial.0", OFF), 460 - CLK_PCOM("uart_clk", UART2_CLK, "msm_serial.1", 0), 461 - CLK_PCOM("uart_clk", UART3_CLK, "msm_serial.2", OFF), 459 + CLK_PCOM("core", UART1_CLK, "msm_serial.0", OFF), 460 + CLK_PCOM("core", UART2_CLK, "msm_serial.1", 0), 461 + CLK_PCOM("core", UART3_CLK, "msm_serial.2", OFF), 462 462 CLK_PCOM("uart1dm_clk", UART1DM_CLK, NULL, OFF), 463 463 CLK_PCOM("uart2dm_clk", UART2DM_CLK, NULL, 0), 464 464 CLK_PCOM("usb_hs_clk", USB_HS_CLK, "msm_hsusb", OFF),
+1 -1
arch/arm/mach-msm/devices-msm7x30.c
··· 211 211 CLK_PCOM("spi_pclk", SPI_P_CLK, NULL, 0), 212 212 CLK_PCOM("tv_dac_clk", TV_DAC_CLK, NULL, 0), 213 213 CLK_PCOM("tv_enc_clk", TV_ENC_CLK, NULL, 0), 214 - CLK_PCOM("uart_clk", UART2_CLK, "msm_serial.1", 0), 214 + CLK_PCOM("core", UART2_CLK, "msm_serial.1", 0), 215 215 CLK_PCOM("usb_phy_clk", USB_PHY_CLK, NULL, 0), 216 216 CLK_PCOM("usb_hs_clk", USB_HS_CLK, NULL, OFF), 217 217 CLK_PCOM("usb_hs_pclk", USB_HS_P_CLK, NULL, OFF),
+3 -3
arch/arm/mach-msm/devices-qsd8x50.c
··· 358 358 CLK_PCOM("tsif_ref_clk", TSIF_REF_CLK, NULL, 0), 359 359 CLK_PCOM("tv_dac_clk", TV_DAC_CLK, NULL, 0), 360 360 CLK_PCOM("tv_enc_clk", TV_ENC_CLK, NULL, 0), 361 - CLK_PCOM("uart_clk", UART1_CLK, NULL, OFF), 362 - CLK_PCOM("uart_clk", UART2_CLK, NULL, 0), 363 - CLK_PCOM("uart_clk", UART3_CLK, "msm_serial.2", OFF), 361 + CLK_PCOM("core", UART1_CLK, NULL, OFF), 362 + CLK_PCOM("core", UART2_CLK, NULL, 0), 363 + CLK_PCOM("core", UART3_CLK, "msm_serial.2", OFF), 364 364 CLK_PCOM("uartdm_clk", UART1DM_CLK, NULL, OFF), 365 365 CLK_PCOM("uartdm_clk", UART2DM_CLK, NULL, 0), 366 366 CLK_PCOM("usb_hs_clk", USB_HS_CLK, NULL, OFF),
-1
arch/mips/sni/a20r.c
··· 122 122 123 123 static struct sccnxp_pdata sccnxp_data = { 124 124 .reg_shift = 2, 125 - .frequency = 3686400, 126 125 .mctrl_cfg[0] = MCTRL_SIG(DTR_OP, LINE_OP7) | 127 126 MCTRL_SIG(RTS_OP, LINE_OP3) | 128 127 MCTRL_SIG(DSR_IP, LINE_IP5) |
+4 -4
drivers/net/irda/irtty-sir.c
··· 123 123 124 124 tty = priv->tty; 125 125 126 - mutex_lock(&tty->termios_mutex); 126 + down_write(&tty->termios_rwsem); 127 127 old_termios = tty->termios; 128 128 cflag = tty->termios.c_cflag; 129 129 tty_encode_baud_rate(tty, speed, speed); 130 130 if (tty->ops->set_termios) 131 131 tty->ops->set_termios(tty, &old_termios); 132 132 priv->io.speed = speed; 133 - mutex_unlock(&tty->termios_mutex); 133 + up_write(&tty->termios_rwsem); 134 134 135 135 return 0; 136 136 } ··· 280 280 struct ktermios old_termios; 281 281 int cflag; 282 282 283 - mutex_lock(&tty->termios_mutex); 283 + down_write(&tty->termios_rwsem); 284 284 old_termios = tty->termios; 285 285 cflag = tty->termios.c_cflag; 286 286 ··· 292 292 tty->termios.c_cflag = cflag; 293 293 if (tty->ops->set_termios) 294 294 tty->ops->set_termios(tty, &old_termios); 295 - mutex_unlock(&tty->termios_mutex); 295 + up_write(&tty->termios_rwsem); 296 296 } 297 297 298 298 /*****************************************************************/
+26
drivers/of/base.c
··· 32 32 EXPORT_SYMBOL(of_allnodes); 33 33 struct device_node *of_chosen; 34 34 struct device_node *of_aliases; 35 + static struct device_node *of_stdout; 35 36 36 37 DEFINE_MUTEX(of_aliases_mutex); 37 38 ··· 1596 1595 of_chosen = of_find_node_by_path("/chosen"); 1597 1596 if (of_chosen == NULL) 1598 1597 of_chosen = of_find_node_by_path("/chosen@0"); 1598 + 1599 + if (of_chosen) { 1600 + const char *name; 1601 + 1602 + name = of_get_property(of_chosen, "linux,stdout-path", NULL); 1603 + if (name) 1604 + of_stdout = of_find_node_by_path(name); 1605 + } 1606 + 1599 1607 of_aliases = of_find_node_by_path("/aliases"); 1600 1608 if (!of_aliases) 1601 1609 return; ··· 1713 1703 return curv; 1714 1704 } 1715 1705 EXPORT_SYMBOL_GPL(of_prop_next_string); 1706 + 1707 + /** 1708 + * of_device_is_stdout_path - check if a device node matches the 1709 + * linux,stdout-path property 1710 + * 1711 + * Check if this device node matches the linux,stdout-path property 1712 + * in the chosen node. return true if yes, false otherwise. 1713 + */ 1714 + int of_device_is_stdout_path(struct device_node *dn) 1715 + { 1716 + if (!of_stdout) 1717 + return false; 1718 + 1719 + return of_stdout == dn; 1720 + } 1721 + EXPORT_SYMBOL_GPL(of_device_is_stdout_path);
-1
drivers/staging/comedi/comedidev.h
··· 390 390 */ 391 391 #define PCI_VENDOR_ID_KOLTER 0x1001 392 392 #define PCI_VENDOR_ID_ICP 0x104c 393 - #define PCI_VENDOR_ID_AMCC 0x10e8 394 393 #define PCI_VENDOR_ID_DT 0x1116 395 394 #define PCI_VENDOR_ID_IOTECH 0x1616 396 395 #define PCI_VENDOR_ID_CONTEC 0x1221
+2
drivers/staging/dgrp/dgrp_tty.c
··· 1120 1120 if (!sent_printer_offstr) 1121 1121 dgrp_tty_flush_buffer(tty); 1122 1122 1123 + spin_unlock_irqrestore(&nd->nd_lock, lock_flags); 1123 1124 tty_ldisc_flush(tty); 1125 + spin_lock_irqsave(&nd->nd_lock, lock_flags); 1124 1126 break; 1125 1127 } 1126 1128
-2
drivers/tty/amiserial.c
··· 1785 1785 free_irq(IRQ_AMIGA_TBE, state); 1786 1786 free_irq(IRQ_AMIGA_RBF, state); 1787 1787 1788 - platform_set_drvdata(pdev, NULL); 1789 - 1790 1788 return error; 1791 1789 } 1792 1790
+10 -1
drivers/tty/hvc/hvc_console.c
··· 361 361 tty->driver_data = NULL; 362 362 tty_port_put(&hp->port); 363 363 printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc); 364 - } 364 + } else 365 + /* We are ready... raise DTR/RTS */ 366 + if (C_BAUD(tty)) 367 + if (hp->ops->dtr_rts) 368 + hp->ops->dtr_rts(hp, 1); 369 + 365 370 /* Force wakeup of the polling thread */ 366 371 hvc_kick(); 367 372 ··· 397 392 spin_unlock_irqrestore(&hp->port.lock, flags); 398 393 /* We are done with the tty pointer now. */ 399 394 tty_port_tty_set(&hp->port, NULL); 395 + 396 + if (C_HUPCL(tty)) 397 + if (hp->ops->dtr_rts) 398 + hp->ops->dtr_rts(hp, 0); 400 399 401 400 if (hp->ops->notifier_del) 402 401 hp->ops->notifier_del(hp, hp->data);
+3
drivers/tty/hvc/hvc_console.h
··· 75 75 /* tiocmget/set implementation */ 76 76 int (*tiocmget)(struct hvc_struct *hp); 77 77 int (*tiocmset)(struct hvc_struct *hp, unsigned int set, unsigned int clear); 78 + 79 + /* Callbacks to handle tty ports */ 80 + void (*dtr_rts)(struct hvc_struct *hp, int raise); 78 81 }; 79 82 80 83 /* Register a vterm and a slot index for use as a console (console_init) */
+51 -13
drivers/tty/hvc/hvc_iucv.c
··· 656 656 } 657 657 658 658 /** 659 + * hvc_iucv_dtr_rts() - HVC notifier for handling DTR/RTS 660 + * @hp: Pointer the HVC device (struct hvc_struct) 661 + * @raise: Non-zero to raise or zero to lower DTR/RTS lines 662 + * 663 + * This routine notifies the HVC back-end to raise or lower DTR/RTS 664 + * lines. Raising DTR/RTS is ignored. Lowering DTR/RTS indicates to 665 + * drop the IUCV connection (similar to hang up the modem). 666 + */ 667 + static void hvc_iucv_dtr_rts(struct hvc_struct *hp, int raise) 668 + { 669 + struct hvc_iucv_private *priv; 670 + struct iucv_path *path; 671 + 672 + /* Raising the DTR/RTS is ignored as IUCV connections can be 673 + * established at any times. 674 + */ 675 + if (raise) 676 + return; 677 + 678 + priv = hvc_iucv_get_private(hp->vtermno); 679 + if (!priv) 680 + return; 681 + 682 + /* Lowering the DTR/RTS lines disconnects an established IUCV 683 + * connection. 684 + */ 685 + flush_sndbuf_sync(priv); 686 + 687 + spin_lock_bh(&priv->lock); 688 + path = priv->path; /* save reference to IUCV path */ 689 + priv->path = NULL; 690 + priv->iucv_state = IUCV_DISCONN; 691 + spin_unlock_bh(&priv->lock); 692 + 693 + /* Sever IUCV path outside of priv->lock due to lock ordering of: 694 + * priv->lock <--> iucv_table_lock */ 695 + if (path) { 696 + iucv_path_sever(path, NULL); 697 + iucv_path_free(path); 698 + } 699 + } 700 + 701 + /** 659 702 * hvc_iucv_notifier_del() - HVC notifier for closing a TTY for the last time. 660 703 * @hp: Pointer to the HVC device (struct hvc_struct) 661 704 * @id: Additional data (originally passed to hvc_alloc): 662 705 * the index of an struct hvc_iucv_private instance. 663 706 * 664 707 * This routine notifies the HVC back-end that the last tty device fd has been 665 - * closed. The function calls hvc_iucv_cleanup() to clean up the struct 666 - * hvc_iucv_private instance. 708 + * closed. The function cleans up tty resources. 
The clean-up of the IUCV 709 + * connection is done in hvc_iucv_dtr_rts() and depends on the HUPCL termios 710 + * control setting. 667 711 * 668 712 * Locking: struct hvc_iucv_private->lock 669 713 */ 670 714 static void hvc_iucv_notifier_del(struct hvc_struct *hp, int id) 671 715 { 672 716 struct hvc_iucv_private *priv; 673 - struct iucv_path *path; 674 717 675 718 priv = hvc_iucv_get_private(id); 676 719 if (!priv) ··· 722 679 flush_sndbuf_sync(priv); 723 680 724 681 spin_lock_bh(&priv->lock); 725 - path = priv->path; /* save reference to IUCV path */ 726 - priv->path = NULL; 727 - hvc_iucv_cleanup(priv); 682 + destroy_tty_buffer_list(&priv->tty_outqueue); 683 + destroy_tty_buffer_list(&priv->tty_inqueue); 684 + priv->tty_state = TTY_CLOSED; 685 + priv->sndbuf_len = 0; 728 686 spin_unlock_bh(&priv->lock); 729 - 730 - /* sever IUCV path outside of priv->lock due to lock ordering of: 731 - * priv->lock <--> iucv_table_lock */ 732 - if (path) { 733 - iucv_path_sever(path, NULL); 734 - iucv_path_free(path); 735 - } 736 687 } 737 688 738 689 /** ··· 968 931 .notifier_add = hvc_iucv_notifier_add, 969 932 .notifier_del = hvc_iucv_notifier_del, 970 933 .notifier_hangup = hvc_iucv_notifier_hangup, 934 + .dtr_rts = hvc_iucv_dtr_rts, 971 935 }; 972 936 973 937 /* Suspend / resume device operations */
+3 -3
drivers/tty/hvc/hvc_xen.c
··· 208 208 209 209 info = vtermno_to_xencons(HVC_COOKIE); 210 210 if (!info) { 211 - info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL | __GFP_ZERO); 211 + info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL); 212 212 if (!info) 213 213 return -ENOMEM; 214 214 } else if (info->intf != NULL) { ··· 257 257 258 258 info = vtermno_to_xencons(HVC_COOKIE); 259 259 if (!info) { 260 - info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL | __GFP_ZERO); 260 + info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL); 261 261 if (!info) 262 262 return -ENOMEM; 263 263 } else if (info->intf != NULL) { ··· 284 284 285 285 info = vtermno_to_xencons(HVC_COOKIE); 286 286 if (!info) { 287 - info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL | __GFP_ZERO); 287 + info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL); 288 288 if (!info) 289 289 return -ENOMEM; 290 290 }
+16 -13
drivers/tty/n_gsm.c
··· 807 807 int h = dlci->adaption - 1; 808 808 809 809 total_size = 0; 810 - while(1) { 810 + while (1) { 811 811 len = kfifo_len(dlci->fifo); 812 812 if (len == 0) 813 813 return total_size; ··· 827 827 switch (dlci->adaption) { 828 828 case 1: /* Unstructured */ 829 829 break; 830 - case 2: /* Unstructed with modem bits. Always one byte as we never 831 - send inline break data */ 830 + case 2: /* Unstructed with modem bits. 831 + Always one byte as we never send inline break data */ 832 832 *dp++ = gsm_encode_modem(dlci); 833 833 break; 834 834 } ··· 968 968 unsigned long flags; 969 969 int sweep; 970 970 971 - if (dlci->constipated) 971 + if (dlci->constipated) 972 972 return; 973 973 974 974 spin_lock_irqsave(&dlci->gsm->tx_lock, flags); ··· 981 981 gsm_dlci_data_output(dlci->gsm, dlci); 982 982 } 983 983 if (sweep) 984 - gsm_dlci_data_sweep(dlci->gsm); 984 + gsm_dlci_data_sweep(dlci->gsm); 985 985 spin_unlock_irqrestore(&dlci->gsm->tx_lock, flags); 986 986 } 987 987 ··· 1138 1138 static void gsm_control_rls(struct gsm_mux *gsm, u8 *data, int clen) 1139 1139 { 1140 1140 struct tty_port *port; 1141 - unsigned int addr = 0 ; 1141 + unsigned int addr = 0; 1142 1142 u8 bits; 1143 1143 int len = clen; 1144 1144 u8 *dp = data; ··· 1740 1740 1741 1741 if ((gsm->control & ~PF) == UI) 1742 1742 gsm->fcs = gsm_fcs_add_block(gsm->fcs, gsm->buf, gsm->len); 1743 - if (gsm->encoding == 0){ 1744 - /* WARNING: gsm->received_fcs is used for gsm->encoding = 0 only. 1745 - In this case it contain the last piece of data 1746 - required to generate final CRC */ 1743 + if (gsm->encoding == 0) { 1744 + /* WARNING: gsm->received_fcs is used for 1745 + gsm->encoding = 0 only. 
1746 + In this case it contain the last piece of data 1747 + required to generate final CRC */ 1747 1748 gsm->fcs = gsm_fcs_add(gsm->fcs, gsm->received_fcs); 1748 1749 } 1749 1750 if (gsm->fcs != GOOD_FCS) { ··· 2905 2904 gsm = gsm_mux[mux]; 2906 2905 if (gsm->dead) 2907 2906 return -EL2HLT; 2908 - /* If DLCI 0 is not yet fully open return an error. This is ok from a locking 2909 - perspective as we don't have to worry about this if DLCI0 is lost */ 2910 - if (gsm->dlci[0] && gsm->dlci[0]->state != DLCI_OPEN) 2907 + /* If DLCI 0 is not yet fully open return an error. 2908 + This is ok from a locking 2909 + perspective as we don't have to worry about this 2910 + if DLCI0 is lost */ 2911 + if (gsm->dlci[0] && gsm->dlci[0]->state != DLCI_OPEN) 2911 2912 return -EL2NSYNC; 2912 2913 dlci = gsm->dlci[line]; 2913 2914 if (dlci == NULL) {
+846 -570
drivers/tty/n_tty.c
··· 50 50 #include <linux/uaccess.h> 51 51 #include <linux/module.h> 52 52 #include <linux/ratelimit.h> 53 + #include <linux/vmalloc.h> 53 54 54 55 55 56 /* number of characters left in xmit buffer before select has we have room */ ··· 75 74 #define ECHO_OP_SET_CANON_COL 0x81 76 75 #define ECHO_OP_ERASE_TAB 0x82 77 76 77 + #define ECHO_COMMIT_WATERMARK 256 78 + #define ECHO_BLOCK 256 79 + #define ECHO_DISCARD_WATERMARK N_TTY_BUF_SIZE - (ECHO_BLOCK + 32) 80 + 81 + 82 + #undef N_TTY_TRACE 83 + #ifdef N_TTY_TRACE 84 + # define n_tty_trace(f, args...) trace_printk(f, ##args) 85 + #else 86 + # define n_tty_trace(f, args...) 87 + #endif 88 + 78 89 struct n_tty_data { 79 - unsigned int column; 90 + /* producer-published */ 91 + size_t read_head; 92 + size_t canon_head; 93 + size_t echo_head; 94 + size_t echo_commit; 95 + DECLARE_BITMAP(char_map, 256); 96 + 97 + /* private to n_tty_receive_overrun (single-threaded) */ 80 98 unsigned long overrun_time; 81 99 int num_overrun; 82 100 101 + /* non-atomic */ 102 + bool no_room; 103 + 104 + /* must hold exclusive termios_rwsem to reset these */ 83 105 unsigned char lnext:1, erasing:1, raw:1, real_raw:1, icanon:1; 84 - unsigned char echo_overrun:1; 85 106 86 - DECLARE_BITMAP(process_char_map, 256); 107 + /* shared by producer and consumer */ 108 + char read_buf[N_TTY_BUF_SIZE]; 87 109 DECLARE_BITMAP(read_flags, N_TTY_BUF_SIZE); 110 + unsigned char echo_buf[N_TTY_BUF_SIZE]; 88 111 89 - char *read_buf; 90 - int read_head; 91 - int read_tail; 92 - int read_cnt; 93 112 int minimum_to_wake; 94 113 95 - unsigned char *echo_buf; 96 - unsigned int echo_pos; 97 - unsigned int echo_cnt; 114 + /* consumer-published */ 115 + size_t read_tail; 116 + size_t line_start; 98 117 99 - int canon_data; 100 - unsigned long canon_head; 118 + /* protected by output lock */ 119 + unsigned int column; 101 120 unsigned int canon_column; 121 + size_t echo_tail; 102 122 103 123 struct mutex atomic_read_lock; 104 124 struct mutex output_lock; 105 - struct 
mutex echo_lock; 106 - raw_spinlock_t read_lock; 107 125 }; 126 + 127 + static inline size_t read_cnt(struct n_tty_data *ldata) 128 + { 129 + return ldata->read_head - ldata->read_tail; 130 + } 131 + 132 + static inline unsigned char read_buf(struct n_tty_data *ldata, size_t i) 133 + { 134 + return ldata->read_buf[i & (N_TTY_BUF_SIZE - 1)]; 135 + } 136 + 137 + static inline unsigned char *read_buf_addr(struct n_tty_data *ldata, size_t i) 138 + { 139 + return &ldata->read_buf[i & (N_TTY_BUF_SIZE - 1)]; 140 + } 141 + 142 + static inline unsigned char echo_buf(struct n_tty_data *ldata, size_t i) 143 + { 144 + return ldata->echo_buf[i & (N_TTY_BUF_SIZE - 1)]; 145 + } 146 + 147 + static inline unsigned char *echo_buf_addr(struct n_tty_data *ldata, size_t i) 148 + { 149 + return &ldata->echo_buf[i & (N_TTY_BUF_SIZE - 1)]; 150 + } 108 151 109 152 static inline int tty_put_user(struct tty_struct *tty, unsigned char x, 110 153 unsigned char __user *ptr) ··· 159 114 return put_user(x, ptr); 160 115 } 161 116 162 - /** 163 - * n_tty_set_room - receive space 164 - * @tty: terminal 165 - * 166 - * Updates tty->receive_room to reflect the currently available space 167 - * in the input buffer, and re-schedules the flip buffer work if space 168 - * just became available. 169 - * 170 - * Locks: Concurrent update is protected with read_lock 171 - */ 172 - 173 - static int set_room(struct tty_struct *tty) 117 + static int receive_room(struct tty_struct *tty) 174 118 { 175 119 struct n_tty_data *ldata = tty->disc_data; 176 120 int left; 177 - int old_left; 178 - unsigned long flags; 179 - 180 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 181 121 182 122 if (I_PARMRK(tty)) { 183 123 /* Multiply read_cnt by 3, since each byte might take up to 184 124 * three times as many spaces when PARMRK is set (depending on 185 125 * its flags, e.g. parity error). 
*/ 186 - left = N_TTY_BUF_SIZE - ldata->read_cnt * 3 - 1; 126 + left = N_TTY_BUF_SIZE - read_cnt(ldata) * 3 - 1; 187 127 } else 188 - left = N_TTY_BUF_SIZE - ldata->read_cnt - 1; 128 + left = N_TTY_BUF_SIZE - read_cnt(ldata) - 1; 189 129 190 130 /* 191 131 * If we are doing input canonicalization, and there are no ··· 179 149 * characters will be beeped. 180 150 */ 181 151 if (left <= 0) 182 - left = ldata->icanon && !ldata->canon_data; 183 - old_left = tty->receive_room; 184 - tty->receive_room = left; 152 + left = ldata->icanon && ldata->canon_head == ldata->read_tail; 185 153 186 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 187 - 188 - return left && !old_left; 154 + return left; 189 155 } 156 + 157 + /** 158 + * n_tty_set_room - receive space 159 + * @tty: terminal 160 + * 161 + * Re-schedules the flip buffer work if space just became available. 162 + * 163 + * Caller holds exclusive termios_rwsem 164 + * or 165 + * n_tty_read()/consumer path: 166 + * holds non-exclusive termios_rwsem 167 + */ 190 168 191 169 static void n_tty_set_room(struct tty_struct *tty) 192 170 { 171 + struct n_tty_data *ldata = tty->disc_data; 172 + 193 173 /* Did this open up the receive buffer? 
We may need to flip */ 194 - if (set_room(tty)) { 174 + if (unlikely(ldata->no_room) && receive_room(tty)) { 175 + ldata->no_room = 0; 176 + 195 177 WARN_RATELIMIT(tty->port->itty == NULL, 196 178 "scheduling with invalid itty\n"); 197 179 /* see if ldisc has been killed - if so, this means that ··· 212 170 */ 213 171 WARN_RATELIMIT(test_bit(TTY_LDISC_HALTED, &tty->flags), 214 172 "scheduling buffer work for halted ldisc\n"); 215 - schedule_work(&tty->port->buf.work); 173 + queue_work(system_unbound_wq, &tty->port->buf.work); 216 174 } 217 175 } 218 176 219 - static void put_tty_queue_nolock(unsigned char c, struct n_tty_data *ldata) 177 + static ssize_t chars_in_buffer(struct tty_struct *tty) 220 178 { 221 - if (ldata->read_cnt < N_TTY_BUF_SIZE) { 222 - ldata->read_buf[ldata->read_head] = c; 223 - ldata->read_head = (ldata->read_head + 1) & (N_TTY_BUF_SIZE-1); 224 - ldata->read_cnt++; 179 + struct n_tty_data *ldata = tty->disc_data; 180 + ssize_t n = 0; 181 + 182 + if (!ldata->icanon) 183 + n = read_cnt(ldata); 184 + else 185 + n = ldata->canon_head - ldata->read_tail; 186 + return n; 187 + } 188 + 189 + /** 190 + * n_tty_write_wakeup - asynchronous I/O notifier 191 + * @tty: tty device 192 + * 193 + * Required for the ptys, serial driver etc. since processes 194 + * that attach themselves to the master and rely on ASYNC 195 + * IO must be woken up 196 + */ 197 + 198 + static void n_tty_write_wakeup(struct tty_struct *tty) 199 + { 200 + if (tty->fasync && test_and_clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags)) 201 + kill_fasync(&tty->fasync, SIGIO, POLL_OUT); 202 + } 203 + 204 + static void n_tty_check_throttle(struct tty_struct *tty) 205 + { 206 + if (tty->driver->type == TTY_DRIVER_TYPE_PTY) 207 + return; 208 + /* 209 + * Check the remaining room for the input canonicalization 210 + * mode. We don't want to throttle the driver if we're in 211 + * canonical mode and don't have a newline yet! 
212 + */ 213 + while (1) { 214 + int throttled; 215 + tty_set_flow_change(tty, TTY_THROTTLE_SAFE); 216 + if (receive_room(tty) >= TTY_THRESHOLD_THROTTLE) 217 + break; 218 + throttled = tty_throttle_safe(tty); 219 + if (!throttled) 220 + break; 225 221 } 222 + __tty_set_flow_change(tty, 0); 223 + } 224 + 225 + static void n_tty_check_unthrottle(struct tty_struct *tty) 226 + { 227 + if (tty->driver->type == TTY_DRIVER_TYPE_PTY && 228 + tty->link->ldisc->ops->write_wakeup == n_tty_write_wakeup) { 229 + if (chars_in_buffer(tty) > TTY_THRESHOLD_UNTHROTTLE) 230 + return; 231 + if (!tty->count) 232 + return; 233 + n_tty_set_room(tty); 234 + n_tty_write_wakeup(tty->link); 235 + wake_up_interruptible_poll(&tty->link->write_wait, POLLOUT); 236 + return; 237 + } 238 + 239 + /* If there is enough space in the read buffer now, let the 240 + * low-level driver know. We use chars_in_buffer() to 241 + * check the buffer, as it now knows about canonical mode. 242 + * Otherwise, if the driver is throttled and the line is 243 + * longer than TTY_THRESHOLD_UNTHROTTLE in canonical mode, 244 + * we won't get any more characters. 245 + */ 246 + 247 + while (1) { 248 + int unthrottled; 249 + tty_set_flow_change(tty, TTY_UNTHROTTLE_SAFE); 250 + if (chars_in_buffer(tty) > TTY_THRESHOLD_UNTHROTTLE) 251 + break; 252 + if (!tty->count) 253 + break; 254 + n_tty_set_room(tty); 255 + unthrottled = tty_unthrottle_safe(tty); 256 + if (!unthrottled) 257 + break; 258 + } 259 + __tty_set_flow_change(tty, 0); 226 260 } 227 261 228 262 /** ··· 306 188 * @c: character 307 189 * @ldata: n_tty data 308 190 * 309 - * Add a character to the tty read_buf queue. This is done under the 310 - * read_lock to serialize character addition and also to protect us 311 - * against parallel reads or flushes 191 + * Add a character to the tty read_buf queue. 
192 + * 193 + * n_tty_receive_buf()/producer path: 194 + * caller holds non-exclusive termios_rwsem 195 + * modifies read_head 196 + * 197 + * read_head is only considered 'published' if canonical mode is 198 + * not active. 312 199 */ 313 200 314 - static void put_tty_queue(unsigned char c, struct n_tty_data *ldata) 201 + static inline void put_tty_queue(unsigned char c, struct n_tty_data *ldata) 315 202 { 316 - unsigned long flags; 317 - /* 318 - * The problem of stomping on the buffers ends here. 319 - * Why didn't anyone see this one coming? --AJK 320 - */ 321 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 322 - put_tty_queue_nolock(c, ldata); 323 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 203 + *read_buf_addr(ldata, ldata->read_head++) = c; 324 204 } 325 205 326 206 /** ··· 328 212 * Reset the read buffer counters and clear the flags. 329 213 * Called from n_tty_open() and n_tty_flush_buffer(). 330 214 * 331 - * Locking: tty_read_lock for read fields. 215 + * Locking: caller holds exclusive termios_rwsem 216 + * (or locking is not required) 332 217 */ 333 218 334 219 static void reset_buffer_flags(struct n_tty_data *ldata) 335 220 { 336 - unsigned long flags; 221 + ldata->read_head = ldata->canon_head = ldata->read_tail = 0; 222 + ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0; 223 + ldata->line_start = 0; 337 224 338 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 339 - ldata->read_head = ldata->read_tail = ldata->read_cnt = 0; 340 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 341 - 342 - mutex_lock(&ldata->echo_lock); 343 - ldata->echo_pos = ldata->echo_cnt = ldata->echo_overrun = 0; 344 - mutex_unlock(&ldata->echo_lock); 345 - 346 - ldata->canon_head = ldata->canon_data = ldata->erasing = 0; 225 + ldata->erasing = 0; 347 226 bitmap_zero(ldata->read_flags, N_TTY_BUF_SIZE); 348 227 } 349 228 ··· 362 251 * buffer flushed (eg at hangup) or when the N_TTY line discipline 363 252 * internally has to clean the pending 
queue (for example some signals). 364 253 * 365 - * Locking: ctrl_lock, read_lock. 254 + * Holds termios_rwsem to exclude producer/consumer while 255 + * buffer indices are reset. 256 + * 257 + * Locking: ctrl_lock, exclusive termios_rwsem 366 258 */ 367 259 368 260 static void n_tty_flush_buffer(struct tty_struct *tty) 369 261 { 262 + down_write(&tty->termios_rwsem); 370 263 reset_buffer_flags(tty->disc_data); 371 264 n_tty_set_room(tty); 372 265 373 266 if (tty->link) 374 267 n_tty_packet_mode_flush(tty); 268 + up_write(&tty->termios_rwsem); 375 269 } 376 270 377 271 /** ··· 386 270 * Report the number of characters buffered to be delivered to user 387 271 * at this instant in time. 388 272 * 389 - * Locking: read_lock 273 + * Locking: exclusive termios_rwsem 390 274 */ 391 275 392 276 static ssize_t n_tty_chars_in_buffer(struct tty_struct *tty) 393 277 { 394 - struct n_tty_data *ldata = tty->disc_data; 395 - unsigned long flags; 396 - ssize_t n = 0; 278 + ssize_t n; 397 279 398 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 399 - if (!ldata->icanon) { 400 - n = ldata->read_cnt; 401 - } else if (ldata->canon_data) { 402 - n = (ldata->canon_head > ldata->read_tail) ? 403 - ldata->canon_head - ldata->read_tail : 404 - ldata->canon_head + (N_TTY_BUF_SIZE - ldata->read_tail); 405 - } 406 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 280 + WARN_ONCE(1, "%s is deprecated and scheduled for removal.", __func__); 281 + 282 + down_write(&tty->termios_rwsem); 283 + n = chars_in_buffer(tty); 284 + up_write(&tty->termios_rwsem); 407 285 return n; 408 286 } 409 287 ··· 642 532 * are prioritized. Also, when control characters are echoed with a 643 533 * prefixed "^", the pair is treated atomically and thus not separated. 
644 534 * 645 - * Locking: output_lock to protect column state and space left, 646 - * echo_lock to protect the echo buffer 535 + * Locking: callers must hold output_lock 647 536 */ 648 537 649 - static void process_echoes(struct tty_struct *tty) 538 + static size_t __process_echoes(struct tty_struct *tty) 650 539 { 651 540 struct n_tty_data *ldata = tty->disc_data; 652 - int space, nr; 541 + int space, old_space; 542 + size_t tail; 653 543 unsigned char c; 654 - unsigned char *cp, *buf_end; 655 544 656 - if (!ldata->echo_cnt) 657 - return; 545 + old_space = space = tty_write_room(tty); 658 546 659 - mutex_lock(&ldata->output_lock); 660 - mutex_lock(&ldata->echo_lock); 661 - 662 - space = tty_write_room(tty); 663 - 664 - buf_end = ldata->echo_buf + N_TTY_BUF_SIZE; 665 - cp = ldata->echo_buf + ldata->echo_pos; 666 - nr = ldata->echo_cnt; 667 - while (nr > 0) { 668 - c = *cp; 547 + tail = ldata->echo_tail; 548 + while (ldata->echo_commit != tail) { 549 + c = echo_buf(ldata, tail); 669 550 if (c == ECHO_OP_START) { 670 551 unsigned char op; 671 - unsigned char *opp; 672 552 int no_space_left = 0; 673 553 674 554 /* ··· 666 566 * operation, get the next byte, which is either the 667 567 * op code or a control character value. 
668 568 */ 669 - opp = cp + 1; 670 - if (opp == buf_end) 671 - opp -= N_TTY_BUF_SIZE; 672 - op = *opp; 569 + op = echo_buf(ldata, tail + 1); 673 570 674 571 switch (op) { 675 572 unsigned int num_chars, num_bs; 676 573 677 574 case ECHO_OP_ERASE_TAB: 678 - if (++opp == buf_end) 679 - opp -= N_TTY_BUF_SIZE; 680 - num_chars = *opp; 575 + num_chars = echo_buf(ldata, tail + 2); 681 576 682 577 /* 683 578 * Determine how many columns to go back ··· 698 603 if (ldata->column > 0) 699 604 ldata->column--; 700 605 } 701 - cp += 3; 702 - nr -= 3; 606 + tail += 3; 703 607 break; 704 608 705 609 case ECHO_OP_SET_CANON_COL: 706 610 ldata->canon_column = ldata->column; 707 - cp += 2; 708 - nr -= 2; 611 + tail += 2; 709 612 break; 710 613 711 614 case ECHO_OP_MOVE_BACK_COL: 712 615 if (ldata->column > 0) 713 616 ldata->column--; 714 - cp += 2; 715 - nr -= 2; 617 + tail += 2; 716 618 break; 717 619 718 620 case ECHO_OP_START: ··· 721 629 tty_put_char(tty, ECHO_OP_START); 722 630 ldata->column++; 723 631 space--; 724 - cp += 2; 725 - nr -= 2; 632 + tail += 2; 726 633 break; 727 634 728 635 default: ··· 742 651 tty_put_char(tty, op ^ 0100); 743 652 ldata->column += 2; 744 653 space -= 2; 745 - cp += 2; 746 - nr -= 2; 654 + tail += 2; 747 655 } 748 656 749 657 if (no_space_left) ··· 759 669 tty_put_char(tty, c); 760 670 space -= 1; 761 671 } 762 - cp += 1; 763 - nr -= 1; 672 + tail += 1; 764 673 } 765 - 766 - /* When end of circular buffer reached, wrap around */ 767 - if (cp >= buf_end) 768 - cp -= N_TTY_BUF_SIZE; 769 674 } 770 675 771 - if (nr == 0) { 772 - ldata->echo_pos = 0; 773 - ldata->echo_cnt = 0; 774 - ldata->echo_overrun = 0; 775 - } else { 776 - int num_processed = ldata->echo_cnt - nr; 777 - ldata->echo_pos += num_processed; 778 - ldata->echo_pos &= N_TTY_BUF_SIZE - 1; 779 - ldata->echo_cnt = nr; 780 - if (num_processed > 0) 781 - ldata->echo_overrun = 0; 676 + /* If the echo buffer is nearly full (so that the possibility exists 677 + * of echo overrun before the next 
commit), then discard enough 678 + * data at the tail to prevent a subsequent overrun */ 679 + while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) { 680 + if (echo_buf(ldata, tail) == ECHO_OP_START) { 681 + if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB) 682 + tail += 3; 683 + else 684 + tail += 2; 685 + } else 686 + tail++; 782 687 } 783 688 784 - mutex_unlock(&ldata->echo_lock); 689 + ldata->echo_tail = tail; 690 + return old_space - space; 691 + } 692 + 693 + static void commit_echoes(struct tty_struct *tty) 694 + { 695 + struct n_tty_data *ldata = tty->disc_data; 696 + size_t nr, old, echoed; 697 + size_t head; 698 + 699 + head = ldata->echo_head; 700 + old = ldata->echo_commit - ldata->echo_tail; 701 + 702 + /* Process committed echoes if the accumulated # of bytes 703 + * is over the threshold (and try again each time another 704 + * block is accumulated) */ 705 + nr = head - ldata->echo_tail; 706 + if (nr < ECHO_COMMIT_WATERMARK || (nr % ECHO_BLOCK > old % ECHO_BLOCK)) 707 + return; 708 + 709 + mutex_lock(&ldata->output_lock); 710 + ldata->echo_commit = head; 711 + echoed = __process_echoes(tty); 785 712 mutex_unlock(&ldata->output_lock); 786 713 787 - if (tty->ops->flush_chars) 714 + if (echoed && tty->ops->flush_chars) 788 715 tty->ops->flush_chars(tty); 716 + } 717 + 718 + static void process_echoes(struct tty_struct *tty) 719 + { 720 + struct n_tty_data *ldata = tty->disc_data; 721 + size_t echoed; 722 + 723 + if (!L_ECHO(tty) || ldata->echo_commit == ldata->echo_tail) 724 + return; 725 + 726 + mutex_lock(&ldata->output_lock); 727 + echoed = __process_echoes(tty); 728 + mutex_unlock(&ldata->output_lock); 729 + 730 + if (echoed && tty->ops->flush_chars) 731 + tty->ops->flush_chars(tty); 732 + } 733 + 734 + static void flush_echoes(struct tty_struct *tty) 735 + { 736 + struct n_tty_data *ldata = tty->disc_data; 737 + 738 + if (!L_ECHO(tty) || ldata->echo_commit == ldata->echo_head) 739 + return; 740 + 741 + mutex_lock(&ldata->output_lock); 742 + 
ldata->echo_commit = ldata->echo_head; 743 + __process_echoes(tty); 744 + mutex_unlock(&ldata->output_lock); 789 745 } 790 746 791 747 /** ··· 840 704 * @ldata: n_tty data 841 705 * 842 706 * Add a character or operation byte to the echo buffer. 843 - * 844 - * Should be called under the echo lock to protect the echo buffer. 845 707 */ 846 708 847 - static void add_echo_byte(unsigned char c, struct n_tty_data *ldata) 709 + static inline void add_echo_byte(unsigned char c, struct n_tty_data *ldata) 848 710 { 849 - int new_byte_pos; 850 - 851 - if (ldata->echo_cnt == N_TTY_BUF_SIZE) { 852 - /* Circular buffer is already at capacity */ 853 - new_byte_pos = ldata->echo_pos; 854 - 855 - /* 856 - * Since the buffer start position needs to be advanced, 857 - * be sure to step by a whole operation byte group. 858 - */ 859 - if (ldata->echo_buf[ldata->echo_pos] == ECHO_OP_START) { 860 - if (ldata->echo_buf[(ldata->echo_pos + 1) & 861 - (N_TTY_BUF_SIZE - 1)] == 862 - ECHO_OP_ERASE_TAB) { 863 - ldata->echo_pos += 3; 864 - ldata->echo_cnt -= 2; 865 - } else { 866 - ldata->echo_pos += 2; 867 - ldata->echo_cnt -= 1; 868 - } 869 - } else { 870 - ldata->echo_pos++; 871 - } 872 - ldata->echo_pos &= N_TTY_BUF_SIZE - 1; 873 - 874 - ldata->echo_overrun = 1; 875 - } else { 876 - new_byte_pos = ldata->echo_pos + ldata->echo_cnt; 877 - new_byte_pos &= N_TTY_BUF_SIZE - 1; 878 - ldata->echo_cnt++; 879 - } 880 - 881 - ldata->echo_buf[new_byte_pos] = c; 711 + *echo_buf_addr(ldata, ldata->echo_head++) = c; 882 712 } 883 713 884 714 /** ··· 852 750 * @ldata: n_tty data 853 751 * 854 752 * Add an operation to the echo buffer to move back one column. 
855 - * 856 - * Locking: echo_lock to protect the echo buffer 857 753 */ 858 754 859 755 static void echo_move_back_col(struct n_tty_data *ldata) 860 756 { 861 - mutex_lock(&ldata->echo_lock); 862 757 add_echo_byte(ECHO_OP_START, ldata); 863 758 add_echo_byte(ECHO_OP_MOVE_BACK_COL, ldata); 864 - mutex_unlock(&ldata->echo_lock); 865 759 } 866 760 867 761 /** ··· 866 768 * 867 769 * Add an operation to the echo buffer to set the canon column 868 770 * to the current column. 869 - * 870 - * Locking: echo_lock to protect the echo buffer 871 771 */ 872 772 873 773 static void echo_set_canon_col(struct n_tty_data *ldata) 874 774 { 875 - mutex_lock(&ldata->echo_lock); 876 775 add_echo_byte(ECHO_OP_START, ldata); 877 776 add_echo_byte(ECHO_OP_SET_CANON_COL, ldata); 878 - mutex_unlock(&ldata->echo_lock); 879 777 } 880 778 881 779 /** ··· 887 793 * of input. This information will be used later, along with 888 794 * canon column (if applicable), to go back the correct number 889 795 * of columns. 890 - * 891 - * Locking: echo_lock to protect the echo buffer 892 796 */ 893 797 894 798 static void echo_erase_tab(unsigned int num_chars, int after_tab, 895 799 struct n_tty_data *ldata) 896 800 { 897 - mutex_lock(&ldata->echo_lock); 898 - 899 801 add_echo_byte(ECHO_OP_START, ldata); 900 802 add_echo_byte(ECHO_OP_ERASE_TAB, ldata); 901 803 ··· 903 813 num_chars |= 0x80; 904 814 905 815 add_echo_byte(num_chars, ldata); 906 - 907 - mutex_unlock(&ldata->echo_lock); 908 816 } 909 817 910 818 /** ··· 914 826 * L_ECHO(tty) is true. Called from the driver receive_buf path. 915 827 * 916 828 * This variant does not treat control characters specially. 
917 - * 918 - * Locking: echo_lock to protect the echo buffer 919 829 */ 920 830 921 831 static void echo_char_raw(unsigned char c, struct n_tty_data *ldata) 922 832 { 923 - mutex_lock(&ldata->echo_lock); 924 833 if (c == ECHO_OP_START) { 925 834 add_echo_byte(ECHO_OP_START, ldata); 926 835 add_echo_byte(ECHO_OP_START, ldata); 927 836 } else { 928 837 add_echo_byte(c, ldata); 929 838 } 930 - mutex_unlock(&ldata->echo_lock); 931 839 } 932 840 933 841 /** ··· 936 852 * 937 853 * This variant tags control characters to be echoed as "^X" 938 854 * (where X is the letter representing the control char). 939 - * 940 - * Locking: echo_lock to protect the echo buffer 941 855 */ 942 856 943 857 static void echo_char(unsigned char c, struct tty_struct *tty) 944 858 { 945 859 struct n_tty_data *ldata = tty->disc_data; 946 - 947 - mutex_lock(&ldata->echo_lock); 948 860 949 861 if (c == ECHO_OP_START) { 950 862 add_echo_byte(ECHO_OP_START, ldata); ··· 950 870 add_echo_byte(ECHO_OP_START, ldata); 951 871 add_echo_byte(c, ldata); 952 872 } 953 - 954 - mutex_unlock(&ldata->echo_lock); 955 873 } 956 874 957 875 /** ··· 974 896 * present in the stream from the driver layer. Handles the complexities 975 897 * of UTF-8 multibyte symbols. 976 898 * 977 - * Locking: read_lock for tty buffers 899 + * n_tty_receive_buf()/producer path: 900 + * caller holds non-exclusive termios_rwsem 901 + * modifies read_head 902 + * 903 + * Modifying the read_head is not considered a publish in this context 904 + * because canonical mode is active -- only canon_head publishes 978 905 */ 979 906 980 907 static void eraser(unsigned char c, struct tty_struct *tty) 981 908 { 982 909 struct n_tty_data *ldata = tty->disc_data; 983 910 enum { ERASE, WERASE, KILL } kill_type; 984 - int head, seen_alnums, cnt; 985 - unsigned long flags; 911 + size_t head; 912 + size_t cnt; 913 + int seen_alnums; 986 914 987 - /* FIXME: locking needed ? 
*/ 988 915 if (ldata->read_head == ldata->canon_head) { 989 916 /* process_output('\a', tty); */ /* what do you think? */ 990 917 return; ··· 1000 917 kill_type = WERASE; 1001 918 else { 1002 919 if (!L_ECHO(tty)) { 1003 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 1004 - ldata->read_cnt -= ((ldata->read_head - ldata->canon_head) & 1005 - (N_TTY_BUF_SIZE - 1)); 1006 920 ldata->read_head = ldata->canon_head; 1007 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 1008 921 return; 1009 922 } 1010 923 if (!L_ECHOK(tty) || !L_ECHOKE(tty) || !L_ECHOE(tty)) { 1011 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 1012 - ldata->read_cnt -= ((ldata->read_head - ldata->canon_head) & 1013 - (N_TTY_BUF_SIZE - 1)); 1014 924 ldata->read_head = ldata->canon_head; 1015 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 1016 925 finish_erasing(ldata); 1017 926 echo_char(KILL_CHAR(tty), tty); 1018 927 /* Add a newline if ECHOK is on and ECHOKE is off. */ ··· 1016 941 } 1017 942 1018 943 seen_alnums = 0; 1019 - /* FIXME: Locking ?? 
*/ 1020 944 while (ldata->read_head != ldata->canon_head) { 1021 945 head = ldata->read_head; 1022 946 1023 947 /* erase a single possibly multibyte character */ 1024 948 do { 1025 - head = (head - 1) & (N_TTY_BUF_SIZE-1); 1026 - c = ldata->read_buf[head]; 949 + head--; 950 + c = read_buf(ldata, head); 1027 951 } while (is_continuation(c, tty) && head != ldata->canon_head); 1028 952 1029 953 /* do not partially erase */ ··· 1036 962 else if (seen_alnums) 1037 963 break; 1038 964 } 1039 - cnt = (ldata->read_head - head) & (N_TTY_BUF_SIZE-1); 1040 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 965 + cnt = ldata->read_head - head; 1041 966 ldata->read_head = head; 1042 - ldata->read_cnt -= cnt; 1043 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 1044 967 if (L_ECHO(tty)) { 1045 968 if (L_ECHOPRT(tty)) { 1046 969 if (!ldata->erasing) { ··· 1047 976 /* if cnt > 1, output a multi-byte character */ 1048 977 echo_char(c, tty); 1049 978 while (--cnt > 0) { 1050 - head = (head+1) & (N_TTY_BUF_SIZE-1); 1051 - echo_char_raw(ldata->read_buf[head], 1052 - ldata); 979 + head++; 980 + echo_char_raw(read_buf(ldata, head), ldata); 1053 981 echo_move_back_col(ldata); 1054 982 } 1055 983 } else if (kill_type == ERASE && !L_ECHOE(tty)) { ··· 1056 986 } else if (c == '\t') { 1057 987 unsigned int num_chars = 0; 1058 988 int after_tab = 0; 1059 - unsigned long tail = ldata->read_head; 989 + size_t tail = ldata->read_head; 1060 990 1061 991 /* 1062 992 * Count the columns used for characters ··· 1066 996 * number of columns. 
1067 997 */ 1068 998 while (tail != ldata->canon_head) { 1069 - tail = (tail-1) & (N_TTY_BUF_SIZE-1); 1070 - c = ldata->read_buf[tail]; 999 + tail--; 1000 + c = read_buf(ldata, tail); 1071 1001 if (c == '\t') { 1072 1002 after_tab = 1; 1073 1003 break; ··· 1110 1040 * Locking: ctrl_lock 1111 1041 */ 1112 1042 1113 - static inline void isig(int sig, struct tty_struct *tty) 1043 + static void isig(int sig, struct tty_struct *tty) 1114 1044 { 1115 1045 struct pid *tty_pgrp = tty_get_pgrp(tty); 1116 1046 if (tty_pgrp) { ··· 1126 1056 * An RS232 break event has been hit in the incoming bitstream. This 1127 1057 * can cause a variety of events depending upon the termios settings. 1128 1058 * 1129 - * Called from the receive_buf path so single threaded. 1059 + * n_tty_receive_buf()/producer path: 1060 + * caller holds non-exclusive termios_rwsem 1061 + * publishes read_head via put_tty_queue() 1062 + * 1063 + * Note: may get exclusive termios_rwsem if flushing input buffer 1130 1064 */ 1131 1065 1132 - static inline void n_tty_receive_break(struct tty_struct *tty) 1066 + static void n_tty_receive_break(struct tty_struct *tty) 1133 1067 { 1134 1068 struct n_tty_data *ldata = tty->disc_data; 1135 1069 ··· 1142 1068 if (I_BRKINT(tty)) { 1143 1069 isig(SIGINT, tty); 1144 1070 if (!L_NOFLSH(tty)) { 1071 + /* flushing needs exclusive termios_rwsem */ 1072 + up_read(&tty->termios_rwsem); 1145 1073 n_tty_flush_buffer(tty); 1146 1074 tty_driver_flush_buffer(tty); 1075 + down_read(&tty->termios_rwsem); 1147 1076 } 1148 1077 return; 1149 1078 } ··· 1171 1094 * private. 1172 1095 */ 1173 1096 1174 - static inline void n_tty_receive_overrun(struct tty_struct *tty) 1097 + static void n_tty_receive_overrun(struct tty_struct *tty) 1175 1098 { 1176 1099 struct n_tty_data *ldata = tty->disc_data; 1177 1100 char buf[64]; ··· 1193 1116 * @c: character 1194 1117 * 1195 1118 * Process a parity error and queue the right data to indicate 1196 - * the error case if necessary. 
Locking as per n_tty_receive_buf. 1119 + * the error case if necessary. 1120 + * 1121 + * n_tty_receive_buf()/producer path: 1122 + * caller holds non-exclusive termios_rwsem 1123 + * publishes read_head via put_tty_queue() 1197 1124 */ 1198 - static inline void n_tty_receive_parity_error(struct tty_struct *tty, 1199 - unsigned char c) 1125 + static void n_tty_receive_parity_error(struct tty_struct *tty, unsigned char c) 1200 1126 { 1201 1127 struct n_tty_data *ldata = tty->disc_data; 1202 1128 ··· 1216 1136 wake_up_interruptible(&tty->read_wait); 1217 1137 } 1218 1138 1139 + static void 1140 + n_tty_receive_signal_char(struct tty_struct *tty, int signal, unsigned char c) 1141 + { 1142 + if (!L_NOFLSH(tty)) { 1143 + /* flushing needs exclusive termios_rwsem */ 1144 + up_read(&tty->termios_rwsem); 1145 + n_tty_flush_buffer(tty); 1146 + tty_driver_flush_buffer(tty); 1147 + down_read(&tty->termios_rwsem); 1148 + } 1149 + if (I_IXON(tty)) 1150 + start_tty(tty); 1151 + if (L_ECHO(tty)) { 1152 + echo_char(c, tty); 1153 + commit_echoes(tty); 1154 + } 1155 + isig(signal, tty); 1156 + return; 1157 + } 1158 + 1219 1159 /** 1220 1160 * n_tty_receive_char - perform processing 1221 1161 * @tty: terminal device ··· 1244 1144 * Process an individual character of input received from the driver. 1245 1145 * This is serialized with respect to itself by the rules for the 1246 1146 * driver above. 
1147 + * 1148 + * n_tty_receive_buf()/producer path: 1149 + * caller holds non-exclusive termios_rwsem 1150 + * publishes canon_head if canonical mode is active 1151 + * otherwise, publishes read_head via put_tty_queue() 1152 + * 1153 + * Returns 1 if LNEXT was received, else returns 0 1247 1154 */ 1248 1155 1249 - static inline void n_tty_receive_char(struct tty_struct *tty, unsigned char c) 1156 + static int 1157 + n_tty_receive_char_special(struct tty_struct *tty, unsigned char c) 1250 1158 { 1251 1159 struct n_tty_data *ldata = tty->disc_data; 1252 - unsigned long flags; 1253 1160 int parmrk; 1254 - 1255 - if (ldata->raw) { 1256 - put_tty_queue(c, ldata); 1257 - return; 1258 - } 1259 - 1260 - if (I_ISTRIP(tty)) 1261 - c &= 0x7f; 1262 - if (I_IUCLC(tty) && L_IEXTEN(tty)) 1263 - c = tolower(c); 1264 - 1265 - if (L_EXTPROC(tty)) { 1266 - put_tty_queue(c, ldata); 1267 - return; 1268 - } 1269 - 1270 - if (tty->stopped && !tty->flow_stopped && I_IXON(tty) && 1271 - I_IXANY(tty) && c != START_CHAR(tty) && c != STOP_CHAR(tty) && 1272 - c != INTR_CHAR(tty) && c != QUIT_CHAR(tty) && c != SUSP_CHAR(tty)) { 1273 - start_tty(tty); 1274 - process_echoes(tty); 1275 - } 1276 - 1277 - if (tty->closing) { 1278 - if (I_IXON(tty)) { 1279 - if (c == START_CHAR(tty)) { 1280 - start_tty(tty); 1281 - process_echoes(tty); 1282 - } else if (c == STOP_CHAR(tty)) 1283 - stop_tty(tty); 1284 - } 1285 - return; 1286 - } 1287 - 1288 - /* 1289 - * If the previous character was LNEXT, or we know that this 1290 - * character is not one of the characters that we'll have to 1291 - * handle specially, do shortcut processing to speed things 1292 - * up. 1293 - */ 1294 - if (!test_bit(c, ldata->process_char_map) || ldata->lnext) { 1295 - ldata->lnext = 0; 1296 - parmrk = (c == (unsigned char) '\377' && I_PARMRK(tty)) ? 
1 : 0; 1297 - if (ldata->read_cnt >= (N_TTY_BUF_SIZE - parmrk - 1)) { 1298 - /* beep if no space */ 1299 - if (L_ECHO(tty)) 1300 - process_output('\a', tty); 1301 - return; 1302 - } 1303 - if (L_ECHO(tty)) { 1304 - finish_erasing(ldata); 1305 - /* Record the column of first canon char. */ 1306 - if (ldata->canon_head == ldata->read_head) 1307 - echo_set_canon_col(ldata); 1308 - echo_char(c, tty); 1309 - process_echoes(tty); 1310 - } 1311 - if (parmrk) 1312 - put_tty_queue(c, ldata); 1313 - put_tty_queue(c, ldata); 1314 - return; 1315 - } 1316 1161 1317 1162 if (I_IXON(tty)) { 1318 1163 if (c == START_CHAR(tty)) { 1319 1164 start_tty(tty); 1320 - process_echoes(tty); 1321 - return; 1165 + commit_echoes(tty); 1166 + return 0; 1322 1167 } 1323 1168 if (c == STOP_CHAR(tty)) { 1324 1169 stop_tty(tty); 1325 - return; 1170 + return 0; 1326 1171 } 1327 1172 } 1328 1173 1329 1174 if (L_ISIG(tty)) { 1330 - int signal; 1331 - signal = SIGINT; 1332 - if (c == INTR_CHAR(tty)) 1333 - goto send_signal; 1334 - signal = SIGQUIT; 1335 - if (c == QUIT_CHAR(tty)) 1336 - goto send_signal; 1337 - signal = SIGTSTP; 1338 - if (c == SUSP_CHAR(tty)) { 1339 - send_signal: 1340 - if (!L_NOFLSH(tty)) { 1341 - n_tty_flush_buffer(tty); 1342 - tty_driver_flush_buffer(tty); 1343 - } 1344 - if (I_IXON(tty)) 1345 - start_tty(tty); 1346 - if (L_ECHO(tty)) { 1347 - echo_char(c, tty); 1348 - process_echoes(tty); 1349 - } 1350 - isig(signal, tty); 1351 - return; 1175 + if (c == INTR_CHAR(tty)) { 1176 + n_tty_receive_signal_char(tty, SIGINT, c); 1177 + return 0; 1178 + } else if (c == QUIT_CHAR(tty)) { 1179 + n_tty_receive_signal_char(tty, SIGQUIT, c); 1180 + return 0; 1181 + } else if (c == SUSP_CHAR(tty)) { 1182 + n_tty_receive_signal_char(tty, SIGTSTP, c); 1183 + return 0; 1352 1184 } 1185 + } 1186 + 1187 + if (tty->stopped && !tty->flow_stopped && I_IXON(tty) && I_IXANY(tty)) { 1188 + start_tty(tty); 1189 + process_echoes(tty); 1353 1190 } 1354 1191 1355 1192 if (c == '\r') { 1356 1193 if 
(I_IGNCR(tty)) 1357 - return; 1194 + return 0; 1358 1195 if (I_ICRNL(tty)) 1359 1196 c = '\n'; 1360 1197 } else if (c == '\n' && I_INLCR(tty)) ··· 1301 1264 if (c == ERASE_CHAR(tty) || c == KILL_CHAR(tty) || 1302 1265 (c == WERASE_CHAR(tty) && L_IEXTEN(tty))) { 1303 1266 eraser(c, tty); 1304 - process_echoes(tty); 1305 - return; 1267 + commit_echoes(tty); 1268 + return 0; 1306 1269 } 1307 1270 if (c == LNEXT_CHAR(tty) && L_IEXTEN(tty)) { 1308 1271 ldata->lnext = 1; ··· 1311 1274 if (L_ECHOCTL(tty)) { 1312 1275 echo_char_raw('^', ldata); 1313 1276 echo_char_raw('\b', ldata); 1314 - process_echoes(tty); 1277 + commit_echoes(tty); 1315 1278 } 1316 1279 } 1317 - return; 1280 + return 1; 1318 1281 } 1319 - if (c == REPRINT_CHAR(tty) && L_ECHO(tty) && 1320 - L_IEXTEN(tty)) { 1321 - unsigned long tail = ldata->canon_head; 1282 + if (c == REPRINT_CHAR(tty) && L_ECHO(tty) && L_IEXTEN(tty)) { 1283 + size_t tail = ldata->canon_head; 1322 1284 1323 1285 finish_erasing(ldata); 1324 1286 echo_char(c, tty); 1325 1287 echo_char_raw('\n', ldata); 1326 1288 while (tail != ldata->read_head) { 1327 - echo_char(ldata->read_buf[tail], tty); 1328 - tail = (tail+1) & (N_TTY_BUF_SIZE-1); 1289 + echo_char(read_buf(ldata, tail), tty); 1290 + tail++; 1329 1291 } 1330 - process_echoes(tty); 1331 - return; 1292 + commit_echoes(tty); 1293 + return 0; 1332 1294 } 1333 1295 if (c == '\n') { 1334 - if (ldata->read_cnt >= N_TTY_BUF_SIZE) { 1335 - if (L_ECHO(tty)) 1336 - process_output('\a', tty); 1337 - return; 1338 - } 1339 1296 if (L_ECHO(tty) || L_ECHONL(tty)) { 1340 1297 echo_char_raw('\n', ldata); 1341 - process_echoes(tty); 1298 + commit_echoes(tty); 1342 1299 } 1343 1300 goto handle_newline; 1344 1301 } 1345 1302 if (c == EOF_CHAR(tty)) { 1346 - if (ldata->read_cnt >= N_TTY_BUF_SIZE) 1347 - return; 1348 - if (ldata->canon_head != ldata->read_head) 1349 - set_bit(TTY_PUSH, &tty->flags); 1350 1303 c = __DISABLED_CHAR; 1351 1304 goto handle_newline; 1352 1305 } ··· 1344 1317 (c == EOL2_CHAR(tty) 
&& L_IEXTEN(tty))) { 1345 1318 parmrk = (c == (unsigned char) '\377' && I_PARMRK(tty)) 1346 1319 ? 1 : 0; 1347 - if (ldata->read_cnt >= (N_TTY_BUF_SIZE - parmrk)) { 1348 - if (L_ECHO(tty)) 1349 - process_output('\a', tty); 1350 - return; 1351 - } 1352 1320 /* 1353 1321 * XXX are EOL_CHAR and EOL2_CHAR echoed?!? 1354 1322 */ ··· 1352 1330 if (ldata->canon_head == ldata->read_head) 1353 1331 echo_set_canon_col(ldata); 1354 1332 echo_char(c, tty); 1355 - process_echoes(tty); 1333 + commit_echoes(tty); 1356 1334 } 1357 1335 /* 1358 1336 * XXX does PARMRK doubling happen for ··· 1362 1340 put_tty_queue(c, ldata); 1363 1341 1364 1342 handle_newline: 1365 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 1366 - set_bit(ldata->read_head, ldata->read_flags); 1367 - put_tty_queue_nolock(c, ldata); 1343 + set_bit(ldata->read_head & (N_TTY_BUF_SIZE - 1), ldata->read_flags); 1344 + put_tty_queue(c, ldata); 1368 1345 ldata->canon_head = ldata->read_head; 1369 - ldata->canon_data++; 1370 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 1371 1346 kill_fasync(&tty->fasync, SIGIO, POLL_IN); 1372 1347 if (waitqueue_active(&tty->read_wait)) 1373 1348 wake_up_interruptible(&tty->read_wait); 1374 - return; 1349 + return 0; 1375 1350 } 1376 1351 } 1377 1352 1378 1353 parmrk = (c == (unsigned char) '\377' && I_PARMRK(tty)) ? 
1 : 0; 1379 - if (ldata->read_cnt >= (N_TTY_BUF_SIZE - parmrk - 1)) { 1380 - /* beep if no space */ 1381 - if (L_ECHO(tty)) 1382 - process_output('\a', tty); 1383 - return; 1384 - } 1385 1354 if (L_ECHO(tty)) { 1386 1355 finish_erasing(ldata); 1387 1356 if (c == '\n') ··· 1383 1370 echo_set_canon_col(ldata); 1384 1371 echo_char(c, tty); 1385 1372 } 1386 - process_echoes(tty); 1373 + commit_echoes(tty); 1387 1374 } 1388 1375 1389 1376 if (parmrk) 1390 1377 put_tty_queue(c, ldata); 1391 1378 1392 1379 put_tty_queue(c, ldata); 1380 + return 0; 1393 1381 } 1394 1382 1395 - 1396 - /** 1397 - * n_tty_write_wakeup - asynchronous I/O notifier 1398 - * @tty: tty device 1399 - * 1400 - * Required for the ptys, serial driver etc. since processes 1401 - * that attach themselves to the master and rely on ASYNC 1402 - * IO must be woken up 1403 - */ 1404 - 1405 - static void n_tty_write_wakeup(struct tty_struct *tty) 1383 + static inline void 1384 + n_tty_receive_char_inline(struct tty_struct *tty, unsigned char c) 1406 1385 { 1407 - if (tty->fasync && test_and_clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags)) 1408 - kill_fasync(&tty->fasync, SIGIO, POLL_OUT); 1386 + struct n_tty_data *ldata = tty->disc_data; 1387 + int parmrk; 1388 + 1389 + if (tty->stopped && !tty->flow_stopped && I_IXON(tty) && I_IXANY(tty)) { 1390 + start_tty(tty); 1391 + process_echoes(tty); 1392 + } 1393 + if (L_ECHO(tty)) { 1394 + finish_erasing(ldata); 1395 + /* Record the column of first canon char. */ 1396 + if (ldata->canon_head == ldata->read_head) 1397 + echo_set_canon_col(ldata); 1398 + echo_char(c, tty); 1399 + commit_echoes(tty); 1400 + } 1401 + parmrk = (c == (unsigned char) '\377' && I_PARMRK(tty)) ? 
1 : 0; 1402 + if (parmrk) 1403 + put_tty_queue(c, ldata); 1404 + put_tty_queue(c, ldata); 1405 + } 1406 + 1407 + static inline void n_tty_receive_char(struct tty_struct *tty, unsigned char c) 1408 + { 1409 + n_tty_receive_char_inline(tty, c); 1410 + } 1411 + 1412 + static inline void 1413 + n_tty_receive_char_fast(struct tty_struct *tty, unsigned char c) 1414 + { 1415 + struct n_tty_data *ldata = tty->disc_data; 1416 + 1417 + if (tty->stopped && !tty->flow_stopped && I_IXON(tty) && I_IXANY(tty)) { 1418 + start_tty(tty); 1419 + process_echoes(tty); 1420 + } 1421 + if (L_ECHO(tty)) { 1422 + finish_erasing(ldata); 1423 + /* Record the column of first canon char. */ 1424 + if (ldata->canon_head == ldata->read_head) 1425 + echo_set_canon_col(ldata); 1426 + echo_char(c, tty); 1427 + commit_echoes(tty); 1428 + } 1429 + put_tty_queue(c, ldata); 1430 + } 1431 + 1432 + static inline void 1433 + n_tty_receive_char_closing(struct tty_struct *tty, unsigned char c) 1434 + { 1435 + if (I_ISTRIP(tty)) 1436 + c &= 0x7f; 1437 + if (I_IUCLC(tty) && L_IEXTEN(tty)) 1438 + c = tolower(c); 1439 + 1440 + if (I_IXON(tty)) { 1441 + if (c == STOP_CHAR(tty)) 1442 + stop_tty(tty); 1443 + else if (c == START_CHAR(tty) || 1444 + (tty->stopped && !tty->flow_stopped && I_IXANY(tty) && 1445 + c != INTR_CHAR(tty) && c != QUIT_CHAR(tty) && 1446 + c != SUSP_CHAR(tty))) { 1447 + start_tty(tty); 1448 + process_echoes(tty); 1449 + } 1450 + } 1451 + } 1452 + 1453 + static void 1454 + n_tty_receive_char_flagged(struct tty_struct *tty, unsigned char c, char flag) 1455 + { 1456 + char buf[64]; 1457 + 1458 + switch (flag) { 1459 + case TTY_BREAK: 1460 + n_tty_receive_break(tty); 1461 + break; 1462 + case TTY_PARITY: 1463 + case TTY_FRAME: 1464 + n_tty_receive_parity_error(tty, c); 1465 + break; 1466 + case TTY_OVERRUN: 1467 + n_tty_receive_overrun(tty); 1468 + break; 1469 + default: 1470 + printk(KERN_ERR "%s: unknown flag %d\n", 1471 + tty_name(tty, buf), flag); 1472 + break; 1473 + } 1474 + } 1475 + 1476 + 
static void 1477 + n_tty_receive_char_lnext(struct tty_struct *tty, unsigned char c, char flag) 1478 + { 1479 + struct n_tty_data *ldata = tty->disc_data; 1480 + 1481 + ldata->lnext = 0; 1482 + if (likely(flag == TTY_NORMAL)) { 1483 + if (I_ISTRIP(tty)) 1484 + c &= 0x7f; 1485 + if (I_IUCLC(tty) && L_IEXTEN(tty)) 1486 + c = tolower(c); 1487 + n_tty_receive_char(tty, c); 1488 + } else 1489 + n_tty_receive_char_flagged(tty, c, flag); 1409 1490 } 1410 1491 1411 1492 /** ··· 1513 1406 * been received. This function must be called from soft contexts 1514 1407 * not from interrupt context. The driver is responsible for making 1515 1408 * calls one at a time and in order (or using flush_to_ldisc) 1409 + * 1410 + * n_tty_receive_buf()/producer path: 1411 + * claims non-exclusive termios_rwsem 1412 + * publishes read_head and canon_head 1516 1413 */ 1517 1414 1518 - static void n_tty_receive_buf(struct tty_struct *tty, const unsigned char *cp, 1519 - char *fp, int count) 1415 + static void 1416 + n_tty_receive_buf_real_raw(struct tty_struct *tty, const unsigned char *cp, 1417 + char *fp, int count) 1520 1418 { 1521 1419 struct n_tty_data *ldata = tty->disc_data; 1522 - const unsigned char *p; 1523 - char *f, flags = TTY_NORMAL; 1524 - int i; 1525 - char buf[64]; 1526 - unsigned long cpuflags; 1420 + size_t n, head; 1527 1421 1528 - if (ldata->real_raw) { 1529 - raw_spin_lock_irqsave(&ldata->read_lock, cpuflags); 1530 - i = min(N_TTY_BUF_SIZE - ldata->read_cnt, 1531 - N_TTY_BUF_SIZE - ldata->read_head); 1532 - i = min(count, i); 1533 - memcpy(ldata->read_buf + ldata->read_head, cp, i); 1534 - ldata->read_head = (ldata->read_head + i) & (N_TTY_BUF_SIZE-1); 1535 - ldata->read_cnt += i; 1536 - cp += i; 1537 - count -= i; 1422 + head = ldata->read_head & (N_TTY_BUF_SIZE - 1); 1423 + n = N_TTY_BUF_SIZE - max(read_cnt(ldata), head); 1424 + n = min_t(size_t, count, n); 1425 + memcpy(read_buf_addr(ldata, head), cp, n); 1426 + ldata->read_head += n; 1427 + cp += n; 1428 + count -= n; 
1538 1429 1539 - i = min(N_TTY_BUF_SIZE - ldata->read_cnt, 1540 - N_TTY_BUF_SIZE - ldata->read_head); 1541 - i = min(count, i); 1542 - memcpy(ldata->read_buf + ldata->read_head, cp, i); 1543 - ldata->read_head = (ldata->read_head + i) & (N_TTY_BUF_SIZE-1); 1544 - ldata->read_cnt += i; 1545 - raw_spin_unlock_irqrestore(&ldata->read_lock, cpuflags); 1546 - } else { 1547 - for (i = count, p = cp, f = fp; i; i--, p++) { 1548 - if (f) 1549 - flags = *f++; 1550 - switch (flags) { 1551 - case TTY_NORMAL: 1552 - n_tty_receive_char(tty, *p); 1553 - break; 1554 - case TTY_BREAK: 1555 - n_tty_receive_break(tty); 1556 - break; 1557 - case TTY_PARITY: 1558 - case TTY_FRAME: 1559 - n_tty_receive_parity_error(tty, *p); 1560 - break; 1561 - case TTY_OVERRUN: 1562 - n_tty_receive_overrun(tty); 1563 - break; 1564 - default: 1565 - printk(KERN_ERR "%s: unknown flag %d\n", 1566 - tty_name(tty, buf), flags); 1567 - break; 1430 + head = ldata->read_head & (N_TTY_BUF_SIZE - 1); 1431 + n = N_TTY_BUF_SIZE - max(read_cnt(ldata), head); 1432 + n = min_t(size_t, count, n); 1433 + memcpy(read_buf_addr(ldata, head), cp, n); 1434 + ldata->read_head += n; 1435 + } 1436 + 1437 + static void 1438 + n_tty_receive_buf_raw(struct tty_struct *tty, const unsigned char *cp, 1439 + char *fp, int count) 1440 + { 1441 + struct n_tty_data *ldata = tty->disc_data; 1442 + char flag = TTY_NORMAL; 1443 + 1444 + while (count--) { 1445 + if (fp) 1446 + flag = *fp++; 1447 + if (likely(flag == TTY_NORMAL)) 1448 + put_tty_queue(*cp++, ldata); 1449 + else 1450 + n_tty_receive_char_flagged(tty, *cp++, flag); 1451 + } 1452 + } 1453 + 1454 + static void 1455 + n_tty_receive_buf_closing(struct tty_struct *tty, const unsigned char *cp, 1456 + char *fp, int count) 1457 + { 1458 + char flag = TTY_NORMAL; 1459 + 1460 + while (count--) { 1461 + if (fp) 1462 + flag = *fp++; 1463 + if (likely(flag == TTY_NORMAL)) 1464 + n_tty_receive_char_closing(tty, *cp++); 1465 + else 1466 + n_tty_receive_char_flagged(tty, *cp++, flag); 1467 
+ } 1468 + } 1469 + 1470 + static void 1471 + n_tty_receive_buf_standard(struct tty_struct *tty, const unsigned char *cp, 1472 + char *fp, int count) 1473 + { 1474 + struct n_tty_data *ldata = tty->disc_data; 1475 + char flag = TTY_NORMAL; 1476 + 1477 + while (count--) { 1478 + if (fp) 1479 + flag = *fp++; 1480 + if (likely(flag == TTY_NORMAL)) { 1481 + unsigned char c = *cp++; 1482 + 1483 + if (I_ISTRIP(tty)) 1484 + c &= 0x7f; 1485 + if (I_IUCLC(tty) && L_IEXTEN(tty)) 1486 + c = tolower(c); 1487 + if (L_EXTPROC(tty)) { 1488 + put_tty_queue(c, ldata); 1489 + continue; 1568 1490 } 1491 + if (!test_bit(c, ldata->char_map)) 1492 + n_tty_receive_char_inline(tty, c); 1493 + else if (n_tty_receive_char_special(tty, c) && count) { 1494 + if (fp) 1495 + flag = *fp++; 1496 + n_tty_receive_char_lnext(tty, *cp++, flag); 1497 + count--; 1498 + } 1499 + } else 1500 + n_tty_receive_char_flagged(tty, *cp++, flag); 1501 + } 1502 + } 1503 + 1504 + static void 1505 + n_tty_receive_buf_fast(struct tty_struct *tty, const unsigned char *cp, 1506 + char *fp, int count) 1507 + { 1508 + struct n_tty_data *ldata = tty->disc_data; 1509 + char flag = TTY_NORMAL; 1510 + 1511 + while (count--) { 1512 + if (fp) 1513 + flag = *fp++; 1514 + if (likely(flag == TTY_NORMAL)) { 1515 + unsigned char c = *cp++; 1516 + 1517 + if (!test_bit(c, ldata->char_map)) 1518 + n_tty_receive_char_fast(tty, c); 1519 + else if (n_tty_receive_char_special(tty, c) && count) { 1520 + if (fp) 1521 + flag = *fp++; 1522 + n_tty_receive_char_lnext(tty, *cp++, flag); 1523 + count--; 1524 + } 1525 + } else 1526 + n_tty_receive_char_flagged(tty, *cp++, flag); 1527 + } 1528 + } 1529 + 1530 + static void __receive_buf(struct tty_struct *tty, const unsigned char *cp, 1531 + char *fp, int count) 1532 + { 1533 + struct n_tty_data *ldata = tty->disc_data; 1534 + bool preops = I_ISTRIP(tty) || (I_IUCLC(tty) && L_IEXTEN(tty)); 1535 + 1536 + if (ldata->real_raw) 1537 + n_tty_receive_buf_real_raw(tty, cp, fp, count); 1538 + else if 
(ldata->raw || (L_EXTPROC(tty) && !preops)) 1539 + n_tty_receive_buf_raw(tty, cp, fp, count); 1540 + else if (tty->closing && !L_EXTPROC(tty)) 1541 + n_tty_receive_buf_closing(tty, cp, fp, count); 1542 + else { 1543 + if (ldata->lnext) { 1544 + char flag = TTY_NORMAL; 1545 + 1546 + if (fp) 1547 + flag = *fp++; 1548 + n_tty_receive_char_lnext(tty, *cp++, flag); 1549 + count--; 1569 1550 } 1551 + 1552 + if (!preops && !I_PARMRK(tty)) 1553 + n_tty_receive_buf_fast(tty, cp, fp, count); 1554 + else 1555 + n_tty_receive_buf_standard(tty, cp, fp, count); 1556 + 1557 + flush_echoes(tty); 1570 1558 if (tty->ops->flush_chars) 1571 1559 tty->ops->flush_chars(tty); 1572 1560 } 1573 1561 1574 - set_room(tty); 1575 - 1576 - if ((!ldata->icanon && (ldata->read_cnt >= ldata->minimum_to_wake)) || 1562 + if ((!ldata->icanon && (read_cnt(ldata) >= ldata->minimum_to_wake)) || 1577 1563 L_EXTPROC(tty)) { 1578 1564 kill_fasync(&tty->fasync, SIGIO, POLL_IN); 1579 1565 if (waitqueue_active(&tty->read_wait)) 1580 1566 wake_up_interruptible(&tty->read_wait); 1581 1567 } 1568 + } 1582 1569 1583 - /* 1584 - * Check the remaining room for the input canonicalization 1585 - * mode. We don't want to throttle the driver if we're in 1586 - * canonical mode and don't have a newline yet! 
1587 - */ 1570 + static void n_tty_receive_buf(struct tty_struct *tty, const unsigned char *cp, 1571 + char *fp, int count) 1572 + { 1573 + int room, n; 1574 + 1575 + down_read(&tty->termios_rwsem); 1576 + 1588 1577 while (1) { 1589 - tty_set_flow_change(tty, TTY_THROTTLE_SAFE); 1590 - if (tty->receive_room >= TTY_THRESHOLD_THROTTLE) 1578 + room = receive_room(tty); 1579 + n = min(count, room); 1580 + if (!n) 1591 1581 break; 1592 - if (!tty_throttle_safe(tty)) 1593 - break; 1582 + __receive_buf(tty, cp, fp, n); 1583 + cp += n; 1584 + if (fp) 1585 + fp += n; 1586 + count -= n; 1594 1587 } 1595 - __tty_set_flow_change(tty, 0); 1588 + 1589 + tty->receive_room = room; 1590 + n_tty_check_throttle(tty); 1591 + up_read(&tty->termios_rwsem); 1592 + } 1593 + 1594 + static int n_tty_receive_buf2(struct tty_struct *tty, const unsigned char *cp, 1595 + char *fp, int count) 1596 + { 1597 + struct n_tty_data *ldata = tty->disc_data; 1598 + int room, n, rcvd = 0; 1599 + 1600 + down_read(&tty->termios_rwsem); 1601 + 1602 + while (1) { 1603 + room = receive_room(tty); 1604 + n = min(count, room); 1605 + if (!n) { 1606 + if (!room) 1607 + ldata->no_room = 1; 1608 + break; 1609 + } 1610 + __receive_buf(tty, cp, fp, n); 1611 + cp += n; 1612 + if (fp) 1613 + fp += n; 1614 + count -= n; 1615 + rcvd += n; 1616 + } 1617 + 1618 + tty->receive_room = room; 1619 + n_tty_check_throttle(tty); 1620 + up_read(&tty->termios_rwsem); 1621 + 1622 + return rcvd; 1596 1623 } 1597 1624 1598 1625 int is_ignored(int sig) ··· 1746 1505 * guaranteed that this function will not be re-entered or in progress 1747 1506 * when the ldisc is closed. 
1748 1507 * 1749 - * Locking: Caller holds tty->termios_mutex 1508 + * Locking: Caller holds tty->termios_rwsem 1750 1509 */ 1751 1510 1752 1511 static void n_tty_set_termios(struct tty_struct *tty, struct ktermios *old) ··· 1758 1517 canon_change = (old->c_lflag ^ tty->termios.c_lflag) & ICANON; 1759 1518 if (canon_change) { 1760 1519 bitmap_zero(ldata->read_flags, N_TTY_BUF_SIZE); 1520 + ldata->line_start = 0; 1761 1521 ldata->canon_head = ldata->read_tail; 1762 - ldata->canon_data = 0; 1763 1522 ldata->erasing = 0; 1523 + ldata->lnext = 0; 1764 1524 } 1765 1525 1766 - if (canon_change && !L_ICANON(tty) && ldata->read_cnt) 1526 + if (canon_change && !L_ICANON(tty) && read_cnt(ldata)) 1767 1527 wake_up_interruptible(&tty->read_wait); 1768 1528 1769 1529 ldata->icanon = (L_ICANON(tty) != 0); ··· 1773 1531 I_ICRNL(tty) || I_INLCR(tty) || L_ICANON(tty) || 1774 1532 I_IXON(tty) || L_ISIG(tty) || L_ECHO(tty) || 1775 1533 I_PARMRK(tty)) { 1776 - bitmap_zero(ldata->process_char_map, 256); 1534 + bitmap_zero(ldata->char_map, 256); 1777 1535 1778 1536 if (I_IGNCR(tty) || I_ICRNL(tty)) 1779 - set_bit('\r', ldata->process_char_map); 1537 + set_bit('\r', ldata->char_map); 1780 1538 if (I_INLCR(tty)) 1781 - set_bit('\n', ldata->process_char_map); 1539 + set_bit('\n', ldata->char_map); 1782 1540 1783 1541 if (L_ICANON(tty)) { 1784 - set_bit(ERASE_CHAR(tty), ldata->process_char_map); 1785 - set_bit(KILL_CHAR(tty), ldata->process_char_map); 1786 - set_bit(EOF_CHAR(tty), ldata->process_char_map); 1787 - set_bit('\n', ldata->process_char_map); 1788 - set_bit(EOL_CHAR(tty), ldata->process_char_map); 1542 + set_bit(ERASE_CHAR(tty), ldata->char_map); 1543 + set_bit(KILL_CHAR(tty), ldata->char_map); 1544 + set_bit(EOF_CHAR(tty), ldata->char_map); 1545 + set_bit('\n', ldata->char_map); 1546 + set_bit(EOL_CHAR(tty), ldata->char_map); 1789 1547 if (L_IEXTEN(tty)) { 1790 - set_bit(WERASE_CHAR(tty), 1791 - ldata->process_char_map); 1792 - set_bit(LNEXT_CHAR(tty), 1793 - 
ldata->process_char_map); 1794 - set_bit(EOL2_CHAR(tty), 1795 - ldata->process_char_map); 1548 + set_bit(WERASE_CHAR(tty), ldata->char_map); 1549 + set_bit(LNEXT_CHAR(tty), ldata->char_map); 1550 + set_bit(EOL2_CHAR(tty), ldata->char_map); 1796 1551 if (L_ECHO(tty)) 1797 1552 set_bit(REPRINT_CHAR(tty), 1798 - ldata->process_char_map); 1553 + ldata->char_map); 1799 1554 } 1800 1555 } 1801 1556 if (I_IXON(tty)) { 1802 - set_bit(START_CHAR(tty), ldata->process_char_map); 1803 - set_bit(STOP_CHAR(tty), ldata->process_char_map); 1557 + set_bit(START_CHAR(tty), ldata->char_map); 1558 + set_bit(STOP_CHAR(tty), ldata->char_map); 1804 1559 } 1805 1560 if (L_ISIG(tty)) { 1806 - set_bit(INTR_CHAR(tty), ldata->process_char_map); 1807 - set_bit(QUIT_CHAR(tty), ldata->process_char_map); 1808 - set_bit(SUSP_CHAR(tty), ldata->process_char_map); 1561 + set_bit(INTR_CHAR(tty), ldata->char_map); 1562 + set_bit(QUIT_CHAR(tty), ldata->char_map); 1563 + set_bit(SUSP_CHAR(tty), ldata->char_map); 1809 1564 } 1810 - clear_bit(__DISABLED_CHAR, ldata->process_char_map); 1565 + clear_bit(__DISABLED_CHAR, ldata->char_map); 1811 1566 ldata->raw = 0; 1812 1567 ldata->real_raw = 0; 1813 1568 } else { ··· 1847 1608 if (tty->link) 1848 1609 n_tty_packet_mode_flush(tty); 1849 1610 1850 - kfree(ldata->read_buf); 1851 - kfree(ldata->echo_buf); 1852 - kfree(ldata); 1611 + vfree(ldata); 1853 1612 tty->disc_data = NULL; 1854 1613 } 1855 1614 ··· 1865 1628 { 1866 1629 struct n_tty_data *ldata; 1867 1630 1868 - ldata = kzalloc(sizeof(*ldata), GFP_KERNEL); 1631 + /* Currently a malloc failure here can panic */ 1632 + ldata = vmalloc(sizeof(*ldata)); 1869 1633 if (!ldata) 1870 1634 goto err; 1871 1635 1872 1636 ldata->overrun_time = jiffies; 1873 1637 mutex_init(&ldata->atomic_read_lock); 1874 1638 mutex_init(&ldata->output_lock); 1875 - mutex_init(&ldata->echo_lock); 1876 - raw_spin_lock_init(&ldata->read_lock); 1877 - 1878 - /* These are ugly. 
Currently a malloc failure here can panic */ 1879 - ldata->read_buf = kzalloc(N_TTY_BUF_SIZE, GFP_KERNEL); 1880 - ldata->echo_buf = kzalloc(N_TTY_BUF_SIZE, GFP_KERNEL); 1881 - if (!ldata->read_buf || !ldata->echo_buf) 1882 - goto err_free_bufs; 1883 1639 1884 1640 tty->disc_data = ldata; 1885 1641 reset_buffer_flags(tty->disc_data); 1886 1642 ldata->column = 0; 1643 + ldata->canon_column = 0; 1887 1644 ldata->minimum_to_wake = 1; 1645 + ldata->num_overrun = 0; 1646 + ldata->no_room = 0; 1647 + ldata->lnext = 0; 1888 1648 tty->closing = 0; 1889 1649 /* indicate buffer work may resume */ 1890 1650 clear_bit(TTY_LDISC_HALTED, &tty->flags); ··· 1889 1655 tty_unthrottle(tty); 1890 1656 1891 1657 return 0; 1892 - err_free_bufs: 1893 - kfree(ldata->read_buf); 1894 - kfree(ldata->echo_buf); 1895 - kfree(ldata); 1896 1658 err: 1897 1659 return -ENOMEM; 1898 1660 } ··· 1897 1667 { 1898 1668 struct n_tty_data *ldata = tty->disc_data; 1899 1669 1900 - tty_flush_to_ldisc(tty); 1901 1670 if (ldata->icanon && !L_EXTPROC(tty)) { 1902 - if (ldata->canon_data) 1671 + if (ldata->canon_head != ldata->read_tail) 1903 1672 return 1; 1904 - } else if (ldata->read_cnt >= (amt ? amt : 1)) 1673 + } else if (read_cnt(ldata) >= (amt ? 
amt : 1)) 1905 1674 return 1; 1906 1675 1907 1676 return 0; ··· 1921 1692 * 1922 1693 * Called under the ldata->atomic_read_lock sem 1923 1694 * 1695 + * n_tty_read()/consumer path: 1696 + * caller holds non-exclusive termios_rwsem 1697 + * read_tail published 1924 1698 */ 1925 1699 1926 1700 static int copy_from_read_buf(struct tty_struct *tty, ··· 1934 1702 struct n_tty_data *ldata = tty->disc_data; 1935 1703 int retval; 1936 1704 size_t n; 1937 - unsigned long flags; 1938 1705 bool is_eof; 1706 + size_t tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1); 1939 1707 1940 1708 retval = 0; 1941 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 1942 - n = min(ldata->read_cnt, N_TTY_BUF_SIZE - ldata->read_tail); 1709 + n = min(read_cnt(ldata), N_TTY_BUF_SIZE - tail); 1943 1710 n = min(*nr, n); 1944 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 1945 1711 if (n) { 1946 - retval = copy_to_user(*b, &ldata->read_buf[ldata->read_tail], n); 1712 + retval = copy_to_user(*b, read_buf_addr(ldata, tail), n); 1947 1713 n -= retval; 1948 - is_eof = n == 1 && 1949 - ldata->read_buf[ldata->read_tail] == EOF_CHAR(tty); 1950 - tty_audit_add_data(tty, &ldata->read_buf[ldata->read_tail], n, 1714 + is_eof = n == 1 && read_buf(ldata, tail) == EOF_CHAR(tty); 1715 + tty_audit_add_data(tty, read_buf_addr(ldata, tail), n, 1951 1716 ldata->icanon); 1952 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 1953 - ldata->read_tail = (ldata->read_tail + n) & (N_TTY_BUF_SIZE-1); 1954 - ldata->read_cnt -= n; 1717 + ldata->read_tail += n; 1955 1718 /* Turn single EOF into zero-length read */ 1956 - if (L_EXTPROC(tty) && ldata->icanon && is_eof && !ldata->read_cnt) 1719 + if (L_EXTPROC(tty) && ldata->icanon && is_eof && !read_cnt(ldata)) 1957 1720 n = 0; 1958 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 1959 1721 *b += n; 1960 1722 *nr -= n; 1961 1723 } 1962 1724 return retval; 1725 + } 1726 + 1727 + /** 1728 + * canon_copy_from_read_buf - copy read data in canonical mode 1729 + * 
@tty: terminal device 1730 + * @b: user data 1731 + * @nr: size of data 1732 + * 1733 + * Helper function for n_tty_read. It is only called when ICANON is on; 1734 + * it copies one line of input up to and including the line-delimiting 1735 + * character into the user-space buffer. 1736 + * 1737 + * Called under the atomic_read_lock mutex 1738 + * 1739 + * n_tty_read()/consumer path: 1740 + * caller holds non-exclusive termios_rwsem 1741 + * read_tail published 1742 + */ 1743 + 1744 + static int canon_copy_from_read_buf(struct tty_struct *tty, 1745 + unsigned char __user **b, 1746 + size_t *nr) 1747 + { 1748 + struct n_tty_data *ldata = tty->disc_data; 1749 + size_t n, size, more, c; 1750 + size_t eol; 1751 + size_t tail; 1752 + int ret, found = 0; 1753 + bool eof_push = 0; 1754 + 1755 + /* N.B. avoid overrun if nr == 0 */ 1756 + n = min(*nr, read_cnt(ldata)); 1757 + if (!n) 1758 + return 0; 1759 + 1760 + tail = ldata->read_tail & (N_TTY_BUF_SIZE - 1); 1761 + size = min_t(size_t, tail + n, N_TTY_BUF_SIZE); 1762 + 1763 + n_tty_trace("%s: nr:%zu tail:%zu n:%zu size:%zu\n", 1764 + __func__, *nr, tail, n, size); 1765 + 1766 + eol = find_next_bit(ldata->read_flags, size, tail); 1767 + more = n - (size - tail); 1768 + if (eol == N_TTY_BUF_SIZE && more) { 1769 + /* scan wrapped without finding set bit */ 1770 + eol = find_next_bit(ldata->read_flags, more, 0); 1771 + if (eol != more) 1772 + found = 1; 1773 + } else if (eol != size) 1774 + found = 1; 1775 + 1776 + size = N_TTY_BUF_SIZE - tail; 1777 + n = (found + eol + size) & (N_TTY_BUF_SIZE - 1); 1778 + c = n; 1779 + 1780 + if (found && read_buf(ldata, eol) == __DISABLED_CHAR) { 1781 + n--; 1782 + eof_push = !n && ldata->read_tail != ldata->line_start; 1783 + } 1784 + 1785 + n_tty_trace("%s: eol:%zu found:%d n:%zu c:%zu size:%zu more:%zu\n", 1786 + __func__, eol, found, n, c, size, more); 1787 + 1788 + if (n > size) { 1789 + ret = copy_to_user(*b, read_buf_addr(ldata, tail), size); 1790 + if (ret) 1791 + return -EFAULT; 
1792 + ret = copy_to_user(*b + size, ldata->read_buf, n - size); 1793 + } else 1794 + ret = copy_to_user(*b, read_buf_addr(ldata, tail), n); 1795 + 1796 + if (ret) 1797 + return -EFAULT; 1798 + *b += n; 1799 + *nr -= n; 1800 + 1801 + if (found) 1802 + clear_bit(eol, ldata->read_flags); 1803 + smp_mb__after_clear_bit(); 1804 + ldata->read_tail += c; 1805 + 1806 + if (found) { 1807 + ldata->line_start = ldata->read_tail; 1808 + tty_audit_push(tty); 1809 + } 1810 + return eof_push ? -EAGAIN : 0; 1963 1811 } 1964 1812 1965 1813 extern ssize_t redirected_tty_write(struct file *, const char __user *, ··· 2099 1787 * a hangup. Always called in user context, may sleep. 2100 1788 * 2101 1789 * This code must be sure never to sleep through a hangup. 1790 + * 1791 + * n_tty_read()/consumer path: 1792 + * claims non-exclusive termios_rwsem 1793 + * publishes read_tail 2102 1794 */ 2103 1795 2104 1796 static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, ··· 2114 1798 int c; 2115 1799 int minimum, time; 2116 1800 ssize_t retval = 0; 2117 - ssize_t size; 2118 1801 long timeout; 2119 1802 unsigned long flags; 2120 1803 int packet; 2121 1804 2122 - do_it_again: 2123 1805 c = job_control(tty, file); 2124 1806 if (c < 0) 2125 1807 return c; 1808 + 1809 + /* 1810 + * Internal serialization of reads. 1811 + */ 1812 + if (file->f_flags & O_NONBLOCK) { 1813 + if (!mutex_trylock(&ldata->atomic_read_lock)) 1814 + return -EAGAIN; 1815 + } else { 1816 + if (mutex_lock_interruptible(&ldata->atomic_read_lock)) 1817 + return -ERESTARTSYS; 1818 + } 1819 + 1820 + down_read(&tty->termios_rwsem); 2126 1821 2127 1822 minimum = time = 0; 2128 1823 timeout = MAX_SCHEDULE_TIMEOUT; ··· 2152 1825 } 2153 1826 } 2154 1827 2155 - /* 2156 - * Internal serialization of reads. 
2157 - */ 2158 - if (file->f_flags & O_NONBLOCK) { 2159 - if (!mutex_trylock(&ldata->atomic_read_lock)) 2160 - return -EAGAIN; 2161 - } else { 2162 - if (mutex_lock_interruptible(&ldata->atomic_read_lock)) 2163 - return -ERESTARTSYS; 2164 - } 2165 1828 packet = tty->packet; 2166 1829 2167 1830 add_wait_queue(&tty->read_wait, &wait); ··· 2200 1883 break; 2201 1884 } 2202 1885 n_tty_set_room(tty); 1886 + up_read(&tty->termios_rwsem); 1887 + 2203 1888 timeout = schedule_timeout(timeout); 1889 + 1890 + down_read(&tty->termios_rwsem); 2204 1891 continue; 2205 1892 } 2206 1893 __set_current_state(TASK_RUNNING); ··· 2220 1899 } 2221 1900 2222 1901 if (ldata->icanon && !L_EXTPROC(tty)) { 2223 - /* N.B. avoid overrun if nr == 0 */ 2224 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 2225 - while (nr && ldata->read_cnt) { 2226 - int eol; 2227 - 2228 - eol = test_and_clear_bit(ldata->read_tail, 2229 - ldata->read_flags); 2230 - c = ldata->read_buf[ldata->read_tail]; 2231 - ldata->read_tail = ((ldata->read_tail+1) & 2232 - (N_TTY_BUF_SIZE-1)); 2233 - ldata->read_cnt--; 2234 - if (eol) { 2235 - /* this test should be redundant: 2236 - * we shouldn't be reading data if 2237 - * canon_data is 0 2238 - */ 2239 - if (--ldata->canon_data < 0) 2240 - ldata->canon_data = 0; 2241 - } 2242 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 2243 - 2244 - if (!eol || (c != __DISABLED_CHAR)) { 2245 - if (tty_put_user(tty, c, b++)) { 2246 - retval = -EFAULT; 2247 - b--; 2248 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 2249 - break; 2250 - } 2251 - nr--; 2252 - } 2253 - if (eol) { 2254 - tty_audit_push(tty); 2255 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 2256 - break; 2257 - } 2258 - raw_spin_lock_irqsave(&ldata->read_lock, flags); 2259 - } 2260 - raw_spin_unlock_irqrestore(&ldata->read_lock, flags); 2261 - if (retval) 1902 + retval = canon_copy_from_read_buf(tty, &b, &nr); 1903 + if (retval == -EAGAIN) { 1904 + retval = 0; 1905 + continue; 1906 + } else if (retval) 
2262 1907 break; 2263 1908 } else { 2264 1909 int uncopied; ··· 2238 1951 } 2239 1952 } 2240 1953 2241 - /* If there is enough space in the read buffer now, let the 2242 - * low-level driver know. We use n_tty_chars_in_buffer() to 2243 - * check the buffer, as it now knows about canonical mode. 2244 - * Otherwise, if the driver is throttled and the line is 2245 - * longer than TTY_THRESHOLD_UNTHROTTLE in canonical mode, 2246 - * we won't get any more characters. 2247 - */ 2248 - while (1) { 2249 - tty_set_flow_change(tty, TTY_UNTHROTTLE_SAFE); 2250 - if (n_tty_chars_in_buffer(tty) > TTY_THRESHOLD_UNTHROTTLE) 2251 - break; 2252 - if (!tty->count) 2253 - break; 2254 - n_tty_set_room(tty); 2255 - if (!tty_unthrottle_safe(tty)) 2256 - break; 2257 - } 2258 - __tty_set_flow_change(tty, 0); 1954 + n_tty_check_unthrottle(tty); 2259 1955 2260 1956 if (b - buf >= minimum) 2261 1957 break; ··· 2252 1982 ldata->minimum_to_wake = minimum; 2253 1983 2254 1984 __set_current_state(TASK_RUNNING); 2255 - size = b - buf; 2256 - if (size) { 2257 - retval = size; 2258 - if (nr) 2259 - clear_bit(TTY_PUSH, &tty->flags); 2260 - } else if (test_and_clear_bit(TTY_PUSH, &tty->flags)) 2261 - goto do_it_again; 1985 + if (b - buf) 1986 + retval = b - buf; 2262 1987 2263 1988 n_tty_set_room(tty); 1989 + up_read(&tty->termios_rwsem); 2264 1990 return retval; 2265 1991 } 2266 1992 ··· 2296 2030 if (retval) 2297 2031 return retval; 2298 2032 } 2033 + 2034 + down_read(&tty->termios_rwsem); 2299 2035 2300 2036 /* Write out any echoed characters that are still pending */ 2301 2037 process_echoes(tty); ··· 2352 2084 retval = -EAGAIN; 2353 2085 break; 2354 2086 } 2087 + up_read(&tty->termios_rwsem); 2088 + 2355 2089 schedule(); 2090 + 2091 + down_read(&tty->termios_rwsem); 2356 2092 } 2357 2093 break_out: 2358 2094 __set_current_state(TASK_RUNNING); 2359 2095 remove_wait_queue(&tty->write_wait, &wait); 2360 2096 if (b - buf != nr && tty->fasync) 2361 2097 set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); 2098 
+ up_read(&tty->termios_rwsem); 2362 2099 return (b - buf) ? b - buf : retval; 2363 2100 } 2364 2101 ··· 2412 2139 2413 2140 static unsigned long inq_canon(struct n_tty_data *ldata) 2414 2141 { 2415 - int nr, head, tail; 2142 + size_t nr, head, tail; 2416 2143 2417 - if (!ldata->canon_data) 2144 + if (ldata->canon_head == ldata->read_tail) 2418 2145 return 0; 2419 2146 head = ldata->canon_head; 2420 2147 tail = ldata->read_tail; 2421 - nr = (head - tail) & (N_TTY_BUF_SIZE-1); 2148 + nr = head - tail; 2422 2149 /* Skip EOF-chars.. */ 2423 2150 while (head != tail) { 2424 - if (test_bit(tail, ldata->read_flags) && 2425 - ldata->read_buf[tail] == __DISABLED_CHAR) 2151 + if (test_bit(tail & (N_TTY_BUF_SIZE - 1), ldata->read_flags) && 2152 + read_buf(ldata, tail) == __DISABLED_CHAR) 2426 2153 nr--; 2427 - tail = (tail+1) & (N_TTY_BUF_SIZE-1); 2154 + tail++; 2428 2155 } 2429 2156 return nr; 2430 2157 } ··· 2439 2166 case TIOCOUTQ: 2440 2167 return put_user(tty_chars_in_buffer(tty), (int __user *) arg); 2441 2168 case TIOCINQ: 2442 - /* FIXME: Locking */ 2443 - retval = ldata->read_cnt; 2169 + down_write(&tty->termios_rwsem); 2444 2170 if (L_ICANON(tty)) 2445 2171 retval = inq_canon(ldata); 2172 + else 2173 + retval = read_cnt(ldata); 2174 + up_write(&tty->termios_rwsem); 2446 2175 return put_user(retval, (unsigned int __user *) arg); 2447 2176 default: 2448 2177 return n_tty_ioctl_helper(tty, file, cmd, arg); ··· 2478 2203 .receive_buf = n_tty_receive_buf, 2479 2204 .write_wakeup = n_tty_write_wakeup, 2480 2205 .fasync = n_tty_fasync, 2206 + .receive_buf2 = n_tty_receive_buf2, 2481 2207 }; 2482 2208 2483 2209 /**
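The n_tty rework above drops the spinlock-protected `read_head`/`read_tail`/`read_cnt` triple in favor of free-running `size_t` head/tail indices that are masked with `N_TTY_BUF_SIZE - 1` only when addressing the buffer; the occupancy is recovered as `head - tail`, which stays correct across unsigned wrap-around. A minimal user-space sketch of that indexing scheme (not the kernel code; the `ring_*` names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* power of two, as in the kernel's N_TTY_BUF_SIZE */
#define N_TTY_BUF_SIZE 4096

struct ring {
	size_t head;			/* producer publishes this */
	size_t tail;			/* consumer publishes this */
	unsigned char buf[N_TTY_BUF_SIZE];
};

/* occupancy: unsigned subtraction is valid even after the
 * indices wrap past SIZE_MAX */
static size_t ring_cnt(const struct ring *r)
{
	return r->head - r->tail;
}

/* indices run free; mask only on access */
static void ring_put(struct ring *r, unsigned char c)
{
	r->buf[r->head++ & (N_TTY_BUF_SIZE - 1)] = c;
}

static unsigned char ring_get(struct ring *r)
{
	return r->buf[r->tail++ & (N_TTY_BUF_SIZE - 1)];
}
```

Starting both indices near `SIZE_MAX` exercises the wrap case that the masked-index scheme makes a non-event.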
+6 -12
drivers/tty/pty.c
··· 89 89 * pty_space - report space left for writing 90 90 * @to: tty we are writing into 91 91 * 92 - * The tty buffers allow 64K but we sneak a peak and clip at 8K this 93 - * allows a lot of overspill room for echo and other fun messes to 94 - * be handled properly 92 + * Limit the buffer space used by ptys to 8k. 95 93 */ 96 94 97 95 static int pty_space(struct tty_struct *to) 98 96 { 99 - int n = 8192 - to->port->buf.memory_used; 100 - if (n < 0) 101 - return 0; 102 - return n; 97 + int n = tty_buffer_space_avail(to->port); 98 + return min(n, 8192); 103 99 } 104 100 105 101 /** ··· 121 125 /* Stuff the data into the input queue of the other end */ 122 126 c = tty_insert_flip_string(to->port, buf, c); 123 127 /* And shovel */ 124 - if (c) { 128 + if (c) 125 129 tty_flip_buffer_push(to->port); 126 - tty_wakeup(tty); 127 - } 128 130 } 129 131 return c; 130 132 } ··· 281 287 struct tty_struct *pty = tty->link; 282 288 283 289 /* For a PTY we need to lock the tty side */ 284 - mutex_lock(&tty->termios_mutex); 290 + mutex_lock(&tty->winsize_mutex); 285 291 if (!memcmp(ws, &tty->winsize, sizeof(*ws))) 286 292 goto done; 287 293 ··· 308 314 tty->winsize = *ws; 309 315 pty->winsize = *ws; /* Never used so will go away soon */ 310 316 done: 311 - mutex_unlock(&tty->termios_mutex); 317 + mutex_unlock(&tty->winsize_mutex); 312 318 return 0; 313 319 } 314 320
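The `pty_space()` change above stops deriving space from a private 8K budget minus `memory_used` and instead asks the flip-buffer layer (`tty_buffer_space_avail()`) for real availability, clipped at 8K. A toy before/after model of just that arithmetic (stand-alone functions, not the driver code):

```c
#include <assert.h>

/* old behavior: space = 8192 - bytes already buffered, floored at 0 */
static int pty_space_old(int memory_used)
{
	int n = 8192 - memory_used;

	return n < 0 ? 0 : n;
}

/* new behavior: real availability reported by the buffer layer,
 * clipped to the 8K pty budget */
static int pty_space_new(int buf_space_avail)
{
	return buf_space_avail < 8192 ? buf_space_avail : 8192;
}
```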
+1 -1
drivers/tty/serial/8250/8250_core.c
··· 3062 3062 */ 3063 3063 static int serial8250_probe(struct platform_device *dev) 3064 3064 { 3065 - struct plat_serial8250_port *p = dev->dev.platform_data; 3065 + struct plat_serial8250_port *p = dev_get_platdata(&dev->dev); 3066 3066 struct uart_8250_port uart; 3067 3067 int ret, i, irqflag = 0; 3068 3068
+26 -8
drivers/tty/serial/8250/8250_dw.c
··· 57 57 58 58 struct dw8250_data { 59 59 int last_lcr; 60 + int last_mcr; 60 61 int line; 61 62 struct clk *clk; 62 63 u8 usr_reg; 63 64 }; 65 + 66 + static inline int dw8250_modify_msr(struct uart_port *p, int offset, int value) 67 + { 68 + struct dw8250_data *d = p->private_data; 69 + 70 + /* If reading MSR, report CTS asserted when auto-CTS/RTS enabled */ 71 + if (offset == UART_MSR && d->last_mcr & UART_MCR_AFE) { 72 + value |= UART_MSR_CTS; 73 + value &= ~UART_MSR_DCTS; 74 + } 75 + 76 + return value; 77 + } 64 78 65 79 static void dw8250_serial_out(struct uart_port *p, int offset, int value) 66 80 { ··· 83 69 if (offset == UART_LCR) 84 70 d->last_lcr = value; 85 71 86 - offset <<= p->regshift; 87 - writeb(value, p->membase + offset); 72 + if (offset == UART_MCR) 73 + d->last_mcr = value; 74 + 75 + writeb(value, p->membase + (offset << p->regshift)); 88 76 } 89 77 90 78 static unsigned int dw8250_serial_in(struct uart_port *p, int offset) 91 79 { 92 - offset <<= p->regshift; 80 + unsigned int value = readb(p->membase + (offset << p->regshift)); 93 81 94 - return readb(p->membase + offset); 82 + return dw8250_modify_msr(p, offset, value); 95 83 } 96 84 97 85 /* Read Back (rb) version to ensure register access ording. */ ··· 110 94 if (offset == UART_LCR) 111 95 d->last_lcr = value; 112 96 113 - offset <<= p->regshift; 114 - writel(value, p->membase + offset); 97 + if (offset == UART_MCR) 98 + d->last_mcr = value; 99 + 100 + writel(value, p->membase + (offset << p->regshift)); 115 101 } 116 102 117 103 static unsigned int dw8250_serial_in32(struct uart_port *p, int offset) 118 104 { 119 - offset <<= p->regshift; 105 + unsigned int value = readl(p->membase + (offset << p->regshift)); 120 106 121 - return readl(p->membase + offset); 107 + return dw8250_modify_msr(p, offset, value); 122 108 } 123 109 124 110 static int dw8250_handle_irq(struct uart_port *p)
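The `dw8250_modify_msr()` hunk above adjusts modem-status reads when automatic flow control (MCR.AFE) is enabled: the core is always shown CTS asserted and never a delta-CTS event, since the hardware handles CTS itself. A standalone restatement of that masking logic (bit values follow the standard 16550 register layout; `modify_msr` is an illustrative name, not the driver function):

```c
#include <assert.h>

#define UART_MSR	6	/* modem status register offset */
#define UART_MSR_CTS	0x10	/* clear to send */
#define UART_MSR_DCTS	0x01	/* delta CTS */
#define UART_MCR_AFE	0x20	/* auto flow control enable */

/* if reading MSR while auto-CTS/RTS is on, report CTS asserted
 * and suppress the delta-CTS change indication */
static int modify_msr(int offset, int last_mcr, int value)
{
	if (offset == UART_MSR && (last_mcr & UART_MCR_AFE)) {
		value |= UART_MSR_CTS;
		value &= ~UART_MSR_DCTS;
	}
	return value;
}
```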
+1 -1
drivers/tty/serial/8250/8250_early.c
··· 194 194 options++; 195 195 device->baud = simple_strtoul(options, NULL, 0); 196 196 length = min(strcspn(options, " ") + 1, 197 - sizeof(device->options)); 197 + (size_t)(sizeof(device->options))); 198 198 strlcpy(device->options, options, length); 199 199 } else { 200 200 device->baud = probe_baud(port);
+8 -19
drivers/tty/serial/8250/8250_em.c
··· 95 95 struct resource *irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 96 96 struct serial8250_em_priv *priv; 97 97 struct uart_8250_port up; 98 - int ret = -EINVAL; 98 + int ret; 99 99 100 100 if (!regs || !irq) { 101 101 dev_err(&pdev->dev, "missing registers or irq\n"); 102 - goto err0; 102 + return -EINVAL; 103 103 } 104 104 105 - priv = kzalloc(sizeof(*priv), GFP_KERNEL); 105 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 106 106 if (!priv) { 107 107 dev_err(&pdev->dev, "unable to allocate private data\n"); 108 - ret = -ENOMEM; 109 - goto err0; 108 + return -ENOMEM; 110 109 } 111 110 112 - priv->sclk = clk_get(&pdev->dev, "sclk"); 111 + priv->sclk = devm_clk_get(&pdev->dev, "sclk"); 113 112 if (IS_ERR(priv->sclk)) { 114 113 dev_err(&pdev->dev, "unable to get clock\n"); 115 - ret = PTR_ERR(priv->sclk); 116 - goto err1; 114 + return PTR_ERR(priv->sclk); 117 115 } 118 116 119 117 memset(&up, 0, sizeof(up)); ··· 134 136 ret = serial8250_register_8250_port(&up); 135 137 if (ret < 0) { 136 138 dev_err(&pdev->dev, "unable to register 8250 port\n"); 137 - goto err2; 139 + clk_disable(priv->sclk); 140 + return ret; 138 141 } 139 142 140 143 priv->line = ret; 141 144 platform_set_drvdata(pdev, priv); 142 145 return 0; 143 - 144 - err2: 145 - clk_disable(priv->sclk); 146 - clk_put(priv->sclk); 147 - err1: 148 - kfree(priv); 149 - err0: 150 - return ret; 151 146 } 152 147 153 148 static int serial8250_em_remove(struct platform_device *pdev) ··· 149 158 150 159 serial8250_unregister_port(priv->line); 151 160 clk_disable(priv->sclk); 152 - clk_put(priv->sclk); 153 - kfree(priv); 154 161 return 0; 155 162 } 156 163
+11 -4
drivers/tty/serial/8250/8250_pci.c
··· 1565 1565 #define PCI_DEVICE_ID_COMMTECH_4228PCIE 0x0021 1566 1566 #define PCI_DEVICE_ID_COMMTECH_4222PCIE 0x0022 1567 1567 #define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a 1568 + #define PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800 0x818e 1568 1569 1569 1570 #define PCI_VENDOR_ID_SUNIX 0x1fd4 1570 1571 #define PCI_DEVICE_ID_SUNIX_1999 0x1999 ··· 1588 1587 * ADDI-DATA GmbH communication cards <info@addi-data.com> 1589 1588 */ 1590 1589 { 1591 - .vendor = PCI_VENDOR_ID_ADDIDATA_OLD, 1592 - .device = PCI_DEVICE_ID_ADDIDATA_APCI7800, 1590 + .vendor = PCI_VENDOR_ID_AMCC, 1591 + .device = PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800, 1593 1592 .subvendor = PCI_ANY_ID, 1594 1593 .subdevice = PCI_ANY_ID, 1595 1594 .setup = addidata_apci7800_setup, ··· 4698 4697 0, 4699 4698 pbn_b0_1_115200 }, 4700 4699 4701 - { PCI_VENDOR_ID_ADDIDATA_OLD, 4702 - PCI_DEVICE_ID_ADDIDATA_APCI7800, 4700 + { PCI_VENDOR_ID_AMCC, 4701 + PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800, 4703 4702 PCI_ANY_ID, 4704 4703 PCI_ANY_ID, 4705 4704 0, ··· 4797 4796 { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9835, 4798 4797 PCI_VENDOR_ID_IBM, 0x0299, 4799 4798 0, 0, pbn_b0_bt_2_115200 }, 4799 + 4800 + /* 4801 + * other NetMos 9835 devices are most likely handled by the 4802 + * parport_serial driver, check drivers/parport/parport_serial.c 4803 + * before adding them here. 4804 + */ 4800 4805 4801 4806 { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9901, 4802 4807 0xA000, 0x1000,
+2
drivers/tty/serial/8250/Kconfig
··· 116 116 This builds standard PCI serial support. You may be able to 117 117 disable this feature if you only need legacy serial support. 118 118 Saves about 9K. 119 + Note that serial ports on NetMos 9835 Multi-I/O cards are handled 120 + by the parport_serial driver, enabled with CONFIG_PARPORT_SERIAL. 119 121 120 122 config SERIAL_8250_HP300 121 123 tristate
+26 -7
drivers/tty/serial/Kconfig
··· 291 291 292 292 config SERIAL_MAX310X 293 293 bool "MAX310X support" 294 - depends on SPI 294 + depends on SPI_MASTER 295 295 select SERIAL_CORE 296 - select REGMAP_SPI if SPI 296 + select REGMAP_SPI if SPI_MASTER 297 297 default n 298 298 help 299 299 This selects support for an advanced UART from Maxim (Dallas). 300 - Supported ICs are MAX3107, MAX3108. 300 + Supported ICs are MAX3107, MAX3108, MAX3109, MAX14830. 301 301 Each IC contains 128 words each of receive and transmit FIFO 302 302 that can be controlled through I2C or high-speed SPI. 303 303 ··· 1401 1401 Enable a Xilinx PS UART port to be the system console. 1402 1402 1403 1403 config SERIAL_AR933X 1404 - bool "AR933X serial port support" 1405 - depends on SOC_AR933X 1404 + tristate "AR933X serial port support" 1405 + depends on HAVE_CLK && SOC_AR933X 1406 1406 select SERIAL_CORE 1407 1407 help 1408 1408 If you have an Atheros AR933X SOC based board and want to use the 1409 1409 built-in UART of the SoC, say Y to this option. 1410 + 1411 + To compile this driver as a module, choose M here: the 1412 + module will be called ar933x_uart. 1410 1413 1411 1414 config SERIAL_AR933X_CONSOLE 1412 1415 bool "Console on AR933X serial port" ··· 1427 1424 to support. 1428 1425 1429 1426 config SERIAL_EFM32_UART 1430 - tristate "EFM32 UART/USART port." 1431 - depends on ARCH_EFM32 1427 + tristate "EFM32 UART/USART port" 1428 + depends on ARM && (ARCH_EFM32 || COMPILE_TEST) 1432 1429 select SERIAL_CORE 1433 1430 help 1434 1431 This driver support the USART and UART ports on ··· 1499 1496 help 1500 1497 If you have enabled the lpuart serial port on the Freescale SoCs, 1501 1498 you can make it the console by answering Y to this option. 1499 + 1500 + config SERIAL_ST_ASC 1501 + tristate "ST ASC serial port support" 1502 + select SERIAL_CORE 1503 + help 1504 + This driver is for the on-chip Asychronous Serial Controller on 1505 + STMicroelectronics STi SoCs. 1506 + ASC is embedded in ST COMMS IP block. 
It supports Rx & Tx functionality. 1507 + It support all industry standard baud rates. 1508 + 1509 + If unsure, say N. 1510 + 1511 + config SERIAL_ST_ASC_CONSOLE 1512 + bool "Support for console on ST ASC" 1513 + depends on SERIAL_ST_ASC=y 1514 + select SERIAL_CORE_CONSOLE 1502 1515 1503 1516 endmenu 1504 1517
+1
drivers/tty/serial/Makefile
··· 65 65 obj-$(CONFIG_SERIAL_KS8695) += serial_ks8695.o 66 66 obj-$(CONFIG_SERIAL_OMAP) += omap-serial.o 67 67 obj-$(CONFIG_SERIAL_ALTERA_UART) += altera_uart.o 68 + obj-$(CONFIG_SERIAL_ST_ASC) += st-asc.o 68 69 obj-$(CONFIG_KGDB_SERIAL_CONSOLE) += kgdboc.o 69 70 obj-$(CONFIG_SERIAL_QE) += ucc_uart.o 70 71 obj-$(CONFIG_SERIAL_TIMBERDALE) += timbuart.o
+4 -1
drivers/tty/serial/altera_jtaguart.c
··· 139 139 uart_insert_char(port, 0, 0, ch, flag); 140 140 } 141 141 142 + spin_unlock(&port->lock); 142 143 tty_flip_buffer_push(&port->state->port); 144 + spin_lock(&port->lock); 143 145 } 144 146 145 147 static void altera_jtaguart_tx_chars(struct altera_jtaguart *pp) ··· 410 408 411 409 static int altera_jtaguart_probe(struct platform_device *pdev) 412 410 { 413 - struct altera_jtaguart_platform_uart *platp = pdev->dev.platform_data; 411 + struct altera_jtaguart_platform_uart *platp = 412 + dev_get_platdata(&pdev->dev); 414 413 struct uart_port *port; 415 414 struct resource *res_irq, *res_mem; 416 415 int i = pdev->id;
+3 -1
drivers/tty/serial/altera_uart.c
··· 231 231 flag); 232 232 } 233 233 234 + spin_unlock(&port->lock); 234 235 tty_flip_buffer_push(&port->state->port); 236 + spin_lock(&port->lock); 235 237 } 236 238 237 239 static void altera_uart_tx_chars(struct altera_uart *pp) ··· 536 534 537 535 static int altera_uart_probe(struct platform_device *pdev) 538 536 { 539 - struct altera_uart_platform_uart *platp = pdev->dev.platform_data; 537 + struct altera_uart_platform_uart *platp = dev_get_platdata(&pdev->dev); 540 538 struct uart_port *port; 541 539 struct resource *res_mem; 542 540 struct resource *res_irq;
+1 -1
drivers/tty/serial/amba-pl010.c
··· 721 721 uap->port.flags = UPF_BOOT_AUTOCONF; 722 722 uap->port.line = i; 723 723 uap->dev = dev; 724 - uap->data = dev->dev.platform_data; 724 + uap->data = dev_get_platdata(&dev->dev); 725 725 726 726 amba_ports[i] = uap; 727 727
+11 -7
drivers/tty/serial/amba-pl011.c
··· 265 265 static void pl011_dma_probe_initcall(struct device *dev, struct uart_amba_port *uap) 266 266 { 267 267 /* DMA is the sole user of the platform data right now */ 268 - struct amba_pl011_data *plat = uap->port.dev->platform_data; 268 + struct amba_pl011_data *plat = dev_get_platdata(uap->port.dev); 269 269 struct dma_slave_config tx_conf = { 270 270 .dst_addr = uap->port.mapbase + UART01x_DR, 271 271 .dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE, ··· 677 677 * Locking: called with port lock held and IRQs disabled. 678 678 */ 679 679 static void pl011_dma_flush_buffer(struct uart_port *port) 680 + __releases(&uap->port.lock) 681 + __acquires(&uap->port.lock) 680 682 { 681 683 struct uart_amba_port *uap = (struct uart_amba_port *)port; 682 684 ··· 1200 1198 } 1201 1199 1202 1200 static void pl011_rx_chars(struct uart_amba_port *uap) 1201 + __releases(&uap->port.lock) 1202 + __acquires(&uap->port.lock) 1203 1203 { 1204 1204 pl011_fifo_to_tty(uap); 1205 1205 ··· 1501 1497 uap->im = readw(uap->port.membase + UART011_IMSC); 1502 1498 writew(UART011_RTIM | UART011_RXIM, uap->port.membase + UART011_IMSC); 1503 1499 1504 - if (uap->port.dev->platform_data) { 1500 + if (dev_get_platdata(uap->port.dev)) { 1505 1501 struct amba_pl011_data *plat; 1506 1502 1507 - plat = uap->port.dev->platform_data; 1503 + plat = dev_get_platdata(uap->port.dev); 1508 1504 if (plat->init) 1509 1505 plat->init(); 1510 1506 } ··· 1649 1645 /* Optionally let pins go into sleep states */ 1650 1646 pinctrl_pm_select_sleep_state(port->dev); 1651 1647 1652 - if (uap->port.dev->platform_data) { 1648 + if (dev_get_platdata(uap->port.dev)) { 1653 1649 struct amba_pl011_data *plat; 1654 1650 1655 - plat = uap->port.dev->platform_data; 1651 + plat = dev_get_platdata(uap->port.dev); 1656 1652 if (plat->exit) 1657 1653 plat->exit(); 1658 1654 } ··· 2006 2002 if (ret) 2007 2003 return ret; 2008 2004 2009 - if (uap->port.dev->platform_data) { 2005 + if (dev_get_platdata(uap->port.dev)) { 2010 2006 
struct amba_pl011_data *plat; 2011 2007 2012 - plat = uap->port.dev->platform_data; 2008 + plat = dev_get_platdata(uap->port.dev); 2013 2009 if (plat->init) 2014 2010 plat->init(); 2015 2011 }
+2
drivers/tty/serial/apbuart.c
··· 125 125 status = UART_GET_STATUS(port); 126 126 } 127 127 128 + spin_unlock(&port->lock); 128 129 tty_flip_buffer_push(&port->state->port); 130 + spin_lock(&port->lock); 129 131 } 130 132 131 133 static void apbuart_tx_chars(struct uart_port *port)
+69 -46
drivers/tty/serial/ar933x_uart.c
··· 17 17 #include <linux/sysrq.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/platform_device.h> 20 + #include <linux/of.h> 21 + #include <linux/of_platform.h> 20 22 #include <linux/tty.h> 21 23 #include <linux/tty_flip.h> 22 24 #include <linux/serial_core.h> ··· 26 24 #include <linux/slab.h> 27 25 #include <linux/io.h> 28 26 #include <linux/irq.h> 27 + #include <linux/clk.h> 29 28 30 29 #include <asm/div64.h> 31 30 32 31 #include <asm/mach-ath79/ar933x_uart.h> 33 - #include <asm/mach-ath79/ar933x_uart_platform.h> 34 32 35 33 #define DRIVER_NAME "ar933x-uart" 36 34 ··· 49 47 unsigned int ier; /* shadow Interrupt Enable Register */ 50 48 unsigned int min_baud; 51 49 unsigned int max_baud; 50 + struct clk *clk; 52 51 }; 52 + 53 + static inline bool ar933x_uart_console_enabled(void) 54 + { 55 + return config_enabled(CONFIG_SERIAL_AR933X_CONSOLE); 56 + } 53 57 54 58 static inline unsigned int ar933x_uart_read(struct ar933x_uart_port *up, 55 59 int offset) ··· 330 322 tty_insert_flip_char(port, ch, TTY_NORMAL); 331 323 } while (max_count-- > 0); 332 324 325 + spin_unlock(&up->port.lock); 333 326 tty_flip_buffer_push(port); 327 + spin_lock(&up->port.lock); 334 328 } 335 329 336 330 static void ar933x_uart_tx_chars(struct ar933x_uart_port *up) ··· 507 497 .verify_port = ar933x_uart_verify_port, 508 498 }; 509 499 510 - #ifdef CONFIG_SERIAL_AR933X_CONSOLE 511 - 512 500 static struct ar933x_uart_port * 513 501 ar933x_console_ports[CONFIG_SERIAL_AR933X_NR_UARTS]; 514 502 ··· 605 597 606 598 static void ar933x_uart_add_console_port(struct ar933x_uart_port *up) 607 599 { 600 + if (!ar933x_uart_console_enabled()) 601 + return; 602 + 608 603 ar933x_console_ports[up->port.line] = up; 609 604 } 610 - 611 - #define AR933X_SERIAL_CONSOLE (&ar933x_uart_console) 612 - 613 - #else 614 - 615 - static inline void ar933x_uart_add_console_port(struct ar933x_uart_port *up) {} 616 - 617 - #define AR933X_SERIAL_CONSOLE NULL 618 - 619 - #endif /* CONFIG_SERIAL_AR933X_CONSOLE */ 620 
605 621 606 static struct uart_driver ar933x_uart_driver = { 622 607 .owner = THIS_MODULE, 623 608 .driver_name = DRIVER_NAME, 624 609 .dev_name = "ttyATH", 625 610 .nr = CONFIG_SERIAL_AR933X_NR_UARTS, 626 - .cons = AR933X_SERIAL_CONSOLE, 611 + .cons = NULL, /* filled in runtime */ 627 612 }; 628 613 629 614 static int ar933x_uart_probe(struct platform_device *pdev) 630 615 { 631 - struct ar933x_uart_platform_data *pdata; 632 616 struct ar933x_uart_port *up; 633 617 struct uart_port *port; 634 618 struct resource *mem_res; 635 619 struct resource *irq_res; 620 + struct device_node *np; 636 621 unsigned int baud; 637 622 int id; 638 623 int ret; 639 624 640 - pdata = pdev->dev.platform_data; 641 - if (!pdata) 642 - return -EINVAL; 643 - 644 - id = pdev->id; 645 - if (id == -1) 646 - id = 0; 625 + np = pdev->dev.of_node; 626 + if (config_enabled(CONFIG_OF) && np) { 627 + id = of_alias_get_id(np, "serial"); 628 + if (id < 0) { 629 + dev_err(&pdev->dev, "unable to get alias id, err=%d\n", 630 + id); 631 + return id; 632 + } 633 + } else { 634 + id = pdev->id; 635 + if (id == -1) 636 + id = 0; 637 + } 647 638 648 639 if (id > CONFIG_SERIAL_AR933X_NR_UARTS) 649 640 return -EINVAL; 650 - 651 - mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 652 - if (!mem_res) { 653 - dev_err(&pdev->dev, "no MEM resource\n"); 654 - return -EINVAL; 655 - } 656 641 657 642 irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 658 643 if (!irq_res) { ··· 653 652 return -EINVAL; 654 653 } 655 654 656 - up = kzalloc(sizeof(struct ar933x_uart_port), GFP_KERNEL); 655 + up = devm_kzalloc(&pdev->dev, sizeof(struct ar933x_uart_port), 656 + GFP_KERNEL); 657 657 if (!up) 658 658 return -ENOMEM; 659 659 660 - port = &up->port; 661 - port->mapbase = mem_res->start; 662 - 663 - port->membase = ioremap(mem_res->start, AR933X_UART_REGS_SIZE); 664 - if (!port->membase) { 665 - ret = -ENOMEM; 666 - goto err_free_up; 660 + up->clk = devm_clk_get(&pdev->dev, "uart"); 661 + if (IS_ERR(up->clk)) { 
662 + dev_err(&pdev->dev, "unable to get UART clock\n"); 663 + return PTR_ERR(up->clk); 667 664 } 668 665 666 + port = &up->port; 667 + 668 + mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 669 + port->membase = devm_ioremap_resource(&pdev->dev, mem_res); 670 + if (IS_ERR(port->membase)) 671 + return PTR_ERR(port->membase); 672 + 673 + ret = clk_prepare_enable(up->clk); 674 + if (ret) 675 + return ret; 676 + 677 + port->uartclk = clk_get_rate(up->clk); 678 + if (!port->uartclk) { 679 + ret = -EINVAL; 680 + goto err_disable_clk; 681 + } 682 + 683 + port->mapbase = mem_res->start; 669 684 port->line = id; 670 685 port->irq = irq_res->start; 671 686 port->dev = &pdev->dev; 672 687 port->type = PORT_AR933X; 673 688 port->iotype = UPIO_MEM32; 674 - port->uartclk = pdata->uartclk; 675 689 676 690 port->regshift = 2; 677 691 port->fifosize = AR933X_UART_FIFO_SIZE; ··· 702 686 703 687 ret = uart_add_one_port(&ar933x_uart_driver, &up->port); 704 688 if (ret) 705 - goto err_unmap; 689 + goto err_disable_clk; 706 690 707 691 platform_set_drvdata(pdev, up); 708 692 return 0; 709 693 710 - err_unmap: 711 - iounmap(up->port.membase); 712 - err_free_up: 713 - kfree(up); 694 + err_disable_clk: 695 + clk_disable_unprepare(up->clk); 714 696 return ret; 715 697 } 716 698 ··· 717 703 struct ar933x_uart_port *up; 718 704 719 705 up = platform_get_drvdata(pdev); 720 - platform_set_drvdata(pdev, NULL); 721 706 722 707 if (up) { 723 708 uart_remove_one_port(&ar933x_uart_driver, &up->port); 724 - iounmap(up->port.membase); 725 - kfree(up); 709 + clk_disable_unprepare(up->clk); 726 710 } 727 711 728 712 return 0; 729 713 } 714 + 715 + #ifdef CONFIG_OF 716 + static const struct of_device_id ar933x_uart_of_ids[] = { 717 + { .compatible = "qca,ar9330-uart" }, 718 + {}, 719 + }; 720 + MODULE_DEVICE_TABLE(of, ar933x_uart_of_ids); 721 + #endif 730 722 731 723 static struct platform_driver ar933x_uart_platform_driver = { 732 724 .probe = ar933x_uart_probe, ··· 740 720 .driver = { 741 721 
.name = DRIVER_NAME, 742 722 .owner = THIS_MODULE, 723 + .of_match_table = of_match_ptr(ar933x_uart_of_ids), 743 724 }, 744 725 }; 745 726 ··· 748 727 { 749 728 int ret; 750 729 751 - ar933x_uart_driver.nr = CONFIG_SERIAL_AR933X_NR_UARTS; 730 + if (ar933x_uart_console_enabled()) 731 + ar933x_uart_driver.cons = &ar933x_uart_console; 732 + 752 733 ret = uart_register_driver(&ar933x_uart_driver); 753 734 if (ret) 754 735 goto err_out;
+20 -15
drivers/tty/serial/arc_uart.c
··· 209 209 arc_serial_tx_chars(uart); 210 210 } 211 211 212 - static void arc_serial_rx_chars(struct arc_uart_port *uart) 212 + static void arc_serial_rx_chars(struct arc_uart_port *uart, unsigned int status) 213 213 { 214 - unsigned int status, ch, flg = 0; 214 + unsigned int ch, flg = 0; 215 215 216 216 /* 217 217 * UART has 4 deep RX-FIFO. Driver's recongnition of this fact ··· 222 222 * before RX-EMPTY=0, implies some sort of buffering going on in the 223 223 * controller, which is indeed the Rx-FIFO. 224 224 */ 225 - while (!((status = UART_GET_STATUS(uart)) & RXEMPTY)) { 226 - 227 - ch = UART_GET_DATA(uart); 228 - uart->port.icount.rx++; 229 - 225 + do { 226 + /* 227 + * This could be an Rx Intr for err (no data), 228 + * so check err and clear that Intr first 229 + */ 230 230 if (unlikely(status & (RXOERR | RXFERR))) { 231 231 if (status & RXOERR) { 232 232 uart->port.icount.overrun++; ··· 242 242 } else 243 243 flg = TTY_NORMAL; 244 244 245 - if (unlikely(uart_handle_sysrq_char(&uart->port, ch))) 246 - goto done; 245 + if (status & RXEMPTY) 246 + continue; 247 247 248 - uart_insert_char(&uart->port, status, RXOERR, ch, flg); 248 + ch = UART_GET_DATA(uart); 249 + uart->port.icount.rx++; 249 250 250 - done: 251 + if (!(uart_handle_sysrq_char(&uart->port, ch))) 252 + uart_insert_char(&uart->port, status, RXOERR, ch, flg); 253 + 254 + spin_unlock(&uart->port.lock); 251 255 tty_flip_buffer_push(&uart->port.state->port); 252 - } 256 + spin_lock(&uart->port.lock); 257 + } while (!((status = UART_GET_STATUS(uart)) & RXEMPTY)); 253 258 } 254 259 255 260 /* ··· 297 292 * notifications from the UART Controller. 
298 293 * To demultiplex between the two, we check the relevant bits 299 294 */ 300 - if ((status & RXIENB) && !(status & RXEMPTY)) { 295 + if (status & RXIENB) { 301 296 302 297 /* already in ISR, no need of xx_irqsave */ 303 298 spin_lock(&uart->port.lock); 304 - arc_serial_rx_chars(uart); 299 + arc_serial_rx_chars(uart, status); 305 300 spin_unlock(&uart->port.lock); 306 301 } 307 302 ··· 533 528 unsigned long *plat_data; 534 529 struct arc_uart_port *uart = &arc_uart_ports[dev_id]; 535 530 536 - plat_data = ((unsigned long *)(pdev->dev.platform_data)); 531 + plat_data = (unsigned long *)dev_get_platdata(&pdev->dev); 537 532 if (!plat_data) 538 533 return -ENODEV; 539 534
+712 -151
drivers/tty/serial/atmel_serial.c
··· 39 39 #include <linux/atmel_pdc.h> 40 40 #include <linux/atmel_serial.h> 41 41 #include <linux/uaccess.h> 42 - #include <linux/pinctrl/consumer.h> 43 42 #include <linux/platform_data/atmel.h> 43 + #include <linux/timer.h> 44 44 45 45 #include <asm/io.h> 46 46 #include <asm/ioctls.h> ··· 98 98 #define UART_PUT_BRGR(port,v) __raw_writel(v, (port)->membase + ATMEL_US_BRGR) 99 99 #define UART_PUT_RTOR(port,v) __raw_writel(v, (port)->membase + ATMEL_US_RTOR) 100 100 #define UART_PUT_TTGR(port, v) __raw_writel(v, (port)->membase + ATMEL_US_TTGR) 101 + #define UART_GET_IP_NAME(port) __raw_readl((port)->membase + ATMEL_US_NAME) 101 102 102 103 /* PDC registers */ 103 104 #define UART_PUT_PTCR(port,v) __raw_writel(v, (port)->membase + ATMEL_PDC_PTCR) ··· 141 140 u32 backup_imr; /* IMR saved during suspend */ 142 141 int break_active; /* break being received */ 143 142 144 - short use_dma_rx; /* enable PDC receiver */ 143 + bool use_dma_rx; /* enable DMA receiver */ 144 + bool use_pdc_rx; /* enable PDC receiver */ 145 145 short pdc_rx_idx; /* current PDC RX buffer */ 146 146 struct atmel_dma_buffer pdc_rx[2]; /* PDC receier */ 147 147 148 - short use_dma_tx; /* enable PDC transmitter */ 148 + bool use_dma_tx; /* enable DMA transmitter */ 149 + bool use_pdc_tx; /* enable PDC transmitter */ 149 150 struct atmel_dma_buffer pdc_tx; /* PDC transmitter */ 150 151 152 + spinlock_t lock_tx; /* port lock */ 153 + spinlock_t lock_rx; /* port lock */ 154 + struct dma_chan *chan_tx; 155 + struct dma_chan *chan_rx; 156 + struct dma_async_tx_descriptor *desc_tx; 157 + struct dma_async_tx_descriptor *desc_rx; 158 + dma_cookie_t cookie_tx; 159 + dma_cookie_t cookie_rx; 160 + struct scatterlist sg_tx; 161 + struct scatterlist sg_rx; 151 162 struct tasklet_struct tasklet; 152 163 unsigned int irq_status; 153 164 unsigned int irq_status_prev; ··· 168 155 169 156 struct serial_rs485 rs485; /* rs485 settings */ 170 157 unsigned int tx_done_mask; 158 + bool is_usart; /* usart or uart */ 159 + 
struct timer_list uart_timer; /* uart timer */ 160 + int (*prepare_rx)(struct uart_port *port); 161 + int (*prepare_tx)(struct uart_port *port); 162 + void (*schedule_rx)(struct uart_port *port); 163 + void (*schedule_tx)(struct uart_port *port); 164 + void (*release_rx)(struct uart_port *port); 165 + void (*release_tx)(struct uart_port *port); 171 166 }; 172 167 173 168 static struct atmel_uart_port atmel_ports[ATMEL_MAX_UART]; ··· 202 181 } 203 182 204 183 #ifdef CONFIG_SERIAL_ATMEL_PDC 205 - static bool atmel_use_dma_rx(struct uart_port *port) 184 + static bool atmel_use_pdc_rx(struct uart_port *port) 206 185 { 207 186 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 208 187 209 - return atmel_port->use_dma_rx; 188 + return atmel_port->use_pdc_rx; 210 189 } 190 + 191 + static bool atmel_use_pdc_tx(struct uart_port *port) 192 + { 193 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 194 + 195 + return atmel_port->use_pdc_tx; 196 + } 197 + #else 198 + static bool atmel_use_pdc_rx(struct uart_port *port) 199 + { 200 + return false; 201 + } 202 + 203 + static bool atmel_use_pdc_tx(struct uart_port *port) 204 + { 205 + return false; 206 + } 207 + #endif 211 208 212 209 static bool atmel_use_dma_tx(struct uart_port *port) 213 210 { ··· 233 194 234 195 return atmel_port->use_dma_tx; 235 196 } 236 - #else 197 + 237 198 static bool atmel_use_dma_rx(struct uart_port *port) 238 199 { 239 - return false; 240 - } 200 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 241 201 242 - static bool atmel_use_dma_tx(struct uart_port *port) 243 - { 244 - return false; 202 + return atmel_port->use_dma_rx; 245 203 } 246 - #endif 247 204 248 205 /* Enable or disable the rs485 support */ 249 206 void atmel_config_rs485(struct uart_port *port, struct serial_rs485 *rs485conf) ··· 268 233 mode |= ATMEL_US_USMODE_RS485; 269 234 } else { 270 235 dev_dbg(port->dev, "Setting UART to RS232\n"); 271 - if (atmel_use_dma_tx(port)) 236 + if 
(atmel_use_pdc_tx(port)) 272 237 atmel_port->tx_done_mask = ATMEL_US_ENDTX | 273 238 ATMEL_US_TXBUFE; 274 239 else ··· 380 345 { 381 346 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 382 347 383 - if (atmel_use_dma_tx(port)) { 348 + if (atmel_use_pdc_tx(port)) { 384 349 /* disable PDC transmit */ 385 350 UART_PUT_PTCR(port, ATMEL_PDC_TXTDIS); 386 351 } ··· 399 364 { 400 365 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 401 366 402 - if (atmel_use_dma_tx(port)) { 367 + if (atmel_use_pdc_tx(port)) { 403 368 if (UART_GET_PTSR(port) & ATMEL_PDC_TXTEN) 404 369 /* The transmitter is already running. Yes, we 405 370 really need this.*/ ··· 425 390 426 391 UART_PUT_CR(port, ATMEL_US_RXEN); 427 392 428 - if (atmel_use_dma_rx(port)) { 393 + if (atmel_use_pdc_rx(port)) { 429 394 /* enable PDC controller */ 430 395 UART_PUT_IER(port, ATMEL_US_ENDRX | ATMEL_US_TIMEOUT | 431 396 port->read_status_mask); ··· 442 407 { 443 408 UART_PUT_CR(port, ATMEL_US_RXDIS); 444 409 445 - if (atmel_use_dma_rx(port)) { 410 + if (atmel_use_pdc_rx(port)) { 446 411 /* disable PDC receive */ 447 412 UART_PUT_PTCR(port, ATMEL_PDC_RXTDIS); 448 413 UART_PUT_IDR(port, ATMEL_US_ENDRX | ATMEL_US_TIMEOUT | ··· 599 564 UART_PUT_IER(port, atmel_port->tx_done_mask); 600 565 } 601 566 567 + static void atmel_complete_tx_dma(void *arg) 568 + { 569 + struct atmel_uart_port *atmel_port = arg; 570 + struct uart_port *port = &atmel_port->uart; 571 + struct circ_buf *xmit = &port->state->xmit; 572 + struct dma_chan *chan = atmel_port->chan_tx; 573 + unsigned long flags; 574 + 575 + spin_lock_irqsave(&port->lock, flags); 576 + 577 + if (chan) 578 + dmaengine_terminate_all(chan); 579 + xmit->tail += sg_dma_len(&atmel_port->sg_tx); 580 + xmit->tail &= UART_XMIT_SIZE - 1; 581 + 582 + port->icount.tx += sg_dma_len(&atmel_port->sg_tx); 583 + 584 + spin_lock_irq(&atmel_port->lock_tx); 585 + async_tx_ack(atmel_port->desc_tx); 586 + atmel_port->cookie_tx = -EINVAL; 587 + atmel_port->desc_tx 
= NULL; 588 + spin_unlock_irq(&atmel_port->lock_tx); 589 + 590 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 591 + uart_write_wakeup(port); 592 + 593 + /* Do we really need this? */ 594 + if (!uart_circ_empty(xmit)) 595 + tasklet_schedule(&atmel_port->tasklet); 596 + 597 + spin_unlock_irqrestore(&port->lock, flags); 598 + } 599 + 600 + static void atmel_release_tx_dma(struct uart_port *port) 601 + { 602 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 603 + struct dma_chan *chan = atmel_port->chan_tx; 604 + 605 + if (chan) { 606 + dmaengine_terminate_all(chan); 607 + dma_release_channel(chan); 608 + dma_unmap_sg(port->dev, &atmel_port->sg_tx, 1, 609 + DMA_MEM_TO_DEV); 610 + } 611 + 612 + atmel_port->desc_tx = NULL; 613 + atmel_port->chan_tx = NULL; 614 + atmel_port->cookie_tx = -EINVAL; 615 + } 616 + 617 + /* 618 + * Called from tasklet with TXRDY interrupt is disabled. 619 + */ 620 + static void atmel_tx_dma(struct uart_port *port) 621 + { 622 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 623 + struct circ_buf *xmit = &port->state->xmit; 624 + struct dma_chan *chan = atmel_port->chan_tx; 625 + struct dma_async_tx_descriptor *desc; 626 + struct scatterlist *sg = &atmel_port->sg_tx; 627 + 628 + /* Make sure we have an idle channel */ 629 + if (atmel_port->desc_tx != NULL) 630 + return; 631 + 632 + if (!uart_circ_empty(xmit) && !uart_tx_stopped(port)) { 633 + /* 634 + * DMA is idle now. 635 + * Port xmit buffer is already mapped, 636 + * and it is one page... Just adjust 637 + * offsets and lengths. Since it is a circular buffer, 638 + * we have to transmit till the end, and then the rest. 639 + * Take the port lock to get a 640 + * consistent xmit buffer state. 
641 + */ 642 + sg->offset = xmit->tail & (UART_XMIT_SIZE - 1); 643 + sg_dma_address(sg) = (sg_dma_address(sg) & 644 + ~(UART_XMIT_SIZE - 1)) 645 + + sg->offset; 646 + sg_dma_len(sg) = CIRC_CNT_TO_END(xmit->head, 647 + xmit->tail, 648 + UART_XMIT_SIZE); 649 + BUG_ON(!sg_dma_len(sg)); 650 + 651 + desc = dmaengine_prep_slave_sg(chan, 652 + sg, 653 + 1, 654 + DMA_MEM_TO_DEV, 655 + DMA_PREP_INTERRUPT | 656 + DMA_CTRL_ACK); 657 + if (!desc) { 658 + dev_err(port->dev, "Failed to send via dma!\n"); 659 + return; 660 + } 661 + 662 + dma_sync_sg_for_device(port->dev, sg, 1, DMA_MEM_TO_DEV); 663 + 664 + atmel_port->desc_tx = desc; 665 + desc->callback = atmel_complete_tx_dma; 666 + desc->callback_param = atmel_port; 667 + atmel_port->cookie_tx = dmaengine_submit(desc); 668 + 669 + } else { 670 + if (atmel_port->rs485.flags & SER_RS485_ENABLED) { 671 + /* DMA done, stop TX, start RX for RS485 */ 672 + atmel_start_rx(port); 673 + } 674 + } 675 + 676 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 677 + uart_write_wakeup(port); 678 + } 679 + 680 + static int atmel_prepare_tx_dma(struct uart_port *port) 681 + { 682 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 683 + dma_cap_mask_t mask; 684 + struct dma_slave_config config; 685 + int ret, nent; 686 + 687 + dma_cap_zero(mask); 688 + dma_cap_set(DMA_SLAVE, mask); 689 + 690 + atmel_port->chan_tx = dma_request_slave_channel(port->dev, "tx"); 691 + if (atmel_port->chan_tx == NULL) 692 + goto chan_err; 693 + dev_info(port->dev, "using %s for tx DMA transfers\n", 694 + dma_chan_name(atmel_port->chan_tx)); 695 + 696 + spin_lock_init(&atmel_port->lock_tx); 697 + sg_init_table(&atmel_port->sg_tx, 1); 698 + /* UART circular tx buffer is an aligned page. 
*/ 699 + BUG_ON((int)port->state->xmit.buf & ~PAGE_MASK); 700 + sg_set_page(&atmel_port->sg_tx, 701 + virt_to_page(port->state->xmit.buf), 702 + UART_XMIT_SIZE, 703 + (int)port->state->xmit.buf & ~PAGE_MASK); 704 + nent = dma_map_sg(port->dev, 705 + &atmel_port->sg_tx, 706 + 1, 707 + DMA_MEM_TO_DEV); 708 + 709 + if (!nent) { 710 + dev_dbg(port->dev, "need to release resource of dma\n"); 711 + goto chan_err; 712 + } else { 713 + dev_dbg(port->dev, "%s: mapped %d@%p to %x\n", __func__, 714 + sg_dma_len(&atmel_port->sg_tx), 715 + port->state->xmit.buf, 716 + sg_dma_address(&atmel_port->sg_tx)); 717 + } 718 + 719 + /* Configure the slave DMA */ 720 + memset(&config, 0, sizeof(config)); 721 + config.direction = DMA_MEM_TO_DEV; 722 + config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 723 + config.dst_addr = port->mapbase + ATMEL_US_THR; 724 + 725 + ret = dmaengine_device_control(atmel_port->chan_tx, 726 + DMA_SLAVE_CONFIG, 727 + (unsigned long)&config); 728 + if (ret) { 729 + dev_err(port->dev, "DMA tx slave configuration failed\n"); 730 + goto chan_err; 731 + } 732 + 733 + return 0; 734 + 735 + chan_err: 736 + dev_err(port->dev, "TX channel not available, switch to pio\n"); 737 + atmel_port->use_dma_tx = 0; 738 + if (atmel_port->chan_tx) 739 + atmel_release_tx_dma(port); 740 + return -EINVAL; 741 + } 742 + 743 + static void atmel_flip_buffer_rx_dma(struct uart_port *port, 744 + char *buf, size_t count) 745 + { 746 + struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 747 + struct tty_port *tport = &port->state->port; 748 + 749 + dma_sync_sg_for_cpu(port->dev, 750 + &atmel_port->sg_rx, 751 + 1, 752 + DMA_DEV_TO_MEM); 753 + 754 + tty_insert_flip_string(tport, buf, count); 755 + 756 + dma_sync_sg_for_device(port->dev, 757 + &atmel_port->sg_rx, 758 + 1, 759 + DMA_DEV_TO_MEM); 760 + /* 761 + * Drop the lock here since it might end up calling 762 + * uart_start(), which takes the lock. 
+	 */
+	spin_unlock(&port->lock);
+	tty_flip_buffer_push(tport);
+	spin_lock(&port->lock);
+}
+
+static void atmel_complete_rx_dma(void *arg)
+{
+	struct uart_port *port = arg;
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+
+	tasklet_schedule(&atmel_port->tasklet);
+}
+
+static void atmel_release_rx_dma(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	struct dma_chan *chan = atmel_port->chan_rx;
+
+	if (chan) {
+		dmaengine_terminate_all(chan);
+		dma_release_channel(chan);
+		dma_unmap_sg(port->dev, &atmel_port->sg_rx, 1,
+			     DMA_DEV_TO_MEM);
+	}
+
+	atmel_port->desc_rx = NULL;
+	atmel_port->chan_rx = NULL;
+	atmel_port->cookie_rx = -EINVAL;
+
+	if (!atmel_port->is_usart)
+		del_timer_sync(&atmel_port->uart_timer);
+}
+
+static void atmel_rx_from_dma(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	struct circ_buf *ring = &atmel_port->rx_ring;
+	struct dma_chan *chan = atmel_port->chan_rx;
+	struct dma_tx_state state;
+	enum dma_status dmastat;
+	size_t pending, count;
+
+
+	/* Reset the UART timeout early so that we don't miss one */
+	UART_PUT_CR(port, ATMEL_US_STTTO);
+	dmastat = dmaengine_tx_status(chan,
+				atmel_port->cookie_rx,
+				&state);
+	/* Restart a new tasklet if DMA status is error */
+	if (dmastat == DMA_ERROR) {
+		dev_dbg(port->dev, "Get residue error, restart tasklet\n");
+		UART_PUT_IER(port, ATMEL_US_TIMEOUT);
+		tasklet_schedule(&atmel_port->tasklet);
+		return;
+	}
+	/* current transfer size should no larger than dma buffer */
+	pending = sg_dma_len(&atmel_port->sg_rx) - state.residue;
+	BUG_ON(pending > sg_dma_len(&atmel_port->sg_rx));
+
+	/*
+	 * This will take the chars we have so far,
+	 * ring->head will record the transfer size, only new bytes come
+	 * will insert into the framework.
+	 */
+	if (pending > ring->head) {
+		count = pending - ring->head;
+
+		atmel_flip_buffer_rx_dma(port, ring->buf + ring->head, count);
+
+		ring->head += count;
+		if (ring->head == sg_dma_len(&atmel_port->sg_rx))
+			ring->head = 0;
+
+		port->icount.rx += count;
+	}
+
+	UART_PUT_IER(port, ATMEL_US_TIMEOUT);
+}
+
+static int atmel_prepare_rx_dma(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	struct dma_async_tx_descriptor *desc;
+	dma_cap_mask_t mask;
+	struct dma_slave_config config;
+	struct circ_buf *ring;
+	int ret, nent;
+
+	ring = &atmel_port->rx_ring;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_CYCLIC, mask);
+
+	atmel_port->chan_rx = dma_request_slave_channel(port->dev, "rx");
+	if (atmel_port->chan_rx == NULL)
+		goto chan_err;
+	dev_info(port->dev, "using %s for rx DMA transfers\n",
+		dma_chan_name(atmel_port->chan_rx));
+
+	spin_lock_init(&atmel_port->lock_rx);
+	sg_init_table(&atmel_port->sg_rx, 1);
+	/* UART circular rx buffer is an aligned page. */
+	BUG_ON((int)port->state->xmit.buf & ~PAGE_MASK);
+	sg_set_page(&atmel_port->sg_rx,
+		    virt_to_page(ring->buf),
+		    ATMEL_SERIAL_RINGSIZE,
+		    (int)ring->buf & ~PAGE_MASK);
+	nent = dma_map_sg(port->dev,
+			  &atmel_port->sg_rx,
+			  1,
+			  DMA_DEV_TO_MEM);
+
+	if (!nent) {
+		dev_dbg(port->dev, "need to release resource of dma\n");
+		goto chan_err;
+	} else {
+		dev_dbg(port->dev, "%s: mapped %d@%p to %x\n", __func__,
+			sg_dma_len(&atmel_port->sg_rx),
+			ring->buf,
+			sg_dma_address(&atmel_port->sg_rx));
+	}
+
+	/* Configure the slave DMA */
+	memset(&config, 0, sizeof(config));
+	config.direction = DMA_DEV_TO_MEM;
+	config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+	config.src_addr = port->mapbase + ATMEL_US_RHR;
+
+	ret = dmaengine_device_control(atmel_port->chan_rx,
+				       DMA_SLAVE_CONFIG,
+				       (unsigned long)&config);
+	if (ret) {
+		dev_err(port->dev, "DMA rx slave configuration failed\n");
+		goto chan_err;
+	}
+	/*
+	 * Prepare a cyclic dma transfer, assign 2 descriptors,
+	 * each one is half ring buffer size
+	 */
+	desc = dmaengine_prep_dma_cyclic(atmel_port->chan_rx,
+					 sg_dma_address(&atmel_port->sg_rx),
+					 sg_dma_len(&atmel_port->sg_rx),
+					 sg_dma_len(&atmel_port->sg_rx)/2,
+					 DMA_DEV_TO_MEM,
+					 DMA_PREP_INTERRUPT);
+	desc->callback = atmel_complete_rx_dma;
+	desc->callback_param = port;
+	atmel_port->desc_rx = desc;
+	atmel_port->cookie_rx = dmaengine_submit(desc);
+
+	return 0;
+
+chan_err:
+	dev_err(port->dev, "RX channel not available, switch to pio\n");
+	atmel_port->use_dma_rx = 0;
+	if (atmel_port->chan_rx)
+		atmel_release_rx_dma(port);
+	return -EINVAL;
+}
+
+static void atmel_uart_timer_callback(unsigned long data)
+{
+	struct uart_port *port = (void *)data;
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+
+	tasklet_schedule(&atmel_port->tasklet);
+	mod_timer(&atmel_port->uart_timer, jiffies + uart_poll_timeout(port));
+}
+
 /*
  * receive interrupt handler.
  */
···
 {
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
 
-	if (atmel_use_dma_rx(port)) {
+	if (atmel_use_pdc_rx(port)) {
 		/*
 		 * PDC receive. Just schedule the tasklet and let it
 		 * figure out the details.
···
 	if (pending & (ATMEL_US_RXBRK | ATMEL_US_OVRE |
 		       ATMEL_US_FRAME | ATMEL_US_PARE))
 		atmel_pdc_rxerr(port, pending);
+	}
+
+	if (atmel_use_dma_rx(port)) {
+		if (pending & ATMEL_US_TIMEOUT) {
+			UART_PUT_IDR(port, ATMEL_US_TIMEOUT);
+			tasklet_schedule(&atmel_port->tasklet);
+		}
 	}
 
 	/* Interrupt receive */
···
 	return pass_counter ? IRQ_HANDLED : IRQ_NONE;
 }
 
+static void atmel_release_tx_pdc(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx;
+
+	dma_unmap_single(port->dev,
+			 pdc->dma_addr,
+			 pdc->dma_size,
+			 DMA_TO_DEVICE);
+}
+
 /*
  * Called from tasklet with ENDTX and TXBUFE interrupts disabled.
  */
-static void atmel_tx_dma(struct uart_port *port)
+static void atmel_tx_pdc(struct uart_port *port)
 {
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
 	struct circ_buf *xmit = &port->state->xmit;
···
 
 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
 		uart_write_wakeup(port);
+}
+
+static int atmel_prepare_tx_pdc(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx;
+	struct circ_buf *xmit = &port->state->xmit;
+
+	pdc->buf = xmit->buf;
+	pdc->dma_addr = dma_map_single(port->dev,
+				       pdc->buf,
+				       UART_XMIT_SIZE,
+				       DMA_TO_DEVICE);
+	pdc->dma_size = UART_XMIT_SIZE;
+	pdc->ofs = 0;
+
+	return 0;
 }
 
 static void atmel_rx_from_ring(struct uart_port *port)
···
 	spin_lock(&port->lock);
 }
 
-static void atmel_rx_from_dma(struct uart_port *port)
+static void atmel_release_rx_pdc(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	int i;
+
+	for (i = 0; i < 2; i++) {
+		struct atmel_dma_buffer *pdc = &atmel_port->pdc_rx[i];
+
+		dma_unmap_single(port->dev,
+				 pdc->dma_addr,
+				 pdc->dma_size,
+				 DMA_FROM_DEVICE);
+		kfree(pdc->buf);
+	}
+
+	if (!atmel_port->is_usart)
+		del_timer_sync(&atmel_port->uart_timer);
+}
+
+static void atmel_rx_from_pdc(struct uart_port *port)
 {
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
 	struct tty_port *tport = &port->state->port;
···
 	UART_PUT_IER(port, ATMEL_US_ENDRX | ATMEL_US_TIMEOUT);
 }
 
+static int atmel_prepare_rx_pdc(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	int i;
+
+	for (i = 0; i < 2; i++) {
+		struct atmel_dma_buffer *pdc = &atmel_port->pdc_rx[i];
+
+		pdc->buf = kmalloc(PDC_BUFFER_SIZE, GFP_KERNEL);
+		if (pdc->buf == NULL) {
+			if (i != 0) {
+				dma_unmap_single(port->dev,
+						 atmel_port->pdc_rx[0].dma_addr,
+						 PDC_BUFFER_SIZE,
+						 DMA_FROM_DEVICE);
+				kfree(atmel_port->pdc_rx[0].buf);
+			}
+			atmel_port->use_pdc_rx = 0;
+			return -ENOMEM;
+		}
+		pdc->dma_addr = dma_map_single(port->dev,
+					       pdc->buf,
+					       PDC_BUFFER_SIZE,
+					       DMA_FROM_DEVICE);
+		pdc->dma_size = PDC_BUFFER_SIZE;
+		pdc->ofs = 0;
+	}
+
+	atmel_port->pdc_rx_idx = 0;
+
+	UART_PUT_RPR(port, atmel_port->pdc_rx[0].dma_addr);
+	UART_PUT_RCR(port, PDC_BUFFER_SIZE);
+
+	UART_PUT_RNPR(port, atmel_port->pdc_rx[1].dma_addr);
+	UART_PUT_RNCR(port, PDC_BUFFER_SIZE);
+
+	return 0;
+}
+
 /*
  * tasklet handling tty stuff outside the interrupt handler.
  */
···
 	/* The interrupt handler does not take the lock */
 	spin_lock(&port->lock);
 
-	if (atmel_use_dma_tx(port))
-		atmel_tx_dma(port);
-	else
-		atmel_tx_chars(port);
+	atmel_port->schedule_tx(port);
 
 	status = atmel_port->irq_status;
 	status_change = status ^ atmel_port->irq_status_prev;
···
 		atmel_port->irq_status_prev = status;
 	}
 
-	if (atmel_use_dma_rx(port))
-		atmel_rx_from_dma(port);
-	else
-		atmel_rx_from_ring(port);
+	atmel_port->schedule_rx(port);
 
 	spin_unlock(&port->lock);
 }
 
+static int atmel_init_property(struct atmel_uart_port *atmel_port,
+				struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct atmel_uart_data *pdata = dev_get_platdata(&pdev->dev);
+
+	if (np) {
+		/* DMA/PDC usage specification */
+		if (of_get_property(np, "atmel,use-dma-rx", NULL)) {
+			if (of_get_property(np, "dmas", NULL)) {
+				atmel_port->use_dma_rx = true;
+				atmel_port->use_pdc_rx = false;
+			} else {
+				atmel_port->use_dma_rx = false;
+				atmel_port->use_pdc_rx = true;
+			}
+		} else {
+			atmel_port->use_dma_rx = false;
+			atmel_port->use_pdc_rx = false;
+		}
+
+		if (of_get_property(np, "atmel,use-dma-tx", NULL)) {
+			if (of_get_property(np, "dmas", NULL)) {
+				atmel_port->use_dma_tx = true;
+				atmel_port->use_pdc_tx = false;
+			} else {
+				atmel_port->use_dma_tx = false;
+				atmel_port->use_pdc_tx = true;
+			}
+		} else {
+			atmel_port->use_dma_tx = false;
+			atmel_port->use_pdc_tx = false;
+		}
+
+	} else {
+		atmel_port->use_pdc_rx = pdata->use_dma_rx;
+		atmel_port->use_pdc_tx = pdata->use_dma_tx;
+		atmel_port->use_dma_rx = false;
+		atmel_port->use_dma_tx = false;
+	}
+
+	return 0;
+}
+
+static void atmel_init_rs485(struct atmel_uart_port *atmel_port,
+				struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct atmel_uart_data *pdata = dev_get_platdata(&pdev->dev);
+
+	if (np) {
+		u32 rs485_delay[2];
+		/* rs485 properties */
+		if (of_property_read_u32_array(np, "rs485-rts-delay",
+					rs485_delay, 2) == 0) {
+			struct serial_rs485 *rs485conf = &atmel_port->rs485;
+
+			rs485conf->delay_rts_before_send = rs485_delay[0];
+			rs485conf->delay_rts_after_send = rs485_delay[1];
+			rs485conf->flags = 0;
+
+			if (of_get_property(np, "rs485-rx-during-tx", NULL))
+				rs485conf->flags |= SER_RS485_RX_DURING_TX;
+
+			if (of_get_property(np, "linux,rs485-enabled-at-boot-time",
+					    NULL))
+				rs485conf->flags |= SER_RS485_ENABLED;
+		}
+	} else {
+		atmel_port->rs485 = pdata->rs485;
+	}
+
+}
+
+static void atmel_set_ops(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+
+	if (atmel_use_dma_rx(port)) {
+		atmel_port->prepare_rx = &atmel_prepare_rx_dma;
+		atmel_port->schedule_rx = &atmel_rx_from_dma;
+		atmel_port->release_rx = &atmel_release_rx_dma;
+	} else if (atmel_use_pdc_rx(port)) {
+		atmel_port->prepare_rx = &atmel_prepare_rx_pdc;
+		atmel_port->schedule_rx = &atmel_rx_from_pdc;
+		atmel_port->release_rx = &atmel_release_rx_pdc;
+	} else {
+		atmel_port->prepare_rx = NULL;
+		atmel_port->schedule_rx = &atmel_rx_from_ring;
+		atmel_port->release_rx = NULL;
+	}
+
+	if (atmel_use_dma_tx(port)) {
+		atmel_port->prepare_tx = &atmel_prepare_tx_dma;
+		atmel_port->schedule_tx = &atmel_tx_dma;
+		atmel_port->release_tx = &atmel_release_tx_dma;
+	} else if (atmel_use_pdc_tx(port)) {
+		atmel_port->prepare_tx = &atmel_prepare_tx_pdc;
+		atmel_port->schedule_tx = &atmel_tx_pdc;
+		atmel_port->release_tx = &atmel_release_tx_pdc;
+	} else {
+		atmel_port->prepare_tx = NULL;
+		atmel_port->schedule_tx = &atmel_tx_chars;
+		atmel_port->release_tx = NULL;
+	}
+}
+
+/*
+ * Get ip name usart or uart
+ */
+static int atmel_get_ip_name(struct uart_port *port)
+{
+	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+	int name = UART_GET_IP_NAME(port);
+	int usart, uart;
+	/* usart and uart ascii */
+	usart = 0x55534152;
+	uart = 0x44424755;
+
+	atmel_port->is_usart = false;
+
+	if (name == usart) {
+		dev_dbg(port->dev, "This is usart\n");
+		atmel_port->is_usart = true;
+	} else if (name == uart) {
+		dev_dbg(port->dev, "This is uart\n");
+		atmel_port->is_usart = false;
+	} else {
+		dev_err(port->dev, "Not supported ip name, set to uart\n");
+		return -EINVAL;
+	}
+
+	return 0;
 }
 
 /*
···
  */
 static int atmel_startup(struct uart_port *port)
 {
+	struct platform_device *pdev = to_platform_device(port->dev);
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
 	struct tty_struct *tty = port->state->port.tty;
 	int retval;
···
 	/*
 	 * Initialize DMA (if necessary)
 	 */
-	if (atmel_use_dma_rx(port)) {
-		int i;
+	atmel_init_property(atmel_port, pdev);
 
-		for (i = 0; i < 2; i++) {
-			struct atmel_dma_buffer *pdc = &atmel_port->pdc_rx[i];
-
-			pdc->buf = kmalloc(PDC_BUFFER_SIZE, GFP_KERNEL);
-			if (pdc->buf == NULL) {
-				if (i != 0) {
-					dma_unmap_single(port->dev,
-						atmel_port->pdc_rx[0].dma_addr,
-						PDC_BUFFER_SIZE,
-						DMA_FROM_DEVICE);
-					kfree(atmel_port->pdc_rx[0].buf);
-				}
-				free_irq(port->irq, port);
-				return -ENOMEM;
-			}
-			pdc->dma_addr = dma_map_single(port->dev,
-						       pdc->buf,
-						       PDC_BUFFER_SIZE,
-						       DMA_FROM_DEVICE);
-			pdc->dma_size = PDC_BUFFER_SIZE;
-			pdc->ofs = 0;
-		}
-
-		atmel_port->pdc_rx_idx = 0;
-
-		UART_PUT_RPR(port, atmel_port->pdc_rx[0].dma_addr);
-		UART_PUT_RCR(port, PDC_BUFFER_SIZE);
-
-		UART_PUT_RNPR(port, atmel_port->pdc_rx[1].dma_addr);
-		UART_PUT_RNCR(port, PDC_BUFFER_SIZE);
-	}
-	if (atmel_use_dma_tx(port)) {
-		struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx;
-		struct circ_buf *xmit = &port->state->xmit;
-
-		pdc->buf = xmit->buf;
-		pdc->dma_addr = dma_map_single(port->dev,
-					       pdc->buf,
-					       UART_XMIT_SIZE,
-					       DMA_TO_DEVICE);
-		pdc->dma_size = UART_XMIT_SIZE;
-		pdc->ofs = 0;
+	if (atmel_port->prepare_rx) {
+		retval = atmel_port->prepare_rx(port);
+		if (retval < 0)
+			atmel_set_ops(port);
 	}
 
+	if (atmel_port->prepare_tx) {
+		retval = atmel_port->prepare_tx(port);
+		if (retval < 0)
+			atmel_set_ops(port);
+	}
 	/*
 	 * If there is a specific "open" function (to register
 	 * control line interrupts)
···
 	/* enable xmit & rcvr */
 	UART_PUT_CR(port, ATMEL_US_TXEN | ATMEL_US_RXEN);
 
-	if (atmel_use_dma_rx(port)) {
+	if (atmel_use_pdc_rx(port)) {
 		/* set UART timeout */
-		UART_PUT_RTOR(port, PDC_RX_TIMEOUT);
-		UART_PUT_CR(port, ATMEL_US_STTTO);
+		if (!atmel_port->is_usart) {
+			setup_timer(&atmel_port->uart_timer,
+					atmel_uart_timer_callback,
+					(unsigned long)port);
+			mod_timer(&atmel_port->uart_timer,
+					jiffies + uart_poll_timeout(port));
+		/* set USART timeout */
+		} else {
+			UART_PUT_RTOR(port, PDC_RX_TIMEOUT);
+			UART_PUT_CR(port, ATMEL_US_STTTO);
 
-	UART_PUT_IER(port, ATMEL_US_ENDRX | ATMEL_US_TIMEOUT);
+			UART_PUT_IER(port, ATMEL_US_ENDRX | ATMEL_US_TIMEOUT);
+		}
 		/* enable PDC controller */
 		UART_PUT_PTCR(port, ATMEL_PDC_RXTEN);
+	} else if (atmel_use_dma_rx(port)) {
+		/* set UART timeout */
+		if (!atmel_port->is_usart) {
+			setup_timer(&atmel_port->uart_timer,
+					atmel_uart_timer_callback,
+					(unsigned long)port);
+			mod_timer(&atmel_port->uart_timer,
+					jiffies + uart_poll_timeout(port));
+		/* set USART timeout */
+		} else {
+			UART_PUT_RTOR(port, PDC_RX_TIMEOUT);
+			UART_PUT_CR(port, ATMEL_US_STTTO);
+
+			UART_PUT_IER(port, ATMEL_US_TIMEOUT);
+		}
 	} else {
 		/* enable receive only */
 		UART_PUT_IER(port, ATMEL_US_RXRDY);
···
 	/*
 	 * Shut-down the DMA.
 	 */
-	if (atmel_use_dma_rx(port)) {
-		int i;
-
-		for (i = 0; i < 2; i++) {
-			struct atmel_dma_buffer *pdc = &atmel_port->pdc_rx[i];
-
-			dma_unmap_single(port->dev,
-					 pdc->dma_addr,
-					 pdc->dma_size,
-					 DMA_FROM_DEVICE);
-			kfree(pdc->buf);
-		}
-	}
-	if (atmel_use_dma_tx(port)) {
-		struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx;
-
-		dma_unmap_single(port->dev,
-				 pdc->dma_addr,
-				 pdc->dma_size,
-				 DMA_TO_DEVICE);
-	}
+	if (atmel_port->release_rx)
+		atmel_port->release_rx(port);
+	if (atmel_port->release_tx)
+		atmel_port->release_tx(port);
 
 	/*
 	 * Disable all interrupts, port and break condition.
···
 {
 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
 
-	if (atmel_use_dma_tx(port)) {
+	if (atmel_use_pdc_tx(port)) {
 		UART_PUT_TCR(port, 0);
 		atmel_port->pdc_tx.ofs = 0;
 	}
···
 	if (termios->c_iflag & (BRKINT | PARMRK))
 		port->read_status_mask |= ATMEL_US_RXBRK;
 
-	if (atmel_use_dma_rx(port))
+	if (atmel_use_pdc_rx(port))
 		/* need to enable error interrupts */
 		UART_PUT_IER(port, port->read_status_mask);
···
 #endif
 };
 
-static void atmel_of_init_port(struct atmel_uart_port *atmel_port,
-				struct device_node *np)
-{
-	u32 rs485_delay[2];
-
-	/* DMA/PDC usage specification */
-	if (of_get_property(np, "atmel,use-dma-rx", NULL))
-		atmel_port->use_dma_rx = 1;
-	else
-		atmel_port->use_dma_rx = 0;
-	if (of_get_property(np, "atmel,use-dma-tx", NULL))
-		atmel_port->use_dma_tx = 1;
-	else
-		atmel_port->use_dma_tx = 0;
-
-	/* rs485 properties */
-	if (of_property_read_u32_array(np, "rs485-rts-delay",
-					rs485_delay, 2) == 0) {
-		struct serial_rs485 *rs485conf = &atmel_port->rs485;
-
-		rs485conf->delay_rts_before_send = rs485_delay[0];
-		rs485conf->delay_rts_after_send = rs485_delay[1];
-		rs485conf->flags = 0;
-
-		if (of_get_property(np, "rs485-rx-during-tx", NULL))
-			rs485conf->flags |= SER_RS485_RX_DURING_TX;
-
-		if (of_get_property(np, "linux,rs485-enabled-at-boot-time", NULL))
-			rs485conf->flags |= SER_RS485_ENABLED;
-	}
-}
-
 /*
  * Configure the port from the platform device resource info.
  */
···
 {
 	int ret;
 	struct uart_port *port = &atmel_port->uart;
-	struct atmel_uart_data *pdata = pdev->dev.platform_data;
+	struct atmel_uart_data *pdata = dev_get_platdata(&pdev->dev);
 
-	if (pdev->dev.of_node) {
-		atmel_of_init_port(atmel_port, pdev->dev.of_node);
-	} else {
-		atmel_port->use_dma_rx = pdata->use_dma_rx;
-		atmel_port->use_dma_tx = pdata->use_dma_tx;
-		atmel_port->rs485 = pdata->rs485;
-	}
+	if (!atmel_init_property(atmel_port, pdev))
+		atmel_set_ops(port);
+
+	atmel_init_rs485(atmel_port, pdev);
 
 	port->iotype = UPIO_MEM;
 	port->flags = UPF_BOOT_AUTOCONF;
···
 	/* Use TXEMPTY for interrupt when rs485 else TXRDY or ENDTX|TXBUFE */
 	if (atmel_port->rs485.flags & SER_RS485_ENABLED)
 		atmel_port->tx_done_mask = ATMEL_US_TXEMPTY;
-	else if (atmel_use_dma_tx(port)) {
+	else if (atmel_use_pdc_tx(port)) {
 		port->fifosize = PDC_BUFFER_SIZE;
 		atmel_port->tx_done_mask = ATMEL_US_ENDTX | ATMEL_US_TXBUFE;
 	} else {
···
 	int ret;
 	if (atmel_default_console_device) {
 		struct atmel_uart_data *pdata =
-			atmel_default_console_device->dev.platform_data;
+			dev_get_platdata(&atmel_default_console_device->dev);
 		int id = pdata->num;
 		struct atmel_uart_port *port = &atmel_ports[id];
···
 {
 	struct atmel_uart_port *port;
 	struct device_node *np = pdev->dev.of_node;
-	struct atmel_uart_data *pdata = pdev->dev.platform_data;
+	struct atmel_uart_data *pdata = dev_get_platdata(&pdev->dev);
 	void *data;
 	int ret = -ENODEV;
-	struct pinctrl *pinctrl;
 
 	BUILD_BUG_ON(ATMEL_SERIAL_RINGSIZE & (ATMEL_SERIAL_RINGSIZE - 1));
···
 	if (ret)
 		goto err;
 
-	pinctrl = devm_pinctrl_get_select_default(&pdev->dev);
-	if (IS_ERR(pinctrl)) {
-		ret = PTR_ERR(pinctrl);
-		goto err;
-	}
-
-	if (!atmel_use_dma_rx(&port->uart)) {
+	if (!atmel_use_pdc_rx(&port->uart)) {
 		ret = -ENOMEM;
 		data = kmalloc(sizeof(struct atmel_uart_char)
 				* ATMEL_SERIAL_RINGSIZE, GFP_KERNEL);
···
 		UART_PUT_CR(&port->uart, ATMEL_US_RTSEN);
 	}
 
+	/*
+	 * Get port name of usart or uart
+	 */
+	ret = atmel_get_ip_name(&port->uart);
+	if (ret < 0)
+		goto err_add_port;
+
 	return 0;
 
 err_add_port:
···
 	int ret = 0;
 
 	device_init_wakeup(&pdev->dev, 0);
-	platform_set_drvdata(pdev, NULL);
 
 	ret = uart_remove_one_port(&atmel_uart, port);
 
+2 -1
drivers/tty/serial/bcm63xx_uart.c
···
 
 	} while (--max_count);
 
+	spin_unlock(&port->lock);
 	tty_flip_buffer_push(tty_port);
+	spin_lock(&port->lock);
 }
 
 /*
···
 
 	port = platform_get_drvdata(pdev);
 	uart_remove_one_port(&bcm_uart_driver, port);
-	platform_set_drvdata(pdev, NULL);
 	/* mark port as free */
 	ports[pdev->id].membase = 0;
 	return 0;
+8 -6
drivers/tty/serial/bfin_sport_uart.c
···
 	if (!uart_handle_sysrq_char(&up->port, ch))
 		tty_insert_flip_char(port, ch, TTY_NORMAL);
 	}
-	/* XXX this won't deadlock with lowlat? */
-	tty_flip_buffer_push(port);
 
 	spin_unlock(&up->port.lock);
+
+	/* XXX this won't deadlock with lowlat? */
+	tty_flip_buffer_push(port);
 
 	return IRQ_HANDLED;
 }
···
 	}
 
 	ret = peripheral_request_list(
-		(unsigned short *)pdev->dev.platform_data, DRV_NAME);
+		(unsigned short *)dev_get_platdata(&pdev->dev),
+		DRV_NAME);
 	if (ret) {
 		dev_err(&pdev->dev,
 			"Fail to request SPORT peripherals\n");
···
 	iounmap(sport->port.membase);
 out_error_free_peripherals:
 	peripheral_free_list(
-		(unsigned short *)pdev->dev.platform_data);
+		(unsigned short *)dev_get_platdata(&pdev->dev));
 out_error_free_mem:
 	kfree(sport);
 	bfin_sport_uart_ports[pdev->id] = NULL;
···
 	uart_remove_one_port(&sport_uart_reg, &sport->port);
 	iounmap(sport->port.membase);
 	peripheral_free_list(
-		(unsigned short *)pdev->dev.platform_data);
+		(unsigned short *)dev_get_platdata(&pdev->dev));
 	kfree(sport);
 	bfin_sport_uart_ports[pdev->id] = NULL;
 }
···
 };
 
 #ifdef CONFIG_SERIAL_BFIN_SPORT_CONSOLE
-static __initdata struct early_platform_driver early_sport_uart_driver = {
+static struct early_platform_driver early_sport_uart_driver __initdata = {
 	.class_str = CLASS_BFIN_SPORT_CONSOLE,
 	.pdrv = &sport_uart_driver,
 	.requested_id = EARLY_PLATFORM_ID_UNSET,
+9 -12
drivers/tty/serial/bfin_uart.c
···
 # undef CONFIG_EARLY_PRINTK
 #endif
 
-#ifdef CONFIG_SERIAL_BFIN_MODULE
-# undef CONFIG_EARLY_PRINTK
-#endif
-
 /* UART name and device definitions */
 #define BFIN_SERIAL_DEV_NAME "ttyBF"
 #define BFIN_SERIAL_MAJOR 204
···
  * don't let the common infrastructure play with things. (see calls to setup
  * & earlysetup in ./kernel/printk.c:register_console()
  */
-static struct __initdata console bfin_early_serial_console = {
+static struct console bfin_early_serial_console __initdata = {
 	.name = "early_BFuart",
 	.write = bfin_earlyprintk_console_write,
 	.device = uart_console_device,
···
 	 */
 #endif
 	ret = peripheral_request_list(
-		(unsigned short *)pdev->dev.platform_data, DRIVER_NAME);
+		(unsigned short *)dev_get_platdata(&pdev->dev),
+		DRIVER_NAME);
 	if (ret) {
 		dev_err(&pdev->dev,
 			"fail to request bfin serial peripherals\n");
···
 	iounmap(uart->port.membase);
 out_error_free_peripherals:
 	peripheral_free_list(
-		(unsigned short *)pdev->dev.platform_data);
+		(unsigned short *)dev_get_platdata(&pdev->dev));
 out_error_free_mem:
 	kfree(uart);
 	bfin_serial_ports[pdev->id] = NULL;
···
 	uart_remove_one_port(&bfin_serial_reg, &uart->port);
 	iounmap(uart->port.membase);
 	peripheral_free_list(
-		(unsigned short *)pdev->dev.platform_data);
+		(unsigned short *)dev_get_platdata(&pdev->dev));
 	kfree(uart);
 	bfin_serial_ports[pdev->id] = NULL;
 }
···
 };
 
 #if defined(CONFIG_SERIAL_BFIN_CONSOLE)
-static __initdata struct early_platform_driver early_bfin_serial_driver = {
+static struct early_platform_driver early_bfin_serial_driver __initdata = {
 	.class_str = CLASS_BFIN_CONSOLE,
 	.pdrv = &bfin_serial_driver,
 	.requested_id = EARLY_PLATFORM_ID_UNSET,
···
 	}
 
 	ret = peripheral_request_list(
-		(unsigned short *)pdev->dev.platform_data, DRIVER_NAME);
+		(unsigned short *)dev_get_platdata(&pdev->dev), DRIVER_NAME);
 	if (ret) {
 		dev_err(&pdev->dev,
 			"fail to request bfin serial peripherals\n");
···
 
 out_error_free_peripherals:
 	peripheral_free_list(
-		(unsigned short *)pdev->dev.platform_data);
+		(unsigned short *)dev_get_platdata(&pdev->dev));
 
 	return ret;
 }
···
 	},
 };
 
-static __initdata struct early_platform_driver early_bfin_earlyprintk_driver = {
+static struct early_platform_driver early_bfin_earlyprintk_driver __initdata = {
 	.class_str = CLASS_BFIN_EARLYPRINTK,
 	.pdrv = &bfin_earlyprintk_driver,
 	.requested_id = EARLY_PLATFORM_ID_UNSET,
+2 -9
drivers/tty/serial/clps711x.c
···
 	s->uart_clk = devm_clk_get(&pdev->dev, "uart");
 	if (IS_ERR(s->uart_clk)) {
 		dev_err(&pdev->dev, "Can't get UART clocks\n");
-		ret = PTR_ERR(s->uart_clk);
-		goto err_out;
+		return PTR_ERR(s->uart_clk);
 	}
 
 	s->uart.owner = THIS_MODULE;
···
 	if (ret) {
 		dev_err(&pdev->dev, "Registering UART driver failed\n");
 		devm_clk_put(&pdev->dev, s->uart_clk);
-		goto err_out;
+		return ret;
 	}
 
 	for (i = 0; i < UART_CLPS711X_NR; i++) {
···
 	}
 
 	return 0;
-
-err_out:
-	platform_set_drvdata(pdev, NULL);
-
-	return ret;
 }
 
 static int uart_clps711x_remove(struct platform_device *pdev)
···
 
 	devm_clk_put(&pdev->dev, s->uart_clk);
 	uart_unregister_driver(&s->uart);
-	platform_set_drvdata(pdev, NULL);
 
 	return 0;
 }
+26 -2
drivers/tty/serial/cpm_uart/cpm_uart_core.c
···
 		goto out_pram;
 	}
 
-	for (i = 0; i < NUM_GPIOS; i++)
-		pinfo->gpios[i] = of_get_gpio(np, i);
+	for (i = 0; i < NUM_GPIOS; i++) {
+		int gpio;
+
+		pinfo->gpios[i] = -1;
+
+		gpio = of_get_gpio(np, i);
+
+		if (gpio_is_valid(gpio)) {
+			ret = gpio_request(gpio, "cpm_uart");
+			if (ret) {
+				pr_err("can't request gpio #%d: %d\n", i, ret);
+				continue;
+			}
+			if (i == GPIO_RTS || i == GPIO_DTR)
+				ret = gpio_direction_output(gpio, 0);
+			else
+				ret = gpio_direction_input(gpio);
+			if (ret) {
+				pr_err("can't set direction for gpio #%d: %d\n",
+					i, ret);
+				gpio_free(gpio);
+				continue;
+			}
+			pinfo->gpios[i] = gpio;
+		}
+	}
 
 #ifdef CONFIG_PPC_EARLY_DEBUG_CPM
 	udbg_putc = NULL;
+16 -13
drivers/tty/serial/efm32-uart.c
···
 		handled = IRQ_HANDLED;
 	}
 
-	tty_flip_buffer_push(tport);
-
 	spin_unlock(&port->lock);
+
+	tty_flip_buffer_push(tport);
 
 	return handled;
 }
···
 {
 	struct efm32_uart_port *efm_port;
 	struct resource *res;
+	unsigned int line;
 	int ret;
 
 	efm_port = kzalloc(sizeof(*efm_port), GFP_KERNEL);
···
 
 		if (pdata)
 			efm_port->pdata = *pdata;
-	}
+	} else if (ret < 0)
+		goto err_probe_dt;
 
-	if (efm_port->port.line >= 0 &&
-			efm_port->port.line < ARRAY_SIZE(efm32_uart_ports))
-		efm32_uart_ports[efm_port->port.line] = efm_port;
+	line = efm_port->port.line;
+
+	if (line >= 0 && line < ARRAY_SIZE(efm32_uart_ports))
+		efm32_uart_ports[line] = efm_port;
 
 	ret = uart_add_one_port(&efm32_uart_reg, &efm_port->port);
 	if (ret) {
 		dev_dbg(&pdev->dev, "failed to add port: %d\n", ret);
 
-		if (pdev->id >= 0 && pdev->id < ARRAY_SIZE(efm32_uart_ports))
-			efm32_uart_ports[pdev->id] = NULL;
+		if (line >= 0 && line < ARRAY_SIZE(efm32_uart_ports))
+			efm32_uart_ports[line] = NULL;
+err_probe_dt:
 err_get_rxirq:
 err_too_small:
 err_get_base:
···
 static int efm32_uart_remove(struct platform_device *pdev)
 {
 	struct efm32_uart_port *efm_port = platform_get_drvdata(pdev);
-
-	platform_set_drvdata(pdev, NULL);
+	unsigned int line = efm_port->port.line;
 
 	uart_remove_one_port(&efm32_uart_reg, &efm_port->port);
 
-	if (pdev->id >= 0 && pdev->id < ARRAY_SIZE(efm32_uart_ports))
-		efm32_uart_ports[pdev->id] = NULL;
+	if (line >= 0 && line < ARRAY_SIZE(efm32_uart_ports))
+		efm32_uart_ports[line] = NULL;
 
 	kfree(efm_port);
 
 	return 0;
 }
 
-static struct of_device_id efm32_uart_dt_ids[] = {
+static const struct of_device_id efm32_uart_dt_ids[] = {
 	{
 		.compatible = "efm32,uart",
 	}, {
+6 -1
drivers/tty/serial/fsl_lpuart.c
···
 static void lpuart_setup_watermark(struct lpuart_port *sport)
 {
 	unsigned char val, cr2;
+	unsigned char cr2_saved;
 
 	cr2 = readb(sport->port.membase + UARTCR2);
+	cr2_saved = cr2;
 	cr2 &= ~(UARTCR2_TIE | UARTCR2_TCIE | UARTCR2_TE |
 			UARTCR2_RIE | UARTCR2_RE);
 	writeb(cr2, sport->port.membase + UARTCR2);
···
 
 	writeb(2, sport->port.membase + UARTTWFIFO);
 	writeb(1, sport->port.membase + UARTRWFIFO);
+
+	/* Restore cr2 */
+	writeb(cr2_saved, sport->port.membase + UARTCR2);
 }
 
 static int lpuart_startup(struct uart_port *port)
···
 	if (ret)
 		uart_unregister_driver(&lpuart_reg);
 
-	return 0;
+	return ret;
 }
 
 static void __exit lpuart_serial_exit(void)
+56 -47
drivers/tty/serial/icom.c
···
 	{}
 };
 
-struct lookup_proc_table start_proc[4] = {
+static struct lookup_proc_table start_proc[4] = {
 	{NULL, ICOM_CONTROL_START_A},
 	{NULL, ICOM_CONTROL_START_B},
 	{NULL, ICOM_CONTROL_START_C},
···
 };
 
 
-struct lookup_proc_table stop_proc[4] = {
+static struct lookup_proc_table stop_proc[4] = {
 	{NULL, ICOM_CONTROL_STOP_A},
 	{NULL, ICOM_CONTROL_STOP_B},
 	{NULL, ICOM_CONTROL_STOP_C},
 	{NULL, ICOM_CONTROL_STOP_D}
 };
 
-struct lookup_int_table int_mask_tbl[4] = {
+static struct lookup_int_table int_mask_tbl[4] = {
 	{NULL, ICOM_INT_MASK_PRC_A},
 	{NULL, ICOM_INT_MASK_PRC_B},
 	{NULL, ICOM_INT_MASK_PRC_C},
···
 	spin_lock_irqsave(&icom_lock, flags);
 
 	port = icom_port->port;
+	if (port >= ARRAY_SIZE(stop_proc)) {
+		dev_err(&icom_port->adapter->pci_dev->dev,
+			"Invalid port assignment\n");
+		goto unlock;
+	}
+
 	if (port == 0 || port == 1)
 		stop_proc[port].global_control_reg = &icom_port->global_reg->control;
 	else
 		stop_proc[port].global_control_reg = &icom_port->global_reg->control_2;
 
+	temp = readl(stop_proc[port].global_control_reg);
+	temp = (temp & ~start_proc[port].processor_id) | stop_proc[port].processor_id;
+	writel(temp, stop_proc[port].global_control_reg);
 
-	if (port < 4) {
-		temp = readl(stop_proc[port].global_control_reg);
-		temp =
-			(temp & ~start_proc[port].processor_id) | stop_proc[port].processor_id;
-		writel(temp, stop_proc[port].global_control_reg);
+	/* write flush */
+	readl(stop_proc[port].global_control_reg);
 
-		/* write flush */
-		readl(stop_proc[port].global_control_reg);
-	} else {
-		dev_err(&icom_port->adapter->pci_dev->dev,
-			"Invalid port assignment\n");
-	}
-
+unlock:
 	spin_unlock_irqrestore(&icom_lock, flags);
 }
 
···
 	spin_lock_irqsave(&icom_lock, flags);
 
 	port = icom_port->port;
+	if (port >= ARRAY_SIZE(start_proc)) {
+		dev_err(&icom_port->adapter->pci_dev->dev,
+			"Invalid port assignment\n");
+		goto unlock;
+	}
+
 	if (port == 0 || port == 1)
 		start_proc[port].global_control_reg = &icom_port->global_reg->control;
 	else
 		start_proc[port].global_control_reg = &icom_port->global_reg->control_2;
-	if (port < 4) {
-		temp = readl(start_proc[port].global_control_reg);
-		temp =
-			(temp & ~stop_proc[port].processor_id) | start_proc[port].processor_id;
-		writel(temp, start_proc[port].global_control_reg);
 
-		/* write flush */
-		readl(start_proc[port].global_control_reg);
-	} else {
-		dev_err(&icom_port->adapter->pci_dev->dev,
-			"Invalid port assignment\n");
-	}
+	temp = readl(start_proc[port].global_control_reg);
+	temp = (temp & ~stop_proc[port].processor_id) | start_proc[port].processor_id;
+	writel(temp, start_proc[port].global_control_reg);
 
+	/* write flush */
+	readl(start_proc[port].global_control_reg);
+
+unlock:
 	spin_unlock_irqrestore(&icom_lock, flags);
 }
 
···
 	 */
 	spin_lock_irqsave(&icom_lock, flags);
 	port = icom_port->port;
+	if (port >= ARRAY_SIZE(int_mask_tbl)) {
+		dev_err(&icom_port->adapter->pci_dev->dev,
+			"Invalid port assignment\n");
+		goto unlock;
+	}
+
 	if (port == 0 || port == 1)
 		int_mask_tbl[port].global_int_mask = &icom_port->global_reg->int_mask;
 	else
···
 		writew(0x00FF, icom_port->int_reg);
 	else
 		writew(0x3F00, icom_port->int_reg);
-	if (port < 4) {
-		temp = readl(int_mask_tbl[port].global_int_mask);
-		writel(temp & ~int_mask_tbl[port].processor_id, int_mask_tbl[port].global_int_mask);
 
-		/* write flush */
-		readl(int_mask_tbl[port].global_int_mask);
-	} else {
-		dev_err(&icom_port->adapter->pci_dev->dev,
-			"Invalid port assignment\n");
-	}
+	temp = readl(int_mask_tbl[port].global_int_mask);
+	writel(temp & ~int_mask_tbl[port].processor_id, int_mask_tbl[port].global_int_mask);
 
+	/* write flush */
+	readl(int_mask_tbl[port].global_int_mask);
+
+unlock:
 	spin_unlock_irqrestore(&icom_lock, flags);
 	return 0;
 }
···
 	 * disable all interrupts
 	 */
 	port = icom_port->port;
+	if (port >= ARRAY_SIZE(int_mask_tbl)) {
+		dev_err(&icom_port->adapter->pci_dev->dev,
+			"Invalid port assignment\n");
+		goto unlock;
+	}
 	if (port == 0 || port == 1)
 		int_mask_tbl[port].global_int_mask = &icom_port->global_reg->int_mask;
 	else
 		int_mask_tbl[port].global_int_mask = &icom_port->global_reg->int_mask_2;
 
-	if (port < 4) {
-		temp = readl(int_mask_tbl[port].global_int_mask);
-		writel(temp | int_mask_tbl[port].processor_id, int_mask_tbl[port].global_int_mask);
+	temp = readl(int_mask_tbl[port].global_int_mask);
+	writel(temp | int_mask_tbl[port].processor_id, int_mask_tbl[port].global_int_mask);
 
-		/* write flush */
-		readl(int_mask_tbl[port].global_int_mask);
-	} else {
-		dev_err(&icom_port->adapter->pci_dev->dev,
-			"Invalid port assignment\n");
-	}
+	/* write flush */
+	readl(int_mask_tbl[port].global_int_mask);
+
+unlock:
 	spin_unlock_irqrestore(&icom_lock, flags);
 
 	/*
···
 		status = cpu_to_le16(icom_port->statStg->rcv[rcv_buff].flags);
 	}
 	icom_port->next_rcv = rcv_buff;
+
+	spin_unlock(&icom_port->uart_port.lock);
 	tty_flip_buffer_push(port);
+	spin_lock(&icom_port->uart_port.lock);
 }
 
 static void process_interrupt(u16 port_int_reg,
···
 
 	/* stop receiver */
 	cmdReg =
readb(&ICOM_PORT->dram->CmdReg); 1100 - writeb(cmdReg & (unsigned char) ~CMD_RCV_ENABLE, 1101 - &ICOM_PORT->dram->CmdReg); 1090 + writeb(cmdReg & ~CMD_RCV_ENABLE, &ICOM_PORT->dram->CmdReg); 1102 1091 1103 1092 shutdown(ICOM_PORT); 1104 1093 ··· 1576 1567 icom_port->uart_port.type = PORT_ICOM; 1577 1568 icom_port->uart_port.iotype = UPIO_MEM; 1578 1569 icom_port->uart_port.membase = 1579 - (char *) icom_adapter->base_addr_pci; 1570 + (unsigned char __iomem *)icom_adapter->base_addr_pci; 1580 1571 icom_port->uart_port.fifosize = 16; 1581 1572 icom_port->uart_port.ops = &icom_ops; 1582 1573 icom_port->uart_port.line =
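The icom.c hunks above hoist the range check ahead of the table indexing, so `stop_proc[]`, `start_proc[]` and `int_mask_tbl[]` are never dereferenced with an out-of-range port. A minimal user-space sketch of that pattern (the table contents and the error return value here are invented for illustration):

```c
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical processor-id table, standing in for stop_proc[] et al. */
static const unsigned int processor_id[4] = { 0x1, 0x2, 0x4, 0x8 };

/* Validate the index once, up front, instead of indexing first and
 * only then testing "port < 4" as the pre-patch code did. */
static int lookup_processor_id(size_t port, unsigned int *out)
{
	if (port >= ARRAY_SIZE(processor_id))
		return -1;	/* invalid port assignment */
	*out = processor_id[port];
	return 0;
}
```

Using `ARRAY_SIZE()` instead of a bare `4` also keeps the check correct if the table ever changes size.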
+1 -1
drivers/tty/serial/ifx6x60.c
··· 1008 1008 return -ENODEV; 1009 1009 } 1010 1010 1011 - pl_data = (struct ifx_modem_platform_data *)spi->dev.platform_data; 1011 + pl_data = (struct ifx_modem_platform_data *)dev_get_platdata(&spi->dev); 1012 1012 if (!pl_data) { 1013 1013 dev_err(&spi->dev, "missing platform data!"); 1014 1014 return -ENODEV;
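Several patches in this pull (ifx6x60.c above, imx.c and max3100.c below) convert open-coded `dev->platform_data` casts to the `dev_get_platdata()` accessor. A rough model of the helper with a toy `struct device`, just to show what the conversion buys: one central accessor instead of scattered direct field reads.

```c
/* Toy stand-in for the kernel's struct device. */
struct device {
	void *platform_data;
};

/* Matches the shape of the kernel helper: callers stop touching the
 * field directly, so its representation can change in one place. */
static inline void *dev_get_platdata(const struct device *dev)
{
	return dev->platform_data;
}
```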
+471 -49
drivers/tty/serial/imx.c
··· 47 47 #include <linux/slab.h> 48 48 #include <linux/of.h> 49 49 #include <linux/of_device.h> 50 - #include <linux/pinctrl/consumer.h> 51 50 #include <linux/io.h> 51 + #include <linux/dma-mapping.h> 52 52 53 53 #include <asm/irq.h> 54 54 #include <linux/platform_data/serial-imx.h> 55 + #include <linux/platform_data/dma-imx.h> 55 56 56 57 /* Register definitions */ 57 58 #define URXD0 0x0 /* Receiver Register */ ··· 84 83 #define UCR1_ADBR (1<<14) /* Auto detect baud rate */ 85 84 #define UCR1_TRDYEN (1<<13) /* Transmitter ready interrupt enable */ 86 85 #define UCR1_IDEN (1<<12) /* Idle condition interrupt */ 86 + #define UCR1_ICD_REG(x) (((x) & 3) << 10) /* idle condition detect */ 87 87 #define UCR1_RRDYEN (1<<9) /* Recv ready interrupt enable */ 88 88 #define UCR1_RDMAEN (1<<8) /* Recv ready DMA enable */ 89 89 #define UCR1_IREN (1<<7) /* Infrared interface enable */ ··· 93 91 #define UCR1_SNDBRK (1<<4) /* Send break */ 94 92 #define UCR1_TDMAEN (1<<3) /* Transmitter ready DMA enable */ 95 93 #define IMX1_UCR1_UARTCLKEN (1<<2) /* UART clock enabled, i.mx1 only */ 94 + #define UCR1_ATDMAEN (1<<2) /* Aging DMA Timer Enable */ 96 95 #define UCR1_DOZE (1<<1) /* Doze */ 97 96 #define UCR1_UARTEN (1<<0) /* UART enabled */ 98 97 #define UCR2_ESCI (1<<15) /* Escape seq interrupt enable */ ··· 129 126 #define UCR4_ENIRI (1<<8) /* Serial infrared interrupt enable */ 130 127 #define UCR4_WKEN (1<<7) /* Wake interrupt enable */ 131 128 #define UCR4_REF16 (1<<6) /* Ref freq 16 MHz */ 129 + #define UCR4_IDDMAEN (1<<6) /* DMA IDLE Condition Detected */ 132 130 #define UCR4_IRSC (1<<5) /* IR special case */ 133 131 #define UCR4_TCEN (1<<3) /* Transmit complete interrupt enable */ 134 132 #define UCR4_BKEN (1<<2) /* Break condition interrupt enable */ ··· 191 187 enum imx_uart_type { 192 188 IMX1_UART, 193 189 IMX21_UART, 190 + IMX6Q_UART, 194 191 }; 195 192 196 193 /* device type dependent stuff */ ··· 214 209 struct clk *clk_ipg; 215 210 struct clk *clk_per; 216 211 const 
struct imx_uart_data *devdata; 212 + 213 + /* DMA fields */ 214 + unsigned int dma_is_inited:1; 215 + unsigned int dma_is_enabled:1; 216 + unsigned int dma_is_rxing:1; 217 + unsigned int dma_is_txing:1; 218 + struct dma_chan *dma_chan_rx, *dma_chan_tx; 219 + struct scatterlist rx_sgl, tx_sgl[2]; 220 + void *rx_buf; 221 + unsigned int rx_bytes, tx_bytes; 222 + struct work_struct tsk_dma_rx, tsk_dma_tx; 223 + unsigned int dma_tx_nents; 224 + wait_queue_head_t dma_wait; 217 225 }; 218 226 219 227 struct imx_port_ucrs { ··· 250 232 .uts_reg = IMX21_UTS, 251 233 .devtype = IMX21_UART, 252 234 }, 235 + [IMX6Q_UART] = { 236 + .uts_reg = IMX21_UTS, 237 + .devtype = IMX6Q_UART, 238 + }, 253 239 }; 254 240 255 241 static struct platform_device_id imx_uart_devtype[] = { ··· 264 242 .name = "imx21-uart", 265 243 .driver_data = (kernel_ulong_t) &imx_uart_devdata[IMX21_UART], 266 244 }, { 245 + .name = "imx6q-uart", 246 + .driver_data = (kernel_ulong_t) &imx_uart_devdata[IMX6Q_UART], 247 + }, { 267 248 /* sentinel */ 268 249 } 269 250 }; 270 251 MODULE_DEVICE_TABLE(platform, imx_uart_devtype); 271 252 272 253 static struct of_device_id imx_uart_dt_ids[] = { 254 + { .compatible = "fsl,imx6q-uart", .data = &imx_uart_devdata[IMX6Q_UART], }, 273 255 { .compatible = "fsl,imx1-uart", .data = &imx_uart_devdata[IMX1_UART], }, 274 256 { .compatible = "fsl,imx21-uart", .data = &imx_uart_devdata[IMX21_UART], }, 275 257 { /* sentinel */ } ··· 295 269 return sport->devdata->devtype == IMX21_UART; 296 270 } 297 271 272 + static inline int is_imx6q_uart(struct imx_port *sport) 273 + { 274 + return sport->devdata->devtype == IMX6Q_UART; 275 + } 298 276 /* 299 277 * Save and restore functions for UCR1, UCR2 and UCR3 registers 300 278 */ ··· 417 387 return; 418 388 } 419 389 390 + /* 391 + * We may be running on SMP, so if the DMA TX thread is active on 392 + * another cpu, let it finish first.
393 + */ 394 + if (sport->dma_is_enabled && sport->dma_is_txing) 395 + return; 396 + 420 397 temp = readl(sport->port.membase + UCR1); 421 398 writel(temp & ~UCR1_TXMPTYEN, sport->port.membase + UCR1); 422 399 } ··· 435 398 { 436 399 struct imx_port *sport = (struct imx_port *)port; 437 400 unsigned long temp; 401 + 402 + /* 403 + * We may be running on SMP, so if the DMA RX thread is active on 404 + * another cpu, let it finish first. 405 + */ 406 + if (sport->dma_is_enabled && sport->dma_is_rxing) 407 + return; 438 408 439 409 temp = readl(sport->port.membase + UCR2); 440 410 writel(temp & ~UCR2_RXEN, sport->port.membase + UCR2); ··· 478 434 imx_stop_tx(&sport->port); 479 435 } 480 436 437 + static void dma_tx_callback(void *data) 438 + { 439 + struct imx_port *sport = data; 440 + struct scatterlist *sgl = &sport->tx_sgl[0]; 441 + struct circ_buf *xmit = &sport->port.state->xmit; 442 + unsigned long flags; 443 + 444 + dma_unmap_sg(sport->port.dev, sgl, sport->dma_tx_nents, DMA_TO_DEVICE); 445 + 446 + sport->dma_is_txing = 0; 447 + 448 + /* update the stat */ 449 + spin_lock_irqsave(&sport->port.lock, flags); 450 + xmit->tail = (xmit->tail + sport->tx_bytes) & (UART_XMIT_SIZE - 1); 451 + sport->port.icount.tx += sport->tx_bytes; 452 + spin_unlock_irqrestore(&sport->port.lock, flags); 453 + 454 + dev_dbg(sport->port.dev, "we finish the TX DMA.\n"); 455 + 456 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 457 + uart_write_wakeup(&sport->port); 458 + 459 + if (waitqueue_active(&sport->dma_wait)) { 460 + wake_up(&sport->dma_wait); 461 + dev_dbg(sport->port.dev, "exit in %s.\n", __func__); 462 + return; 463 + } 464 + 465 + schedule_work(&sport->tsk_dma_tx); 466 + } 467 + 468 + static void dma_tx_work(struct work_struct *w) 469 + { 470 + struct imx_port *sport = container_of(w, struct imx_port, tsk_dma_tx); 471 + struct circ_buf *xmit = &sport->port.state->xmit; 472 + struct scatterlist *sgl = sport->tx_sgl; 473 + struct dma_async_tx_descriptor
*desc; 474 + struct dma_chan *chan = sport->dma_chan_tx; 475 + struct device *dev = sport->port.dev; 476 + enum dma_status status; 477 + unsigned long flags; 478 + int ret; 479 + 480 + status = chan->device->device_tx_status(chan, (dma_cookie_t)0, NULL); 481 + if (DMA_IN_PROGRESS == status) 482 + return; 483 + 484 + spin_lock_irqsave(&sport->port.lock, flags); 485 + sport->tx_bytes = uart_circ_chars_pending(xmit); 486 + if (sport->tx_bytes == 0) { 487 + spin_unlock_irqrestore(&sport->port.lock, flags); 488 + return; 489 + } 490 + 491 + if (xmit->tail > xmit->head) { 492 + sport->dma_tx_nents = 2; 493 + sg_init_table(sgl, 2); 494 + sg_set_buf(sgl, xmit->buf + xmit->tail, 495 + UART_XMIT_SIZE - xmit->tail); 496 + sg_set_buf(sgl + 1, xmit->buf, xmit->head); 497 + } else { 498 + sport->dma_tx_nents = 1; 499 + sg_init_one(sgl, xmit->buf + xmit->tail, sport->tx_bytes); 500 + } 501 + spin_unlock_irqrestore(&sport->port.lock, flags); 502 + 503 + ret = dma_map_sg(dev, sgl, sport->dma_tx_nents, DMA_TO_DEVICE); 504 + if (ret == 0) { 505 + dev_err(dev, "DMA mapping error for TX.\n"); 506 + return; 507 + } 508 + desc = dmaengine_prep_slave_sg(chan, sgl, sport->dma_tx_nents, 509 + DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT); 510 + if (!desc) { 511 + dev_err(dev, "We cannot prepare for the TX slave dma!\n"); 512 + return; 513 + } 514 + desc->callback = dma_tx_callback; 515 + desc->callback_param = sport; 516 + 517 + dev_dbg(dev, "TX: prepare to send %lu bytes by DMA.\n", 518 + uart_circ_chars_pending(xmit)); 519 + /* fire it */ 520 + sport->dma_is_txing = 1; 521 + dmaengine_submit(desc); 522 + dma_async_issue_pending(chan); 523 + return; 524 + } 525 + 481 526 /* 482 527 * interrupts disabled on entry 483 528 */ ··· 593 460 temp |= UCR4_OREN; 594 461 writel(temp, sport->port.membase + UCR4); 595 462 596 - temp = readl(sport->port.membase + UCR1); 597 - writel(temp | UCR1_TXMPTYEN, sport->port.membase + UCR1); 463 + if (!sport->dma_is_enabled) { 464 + temp = readl(sport->port.membase + 
UCR1); 465 + writel(temp | UCR1_TXMPTYEN, sport->port.membase + UCR1); 466 + } 598 467 599 468 if (USE_IRDA(sport)) { 600 469 temp = readl(sport->port.membase + UCR1); ··· 606 471 temp = readl(sport->port.membase + UCR4); 607 472 temp |= UCR4_TCEN; 608 473 writel(temp, sport->port.membase + UCR4); 474 + } 475 + 476 + if (sport->dma_is_enabled) { 477 + /* 478 + * We may be in interrupt context, so schedule a work_struct to 479 + * do the real job. 480 + */ 481 + schedule_work(&sport->tsk_dma_tx); 482 + return; 609 483 } 610 484 611 485 if (readl(sport->port.membase + uts_reg(sport)) & UTS_TXEMPTY) ··· 732 588 return IRQ_HANDLED; 733 589 } 734 590 591 + /* 592 + * If the RXFIFO holds some data, start a DMA operation 593 + * to receive it. 594 + */ 595 + static void imx_dma_rxint(struct imx_port *sport) 596 + { 597 + unsigned long temp; 598 + 599 + temp = readl(sport->port.membase + USR2); 600 + if ((temp & USR2_RDR) && !sport->dma_is_rxing) { 601 + sport->dma_is_rxing = 1; 602 + 603 + /* disable the `Receiver Ready Interrupt` */ 604 + temp = readl(sport->port.membase + UCR1); 605 + temp &= ~(UCR1_RRDYEN); 606 + writel(temp, sport->port.membase + UCR1); 607 + 608 + /* tell the DMA to receive the data. 
*/ 609 + schedule_work(&sport->tsk_dma_rx); 610 + } 611 + } 612 + 735 613 static irqreturn_t imx_int(int irq, void *dev_id) 736 614 { 737 615 struct imx_port *sport = dev_id; ··· 762 596 763 597 sts = readl(sport->port.membase + USR1); 764 598 765 - if (sts & USR1_RRDY) 766 - imx_rxint(irq, dev_id); 599 + if (sts & USR1_RRDY) { 600 + if (sport->dma_is_enabled) 601 + imx_dma_rxint(sport); 602 + else 603 + imx_rxint(irq, dev_id); 604 + } 767 605 768 606 if (sts & USR1_TRDY && 769 607 readl(sport->port.membase + UCR1) & UCR1_TXMPTYEN) ··· 824 654 temp = readl(sport->port.membase + UCR2) & ~UCR2_CTS; 825 655 826 656 if (mctrl & TIOCM_RTS) 827 - temp |= UCR2_CTS; 657 + if (!sport->dma_is_enabled) 658 + temp |= UCR2_CTS; 828 659 829 660 writel(temp, sport->port.membase + UCR2); 830 661 } ··· 864 693 return 0; 865 694 } 866 695 696 + #define RX_BUF_SIZE (PAGE_SIZE) 697 + static int start_rx_dma(struct imx_port *sport); 698 + static void dma_rx_work(struct work_struct *w) 699 + { 700 + struct imx_port *sport = container_of(w, struct imx_port, tsk_dma_rx); 701 + struct tty_port *port = &sport->port.state->port; 702 + 703 + if (sport->rx_bytes) { 704 + tty_insert_flip_string(port, sport->rx_buf, sport->rx_bytes); 705 + tty_flip_buffer_push(port); 706 + sport->rx_bytes = 0; 707 + } 708 + 709 + if (sport->dma_is_rxing) 710 + start_rx_dma(sport); 711 + } 712 + 713 + static void imx_rx_dma_done(struct imx_port *sport) 714 + { 715 + unsigned long temp; 716 + 717 + /* Enable this interrupt when the RXFIFO is empty. */ 718 + temp = readl(sport->port.membase + UCR1); 719 + temp |= UCR1_RRDYEN; 720 + writel(temp, sport->port.membase + UCR1); 721 + 722 + sport->dma_is_rxing = 0; 723 + 724 + /* Is the shutdown waiting for us? */ 725 + if (waitqueue_active(&sport->dma_wait)) 726 + wake_up(&sport->dma_wait); 727 + } 728 + 729 + /* 730 + * There are three kinds of RX DMA interrupts(such as in the MX6Q): 731 + * [1] the RX DMA buffer is full. 
732 + * [2] the Aging timer expires (after an 8-byte-time wait) 733 + * [3] the Idle Condition Detect (enabled by UCR4_IDDMAEN). 734 + * 735 + * [2] triggers when a character has been sitting in the FIFO for a 736 + * while, whereas [3] fires after the RX line has been idle for 32 737 + * byte times with the RxFIFO empty. 738 + */ 739 + static void dma_rx_callback(void *data) 740 + { 741 + struct imx_port *sport = data; 742 + struct dma_chan *chan = sport->dma_chan_rx; 743 + struct scatterlist *sgl = &sport->rx_sgl; 744 + struct dma_tx_state state; 745 + enum dma_status status; 746 + unsigned int count; 747 + 748 + /* unmap it first */ 749 + dma_unmap_sg(sport->port.dev, sgl, 1, DMA_FROM_DEVICE); 750 + 751 + status = chan->device->device_tx_status(chan, (dma_cookie_t)0, &state); 752 + count = RX_BUF_SIZE - state.residue; 753 + dev_dbg(sport->port.dev, "We get %d bytes.\n", count); 754 + 755 + if (count) { 756 + sport->rx_bytes = count; 757 + schedule_work(&sport->tsk_dma_rx); 758 + } else 759 + imx_rx_dma_done(sport); 760 + } 761 + 762 + static int start_rx_dma(struct imx_port *sport) 763 + { 764 + struct scatterlist *sgl = &sport->rx_sgl; 765 + struct dma_chan *chan = sport->dma_chan_rx; 766 + struct device *dev = sport->port.dev; 767 + struct dma_async_tx_descriptor *desc; 768 + int ret; 769 + 770 + sg_init_one(sgl, sport->rx_buf, RX_BUF_SIZE); 771 + ret = dma_map_sg(dev, sgl, 1, DMA_FROM_DEVICE); 772 + if (ret == 0) { 773 + dev_err(dev, "DMA mapping error for RX.\n"); 774 + return -EINVAL; 775 + } 776 + desc = dmaengine_prep_slave_sg(chan, sgl, 1, DMA_DEV_TO_MEM, 777 + DMA_PREP_INTERRUPT); 778 + if (!desc) { 779 + dev_err(dev, "We cannot prepare for the RX slave dma!\n"); 780 + return -EINVAL; 781 + } 782 + desc->callback = dma_rx_callback; 783 + desc->callback_param = sport; 784 + 785 + dev_dbg(dev, "RX: prepare for the DMA.\n"); 786 + dmaengine_submit(desc); 787 + dma_async_issue_pending(chan); 788 + return 0; 789 + } 790 + 791 + static void imx_uart_dma_exit(struct 
imx_port *sport) 792 + { 793 + if (sport->dma_chan_rx) { 794 + dma_release_channel(sport->dma_chan_rx); 795 + sport->dma_chan_rx = NULL; 796 + 797 + kfree(sport->rx_buf); 798 + sport->rx_buf = NULL; 799 + } 800 + 801 + if (sport->dma_chan_tx) { 802 + dma_release_channel(sport->dma_chan_tx); 803 + sport->dma_chan_tx = NULL; 804 + } 805 + 806 + sport->dma_is_inited = 0; 807 + } 808 + 809 + static int imx_uart_dma_init(struct imx_port *sport) 810 + { 811 + struct dma_slave_config slave_config = {}; 812 + struct device *dev = sport->port.dev; 813 + int ret; 814 + 815 + /* Prepare for RX : */ 816 + sport->dma_chan_rx = dma_request_slave_channel(dev, "rx"); 817 + if (!sport->dma_chan_rx) { 818 + dev_dbg(dev, "cannot get the DMA channel.\n"); 819 + ret = -EINVAL; 820 + goto err; 821 + } 822 + 823 + slave_config.direction = DMA_DEV_TO_MEM; 824 + slave_config.src_addr = sport->port.mapbase + URXD0; 825 + slave_config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 826 + slave_config.src_maxburst = RXTL; 827 + ret = dmaengine_slave_config(sport->dma_chan_rx, &slave_config); 828 + if (ret) { 829 + dev_err(dev, "error in RX dma configuration.\n"); 830 + goto err; 831 + } 832 + 833 + sport->rx_buf = kzalloc(PAGE_SIZE, GFP_KERNEL); 834 + if (!sport->rx_buf) { 835 + dev_err(dev, "cannot alloc DMA buffer.\n"); 836 + ret = -ENOMEM; 837 + goto err; 838 + } 839 + sport->rx_bytes = 0; 840 + 841 + /* Prepare for TX : */ 842 + sport->dma_chan_tx = dma_request_slave_channel(dev, "tx"); 843 + if (!sport->dma_chan_tx) { 844 + dev_err(dev, "cannot get the TX DMA channel!\n"); 845 + ret = -EINVAL; 846 + goto err; 847 + } 848 + 849 + slave_config.direction = DMA_MEM_TO_DEV; 850 + slave_config.dst_addr = sport->port.mapbase + URTX0; 851 + slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 852 + slave_config.dst_maxburst = TXTL; 853 + ret = dmaengine_slave_config(sport->dma_chan_tx, &slave_config); 854 + if (ret) { 855 + dev_err(dev, "error in TX dma configuration."); 856 + goto err; 857 + } 
858 + 859 + sport->dma_is_inited = 1; 860 + 861 + return 0; 862 + err: 863 + imx_uart_dma_exit(sport); 864 + return ret; 865 + } 866 + 867 + static void imx_enable_dma(struct imx_port *sport) 868 + { 869 + unsigned long temp; 870 + struct tty_port *port = &sport->port.state->port; 871 + 872 + port->low_latency = 1; 873 + INIT_WORK(&sport->tsk_dma_tx, dma_tx_work); 874 + INIT_WORK(&sport->tsk_dma_rx, dma_rx_work); 875 + init_waitqueue_head(&sport->dma_wait); 876 + 877 + /* set UCR1 */ 878 + temp = readl(sport->port.membase + UCR1); 879 + temp |= UCR1_RDMAEN | UCR1_TDMAEN | UCR1_ATDMAEN | 880 + /* wait for 32 idle frames for IDDMA interrupt */ 881 + UCR1_ICD_REG(3); 882 + writel(temp, sport->port.membase + UCR1); 883 + 884 + /* set UCR4 */ 885 + temp = readl(sport->port.membase + UCR4); 886 + temp |= UCR4_IDDMAEN; 887 + writel(temp, sport->port.membase + UCR4); 888 + 889 + sport->dma_is_enabled = 1; 890 + } 891 + 892 + static void imx_disable_dma(struct imx_port *sport) 893 + { 894 + unsigned long temp; 895 + struct tty_port *port = &sport->port.state->port; 896 + 897 + /* clear UCR1 */ 898 + temp = readl(sport->port.membase + UCR1); 899 + temp &= ~(UCR1_RDMAEN | UCR1_TDMAEN | UCR1_ATDMAEN); 900 + writel(temp, sport->port.membase + UCR1); 901 + 902 + /* clear UCR2 */ 903 + temp = readl(sport->port.membase + UCR2); 904 + temp &= ~(UCR2_CTSC | UCR2_CTS); 905 + writel(temp, sport->port.membase + UCR2); 906 + 907 + /* clear UCR4 */ 908 + temp = readl(sport->port.membase + UCR4); 909 + temp &= ~UCR4_IDDMAEN; 910 + writel(temp, sport->port.membase + UCR4); 911 + 912 + sport->dma_is_enabled = 0; 913 + port->low_latency = 0; 914 + } 915 + 867 916 /* half the RX buffer size */ 868 917 #define CTSTL 16 869 918 ··· 1093 702 int retval; 1094 703 unsigned long flags, temp; 1095 704 1096 - if (!uart_console(port)) { 1097 - retval = clk_prepare_enable(sport->clk_per); 1098 - if (retval) 1099 - goto error_out1; 1100 - retval = clk_prepare_enable(sport->clk_ipg); 1101 - if (retval) { 
1102 - clk_disable_unprepare(sport->clk_per); 1103 - goto error_out1; 1104 - } 705 + retval = clk_prepare_enable(sport->clk_per); 706 + if (retval) 707 + goto error_out1; 708 + retval = clk_prepare_enable(sport->clk_ipg); 709 + if (retval) { 710 + clk_disable_unprepare(sport->clk_per); 711 + goto error_out1; 1105 712 } 1106 713 1107 714 imx_setup_ufcr(sport, 0); ··· 1192 803 } 1193 804 } 1194 805 1195 - if (is_imx21_uart(sport)) { 806 + if (!is_imx1_uart(sport)) { 1196 807 temp = readl(sport->port.membase + UCR3); 1197 808 temp |= IMX21_UCR3_RXDMUXSEL; 1198 809 writel(temp, sport->port.membase + UCR3); ··· 1222 833 1223 834 if (USE_IRDA(sport)) { 1224 835 struct imxuart_platform_data *pdata; 1225 - pdata = sport->port.dev->platform_data; 836 + pdata = dev_get_platdata(sport->port.dev); 1226 837 sport->irda_inv_rx = pdata->irda_inv_rx; 1227 838 sport->irda_inv_tx = pdata->irda_inv_tx; 1228 839 sport->trcv_delay = pdata->transceiver_delay; ··· 1248 859 unsigned long temp; 1249 860 unsigned long flags; 1250 861 862 + if (sport->dma_is_enabled) { 863 + /* We have to wait for the DMA to finish. 
*/ 864 + wait_event(sport->dma_wait, 865 + !sport->dma_is_rxing && !sport->dma_is_txing); 866 + imx_stop_rx(port); 867 + imx_disable_dma(sport); 868 + imx_uart_dma_exit(sport); 869 + } 870 + 1251 871 spin_lock_irqsave(&sport->port.lock, flags); 1252 872 temp = readl(sport->port.membase + UCR2); 1253 873 temp &= ~(UCR2_TXEN); ··· 1265 867 1266 868 if (USE_IRDA(sport)) { 1267 869 struct imxuart_platform_data *pdata; 1268 - pdata = sport->port.dev->platform_data; 870 + pdata = dev_get_platdata(sport->port.dev); 1269 871 if (pdata->irda_enable) 1270 872 pdata->irda_enable(0); 1271 873 } ··· 1299 901 writel(temp, sport->port.membase + UCR1); 1300 902 spin_unlock_irqrestore(&sport->port.lock, flags); 1301 903 1302 - if (!uart_console(&sport->port)) { 1303 - clk_disable_unprepare(sport->clk_per); 1304 - clk_disable_unprepare(sport->clk_ipg); 1305 - } 904 + clk_disable_unprepare(sport->clk_per); 905 + clk_disable_unprepare(sport->clk_ipg); 1306 906 } 1307 907 1308 908 static void ··· 1343 947 if (sport->have_rtscts) { 1344 948 ucr2 &= ~UCR2_IRTS; 1345 949 ucr2 |= UCR2_CTSC; 950 + 951 + /* Can we enable the DMA support? 
*/ 952 + if (is_imx6q_uart(sport) && !uart_console(port) 953 + && !sport->dma_is_inited) 954 + imx_uart_dma_init(sport); 1346 955 } else { 1347 956 termios->c_cflag &= ~CRTSCTS; 1348 957 } ··· 1421 1020 */ 1422 1021 div = 1; 1423 1022 } else { 1023 + /* custom-baudrate handling */ 1024 + div = sport->port.uartclk / (baud * 16); 1025 + if (baud == 38400 && quot != div) 1026 + baud = sport->port.uartclk / (quot * 16); 1027 + 1424 1028 div = sport->port.uartclk / (baud * 16); 1425 1029 if (div > 7) 1426 1030 div = 7; ··· 1454 1048 writel(num, sport->port.membase + UBIR); 1455 1049 writel(denom, sport->port.membase + UBMR); 1456 1050 1457 - if (is_imx21_uart(sport)) 1051 + if (!is_imx1_uart(sport)) 1458 1052 writel(sport->port.uartclk / div / 1000, 1459 1053 sport->port.membase + IMX21_ONEMS); 1460 1054 ··· 1466 1060 if (UART_ENABLE_MS(&sport->port, termios->c_cflag)) 1467 1061 imx_enable_ms(&sport->port); 1468 1062 1063 + if (sport->dma_is_inited && !sport->dma_is_enabled) 1064 + imx_enable_dma(sport); 1469 1065 spin_unlock_irqrestore(&sport->port.lock, flags); 1470 1066 } 1471 1067 ··· 1659 1251 unsigned int ucr1; 1660 1252 unsigned long flags = 0; 1661 1253 int locked = 1; 1254 + int retval; 1255 + 1256 + retval = clk_enable(sport->clk_per); 1257 + if (retval) 1258 + return; 1259 + retval = clk_enable(sport->clk_ipg); 1260 + if (retval) { 1261 + clk_disable(sport->clk_per); 1262 + return; 1263 + } 1662 1264 1663 1265 if (sport->port.sysrq) 1664 1266 locked = 0; ··· 1704 1286 1705 1287 if (locked) 1706 1288 spin_unlock_irqrestore(&sport->port.lock, flags); 1289 + 1290 + clk_disable(sport->clk_ipg); 1291 + clk_disable(sport->clk_per); 1707 1292 } 1708 1293 1709 1294 /* ··· 1780 1359 int bits = 8; 1781 1360 int parity = 'n'; 1782 1361 int flow = 'n'; 1362 + int retval; 1783 1363 1784 1364 /* 1785 1365 * Check whether an invalid uart number has been specified, and ··· 1793 1371 if (sport == NULL) 1794 1372 return -ENODEV; 1795 1373 1374 + /* For setting the registers, 
we only need to enable the ipg clock. */ 1375 + retval = clk_prepare_enable(sport->clk_ipg); 1376 + if (retval) 1377 + goto error_console; 1378 + 1796 1379 if (options) 1797 1380 uart_parse_options(options, &baud, &parity, &bits, &flow); 1798 1381 else ··· 1805 1378 1806 1379 imx_setup_ufcr(sport, 0); 1807 1380 1808 - return uart_set_options(&sport->port, co, baud, parity, bits, flow); 1381 + retval = uart_set_options(&sport->port, co, baud, parity, bits, flow); 1382 + 1383 + clk_disable(sport->clk_ipg); 1384 + if (retval) { 1385 + clk_unprepare(sport->clk_ipg); 1386 + goto error_console; 1387 + } 1388 + 1389 + retval = clk_prepare(sport->clk_per); 1390 + if (retval) 1391 + clk_disable_unprepare(sport->clk_ipg); 1392 + 1393 + error_console: 1394 + return retval; 1809 1395 } 1810 1396 1811 1397 static struct uart_driver imx_reg; ··· 1912 1472 1913 1473 sport->devdata = of_id->data; 1914 1474 1475 + if (of_device_is_stdout_path(np)) 1476 + add_preferred_console(imx_reg.cons->name, sport->port.line, 0); 1477 + 1915 1478 return 0; 1916 1479 } 1917 1480 #else ··· 1928 1485 static void serial_imx_probe_pdata(struct imx_port *sport, 1929 1486 struct platform_device *pdev) 1930 1487 { 1931 - struct imxuart_platform_data *pdata = pdev->dev.platform_data; 1488 + struct imxuart_platform_data *pdata = dev_get_platdata(&pdev->dev); 1932 1489 1933 1490 sport->port.line = pdev->id; 1934 1491 sport->devdata = (struct imx_uart_data *) pdev->id_entry->driver_data; ··· 1950 1507 void __iomem *base; 1951 1508 int ret = 0; 1952 1509 struct resource *res; 1953 - struct pinctrl *pinctrl; 1954 1510 1955 1511 sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL); 1956 1512 if (!sport) ··· 1985 1543 sport->timer.function = imx_timeout; 1986 1544 sport->timer.data = (unsigned long)sport; 1987 1545 1988 - pinctrl = devm_pinctrl_get_select_default(&pdev->dev); 1989 - if (IS_ERR(pinctrl)) { 1990 - ret = PTR_ERR(pinctrl); 1991 - dev_err(&pdev->dev, "failed to get default pinctrl: %d\n", 
ret); 1992 - return ret; 1993 - } 1994 - 1995 1546 sport->clk_ipg = devm_clk_get(&pdev->dev, "ipg"); 1996 1547 if (IS_ERR(sport->clk_ipg)) { 1997 1548 ret = PTR_ERR(sport->clk_ipg); ··· 1999 1564 return ret; 2000 1565 } 2001 1566 2002 - clk_prepare_enable(sport->clk_per); 2003 - clk_prepare_enable(sport->clk_ipg); 2004 - 2005 1567 sport->port.uartclk = clk_get_rate(sport->clk_per); 2006 1568 2007 1569 imx_ports[sport->port.line] = sport; 2008 1570 2009 - pdata = pdev->dev.platform_data; 1571 + pdata = dev_get_platdata(&pdev->dev); 2010 1572 if (pdata && pdata->init) { 2011 1573 ret = pdata->init(pdev); 2012 1574 if (ret) 2013 - goto clkput; 1575 + return ret; 2014 1576 } 2015 1577 2016 1578 ret = uart_add_one_port(&imx_reg, &sport->port); ··· 2015 1583 goto deinit; 2016 1584 platform_set_drvdata(pdev, sport); 2017 1585 2018 - if (!uart_console(&sport->port)) { 2019 - clk_disable_unprepare(sport->clk_per); 2020 - clk_disable_unprepare(sport->clk_ipg); 2021 - } 2022 - 2023 1586 return 0; 2024 1587 deinit: 2025 1588 if (pdata && pdata->exit) 2026 1589 pdata->exit(pdev); 2027 - clkput: 2028 - clk_disable_unprepare(sport->clk_per); 2029 - clk_disable_unprepare(sport->clk_ipg); 2030 1590 return ret; 2031 1591 } 2032 1592 ··· 2027 1603 struct imxuart_platform_data *pdata; 2028 1604 struct imx_port *sport = platform_get_drvdata(pdev); 2029 1605 2030 - pdata = pdev->dev.platform_data; 2031 - 2032 - platform_set_drvdata(pdev, NULL); 1606 + pdata = dev_get_platdata(&pdev->dev); 2033 1607 2034 1608 uart_remove_one_port(&imx_reg, &sport->port); 2035 1609
+2 -2
drivers/tty/serial/ioc4_serial.c
··· 297 297 struct ioc4_uartregs uart_1; 298 298 struct ioc4_uartregs uart_2; 299 299 struct ioc4_uartregs uart_3; 300 - } ioc4_serial; 300 + }; 301 301 302 302 /* UART clock speed */ 303 303 #define IOC4_SER_XIN_CLK_66 66666667 ··· 2767 2767 * called per card found from IOC4 master module. 2768 2768 * @idd: Master module data for this IOC4 2769 2769 */ 2770 - int 2770 + static int 2771 2771 ioc4_serial_attach_one(struct ioc4_driver_data *idd) 2772 2772 { 2773 2773 unsigned long tmp_addr1;
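The ioc4_serial.c hunk above removes the stray identifier after the closing brace, which had silently declared a file-scope variable `ioc4_serial` alongside the type, and gives the attach function internal linkage. A tiny illustration of that footgun (names here are only for the demo):

```c
#include <stddef.h>

/* "struct regs { ... } instance;" defines the TYPE and also an
 * unwanted global INSTANCE; dropping the trailing name, as the patch
 * does for ioc4_serial, leaves only the type. */
struct ioc4_uartregs_demo {
	unsigned int i4u_lcr;
	unsigned int i4u_dlm;
};				/* type only, no accidental instance */

/* "static" keeps a file-local helper out of the global symbol space. */
static size_t demo_regs_size(void)
{
	return sizeof(struct ioc4_uartregs_demo);
}
```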
+5 -2
drivers/tty/serial/lantiq.c
··· 318 318 struct ltq_uart_port *ltq_port = to_ltq_uart_port(port); 319 319 int retval; 320 320 321 - if (ltq_port->clk) 321 + if (!IS_ERR(ltq_port->clk)) 322 322 clk_enable(ltq_port->clk); 323 323 port->uartclk = clk_get_rate(ltq_port->fpiclk); 324 324 ··· 386 386 port->membase + LTQ_ASC_RXFCON); 387 387 ltq_w32_mask(ASCTXFCON_TXFEN, ASCTXFCON_TXFFLU, 388 388 port->membase + LTQ_ASC_TXFCON); 389 - if (ltq_port->clk) 389 + if (!IS_ERR(ltq_port->clk)) 390 390 clk_disable(ltq_port->clk); 391 391 } 392 392 ··· 635 635 return -ENODEV; 636 636 637 637 port = &ltq_port->port; 638 + 639 + if (!IS_ERR(ltq_port->clk)) 640 + clk_enable(ltq_port->clk); 638 641 639 642 port->uartclk = clk_get_rate(ltq_port->fpiclk); 640 643
+4 -3
drivers/tty/serial/lpc32xx_hs.c
··· 279 279 280 280 tmp = readl(LPC32XX_HSUART_FIFO(port->membase)); 281 281 } 282 + 283 + spin_unlock(&port->lock); 282 284 tty_flip_buffer_push(tport); 285 + spin_lock(&port->lock); 283 286 } 284 287 285 288 static void __serial_lpc32xx_tx(struct uart_port *port) ··· 354 351 } 355 352 356 353 /* Data received? */ 357 - if (status & (LPC32XX_HSU_RX_TIMEOUT_INT | LPC32XX_HSU_RX_TRIG_INT)) { 354 + if (status & (LPC32XX_HSU_RX_TIMEOUT_INT | LPC32XX_HSU_RX_TRIG_INT)) 358 355 __serial_lpc32xx_rx(port); 359 - tty_flip_buffer_push(tport); 360 - } 361 356 362 357 /* Transmit data request? */ 363 358 if ((status & LPC32XX_HSU_TX_INT) && (!uart_tx_stopped(port))) {
+3
drivers/tty/serial/m32r_sio.c
··· 368 368 ignore_char: 369 369 *status = serial_in(up, UART_LSR); 370 370 } while ((*status & UART_LSR_DR) && (max_count-- > 0)); 371 + 372 + spin_unlock(&up->port.lock); 371 373 tty_flip_buffer_push(port); 374 + spin_lock(&up->port.lock); 372 375 } 373 376 374 377 static void transmit_chars(struct uart_sio_port *up)
+1 -1
drivers/tty/serial/max3100.c
··· 779 779 max3100s[i]->irq = spi->irq; 780 780 spin_lock_init(&max3100s[i]->conf_lock); 781 781 spi_set_drvdata(spi, max3100s[i]); 782 - pdata = spi->dev.platform_data; 782 + pdata = dev_get_platdata(&spi->dev); 783 783 max3100s[i]->crystal = pdata->crystal; 784 784 max3100s[i]->loopback = pdata->loopback; 785 785 max3100s[i]->poll_time = pdata->poll_time * HZ / 1000;
+527 -474
drivers/tty/serial/max310x.c
··· 1 1 /* 2 - * Maxim (Dallas) MAX3107/8 serial driver 2 + * Maxim (Dallas) MAX3107/8/9, MAX14830 serial driver 3 3 * 4 - * Copyright (C) 2012 Alexander Shiyan <shc_work@mail.ru> 4 + * Copyright (C) 2012-2013 Alexander Shiyan <shc_work@mail.ru> 5 5 * 6 6 * Based on max3100.c, by Christian Pellegrin <chripell@evolware.org> 7 7 * Based on max3110.c, by Feng Tang <feng.tang@intel.com> ··· 13 13 * (at your option) any later version. 14 14 */ 15 15 16 - /* TODO: MAX3109 support (Dual) */ 17 - /* TODO: MAX14830 support (Quad) */ 18 - 19 16 #include <linux/module.h> 17 + #include <linux/delay.h> 20 18 #include <linux/device.h> 19 + #include <linux/bitops.h> 21 20 #include <linux/serial_core.h> 22 21 #include <linux/serial.h> 23 22 #include <linux/tty.h> ··· 24 25 #include <linux/regmap.h> 25 26 #include <linux/gpio.h> 26 27 #include <linux/spi/spi.h> 28 + 27 29 #include <linux/platform_data/max310x.h> 28 30 31 + #define MAX310X_NAME "max310x" 29 32 #define MAX310X_MAJOR 204 30 33 #define MAX310X_MINOR 209 31 34 ··· 38 37 #define MAX310X_IRQSTS_REG (0x02) /* IRQ status */ 39 38 #define MAX310X_LSR_IRQEN_REG (0x03) /* LSR IRQ enable */ 40 39 #define MAX310X_LSR_IRQSTS_REG (0x04) /* LSR IRQ status */ 41 - #define MAX310X_SPCHR_IRQEN_REG (0x05) /* Special char IRQ enable */ 40 + #define MAX310X_REG_05 (0x05) 41 + #define MAX310X_SPCHR_IRQEN_REG MAX310X_REG_05 /* Special char IRQ en */ 42 42 #define MAX310X_SPCHR_IRQSTS_REG (0x06) /* Special char IRQ status */ 43 43 #define MAX310X_STS_IRQEN_REG (0x07) /* Status IRQ enable */ 44 44 #define MAX310X_STS_IRQSTS_REG (0x08) /* Status IRQ status */ ··· 65 63 #define MAX310X_BRGDIVLSB_REG (0x1c) /* Baud rate divisor LSB */ 66 64 #define MAX310X_BRGDIVMSB_REG (0x1d) /* Baud rate divisor MSB */ 67 65 #define MAX310X_CLKSRC_REG (0x1e) /* Clock source */ 68 - /* Only present in MAX3107 */ 69 - #define MAX3107_REVID_REG (0x1f) /* Revision identification */ 66 + #define MAX310X_REG_1F (0x1f) 67 + 68 + #define MAX310X_REVID_REG 
MAX310X_REG_1F /* Revision ID */ 69 + 70 + #define MAX310X_GLOBALIRQ_REG MAX310X_REG_1F /* Global IRQ (RO) */ 71 + #define MAX310X_GLOBALCMD_REG MAX310X_REG_1F /* Global Command (WO) */ 72 + 73 + /* Extended registers */ 74 + #define MAX310X_REVID_EXTREG MAX310X_REG_05 /* Revision ID */ 70 75 71 76 /* IRQ register bits */ 72 77 #define MAX310X_IRQ_LSR_BIT (1 << 0) /* LSR interrupt */ ··· 255 246 #define MAX310X_CLKSRC_EXTCLK_BIT (1 << 4) /* External clock enable */ 256 247 #define MAX310X_CLKSRC_CLK2RTS_BIT (1 << 7) /* Baud clk to RTS pin */ 257 248 249 + /* Global commands */ 250 + #define MAX310X_EXTREG_ENBL (0xce) 251 + #define MAX310X_EXTREG_DSBL (0xcd) 252 + 258 253 /* Misc definitions */ 259 254 #define MAX310X_FIFO_SIZE (128) 255 + #define MAX310x_REV_MASK (0xfc) 260 256 261 257 /* MAX3107 specific */ 262 258 #define MAX3107_REV_ID (0xa0) 263 - #define MAX3107_REV_MASK (0xfe) 264 259 265 - /* IRQ status bits definitions */ 266 - #define MAX310X_IRQ_TX (MAX310X_IRQ_TXFIFO_BIT | \ 267 - MAX310X_IRQ_TXEMPTY_BIT) 268 - #define MAX310X_IRQ_RX (MAX310X_IRQ_RXFIFO_BIT | \ 269 - MAX310X_IRQ_RXEMPTY_BIT) 260 + /* MAX3109 specific */ 261 + #define MAX3109_REV_ID (0xc0) 270 262 271 - /* Supported chip types */ 272 - enum { 273 - MAX310X_TYPE_MAX3107 = 3107, 274 - MAX310X_TYPE_MAX3108 = 3108, 263 + /* MAX14830 specific */ 264 + #define MAX14830_BRGCFG_CLKDIS_BIT (1 << 6) /* Clock Disable */ 265 + #define MAX14830_REV_ID (0xb0) 266 + 267 + struct max310x_devtype { 268 + char name[9]; 269 + int nr; 270 + int (*detect)(struct device *); 271 + void (*power)(struct uart_port *, int); 272 + }; 273 + 274 + struct max310x_one { 275 + struct uart_port port; 276 + struct work_struct tx_work; 275 277 }; 276 278 277 279 struct max310x_port { 278 280 struct uart_driver uart; 279 - struct uart_port port; 280 - 281 - const char *name; 282 - int uartclk; 283 - 284 - unsigned int nr_gpio; 281 + struct max310x_devtype *devtype; 282 + struct regmap *regmap; 283 + struct regmap_config 
regcfg; 284 + struct mutex mutex; 285 + struct max310x_pdata *pdata; 286 + int gpio_used; 285 287 #ifdef CONFIG_GPIOLIB 286 288 struct gpio_chip gpio; 287 289 #endif 288 - 289 - struct regmap *regmap; 290 - struct regmap_config regcfg; 291 - 292 - struct workqueue_struct *wq; 293 - struct work_struct tx_work; 294 - 295 - struct mutex max310x_mutex; 296 - 297 - struct max310x_pdata *pdata; 290 + struct max310x_one p[0]; 298 291 }; 299 292 300 - static bool max3107_8_reg_writeable(struct device *dev, unsigned int reg) 293 + static u8 max310x_port_read(struct uart_port *port, u8 reg) 301 294 { 302 - switch (reg) { 295 + struct max310x_port *s = dev_get_drvdata(port->dev); 296 + unsigned int val = 0; 297 + 298 + regmap_read(s->regmap, port->iobase + reg, &val); 299 + 300 + return val; 301 + } 302 + 303 + static void max310x_port_write(struct uart_port *port, u8 reg, u8 val) 304 + { 305 + struct max310x_port *s = dev_get_drvdata(port->dev); 306 + 307 + regmap_write(s->regmap, port->iobase + reg, val); 308 + } 309 + 310 + static void max310x_port_update(struct uart_port *port, u8 reg, u8 mask, u8 val) 311 + { 312 + struct max310x_port *s = dev_get_drvdata(port->dev); 313 + 314 + regmap_update_bits(s->regmap, port->iobase + reg, mask, val); 315 + } 316 + 317 + static int max3107_detect(struct device *dev) 318 + { 319 + struct max310x_port *s = dev_get_drvdata(dev); 320 + unsigned int val = 0; 321 + int ret; 322 + 323 + ret = regmap_read(s->regmap, MAX310X_REVID_REG, &val); 324 + if (ret) 325 + return ret; 326 + 327 + if (((val & MAX310x_REV_MASK) != MAX3107_REV_ID)) { 328 + dev_err(dev, 329 + "%s ID 0x%02x does not match\n", s->devtype->name, val); 330 + return -ENODEV; 331 + } 332 + 333 + return 0; 334 + } 335 + 336 + static int max3108_detect(struct device *dev) 337 + { 338 + struct max310x_port *s = dev_get_drvdata(dev); 339 + unsigned int val = 0; 340 + int ret; 341 + 342 + /* MAX3108 have not REV ID register, we just check default value 343 + * from clocksource 
register to make sure everything works. 344 + */ 345 + ret = regmap_read(s->regmap, MAX310X_CLKSRC_REG, &val); 346 + if (ret) 347 + return ret; 348 + 349 + if (val != (MAX310X_CLKSRC_EXTCLK_BIT | MAX310X_CLKSRC_PLLBYP_BIT)) { 350 + dev_err(dev, "%s not present\n", s->devtype->name); 351 + return -ENODEV; 352 + } 353 + 354 + return 0; 355 + } 356 + 357 + static int max3109_detect(struct device *dev) 358 + { 359 + struct max310x_port *s = dev_get_drvdata(dev); 360 + unsigned int val = 0; 361 + int ret; 362 + 363 + ret = regmap_read(s->regmap, MAX310X_REVID_REG, &val); 364 + if (ret) 365 + return ret; 366 + 367 + if (((val & MAX310x_REV_MASK) != MAX3109_REV_ID)) { 368 + dev_err(dev, 369 + "%s ID 0x%02x does not match\n", s->devtype->name, val); 370 + return -ENODEV; 371 + } 372 + 373 + return 0; 374 + } 375 + 376 + static void max310x_power(struct uart_port *port, int on) 377 + { 378 + max310x_port_update(port, MAX310X_MODE1_REG, 379 + MAX310X_MODE1_FORCESLEEP_BIT, 380 + on ? 0 : MAX310X_MODE1_FORCESLEEP_BIT); 381 + if (on) 382 + msleep(50); 383 + } 384 + 385 + static int max14830_detect(struct device *dev) 386 + { 387 + struct max310x_port *s = dev_get_drvdata(dev); 388 + unsigned int val = 0; 389 + int ret; 390 + 391 + ret = regmap_write(s->regmap, MAX310X_GLOBALCMD_REG, 392 + MAX310X_EXTREG_ENBL); 393 + if (ret) 394 + return ret; 395 + 396 + regmap_read(s->regmap, MAX310X_REVID_EXTREG, &val); 397 + regmap_write(s->regmap, MAX310X_GLOBALCMD_REG, MAX310X_EXTREG_DSBL); 398 + if (((val & MAX310x_REV_MASK) != MAX14830_REV_ID)) { 399 + dev_err(dev, 400 + "%s ID 0x%02x does not match\n", s->devtype->name, val); 401 + return -ENODEV; 402 + } 403 + 404 + return 0; 405 + } 406 + 407 + static void max14830_power(struct uart_port *port, int on) 408 + { 409 + max310x_port_update(port, MAX310X_BRGCFG_REG, 410 + MAX14830_BRGCFG_CLKDIS_BIT, 411 + on ? 
0 : MAX14830_BRGCFG_CLKDIS_BIT); 412 + if (on) 413 + msleep(50); 414 + } 415 + 416 + static const struct max310x_devtype max3107_devtype = { 417 + .name = "MAX3107", 418 + .nr = 1, 419 + .detect = max3107_detect, 420 + .power = max310x_power, 421 + }; 422 + 423 + static const struct max310x_devtype max3108_devtype = { 424 + .name = "MAX3108", 425 + .nr = 1, 426 + .detect = max3108_detect, 427 + .power = max310x_power, 428 + }; 429 + 430 + static const struct max310x_devtype max3109_devtype = { 431 + .name = "MAX3109", 432 + .nr = 2, 433 + .detect = max3109_detect, 434 + .power = max310x_power, 435 + }; 436 + 437 + static const struct max310x_devtype max14830_devtype = { 438 + .name = "MAX14830", 439 + .nr = 4, 440 + .detect = max14830_detect, 441 + .power = max14830_power, 442 + }; 443 + 444 + static bool max310x_reg_writeable(struct device *dev, unsigned int reg) 445 + { 446 + switch (reg & 0x1f) { 303 447 case MAX310X_IRQSTS_REG: 304 448 case MAX310X_LSR_IRQSTS_REG: 305 449 case MAX310X_SPCHR_IRQSTS_REG: 306 450 case MAX310X_STS_IRQSTS_REG: 307 451 case MAX310X_TXFIFOLVL_REG: 308 452 case MAX310X_RXFIFOLVL_REG: 309 - case MAX3107_REVID_REG: /* Only available on MAX3107 */ 310 453 return false; 311 454 default: 312 455 break; ··· 469 308 470 309 static bool max310x_reg_volatile(struct device *dev, unsigned int reg) 471 310 { 472 - switch (reg) { 311 + switch (reg & 0x1f) { 473 312 case MAX310X_RHR_REG: 474 313 case MAX310X_IRQSTS_REG: 475 314 case MAX310X_LSR_IRQSTS_REG: ··· 478 317 case MAX310X_TXFIFOLVL_REG: 479 318 case MAX310X_RXFIFOLVL_REG: 480 319 case MAX310X_GPIODATA_REG: 320 + case MAX310X_BRGDIVLSB_REG: 321 + case MAX310X_REG_05: 322 + case MAX310X_REG_1F: 481 323 return true; 482 324 default: 483 325 break; ··· 491 327 492 328 static bool max310x_reg_precious(struct device *dev, unsigned int reg) 493 329 { 494 - switch (reg) { 330 + switch (reg & 0x1f) { 495 331 case MAX310X_RHR_REG: 496 332 case MAX310X_IRQSTS_REG: 497 333 case 
MAX310X_SPCHR_IRQSTS_REG: ··· 504 340 return false; 505 341 } 506 342 507 - static void max310x_set_baud(struct max310x_port *s, int baud) 343 + static void max310x_set_baud(struct uart_port *port, int baud) 508 344 { 509 - unsigned int mode = 0, div = s->uartclk / baud; 345 + unsigned int mode = 0, div = port->uartclk / baud; 510 346 511 347 if (!(div / 16)) { 512 348 /* Mode x2 */ 513 349 mode = MAX310X_BRGCFG_2XMODE_BIT; 514 - div = (s->uartclk * 2) / baud; 350 + div = (port->uartclk * 2) / baud; 515 351 } 516 352 517 353 if (!(div / 16)) { 518 354 /* Mode x4 */ 519 355 mode = MAX310X_BRGCFG_4XMODE_BIT; 520 - div = (s->uartclk * 4) / baud; 356 + div = (port->uartclk * 4) / baud; 521 357 } 522 358 523 - regmap_write(s->regmap, MAX310X_BRGDIVMSB_REG, 524 - ((div / 16) >> 8) & 0xff); 525 - regmap_write(s->regmap, MAX310X_BRGDIVLSB_REG, (div / 16) & 0xff); 526 - regmap_write(s->regmap, MAX310X_BRGCFG_REG, (div % 16) | mode); 527 - } 528 - 529 - static void max310x_wait_pll(struct max310x_port *s) 530 - { 531 - int tryes = 1000; 532 - 533 - /* Wait for PLL only if crystal is used */ 534 - if (!(s->pdata->driver_flags & MAX310X_EXT_CLK)) { 535 - unsigned int sts = 0; 536 - 537 - while (tryes--) { 538 - regmap_read(s->regmap, MAX310X_STS_IRQSTS_REG, &sts); 539 - if (sts & MAX310X_STS_CLKREADY_BIT) 540 - break; 541 - } 542 - } 359 + max310x_port_write(port, MAX310X_BRGDIVMSB_REG, (div / 16) >> 8); 360 + max310x_port_write(port, MAX310X_BRGDIVLSB_REG, div / 16); 361 + max310x_port_write(port, MAX310X_BRGCFG_REG, (div % 16) | mode); 543 362 } 544 363 545 364 static int max310x_update_best_err(unsigned long f, long *besterr) ··· 596 449 597 450 regmap_write(s->regmap, MAX310X_CLKSRC_REG, clksrc); 598 451 599 - if (pllcfg) 600 - max310x_wait_pll(s); 601 - 602 - dev_dbg(s->port.dev, "Reference clock set to %lu Hz\n", bestfreq); 452 + /* Wait for crystal */ 453 + if (pllcfg && !(s->pdata->driver_flags & MAX310X_EXT_CLK)) 454 + msleep(10); 603 455 604 456 return (int)bestfreq; 
605 457 } 606 458 607 - static void max310x_handle_rx(struct max310x_port *s, unsigned int rxlen) 459 + static void max310x_handle_rx(struct uart_port *port, unsigned int rxlen) 608 460 { 609 - unsigned int sts = 0, ch = 0, flag; 461 + unsigned int sts, ch, flag; 610 462 611 - if (unlikely(rxlen >= MAX310X_FIFO_SIZE)) { 612 - dev_warn(s->port.dev, "Possible RX FIFO overrun %d\n", rxlen); 463 + if (unlikely(rxlen >= port->fifosize)) { 464 + dev_warn_ratelimited(port->dev, 465 + "Port %i: Possible RX FIFO overrun\n", 466 + port->line); 467 + port->icount.buf_overrun++; 613 468 /* Ensure sanity of RX level */ 614 - rxlen = MAX310X_FIFO_SIZE; 469 + rxlen = port->fifosize; 615 470 } 616 471 617 - dev_dbg(s->port.dev, "RX Len = %u\n", rxlen); 618 - 619 472 while (rxlen--) { 620 - regmap_read(s->regmap, MAX310X_RHR_REG, &ch); 621 - regmap_read(s->regmap, MAX310X_LSR_IRQSTS_REG, &sts); 473 + ch = max310x_port_read(port, MAX310X_RHR_REG); 474 + sts = max310x_port_read(port, MAX310X_LSR_IRQSTS_REG); 622 475 623 476 sts &= MAX310X_LSR_RXPAR_BIT | MAX310X_LSR_FRERR_BIT | 624 477 MAX310X_LSR_RXOVR_BIT | MAX310X_LSR_RXBRK_BIT; 625 478 626 - s->port.icount.rx++; 479 + port->icount.rx++; 627 480 flag = TTY_NORMAL; 628 481 629 482 if (unlikely(sts)) { 630 483 if (sts & MAX310X_LSR_RXBRK_BIT) { 631 - s->port.icount.brk++; 632 - if (uart_handle_break(&s->port)) 484 + port->icount.brk++; 485 + if (uart_handle_break(port)) 633 486 continue; 634 487 } else if (sts & MAX310X_LSR_RXPAR_BIT) 635 - s->port.icount.parity++; 488 + port->icount.parity++; 636 489 else if (sts & MAX310X_LSR_FRERR_BIT) 637 - s->port.icount.frame++; 490 + port->icount.frame++; 638 491 else if (sts & MAX310X_LSR_RXOVR_BIT) 639 - s->port.icount.overrun++; 492 + port->icount.overrun++; 640 493 641 - sts &= s->port.read_status_mask; 494 + sts &= port->read_status_mask; 642 495 if (sts & MAX310X_LSR_RXBRK_BIT) 643 496 flag = TTY_BREAK; 644 497 else if (sts & MAX310X_LSR_RXPAR_BIT) ··· 649 502 flag = TTY_OVERRUN; 650 
503 } 651 504 652 - if (uart_handle_sysrq_char(s->port, ch)) 505 + if (uart_handle_sysrq_char(port, ch)) 653 506 continue; 654 507 655 - if (sts & s->port.ignore_status_mask) 508 + if (sts & port->ignore_status_mask) 656 509 continue; 657 510 658 - uart_insert_char(&s->port, sts, MAX310X_LSR_RXOVR_BIT, 659 - ch, flag); 511 + uart_insert_char(port, sts, MAX310X_LSR_RXOVR_BIT, ch, flag); 660 512 } 661 513 662 - tty_flip_buffer_push(&s->port.state->port); 514 + tty_flip_buffer_push(&port->state->port); 663 515 } 664 516 665 - static void max310x_handle_tx(struct max310x_port *s) 517 + static void max310x_handle_tx(struct uart_port *port) 666 518 { 667 - struct circ_buf *xmit = &s->port.state->xmit; 668 - unsigned int txlen = 0, to_send; 519 + struct circ_buf *xmit = &port->state->xmit; 520 + unsigned int txlen, to_send; 669 521 670 - if (unlikely(s->port.x_char)) { 671 - regmap_write(s->regmap, MAX310X_THR_REG, s->port.x_char); 672 - s->port.icount.tx++; 673 - s->port.x_char = 0; 522 + if (unlikely(port->x_char)) { 523 + max310x_port_write(port, MAX310X_THR_REG, port->x_char); 524 + port->icount.tx++; 525 + port->x_char = 0; 674 526 return; 675 527 } 676 528 677 - if (uart_circ_empty(xmit) || uart_tx_stopped(&s->port)) 529 + if (uart_circ_empty(xmit) || uart_tx_stopped(port)) 678 530 return; 679 531 680 532 /* Get length of data pending in circular buffer */ 681 533 to_send = uart_circ_chars_pending(xmit); 682 534 if (likely(to_send)) { 683 535 /* Limit to size of TX FIFO */ 684 - regmap_read(s->regmap, MAX310X_TXFIFOLVL_REG, &txlen); 685 - txlen = MAX310X_FIFO_SIZE - txlen; 536 + txlen = max310x_port_read(port, MAX310X_TXFIFOLVL_REG); 537 + txlen = port->fifosize - txlen; 686 538 to_send = (to_send > txlen) ? 
txlen : to_send; 687 539 688 - dev_dbg(s->port.dev, "TX Len = %u\n", to_send); 689 - 690 540 /* Add data to send */ 691 - s->port.icount.tx += to_send; 541 + port->icount.tx += to_send; 692 542 while (to_send--) { 693 - regmap_write(s->regmap, MAX310X_THR_REG, 694 - xmit->buf[xmit->tail]); 543 + max310x_port_write(port, MAX310X_THR_REG, 544 + xmit->buf[xmit->tail]); 695 545 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 696 546 }; 697 547 } 698 548 699 549 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 700 - uart_write_wakeup(&s->port); 550 + uart_write_wakeup(port); 551 + } 552 + 553 + static void max310x_port_irq(struct max310x_port *s, int portno) 554 + { 555 + struct uart_port *port = &s->p[portno].port; 556 + 557 + do { 558 + unsigned int ists, lsr, rxlen; 559 + 560 + /* Read IRQ status & RX FIFO level */ 561 + ists = max310x_port_read(port, MAX310X_IRQSTS_REG); 562 + rxlen = max310x_port_read(port, MAX310X_RXFIFOLVL_REG); 563 + if (!ists && !rxlen) 564 + break; 565 + 566 + if (ists & MAX310X_IRQ_CTS_BIT) { 567 + lsr = max310x_port_read(port, MAX310X_LSR_IRQSTS_REG); 568 + uart_handle_cts_change(port, 569 + !!(lsr & MAX310X_LSR_CTS_BIT)); 570 + } 571 + if (rxlen) 572 + max310x_handle_rx(port, rxlen); 573 + if (ists & MAX310X_IRQ_TXEMPTY_BIT) { 574 + mutex_lock(&s->mutex); 575 + max310x_handle_tx(port); 576 + mutex_unlock(&s->mutex); 577 + } 578 + } while (1); 701 579 } 702 580 703 581 static irqreturn_t max310x_ist(int irq, void *dev_id) 704 582 { 705 583 struct max310x_port *s = (struct max310x_port *)dev_id; 706 - unsigned int ists = 0, lsr = 0, rxlen = 0; 707 584 708 - mutex_lock(&s->max310x_mutex); 585 + if (s->uart.nr > 1) { 586 + do { 587 + unsigned int val = ~0; 709 588 710 - for (;;) { 711 - /* Read IRQ status & RX FIFO level */ 712 - regmap_read(s->regmap, MAX310X_IRQSTS_REG, &ists); 713 - regmap_read(s->regmap, MAX310X_LSR_IRQSTS_REG, &lsr); 714 - regmap_read(s->regmap, MAX310X_RXFIFOLVL_REG, &rxlen); 715 - if (!ists && !(lsr & 
MAX310X_LSR_RXTO_BIT) && !rxlen) 716 - break; 717 - 718 - dev_dbg(s->port.dev, "IRQ status: 0x%02x\n", ists); 719 - 720 - if (rxlen) 721 - max310x_handle_rx(s, rxlen); 722 - if (ists & MAX310X_IRQ_TX) 723 - max310x_handle_tx(s); 724 - if (ists & MAX310X_IRQ_CTS_BIT) 725 - uart_handle_cts_change(&s->port, 726 - !!(lsr & MAX310X_LSR_CTS_BIT)); 727 - } 728 - 729 - mutex_unlock(&s->max310x_mutex); 589 + WARN_ON_ONCE(regmap_read(s->regmap, 590 + MAX310X_GLOBALIRQ_REG, &val)); 591 + val = ((1 << s->uart.nr) - 1) & ~val; 592 + if (!val) 593 + break; 594 + max310x_port_irq(s, fls(val) - 1); 595 + } while (1); 596 + } else 597 + max310x_port_irq(s, 0); 730 598 731 599 return IRQ_HANDLED; 732 600 } 733 601 734 602 static void max310x_wq_proc(struct work_struct *ws) 735 603 { 736 - struct max310x_port *s = container_of(ws, struct max310x_port, tx_work); 604 + struct max310x_one *one = container_of(ws, struct max310x_one, tx_work); 605 + struct max310x_port *s = dev_get_drvdata(one->port.dev); 737 606 738 - mutex_lock(&s->max310x_mutex); 739 - max310x_handle_tx(s); 740 - mutex_unlock(&s->max310x_mutex); 607 + mutex_lock(&s->mutex); 608 + max310x_handle_tx(&one->port); 609 + mutex_unlock(&s->mutex); 741 610 } 742 611 743 612 static void max310x_start_tx(struct uart_port *port) 744 613 { 745 - struct max310x_port *s = container_of(port, struct max310x_port, port); 614 + struct max310x_one *one = container_of(port, struct max310x_one, port); 746 615 747 - queue_work(s->wq, &s->tx_work); 748 - } 749 - 750 - static void max310x_stop_tx(struct uart_port *port) 751 - { 752 - /* Do nothing */ 753 - } 754 - 755 - static void max310x_stop_rx(struct uart_port *port) 756 - { 757 - /* Do nothing */ 616 + if (!work_pending(&one->tx_work)) 617 + schedule_work(&one->tx_work); 758 618 } 759 619 760 620 static unsigned int max310x_tx_empty(struct uart_port *port) 761 621 { 762 - unsigned int val = 0; 763 - struct max310x_port *s = container_of(port, struct max310x_port, port); 622 + unsigned 
int lvl, sts; 764 623 765 - mutex_lock(&s->max310x_mutex); 766 - regmap_read(s->regmap, MAX310X_TXFIFOLVL_REG, &val); 767 - mutex_unlock(&s->max310x_mutex); 624 + lvl = max310x_port_read(port, MAX310X_TXFIFOLVL_REG); 625 + sts = max310x_port_read(port, MAX310X_IRQSTS_REG); 768 626 769 - return val ? 0 : TIOCSER_TEMT; 770 - } 771 - 772 - static void max310x_enable_ms(struct uart_port *port) 773 - { 774 - /* Modem status not supported */ 627 + return ((sts & MAX310X_IRQ_TXEMPTY_BIT) && !lvl) ? TIOCSER_TEMT : 0; 775 628 } 776 629 777 630 static unsigned int max310x_get_mctrl(struct uart_port *port) ··· 791 644 792 645 static void max310x_break_ctl(struct uart_port *port, int break_state) 793 646 { 794 - struct max310x_port *s = container_of(port, struct max310x_port, port); 795 - 796 - mutex_lock(&s->max310x_mutex); 797 - regmap_update_bits(s->regmap, MAX310X_LCR_REG, 798 - MAX310X_LCR_TXBREAK_BIT, 799 - break_state ? MAX310X_LCR_TXBREAK_BIT : 0); 800 - mutex_unlock(&s->max310x_mutex); 647 + max310x_port_update(port, MAX310X_LCR_REG, 648 + MAX310X_LCR_TXBREAK_BIT, 649 + break_state ? 
MAX310X_LCR_TXBREAK_BIT : 0); 801 650 } 802 651 803 652 static void max310x_set_termios(struct uart_port *port, 804 653 struct ktermios *termios, 805 654 struct ktermios *old) 806 655 { 807 - struct max310x_port *s = container_of(port, struct max310x_port, port); 808 656 unsigned int lcr, flow = 0; 809 657 int baud; 810 658 811 - mutex_lock(&s->max310x_mutex); 812 - 813 659 /* Mask termios capabilities we don't support */ 814 660 termios->c_cflag &= ~CMSPAR; 815 - termios->c_iflag &= ~IXANY; 816 661 817 662 /* Word size */ 818 663 switch (termios->c_cflag & CSIZE) { ··· 835 696 lcr |= MAX310X_LCR_STOPLEN_BIT; /* 2 stops */ 836 697 837 698 /* Update LCR register */ 838 - regmap_write(s->regmap, MAX310X_LCR_REG, lcr); 699 + max310x_port_write(port, MAX310X_LCR_REG, lcr); 839 700 840 701 /* Set read status mask */ 841 702 port->read_status_mask = MAX310X_LSR_RXOVR_BIT; ··· 856 717 MAX310X_LSR_RXBRK_BIT; 857 718 858 719 /* Configure flow control */ 859 - regmap_write(s->regmap, MAX310X_XON1_REG, termios->c_cc[VSTART]); 860 - regmap_write(s->regmap, MAX310X_XOFF1_REG, termios->c_cc[VSTOP]); 720 + max310x_port_write(port, MAX310X_XON1_REG, termios->c_cc[VSTART]); 721 + max310x_port_write(port, MAX310X_XOFF1_REG, termios->c_cc[VSTOP]); 861 722 if (termios->c_cflag & CRTSCTS) 862 723 flow |= MAX310X_FLOWCTRL_AUTOCTS_BIT | 863 724 MAX310X_FLOWCTRL_AUTORTS_BIT; ··· 867 728 if (termios->c_iflag & IXOFF) 868 729 flow |= MAX310X_FLOWCTRL_SWFLOW1_BIT | 869 730 MAX310X_FLOWCTRL_SWFLOWEN_BIT; 870 - regmap_write(s->regmap, MAX310X_FLOWCTRL_REG, flow); 731 + max310x_port_write(port, MAX310X_FLOWCTRL_REG, flow); 871 732 872 733 /* Get baud rate generator configuration */ 873 734 baud = uart_get_baud_rate(port, termios, old, ··· 875 736 port->uartclk / 4); 876 737 877 738 /* Setup baudrate generator */ 878 - max310x_set_baud(s, baud); 739 + max310x_set_baud(port, baud); 879 740 880 741 /* Update timeout according to new baud rate */ 881 742 uart_update_timeout(port, termios->c_cflag, 
baud); 882 - 883 - mutex_unlock(&s->max310x_mutex); 884 743 } 885 744 886 745 static int max310x_startup(struct uart_port *port) 887 746 { 888 747 unsigned int val, line = port->line; 889 - struct max310x_port *s = container_of(port, struct max310x_port, port); 748 + struct max310x_port *s = dev_get_drvdata(port->dev); 890 749 891 - if (s->pdata->suspend) 892 - s->pdata->suspend(0); 893 - 894 - mutex_lock(&s->max310x_mutex); 750 + s->devtype->power(port, 1); 895 751 896 752 /* Configure baud rate, 9600 as default */ 897 - max310x_set_baud(s, 9600); 753 + max310x_set_baud(port, 9600); 898 754 899 755 /* Configure LCR register, 8N1 mode by default */ 900 - val = MAX310X_LCR_WORD_LEN_8; 901 - regmap_write(s->regmap, MAX310X_LCR_REG, val); 756 + max310x_port_write(port, MAX310X_LCR_REG, MAX310X_LCR_WORD_LEN_8); 902 757 903 758 /* Configure MODE1 register */ 904 - regmap_update_bits(s->regmap, MAX310X_MODE1_REG, 905 - MAX310X_MODE1_TRNSCVCTRL_BIT, 906 - (s->pdata->uart_flags[line] & MAX310X_AUTO_DIR_CTRL) 907 - ? MAX310X_MODE1_TRNSCVCTRL_BIT : 0); 759 + max310x_port_update(port, MAX310X_MODE1_REG, 760 + MAX310X_MODE1_TRNSCVCTRL_BIT, 761 + (s->pdata->uart_flags[line] & MAX310X_AUTO_DIR_CTRL) 762 + ? 
MAX310X_MODE1_TRNSCVCTRL_BIT : 0); 908 763 909 764 /* Configure MODE2 register */ 910 765 val = MAX310X_MODE2_RXEMPTINV_BIT; ··· 909 776 910 777 /* Reset FIFOs */ 911 778 val |= MAX310X_MODE2_FIFORST_BIT; 912 - regmap_write(s->regmap, MAX310X_MODE2_REG, val); 913 - 914 - /* Configure FIFO trigger level register */ 915 - /* RX FIFO trigger for 16 words, TX FIFO trigger for 64 words */ 916 - val = MAX310X_FIFOTRIGLVL_RX(16) | MAX310X_FIFOTRIGLVL_TX(64); 917 - regmap_write(s->regmap, MAX310X_FIFOTRIGLVL_REG, val); 779 + max310x_port_write(port, MAX310X_MODE2_REG, val); 780 + max310x_port_update(port, MAX310X_MODE2_REG, 781 + MAX310X_MODE2_FIFORST_BIT, 0); 918 782 919 783 /* Configure flow control levels */ 920 784 /* Flow control halt level 96, resume level 48 */ 921 - val = MAX310X_FLOWLVL_RES(48) | MAX310X_FLOWLVL_HALT(96); 922 - regmap_write(s->regmap, MAX310X_FLOWLVL_REG, val); 785 + max310x_port_write(port, MAX310X_FLOWLVL_REG, 786 + MAX310X_FLOWLVL_RES(48) | MAX310X_FLOWLVL_HALT(96)); 923 787 924 - /* Clear timeout register */ 925 - regmap_write(s->regmap, MAX310X_RXTO_REG, 0); 788 + /* Clear IRQ status register */ 789 + max310x_port_read(port, MAX310X_IRQSTS_REG); 926 790 927 - /* Configure LSR interrupt enable register */ 928 - /* Enable RX timeout interrupt */ 929 - val = MAX310X_LSR_RXTO_BIT; 930 - regmap_write(s->regmap, MAX310X_LSR_IRQEN_REG, val); 931 - 932 - /* Clear FIFO reset */ 933 - regmap_update_bits(s->regmap, MAX310X_MODE2_REG, 934 - MAX310X_MODE2_FIFORST_BIT, 0); 935 - 936 - /* Clear IRQ status register by reading it */ 937 - regmap_read(s->regmap, MAX310X_IRQSTS_REG, &val); 938 - 939 - /* Configure interrupt enable register */ 940 - /* Enable CTS change interrupt */ 941 - val = MAX310X_IRQ_CTS_BIT; 942 - /* Enable RX, TX interrupts */ 943 - val |= MAX310X_IRQ_RX | MAX310X_IRQ_TX; 944 - regmap_write(s->regmap, MAX310X_IRQEN_REG, val); 945 - 946 - mutex_unlock(&s->max310x_mutex); 791 + /* Enable RX, TX, CTS change interrupts */ 792 + val = 
MAX310X_IRQ_RXEMPTY_BIT | MAX310X_IRQ_TXEMPTY_BIT; 793 + max310x_port_write(port, MAX310X_IRQEN_REG, val | MAX310X_IRQ_CTS_BIT); 947 794 948 795 return 0; 949 796 } 950 797 951 798 static void max310x_shutdown(struct uart_port *port) 952 799 { 953 - struct max310x_port *s = container_of(port, struct max310x_port, port); 800 + struct max310x_port *s = dev_get_drvdata(port->dev); 954 801 955 802 /* Disable all interrupts */ 956 - mutex_lock(&s->max310x_mutex); 957 - regmap_write(s->regmap, MAX310X_IRQEN_REG, 0); 958 - mutex_unlock(&s->max310x_mutex); 803 + max310x_port_write(port, MAX310X_IRQEN_REG, 0); 959 804 960 - if (s->pdata->suspend) 961 - s->pdata->suspend(1); 805 + s->devtype->power(port, 0); 962 806 } 963 807 964 808 static const char *max310x_type(struct uart_port *port) 965 809 { 966 - struct max310x_port *s = container_of(port, struct max310x_port, port); 810 + struct max310x_port *s = dev_get_drvdata(port->dev); 967 811 968 - return (port->type == PORT_MAX310X) ? s->name : NULL; 812 + return (port->type == PORT_MAX310X) ? 
s->devtype->name : NULL; 969 813 } 970 814 971 815 static int max310x_request_port(struct uart_port *port) ··· 951 841 return 0; 952 842 } 953 843 954 - static void max310x_release_port(struct uart_port *port) 955 - { 956 - /* Do nothing */ 957 - } 958 - 959 844 static void max310x_config_port(struct uart_port *port, int flags) 960 845 { 961 846 if (flags & UART_CONFIG_TYPE) 962 847 port->type = PORT_MAX310X; 963 848 } 964 849 965 - static int max310x_verify_port(struct uart_port *port, struct serial_struct *ser) 850 + static int max310x_verify_port(struct uart_port *port, struct serial_struct *s) 966 851 { 967 - if ((ser->type == PORT_UNKNOWN) || (ser->type == PORT_MAX310X)) 968 - return 0; 969 - if (ser->irq == port->irq) 970 - return 0; 852 + if ((s->type != PORT_UNKNOWN) && (s->type != PORT_MAX310X)) 853 + return -EINVAL; 854 + if (s->irq != port->irq) 855 + return -EINVAL; 971 856 972 - return -EINVAL; 857 + return 0; 973 858 } 974 859 975 - static struct uart_ops max310x_ops = { 860 + static void max310x_null_void(struct uart_port *port) 861 + { 862 + /* Do nothing */ 863 + } 864 + 865 + static const struct uart_ops max310x_ops = { 976 866 .tx_empty = max310x_tx_empty, 977 867 .set_mctrl = max310x_set_mctrl, 978 868 .get_mctrl = max310x_get_mctrl, 979 - .stop_tx = max310x_stop_tx, 869 + .stop_tx = max310x_null_void, 980 870 .start_tx = max310x_start_tx, 981 - .stop_rx = max310x_stop_rx, 982 - .enable_ms = max310x_enable_ms, 871 + .stop_rx = max310x_null_void, 872 + .enable_ms = max310x_null_void, 983 873 .break_ctl = max310x_break_ctl, 984 874 .startup = max310x_startup, 985 875 .shutdown = max310x_shutdown, 986 876 .set_termios = max310x_set_termios, 987 877 .type = max310x_type, 988 878 .request_port = max310x_request_port, 989 - .release_port = max310x_release_port, 879 + .release_port = max310x_null_void, 990 880 .config_port = max310x_config_port, 991 881 .verify_port = max310x_verify_port, 992 882 }; 993 883 994 - #ifdef CONFIG_PM_SLEEP 995 - 996 - 
static int max310x_suspend(struct device *dev) 997 - { 998 - int ret; 999 - struct max310x_port *s = dev_get_drvdata(dev); 1000 - 1001 - dev_dbg(dev, "Suspend\n"); 1002 - 1003 - ret = uart_suspend_port(&s->uart, &s->port); 1004 - 1005 - mutex_lock(&s->max310x_mutex); 1006 - 1007 - /* Enable sleep mode */ 1008 - regmap_update_bits(s->regmap, MAX310X_MODE1_REG, 1009 - MAX310X_MODE1_FORCESLEEP_BIT, 1010 - MAX310X_MODE1_FORCESLEEP_BIT); 1011 - 1012 - mutex_unlock(&s->max310x_mutex); 1013 - 1014 - if (s->pdata->suspend) 1015 - s->pdata->suspend(1); 1016 - 1017 - return ret; 1018 - } 1019 - 1020 - static int max310x_resume(struct device *dev) 884 + static int __maybe_unused max310x_suspend(struct device *dev) 1021 885 { 1022 886 struct max310x_port *s = dev_get_drvdata(dev); 887 + int i; 1023 888 1024 - dev_dbg(dev, "Resume\n"); 889 + for (i = 0; i < s->uart.nr; i++) { 890 + uart_suspend_port(&s->uart, &s->p[i].port); 891 + s->devtype->power(&s->p[i].port, 0); 892 + } 1025 893 1026 - if (s->pdata->suspend) 1027 - s->pdata->suspend(0); 1028 - 1029 - mutex_lock(&s->max310x_mutex); 1030 - 1031 - /* Disable sleep mode */ 1032 - regmap_update_bits(s->regmap, MAX310X_MODE1_REG, 1033 - MAX310X_MODE1_FORCESLEEP_BIT, 1034 - 0); 1035 - 1036 - max310x_wait_pll(s); 1037 - 1038 - mutex_unlock(&s->max310x_mutex); 1039 - 1040 - return uart_resume_port(&s->uart, &s->port); 894 + return 0; 1041 895 } 1042 896 1043 - static SIMPLE_DEV_PM_OPS(max310x_pm_ops, max310x_suspend, max310x_resume); 1044 - #define MAX310X_PM_OPS (&max310x_pm_ops) 897 + static int __maybe_unused max310x_resume(struct device *dev) 898 + { 899 + struct max310x_port *s = dev_get_drvdata(dev); 900 + int i; 1045 901 1046 - #else 1047 - #define MAX310X_PM_OPS NULL 1048 - #endif 902 + for (i = 0; i < s->uart.nr; i++) { 903 + s->devtype->power(&s->p[i].port, 1); 904 + uart_resume_port(&s->uart, &s->p[i].port); 905 + } 906 + 907 + return 0; 908 + } 1049 909 1050 910 #ifdef CONFIG_GPIOLIB 1051 911 static int 
max310x_gpio_get(struct gpio_chip *chip, unsigned offset) 1052 912 { 1053 - unsigned int val = 0; 913 + unsigned int val; 1054 914 struct max310x_port *s = container_of(chip, struct max310x_port, gpio); 915 + struct uart_port *port = &s->p[offset / 4].port; 1055 916 1056 - mutex_lock(&s->max310x_mutex); 1057 - regmap_read(s->regmap, MAX310X_GPIODATA_REG, &val); 1058 - mutex_unlock(&s->max310x_mutex); 917 + val = max310x_port_read(port, MAX310X_GPIODATA_REG); 1059 918 1060 - return !!((val >> 4) & (1 << offset)); 919 + return !!((val >> 4) & (1 << (offset % 4))); 1061 920 } 1062 921 1063 922 static void max310x_gpio_set(struct gpio_chip *chip, unsigned offset, int value) 1064 923 { 1065 924 struct max310x_port *s = container_of(chip, struct max310x_port, gpio); 925 + struct uart_port *port = &s->p[offset / 4].port; 1066 926 1067 - mutex_lock(&s->max310x_mutex); 1068 - regmap_update_bits(s->regmap, MAX310X_GPIODATA_REG, 1 << offset, value ? 1069 - 1 << offset : 0); 1070 - mutex_unlock(&s->max310x_mutex); 927 + max310x_port_update(port, MAX310X_GPIODATA_REG, 1 << (offset % 4), 928 + value ? 
1 << (offset % 4) : 0); 1071 929 } 1072 930 1073 931 static int max310x_gpio_direction_input(struct gpio_chip *chip, unsigned offset) 1074 932 { 1075 933 struct max310x_port *s = container_of(chip, struct max310x_port, gpio); 934 + struct uart_port *port = &s->p[offset / 4].port; 1076 935 1077 - mutex_lock(&s->max310x_mutex); 1078 - 1079 - regmap_update_bits(s->regmap, MAX310X_GPIOCFG_REG, 1 << offset, 0); 1080 - 1081 - mutex_unlock(&s->max310x_mutex); 936 + max310x_port_update(port, MAX310X_GPIOCFG_REG, 1 << (offset % 4), 0); 1082 937 1083 938 return 0; 1084 939 } ··· 1052 977 unsigned offset, int value) 1053 978 { 1054 979 struct max310x_port *s = container_of(chip, struct max310x_port, gpio); 980 + struct uart_port *port = &s->p[offset / 4].port; 1055 981 1056 - mutex_lock(&s->max310x_mutex); 1057 - 1058 - regmap_update_bits(s->regmap, MAX310X_GPIOCFG_REG, 1 << offset, 1059 - 1 << offset); 1060 - regmap_update_bits(s->regmap, MAX310X_GPIODATA_REG, 1 << offset, value ? 1061 - 1 << offset : 0); 1062 - 1063 - mutex_unlock(&s->max310x_mutex); 982 + max310x_port_update(port, MAX310X_GPIODATA_REG, 1 << (offset % 4), 983 + value ? 
1 << (offset % 4) : 0); 984 + max310x_port_update(port, MAX310X_GPIOCFG_REG, 1 << (offset % 4), 985 + 1 << (offset % 4)); 1064 986 1065 987 return 0; 1066 988 } 1067 989 #endif 1068 990 1069 - /* Generic platform data */ 1070 - static struct max310x_pdata generic_plat_data = { 1071 - .driver_flags = MAX310X_EXT_CLK, 1072 - .uart_flags[0] = MAX310X_ECHO_SUPRESS, 1073 - .frequency = 26000000, 1074 - }; 1075 - 1076 - static int max310x_probe(struct spi_device *spi) 991 + static int max310x_probe(struct device *dev, int is_spi, 992 + struct max310x_devtype *devtype, int irq) 1077 993 { 1078 994 struct max310x_port *s; 1079 - struct device *dev = &spi->dev; 1080 - int chiptype = spi_get_device_id(spi)->driver_data; 1081 - struct max310x_pdata *pdata = dev->platform_data; 1082 - unsigned int val = 0; 1083 - int ret; 995 + struct max310x_pdata *pdata = dev_get_platdata(dev); 996 + int i, ret, uartclk; 1084 997 1085 998 /* Check for IRQ */ 1086 - if (spi->irq <= 0) { 999 + if (irq <= 0) { 1087 1000 dev_err(dev, "No IRQ specified\n"); 1088 1001 return -ENOTSUPP; 1089 1002 } 1090 1003 1004 + if (!pdata) { 1005 + dev_err(dev, "No platform data supplied\n"); 1006 + return -EINVAL; 1007 + } 1008 + 1091 1009 /* Alloc port structure */ 1092 - s = devm_kzalloc(dev, sizeof(struct max310x_port), GFP_KERNEL); 1010 + s = devm_kzalloc(dev, sizeof(*s) + 1011 + sizeof(struct max310x_one) * devtype->nr, GFP_KERNEL); 1093 1012 if (!s) { 1094 1013 dev_err(dev, "Error allocating port structure\n"); 1095 1014 return -ENOMEM; 1096 - } 1097 - dev_set_drvdata(dev, s); 1098 - 1099 - if (!pdata) { 1100 - dev_warn(dev, "No platform data supplied, using defaults\n"); 1101 - pdata = &generic_plat_data; 1102 - } 1103 - s->pdata = pdata; 1104 - 1105 - /* Individual chip settings */ 1106 - switch (chiptype) { 1107 - case MAX310X_TYPE_MAX3107: 1108 - s->name = "MAX3107"; 1109 - s->nr_gpio = 4; 1110 - s->uart.nr = 1; 1111 - s->regcfg.max_register = 0x1f; 1112 - break; 1113 - case MAX310X_TYPE_MAX3108: 
1114 - s->name = "MAX3108"; 1115 - s->nr_gpio = 4; 1116 - s->uart.nr = 1; 1117 - s->regcfg.max_register = 0x1e; 1118 - break; 1119 - default: 1120 - dev_err(dev, "Unsupported chip type %i\n", chiptype); 1121 - return -ENOTSUPP; 1122 1015 } 1123 1016 1124 1017 /* Check input frequency */ ··· 1098 1055 ((pdata->frequency < 1000000) || (pdata->frequency > 4000000))) 1099 1056 goto err_freq; 1100 1057 1101 - mutex_init(&s->max310x_mutex); 1058 + s->pdata = pdata; 1059 + s->devtype = devtype; 1060 + dev_set_drvdata(dev, s); 1102 1061 1103 - /* Setup SPI bus */ 1104 - spi->mode = SPI_MODE_0; 1105 - spi->bits_per_word = 8; 1106 - spi->max_speed_hz = 26000000; 1107 - spi_setup(spi); 1062 + mutex_init(&s->mutex); 1108 1063 1109 1064 /* Setup regmap */ 1110 1065 s->regcfg.reg_bits = 8; ··· 1110 1069 s->regcfg.read_flag_mask = 0x00; 1111 1070 s->regcfg.write_flag_mask = 0x80; 1112 1071 s->regcfg.cache_type = REGCACHE_RBTREE; 1113 - s->regcfg.writeable_reg = max3107_8_reg_writeable; 1072 + s->regcfg.writeable_reg = max310x_reg_writeable; 1114 1073 s->regcfg.volatile_reg = max310x_reg_volatile; 1115 1074 s->regcfg.precious_reg = max310x_reg_precious; 1116 - s->regmap = devm_regmap_init_spi(spi, &s->regcfg); 1075 + s->regcfg.max_register = devtype->nr * 0x20 - 1; 1076 + 1077 + if (IS_ENABLED(CONFIG_SPI_MASTER) && is_spi) { 1078 + struct spi_device *spi = to_spi_device(dev); 1079 + 1080 + s->regmap = devm_regmap_init_spi(spi, &s->regcfg); 1081 + } else 1082 + return -ENOTSUPP; 1083 + 1117 1084 if (IS_ERR(s->regmap)) { 1118 - ret = PTR_ERR(s->regmap); 1119 1085 dev_err(dev, "Failed to initialize register map\n"); 1120 - goto err_out; 1121 - } 1122 - 1123 - /* Reset chip & check SPI function */ 1124 - ret = regmap_write(s->regmap, MAX310X_MODE2_REG, MAX310X_MODE2_RST_BIT); 1125 - if (ret) { 1126 - dev_err(dev, "SPI transfer failed\n"); 1127 - goto err_out; 1128 - } 1129 - /* Clear chip reset */ 1130 - regmap_write(s->regmap, MAX310X_MODE2_REG, 0); 1131 - 1132 - switch (chiptype) { 
1133 - case MAX310X_TYPE_MAX3107: 1134 - /* Check REV ID to ensure we are talking to what we expect */ 1135 - regmap_read(s->regmap, MAX3107_REVID_REG, &val); 1136 - if (((val & MAX3107_REV_MASK) != MAX3107_REV_ID)) { 1137 - dev_err(dev, "%s ID 0x%02x does not match\n", 1138 - s->name, val); 1139 - ret = -ENODEV; 1140 - goto err_out; 1141 - } 1142 - break; 1143 - case MAX310X_TYPE_MAX3108: 1144 - /* MAX3108 have not REV ID register, we just check default value 1145 - * from clocksource register to make sure everything works. 1146 - */ 1147 - regmap_read(s->regmap, MAX310X_CLKSRC_REG, &val); 1148 - if (val != (MAX310X_CLKSRC_EXTCLK_BIT | 1149 - MAX310X_CLKSRC_PLLBYP_BIT)) { 1150 - dev_err(dev, "%s not present\n", s->name); 1151 - ret = -ENODEV; 1152 - goto err_out; 1153 - } 1154 - break; 1086 + return PTR_ERR(s->regmap); 1155 1087 } 1156 1088 1157 1089 /* Board specific configure */ 1158 - if (pdata->init) 1159 - pdata->init(); 1160 - if (pdata->suspend) 1161 - pdata->suspend(0); 1090 + if (s->pdata->init) 1091 + s->pdata->init(); 1162 1092 1163 - /* Calculate referecne clock */ 1164 - s->uartclk = max310x_set_ref_clk(s); 1093 + /* Check device to ensure we are talking to what we expect */ 1094 + ret = devtype->detect(dev); 1095 + if (ret) 1096 + return ret; 1165 1097 1166 - /* Disable all interrupts */ 1167 - regmap_write(s->regmap, MAX310X_IRQEN_REG, 0); 1098 + for (i = 0; i < devtype->nr; i++) { 1099 + unsigned int offs = i << 5; 1168 1100 1169 - /* Setup MODE1 register */ 1170 - val = MAX310X_MODE1_IRQSEL_BIT; /* Enable IRQ pin */ 1171 - if (pdata->driver_flags & MAX310X_AUTOSLEEP) 1172 - val = MAX310X_MODE1_AUTOSLEEP_BIT; 1173 - regmap_write(s->regmap, MAX310X_MODE1_REG, val); 1101 + /* Reset port */ 1102 + regmap_write(s->regmap, MAX310X_MODE2_REG + offs, 1103 + MAX310X_MODE2_RST_BIT); 1104 + /* Clear port reset */ 1105 + regmap_write(s->regmap, MAX310X_MODE2_REG + offs, 0); 1174 1106 1175 - /* Setup interrupt */ 1176 - ret = devm_request_threaded_irq(dev, 
spi->irq, NULL, max310x_ist, 1177 - IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 1178 - dev_name(dev), s); 1179 - if (ret) { 1180 - dev_err(dev, "Unable to reguest IRQ %i\n", spi->irq); 1181 - goto err_out; 1107 + /* Wait for port startup */ 1108 + do { 1109 + regmap_read(s->regmap, 1110 + MAX310X_BRGDIVLSB_REG + offs, &ret); 1111 + } while (ret != 0x01); 1112 + 1113 + regmap_update_bits(s->regmap, MAX310X_MODE1_REG + offs, 1114 + MAX310X_MODE1_AUTOSLEEP_BIT, 1115 + MAX310X_MODE1_AUTOSLEEP_BIT); 1182 1116 } 1117 + 1118 + uartclk = max310x_set_ref_clk(s); 1119 + dev_dbg(dev, "Reference clock set to %i Hz\n", uartclk); 1183 1120 1184 1121 /* Register UART driver */ 1185 1122 s->uart.owner = THIS_MODULE; 1186 - s->uart.driver_name = dev_name(dev); 1187 1123 s->uart.dev_name = "ttyMAX"; 1188 1124 s->uart.major = MAX310X_MAJOR; 1189 1125 s->uart.minor = MAX310X_MINOR; 1126 + s->uart.nr = devtype->nr; 1190 1127 ret = uart_register_driver(&s->uart); 1191 1128 if (ret) { 1192 1129 dev_err(dev, "Registering UART driver failed\n"); 1193 - goto err_out; 1130 + return ret; 1194 1131 } 1195 1132 1196 - /* Initialize workqueue for start TX */ 1197 - s->wq = create_freezable_workqueue(dev_name(dev)); 1198 - INIT_WORK(&s->tx_work, max310x_wq_proc); 1199 - 1200 - /* Initialize UART port data */ 1201 - s->port.line = 0; 1202 - s->port.dev = dev; 1203 - s->port.irq = spi->irq; 1204 - s->port.type = PORT_MAX310X; 1205 - s->port.fifosize = MAX310X_FIFO_SIZE; 1206 - s->port.flags = UPF_SKIP_TEST | UPF_FIXED_TYPE; 1207 - s->port.iotype = UPIO_PORT; 1208 - s->port.membase = (void __iomem *)0xffffffff; /* Bogus value */ 1209 - s->port.uartclk = s->uartclk; 1210 - s->port.ops = &max310x_ops; 1211 - uart_add_one_port(&s->uart, &s->port); 1133 + for (i = 0; i < devtype->nr; i++) { 1134 + /* Initialize port data */ 1135 + s->p[i].port.line = i; 1136 + s->p[i].port.dev = dev; 1137 + s->p[i].port.irq = irq; 1138 + s->p[i].port.type = PORT_MAX310X; 1139 + s->p[i].port.fifosize = MAX310X_FIFO_SIZE; 1140 + 
s->p[i].port.flags = UPF_SKIP_TEST | UPF_FIXED_TYPE | 1141 + UPF_LOW_LATENCY; 1142 + s->p[i].port.iotype = UPIO_PORT; 1143 + s->p[i].port.iobase = i * 0x20; 1144 + s->p[i].port.membase = (void __iomem *)~0; 1145 + s->p[i].port.uartclk = uartclk; 1146 + s->p[i].port.ops = &max310x_ops; 1147 + /* Disable all interrupts */ 1148 + max310x_port_write(&s->p[i].port, MAX310X_IRQEN_REG, 0); 1149 + /* Clear IRQ status register */ 1150 + max310x_port_read(&s->p[i].port, MAX310X_IRQSTS_REG); 1151 + /* Enable IRQ pin */ 1152 + max310x_port_update(&s->p[i].port, MAX310X_MODE1_REG, 1153 + MAX310X_MODE1_IRQSEL_BIT, 1154 + MAX310X_MODE1_IRQSEL_BIT); 1155 + /* Initialize queue for start TX */ 1156 + INIT_WORK(&s->p[i].tx_work, max310x_wq_proc); 1157 + /* Register port */ 1158 + uart_add_one_port(&s->uart, &s->p[i].port); 1159 + /* Go to suspend mode */ 1160 + devtype->power(&s->p[i].port, 0); 1161 + } 1212 1162 1213 1163 #ifdef CONFIG_GPIOLIB 1214 1164 /* Setup GPIO cotroller */ 1215 - if (pdata->gpio_base) { 1165 + if (s->pdata->gpio_base) { 1216 1166 s->gpio.owner = THIS_MODULE; 1217 1167 s->gpio.dev = dev; 1218 1168 s->gpio.label = dev_name(dev); ··· 1211 1179 s->gpio.get = max310x_gpio_get; 1212 1180 s->gpio.direction_output= max310x_gpio_direction_output; 1213 1181 s->gpio.set = max310x_gpio_set; 1214 - s->gpio.base = pdata->gpio_base; 1215 - s->gpio.ngpio = s->nr_gpio; 1182 + s->gpio.base = s->pdata->gpio_base; 1183 + s->gpio.ngpio = devtype->nr * 4; 1216 1184 s->gpio.can_sleep = 1; 1217 - if (gpiochip_add(&s->gpio)) { 1218 - /* Indicate that we should not call gpiochip_remove */ 1219 - s->gpio.base = 0; 1220 - } 1185 + if (!gpiochip_add(&s->gpio)) 1186 + s->gpio_used = 1; 1221 1187 } else 1222 1188 dev_info(dev, "GPIO support not enabled\n"); 1223 1189 #endif 1224 1190 1225 - /* Go to suspend mode */ 1226 - if (pdata->suspend) 1227 - pdata->suspend(1); 1191 + /* Setup interrupt */ 1192 + ret = devm_request_threaded_irq(dev, irq, NULL, max310x_ist, 1193 + IRQF_TRIGGER_FALLING 
| IRQF_ONESHOT, 1194 + dev_name(dev), s); 1195 + if (ret) { 1196 + dev_err(dev, "Unable to reguest IRQ %i\n", irq); 1197 + #ifdef CONFIG_GPIOLIB 1198 + if (s->gpio_used) 1199 + WARN_ON(gpiochip_remove(&s->gpio)); 1200 + #endif 1201 + } 1228 1202 1229 - return 0; 1203 + return ret; 1230 1204 1231 1205 err_freq: 1232 1206 dev_err(dev, "Frequency parameter incorrect\n"); 1233 - ret = -EINVAL; 1234 - 1235 - err_out: 1236 - dev_set_drvdata(dev, NULL); 1237 - 1238 - return ret; 1207 + return -EINVAL; 1239 1208 } 1240 1209 1241 - static int max310x_remove(struct spi_device *spi) 1210 + static int max310x_remove(struct device *dev) 1242 1211 { 1243 - struct device *dev = &spi->dev; 1244 1212 struct max310x_port *s = dev_get_drvdata(dev); 1245 - int ret = 0; 1213 + int i, ret = 0; 1246 1214 1247 - dev_dbg(dev, "Removing port\n"); 1248 - 1249 - devm_free_irq(dev, s->port.irq, s); 1250 - 1251 - destroy_workqueue(s->wq); 1252 - 1253 - uart_remove_one_port(&s->uart, &s->port); 1215 + for (i = 0; i < s->uart.nr; i++) { 1216 + cancel_work_sync(&s->p[i].tx_work); 1217 + uart_remove_one_port(&s->uart, &s->p[i].port); 1218 + s->devtype->power(&s->p[i].port, 0); 1219 + } 1254 1220 1255 1221 uart_unregister_driver(&s->uart); 1256 1222 1257 1223 #ifdef CONFIG_GPIOLIB 1258 - if (s->pdata->gpio_base) { 1224 + if (s->gpio_used) 1259 1225 ret = gpiochip_remove(&s->gpio); 1260 - if (ret) 1261 - dev_err(dev, "Failed to remove gpio chip: %d\n", ret); 1262 - } 1263 1226 #endif 1264 1227 1265 - dev_set_drvdata(dev, NULL); 1266 - 1267 - if (s->pdata->suspend) 1268 - s->pdata->suspend(1); 1269 1228 if (s->pdata->exit) 1270 1229 s->pdata->exit(); 1271 1230 1272 1231 return ret; 1273 1232 } 1274 1233 1234 + #ifdef CONFIG_SPI_MASTER 1235 + static int max310x_spi_probe(struct spi_device *spi) 1236 + { 1237 + struct max310x_devtype *devtype = 1238 + (struct max310x_devtype *)spi_get_device_id(spi)->driver_data; 1239 + int ret; 1240 + 1241 + /* Setup SPI bus */ 1242 + spi->bits_per_word = 8; 1243 + 
spi->mode = spi->mode ? : SPI_MODE_0; 1244 + spi->max_speed_hz = spi->max_speed_hz ? : 26000000; 1245 + ret = spi_setup(spi); 1246 + if (ret) { 1247 + dev_err(&spi->dev, "SPI setup failed\n"); 1248 + return ret; 1249 + } 1250 + 1251 + return max310x_probe(&spi->dev, 1, devtype, spi->irq); 1252 + } 1253 + 1254 + static int max310x_spi_remove(struct spi_device *spi) 1255 + { 1256 + return max310x_remove(&spi->dev); 1257 + } 1258 + 1259 + static SIMPLE_DEV_PM_OPS(max310x_pm_ops, max310x_suspend, max310x_resume); 1260 + 1275 1261 static const struct spi_device_id max310x_id_table[] = { 1276 - { "max3107", MAX310X_TYPE_MAX3107 }, 1277 - { "max3108", MAX310X_TYPE_MAX3108 }, 1262 + { "max3107", (kernel_ulong_t)&max3107_devtype, }, 1263 + { "max3108", (kernel_ulong_t)&max3108_devtype, }, 1264 + { "max3109", (kernel_ulong_t)&max3109_devtype, }, 1265 + { "max14830", (kernel_ulong_t)&max14830_devtype, }, 1278 1266 { } 1279 1267 }; 1280 1268 MODULE_DEVICE_TABLE(spi, max310x_id_table); 1281 1269 1282 - static struct spi_driver max310x_driver = { 1270 + static struct spi_driver max310x_uart_driver = { 1283 1271 .driver = { 1284 - .name = "max310x", 1272 + .name = MAX310X_NAME, 1285 1273 .owner = THIS_MODULE, 1286 - .pm = MAX310X_PM_OPS, 1274 + .pm = &max310x_pm_ops, 1287 1275 }, 1288 - .probe = max310x_probe, 1289 - .remove = max310x_remove, 1276 + .probe = max310x_spi_probe, 1277 + .remove = max310x_spi_remove, 1290 1278 .id_table = max310x_id_table, 1291 1279 }; 1292 - module_spi_driver(max310x_driver); 1280 + module_spi_driver(max310x_uart_driver); 1281 + #endif 1293 1282 1294 - MODULE_LICENSE("GPL v2"); 1283 + MODULE_LICENSE("GPL"); 1295 1284 MODULE_AUTHOR("Alexander Shiyan <shc_work@mail.ru>"); 1296 1285 MODULE_DESCRIPTION("MAX310X serial driver");
+4 -1
drivers/tty/serial/mcf.c
··· 24 24 #include <linux/serial_core.h> 25 25 #include <linux/io.h> 26 26 #include <linux/uaccess.h> 27 + #include <linux/platform_device.h> 27 28 #include <asm/coldfire.h> 28 29 #include <asm/mcfsim.h> 29 30 #include <asm/mcfuart.h> ··· 325 324 uart_insert_char(port, status, MCFUART_USR_RXOVERRUN, ch, flag); 326 325 } 327 326 327 + spin_unlock(&port->lock); 328 328 tty_flip_buffer_push(&port->state->port); 329 + spin_lock(&port->lock); 329 330 } 330 331 331 332 /****************************************************************************/ ··· 647 644 648 645 static int mcf_probe(struct platform_device *pdev) 649 646 { 650 - struct mcf_platform_uart *platp = pdev->dev.platform_data; 647 + struct mcf_platform_uart *platp = dev_get_platdata(&pdev->dev); 651 648 struct uart_port *port; 652 649 int i; 653 650
+10 -4
drivers/tty/serial/mfd.c
··· 386 386 387 387 /* This is always called in spinlock protected mode, so 388 388 * modify timeout timer is safe here */ 389 - void hsu_dma_rx(struct uart_hsu_port *up, u32 int_sts) 389 + void hsu_dma_rx(struct uart_hsu_port *up, u32 int_sts, unsigned long *flags) 390 390 { 391 391 struct hsu_dma_buffer *dbuf = &up->rxbuf; 392 392 struct hsu_dma_chan *chan = up->rxc; ··· 438 438 | (0x1 << 16) 439 439 | (0x1 << 24) /* timeout bit, see HSU Errata 1 */ 440 440 ); 441 + spin_unlock_irqrestore(&up->port.lock, *flags); 441 442 tty_flip_buffer_push(tport); 443 + spin_lock_irqsave(&up->port.lock, *flags); 442 444 443 445 chan_writel(chan, HSU_CH_CR, 0x3); 444 446 ··· 461 459 } 462 460 } 463 461 464 - static inline void receive_chars(struct uart_hsu_port *up, int *status) 462 + static inline void receive_chars(struct uart_hsu_port *up, int *status, 463 + unsigned long *flags) 465 464 { 466 465 unsigned int ch, flag; 467 466 unsigned int max_count = 256; ··· 522 519 ignore_char: 523 520 *status = serial_in(up, UART_LSR); 524 521 } while ((*status & UART_LSR_DR) && max_count--); 522 + 523 + spin_unlock_irqrestore(&up->port.lock, *flags); 525 524 tty_flip_buffer_push(&up->port.state->port); 525 + spin_lock_irqsave(&up->port.lock, *flags); 526 526 } 527 527 528 528 static void transmit_chars(struct uart_hsu_port *up) ··· 619 613 620 614 lsr = serial_in(up, UART_LSR); 621 615 if (lsr & UART_LSR_DR) 622 - receive_chars(up, &lsr); 616 + receive_chars(up, &lsr, &flags); 623 617 check_modem_status(up); 624 618 625 619 /* lsr will be renewed during the receive_chars */ ··· 649 643 650 644 /* Rx channel */ 651 645 if (chan->dirt == DMA_FROM_DEVICE) 652 - hsu_dma_rx(up, int_sts); 646 + hsu_dma_rx(up, int_sts, &flags); 653 647 654 648 /* Tx channel */ 655 649 if (chan->dirt == DMA_TO_DEVICE) {
+10 -5
drivers/tty/serial/mpsc.c
··· 934 934 ****************************************************************************** 935 935 */ 936 936 937 - static int mpsc_rx_intr(struct mpsc_port_info *pi) 937 + static int mpsc_rx_intr(struct mpsc_port_info *pi, unsigned long *flags) 938 938 { 939 939 struct mpsc_rx_desc *rxre; 940 940 struct tty_port *port = &pi->port.state->port; ··· 969 969 #endif 970 970 /* Following use of tty struct directly is deprecated */ 971 971 if (tty_buffer_request_room(port, bytes_in) < bytes_in) { 972 - if (port->low_latency) 972 + if (port->low_latency) { 973 + spin_unlock_irqrestore(&pi->port.lock, *flags); 973 974 tty_flip_buffer_push(port); 975 + spin_lock_irqsave(&pi->port.lock, *flags); 976 + } 974 977 /* 975 978 * If this failed then we will throw away the bytes 976 979 * but must do so to clear interrupts. ··· 1083 1080 if ((readl(pi->sdma_base + SDMA_SDCM) & SDMA_SDCM_ERD) == 0) 1084 1081 mpsc_start_rx(pi); 1085 1082 1083 + spin_unlock_irqrestore(&pi->port.lock, *flags); 1086 1084 tty_flip_buffer_push(port); 1085 + spin_lock_irqsave(&pi->port.lock, *flags); 1087 1086 return rc; 1088 1087 } 1089 1088 ··· 1227 1222 1228 1223 spin_lock_irqsave(&pi->port.lock, iflags); 1229 1224 mpsc_sdma_intr_ack(pi); 1230 - if (mpsc_rx_intr(pi)) 1225 + if (mpsc_rx_intr(pi, &iflags)) 1231 1226 rc = IRQ_HANDLED; 1232 1227 if (mpsc_tx_intr(pi)) 1233 1228 rc = IRQ_HANDLED; ··· 1889 1884 if (dev->id == 0) { 1890 1885 if (!(rc = mpsc_shared_map_regs(dev))) { 1891 1886 pdata = (struct mpsc_shared_pdata *) 1892 - dev->dev.platform_data; 1887 + dev_get_platdata(&dev->dev); 1893 1888 1894 1889 mpsc_shared_regs.MPSC_MRR_m = pdata->mrr_val; 1895 1890 mpsc_shared_regs.MPSC_RCRR_m= pdata->rcrr_val; ··· 2030 2025 { 2031 2026 struct mpsc_pdata *pdata; 2032 2027 2033 - pdata = (struct mpsc_pdata *)pd->dev.platform_data; 2028 + pdata = (struct mpsc_pdata *)dev_get_platdata(&pd->dev); 2034 2029 2035 2030 pi->port.uartclk = pdata->brg_clk_freq; 2036 2031 pi->port.iotype = UPIO_MEM;
+2 -2
drivers/tty/serial/mrst_max3110.c
··· 713 713 { 714 714 } 715 715 716 - struct uart_ops serial_m3110_ops = { 716 + static struct uart_ops serial_m3110_ops = { 717 717 .tx_empty = serial_m3110_tx_empty, 718 718 .set_mctrl = serial_m3110_set_mctrl, 719 719 .get_mctrl = serial_m3110_get_mctrl, ··· 844 844 pmax = max; 845 845 846 846 /* Give membase a psudo value to pass serial_core's check */ 847 - max->port.membase = (void *)0xff110000; 847 + max->port.membase = (unsigned char __iomem *)0xff110000; 848 848 uart_add_one_port(&serial_m3110_reg, &max->port); 849 849 850 850 return 0;
+158 -118
drivers/tty/serial/msm_serial.c
··· 45 45 struct clk *clk; 46 46 struct clk *pclk; 47 47 unsigned int imr; 48 - unsigned int *gsbi_base; 48 + void __iomem *gsbi_base; 49 49 int is_uartdm; 50 50 unsigned int old_snap_state; 51 51 }; 52 52 53 - static inline void wait_for_xmitr(struct uart_port *port, int bits) 53 + static inline void wait_for_xmitr(struct uart_port *port) 54 54 { 55 - if (!(msm_read(port, UART_SR) & UART_SR_TX_EMPTY)) 56 - while ((msm_read(port, UART_ISR) & bits) != bits) 57 - cpu_relax(); 55 + while (!(msm_read(port, UART_SR) & UART_SR_TX_EMPTY)) { 56 + if (msm_read(port, UART_ISR) & UART_ISR_TX_READY) 57 + break; 58 + udelay(1); 59 + } 60 + msm_write(port, UART_CR_CMD_RESET_TX_READY, UART_CR); 58 61 } 59 62 60 63 static void msm_stop_tx(struct uart_port *port) ··· 140 137 count -= 4; 141 138 } 142 139 140 + spin_unlock(&port->lock); 143 141 tty_flip_buffer_push(tport); 142 + spin_lock(&port->lock); 143 + 144 144 if (misr & (UART_IMR_RXSTALE)) 145 145 msm_write(port, UART_CR_CMD_RESET_STALE_INT, UART_CR); 146 146 msm_write(port, 0xFFFFFF, UARTDM_DMRX); ··· 195 189 tty_insert_flip_char(tport, c, flag); 196 190 } 197 191 192 + spin_unlock(&port->lock); 198 193 tty_flip_buffer_push(tport); 194 + spin_lock(&port->lock); 199 195 } 200 196 201 - static void reset_dm_count(struct uart_port *port) 197 + static void reset_dm_count(struct uart_port *port, int count) 202 198 { 203 - wait_for_xmitr(port, UART_ISR_TX_READY); 204 - msm_write(port, 1, UARTDM_NCF_TX); 199 + wait_for_xmitr(port); 200 + msm_write(port, count, UARTDM_NCF_TX); 201 + msm_read(port, UARTDM_NCF_TX); 205 202 } 206 203 207 204 static void handle_tx(struct uart_port *port) 208 205 { 209 206 struct circ_buf *xmit = &port->state->xmit; 210 207 struct msm_port *msm_port = UART_TO_MSM(port); 211 - int sent_tx; 208 + unsigned int tx_count, num_chars; 209 + unsigned int tf_pointer = 0; 210 + 211 + tx_count = uart_circ_chars_pending(xmit); 212 + tx_count = min3(tx_count, (unsigned int)UART_XMIT_SIZE - xmit->tail, 213 + 
port->fifosize); 212 214 213 215 if (port->x_char) { 214 216 if (msm_port->is_uartdm) 215 - reset_dm_count(port); 217 + reset_dm_count(port, tx_count + 1); 216 218 217 219 msm_write(port, port->x_char, 218 220 msm_port->is_uartdm ? UARTDM_TF : UART_TF); 219 221 port->icount.tx++; 220 222 port->x_char = 0; 223 + } else if (tx_count && msm_port->is_uartdm) { 224 + reset_dm_count(port, tx_count); 221 225 } 222 226 223 - if (msm_port->is_uartdm) 224 - reset_dm_count(port); 227 + while (tf_pointer < tx_count) { 228 + int i; 229 + char buf[4] = { 0 }; 230 + unsigned int *bf = (unsigned int *)&buf; 225 231 226 - while (msm_read(port, UART_SR) & UART_SR_TX_READY) { 227 - if (uart_circ_empty(xmit)) { 228 - /* disable tx interrupts */ 229 - msm_port->imr &= ~UART_IMR_TXLEV; 230 - msm_write(port, msm_port->imr, UART_IMR); 232 + if (!(msm_read(port, UART_SR) & UART_SR_TX_READY)) 231 233 break; 232 - } 233 - msm_write(port, xmit->buf[xmit->tail], 234 - msm_port->is_uartdm ? UARTDM_TF : UART_TF); 235 234 236 235 if (msm_port->is_uartdm) 237 - reset_dm_count(port); 236 + num_chars = min(tx_count - tf_pointer, 237 + (unsigned int)sizeof(buf)); 238 + else 239 + num_chars = 1; 238 240 239 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 240 - port->icount.tx++; 241 - sent_tx = 1; 241 + for (i = 0; i < num_chars; i++) { 242 + buf[i] = xmit->buf[xmit->tail + i]; 243 + port->icount.tx++; 244 + } 245 + 246 + msm_write(port, *bf, msm_port->is_uartdm ? 
UARTDM_TF : UART_TF); 247 + xmit->tail = (xmit->tail + num_chars) & (UART_XMIT_SIZE - 1); 248 + tf_pointer += num_chars; 242 249 } 250 + 251 + /* disable tx interrupts if nothing more to send */ 252 + if (uart_circ_empty(xmit)) 253 + msm_stop_tx(port); 243 254 244 255 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 245 256 uart_write_wakeup(port); ··· 318 295 msm_write(port, UART_CR_CMD_SET_RFR, UART_CR); 319 296 } 320 297 321 - void msm_set_mctrl(struct uart_port *port, unsigned int mctrl) 298 + static void msm_set_mctrl(struct uart_port *port, unsigned int mctrl) 322 299 { 323 300 unsigned int mr; 324 301 mr = msm_read(port, UART_MR1); ··· 341 318 msm_write(port, UART_CR_CMD_STOP_BREAK, UART_CR); 342 319 } 343 320 321 + struct msm_baud_map { 322 + u16 divisor; 323 + u8 code; 324 + u8 rxstale; 325 + }; 326 + 327 + static const struct msm_baud_map * 328 + msm_find_best_baud(struct uart_port *port, unsigned int baud) 329 + { 330 + unsigned int i, divisor; 331 + const struct msm_baud_map *entry; 332 + static const struct msm_baud_map table[] = { 333 + { 1536, 0x00, 1 }, 334 + { 768, 0x11, 1 }, 335 + { 384, 0x22, 1 }, 336 + { 192, 0x33, 1 }, 337 + { 96, 0x44, 1 }, 338 + { 48, 0x55, 1 }, 339 + { 32, 0x66, 1 }, 340 + { 24, 0x77, 1 }, 341 + { 16, 0x88, 1 }, 342 + { 12, 0x99, 6 }, 343 + { 8, 0xaa, 6 }, 344 + { 6, 0xbb, 6 }, 345 + { 4, 0xcc, 6 }, 346 + { 3, 0xdd, 8 }, 347 + { 2, 0xee, 16 }, 348 + { 1, 0xff, 31 }, 349 + }; 350 + 351 + divisor = uart_get_divisor(port, baud); 352 + 353 + for (i = 0, entry = table; i < ARRAY_SIZE(table); i++, entry++) 354 + if (entry->divisor <= divisor) 355 + break; 356 + 357 + return entry; /* Default to smallest divider */ 358 + } 359 + 344 360 static int msm_set_baud_rate(struct uart_port *port, unsigned int baud) 345 361 { 346 - unsigned int baud_code, rxstale, watermark; 362 + unsigned int rxstale, watermark; 347 363 struct msm_port *msm_port = UART_TO_MSM(port); 364 + const struct msm_baud_map *entry; 348 365 349 - switch (baud) { 350 
- case 300: 351 - baud_code = UART_CSR_300; 352 - rxstale = 1; 353 - break; 354 - case 600: 355 - baud_code = UART_CSR_600; 356 - rxstale = 1; 357 - break; 358 - case 1200: 359 - baud_code = UART_CSR_1200; 360 - rxstale = 1; 361 - break; 362 - case 2400: 363 - baud_code = UART_CSR_2400; 364 - rxstale = 1; 365 - break; 366 - case 4800: 367 - baud_code = UART_CSR_4800; 368 - rxstale = 1; 369 - break; 370 - case 9600: 371 - baud_code = UART_CSR_9600; 372 - rxstale = 2; 373 - break; 374 - case 14400: 375 - baud_code = UART_CSR_14400; 376 - rxstale = 3; 377 - break; 378 - case 19200: 379 - baud_code = UART_CSR_19200; 380 - rxstale = 4; 381 - break; 382 - case 28800: 383 - baud_code = UART_CSR_28800; 384 - rxstale = 6; 385 - break; 386 - case 38400: 387 - baud_code = UART_CSR_38400; 388 - rxstale = 8; 389 - break; 390 - case 57600: 391 - baud_code = UART_CSR_57600; 392 - rxstale = 16; 393 - break; 394 - case 115200: 395 - default: 396 - baud_code = UART_CSR_115200; 397 - baud = 115200; 398 - rxstale = 31; 399 - break; 400 - } 366 + entry = msm_find_best_baud(port, baud); 401 367 402 368 if (msm_port->is_uartdm) 403 369 msm_write(port, UART_CR_CMD_RESET_RX, UART_CR); 404 370 405 - msm_write(port, baud_code, UART_CSR); 371 + msm_write(port, entry->code, UART_CSR); 406 372 407 373 /* RX stale watermark */ 374 + rxstale = entry->rxstale; 408 375 watermark = UART_IPR_STALE_LSB & rxstale; 409 376 watermark |= UART_IPR_RXSTALE_LAST; 410 377 watermark |= UART_IPR_STALE_TIMEOUT_MSB & (rxstale << 2); ··· 422 409 struct msm_port *msm_port = UART_TO_MSM(port); 423 410 424 411 clk_prepare_enable(msm_port->clk); 425 - if (!IS_ERR(msm_port->pclk)) 426 - clk_prepare_enable(msm_port->pclk); 412 + clk_prepare_enable(msm_port->pclk); 427 413 msm_serial_set_mnd_regs(port); 428 414 } 429 415 ··· 601 589 port->membase = NULL; 602 590 603 591 if (msm_port->gsbi_base) { 604 - iowrite32(GSBI_PROTOCOL_IDLE, msm_port->gsbi_base + 605 - GSBI_CONTROL); 592 + writel_relaxed(GSBI_PROTOCOL_IDLE, 593 + 
msm_port->gsbi_base + GSBI_CONTROL); 606 594 607 - gsbi_resource = platform_get_resource(pdev, 608 - IORESOURCE_MEM, 1); 609 - 595 + gsbi_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1); 610 596 if (unlikely(!gsbi_resource)) 611 597 return; 612 598 ··· 647 637 if (!request_mem_region(gsbi_resource->start, size, 648 638 "msm_serial")) { 649 639 ret = -EBUSY; 650 - goto fail_release_port; 640 + goto fail_release_port_membase; 651 641 } 652 642 653 643 msm_port->gsbi_base = ioremap(gsbi_resource->start, size); ··· 661 651 662 652 fail_release_gsbi: 663 653 release_mem_region(gsbi_resource->start, size); 654 + fail_release_port_membase: 655 + iounmap(port->membase); 664 656 fail_release_port: 665 657 release_mem_region(port->mapbase, size); 666 658 return ret; ··· 678 666 if (ret) 679 667 return; 680 668 } 681 - 682 - if (msm_port->is_uartdm) 683 - iowrite32(GSBI_PROTOCOL_UART, msm_port->gsbi_base + 684 - GSBI_CONTROL); 669 + if (msm_port->gsbi_base) 670 + writel_relaxed(GSBI_PROTOCOL_UART, 671 + msm_port->gsbi_base + GSBI_CONTROL); 685 672 } 686 673 687 674 static int msm_verify_port(struct uart_port *port, struct serial_struct *ser) ··· 700 689 switch (state) { 701 690 case 0: 702 691 clk_prepare_enable(msm_port->clk); 703 - if (!IS_ERR(msm_port->pclk)) 704 - clk_prepare_enable(msm_port->pclk); 692 + clk_prepare_enable(msm_port->pclk); 705 693 break; 706 694 case 3: 707 695 clk_disable_unprepare(msm_port->clk); 708 - if (!IS_ERR(msm_port->pclk)) 709 - clk_disable_unprepare(msm_port->pclk); 696 + clk_disable_unprepare(msm_port->pclk); 710 697 break; 711 698 default: 712 699 printk(KERN_ERR "msm_serial: Unknown PM state %d\n", state); ··· 769 760 } 770 761 771 762 #ifdef CONFIG_SERIAL_MSM_CONSOLE 772 - 773 - static void msm_console_putchar(struct uart_port *port, int c) 774 - { 775 - struct msm_port *msm_port = UART_TO_MSM(port); 776 - 777 - if (msm_port->is_uartdm) 778 - reset_dm_count(port); 779 - 780 - while (!(msm_read(port, UART_SR) & UART_SR_TX_READY)) 
781 - ; 782 - msm_write(port, c, msm_port->is_uartdm ? UARTDM_TF : UART_TF); 783 - } 784 - 785 763 static void msm_console_write(struct console *co, const char *s, 786 764 unsigned int count) 787 765 { 766 + int i; 788 767 struct uart_port *port; 789 768 struct msm_port *msm_port; 769 + int num_newlines = 0; 770 + bool replaced = false; 790 771 791 772 BUG_ON(co->index < 0 || co->index >= UART_NR); 792 773 793 774 port = get_port_from_line(co->index); 794 775 msm_port = UART_TO_MSM(port); 795 776 777 + /* Account for newlines that will get a carriage return added */ 778 + for (i = 0; i < count; i++) 779 + if (s[i] == '\n') 780 + num_newlines++; 781 + count += num_newlines; 782 + 796 783 spin_lock(&port->lock); 797 - uart_console_write(port, s, count, msm_console_putchar); 784 + if (msm_port->is_uartdm) 785 + reset_dm_count(port, count); 786 + 787 + i = 0; 788 + while (i < count) { 789 + int j; 790 + unsigned int num_chars; 791 + char buf[4] = { 0 }; 792 + unsigned int *bf = (unsigned int *)&buf; 793 + 794 + if (msm_port->is_uartdm) 795 + num_chars = min(count - i, (unsigned int)sizeof(buf)); 796 + else 797 + num_chars = 1; 798 + 799 + for (j = 0; j < num_chars; j++) { 800 + char c = *s; 801 + 802 + if (c == '\n' && !replaced) { 803 + buf[j] = '\r'; 804 + j++; 805 + replaced = true; 806 + } 807 + if (j < num_chars) { 808 + buf[j] = c; 809 + s++; 810 + replaced = false; 811 + } 812 + } 813 + 814 + while (!(msm_read(port, UART_SR) & UART_SR_TX_READY)) 815 + cpu_relax(); 816 + 817 + msm_write(port, *bf, msm_port->is_uartdm ? 
UARTDM_TF : UART_TF); 818 + i += num_chars; 819 + } 798 820 spin_unlock(&port->lock); 799 821 } 800 822 ··· 899 859 900 860 static atomic_t msm_uart_next_id = ATOMIC_INIT(0); 901 861 862 + static const struct of_device_id msm_uartdm_table[] = { 863 + { .compatible = "qcom,msm-uartdm" }, 864 + { } 865 + }; 866 + 902 867 static int __init msm_serial_probe(struct platform_device *pdev) 903 868 { 904 869 struct msm_port *msm_port; ··· 923 878 port->dev = &pdev->dev; 924 879 msm_port = UART_TO_MSM(port); 925 880 926 - if (platform_get_resource(pdev, IORESOURCE_MEM, 1)) 881 + if (of_match_device(msm_uartdm_table, &pdev->dev)) 927 882 msm_port->is_uartdm = 1; 928 883 else 929 884 msm_port->is_uartdm = 0; 930 885 931 - if (msm_port->is_uartdm) { 932 - msm_port->clk = devm_clk_get(&pdev->dev, "gsbi_uart_clk"); 933 - msm_port->pclk = devm_clk_get(&pdev->dev, "gsbi_pclk"); 934 - } else { 935 - msm_port->clk = devm_clk_get(&pdev->dev, "uart_clk"); 936 - msm_port->pclk = ERR_PTR(-ENOENT); 937 - } 938 - 886 + msm_port->clk = devm_clk_get(&pdev->dev, "core"); 939 887 if (IS_ERR(msm_port->clk)) 940 888 return PTR_ERR(msm_port->clk); 941 889 942 890 if (msm_port->is_uartdm) { 891 + msm_port->pclk = devm_clk_get(&pdev->dev, "iface"); 943 892 if (IS_ERR(msm_port->pclk)) 944 893 return PTR_ERR(msm_port->pclk); 945 894 ··· 970 931 971 932 static struct of_device_id msm_match_table[] = { 972 933 { .compatible = "qcom,msm-uart" }, 934 + { .compatible = "qcom,msm-uartdm" }, 973 935 {} 974 936 }; 975 937
+5 -14
drivers/tty/serial/msm_serial.h
··· 38 38 #define UART_MR2_PARITY_MODE_SPACE 0x3 39 39 #define UART_MR2_PARITY_MODE 0x3 40 40 41 - #define UART_CSR 0x0008 42 - #define UART_CSR_115200 0xFF 43 - #define UART_CSR_57600 0xEE 44 - #define UART_CSR_38400 0xDD 45 - #define UART_CSR_28800 0xCC 46 - #define UART_CSR_19200 0xBB 47 - #define UART_CSR_14400 0xAA 48 - #define UART_CSR_9600 0x99 49 - #define UART_CSR_4800 0x77 50 - #define UART_CSR_2400 0x55 51 - #define UART_CSR_1200 0x44 52 - #define UART_CSR_600 0x33 53 - #define UART_CSR_300 0x22 41 + #define UART_CSR 0x0008 54 42 55 43 #define UART_TF 0x000C 56 44 #define UARTDM_TF 0x0070 ··· 59 71 #define UART_CR_CMD_RESET_RFR (14 << 4) 60 72 #define UART_CR_CMD_PROTECTION_EN (16 << 4) 61 73 #define UART_CR_CMD_STALE_EVENT_ENABLE (80 << 4) 74 + #define UART_CR_CMD_RESET_TX_READY (3 << 8) 62 75 #define UART_CR_TX_DISABLE (1 << 3) 63 76 #define UART_CR_TX_ENABLE (1 << 2) 64 77 #define UART_CR_RX_DISABLE (1 << 1) ··· 140 151 msm_write(port, 0xF1, UART_NREG); 141 152 msm_write(port, 0x0F, UART_DREG); 142 153 msm_write(port, 0x1A, UART_MNDREG); 154 + port->uartclk = 1843200; 143 155 } 144 156 145 157 /* ··· 152 162 msm_write(port, 0xF6, UART_NREG); 153 163 msm_write(port, 0x0F, UART_DREG); 154 164 msm_write(port, 0x0A, UART_MNDREG); 165 + port->uartclk = 1843200; 155 166 } 156 167 157 168 static inline ··· 160 169 { 161 170 if (port->uartclk == 19200000) 162 171 msm_serial_set_mnd_regs_tcxo(port); 163 - else 172 + else if (port->uartclk == 4800000) 164 173 msm_serial_set_mnd_regs_tcxoby4(port); 165 174 } 166 175
+1 -1
drivers/tty/serial/msm_serial_hs.c
··· 1618 1618 struct msm_hs_port *msm_uport; 1619 1619 struct resource *resource; 1620 1620 const struct msm_serial_hs_platform_data *pdata = 1621 - pdev->dev.platform_data; 1621 + dev_get_platdata(&pdev->dev); 1622 1622 1623 1623 if (pdev->id < 0 || pdev->id >= UARTDM_NR) { 1624 1624 printk(KERN_ERR "Invalid plaform device ID = %d\n", pdev->id);
+5 -11
drivers/tty/serial/mxs-auart.c
··· 32 32 #include <linux/clk.h> 33 33 #include <linux/delay.h> 34 34 #include <linux/io.h> 35 - #include <linux/pinctrl/consumer.h> 36 35 #include <linux/of_device.h> 37 36 #include <linux/dma-mapping.h> 38 37 #include <linux/dmaengine.h> ··· 133 134 struct mxs_auart_port { 134 135 struct uart_port port; 135 136 136 - #define MXS_AUART_DMA_CONFIG 0x1 137 137 #define MXS_AUART_DMA_ENABLED 0x2 138 138 #define MXS_AUART_DMA_TX_SYNC 2 /* bit 2 */ 139 139 #define MXS_AUART_DMA_RX_READY 3 /* bit 3 */ 140 + #define MXS_AUART_RTSCTS 4 /* bit 4 */ 140 141 unsigned long flags; 141 142 unsigned int ctrl; 142 143 enum mxs_auart_type devtype; ··· 639 640 * we can only implement the DMA support for auart 640 641 * in mx28. 641 642 */ 642 - if (is_imx28_auart(s) && (s->flags & MXS_AUART_DMA_CONFIG)) { 643 + if (is_imx28_auart(s) 644 + && test_bit(MXS_AUART_RTSCTS, &s->flags)) { 643 645 if (!mxs_auart_dma_init(s)) 644 646 /* enable DMA tranfer */ 645 647 ctrl2 |= AUART_CTRL2_TXDMAE | AUART_CTRL2_RXDMAE ··· 1008 1008 } 1009 1009 s->port.line = ret; 1010 1010 1011 - s->flags |= MXS_AUART_DMA_CONFIG; 1011 + if (of_get_property(np, "fsl,uart-has-rtscts", NULL)) 1012 + set_bit(MXS_AUART_RTSCTS, &s->flags); 1012 1013 1013 1014 return 0; 1014 1015 } ··· 1022 1021 u32 version; 1023 1022 int ret = 0; 1024 1023 struct resource *r; 1025 - struct pinctrl *pinctrl; 1026 1024 1027 1025 s = kzalloc(sizeof(struct mxs_auart_port), GFP_KERNEL); 1028 1026 if (!s) { ··· 1034 1034 s->port.line = pdev->id < 0 ? 0 : pdev->id; 1035 1035 else if (ret < 0) 1036 1036 goto out_free; 1037 - 1038 - pinctrl = devm_pinctrl_get_select_default(&pdev->dev); 1039 - if (IS_ERR(pinctrl)) { 1040 - ret = PTR_ERR(pinctrl); 1041 - goto out_free; 1042 - } 1043 1037 1044 1038 if (of_id) { 1045 1039 pdev->id_entry = of_id->data;
+4 -4
drivers/tty/serial/netx-serial.c
··· 196 196 uart_write_wakeup(port); 197 197 } 198 198 199 - static void netx_rxint(struct uart_port *port) 199 + static void netx_rxint(struct uart_port *port, unsigned long *flags) 200 200 { 201 201 unsigned char rx, flg, status; 202 202 ··· 236 236 uart_insert_char(port, status, SR_OE, rx, flg); 237 237 } 238 238 239 + spin_unlock_irqrestore(&port->lock, *flags); 239 240 tty_flip_buffer_push(&port->state->port); 241 + spin_lock_irqsave(&port->lock, *flags); 240 242 } 241 243 242 244 static irqreturn_t netx_int(int irq, void *dev_id) ··· 252 250 status = readl(port->membase + UART_IIR) & IIR_MASK; 253 251 while (status) { 254 252 if (status & IIR_RIS) 255 - netx_rxint(port); 253 + netx_rxint(port, &flags); 256 254 if (status & IIR_TIS) 257 255 netx_txint(port); 258 256 if (status & IIR_MIS) { ··· 694 692 static int serial_netx_remove(struct platform_device *pdev) 695 693 { 696 694 struct netx_port *sport = platform_get_drvdata(pdev); 697 - 698 - platform_set_drvdata(pdev, NULL); 699 695 700 696 if (sport) 701 697 uart_remove_one_port(&netx_reg, &sport->port);
+3
drivers/tty/serial/nwpserial.c
··· 149 149 tty_insert_flip_char(port, ch, TTY_NORMAL); 150 150 } while (dcr_read(up->dcr_host, UART_LSR) & UART_LSR_DR); 151 151 152 + spin_unlock(&up->port.lock); 152 153 tty_flip_buffer_push(port); 154 + spin_lock(&up->port.lock); 155 + 153 156 ret = IRQ_HANDLED; 154 157 155 158 /* clear interrupt */
+199 -15
drivers/tty/serial/omap-serial.c
··· 40 40 #include <linux/pm_runtime.h> 41 41 #include <linux/of.h> 42 42 #include <linux/gpio.h> 43 - #include <linux/pinctrl/consumer.h> 43 + #include <linux/of_gpio.h> 44 44 #include <linux/platform_data/serial-omap.h> 45 + 46 + #include <dt-bindings/gpio/gpio.h> 45 47 46 48 #define OMAP_MAX_HSUART_PORTS 6 47 49 ··· 53 51 #define OMAP_UART_REV_46 0x0406 54 52 #define OMAP_UART_REV_52 0x0502 55 53 #define OMAP_UART_REV_63 0x0603 54 + 55 + #define OMAP_UART_TX_WAKEUP_EN BIT(7) 56 + 57 + /* Feature flags */ 58 + #define OMAP_UART_WER_HAS_TX_WAKEUP BIT(0) 56 59 57 60 #define UART_ERRATA_i202_MDR1_ACCESS BIT(0) 58 61 #define UART_ERRATA_i291_DMA_FORCEIDLE BIT(1) ··· 144 137 unsigned char dlh; 145 138 unsigned char mdr1; 146 139 unsigned char scr; 140 + unsigned char wer; 147 141 148 142 int use_dma; 149 143 /* ··· 159 151 int context_loss_cnt; 160 152 u32 errata; 161 153 u8 wakeups_enabled; 154 + u32 features; 162 155 163 156 int DTR_gpio; 164 157 int DTR_inverted; 165 158 int DTR_active; 166 159 160 + struct serial_rs485 rs485; 161 + int rts_gpio; 162 + 167 163 struct pm_qos_request pm_qos_request; 168 164 u32 latency; 169 165 u32 calc_latency; 170 166 struct work_struct qos_work; 171 - struct pinctrl *pins; 172 167 bool is_suspending; 173 168 }; 174 169 ··· 206 195 207 196 static int serial_omap_get_context_loss_count(struct uart_omap_port *up) 208 197 { 209 - struct omap_uart_port_info *pdata = up->dev->platform_data; 198 + struct omap_uart_port_info *pdata = dev_get_platdata(up->dev); 210 199 211 200 if (!pdata || !pdata->get_context_loss_count) 212 201 return -EINVAL; ··· 216 205 217 206 static void serial_omap_enable_wakeup(struct uart_omap_port *up, bool enable) 218 207 { 219 - struct omap_uart_port_info *pdata = up->dev->platform_data; 208 + struct omap_uart_port_info *pdata = dev_get_platdata(up->dev); 220 209 221 210 if (!pdata || !pdata->enable_wakeup) 222 211 return; ··· 283 272 static void serial_omap_stop_tx(struct uart_port *port) 284 273 { 285 274 
struct uart_omap_port *up = to_uart_omap_port(port); 275 + struct circ_buf *xmit = &up->port.state->xmit; 276 + int res; 286 277 287 278 pm_runtime_get_sync(up->dev); 279 + 280 + /* handle rs485 */ 281 + if (up->rs485.flags & SER_RS485_ENABLED) { 282 + /* do nothing if current tx not yet completed */ 283 + res = serial_in(up, UART_LSR) & UART_LSR_TEMT; 284 + if (!res) 285 + return; 286 + 287 + /* if there's no more data to send, turn off rts */ 288 + if (uart_circ_empty(xmit)) { 289 + /* if rts not already disabled */ 290 + res = (up->rs485.flags & SER_RS485_RTS_AFTER_SEND) ? 1 : 0; 291 + if (gpio_get_value(up->rts_gpio) != res) { 292 + if (up->rs485.delay_rts_after_send > 0) { 293 + mdelay(up->rs485.delay_rts_after_send); 294 + } 295 + gpio_set_value(up->rts_gpio, res); 296 + } 297 + } 298 + } 299 + 288 300 if (up->ier & UART_IER_THRI) { 289 301 up->ier &= ~UART_IER_THRI; 302 + serial_out(up, UART_IER, up->ier); 303 + } 304 + 305 + if ((up->rs485.flags & SER_RS485_ENABLED) && 306 + !(up->rs485.flags & SER_RS485_RX_DURING_TX)) { 307 + up->ier = UART_IER_RLSI | UART_IER_RDI; 290 308 serial_out(up, UART_IER, up->ier); 291 309 } 292 310 ··· 380 340 static void serial_omap_start_tx(struct uart_port *port) 381 341 { 382 342 struct uart_omap_port *up = to_uart_omap_port(port); 343 + int res; 383 344 384 345 pm_runtime_get_sync(up->dev); 346 + 347 + /* handle rs485 */ 348 + if (up->rs485.flags & SER_RS485_ENABLED) { 349 + /* if rts not already enabled */ 350 + res = (up->rs485.flags & SER_RS485_RTS_ON_SEND) ? 
1 : 0; 351 + if (gpio_get_value(up->rts_gpio) != res) { 352 + gpio_set_value(up->rts_gpio, res); 353 + if (up->rs485.delay_rts_before_send > 0) { 354 + mdelay(up->rs485.delay_rts_before_send); 355 + } 356 + } 357 + } 358 + 359 + if ((up->rs485.flags & SER_RS485_ENABLED) && 360 + !(up->rs485.flags & SER_RS485_RX_DURING_TX)) 361 + serial_omap_stop_rx(port); 362 + 385 363 serial_omap_enable_ier_thri(up); 386 364 pm_runtime_mark_last_busy(up->dev); 387 365 pm_runtime_put_autosuspend(up->dev); ··· 741 683 serial_out(up, UART_IER, up->ier); 742 684 743 685 /* Enable module level wake up */ 744 - serial_out(up, UART_OMAP_WER, OMAP_UART_WER_MOD_WKUP); 686 + up->wer = OMAP_UART_WER_MOD_WKUP; 687 + if (up->features & OMAP_UART_WER_HAS_TX_WAKEUP) 688 + up->wer |= OMAP_UART_TX_WAKEUP_EN; 689 + 690 + serial_out(up, UART_OMAP_WER, up->wer); 745 691 746 692 pm_runtime_mark_last_busy(up->dev); 747 693 pm_runtime_put_autosuspend(up->dev); ··· 1316 1254 1317 1255 #endif 1318 1256 1257 + /* Enable or disable the rs485 support */ 1258 + static void 1259 + serial_omap_config_rs485(struct uart_port *port, struct serial_rs485 *rs485conf) 1260 + { 1261 + struct uart_omap_port *up = to_uart_omap_port(port); 1262 + unsigned long flags; 1263 + unsigned int mode; 1264 + int val; 1265 + 1266 + pm_runtime_get_sync(up->dev); 1267 + spin_lock_irqsave(&up->port.lock, flags); 1268 + 1269 + /* Disable interrupts from this port */ 1270 + mode = up->ier; 1271 + up->ier = 0; 1272 + serial_out(up, UART_IER, 0); 1273 + 1274 + /* store new config */ 1275 + up->rs485 = *rs485conf; 1276 + 1277 + /* 1278 + * Just as a precaution, only allow rs485 1279 + * to be enabled if the gpio pin is valid 1280 + */ 1281 + if (gpio_is_valid(up->rts_gpio)) { 1282 + /* enable / disable rts */ 1283 + val = (up->rs485.flags & SER_RS485_ENABLED) ? 1284 + SER_RS485_RTS_AFTER_SEND : SER_RS485_RTS_ON_SEND; 1285 + val = (up->rs485.flags & val) ? 
1 : 0; 1286 + gpio_set_value(up->rts_gpio, val); 1287 + } else 1288 + up->rs485.flags &= ~SER_RS485_ENABLED; 1289 + 1290 + /* Enable interrupts */ 1291 + up->ier = mode; 1292 + serial_out(up, UART_IER, up->ier); 1293 + 1294 + spin_unlock_irqrestore(&up->port.lock, flags); 1295 + pm_runtime_mark_last_busy(up->dev); 1296 + pm_runtime_put_autosuspend(up->dev); 1297 + } 1298 + 1299 + static int 1300 + serial_omap_ioctl(struct uart_port *port, unsigned int cmd, unsigned long arg) 1301 + { 1302 + struct serial_rs485 rs485conf; 1303 + 1304 + switch (cmd) { 1305 + case TIOCSRS485: 1306 + if (copy_from_user(&rs485conf, (struct serial_rs485 *) arg, 1307 + sizeof(rs485conf))) 1308 + return -EFAULT; 1309 + 1310 + serial_omap_config_rs485(port, &rs485conf); 1311 + break; 1312 + 1313 + case TIOCGRS485: 1314 + if (copy_to_user((struct serial_rs485 *) arg, 1315 + &(to_uart_omap_port(port)->rs485), 1316 + sizeof(rs485conf))) 1317 + return -EFAULT; 1318 + break; 1319 + 1320 + default: 1321 + return -ENOIOCTLCMD; 1322 + } 1323 + return 0; 1324 + } 1325 + 1326 + 1319 1327 static struct uart_ops serial_omap_pops = { 1320 1328 .tx_empty = serial_omap_tx_empty, 1321 1329 .set_mctrl = serial_omap_set_mctrl, ··· 1407 1275 .request_port = serial_omap_request_port, 1408 1276 .config_port = serial_omap_config_port, 1409 1277 .verify_port = serial_omap_verify_port, 1278 + .ioctl = serial_omap_ioctl, 1410 1279 #ifdef CONFIG_CONSOLE_POLL 1411 1280 .poll_put_char = serial_omap_poll_put_char, 1412 1281 .poll_get_char = serial_omap_poll_get_char, ··· 1467 1334 u32 mvr, scheme; 1468 1335 u16 revision, major, minor; 1469 1336 1470 - mvr = serial_in(up, UART_OMAP_MVER); 1337 + mvr = readl(up->port.membase + (UART_OMAP_MVER << up->port.regshift)); 1471 1338 1472 1339 /* Check revision register scheme */ 1473 1340 scheme = mvr >> OMAP_UART_MVR_SCHEME_SHIFT; ··· 1506 1373 case OMAP_UART_REV_52: 1507 1374 up->errata |= (UART_ERRATA_i202_MDR1_ACCESS | 1508 1375 UART_ERRATA_i291_DMA_FORCEIDLE); 1376 + 
up->features |= OMAP_UART_WER_HAS_TX_WAKEUP; 1509 1377 break; 1510 1378 case OMAP_UART_REV_63: 1511 1379 up->errata |= UART_ERRATA_i202_MDR1_ACCESS; 1380 + up->features |= OMAP_UART_WER_HAS_TX_WAKEUP; 1512 1381 break; 1513 1382 default: 1514 1383 break; ··· 1530 1395 return omap_up_info; 1531 1396 } 1532 1397 1398 + static int serial_omap_probe_rs485(struct uart_omap_port *up, 1399 + struct device_node *np) 1400 + { 1401 + struct serial_rs485 *rs485conf = &up->rs485; 1402 + u32 rs485_delay[2]; 1403 + enum of_gpio_flags flags; 1404 + int ret; 1405 + 1406 + rs485conf->flags = 0; 1407 + up->rts_gpio = -EINVAL; 1408 + 1409 + if (!np) 1410 + return 0; 1411 + 1412 + if (of_property_read_bool(np, "rs485-rts-active-high")) 1413 + rs485conf->flags |= SER_RS485_RTS_ON_SEND; 1414 + else 1415 + rs485conf->flags |= SER_RS485_RTS_AFTER_SEND; 1416 + 1417 + /* check for tx enable gpio */ 1418 + up->rts_gpio = of_get_named_gpio_flags(np, "rts-gpio", 0, &flags); 1419 + if (gpio_is_valid(up->rts_gpio)) { 1420 + ret = gpio_request(up->rts_gpio, "omap-serial"); 1421 + if (ret < 0) 1422 + return ret; 1423 + ret = gpio_direction_output(up->rts_gpio, 1424 + flags & SER_RS485_RTS_AFTER_SEND); 1425 + if (ret < 0) 1426 + return ret; 1427 + } else 1428 + up->rts_gpio = -EINVAL; 1429 + 1430 + if (of_property_read_u32_array(np, "rs485-rts-delay", 1431 + rs485_delay, 2) == 0) { 1432 + rs485conf->delay_rts_before_send = rs485_delay[0]; 1433 + rs485conf->delay_rts_after_send = rs485_delay[1]; 1434 + } 1435 + 1436 + if (of_property_read_bool(np, "rs485-rx-during-tx")) 1437 + rs485conf->flags |= SER_RS485_RX_DURING_TX; 1438 + 1439 + if (of_property_read_bool(np, "linux,rs485-enabled-at-boot-time")) 1440 + rs485conf->flags |= SER_RS485_ENABLED; 1441 + 1442 + return 0; 1443 + } 1444 + 1533 1445 static int serial_omap_probe(struct platform_device *pdev) 1534 1446 { 1535 1447 struct uart_omap_port *up; 1536 1448 struct resource *mem, *irq; 1537 - struct omap_uart_port_info *omap_up_info = 
pdev->dev.platform_data; 1449 + struct omap_uart_port_info *omap_up_info = dev_get_platdata(&pdev->dev); 1538 1450 int ret; 1539 1451 1540 - if (pdev->dev.of_node) 1452 + if (pdev->dev.of_node) { 1541 1453 omap_up_info = of_get_uart_port_info(&pdev->dev); 1454 + pdev->dev.platform_data = omap_up_info; 1455 + } 1542 1456 1543 1457 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1544 1458 if (!mem) { ··· 1652 1468 goto err_port_line; 1653 1469 } 1654 1470 1655 - up->pins = devm_pinctrl_get_select_default(&pdev->dev); 1656 - if (IS_ERR(up->pins)) { 1657 - dev_warn(&pdev->dev, "did not get pins for uart%i error: %li\n", 1658 - up->port.line, PTR_ERR(up->pins)); 1659 - up->pins = NULL; 1660 - } 1471 + ret = serial_omap_probe_rs485(up, pdev->dev.of_node); 1472 + if (ret < 0) 1473 + goto err_rs485; 1661 1474 1662 1475 sprintf(up->name, "OMAP UART%d", up->port.line); 1663 1476 up->port.mapbase = mem->start; ··· 1682 1501 INIT_WORK(&up->qos_work, serial_omap_uart_qos_work); 1683 1502 1684 1503 platform_set_drvdata(pdev, up); 1685 - pm_runtime_enable(&pdev->dev); 1686 1504 if (omap_up_info->autosuspend_timeout == 0) 1687 1505 omap_up_info->autosuspend_timeout = -1; 1688 1506 device_init_wakeup(up->dev, true); ··· 1690 1510 omap_up_info->autosuspend_timeout); 1691 1511 1692 1512 pm_runtime_irq_safe(&pdev->dev); 1513 + pm_runtime_enable(&pdev->dev); 1514 + 1693 1515 pm_runtime_get_sync(&pdev->dev); 1694 1516 1695 1517 omap_serial_fill_features_erratas(up); ··· 1711 1529 pm_runtime_put(&pdev->dev); 1712 1530 pm_runtime_disable(&pdev->dev); 1713 1531 err_ioremap: 1532 + err_rs485: 1714 1533 err_port_line: 1715 1534 dev_err(&pdev->dev, "[UART%d]: failure [%s]: %d\n", 1716 1535 pdev->id, __func__, ret); ··· 1792 1609 serial_omap_mdr1_errataset(up, up->mdr1); 1793 1610 else 1794 1611 serial_out(up, UART_OMAP_MDR1, up->mdr1); 1612 + serial_out(up, UART_OMAP_WER, up->wer); 1795 1613 } 1796 1614 1797 1615 static int serial_omap_runtime_suspend(struct device *dev)
+56 -28
drivers/tty/serial/pch_uart.c
··· 232 232 unsigned int iobase; 233 233 struct pci_dev *pdev; 234 234 int fifo_size; 235 - int uartclk; 235 + unsigned int uartclk; 236 236 int start_tx; 237 237 int start_rx; 238 238 int tx_empty; ··· 373 373 }; 374 374 #endif /* CONFIG_DEBUG_FS */ 375 375 376 + static struct dmi_system_id pch_uart_dmi_table[] = { 377 + { 378 + .ident = "CM-iTC", 379 + { 380 + DMI_MATCH(DMI_BOARD_NAME, "CM-iTC"), 381 + }, 382 + (void *)CMITC_UARTCLK, 383 + }, 384 + { 385 + .ident = "FRI2", 386 + { 387 + DMI_MATCH(DMI_BIOS_VERSION, "FRI2"), 388 + }, 389 + (void *)FRI2_64_UARTCLK, 390 + }, 391 + { 392 + .ident = "Fish River Island II", 393 + { 394 + DMI_MATCH(DMI_PRODUCT_NAME, "Fish River Island II"), 395 + }, 396 + (void *)FRI2_48_UARTCLK, 397 + }, 398 + { 399 + .ident = "COMe-mTT", 400 + { 401 + DMI_MATCH(DMI_BOARD_NAME, "COMe-mTT"), 402 + }, 403 + (void *)NTC1_UARTCLK, 404 + }, 405 + { 406 + .ident = "nanoETXexpress-TT", 407 + { 408 + DMI_MATCH(DMI_BOARD_NAME, "nanoETXexpress-TT"), 409 + }, 410 + (void *)NTC1_UARTCLK, 411 + }, 412 + { 413 + .ident = "MinnowBoard", 414 + { 415 + DMI_MATCH(DMI_BOARD_NAME, "MinnowBoard"), 416 + }, 417 + (void *)MINNOW_UARTCLK, 418 + }, 419 + }; 420 + 376 421 /* Return UART clock, checking for board specific clocks. 
*/ 377 - static int pch_uart_get_uartclk(void) 422 + static unsigned int pch_uart_get_uartclk(void) 378 423 { 379 - const char *cmp; 424 + const struct dmi_system_id *d; 380 425 381 426 if (user_uartclk) 382 427 return user_uartclk; 383 428 384 - cmp = dmi_get_system_info(DMI_BOARD_NAME); 385 - if (cmp && strstr(cmp, "CM-iTC")) 386 - return CMITC_UARTCLK; 387 - 388 - cmp = dmi_get_system_info(DMI_BIOS_VERSION); 389 - if (cmp && strnstr(cmp, "FRI2", 4)) 390 - return FRI2_64_UARTCLK; 391 - 392 - cmp = dmi_get_system_info(DMI_PRODUCT_NAME); 393 - if (cmp && strstr(cmp, "Fish River Island II")) 394 - return FRI2_48_UARTCLK; 395 - 396 - /* Kontron COMe-mTT10 (nanoETXexpress-TT) */ 397 - cmp = dmi_get_system_info(DMI_BOARD_NAME); 398 - if (cmp && (strstr(cmp, "COMe-mTT") || 399 - strstr(cmp, "nanoETXexpress-TT"))) 400 - return NTC1_UARTCLK; 401 - 402 - cmp = dmi_get_system_info(DMI_BOARD_NAME); 403 - if (cmp && strstr(cmp, "MinnowBoard")) 404 - return MINNOW_UARTCLK; 429 + d = dmi_first_match(pch_uart_dmi_table); 430 + if (d) 431 + return (unsigned long)d->driver_data; 405 432 406 433 return DEFAULT_UARTCLK; 407 434 } ··· 449 422 iowrite8(ier, priv->membase + UART_IER); 450 423 } 451 424 452 - static int pch_uart_hal_set_line(struct eg20t_port *priv, int baud, 425 + static int pch_uart_hal_set_line(struct eg20t_port *priv, unsigned int baud, 453 426 unsigned int parity, unsigned int bits, 454 427 unsigned int stb) 455 428 { ··· 484 457 lcr |= bits; 485 458 lcr |= stb; 486 459 487 - dev_dbg(priv->port.dev, "%s:baud = %d, div = %04x, lcr = %02x (%lu)\n", 460 + dev_dbg(priv->port.dev, "%s:baud = %u, div = %04x, lcr = %02x (%lu)\n", 488 461 __func__, baud, div, lcr, jiffies); 489 462 iowrite8(PCH_UART_LCR_DLAB, priv->membase + UART_LCR); 490 463 iowrite8(dll, priv->membase + PCH_UART_DLL); ··· 1390 1363 static void pch_uart_set_termios(struct uart_port *port, 1391 1364 struct ktermios *termios, struct ktermios *old) 1392 1365 { 1393 - int baud; 1394 1366 int rtn; 1395 - 
unsigned int parity, bits, stb; 1367 + unsigned int baud, parity, bits, stb; 1396 1368 struct eg20t_port *priv; 1397 1369 unsigned long flags; 1398 1370 ··· 1524 1498 return 0; 1525 1499 } 1526 1500 1501 + #if defined(CONFIG_CONSOLE_POLL) || defined(CONFIG_SERIAL_PCH_UART_CONSOLE) 1527 1502 /* 1528 1503 * Wait for transmitter & holding register to empty 1529 1504 */ ··· 1555 1528 } 1556 1529 } 1557 1530 } 1531 + #endif /* CONFIG_CONSOLE_POLL || CONFIG_SERIAL_PCH_UART_CONSOLE */ 1558 1532 1559 1533 #ifdef CONFIG_CONSOLE_POLL 1560 1534 /*
-1
drivers/tty/serial/pmac_zilog.c
··· 1798 1798 1799 1799 uart_remove_one_port(&pmz_uart_reg, &uap->port); 1800 1800 1801 - platform_set_drvdata(pdev, NULL); 1802 1801 uap->port.dev = NULL; 1803 1802 1804 1803 return 0;
+3 -2
drivers/tty/serial/pnx8xxx_uart.c
··· 237 237 status = FIFO_TO_SM(serial_in(sport, PNX8XXX_FIFO)) | 238 238 ISTAT_TO_SM(serial_in(sport, PNX8XXX_ISTAT)); 239 239 } 240 + 241 + spin_unlock(&sport->port.lock); 240 242 tty_flip_buffer_push(&sport->port.state->port); 243 + spin_lock(&sport->port.lock); 241 244 } 242 245 243 246 static void pnx8xxx_tx_chars(struct pnx8xxx_port *sport) ··· 803 800 static int pnx8xxx_serial_remove(struct platform_device *pdev) 804 801 { 805 802 struct pnx8xxx_port *sport = platform_get_drvdata(pdev); 806 - 807 - platform_set_drvdata(pdev, NULL); 808 803 809 804 if (sport) 810 805 uart_remove_one_port(&pnx8xxx_reg, &sport->port);
+3 -30
drivers/tty/serial/pxa.c
··· 332 332 spin_unlock_irqrestore(&up->port.lock, flags); 333 333 } 334 334 335 - #if 0 336 - static void serial_pxa_dma_init(struct pxa_uart *up) 337 - { 338 - up->rxdma = 339 - pxa_request_dma(up->name, DMA_PRIO_LOW, pxa_receive_dma, up); 340 - if (up->rxdma < 0) 341 - goto out; 342 - up->txdma = 343 - pxa_request_dma(up->name, DMA_PRIO_LOW, pxa_transmit_dma, up); 344 - if (up->txdma < 0) 345 - goto err_txdma; 346 - up->dmadesc = kmalloc(4 * sizeof(pxa_dma_desc), GFP_KERNEL); 347 - if (!up->dmadesc) 348 - goto err_alloc; 349 - 350 - /* ... */ 351 - err_alloc: 352 - pxa_free_dma(up->txdma); 353 - err_rxdma: 354 - pxa_free_dma(up->rxdma); 355 - out: 356 - return; 357 - } 358 - #endif 359 - 360 335 static int serial_pxa_startup(struct uart_port *port) 361 336 { 362 337 struct uart_pxa_port *up = (struct uart_pxa_port *)port; ··· 765 790 #define PXA_CONSOLE NULL 766 791 #endif 767 792 768 - struct uart_ops serial_pxa_pops = { 793 + static struct uart_ops serial_pxa_pops = { 769 794 .tx_empty = serial_pxa_tx_empty, 770 795 .set_mctrl = serial_pxa_set_mctrl, 771 796 .get_mctrl = serial_pxa_get_mctrl, ··· 920 945 { 921 946 struct uart_pxa_port *sport = platform_get_drvdata(dev); 922 947 923 - platform_set_drvdata(dev, NULL); 924 - 925 948 uart_remove_one_port(&serial_pxa_reg, &sport->port); 926 949 927 950 clk_unprepare(sport->clk); ··· 943 970 }, 944 971 }; 945 972 946 - int __init serial_pxa_init(void) 973 + static int __init serial_pxa_init(void) 947 974 { 948 975 int ret; 949 976 ··· 958 985 return ret; 959 986 } 960 987 961 - void __exit serial_pxa_exit(void) 988 + static void __exit serial_pxa_exit(void) 962 989 { 963 990 platform_driver_unregister(&serial_pxa_driver); 964 991 uart_unregister_driver(&serial_pxa_reg);
+2
drivers/tty/serial/rp2.c
··· 427 427 up->port.icount.rx++; 428 428 } 429 429 430 + spin_unlock(&up->port.lock); 430 431 tty_flip_buffer_push(port); 432 + spin_lock(&up->port.lock); 431 433 } 432 434 433 435 static void rp2_tx_chars(struct rp2_uart_port *up)
+3 -2
drivers/tty/serial/sa1100.c
··· 232 232 status = UTSR1_TO_SM(UART_GET_UTSR1(sport)) | 233 233 UTSR0_TO_SM(UART_GET_UTSR0(sport)); 234 234 } 235 + 236 + spin_unlock(&sport->port.lock); 235 237 tty_flip_buffer_push(&sport->port.state->port); 238 + spin_lock(&sport->port.lock); 236 239 } 237 240 238 241 static void sa1100_tx_chars(struct sa1100_port *sport) ··· 866 863 static int sa1100_serial_remove(struct platform_device *pdev) 867 864 { 868 865 struct sa1100_port *sport = platform_get_drvdata(pdev); 869 - 870 - platform_set_drvdata(pdev, NULL); 871 866 872 867 if (sport) 873 868 uart_remove_one_port(&sa1100_reg, &sport->port);
+6 -3
drivers/tty/serial/samsung.c
··· 249 249 ufcon |= S3C2410_UFCON_RESETRX; 250 250 wr_regl(port, S3C2410_UFCON, ufcon); 251 251 rx_enabled(port) = 1; 252 + spin_unlock_irqrestore(&port->lock, 253 + flags); 252 254 goto out; 253 255 } 254 256 continue; ··· 299 297 ignore_char: 300 298 continue; 301 299 } 300 + 301 + spin_unlock_irqrestore(&port->lock, flags); 302 302 tty_flip_buffer_push(&port->state->port); 303 303 304 304 out: 305 - spin_unlock_irqrestore(&port->lock, flags); 306 305 return IRQ_HANDLED; 307 306 } 308 307 ··· 1253 1250 1254 1251 ourport->baudclk = ERR_PTR(-EINVAL); 1255 1252 ourport->info = ourport->drv_data->info; 1256 - ourport->cfg = (pdev->dev.platform_data) ? 1257 - (struct s3c2410_uartcfg *)pdev->dev.platform_data : 1253 + ourport->cfg = (dev_get_platdata(&pdev->dev)) ? 1254 + (struct s3c2410_uartcfg *)dev_get_platdata(&pdev->dev) : 1258 1255 ourport->drv_data->def_cfg; 1259 1256 1260 1257 ourport->port.fifosize = (ourport->info->fifosize) ?
+2 -1
drivers/tty/serial/samsung.h
··· 68 68 /* register access controls */ 69 69 70 70 #define portaddr(port, reg) ((port)->membase + (reg)) 71 - #define portaddrl(port, reg) ((unsigned long *)((port)->membase + (reg))) 71 + #define portaddrl(port, reg) \ 72 + ((unsigned long *)(unsigned long)((port)->membase + (reg))) 72 73 73 74 #define rd_regb(port, reg) (__raw_readb(portaddr(port, reg))) 74 75 #define rd_regl(port, reg) (__raw_readl(portaddr(port, reg)))
+1 -1
drivers/tty/serial/sc26xx.c
··· 637 637 { 638 638 struct resource *res; 639 639 struct uart_sc26xx_port *up; 640 - unsigned int *sc26xx_data = dev->dev.platform_data; 640 + unsigned int *sc26xx_data = dev_get_platdata(&dev->dev); 641 641 int err; 642 642 643 643 res = platform_get_resource(dev, IORESOURCE_MEM, 0);
+158 -181
drivers/tty/serial/sccnxp.c
··· 15 15 #define SUPPORT_SYSRQ 16 16 #endif 17 17 18 + #include <linux/clk.h> 18 19 #include <linux/err.h> 19 20 #include <linux/module.h> 20 21 #include <linux/device.h> ··· 95 94 #define MCTRL_IBIT(cfg, sig) ((((cfg) >> (sig)) & 0xf) - LINE_IP0) 96 95 #define MCTRL_OBIT(cfg, sig) ((((cfg) >> (sig)) & 0xf) - LINE_OP0) 97 96 98 - /* Supported chip types */ 99 - enum { 100 - SCCNXP_TYPE_SC2681 = 2681, 101 - SCCNXP_TYPE_SC2691 = 2691, 102 - SCCNXP_TYPE_SC2692 = 2692, 103 - SCCNXP_TYPE_SC2891 = 2891, 104 - SCCNXP_TYPE_SC2892 = 2892, 105 - SCCNXP_TYPE_SC28202 = 28202, 106 - SCCNXP_TYPE_SC68681 = 68681, 107 - SCCNXP_TYPE_SC68692 = 68692, 97 + #define SCCNXP_HAVE_IO 0x00000001 98 + #define SCCNXP_HAVE_MR0 0x00000002 99 + 100 + struct sccnxp_chip { 101 + const char *name; 102 + unsigned int nr; 103 + unsigned long freq_min; 104 + unsigned long freq_std; 105 + unsigned long freq_max; 106 + unsigned int flags; 107 + unsigned int fifosize; 108 108 }; 109 109 110 110 struct sccnxp_port { ··· 113 111 struct uart_port port[SCCNXP_MAX_UARTS]; 114 112 bool opened[SCCNXP_MAX_UARTS]; 115 113 116 - const char *name; 117 114 int irq; 118 - 119 115 u8 imr; 120 - u8 addr_mask; 121 - int freq_std; 122 116 123 - int flags; 124 - #define SCCNXP_HAVE_IO 0x00000001 125 - #define SCCNXP_HAVE_MR0 0x00000002 117 + struct sccnxp_chip *chip; 126 118 127 119 #ifdef CONFIG_SERIAL_SCCNXP_CONSOLE 128 120 struct console console; ··· 132 136 struct regulator *regulator; 133 137 }; 134 138 135 - static inline u8 sccnxp_raw_read(void __iomem *base, u8 reg, u8 shift) 136 - { 137 - return readb(base + (reg << shift)); 138 - } 139 + static const struct sccnxp_chip sc2681 = { 140 + .name = "SC2681", 141 + .nr = 2, 142 + .freq_min = 1000000, 143 + .freq_std = 3686400, 144 + .freq_max = 4000000, 145 + .flags = SCCNXP_HAVE_IO, 146 + .fifosize = 3, 147 + }; 139 148 140 - static inline void sccnxp_raw_write(void __iomem *base, u8 reg, u8 shift, u8 v) 141 - { 142 - writeb(v, base + (reg << shift)); 143 - } 149 + 
static const struct sccnxp_chip sc2691 = { 150 + .name = "SC2691", 151 + .nr = 1, 152 + .freq_min = 1000000, 153 + .freq_std = 3686400, 154 + .freq_max = 4000000, 155 + .flags = 0, 156 + .fifosize = 3, 157 + }; 158 + 159 + static const struct sccnxp_chip sc2692 = { 160 + .name = "SC2692", 161 + .nr = 2, 162 + .freq_min = 1000000, 163 + .freq_std = 3686400, 164 + .freq_max = 4000000, 165 + .flags = SCCNXP_HAVE_IO, 166 + .fifosize = 3, 167 + }; 168 + 169 + static const struct sccnxp_chip sc2891 = { 170 + .name = "SC2891", 171 + .nr = 1, 172 + .freq_min = 100000, 173 + .freq_std = 3686400, 174 + .freq_max = 8000000, 175 + .flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0, 176 + .fifosize = 16, 177 + }; 178 + 179 + static const struct sccnxp_chip sc2892 = { 180 + .name = "SC2892", 181 + .nr = 2, 182 + .freq_min = 100000, 183 + .freq_std = 3686400, 184 + .freq_max = 8000000, 185 + .flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0, 186 + .fifosize = 16, 187 + }; 188 + 189 + static const struct sccnxp_chip sc28202 = { 190 + .name = "SC28202", 191 + .nr = 2, 192 + .freq_min = 1000000, 193 + .freq_std = 14745600, 194 + .freq_max = 50000000, 195 + .flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0, 196 + .fifosize = 256, 197 + }; 198 + 199 + static const struct sccnxp_chip sc68681 = { 200 + .name = "SC68681", 201 + .nr = 2, 202 + .freq_min = 1000000, 203 + .freq_std = 3686400, 204 + .freq_max = 4000000, 205 + .flags = SCCNXP_HAVE_IO, 206 + .fifosize = 3, 207 + }; 208 + 209 + static const struct sccnxp_chip sc68692 = { 210 + .name = "SC68692", 211 + .nr = 2, 212 + .freq_min = 1000000, 213 + .freq_std = 3686400, 214 + .freq_max = 4000000, 215 + .flags = SCCNXP_HAVE_IO, 216 + .fifosize = 3, 217 + }; 144 218 145 219 static inline u8 sccnxp_read(struct uart_port *port, u8 reg) 146 220 { 147 - struct sccnxp_port *s = dev_get_drvdata(port->dev); 148 - 149 - return sccnxp_raw_read(port->membase, reg & s->addr_mask, 150 - port->regshift); 221 + return readb(port->membase + (reg << port->regshift)); 151 222 } 
152 223 153 224 static inline void sccnxp_write(struct uart_port *port, u8 reg, u8 v) 154 225 { 155 - struct sccnxp_port *s = dev_get_drvdata(port->dev); 156 - 157 - sccnxp_raw_write(port->membase, reg & s->addr_mask, port->regshift, v); 226 + writeb(v, port->membase + (reg << port->regshift)); 158 227 } 159 228 160 229 static inline u8 sccnxp_port_read(struct uart_port *port, u8 reg) ··· 285 224 { 286 225 struct sccnxp_port *s = dev_get_drvdata(port->dev); 287 226 int div_std, tmp_baud, bestbaud = baud, besterr = -1; 227 + struct sccnxp_chip *chip = s->chip; 288 228 u8 i, acr = 0, csr = 0, mr0 = 0; 289 229 290 230 /* Find best baud from table */ 291 231 for (i = 0; baud_std[i].baud && besterr; i++) { 292 - if (baud_std[i].mr0 && !(s->flags & SCCNXP_HAVE_MR0)) 232 + if (baud_std[i].mr0 && !(chip->flags & SCCNXP_HAVE_MR0)) 293 233 continue; 294 - div_std = DIV_ROUND_CLOSEST(s->freq_std, baud_std[i].baud); 234 + div_std = DIV_ROUND_CLOSEST(chip->freq_std, baud_std[i].baud); 295 235 tmp_baud = DIV_ROUND_CLOSEST(port->uartclk, div_std); 296 236 if (!sccnxp_update_best_err(baud, tmp_baud, &besterr)) { 297 237 acr = baud_std[i].acr; ··· 302 240 } 303 241 } 304 242 305 - if (s->flags & SCCNXP_HAVE_MR0) { 243 + if (chip->flags & SCCNXP_HAVE_MR0) { 306 244 /* Enable FIFO, set half level for TX */ 307 245 mr0 |= MR0_FIFO | MR0_TXLVL; 308 246 /* Update MR0 */ ··· 425 363 sccnxp_disable_irq(port, IMR_TXRDY); 426 364 427 365 /* Set direction to input */ 428 - if (s->flags & SCCNXP_HAVE_IO) 366 + if (s->chip->flags & SCCNXP_HAVE_IO) 429 367 sccnxp_set_bit(port, DIR_OP, 0); 430 368 } 431 369 return; ··· 499 437 spin_lock_irqsave(&s->lock, flags); 500 438 501 439 /* Set direction to output */ 502 - if (s->flags & SCCNXP_HAVE_IO) 440 + if (s->chip->flags & SCCNXP_HAVE_IO) 503 441 sccnxp_set_bit(port, DIR_OP, 1); 504 442 505 443 sccnxp_enable_irq(port, IMR_TXRDY); ··· 545 483 struct sccnxp_port *s = dev_get_drvdata(port->dev); 546 484 unsigned long flags; 547 485 548 - if 
(!(s->flags & SCCNXP_HAVE_IO)) 486 + if (!(s->chip->flags & SCCNXP_HAVE_IO)) 549 487 return; 550 488 551 489 spin_lock_irqsave(&s->lock, flags); ··· 563 501 struct sccnxp_port *s = dev_get_drvdata(port->dev); 564 502 unsigned int mctrl = TIOCM_DSR | TIOCM_CTS | TIOCM_CAR; 565 503 566 - if (!(s->flags & SCCNXP_HAVE_IO)) 504 + if (!(s->chip->flags & SCCNXP_HAVE_IO)) 567 505 return mctrl; 568 506 569 507 spin_lock_irqsave(&s->lock, flags); ··· 679 617 680 618 /* Setup baudrate */ 681 619 baud = uart_get_baud_rate(port, termios, old, 50, 682 - (s->flags & SCCNXP_HAVE_MR0) ? 620 + (s->chip->flags & SCCNXP_HAVE_MR0) ? 683 621 230400 : 38400); 684 622 baud = sccnxp_set_baud(port, baud); 685 623 ··· 703 641 704 642 spin_lock_irqsave(&s->lock, flags); 705 643 706 - if (s->flags & SCCNXP_HAVE_IO) { 644 + if (s->chip->flags & SCCNXP_HAVE_IO) { 707 645 /* Outputs are controlled manually */ 708 646 sccnxp_write(port, SCCNXP_OPCR_REG, 0); 709 647 } ··· 743 681 sccnxp_port_write(port, SCCNXP_CR_REG, CR_RX_DISABLE | CR_TX_DISABLE); 744 682 745 683 /* Leave direction to input */ 746 - if (s->flags & SCCNXP_HAVE_IO) 684 + if (s->chip->flags & SCCNXP_HAVE_IO) 747 685 sccnxp_set_bit(port, DIR_OP, 0); 748 686 749 687 spin_unlock_irqrestore(&s->lock, flags); ··· 753 691 { 754 692 struct sccnxp_port *s = dev_get_drvdata(port->dev); 755 693 756 - return (port->type == PORT_SC26XX) ? s->name : NULL; 694 + return (port->type == PORT_SC26XX) ? 
s->chip->name : NULL; 757 695 } 758 696 759 697 static void sccnxp_release_port(struct uart_port *port) ··· 840 778 } 841 779 #endif 842 780 781 + static const struct platform_device_id sccnxp_id_table[] = { 782 + { .name = "sc2681", .driver_data = (kernel_ulong_t)&sc2681, }, 783 + { .name = "sc2691", .driver_data = (kernel_ulong_t)&sc2691, }, 784 + { .name = "sc2692", .driver_data = (kernel_ulong_t)&sc2692, }, 785 + { .name = "sc2891", .driver_data = (kernel_ulong_t)&sc2891, }, 786 + { .name = "sc2892", .driver_data = (kernel_ulong_t)&sc2892, }, 787 + { .name = "sc28202", .driver_data = (kernel_ulong_t)&sc28202, }, 788 + { .name = "sc68681", .driver_data = (kernel_ulong_t)&sc68681, }, 789 + { .name = "sc68692", .driver_data = (kernel_ulong_t)&sc68692, }, 790 + { } 791 + }; 792 + MODULE_DEVICE_TABLE(platform, sccnxp_id_table); 793 + 843 794 static int sccnxp_probe(struct platform_device *pdev) 844 795 { 845 796 struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 846 - int chiptype = pdev->id_entry->driver_data; 847 797 struct sccnxp_pdata *pdata = dev_get_platdata(&pdev->dev); 848 - int i, ret, fifosize, freq_min, freq_max; 798 + int i, ret, uartclk; 849 799 struct sccnxp_port *s; 850 800 void __iomem *membase; 801 + struct clk *clk; 851 802 852 - if (!res) { 853 - dev_err(&pdev->dev, "Missing memory resource data\n"); 854 - return -EADDRNOTAVAIL; 855 - } 803 + membase = devm_ioremap_resource(&pdev->dev, res); 804 + if (IS_ERR(membase)) 805 + return PTR_ERR(membase); 856 806 857 807 s = devm_kzalloc(&pdev->dev, sizeof(struct sccnxp_port), GFP_KERNEL); 858 808 if (!s) { ··· 875 801 876 802 spin_lock_init(&s->lock); 877 803 878 - /* Individual chip settings */ 879 - switch (chiptype) { 880 - case SCCNXP_TYPE_SC2681: 881 - s->name = "SC2681"; 882 - s->uart.nr = 2; 883 - s->freq_std = 3686400; 884 - s->addr_mask = 0x0f; 885 - s->flags = SCCNXP_HAVE_IO; 886 - fifosize = 3; 887 - freq_min = 1000000; 888 - freq_max = 4000000; 889 - break; 890 - case 
SCCNXP_TYPE_SC2691: 891 - s->name = "SC2691"; 892 - s->uart.nr = 1; 893 - s->freq_std = 3686400; 894 - s->addr_mask = 0x07; 895 - s->flags = 0; 896 - fifosize = 3; 897 - freq_min = 1000000; 898 - freq_max = 4000000; 899 - break; 900 - case SCCNXP_TYPE_SC2692: 901 - s->name = "SC2692"; 902 - s->uart.nr = 2; 903 - s->freq_std = 3686400; 904 - s->addr_mask = 0x0f; 905 - s->flags = SCCNXP_HAVE_IO; 906 - fifosize = 3; 907 - freq_min = 1000000; 908 - freq_max = 4000000; 909 - break; 910 - case SCCNXP_TYPE_SC2891: 911 - s->name = "SC2891"; 912 - s->uart.nr = 1; 913 - s->freq_std = 3686400; 914 - s->addr_mask = 0x0f; 915 - s->flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0; 916 - fifosize = 16; 917 - freq_min = 100000; 918 - freq_max = 8000000; 919 - break; 920 - case SCCNXP_TYPE_SC2892: 921 - s->name = "SC2892"; 922 - s->uart.nr = 2; 923 - s->freq_std = 3686400; 924 - s->addr_mask = 0x0f; 925 - s->flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0; 926 - fifosize = 16; 927 - freq_min = 100000; 928 - freq_max = 8000000; 929 - break; 930 - case SCCNXP_TYPE_SC28202: 931 - s->name = "SC28202"; 932 - s->uart.nr = 2; 933 - s->freq_std = 14745600; 934 - s->addr_mask = 0x7f; 935 - s->flags = SCCNXP_HAVE_IO | SCCNXP_HAVE_MR0; 936 - fifosize = 256; 937 - freq_min = 1000000; 938 - freq_max = 50000000; 939 - break; 940 - case SCCNXP_TYPE_SC68681: 941 - s->name = "SC68681"; 942 - s->uart.nr = 2; 943 - s->freq_std = 3686400; 944 - s->addr_mask = 0x0f; 945 - s->flags = SCCNXP_HAVE_IO; 946 - fifosize = 3; 947 - freq_min = 1000000; 948 - freq_max = 4000000; 949 - break; 950 - case SCCNXP_TYPE_SC68692: 951 - s->name = "SC68692"; 952 - s->uart.nr = 2; 953 - s->freq_std = 3686400; 954 - s->addr_mask = 0x0f; 955 - s->flags = SCCNXP_HAVE_IO; 956 - fifosize = 3; 957 - freq_min = 1000000; 958 - freq_max = 4000000; 959 - break; 960 - default: 961 - dev_err(&pdev->dev, "Unsupported chip type %i\n", chiptype); 962 - ret = -ENOTSUPP; 804 + s->chip = (struct sccnxp_chip *)pdev->id_entry->driver_data; 805 + 806 + 
s->regulator = devm_regulator_get(&pdev->dev, "vcc"); 807 + if (!IS_ERR(s->regulator)) { 808 + ret = regulator_enable(s->regulator); 809 + if (ret) { 810 + dev_err(&pdev->dev, 811 + "Failed to enable regulator: %i\n", ret); 812 + return ret; 813 + } 814 + } else if (PTR_ERR(s->regulator) == -EPROBE_DEFER) 815 + return -EPROBE_DEFER; 816 + 817 + clk = devm_clk_get(&pdev->dev, NULL); 818 + if (IS_ERR(clk)) { 819 + if (PTR_ERR(clk) == -EPROBE_DEFER) { 820 + ret = -EPROBE_DEFER; 821 + goto err_out; 822 + } 823 + dev_notice(&pdev->dev, "Using default clock frequency\n"); 824 + uartclk = s->chip->freq_std; 825 + } else 826 + uartclk = clk_get_rate(clk); 827 + 828 + /* Check input frequency */ 829 + if ((uartclk < s->chip->freq_min) || (uartclk > s->chip->freq_max)) { 830 + dev_err(&pdev->dev, "Frequency out of bounds\n"); 831 + ret = -EINVAL; 963 832 goto err_out; 964 833 } 965 834 966 - if (!pdata) { 967 - dev_warn(&pdev->dev, 968 - "No platform data supplied, using defaults\n"); 969 - s->pdata.frequency = s->freq_std; 970 - } else 835 + if (pdata) 971 836 memcpy(&s->pdata, pdata, sizeof(struct sccnxp_pdata)); 972 837 973 838 if (s->pdata.poll_time_us) { ··· 924 911 } 925 912 } 926 913 927 - /* Check input frequency */ 928 - if ((s->pdata.frequency < freq_min) || 929 - (s->pdata.frequency > freq_max)) { 930 - dev_err(&pdev->dev, "Frequency out of bounds\n"); 931 - ret = -EINVAL; 932 - goto err_out; 933 - } 934 - 935 - s->regulator = devm_regulator_get(&pdev->dev, "VCC"); 936 - if (!IS_ERR(s->regulator)) { 937 - ret = regulator_enable(s->regulator); 938 - if (ret) { 939 - dev_err(&pdev->dev, 940 - "Failed to enable regulator: %i\n", ret); 941 - return ret; 942 - } 943 - } 944 - 945 - membase = devm_ioremap_resource(&pdev->dev, res); 946 - if (IS_ERR(membase)) { 947 - ret = PTR_ERR(membase); 948 - goto err_out; 949 - } 950 - 951 914 s->uart.owner = THIS_MODULE; 952 915 s->uart.dev_name = "ttySC"; 953 916 s->uart.major = SCCNXP_MAJOR; 954 917 s->uart.minor = SCCNXP_MINOR; 
918 + s->uart.nr = s->chip->nr; 955 919 #ifdef CONFIG_SERIAL_SCCNXP_CONSOLE 956 920 s->uart.cons = &s->console; 957 921 s->uart.cons->device = uart_console_device; ··· 950 960 s->port[i].dev = &pdev->dev; 951 961 s->port[i].irq = s->irq; 952 962 s->port[i].type = PORT_SC26XX; 953 - s->port[i].fifosize = fifosize; 963 + s->port[i].fifosize = s->chip->fifosize; 954 964 s->port[i].flags = UPF_SKIP_TEST | UPF_FIXED_TYPE; 955 965 s->port[i].iotype = UPIO_MEM; 956 966 s->port[i].mapbase = res->start; 957 967 s->port[i].membase = membase; 958 968 s->port[i].regshift = s->pdata.reg_shift; 959 - s->port[i].uartclk = s->pdata.frequency; 969 + s->port[i].uartclk = uartclk; 960 970 s->port[i].ops = &sccnxp_ops; 961 971 uart_add_one_port(&s->uart, &s->port[i]); 962 972 /* Set direction to input */ 963 - if (s->flags & SCCNXP_HAVE_IO) 973 + if (s->chip->flags & SCCNXP_HAVE_IO) 964 974 sccnxp_set_bit(&s->port[i], DIR_OP, 0); 965 975 } 966 976 ··· 987 997 } 988 998 989 999 err_out: 990 - platform_set_drvdata(pdev, NULL); 1000 + if (!IS_ERR(s->regulator)) 1001 + return regulator_disable(s->regulator); 991 1002 992 1003 return ret; 993 1004 } ··· 1007 1016 uart_remove_one_port(&s->uart, &s->port[i]); 1008 1017 1009 1018 uart_unregister_driver(&s->uart); 1010 - platform_set_drvdata(pdev, NULL); 1011 1019 1012 1020 if (!IS_ERR(s->regulator)) 1013 1021 return regulator_disable(s->regulator); 1014 1022 1015 1023 return 0; 1016 1024 } 1017 - 1018 - static const struct platform_device_id sccnxp_id_table[] = { 1019 - { "sc2681", SCCNXP_TYPE_SC2681 }, 1020 - { "sc2691", SCCNXP_TYPE_SC2691 }, 1021 - { "sc2692", SCCNXP_TYPE_SC2692 }, 1022 - { "sc2891", SCCNXP_TYPE_SC2891 }, 1023 - { "sc2892", SCCNXP_TYPE_SC2892 }, 1024 - { "sc28202", SCCNXP_TYPE_SC28202 }, 1025 - { "sc68681", SCCNXP_TYPE_SC68681 }, 1026 - { "sc68692", SCCNXP_TYPE_SC68692 }, 1027 - { }, 1028 - }; 1029 - MODULE_DEVICE_TABLE(platform, sccnxp_id_table); 1030 1025 1031 1026 static struct platform_driver sccnxp_uart_driver = { 1032 
1027 .driver = {
+11 -5
drivers/tty/serial/serial-tegra.c
··· 571 571 572 572 tegra_uart_handle_rx_pio(tup, port); 573 573 if (tty) { 574 + spin_unlock_irqrestore(&u->lock, flags); 574 575 tty_flip_buffer_push(port); 576 + spin_lock_irqsave(&u->lock, flags); 575 577 tty_kref_put(tty); 576 578 } 577 579 tegra_uart_start_rx_dma(tup); ··· 585 583 spin_unlock_irqrestore(&u->lock, flags); 586 584 } 587 585 588 - static void tegra_uart_handle_rx_dma(struct tegra_uart_port *tup) 586 + static void tegra_uart_handle_rx_dma(struct tegra_uart_port *tup, 587 + unsigned long *flags) 589 588 { 590 589 struct dma_tx_state state; 591 590 struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port); 592 591 struct tty_port *port = &tup->uport.state->port; 592 + struct uart_port *u = &tup->uport; 593 593 int count; 594 594 595 595 /* Deactivate flow control to stop sender */ ··· 608 604 609 605 tegra_uart_handle_rx_pio(tup, port); 610 606 if (tty) { 607 + spin_unlock_irqrestore(&u->lock, *flags); 611 608 tty_flip_buffer_push(port); 609 + spin_lock_irqsave(&u->lock, *flags); 612 610 tty_kref_put(tty); 613 611 } 614 612 tegra_uart_start_rx_dma(tup); ··· 677 671 iir = tegra_uart_read(tup, UART_IIR); 678 672 if (iir & UART_IIR_NO_INT) { 679 673 if (is_rx_int) { 680 - tegra_uart_handle_rx_dma(tup); 674 + tegra_uart_handle_rx_dma(tup, &flags); 681 675 if (tup->rx_in_progress) { 682 676 ier = tup->ier_shadow; 683 677 ier |= (UART_IER_RLSI | UART_IER_RTOIE | ··· 1212 1206 .owner = THIS_MODULE, 1213 1207 .driver_name = "tegra_hsuart", 1214 1208 .dev_name = "ttyTHS", 1215 - .cons = 0, 1209 + .cons = NULL, 1216 1210 .nr = TEGRA_UART_MAXIMUM, 1217 1211 }; 1218 1212 ··· 1243 1237 return 0; 1244 1238 } 1245 1239 1246 - struct tegra_uart_chip_data tegra20_uart_chip_data = { 1240 + static struct tegra_uart_chip_data tegra20_uart_chip_data = { 1247 1241 .tx_fifo_full_status = false, 1248 1242 .allow_txfifo_reset_fifo_mode = true, 1249 1243 .support_clk_src_div = false, 1250 1244 }; 1251 1245 1252 - struct tegra_uart_chip_data tegra30_uart_chip_data = 
{ 1246 + static struct tegra_uart_chip_data tegra30_uart_chip_data = { 1253 1247 .tx_fifo_full_status = true, 1254 1248 .allow_txfifo_reset_fifo_mode = false, 1255 1249 .support_clk_src_div = true,
+2 -2
drivers/tty/serial/serial_core.c
··· 2095 2095 break; 2096 2096 } 2097 2097 2098 - printk(KERN_INFO "%s%s%s%d at %s (irq = %d) is a %s\n", 2098 + printk(KERN_INFO "%s%s%s%d at %s (irq = %d, base_baud = %d) is a %s\n", 2099 2099 port->dev ? dev_name(port->dev) : "", 2100 2100 port->dev ? ": " : "", 2101 2101 drv->dev_name, 2102 2102 drv->tty_driver->name_base + port->line, 2103 - address, port->irq, uart_type(port)); 2103 + address, port->irq, port->uartclk / 16, uart_type(port)); 2104 2104 } 2105 2105 2106 2106 static void
+1 -1
drivers/tty/serial/serial_txx9.c
··· 1097 1097 */ 1098 1098 static int serial_txx9_probe(struct platform_device *dev) 1099 1099 { 1100 - struct uart_port *p = dev->dev.platform_data; 1100 + struct uart_port *p = dev_get_platdata(&dev->dev); 1101 1101 struct uart_port port; 1102 1102 int ret, i; 1103 1103
+2 -2
drivers/tty/serial/sh-sci.c
··· 2380 2380 2381 2381 static int sci_probe_earlyprintk(struct platform_device *pdev) 2382 2382 { 2383 - struct plat_sci_port *cfg = pdev->dev.platform_data; 2383 + struct plat_sci_port *cfg = dev_get_platdata(&pdev->dev); 2384 2384 2385 2385 if (early_serial_console.data) 2386 2386 return -EEXIST; ··· 2469 2469 2470 2470 static int sci_probe(struct platform_device *dev) 2471 2471 { 2472 - struct plat_sci_port *p = dev->dev.platform_data; 2472 + struct plat_sci_port *p = dev_get_platdata(&dev->dev); 2473 2473 struct sci_port *sp = &sci_ports[dev->id]; 2474 2474 int ret; 2475 2475
+981 -214
drivers/tty/serial/sirfsoc_uart.c
··· 20 20 #include <linux/of.h> 21 21 #include <linux/slab.h> 22 22 #include <linux/io.h> 23 + #include <linux/of_gpio.h> 24 + #include <linux/dmaengine.h> 25 + #include <linux/dma-direction.h> 26 + #include <linux/dma-mapping.h> 27 + #include <linux/sirfsoc_dma.h> 23 28 #include <asm/irq.h> 24 29 #include <asm/mach/irq.h> 25 - #include <linux/pinctrl/consumer.h> 26 30 27 31 #include "sirfsoc_uart.h" 28 32 ··· 36 32 sirfsoc_uart_pio_rx_chars(struct uart_port *port, unsigned int max_rx_count); 37 33 static struct uart_driver sirfsoc_uart_drv; 38 34 35 + static void sirfsoc_uart_tx_dma_complete_callback(void *param); 36 + static void sirfsoc_uart_start_next_rx_dma(struct uart_port *port); 37 + static void sirfsoc_uart_rx_dma_complete_callback(void *param); 39 38 static const struct sirfsoc_baudrate_to_regv baudrate_to_regv[] = { 40 39 {4000000, 2359296}, 41 40 {3500000, 1310721}, ··· 96 89 .line = 4, 97 90 }, 98 91 }, 92 + [5] = { 93 + .port = { 94 + .iotype = UPIO_MEM, 95 + .flags = UPF_BOOT_AUTOCONF, 96 + .line = 5, 97 + }, 98 + }, 99 99 }; 100 100 101 101 static inline struct sirfsoc_uart_port *to_sirfport(struct uart_port *port) ··· 113 99 static inline unsigned int sirfsoc_uart_tx_empty(struct uart_port *port) 114 100 { 115 101 unsigned long reg; 116 - reg = rd_regl(port, SIRFUART_TX_FIFO_STATUS); 117 - if (reg & SIRFUART_FIFOEMPTY_MASK(port)) 118 - return TIOCSER_TEMT; 119 - else 120 - return 0; 102 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 103 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 104 + struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status; 105 + reg = rd_regl(port, ureg->sirfsoc_tx_fifo_status); 106 + 107 + return (reg & ufifo_st->ff_empty(port->line)) ? 
TIOCSER_TEMT : 0; 121 108 } 122 109 123 110 static unsigned int sirfsoc_uart_get_mctrl(struct uart_port *port) 124 111 { 125 112 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 126 - if (!(sirfport->ms_enabled)) { 113 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 114 + if (!sirfport->hw_flow_ctrl || !sirfport->ms_enabled) 127 115 goto cts_asserted; 128 - } else if (sirfport->hw_flow_ctrl) { 129 - if (!(rd_regl(port, SIRFUART_AFC_CTRL) & 130 - SIRFUART_CTS_IN_STATUS)) 116 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 117 + if (!(rd_regl(port, ureg->sirfsoc_afc_ctrl) & 118 + SIRFUART_AFC_CTS_STATUS)) 119 + goto cts_asserted; 120 + else 121 + goto cts_deasserted; 122 + } else { 123 + if (!gpio_get_value(sirfport->cts_gpio)) 131 124 goto cts_asserted; 132 125 else 133 126 goto cts_deasserted; ··· 148 127 static void sirfsoc_uart_set_mctrl(struct uart_port *port, unsigned int mctrl) 149 128 { 150 129 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 130 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 151 131 unsigned int assert = mctrl & TIOCM_RTS; 152 132 unsigned int val = assert ? 
SIRFUART_AFC_CTRL_RX_THD : 0x0; 153 133 unsigned int current_val; 154 - if (sirfport->hw_flow_ctrl) { 155 - current_val = rd_regl(port, SIRFUART_AFC_CTRL) & ~0xFF; 134 + 135 + if (!sirfport->hw_flow_ctrl || !sirfport->ms_enabled) 136 + return; 137 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 138 + current_val = rd_regl(port, ureg->sirfsoc_afc_ctrl) & ~0xFF; 156 139 val |= current_val; 157 - wr_regl(port, SIRFUART_AFC_CTRL, val); 140 + wr_regl(port, ureg->sirfsoc_afc_ctrl, val); 141 + } else { 142 + if (!val) 143 + gpio_set_value(sirfport->rts_gpio, 1); 144 + else 145 + gpio_set_value(sirfport->rts_gpio, 0); 158 146 } 159 147 } 160 148 161 149 static void sirfsoc_uart_stop_tx(struct uart_port *port) 162 150 { 163 - unsigned int regv; 164 - regv = rd_regl(port, SIRFUART_INT_EN); 165 - wr_regl(port, SIRFUART_INT_EN, regv & ~SIRFUART_TX_INT_EN); 151 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 152 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 153 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 154 + 155 + if (IS_DMA_CHAN_VALID(sirfport->tx_dma_no)) { 156 + if (sirfport->tx_dma_state == TX_DMA_RUNNING) { 157 + dmaengine_pause(sirfport->tx_dma_chan); 158 + sirfport->tx_dma_state = TX_DMA_PAUSE; 159 + } else { 160 + if (!sirfport->is_marco) 161 + wr_regl(port, ureg->sirfsoc_int_en_reg, 162 + rd_regl(port, ureg->sirfsoc_int_en_reg) & 163 + ~uint_en->sirfsoc_txfifo_empty_en); 164 + else 165 + wr_regl(port, SIRFUART_INT_EN_CLR, 166 + uint_en->sirfsoc_txfifo_empty_en); 167 + } 168 + } else { 169 + if (!sirfport->is_marco) 170 + wr_regl(port, ureg->sirfsoc_int_en_reg, 171 + rd_regl(port, ureg->sirfsoc_int_en_reg) & 172 + ~uint_en->sirfsoc_txfifo_empty_en); 173 + else 174 + wr_regl(port, SIRFUART_INT_EN_CLR, 175 + uint_en->sirfsoc_txfifo_empty_en); 176 + } 166 177 } 167 178 168 - void sirfsoc_uart_start_tx(struct uart_port *port) 179 + static void sirfsoc_uart_tx_with_dma(struct sirfsoc_uart_port *sirfport) 180 + 
{ 181 + struct uart_port *port = &sirfport->port; 182 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 183 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 184 + struct circ_buf *xmit = &port->state->xmit; 185 + unsigned long tran_size; 186 + unsigned long tran_start; 187 + unsigned long pio_tx_size; 188 + 189 + tran_size = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 190 + tran_start = (unsigned long)(xmit->buf + xmit->tail); 191 + if (uart_circ_empty(xmit) || uart_tx_stopped(port) || 192 + !tran_size) 193 + return; 194 + if (sirfport->tx_dma_state == TX_DMA_PAUSE) { 195 + dmaengine_resume(sirfport->tx_dma_chan); 196 + return; 197 + } 198 + if (sirfport->tx_dma_state == TX_DMA_RUNNING) 199 + return; 200 + if (!sirfport->is_marco) 201 + wr_regl(port, ureg->sirfsoc_int_en_reg, 202 + rd_regl(port, ureg->sirfsoc_int_en_reg)& 203 + ~(uint_en->sirfsoc_txfifo_empty_en)); 204 + else 205 + wr_regl(port, SIRFUART_INT_EN_CLR, 206 + uint_en->sirfsoc_txfifo_empty_en); 207 + /* 208 + * DMA requires buffer address and buffer length are both aligned with 209 + * 4 bytes, so we use PIO for 210 + * 1. if address is not aligned with 4bytes, use PIO for the first 1~3 211 + * bytes, and move to DMA for the left part aligned with 4bytes 212 + * 2. 
if buffer length is not aligned with 4bytes, use DMA for aligned 213 + * part first, move to PIO for the left 1~3 bytes 214 + */ 215 + if (tran_size < 4 || BYTES_TO_ALIGN(tran_start)) { 216 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_STOP); 217 + wr_regl(port, ureg->sirfsoc_tx_dma_io_ctrl, 218 + rd_regl(port, ureg->sirfsoc_tx_dma_io_ctrl)| 219 + SIRFUART_IO_MODE); 220 + if (BYTES_TO_ALIGN(tran_start)) { 221 + pio_tx_size = sirfsoc_uart_pio_tx_chars(sirfport, 222 + BYTES_TO_ALIGN(tran_start)); 223 + tran_size -= pio_tx_size; 224 + } 225 + if (tran_size < 4) 226 + sirfsoc_uart_pio_tx_chars(sirfport, tran_size); 227 + if (!sirfport->is_marco) 228 + wr_regl(port, ureg->sirfsoc_int_en_reg, 229 + rd_regl(port, ureg->sirfsoc_int_en_reg)| 230 + uint_en->sirfsoc_txfifo_empty_en); 231 + else 232 + wr_regl(port, ureg->sirfsoc_int_en_reg, 233 + uint_en->sirfsoc_txfifo_empty_en); 234 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_START); 235 + } else { 236 + /* tx transfer mode switch into dma mode */ 237 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_STOP); 238 + wr_regl(port, ureg->sirfsoc_tx_dma_io_ctrl, 239 + rd_regl(port, ureg->sirfsoc_tx_dma_io_ctrl)& 240 + ~SIRFUART_IO_MODE); 241 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_START); 242 + tran_size &= ~(0x3); 243 + 244 + sirfport->tx_dma_addr = dma_map_single(port->dev, 245 + xmit->buf + xmit->tail, 246 + tran_size, DMA_TO_DEVICE); 247 + sirfport->tx_dma_desc = dmaengine_prep_slave_single( 248 + sirfport->tx_dma_chan, sirfport->tx_dma_addr, 249 + tran_size, DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT); 250 + if (!sirfport->tx_dma_desc) { 251 + dev_err(port->dev, "DMA prep slave single fail\n"); 252 + return; 253 + } 254 + sirfport->tx_dma_desc->callback = 255 + sirfsoc_uart_tx_dma_complete_callback; 256 + sirfport->tx_dma_desc->callback_param = (void *)sirfport; 257 + sirfport->transfer_size = tran_size; 258 + 259 + dmaengine_submit(sirfport->tx_dma_desc); 260 + 
dma_async_issue_pending(sirfport->tx_dma_chan); 261 + sirfport->tx_dma_state = TX_DMA_RUNNING; 262 + } 263 + } 264 + 265 + static void sirfsoc_uart_start_tx(struct uart_port *port) 169 266 { 170 267 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 171 - unsigned long regv; 172 - sirfsoc_uart_pio_tx_chars(sirfport, 1); 173 - wr_regl(port, SIRFUART_TX_FIFO_OP, SIRFUART_TX_FIFO_START); 174 - regv = rd_regl(port, SIRFUART_INT_EN); 175 - wr_regl(port, SIRFUART_INT_EN, regv | SIRFUART_TX_INT_EN); 268 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 269 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 270 + if (IS_DMA_CHAN_VALID(sirfport->tx_dma_no)) 271 + sirfsoc_uart_tx_with_dma(sirfport); 272 + else { 273 + sirfsoc_uart_pio_tx_chars(sirfport, 1); 274 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_START); 275 + if (!sirfport->is_marco) 276 + wr_regl(port, ureg->sirfsoc_int_en_reg, 277 + rd_regl(port, ureg->sirfsoc_int_en_reg)| 278 + uint_en->sirfsoc_txfifo_empty_en); 279 + else 280 + wr_regl(port, ureg->sirfsoc_int_en_reg, 281 + uint_en->sirfsoc_txfifo_empty_en); 282 + } 176 283 } 177 284 178 285 static void sirfsoc_uart_stop_rx(struct uart_port *port) 179 286 { 180 - unsigned long regv; 181 - wr_regl(port, SIRFUART_RX_FIFO_OP, 0); 182 - regv = rd_regl(port, SIRFUART_INT_EN); 183 - wr_regl(port, SIRFUART_INT_EN, regv & ~SIRFUART_RX_IO_INT_EN); 287 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 288 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 289 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 290 + 291 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 292 + if (IS_DMA_CHAN_VALID(sirfport->rx_dma_no)) { 293 + if (!sirfport->is_marco) 294 + wr_regl(port, ureg->sirfsoc_int_en_reg, 295 + rd_regl(port, ureg->sirfsoc_int_en_reg) & 296 + ~(SIRFUART_RX_DMA_INT_EN(port, uint_en) | 297 + uint_en->sirfsoc_rx_done_en)); 298 + else 299 + wr_regl(port, SIRFUART_INT_EN_CLR, 300 + 
SIRFUART_RX_DMA_INT_EN(port, uint_en)| 301 + uint_en->sirfsoc_rx_done_en); 302 + dmaengine_terminate_all(sirfport->rx_dma_chan); 303 + } else { 304 + if (!sirfport->is_marco) 305 + wr_regl(port, ureg->sirfsoc_int_en_reg, 306 + rd_regl(port, ureg->sirfsoc_int_en_reg)& 307 + ~(SIRFUART_RX_IO_INT_EN(port, uint_en))); 308 + else 309 + wr_regl(port, SIRFUART_INT_EN_CLR, 310 + SIRFUART_RX_IO_INT_EN(port, uint_en)); 311 + } 184 312 } 185 313 186 314 static void sirfsoc_uart_disable_ms(struct uart_port *port) 187 315 { 188 316 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 189 - unsigned long reg; 190 - sirfport->ms_enabled = 0; 317 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 318 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 319 + 191 320 if (!sirfport->hw_flow_ctrl) 192 321 return; 193 - reg = rd_regl(port, SIRFUART_AFC_CTRL); 194 - wr_regl(port, SIRFUART_AFC_CTRL, reg & ~0x3FF); 195 - reg = rd_regl(port, SIRFUART_INT_EN); 196 - wr_regl(port, SIRFUART_INT_EN, reg & ~SIRFUART_CTS_INT_EN); 322 + sirfport->ms_enabled = false; 323 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 324 + wr_regl(port, ureg->sirfsoc_afc_ctrl, 325 + rd_regl(port, ureg->sirfsoc_afc_ctrl) & ~0x3FF); 326 + if (!sirfport->is_marco) 327 + wr_regl(port, ureg->sirfsoc_int_en_reg, 328 + rd_regl(port, ureg->sirfsoc_int_en_reg)& 329 + ~uint_en->sirfsoc_cts_en); 330 + else 331 + wr_regl(port, SIRFUART_INT_EN_CLR, 332 + uint_en->sirfsoc_cts_en); 333 + } else 334 + disable_irq(gpio_to_irq(sirfport->cts_gpio)); 335 + } 336 + 337 + static irqreturn_t sirfsoc_uart_usp_cts_handler(int irq, void *dev_id) 338 + { 339 + struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)dev_id; 340 + struct uart_port *port = &sirfport->port; 341 + if (gpio_is_valid(sirfport->cts_gpio) && sirfport->ms_enabled) 342 + uart_handle_cts_change(port, 343 + !gpio_get_value(sirfport->cts_gpio)); 344 + return IRQ_HANDLED; 197 345 } 198 346 199 347 static void 
sirfsoc_uart_enable_ms(struct uart_port *port) 200 348 { 201 349 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 202 - unsigned long reg; 203 - unsigned long flg; 350 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 351 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 352 + 204 353 if (!sirfport->hw_flow_ctrl) 205 354 return; 206 - flg = SIRFUART_AFC_RX_EN | SIRFUART_AFC_TX_EN; 207 - reg = rd_regl(port, SIRFUART_AFC_CTRL); 208 - wr_regl(port, SIRFUART_AFC_CTRL, reg | flg); 209 - reg = rd_regl(port, SIRFUART_INT_EN); 210 - wr_regl(port, SIRFUART_INT_EN, reg | SIRFUART_CTS_INT_EN); 211 - uart_handle_cts_change(port, 212 - !(rd_regl(port, SIRFUART_AFC_CTRL) & SIRFUART_CTS_IN_STATUS)); 213 - sirfport->ms_enabled = 1; 355 + sirfport->ms_enabled = true; 356 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 357 + wr_regl(port, ureg->sirfsoc_afc_ctrl, 358 + rd_regl(port, ureg->sirfsoc_afc_ctrl) | 359 + SIRFUART_AFC_TX_EN | SIRFUART_AFC_RX_EN); 360 + if (!sirfport->is_marco) 361 + wr_regl(port, ureg->sirfsoc_int_en_reg, 362 + rd_regl(port, ureg->sirfsoc_int_en_reg) 363 + | uint_en->sirfsoc_cts_en); 364 + else 365 + wr_regl(port, ureg->sirfsoc_int_en_reg, 366 + uint_en->sirfsoc_cts_en); 367 + } else 368 + enable_irq(gpio_to_irq(sirfport->cts_gpio)); 214 369 } 215 370 216 371 static void sirfsoc_uart_break_ctl(struct uart_port *port, int break_state) 217 372 { 218 - unsigned long ulcon = rd_regl(port, SIRFUART_LINE_CTRL); 219 - if (break_state) 220 - ulcon |= SIRFUART_SET_BREAK; 221 - else 222 - ulcon &= ~SIRFUART_SET_BREAK; 223 - wr_regl(port, SIRFUART_LINE_CTRL, ulcon); 373 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 374 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 375 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 376 + unsigned long ulcon = rd_regl(port, ureg->sirfsoc_line_ctrl); 377 + if (break_state) 378 + ulcon |= SIRFUART_SET_BREAK; 379 + else 380 + ulcon &= 
~SIRFUART_SET_BREAK; 381 + wr_regl(port, ureg->sirfsoc_line_ctrl, ulcon); 382 + } 224 383 } 225 384 226 385 static unsigned int 227 386 sirfsoc_uart_pio_rx_chars(struct uart_port *port, unsigned int max_rx_count) 228 387 { 388 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 389 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 390 + struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status; 229 391 unsigned int ch, rx_count = 0; 230 - 231 - while (!(rd_regl(port, SIRFUART_RX_FIFO_STATUS) & 232 - SIRFUART_FIFOEMPTY_MASK(port))) { 233 - ch = rd_regl(port, SIRFUART_RX_FIFO_DATA) | SIRFUART_DUMMY_READ; 392 + struct tty_struct *tty; 393 + tty = tty_port_tty_get(&port->state->port); 394 + if (!tty) 395 + return -ENODEV; 396 + while (!(rd_regl(port, ureg->sirfsoc_rx_fifo_status) & 397 + ufifo_st->ff_empty(port->line))) { 398 + ch = rd_regl(port, ureg->sirfsoc_rx_fifo_data) | 399 + SIRFUART_DUMMY_READ; 234 400 if (unlikely(uart_handle_sysrq_char(port, ch))) 235 401 continue; 236 402 uart_insert_char(port, 0, 0, ch, TTY_NORMAL); ··· 426 218 break; 427 219 } 428 220 221 + sirfport->rx_io_count += rx_count; 429 222 port->icount.rx += rx_count; 223 + 224 + spin_unlock(&port->lock); 430 225 tty_flip_buffer_push(&port->state->port); 226 + spin_lock(&port->lock); 431 227 432 228 return rx_count; 433 229 } ··· 440 228 sirfsoc_uart_pio_tx_chars(struct sirfsoc_uart_port *sirfport, int count) 441 229 { 442 230 struct uart_port *port = &sirfport->port; 231 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 232 + struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status; 443 233 struct circ_buf *xmit = &port->state->xmit; 444 234 unsigned int num_tx = 0; 445 235 while (!uart_circ_empty(xmit) && 446 - !(rd_regl(port, SIRFUART_TX_FIFO_STATUS) & 447 - SIRFUART_FIFOFULL_MASK(port)) && 236 + !(rd_regl(port, ureg->sirfsoc_tx_fifo_status) & 237 + ufifo_st->ff_full(port->line)) && 448 238 count--) { 449 - wr_regl(port, 
SIRFUART_TX_FIFO_DATA, xmit->buf[xmit->tail]); 239 + wr_regl(port, ureg->sirfsoc_tx_fifo_data, 240 + xmit->buf[xmit->tail]); 450 241 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 451 242 port->icount.tx++; 452 243 num_tx++; ··· 459 244 return num_tx; 460 245 } 461 246 247 + static void sirfsoc_uart_tx_dma_complete_callback(void *param) 248 + { 249 + struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 250 + struct uart_port *port = &sirfport->port; 251 + struct circ_buf *xmit = &port->state->xmit; 252 + unsigned long flags; 253 + 254 + xmit->tail = (xmit->tail + sirfport->transfer_size) & 255 + (UART_XMIT_SIZE - 1); 256 + port->icount.tx += sirfport->transfer_size; 257 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 258 + uart_write_wakeup(port); 259 + if (sirfport->tx_dma_addr) 260 + dma_unmap_single(port->dev, sirfport->tx_dma_addr, 261 + sirfport->transfer_size, DMA_TO_DEVICE); 262 + spin_lock_irqsave(&sirfport->tx_lock, flags); 263 + sirfport->tx_dma_state = TX_DMA_IDLE; 264 + sirfsoc_uart_tx_with_dma(sirfport); 265 + spin_unlock_irqrestore(&sirfport->tx_lock, flags); 266 + } 267 + 268 + static void sirfsoc_uart_insert_rx_buf_to_tty( 269 + struct sirfsoc_uart_port *sirfport, int count) 270 + { 271 + struct uart_port *port = &sirfport->port; 272 + struct tty_port *tport = &port->state->port; 273 + int inserted; 274 + 275 + inserted = tty_insert_flip_string(tport, 276 + sirfport->rx_dma_items[sirfport->rx_completed].xmit.buf, count); 277 + port->icount.rx += inserted; 278 + tty_flip_buffer_push(tport); 279 + } 280 + 281 + static void sirfsoc_rx_submit_one_dma_desc(struct uart_port *port, int index) 282 + { 283 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 284 + 285 + sirfport->rx_dma_items[index].xmit.tail = 286 + sirfport->rx_dma_items[index].xmit.head = 0; 287 + sirfport->rx_dma_items[index].desc = 288 + dmaengine_prep_slave_single(sirfport->rx_dma_chan, 289 + sirfport->rx_dma_items[index].dma_addr, 
SIRFSOC_RX_DMA_BUF_SIZE, 290 + DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT); 291 + if (!sirfport->rx_dma_items[index].desc) { 292 + dev_err(port->dev, "DMA slave single fail\n"); 293 + return; 294 + } 295 + sirfport->rx_dma_items[index].desc->callback = 296 + sirfsoc_uart_rx_dma_complete_callback; 297 + sirfport->rx_dma_items[index].desc->callback_param = sirfport; 298 + sirfport->rx_dma_items[index].cookie = 299 + dmaengine_submit(sirfport->rx_dma_items[index].desc); 300 + dma_async_issue_pending(sirfport->rx_dma_chan); 301 + } 302 + 303 + static void sirfsoc_rx_tmo_process_tl(unsigned long param) 304 + { 305 + struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 306 + struct uart_port *port = &sirfport->port; 307 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 308 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 309 + struct sirfsoc_int_status *uint_st = &sirfport->uart_reg->uart_int_st; 310 + unsigned int count; 311 + unsigned long flags; 312 + 313 + spin_lock_irqsave(&sirfport->rx_lock, flags); 314 + while (sirfport->rx_completed != sirfport->rx_issued) { 315 + sirfsoc_uart_insert_rx_buf_to_tty(sirfport, 316 + SIRFSOC_RX_DMA_BUF_SIZE); 317 + sirfsoc_rx_submit_one_dma_desc(port, sirfport->rx_completed++); 318 + sirfport->rx_completed %= SIRFSOC_RX_LOOP_BUF_CNT; 319 + } 320 + count = CIRC_CNT(sirfport->rx_dma_items[sirfport->rx_issued].xmit.head, 321 + sirfport->rx_dma_items[sirfport->rx_issued].xmit.tail, 322 + SIRFSOC_RX_DMA_BUF_SIZE); 323 + if (count > 0) 324 + sirfsoc_uart_insert_rx_buf_to_tty(sirfport, count); 325 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 326 + rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) | 327 + SIRFUART_IO_MODE); 328 + sirfsoc_uart_pio_rx_chars(port, 4 - sirfport->rx_io_count); 329 + spin_unlock_irqrestore(&sirfport->rx_lock, flags); 330 + if (sirfport->rx_io_count == 4) { 331 + spin_lock_irqsave(&sirfport->rx_lock, flags); 332 + sirfport->rx_io_count = 0; 333 + wr_regl(port, 
ureg->sirfsoc_int_st_reg, 334 + uint_st->sirfsoc_rx_done); 335 + if (!sirfport->is_marco) 336 + wr_regl(port, ureg->sirfsoc_int_en_reg, 337 + rd_regl(port, ureg->sirfsoc_int_en_reg) & 338 + ~(uint_en->sirfsoc_rx_done_en)); 339 + else 340 + wr_regl(port, SIRFUART_INT_EN_CLR, 341 + uint_en->sirfsoc_rx_done_en); 342 + spin_unlock_irqrestore(&sirfport->rx_lock, flags); 343 + 344 + sirfsoc_uart_start_next_rx_dma(port); 345 + } else { 346 + spin_lock_irqsave(&sirfport->rx_lock, flags); 347 + wr_regl(port, ureg->sirfsoc_int_st_reg, 348 + uint_st->sirfsoc_rx_done); 349 + if (!sirfport->is_marco) 350 + wr_regl(port, ureg->sirfsoc_int_en_reg, 351 + rd_regl(port, ureg->sirfsoc_int_en_reg) | 352 + (uint_en->sirfsoc_rx_done_en)); 353 + else 354 + wr_regl(port, ureg->sirfsoc_int_en_reg, 355 + uint_en->sirfsoc_rx_done_en); 356 + spin_unlock_irqrestore(&sirfport->rx_lock, flags); 357 + } 358 + } 359 + 360 + static void sirfsoc_uart_handle_rx_tmo(struct sirfsoc_uart_port *sirfport) 361 + { 362 + struct uart_port *port = &sirfport->port; 363 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 364 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 365 + struct dma_tx_state tx_state; 366 + spin_lock(&sirfport->rx_lock); 367 + 368 + dmaengine_tx_status(sirfport->rx_dma_chan, 369 + sirfport->rx_dma_items[sirfport->rx_issued].cookie, &tx_state); 370 + dmaengine_terminate_all(sirfport->rx_dma_chan); 371 + sirfport->rx_dma_items[sirfport->rx_issued].xmit.head = 372 + SIRFSOC_RX_DMA_BUF_SIZE - tx_state.residue; 373 + if (!sirfport->is_marco) 374 + wr_regl(port, ureg->sirfsoc_int_en_reg, 375 + rd_regl(port, ureg->sirfsoc_int_en_reg) & 376 + ~(uint_en->sirfsoc_rx_timeout_en)); 377 + else 378 + wr_regl(port, SIRFUART_INT_EN_CLR, 379 + uint_en->sirfsoc_rx_timeout_en); 380 + spin_unlock(&sirfport->rx_lock); 381 + tasklet_schedule(&sirfport->rx_tmo_process_tasklet); 382 + } 383 + 384 + static void sirfsoc_uart_handle_rx_done(struct sirfsoc_uart_port *sirfport) 385 + 
{ 386 + struct uart_port *port = &sirfport->port; 387 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 388 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 389 + struct sirfsoc_int_status *uint_st = &sirfport->uart_reg->uart_int_st; 390 + 391 + sirfsoc_uart_pio_rx_chars(port, 4 - sirfport->rx_io_count); 392 + if (sirfport->rx_io_count == 4) { 393 + sirfport->rx_io_count = 0; 394 + if (!sirfport->is_marco) 395 + wr_regl(port, ureg->sirfsoc_int_en_reg, 396 + rd_regl(port, ureg->sirfsoc_int_en_reg) & 397 + ~(uint_en->sirfsoc_rx_done_en)); 398 + else 399 + wr_regl(port, SIRFUART_INT_EN_CLR, 400 + uint_en->sirfsoc_rx_done_en); 401 + wr_regl(port, ureg->sirfsoc_int_st_reg, 402 + uint_st->sirfsoc_rx_timeout); 403 + sirfsoc_uart_start_next_rx_dma(port); 404 + } 405 + } 406 + 462 407 static irqreturn_t sirfsoc_uart_isr(int irq, void *dev_id) 463 408 { 464 409 unsigned long intr_status; ··· 626 251 unsigned long flag = TTY_NORMAL; 627 252 struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)dev_id; 628 253 struct uart_port *port = &sirfport->port; 254 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 255 + struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status; 256 + struct sirfsoc_int_status *uint_st = &sirfport->uart_reg->uart_int_st; 257 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 629 258 struct uart_state *state = port->state; 630 259 struct circ_buf *xmit = &port->state->xmit; 631 260 spin_lock(&port->lock); 632 - intr_status = rd_regl(port, SIRFUART_INT_STATUS); 633 - wr_regl(port, SIRFUART_INT_STATUS, intr_status); 634 - intr_status &= rd_regl(port, SIRFUART_INT_EN); 635 - if (unlikely(intr_status & (SIRFUART_ERR_INT_STAT))) { 636 - if (intr_status & SIRFUART_RXD_BREAK) { 261 + intr_status = rd_regl(port, ureg->sirfsoc_int_st_reg); 262 + wr_regl(port, ureg->sirfsoc_int_st_reg, intr_status); 263 + intr_status &= rd_regl(port, ureg->sirfsoc_int_en_reg); 264 + if 
(unlikely(intr_status & (SIRFUART_ERR_INT_STAT(port, uint_st)))) { 265 + if (intr_status & uint_st->sirfsoc_rxd_brk) { 266 + port->icount.brk++; 637 267 if (uart_handle_break(port)) 638 268 goto recv_char; 639 - uart_insert_char(port, intr_status, 640 - SIRFUART_RX_OFLOW, 0, TTY_BREAK); 641 - spin_unlock(&port->lock); 642 - return IRQ_HANDLED; 643 269 } 644 - if (intr_status & SIRFUART_RX_OFLOW) 270 + if (intr_status & uint_st->sirfsoc_rx_oflow) 645 271 port->icount.overrun++; 646 - if (intr_status & SIRFUART_FRM_ERR) { 272 + if (intr_status & uint_st->sirfsoc_frm_err) { 647 273 port->icount.frame++; 648 274 flag = TTY_FRAME; 649 275 } 650 - if (intr_status & SIRFUART_PARITY_ERR) 276 + if (intr_status & uint_st->sirfsoc_parity_err) 651 277 flag = TTY_PARITY; 652 - wr_regl(port, SIRFUART_RX_FIFO_OP, SIRFUART_RX_FIFO_RESET); 653 - wr_regl(port, SIRFUART_RX_FIFO_OP, 0); 654 - wr_regl(port, SIRFUART_RX_FIFO_OP, SIRFUART_RX_FIFO_START); 278 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_RESET); 279 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 280 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_START); 655 281 intr_status &= port->read_status_mask; 656 282 uart_insert_char(port, intr_status, 657 - SIRFUART_RX_OFLOW_INT, 0, flag); 283 + uint_en->sirfsoc_rx_oflow_en, 0, flag); 284 + tty_flip_buffer_push(&state->port); 658 285 } 659 286 recv_char: 660 - if (intr_status & SIRFUART_CTS_INT_EN) { 661 - cts_status = !(rd_regl(port, SIRFUART_AFC_CTRL) & 662 - SIRFUART_CTS_IN_STATUS); 663 - if (cts_status != 0) { 664 - uart_handle_cts_change(port, 1); 665 - } else { 666 - uart_handle_cts_change(port, 0); 667 - wake_up_interruptible(&state->port.delta_msr_wait); 668 - } 287 + if ((sirfport->uart_reg->uart_type == SIRF_REAL_UART) && 288 + (intr_status & SIRFUART_CTS_INT_ST(uint_st)) && 289 + !sirfport->tx_dma_state) { 290 + cts_status = rd_regl(port, ureg->sirfsoc_afc_ctrl) & 291 + SIRFUART_AFC_CTS_STATUS; 292 + if (cts_status != 0) 293 + cts_status = 0; 294 + 
else 295 + cts_status = 1; 296 + uart_handle_cts_change(port, cts_status); 297 + wake_up_interruptible(&state->port.delta_msr_wait); 669 298 } 670 - if (intr_status & SIRFUART_RX_IO_INT_EN) 671 - sirfsoc_uart_pio_rx_chars(port, SIRFSOC_UART_IO_RX_MAX_CNT); 672 - if (intr_status & SIRFUART_TX_INT_EN) { 673 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 674 - spin_unlock(&port->lock); 675 - return IRQ_HANDLED; 676 - } else { 677 - sirfsoc_uart_pio_tx_chars(sirfport, 299 + if (IS_DMA_CHAN_VALID(sirfport->rx_dma_no)) { 300 + if (intr_status & uint_st->sirfsoc_rx_timeout) 301 + sirfsoc_uart_handle_rx_tmo(sirfport); 302 + if (intr_status & uint_st->sirfsoc_rx_done) 303 + sirfsoc_uart_handle_rx_done(sirfport); 304 + } else { 305 + if (intr_status & SIRFUART_RX_IO_INT_ST(uint_st)) 306 + sirfsoc_uart_pio_rx_chars(port, 307 + SIRFSOC_UART_IO_RX_MAX_CNT); 308 + } 309 + if (intr_status & uint_st->sirfsoc_txfifo_empty) { 310 + if (IS_DMA_CHAN_VALID(sirfport->tx_dma_no)) 311 + sirfsoc_uart_tx_with_dma(sirfport); 312 + else { 313 + if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 314 + spin_unlock(&port->lock); 315 + return IRQ_HANDLED; 316 + } else { 317 + sirfsoc_uart_pio_tx_chars(sirfport, 678 318 SIRFSOC_UART_IO_TX_REASONABLE_CNT); 679 - if ((uart_circ_empty(xmit)) && 680 - (rd_regl(port, SIRFUART_TX_FIFO_STATUS) & 681 - SIRFUART_FIFOEMPTY_MASK(port))) 682 - sirfsoc_uart_stop_tx(port); 319 + if ((uart_circ_empty(xmit)) && 320 + (rd_regl(port, ureg->sirfsoc_tx_fifo_status) & 321 + ufifo_st->ff_empty(port->line))) 322 + sirfsoc_uart_stop_tx(port); 323 + } 683 324 } 684 325 } 685 326 spin_unlock(&port->lock); 686 327 return IRQ_HANDLED; 687 328 } 688 329 330 + static void sirfsoc_uart_rx_dma_complete_tl(unsigned long param) 331 + { 332 + struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 333 + struct uart_port *port = &sirfport->port; 334 + unsigned long flags; 335 + spin_lock_irqsave(&sirfport->rx_lock, flags); 336 + while 
(sirfport->rx_completed != sirfport->rx_issued) { 337 + sirfsoc_uart_insert_rx_buf_to_tty(sirfport, 338 + SIRFSOC_RX_DMA_BUF_SIZE); 339 + sirfsoc_rx_submit_one_dma_desc(port, sirfport->rx_completed++); 340 + sirfport->rx_completed %= SIRFSOC_RX_LOOP_BUF_CNT; 341 + } 342 + spin_unlock_irqrestore(&sirfport->rx_lock, flags); 343 + } 344 + 345 + static void sirfsoc_uart_rx_dma_complete_callback(void *param) 346 + { 347 + struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 348 + spin_lock(&sirfport->rx_lock); 349 + sirfport->rx_issued++; 350 + sirfport->rx_issued %= SIRFSOC_RX_LOOP_BUF_CNT; 351 + spin_unlock(&sirfport->rx_lock); 352 + tasklet_schedule(&sirfport->rx_dma_complete_tasklet); 353 + } 354 + 355 + /* submit rx dma task into dmaengine */ 356 + static void sirfsoc_uart_start_next_rx_dma(struct uart_port *port) 357 + { 358 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 359 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 360 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 361 + unsigned long flags; 362 + int i; 363 + spin_lock_irqsave(&sirfport->rx_lock, flags); 364 + sirfport->rx_io_count = 0; 365 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 366 + rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) & 367 + ~SIRFUART_IO_MODE); 368 + spin_unlock_irqrestore(&sirfport->rx_lock, flags); 369 + for (i = 0; i < SIRFSOC_RX_LOOP_BUF_CNT; i++) 370 + sirfsoc_rx_submit_one_dma_desc(port, i); 371 + sirfport->rx_completed = sirfport->rx_issued = 0; 372 + spin_lock_irqsave(&sirfport->rx_lock, flags); 373 + if (!sirfport->is_marco) 374 + wr_regl(port, ureg->sirfsoc_int_en_reg, 375 + rd_regl(port, ureg->sirfsoc_int_en_reg) | 376 + SIRFUART_RX_DMA_INT_EN(port, uint_en)); 377 + else 378 + wr_regl(port, ureg->sirfsoc_int_en_reg, 379 + SIRFUART_RX_DMA_INT_EN(port, uint_en)); 380 + spin_unlock_irqrestore(&sirfport->rx_lock, flags); 381 + } 382 + 689 383 static void sirfsoc_uart_start_rx(struct uart_port *port) 690 384 { 691 
- unsigned long regv; 692 - regv = rd_regl(port, SIRFUART_INT_EN); 693 - wr_regl(port, SIRFUART_INT_EN, regv | SIRFUART_RX_IO_INT_EN); 694 - wr_regl(port, SIRFUART_RX_FIFO_OP, SIRFUART_RX_FIFO_RESET); 695 - wr_regl(port, SIRFUART_RX_FIFO_OP, 0); 696 - wr_regl(port, SIRFUART_RX_FIFO_OP, SIRFUART_RX_FIFO_START); 385 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 386 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 387 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 388 + 389 + sirfport->rx_io_count = 0; 390 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_RESET); 391 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 392 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_START); 393 + if (IS_DMA_CHAN_VALID(sirfport->rx_dma_no)) 394 + sirfsoc_uart_start_next_rx_dma(port); 395 + else { 396 + if (!sirfport->is_marco) 397 + wr_regl(port, ureg->sirfsoc_int_en_reg, 398 + rd_regl(port, ureg->sirfsoc_int_en_reg) | 399 + SIRFUART_RX_IO_INT_EN(port, uint_en)); 400 + else 401 + wr_regl(port, ureg->sirfsoc_int_en_reg, 402 + SIRFUART_RX_IO_INT_EN(port, uint_en)); 403 + } 697 404 } 698 405 699 406 static unsigned int 700 - sirfsoc_calc_sample_div(unsigned long baud_rate, 701 - unsigned long ioclk_rate, unsigned long *setted_baud) 407 + sirfsoc_usp_calc_sample_div(unsigned long set_rate, 408 + unsigned long ioclk_rate, unsigned long *sample_reg) 409 + { 410 + unsigned long min_delta = ~0UL; 411 + unsigned short sample_div; 412 + unsigned long ioclk_div = 0; 413 + unsigned long temp_delta; 414 + 415 + for (sample_div = SIRF_MIN_SAMPLE_DIV; 416 + sample_div <= SIRF_MAX_SAMPLE_DIV; sample_div++) { 417 + temp_delta = ioclk_rate - 418 + (ioclk_rate + (set_rate * sample_div) / 2) 419 + / (set_rate * sample_div) * set_rate * sample_div; 420 + 421 + temp_delta = (temp_delta > 0) ? 
temp_delta : -temp_delta; 422 + if (temp_delta < min_delta) { 423 + ioclk_div = (2 * ioclk_rate / 424 + (set_rate * sample_div) + 1) / 2 - 1; 425 + if (ioclk_div > SIRF_IOCLK_DIV_MAX) 426 + continue; 427 + min_delta = temp_delta; 428 + *sample_reg = sample_div; 429 + if (!temp_delta) 430 + break; 431 + } 432 + } 433 + return ioclk_div; 434 + } 435 + 436 + static unsigned int 437 + sirfsoc_uart_calc_sample_div(unsigned long baud_rate, 438 + unsigned long ioclk_rate, unsigned long *set_baud) 702 439 { 703 440 unsigned long min_delta = ~0UL; 704 441 unsigned short sample_div; ··· 833 346 regv = regv & (~SIRF_SAMPLE_DIV_MASK); 834 347 regv = regv | (sample_div << SIRF_SAMPLE_DIV_SHIFT); 835 348 min_delta = temp_delta; 836 - *setted_baud = baud_tmp; 349 + *set_baud = baud_tmp; 837 350 } 838 351 } 839 352 return regv; ··· 844 357 struct ktermios *old) 845 358 { 846 359 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 360 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 361 + struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 847 362 unsigned long config_reg = 0; 848 363 unsigned long baud_rate; 849 - unsigned long setted_baud; 364 + unsigned long set_baud; 850 365 unsigned long flags; 851 366 unsigned long ic; 852 367 unsigned int clk_div_reg = 0; 853 - unsigned long temp_reg_val; 368 + unsigned long txfifo_op_reg, ioclk_rate; 854 369 unsigned long rx_time_out; 855 370 int threshold_div; 856 - int temp; 371 + u32 data_bit_len, stop_bit_len, len_val; 372 + unsigned long sample_div_reg = 0xf; 373 + ioclk_rate = port->uartclk; 857 374 858 375 switch (termios->c_cflag & CSIZE) { 859 376 default: 860 377 case CS8: 378 + data_bit_len = 8; 861 379 config_reg |= SIRFUART_DATA_BIT_LEN_8; 862 380 break; 863 381 case CS7: 382 + data_bit_len = 7; 864 383 config_reg |= SIRFUART_DATA_BIT_LEN_7; 865 384 break; 866 385 case CS6: 386 + data_bit_len = 6; 867 387 config_reg |= SIRFUART_DATA_BIT_LEN_6; 868 388 break; 869 389 case CS5: 390 + 
data_bit_len = 5; 870 391 config_reg |= SIRFUART_DATA_BIT_LEN_5; 871 392 break; 872 393 } 873 - if (termios->c_cflag & CSTOPB) 394 + if (termios->c_cflag & CSTOPB) { 874 395 config_reg |= SIRFUART_STOP_BIT_LEN_2; 875 - baud_rate = uart_get_baud_rate(port, termios, old, 0, 4000000); 396 + stop_bit_len = 2; 397 + } else 398 + stop_bit_len = 1; 399 + 876 400 spin_lock_irqsave(&port->lock, flags); 877 - port->read_status_mask = SIRFUART_RX_OFLOW_INT; 401 + port->read_status_mask = uint_en->sirfsoc_rx_oflow_en; 878 402 port->ignore_status_mask = 0; 879 - /* read flags */ 880 - if (termios->c_iflag & INPCK) 881 - port->read_status_mask |= 882 - SIRFUART_FRM_ERR_INT | SIRFUART_PARITY_ERR_INT; 403 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 404 + if (termios->c_iflag & INPCK) 405 + port->read_status_mask |= uint_en->sirfsoc_frm_err_en | 406 + uint_en->sirfsoc_parity_err_en; 407 + } else { 408 + if (termios->c_iflag & INPCK) 409 + port->read_status_mask |= uint_en->sirfsoc_frm_err_en; 410 + } 883 411 if (termios->c_iflag & (BRKINT | PARMRK)) 884 - port->read_status_mask |= SIRFUART_RXD_BREAK_INT; 885 - /* ignore flags */ 886 - if (termios->c_iflag & IGNPAR) 412 + port->read_status_mask |= uint_en->sirfsoc_rxd_brk_en; 413 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 414 + if (termios->c_iflag & IGNPAR) 415 + port->ignore_status_mask |= 416 + uint_en->sirfsoc_frm_err_en | 417 + uint_en->sirfsoc_parity_err_en; 418 + if (termios->c_cflag & PARENB) { 419 + if (termios->c_cflag & CMSPAR) { 420 + if (termios->c_cflag & PARODD) 421 + config_reg |= SIRFUART_STICK_BIT_MARK; 422 + else 423 + config_reg |= SIRFUART_STICK_BIT_SPACE; 424 + } else if (termios->c_cflag & PARODD) { 425 + config_reg |= SIRFUART_STICK_BIT_ODD; 426 + } else { 427 + config_reg |= SIRFUART_STICK_BIT_EVEN; 428 + } 429 + } 430 + } else { 431 + if (termios->c_iflag & IGNPAR) 432 + port->ignore_status_mask |= 433 + uint_en->sirfsoc_frm_err_en; 434 + if (termios->c_cflag & PARENB) 435 + 
dev_warn(port->dev, 436 + "USP-UART not support parity err\n"); 437 + } 438 + if (termios->c_iflag & IGNBRK) { 887 439 port->ignore_status_mask |= 888 - SIRFUART_FRM_ERR_INT | SIRFUART_PARITY_ERR_INT; 440 + uint_en->sirfsoc_rxd_brk_en; 441 + if (termios->c_iflag & IGNPAR) 442 + port->ignore_status_mask |= 443 + uint_en->sirfsoc_rx_oflow_en; 444 + } 889 445 if ((termios->c_cflag & CREAD) == 0) 890 446 port->ignore_status_mask |= SIRFUART_DUMMY_READ; 891 - /* enable parity if PARENB is set*/ 892 - if (termios->c_cflag & PARENB) { 893 - if (termios->c_cflag & CMSPAR) { 894 - if (termios->c_cflag & PARODD) 895 - config_reg |= SIRFUART_STICK_BIT_MARK; 896 - else 897 - config_reg |= SIRFUART_STICK_BIT_SPACE; 898 - } else if (termios->c_cflag & PARODD) { 899 - config_reg |= SIRFUART_STICK_BIT_ODD; 900 - } else { 901 - config_reg |= SIRFUART_STICK_BIT_EVEN; 902 - } 903 - } 904 447 /* Hardware Flow Control Settings */ 905 448 if (UART_ENABLE_MS(port, termios->c_cflag)) { 906 449 if (!sirfport->ms_enabled) ··· 939 422 if (sirfport->ms_enabled) 940 423 sirfsoc_uart_disable_ms(port); 941 424 } 942 - 943 - if (port->uartclk == 150000000) { 944 - /* common rate: fast calculation */ 425 + baud_rate = uart_get_baud_rate(port, termios, old, 0, 4000000); 426 + if (ioclk_rate == 150000000) { 945 427 for (ic = 0; ic < SIRF_BAUD_RATE_SUPPORT_NR; ic++) 946 428 if (baud_rate == baudrate_to_regv[ic].baud_rate) 947 429 clk_div_reg = baudrate_to_regv[ic].reg_val; 948 430 } 949 - 950 - setted_baud = baud_rate; 951 - /* arbitary rate setting */ 952 - if (unlikely(clk_div_reg == 0)) 953 - clk_div_reg = sirfsoc_calc_sample_div(baud_rate, port->uartclk, 954 - &setted_baud); 955 - wr_regl(port, SIRFUART_DIVISOR, clk_div_reg); 956 - 431 + set_baud = baud_rate; 432 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 433 + if (unlikely(clk_div_reg == 0)) 434 + clk_div_reg = sirfsoc_uart_calc_sample_div(baud_rate, 435 + ioclk_rate, &set_baud); 436 + wr_regl(port, ureg->sirfsoc_divisor, 
clk_div_reg); 437 + } else { 438 + clk_div_reg = sirfsoc_usp_calc_sample_div(baud_rate, 439 + ioclk_rate, &sample_div_reg); 440 + sample_div_reg--; 441 + set_baud = ((ioclk_rate / (clk_div_reg+1) - 1) / 442 + (sample_div_reg + 1)); 443 + /* setting usp mode 2 */ 444 + len_val = ((1 << SIRFSOC_USP_MODE2_RXD_DELAY_OFFSET) | 445 + (1 << SIRFSOC_USP_MODE2_TXD_DELAY_OFFSET)); 446 + len_val |= ((clk_div_reg & SIRFSOC_USP_MODE2_CLK_DIVISOR_MASK) 447 + << SIRFSOC_USP_MODE2_CLK_DIVISOR_OFFSET); 448 + wr_regl(port, ureg->sirfsoc_mode2, len_val); 449 + } 957 450 if (tty_termios_baud_rate(termios)) 958 - tty_termios_encode_baud_rate(termios, setted_baud, setted_baud); 959 - 960 - /* set receive timeout */ 961 - rx_time_out = SIRFSOC_UART_RX_TIMEOUT(baud_rate, 20000); 962 - rx_time_out = (rx_time_out > 0xFFFF) ? 0xFFFF : rx_time_out; 963 - config_reg |= SIRFUART_RECV_TIMEOUT(rx_time_out); 964 - temp_reg_val = rd_regl(port, SIRFUART_TX_FIFO_OP); 965 - wr_regl(port, SIRFUART_RX_FIFO_OP, 0); 966 - wr_regl(port, SIRFUART_TX_FIFO_OP, 967 - temp_reg_val & ~SIRFUART_TX_FIFO_START); 968 - wr_regl(port, SIRFUART_TX_DMA_IO_CTRL, SIRFUART_TX_MODE_IO); 969 - wr_regl(port, SIRFUART_RX_DMA_IO_CTRL, SIRFUART_RX_MODE_IO); 970 - wr_regl(port, SIRFUART_LINE_CTRL, config_reg); 971 - 451 + tty_termios_encode_baud_rate(termios, set_baud, set_baud); 452 + /* set receive timeout && data bits len */ 453 + rx_time_out = SIRFSOC_UART_RX_TIMEOUT(set_baud, 20000); 454 + rx_time_out = SIRFUART_RECV_TIMEOUT_VALUE(rx_time_out); 455 + txfifo_op_reg = rd_regl(port, ureg->sirfsoc_tx_fifo_op); 456 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_STOP); 457 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, 458 + (txfifo_op_reg & ~SIRFUART_FIFO_START)); 459 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 460 + config_reg |= SIRFUART_RECV_TIMEOUT(port, rx_time_out); 461 + wr_regl(port, ureg->sirfsoc_line_ctrl, config_reg); 462 + } else { 463 + /*tx frame ctrl*/ 464 + len_val = (data_bit_len - 1) << 
SIRFSOC_USP_TX_DATA_LEN_OFFSET; 465 + len_val |= (data_bit_len + 1 + stop_bit_len - 1) << 466 + SIRFSOC_USP_TX_FRAME_LEN_OFFSET; 467 + len_val |= ((data_bit_len - 1) << 468 + SIRFSOC_USP_TX_SHIFTER_LEN_OFFSET); 469 + len_val |= (((clk_div_reg & 0xc00) >> 10) << 470 + SIRFSOC_USP_TX_CLK_DIVISOR_OFFSET); 471 + wr_regl(port, ureg->sirfsoc_tx_frame_ctrl, len_val); 472 + /*rx frame ctrl*/ 473 + len_val = (data_bit_len - 1) << SIRFSOC_USP_RX_DATA_LEN_OFFSET; 474 + len_val |= (data_bit_len + 1 + stop_bit_len - 1) << 475 + SIRFSOC_USP_RX_FRAME_LEN_OFFSET; 476 + len_val |= (data_bit_len - 1) << 477 + SIRFSOC_USP_RX_SHIFTER_LEN_OFFSET; 478 + len_val |= (((clk_div_reg & 0xf000) >> 12) << 479 + SIRFSOC_USP_RX_CLK_DIVISOR_OFFSET); 480 + wr_regl(port, ureg->sirfsoc_rx_frame_ctrl, len_val); 481 + /*async param*/ 482 + wr_regl(port, ureg->sirfsoc_async_param_reg, 483 + (SIRFUART_RECV_TIMEOUT(port, rx_time_out)) | 484 + (sample_div_reg & SIRFSOC_USP_ASYNC_DIV2_MASK) << 485 + SIRFSOC_USP_ASYNC_DIV2_OFFSET); 486 + } 487 + if (IS_DMA_CHAN_VALID(sirfport->tx_dma_no)) 488 + wr_regl(port, ureg->sirfsoc_tx_dma_io_ctrl, SIRFUART_DMA_MODE); 489 + else 490 + wr_regl(port, ureg->sirfsoc_tx_dma_io_ctrl, SIRFUART_IO_MODE); 491 + if (IS_DMA_CHAN_VALID(sirfport->rx_dma_no)) 492 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, SIRFUART_DMA_MODE); 493 + else 494 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, SIRFUART_IO_MODE); 972 495 /* Reset Rx/Tx FIFO Threshold level for proper baudrate */ 973 - if (baud_rate < 1000000) 496 + if (set_baud < 1000000) 974 497 threshold_div = 1; 975 498 else 976 499 threshold_div = 2; 977 - temp = port->line == 1 ? 
16 : 64; 978 - wr_regl(port, SIRFUART_TX_FIFO_CTRL, temp / threshold_div); 979 - wr_regl(port, SIRFUART_RX_FIFO_CTRL, temp / threshold_div); 980 - temp_reg_val |= SIRFUART_TX_FIFO_START; 981 - wr_regl(port, SIRFUART_TX_FIFO_OP, temp_reg_val); 982 - uart_update_timeout(port, termios->c_cflag, baud_rate); 500 + wr_regl(port, ureg->sirfsoc_tx_fifo_ctrl, 501 + SIRFUART_FIFO_THD(port) / threshold_div); 502 + wr_regl(port, ureg->sirfsoc_rx_fifo_ctrl, 503 + SIRFUART_FIFO_THD(port) / threshold_div); 504 + txfifo_op_reg |= SIRFUART_FIFO_START; 505 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, txfifo_op_reg); 506 + uart_update_timeout(port, termios->c_cflag, set_baud); 983 507 sirfsoc_uart_start_rx(port); 984 - wr_regl(port, SIRFUART_TX_RX_EN, SIRFUART_TX_EN | SIRFUART_RX_EN); 508 + wr_regl(port, ureg->sirfsoc_tx_rx_en, SIRFUART_TX_EN | SIRFUART_RX_EN); 985 509 spin_unlock_irqrestore(&port->lock, flags); 986 510 } 987 511 988 - static void startup_uart_controller(struct uart_port *port) 512 + static unsigned int sirfsoc_uart_init_tx_dma(struct uart_port *port) 989 513 { 990 - unsigned long temp_regv; 991 - int temp; 992 - temp_regv = rd_regl(port, SIRFUART_TX_DMA_IO_CTRL); 993 - wr_regl(port, SIRFUART_TX_DMA_IO_CTRL, temp_regv | SIRFUART_TX_MODE_IO); 994 - temp_regv = rd_regl(port, SIRFUART_RX_DMA_IO_CTRL); 995 - wr_regl(port, SIRFUART_RX_DMA_IO_CTRL, temp_regv | SIRFUART_RX_MODE_IO); 996 - wr_regl(port, SIRFUART_TX_DMA_IO_LEN, 0); 997 - wr_regl(port, SIRFUART_RX_DMA_IO_LEN, 0); 998 - wr_regl(port, SIRFUART_TX_RX_EN, SIRFUART_RX_EN | SIRFUART_TX_EN); 999 - wr_regl(port, SIRFUART_TX_FIFO_OP, SIRFUART_TX_FIFO_RESET); 1000 - wr_regl(port, SIRFUART_TX_FIFO_OP, 0); 1001 - wr_regl(port, SIRFUART_RX_FIFO_OP, SIRFUART_RX_FIFO_RESET); 1002 - wr_regl(port, SIRFUART_RX_FIFO_OP, 0); 1003 - temp = port->line == 1 ? 
16 : 64; 1004 - wr_regl(port, SIRFUART_TX_FIFO_CTRL, temp); 1005 - wr_regl(port, SIRFUART_RX_FIFO_CTRL, temp); 514 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 515 + dma_cap_mask_t dma_mask; 516 + struct dma_slave_config tx_slv_cfg = { 517 + .dst_maxburst = 2, 518 + }; 519 + 520 + dma_cap_zero(dma_mask); 521 + dma_cap_set(DMA_SLAVE, dma_mask); 522 + sirfport->tx_dma_chan = dma_request_channel(dma_mask, 523 + (dma_filter_fn)sirfsoc_dma_filter_id, 524 + (void *)sirfport->tx_dma_no); 525 + if (!sirfport->tx_dma_chan) { 526 + dev_err(port->dev, "Uart Request Dma Channel Fail %d\n", 527 + sirfport->tx_dma_no); 528 + return -EPROBE_DEFER; 529 + } 530 + dmaengine_slave_config(sirfport->tx_dma_chan, &tx_slv_cfg); 531 + 532 + return 0; 533 + } 534 + 535 + static unsigned int sirfsoc_uart_init_rx_dma(struct uart_port *port) 536 + { 537 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 538 + dma_cap_mask_t dma_mask; 539 + int ret; 540 + int i, j; 541 + struct dma_slave_config slv_cfg = { 542 + .src_maxburst = 2, 543 + }; 544 + 545 + dma_cap_zero(dma_mask); 546 + dma_cap_set(DMA_SLAVE, dma_mask); 547 + sirfport->rx_dma_chan = dma_request_channel(dma_mask, 548 + (dma_filter_fn)sirfsoc_dma_filter_id, 549 + (void *)sirfport->rx_dma_no); 550 + if (!sirfport->rx_dma_chan) { 551 + dev_err(port->dev, "Uart Request Dma Channel Fail %d\n", 552 + sirfport->rx_dma_no); 553 + ret = -EPROBE_DEFER; 554 + goto request_err; 555 + } 556 + for (i = 0; i < SIRFSOC_RX_LOOP_BUF_CNT; i++) { 557 + sirfport->rx_dma_items[i].xmit.buf = 558 + dma_alloc_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 559 + &sirfport->rx_dma_items[i].dma_addr, GFP_KERNEL); 560 + if (!sirfport->rx_dma_items[i].xmit.buf) { 561 + dev_err(port->dev, "Uart alloc bufa failed\n"); 562 + ret = -ENOMEM; 563 + goto alloc_coherent_err; 564 + } 565 + sirfport->rx_dma_items[i].xmit.head = 566 + sirfport->rx_dma_items[i].xmit.tail = 0; 567 + } 568 + dmaengine_slave_config(sirfport->rx_dma_chan, &slv_cfg); 569 + 570 + 
return 0; 571 + alloc_coherent_err: 572 + for (j = 0; j < i; j++) 573 + dma_free_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 574 + sirfport->rx_dma_items[j].xmit.buf, 575 + sirfport->rx_dma_items[j].dma_addr); 576 + dma_release_channel(sirfport->rx_dma_chan); 577 + request_err: 578 + return ret; 579 + } 580 + 581 + static void sirfsoc_uart_uninit_tx_dma(struct sirfsoc_uart_port *sirfport) 582 + { 583 + dmaengine_terminate_all(sirfport->tx_dma_chan); 584 + dma_release_channel(sirfport->tx_dma_chan); 585 + } 586 + 587 + static void sirfsoc_uart_uninit_rx_dma(struct sirfsoc_uart_port *sirfport) 588 + { 589 + int i; 590 + struct uart_port *port = &sirfport->port; 591 + dmaengine_terminate_all(sirfport->rx_dma_chan); 592 + dma_release_channel(sirfport->rx_dma_chan); 593 + for (i = 0; i < SIRFSOC_RX_LOOP_BUF_CNT; i++) 594 + dma_free_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 595 + sirfport->rx_dma_items[i].xmit.buf, 596 + sirfport->rx_dma_items[i].dma_addr); 1006 597 } 1007 598 1008 599 static int sirfsoc_uart_startup(struct uart_port *port) 1009 600 { 1010 601 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 602 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 1011 603 unsigned int index = port->line; 1012 604 int ret; 1013 605 set_irq_flags(port->irq, IRQF_VALID | IRQF_NOAUTOEN); ··· 1130 504 index, port->irq); 1131 505 goto irq_err; 1132 506 } 1133 - startup_uart_controller(port); 507 + 508 + /* initial hardware settings */ 509 + wr_regl(port, ureg->sirfsoc_tx_dma_io_ctrl, 510 + rd_regl(port, ureg->sirfsoc_tx_dma_io_ctrl) | 511 + SIRFUART_IO_MODE); 512 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 513 + rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) | 514 + SIRFUART_IO_MODE); 515 + wr_regl(port, ureg->sirfsoc_tx_dma_io_len, 0); 516 + wr_regl(port, ureg->sirfsoc_rx_dma_io_len, 0); 517 + wr_regl(port, ureg->sirfsoc_tx_rx_en, SIRFUART_RX_EN | SIRFUART_TX_EN); 518 + if (sirfport->uart_reg->uart_type == SIRF_USP_UART) 519 + wr_regl(port, 
ureg->sirfsoc_mode1, 520 + SIRFSOC_USP_ENDIAN_CTRL_LSBF | 521 + SIRFSOC_USP_EN); 522 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_RESET); 523 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, 0); 524 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_RESET); 525 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 526 + wr_regl(port, ureg->sirfsoc_tx_fifo_ctrl, SIRFUART_FIFO_THD(port)); 527 + wr_regl(port, ureg->sirfsoc_rx_fifo_ctrl, SIRFUART_FIFO_THD(port)); 528 + 529 + if (IS_DMA_CHAN_VALID(sirfport->rx_dma_no)) { 530 + ret = sirfsoc_uart_init_rx_dma(port); 531 + if (ret) 532 + goto init_rx_err; 533 + wr_regl(port, ureg->sirfsoc_rx_fifo_level_chk, 534 + SIRFUART_RX_FIFO_CHK_SC(port->line, 0x4) | 535 + SIRFUART_RX_FIFO_CHK_LC(port->line, 0xe) | 536 + SIRFUART_RX_FIFO_CHK_HC(port->line, 0x1b)); 537 + } 538 + if (IS_DMA_CHAN_VALID(sirfport->tx_dma_no)) { 539 + sirfsoc_uart_init_tx_dma(port); 540 + sirfport->tx_dma_state = TX_DMA_IDLE; 541 + wr_regl(port, ureg->sirfsoc_tx_fifo_level_chk, 542 + SIRFUART_TX_FIFO_CHK_SC(port->line, 0x1b) | 543 + SIRFUART_TX_FIFO_CHK_LC(port->line, 0xe) | 544 + SIRFUART_TX_FIFO_CHK_HC(port->line, 0x4)); 545 + } 546 + sirfport->ms_enabled = false; 547 + if (sirfport->uart_reg->uart_type == SIRF_USP_UART && 548 + sirfport->hw_flow_ctrl) { 549 + set_irq_flags(gpio_to_irq(sirfport->cts_gpio), 550 + IRQF_VALID | IRQF_NOAUTOEN); 551 + ret = request_irq(gpio_to_irq(sirfport->cts_gpio), 552 + sirfsoc_uart_usp_cts_handler, IRQF_TRIGGER_FALLING | 553 + IRQF_TRIGGER_RISING, "usp_cts_irq", sirfport); 554 + if (ret != 0) { 555 + dev_err(port->dev, "UART-USP:request gpio irq fail\n"); 556 + goto init_rx_err; 557 + } 558 + } 559 + 1134 560 enable_irq(port->irq); 561 + 562 + return 0; 563 + init_rx_err: 564 + free_irq(port->irq, sirfport); 1135 565 irq_err: 1136 566 return ret; 1137 567 } ··· 1195 513 static void sirfsoc_uart_shutdown(struct uart_port *port) 1196 514 { 1197 515 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 1198 - 
wr_regl(port, SIRFUART_INT_EN, 0); 516 + struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 517 + if (!sirfport->is_marco) 518 + wr_regl(port, ureg->sirfsoc_int_en_reg, 0); 519 + else 520 + wr_regl(port, SIRFUART_INT_EN_CLR, ~0UL); 521 + 1199 522 free_irq(port->irq, sirfport); 1200 - if (sirfport->ms_enabled) { 523 + if (sirfport->ms_enabled) 1201 524 sirfsoc_uart_disable_ms(port); 1202 - sirfport->ms_enabled = 0; 525 + if (sirfport->uart_reg->uart_type == SIRF_USP_UART && 526 + sirfport->hw_flow_ctrl) { 527 + gpio_set_value(sirfport->rts_gpio, 1); 528 + free_irq(gpio_to_irq(sirfport->cts_gpio), sirfport); 529 + } 530 + if (IS_DMA_CHAN_VALID(sirfport->rx_dma_no)) 531 + sirfsoc_uart_uninit_rx_dma(sirfport); 532 + if (IS_DMA_CHAN_VALID(sirfport->tx_dma_no)) { 533 + sirfsoc_uart_uninit_tx_dma(sirfport); 534 + sirfport->tx_dma_state = TX_DMA_IDLE; 1203 535 } 1204 536 } 1205 537 ··· 1224 528 1225 529 static int sirfsoc_uart_request_port(struct uart_port *port) 1226 530 { 531 + struct sirfsoc_uart_port *sirfport = to_sirfport(port); 532 + struct sirfsoc_uart_param *uart_param = &sirfport->uart_reg->uart_param; 1227 533 void *ret; 1228 534 ret = request_mem_region(port->mapbase, 1229 - SIRFUART_MAP_SIZE, SIRFUART_PORT_NAME); 535 + SIRFUART_MAP_SIZE, uart_param->port_name); 1230 536 return ret ? 
 		0 : -EBUSY;
 }
···
 };

 #ifdef CONFIG_SERIAL_SIRFSOC_CONSOLE
-static int __init sirfsoc_uart_console_setup(struct console *co, char *options)
+static int __init
+sirfsoc_uart_console_setup(struct console *co, char *options)
 {
 	unsigned int baud = 115200;
 	unsigned int bits = 8;
 	unsigned int parity = 'n';
 	unsigned int flow = 'n';
 	struct uart_port *port = &sirfsoc_uart_ports[co->index].port;
-
+	struct sirfsoc_uart_port *sirfport = to_sirfport(port);
+	struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg;
 	if (co->index < 0 || co->index >= SIRFSOC_UART_NR)
 		return -EINVAL;

 	if (!port->mapbase)
 		return -ENODEV;

+	/* enable usp in mode1 register */
+	if (sirfport->uart_reg->uart_type == SIRF_USP_UART)
+		wr_regl(port, ureg->sirfsoc_mode1, SIRFSOC_USP_EN |
+			SIRFSOC_USP_ENDIAN_CTRL_LSBF);
 	if (options)
 		uart_parse_options(options, &baud, &parity, &bits, &flow);
 	port->cons = co;
+
+	/* default console tx/rx transfer using io mode */
+	sirfport->rx_dma_no = UNVALID_DMA_CHAN;
+	sirfport->tx_dma_no = UNVALID_DMA_CHAN;
 	return uart_set_options(port, co, baud, parity, bits, flow);
 }

 static void sirfsoc_uart_console_putchar(struct uart_port *port, int ch)
 {
+	struct sirfsoc_uart_port *sirfport = to_sirfport(port);
+	struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg;
+	struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status;
 	while (rd_regl(port,
-		SIRFUART_TX_FIFO_STATUS) & SIRFUART_FIFOFULL_MASK(port))
+		ureg->sirfsoc_tx_fifo_status) & ufifo_st->ff_full(port->line))
 		cpu_relax();
-	wr_regb(port, SIRFUART_TX_FIFO_DATA, ch);
+	wr_regb(port, ureg->sirfsoc_tx_fifo_data, ch);
 }

 static void
 sirfsoc_uart_console_write(struct console *co, const char *s,
···
 #endif
 };

-int sirfsoc_uart_probe(struct platform_device *pdev)
+static struct of_device_id sirfsoc_uart_ids[] = {
+	{ .compatible = "sirf,prima2-uart", .data = &sirfsoc_uart,},
+	{ .compatible = "sirf,marco-uart", .data = &sirfsoc_uart},
+	{ .compatible = "sirf,prima2-usp-uart", .data = &sirfsoc_usp},
+	{}
+};
+MODULE_DEVICE_TABLE(of, sirfsoc_uart_ids);
+
+static int sirfsoc_uart_probe(struct platform_device *pdev)
 {
 	struct sirfsoc_uart_port *sirfport;
 	struct uart_port *port;
 	struct resource *res;
 	int ret;
+	const struct of_device_id *match;

+	match = of_match_node(sirfsoc_uart_ids, pdev->dev.of_node);
 	if (of_property_read_u32(pdev->dev.of_node, "cell-index", &pdev->id)) {
 		dev_err(&pdev->dev,
 			"Unable to find cell-index in uart node.\n");
 		ret = -EFAULT;
 		goto err;
 	}
-
+	if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-usp-uart"))
+		pdev->id += ((struct sirfsoc_uart_register *)
+				match->data)->uart_param.register_uart_nr;
 	sirfport = &sirfsoc_uart_ports[pdev->id];
 	port = &sirfport->port;
 	port->dev = &pdev->dev;
 	port->private_data = sirfport;
+	sirfport->uart_reg = (struct sirfsoc_uart_register *)match->data;

-	if (of_find_property(pdev->dev.of_node, "hw_flow_ctrl", NULL))
-		sirfport->hw_flow_ctrl = 1;
+	sirfport->hw_flow_ctrl = of_property_read_bool(pdev->dev.of_node,
+		"sirf,uart-has-rtscts");
+	if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-uart")) {
+		sirfport->uart_reg->uart_type = SIRF_REAL_UART;
+		if (of_property_read_u32(pdev->dev.of_node,
+				"sirf,uart-dma-rx-channel",
+				&sirfport->rx_dma_no))
+			sirfport->rx_dma_no = UNVALID_DMA_CHAN;
+		if (of_property_read_u32(pdev->dev.of_node,
+				"sirf,uart-dma-tx-channel",
+				&sirfport->tx_dma_no))
+			sirfport->tx_dma_no = UNVALID_DMA_CHAN;
+	}
+	if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-usp-uart")) {
+		sirfport->uart_reg->uart_type = SIRF_USP_UART;
+		if (of_property_read_u32(pdev->dev.of_node,
+				"sirf,usp-dma-rx-channel",
+				&sirfport->rx_dma_no))
+			sirfport->rx_dma_no = UNVALID_DMA_CHAN;
+		if (of_property_read_u32(pdev->dev.of_node,
+				"sirf,usp-dma-tx-channel",
+				&sirfport->tx_dma_no))
+			sirfport->tx_dma_no = UNVALID_DMA_CHAN;
+		if (!sirfport->hw_flow_ctrl)
+			goto usp_no_flow_control;
+		if (of_find_property(pdev->dev.of_node, "cts-gpios", NULL))
+			sirfport->cts_gpio = of_get_named_gpio(
+					pdev->dev.of_node, "cts-gpios", 0);
+		else
+			sirfport->cts_gpio = -1;
+		if (of_find_property(pdev->dev.of_node, "rts-gpios", NULL))
+			sirfport->rts_gpio = of_get_named_gpio(
+					pdev->dev.of_node, "rts-gpios", 0);
+		else
+			sirfport->rts_gpio = -1;
+
+		if ((!gpio_is_valid(sirfport->cts_gpio) ||
+			!gpio_is_valid(sirfport->rts_gpio))) {
+			ret = -EINVAL;
+			dev_err(&pdev->dev,
+				"Usp flow control must have cts and rts gpio");
+			goto err;
+		}
+		ret = devm_gpio_request(&pdev->dev, sirfport->cts_gpio,
+				"usp-cts-gpio");
+		if (ret) {
+			dev_err(&pdev->dev, "Unable request cts gpio");
+			goto err;
+		}
+		gpio_direction_input(sirfport->cts_gpio);
+		ret = devm_gpio_request(&pdev->dev, sirfport->rts_gpio,
+				"usp-rts-gpio");
+		if (ret) {
+			dev_err(&pdev->dev, "Unable request rts gpio");
+			goto err;
+		}
+		gpio_direction_output(sirfport->rts_gpio, 1);
+	}
+usp_no_flow_control:
+	if (of_device_is_compatible(pdev->dev.of_node, "sirf,marco-uart"))
+		sirfport->is_marco = true;

 	if (of_property_read_u32(pdev->dev.of_node,
 		"fifosize",
···
 		ret = -EFAULT;
 		goto err;
 	}
+	spin_lock_init(&sirfport->rx_lock);
+	spin_lock_init(&sirfport->tx_lock);
+	tasklet_init(&sirfport->rx_dma_complete_tasklet,
+		sirfsoc_uart_rx_dma_complete_tl, (unsigned long)sirfport);
+	tasklet_init(&sirfport->rx_tmo_process_tasklet,
+		sirfsoc_rx_tmo_process_tl, (unsigned long)sirfport);
 	port->mapbase = res->start;
 	port->membase = devm_ioremap(&pdev->dev, res->start, resource_size(res));
 	if (!port->membase) {
···
 	}
 	port->irq = res->start;

-	if (sirfport->hw_flow_ctrl) {
-		sirfport->p = pinctrl_get_select_default(&pdev->dev);
-		if (IS_ERR(sirfport->p)) {
-			ret = PTR_ERR(sirfport->p);
-			goto err;
-		}
-	}
-
 	sirfport->clk = clk_get(&pdev->dev, NULL);
 	if (IS_ERR(sirfport->clk)) {
 		ret = PTR_ERR(sirfport->clk);
-		goto clk_err;
+		goto err;
 	}
 	clk_prepare_enable(sirfport->clk);
 	port->uartclk = clk_get_rate(sirfport->clk);
···
 port_err:
 	clk_disable_unprepare(sirfport->clk);
 	clk_put(sirfport->clk);
-clk_err:
-	platform_set_drvdata(pdev, NULL);
-	if (sirfport->hw_flow_ctrl)
-		pinctrl_put(sirfport->p);
 err:
 	return ret;
 }
···
 {
 	struct sirfsoc_uart_port *sirfport = platform_get_drvdata(pdev);
 	struct uart_port *port = &sirfport->port;
-	platform_set_drvdata(pdev, NULL);
-	if (sirfport->hw_flow_ctrl)
-		pinctrl_put(sirfport->p);
 	clk_disable_unprepare(sirfport->clk);
 	clk_put(sirfport->clk);
 	uart_remove_one_port(&sirfsoc_uart_drv, port);
···
 	uart_resume_port(&sirfsoc_uart_drv, port);
 	return 0;
 }
-
-static struct of_device_id sirfsoc_uart_ids[] = {
-	{ .compatible = "sirf,prima2-uart", },
-	{ .compatible = "sirf,marco-uart", },
-	{}
-};
-MODULE_DEVICE_TABLE(of, sirfsoc_uart_ids);

 static struct platform_driver sirfsoc_uart_driver = {
 	.probe = sirfsoc_uart_probe,
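The probe rework above hinges on the `of_device_id` match-data pattern: each compatible string in the table carries a pointer to a per-variant register description, and `of_match_node()` returns the entry matching the device tree node. A minimal user-space sketch of that lookup (names `variant_data`, `match_entry`, and `match_node` are illustrative, not kernel API):

```c
#include <stddef.h>
#include <string.h>

/* Per-variant description, analogous to struct sirfsoc_uart_register. */
struct variant_data {
	const char *name;
	unsigned int register_uart_nr;
};

static const struct variant_data uart_data = { "uart", 0 };
static const struct variant_data usp_data  = { "usp", 3 };

/* Analogue of struct of_device_id: compatible string -> variant data. */
struct match_entry {
	const char *compatible;
	const struct variant_data *data;
};

static const struct match_entry match_table[] = {
	{ "sirf,prima2-uart",     &uart_data },
	{ "sirf,marco-uart",      &uart_data },
	{ "sirf,prima2-usp-uart", &usp_data },
	{ NULL, NULL }	/* sentinel, like the {} terminator above */
};

/* Walk the table the way of_match_node() conceptually does. */
static const struct variant_data *match_node(const char *compatible)
{
	const struct match_entry *m;

	for (m = match_table; m->compatible; m++)
		if (strcmp(m->compatible, compatible) == 0)
			return m->data;
	return NULL;
}
```

This is why the probe can offset `pdev->id` by `register_uart_nr` for USP ports: the offset travels with the matched entry rather than being special-cased in code.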
+392 -105
drivers/tty/serial/sirfsoc_uart.h
···
  * Licensed under GPLv2 or later.
  */
 #include <linux/bitops.h>
+struct sirfsoc_uart_param {
+	const char *uart_name;
+	const char *port_name;
+	u32 uart_nr;
+	u32 register_uart_nr;
+};

-/* UART Register Offset Define */
-#define SIRFUART_LINE_CTRL		0x0040
-#define SIRFUART_TX_RX_EN		0x004c
-#define SIRFUART_DIVISOR		0x0050
-#define SIRFUART_INT_EN			0x0054
-#define SIRFUART_INT_STATUS		0x0058
-#define SIRFUART_TX_DMA_IO_CTRL		0x0100
-#define SIRFUART_TX_DMA_IO_LEN		0x0104
-#define SIRFUART_TX_FIFO_CTRL		0x0108
-#define SIRFUART_TX_FIFO_LEVEL_CHK	0x010C
-#define SIRFUART_TX_FIFO_OP		0x0110
-#define SIRFUART_TX_FIFO_STATUS		0x0114
-#define SIRFUART_TX_FIFO_DATA		0x0118
-#define SIRFUART_RX_DMA_IO_CTRL		0x0120
-#define SIRFUART_RX_DMA_IO_LEN		0x0124
-#define SIRFUART_RX_FIFO_CTRL		0x0128
-#define SIRFUART_RX_FIFO_LEVEL_CHK	0x012C
-#define SIRFUART_RX_FIFO_OP		0x0130
-#define SIRFUART_RX_FIFO_STATUS		0x0134
-#define SIRFUART_RX_FIFO_DATA		0x0138
-#define SIRFUART_AFC_CTRL		0x0140
-#define SIRFUART_SWH_DMA_IO		0x0148
+struct sirfsoc_register {
+	/* hardware uart specific */
+	u32 sirfsoc_line_ctrl;
+	u32 sirfsoc_divisor;
+	/* uart - usp common */
+	u32 sirfsoc_tx_rx_en;
+	u32 sirfsoc_int_en_reg;
+	u32 sirfsoc_int_st_reg;
+	u32 sirfsoc_tx_dma_io_ctrl;
+	u32 sirfsoc_tx_dma_io_len;
+	u32 sirfsoc_tx_fifo_ctrl;
+	u32 sirfsoc_tx_fifo_level_chk;
+	u32 sirfsoc_tx_fifo_op;
+	u32 sirfsoc_tx_fifo_status;
+	u32 sirfsoc_tx_fifo_data;
+	u32 sirfsoc_rx_dma_io_ctrl;
+	u32 sirfsoc_rx_dma_io_len;
+	u32 sirfsoc_rx_fifo_ctrl;
+	u32 sirfsoc_rx_fifo_level_chk;
+	u32 sirfsoc_rx_fifo_op;
+	u32 sirfsoc_rx_fifo_status;
+	u32 sirfsoc_rx_fifo_data;
+	u32 sirfsoc_afc_ctrl;
+	u32 sirfsoc_swh_dma_io;
+	/* hardware usp specific */
+	u32 sirfsoc_mode1;
+	u32 sirfsoc_mode2;
+	u32 sirfsoc_tx_frame_ctrl;
+	u32 sirfsoc_rx_frame_ctrl;
+	u32 sirfsoc_async_param_reg;
+};

-/* UART Line Control Register */
+typedef u32 (*fifo_full_mask)(int line);
+typedef u32 (*fifo_empty_mask)(int line);
+
+struct sirfsoc_fifo_status {
+	fifo_full_mask ff_full;
+	fifo_empty_mask ff_empty;
+};
+
+struct sirfsoc_int_en {
+	u32 sirfsoc_rx_done_en;
+	u32 sirfsoc_tx_done_en;
+	u32 sirfsoc_rx_oflow_en;
+	u32 sirfsoc_tx_allout_en;
+	u32 sirfsoc_rx_io_dma_en;
+	u32 sirfsoc_tx_io_dma_en;
+	u32 sirfsoc_rxfifo_full_en;
+	u32 sirfsoc_txfifo_empty_en;
+	u32 sirfsoc_rxfifo_thd_en;
+	u32 sirfsoc_txfifo_thd_en;
+	u32 sirfsoc_frm_err_en;
+	u32 sirfsoc_rxd_brk_en;
+	u32 sirfsoc_rx_timeout_en;
+	u32 sirfsoc_parity_err_en;
+	u32 sirfsoc_cts_en;
+	u32 sirfsoc_rts_en;
+};
+
+struct sirfsoc_int_status {
+	u32 sirfsoc_rx_done;
+	u32 sirfsoc_tx_done;
+	u32 sirfsoc_rx_oflow;
+	u32 sirfsoc_tx_allout;
+	u32 sirfsoc_rx_io_dma;
+	u32 sirfsoc_tx_io_dma;
+	u32 sirfsoc_rxfifo_full;
+	u32 sirfsoc_txfifo_empty;
+	u32 sirfsoc_rxfifo_thd;
+	u32 sirfsoc_txfifo_thd;
+	u32 sirfsoc_frm_err;
+	u32 sirfsoc_rxd_brk;
+	u32 sirfsoc_rx_timeout;
+	u32 sirfsoc_parity_err;
+	u32 sirfsoc_cts;
+	u32 sirfsoc_rts;
+};
+
+enum sirfsoc_uart_type {
+	SIRF_REAL_UART,
+	SIRF_USP_UART,
+};
+
+struct sirfsoc_uart_register {
+	struct sirfsoc_register uart_reg;
+	struct sirfsoc_int_en uart_int_en;
+	struct sirfsoc_int_status uart_int_st;
+	struct sirfsoc_fifo_status fifo_status;
+	struct sirfsoc_uart_param uart_param;
+	enum sirfsoc_uart_type uart_type;
+};
+
+u32 usp_ff_full(int line)
+{
+	return 0x80;
+}
+u32 usp_ff_empty(int line)
+{
+	return 0x100;
+}
+u32 uart_ff_full(int line)
+{
+	return (line == 1) ? (0x20) : (0x80);
+}
+u32 uart_ff_empty(int line)
+{
+	return (line == 1) ? (0x40) : (0x100);
+}
+struct sirfsoc_uart_register sirfsoc_usp = {
+	.uart_reg = {
+		.sirfsoc_mode1		= 0x0000,
+		.sirfsoc_mode2		= 0x0004,
+		.sirfsoc_tx_frame_ctrl	= 0x0008,
+		.sirfsoc_rx_frame_ctrl	= 0x000c,
+		.sirfsoc_tx_rx_en	= 0x0010,
+		.sirfsoc_int_en_reg	= 0x0014,
+		.sirfsoc_int_st_reg	= 0x0018,
+		.sirfsoc_async_param_reg = 0x0024,
+		.sirfsoc_tx_dma_io_ctrl	= 0x0100,
+		.sirfsoc_tx_dma_io_len	= 0x0104,
+		.sirfsoc_tx_fifo_ctrl	= 0x0108,
+		.sirfsoc_tx_fifo_level_chk = 0x010c,
+		.sirfsoc_tx_fifo_op	= 0x0110,
+		.sirfsoc_tx_fifo_status	= 0x0114,
+		.sirfsoc_tx_fifo_data	= 0x0118,
+		.sirfsoc_rx_dma_io_ctrl	= 0x0120,
+		.sirfsoc_rx_dma_io_len	= 0x0124,
+		.sirfsoc_rx_fifo_ctrl	= 0x0128,
+		.sirfsoc_rx_fifo_level_chk = 0x012c,
+		.sirfsoc_rx_fifo_op	= 0x0130,
+		.sirfsoc_rx_fifo_status	= 0x0134,
+		.sirfsoc_rx_fifo_data	= 0x0138,
+	},
+	.uart_int_en = {
+		.sirfsoc_rx_done_en	= BIT(0),
+		.sirfsoc_tx_done_en	= BIT(1),
+		.sirfsoc_rx_oflow_en	= BIT(2),
+		.sirfsoc_tx_allout_en	= BIT(3),
+		.sirfsoc_rx_io_dma_en	= BIT(4),
+		.sirfsoc_tx_io_dma_en	= BIT(5),
+		.sirfsoc_rxfifo_full_en	= BIT(6),
+		.sirfsoc_txfifo_empty_en = BIT(7),
+		.sirfsoc_rxfifo_thd_en	= BIT(8),
+		.sirfsoc_txfifo_thd_en	= BIT(9),
+		.sirfsoc_frm_err_en	= BIT(10),
+		.sirfsoc_rx_timeout_en	= BIT(11),
+		.sirfsoc_rxd_brk_en	= BIT(15),
+	},
+	.uart_int_st = {
+		.sirfsoc_rx_done	= BIT(0),
+		.sirfsoc_tx_done	= BIT(1),
+		.sirfsoc_rx_oflow	= BIT(2),
+		.sirfsoc_tx_allout	= BIT(3),
+		.sirfsoc_rx_io_dma	= BIT(4),
+		.sirfsoc_tx_io_dma	= BIT(5),
+		.sirfsoc_rxfifo_full	= BIT(6),
+		.sirfsoc_txfifo_empty	= BIT(7),
+		.sirfsoc_rxfifo_thd	= BIT(8),
+		.sirfsoc_txfifo_thd	= BIT(9),
+		.sirfsoc_frm_err	= BIT(10),
+		.sirfsoc_rx_timeout	= BIT(11),
+		.sirfsoc_rxd_brk	= BIT(15),
+	},
+	.fifo_status = {
+		.ff_full		= usp_ff_full,
+		.ff_empty		= usp_ff_empty,
+	},
+	.uart_param = {
+		.uart_name		= "ttySiRF",
+		.port_name		= "sirfsoc-uart",
+		.uart_nr		= 2,
+		.register_uart_nr	= 3,
+	},
+};
+
+struct sirfsoc_uart_register sirfsoc_uart = {
+	.uart_reg = {
+		.sirfsoc_line_ctrl	= 0x0040,
+		.sirfsoc_tx_rx_en	= 0x004c,
+		.sirfsoc_divisor	= 0x0050,
+		.sirfsoc_int_en_reg	= 0x0054,
+		.sirfsoc_int_st_reg	= 0x0058,
+		.sirfsoc_tx_dma_io_ctrl	= 0x0100,
+		.sirfsoc_tx_dma_io_len	= 0x0104,
+		.sirfsoc_tx_fifo_ctrl	= 0x0108,
+		.sirfsoc_tx_fifo_level_chk = 0x010c,
+		.sirfsoc_tx_fifo_op	= 0x0110,
+		.sirfsoc_tx_fifo_status	= 0x0114,
+		.sirfsoc_tx_fifo_data	= 0x0118,
+		.sirfsoc_rx_dma_io_ctrl	= 0x0120,
+		.sirfsoc_rx_dma_io_len	= 0x0124,
+		.sirfsoc_rx_fifo_ctrl	= 0x0128,
+		.sirfsoc_rx_fifo_level_chk = 0x012c,
+		.sirfsoc_rx_fifo_op	= 0x0130,
+		.sirfsoc_rx_fifo_status	= 0x0134,
+		.sirfsoc_rx_fifo_data	= 0x0138,
+		.sirfsoc_afc_ctrl	= 0x0140,
+		.sirfsoc_swh_dma_io	= 0x0148,
+	},
+	.uart_int_en = {
+		.sirfsoc_rx_done_en	= BIT(0),
+		.sirfsoc_tx_done_en	= BIT(1),
+		.sirfsoc_rx_oflow_en	= BIT(2),
+		.sirfsoc_tx_allout_en	= BIT(3),
+		.sirfsoc_rx_io_dma_en	= BIT(4),
+		.sirfsoc_tx_io_dma_en	= BIT(5),
+		.sirfsoc_rxfifo_full_en	= BIT(6),
+		.sirfsoc_txfifo_empty_en = BIT(7),
+		.sirfsoc_rxfifo_thd_en	= BIT(8),
+		.sirfsoc_txfifo_thd_en	= BIT(9),
+		.sirfsoc_frm_err_en	= BIT(10),
+		.sirfsoc_rxd_brk_en	= BIT(11),
+		.sirfsoc_rx_timeout_en	= BIT(12),
+		.sirfsoc_parity_err_en	= BIT(13),
+		.sirfsoc_cts_en		= BIT(14),
+		.sirfsoc_rts_en		= BIT(15),
+	},
+	.uart_int_st = {
+		.sirfsoc_rx_done	= BIT(0),
+		.sirfsoc_tx_done	= BIT(1),
+		.sirfsoc_rx_oflow	= BIT(2),
+		.sirfsoc_tx_allout	= BIT(3),
+		.sirfsoc_rx_io_dma	= BIT(4),
+		.sirfsoc_tx_io_dma	= BIT(5),
+		.sirfsoc_rxfifo_full	= BIT(6),
+		.sirfsoc_txfifo_empty	= BIT(7),
+		.sirfsoc_rxfifo_thd	= BIT(8),
+		.sirfsoc_txfifo_thd	= BIT(9),
+		.sirfsoc_frm_err	= BIT(10),
+		.sirfsoc_rxd_brk	= BIT(11),
+		.sirfsoc_rx_timeout	= BIT(12),
+		.sirfsoc_parity_err	= BIT(13),
+		.sirfsoc_cts		= BIT(14),
+		.sirfsoc_rts		= BIT(15),
+	},
+	.fifo_status = {
+		.ff_full		= uart_ff_full,
+		.ff_empty		= uart_ff_empty,
+	},
+	.uart_param = {
+		.uart_name		= "ttySiRF",
+		.port_name		= "sirfsoc_uart",
+		.uart_nr		= 3,
+		.register_uart_nr	= 0,
+	},
+};
+/* uart io ctrl */
 #define SIRFUART_DATA_BIT_LEN_MASK	0x3
 #define SIRFUART_DATA_BIT_LEN_5		BIT(0)
 #define SIRFUART_DATA_BIT_LEN_6		1
···
 #define SIRFUART_LOOP_BACK		BIT(7)
 #define SIRFUART_PARITY_MASK		(7 << 3)
 #define SIRFUART_DUMMY_READ		BIT(16)
-
-#define SIRFSOC_UART_RX_TIMEOUT(br, to)	(((br) * (((to) + 999) / 1000)) / 1000)
-#define SIRFUART_RECV_TIMEOUT_MASK	(0xFFFF << 16)
-#define SIRFUART_RECV_TIMEOUT(x)	(((x) & 0xFFFF) << 16)
-
-/* UART Auto Flow Control */
-#define SIRFUART_AFC_RX_THD_MASK	0x000000FF
+#define SIRFUART_AFC_CTRL_RX_THD	0x70
 #define SIRFUART_AFC_RX_EN		BIT(8)
 #define SIRFUART_AFC_TX_EN		BIT(9)
-#define SIRFUART_CTS_CTRL		BIT(10)
-#define SIRFUART_RTS_CTRL		BIT(11)
-#define SIRFUART_CTS_IN_STATUS		BIT(12)
-#define SIRFUART_RTS_OUT_STATUS		BIT(13)
-
-/* UART Interrupt Enable Register */
-#define SIRFUART_RX_DONE_INT		BIT(0)
-#define SIRFUART_TX_DONE_INT		BIT(1)
-#define SIRFUART_RX_OFLOW_INT		BIT(2)
-#define SIRFUART_TX_ALLOUT_INT		BIT(3)
-#define SIRFUART_RX_IO_DMA_INT		BIT(4)
-#define SIRFUART_TX_IO_DMA_INT		BIT(5)
-#define SIRFUART_RXFIFO_FULL_INT	BIT(6)
-#define SIRFUART_TXFIFO_EMPTY_INT	BIT(7)
-#define SIRFUART_RXFIFO_THD_INT		BIT(8)
-#define SIRFUART_TXFIFO_THD_INT		BIT(9)
-#define SIRFUART_FRM_ERR_INT		BIT(10)
-#define SIRFUART_RXD_BREAK_INT		BIT(11)
-#define SIRFUART_RX_TIMEOUT_INT		BIT(12)
-#define SIRFUART_PARITY_ERR_INT		BIT(13)
-#define SIRFUART_CTS_INT_EN		BIT(14)
-#define SIRFUART_RTS_INT_EN		BIT(15)
-
-/* UART Interrupt Status Register */
-#define SIRFUART_RX_DONE		BIT(0)
-#define SIRFUART_TX_DONE		BIT(1)
-#define SIRFUART_RX_OFLOW		BIT(2)
-#define SIRFUART_TX_ALL_EMPTY		BIT(3)
-#define SIRFUART_DMA_IO_RX_DONE		BIT(4)
-#define SIRFUART_DMA_IO_TX_DONE		BIT(5)
-#define SIRFUART_RXFIFO_FULL		BIT(6)
-#define SIRFUART_TXFIFO_EMPTY		BIT(7)
-#define SIRFUART_RXFIFO_THD_REACH	BIT(8)
-#define SIRFUART_TXFIFO_THD_REACH	BIT(9)
-#define SIRFUART_FRM_ERR		BIT(10)
-#define SIRFUART_RXD_BREAK		BIT(11)
-#define SIRFUART_RX_TIMEOUT		BIT(12)
-#define SIRFUART_PARITY_ERR		BIT(13)
-#define SIRFUART_CTS_CHANGE		BIT(14)
-#define SIRFUART_RTS_CHANGE		BIT(15)
-#define SIRFUART_PLUG_IN		BIT(16)
-
-#define SIRFUART_ERR_INT_STAT \
-	(SIRFUART_RX_OFLOW | \
-	SIRFUART_FRM_ERR | \
-	SIRFUART_RXD_BREAK | \
-	SIRFUART_PARITY_ERR)
-#define SIRFUART_ERR_INT_EN \
-	(SIRFUART_RX_OFLOW_INT | \
-	SIRFUART_FRM_ERR_INT | \
-	SIRFUART_RXD_BREAK_INT | \
-	SIRFUART_PARITY_ERR_INT)
-#define SIRFUART_TX_INT_EN	SIRFUART_TXFIFO_EMPTY_INT
-#define SIRFUART_RX_IO_INT_EN \
-	(SIRFUART_RX_TIMEOUT_INT | \
-	SIRFUART_RXFIFO_THD_INT | \
-	SIRFUART_RXFIFO_FULL_INT | \
-	SIRFUART_ERR_INT_EN)
-
+#define SIRFUART_AFC_CTS_CTRL		BIT(10)
+#define SIRFUART_AFC_RTS_CTRL		BIT(11)
+#define SIRFUART_AFC_CTS_STATUS		BIT(12)
+#define SIRFUART_AFC_RTS_STATUS		BIT(13)
 /* UART FIFO Register */
-#define SIRFUART_TX_FIFO_STOP		0x0
-#define SIRFUART_TX_FIFO_RESET		0x1
-#define SIRFUART_TX_FIFO_START		0x2
-#define SIRFUART_RX_FIFO_STOP		0x0
-#define SIRFUART_RX_FIFO_RESET		0x1
-#define SIRFUART_RX_FIFO_START		0x2
-#define SIRFUART_TX_MODE_DMA		0
-#define SIRFUART_TX_MODE_IO		1
-#define SIRFUART_RX_MODE_DMA		0
-#define SIRFUART_RX_MODE_IO		1
+#define SIRFUART_FIFO_STOP		0x0
+#define SIRFUART_FIFO_RESET		BIT(0)
+#define SIRFUART_FIFO_START		BIT(1)

-#define SIRFUART_RX_EN			0x1
-#define SIRFUART_TX_EN			0x2
+#define SIRFUART_RX_EN			BIT(0)
+#define SIRFUART_TX_EN			BIT(1)

+#define SIRFUART_IO_MODE		BIT(0)
+#define SIRFUART_DMA_MODE		0x0
+
+/* Macro Specific*/
+#define SIRFUART_INT_EN_CLR		0x0060
+/* Baud Rate Calculation */
+#define SIRF_MIN_SAMPLE_DIV		0xf
+#define SIRF_MAX_SAMPLE_DIV		0x3f
+#define SIRF_IOCLK_DIV_MAX		0xffff
+#define SIRF_SAMPLE_DIV_SHIFT		16
+#define SIRF_IOCLK_DIV_MASK		0xffff
+#define SIRF_SAMPLE_DIV_MASK		0x3f0000
+#define SIRF_BAUD_RATE_SUPPORT_NR	18
+
+/* USP SPEC */
+#define SIRFSOC_USP_ENDIAN_CTRL_LSBF	BIT(4)
+#define SIRFSOC_USP_EN			BIT(5)
+#define SIRFSOC_USP_MODE2_RXD_DELAY_OFFSET	0
+#define SIRFSOC_USP_MODE2_TXD_DELAY_OFFSET	8
+#define SIRFSOC_USP_MODE2_CLK_DIVISOR_MASK	0x3ff
+#define SIRFSOC_USP_MODE2_CLK_DIVISOR_OFFSET	21
+#define SIRFSOC_USP_TX_DATA_LEN_OFFSET	0
+#define SIRFSOC_USP_TX_SYNC_LEN_OFFSET	8
+#define SIRFSOC_USP_TX_FRAME_LEN_OFFSET	16
+#define SIRFSOC_USP_TX_SHIFTER_LEN_OFFSET	24
+#define SIRFSOC_USP_TX_CLK_DIVISOR_OFFSET	30
+#define SIRFSOC_USP_RX_DATA_LEN_OFFSET	0
+#define SIRFSOC_USP_RX_FRAME_LEN_OFFSET	8
+#define SIRFSOC_USP_RX_SHIFTER_LEN_OFFSET	16
+#define SIRFSOC_USP_RX_CLK_DIVISOR_OFFSET	24
+#define SIRFSOC_USP_ASYNC_DIV2_MASK	0x3f
+#define SIRFSOC_USP_ASYNC_DIV2_OFFSET	16
+
+/* USP-UART Common */
+#define SIRFSOC_UART_RX_TIMEOUT(br, to)	(((br) * (((to) + 999) / 1000)) / 1000)
+#define SIRFUART_RECV_TIMEOUT_VALUE(x)	\
+	(((x) > 0xFFFF) ? 0xFFFF : ((x) & 0xFFFF))
+#define SIRFUART_RECV_TIMEOUT(port, x)	\
+	(((port)->line > 2) ? ((x) & 0xFFFF) : (((x) & 0xFFFF) << 16))
+
+#define SIRFUART_FIFO_THD(port)		((port->line) == 1 ? 16 : 64)
+#define SIRFUART_ERR_INT_STAT(port, uint_st)	\
+	(uint_st->sirfsoc_rx_oflow |		\
+	uint_st->sirfsoc_frm_err |		\
+	uint_st->sirfsoc_rxd_brk |		\
+	((port->line > 2) ? 0 : uint_st->sirfsoc_parity_err))
+#define SIRFUART_RX_IO_INT_EN(port, uint_en)	\
+	(uint_en->sirfsoc_rx_timeout_en |\
+	uint_en->sirfsoc_rxfifo_thd_en |\
+	uint_en->sirfsoc_rxfifo_full_en |\
+	uint_en->sirfsoc_frm_err_en |\
+	uint_en->sirfsoc_rx_oflow_en |\
+	uint_en->sirfsoc_rxd_brk_en |\
+	((port->line > 2) ? 0 : uint_en->sirfsoc_parity_err_en))
+#define SIRFUART_RX_IO_INT_ST(uint_st)	\
+	(uint_st->sirfsoc_rx_timeout |\
+	uint_st->sirfsoc_rxfifo_thd |\
+	uint_st->sirfsoc_rxfifo_full)
+#define SIRFUART_CTS_INT_ST(uint_st)	(uint_st->sirfsoc_cts)
+#define SIRFUART_RX_DMA_INT_EN(port, uint_en)	\
+	(uint_en->sirfsoc_rx_timeout_en |\
+	uint_en->sirfsoc_frm_err_en |\
+	uint_en->sirfsoc_rx_oflow_en |\
+	uint_en->sirfsoc_rxd_brk_en |\
+	((port->line > 2) ? 0 : uint_en->sirfsoc_parity_err_en))
 /* Generic Definitions */
 #define SIRFSOC_UART_NAME		"ttySiRF"
 #define SIRFSOC_UART_MAJOR		0
 #define SIRFSOC_UART_MINOR		0
 #define SIRFUART_PORT_NAME		"sirfsoc-uart"
 #define SIRFUART_MAP_SIZE		0x200
-#define SIRFSOC_UART_NR			5
+#define SIRFSOC_UART_NR			6
 #define SIRFSOC_PORT_TYPE		0xa5

 /* Baud Rate Calculation */
···
 #define SIRF_SAMPLE_DIV_MASK		0x3f0000
 #define SIRF_BAUD_RATE_SUPPORT_NR	18

+/* Uart Common Use Macro*/
+#define SIRFSOC_RX_DMA_BUF_SIZE	256
+#define BYTES_TO_ALIGN(dma_addr)	((unsigned long)(dma_addr) & 0x3)
+#define LOOP_DMA_BUFA_FILL	1
+#define LOOP_DMA_BUFB_FILL	2
+#define TX_TRAN_PIO		1
+#define TX_TRAN_DMA		2
+/* Uart Fifo Level Chk */
+#define SIRFUART_TX_FIFO_SC_OFFSET	0
+#define SIRFUART_TX_FIFO_LC_OFFSET	10
+#define SIRFUART_TX_FIFO_HC_OFFSET	20
+#define SIRFUART_TX_FIFO_CHK_SC(line, value) ((((line) == 1) ? (value & 0x3) :\
+		(value & 0x1f)) << SIRFUART_TX_FIFO_SC_OFFSET)
+#define SIRFUART_TX_FIFO_CHK_LC(line, value) ((((line) == 1) ? (value & 0x3) :\
+		(value & 0x1f)) << SIRFUART_TX_FIFO_LC_OFFSET)
+#define SIRFUART_TX_FIFO_CHK_HC(line, value) ((((line) == 1) ? (value & 0x3) :\
+		(value & 0x1f)) << SIRFUART_TX_FIFO_HC_OFFSET)
+
+#define SIRFUART_RX_FIFO_CHK_SC SIRFUART_TX_FIFO_CHK_SC
+#define SIRFUART_RX_FIFO_CHK_LC SIRFUART_TX_FIFO_CHK_LC
+#define SIRFUART_RX_FIFO_CHK_HC SIRFUART_TX_FIFO_CHK_HC
+/* Indicate how many buffers used */
+#define SIRFSOC_RX_LOOP_BUF_CNT		2
+
+/* Indicate if DMA channel valid */
+#define IS_DMA_CHAN_VALID(x)	((x) != -1)
+#define UNVALID_DMA_CHAN	-1
 /* For Fast Baud Rate Calculation */
 struct sirfsoc_baudrate_to_regv {
 	unsigned int baud_rate;
 	unsigned int reg_val;
 };

+enum sirfsoc_tx_state {
+	TX_DMA_IDLE,
+	TX_DMA_RUNNING,
+	TX_DMA_PAUSE,
+};
+
+struct sirfsoc_loop_buffer {
+	struct circ_buf			xmit;
+	dma_cookie_t			cookie;
+	struct dma_async_tx_descriptor	*desc;
+	dma_addr_t			dma_addr;
+};
+
 struct sirfsoc_uart_port {
-	unsigned char			hw_flow_ctrl;
-	unsigned char			ms_enabled;
+	bool				hw_flow_ctrl;
+	bool				ms_enabled;

 	struct uart_port		port;
-	struct pinctrl			*p;
 	struct clk			*clk;
+	/* for SiRFmarco, there are SET/CLR for UART_INT_EN */
+	bool				is_marco;
+	struct sirfsoc_uart_register	*uart_reg;
+	int				rx_dma_no;
+	int				tx_dma_no;
+	struct dma_chan			*rx_dma_chan;
+	struct dma_chan			*tx_dma_chan;
+	dma_addr_t			tx_dma_addr;
+	struct dma_async_tx_descriptor	*tx_dma_desc;
+	spinlock_t			rx_lock;
+	spinlock_t			tx_lock;
+	struct tasklet_struct		rx_dma_complete_tasklet;
+	struct tasklet_struct		rx_tmo_process_tasklet;
+	unsigned int			rx_io_count;
+	unsigned long			transfer_size;
+	enum sirfsoc_tx_state		tx_dma_state;
+	unsigned int			cts_gpio;
+	unsigned int			rts_gpio;
+
+	struct sirfsoc_loop_buffer	rx_dma_items[SIRFSOC_RX_LOOP_BUF_CNT];
+	int				rx_completed;
+	int				rx_issued;
 };

 /* Hardware Flow Control */
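The header above replaces per-port `#ifdef`-style masks with function pointers: `ff_full`/`ff_empty` in `struct sirfsoc_fifo_status` return a line-dependent bit, so the same polling code works for both the real UART (whose port 1 has a shallower FIFO) and the USP. A self-contained sketch of that indirection, using the exact mask values from the diff (the `fifo_status_ops`/`tx_fifo_full` names are illustrative):

```c
typedef unsigned int u32;   /* stand-in for the kernel's u32 */

typedef u32 (*fifo_full_mask)(int line);
typedef u32 (*fifo_empty_mask)(int line);

struct fifo_status_ops {
	fifo_full_mask ff_full;
	fifo_empty_mask ff_empty;
};

/* Real UART: port 1 has a smaller FIFO, so its status bits sit lower. */
static u32 uart_ff_full(int line)  { return (line == 1) ? 0x20 : 0x80; }
static u32 uart_ff_empty(int line) { return (line == 1) ? 0x40 : 0x100; }

/* USP: fixed bit positions regardless of line. */
static u32 usp_ff_full(int line)   { (void)line; return 0x80; }
static u32 usp_ff_empty(int line)  { (void)line; return 0x100; }

static const struct fifo_status_ops uart_fifo = { uart_ff_full, uart_ff_empty };
static const struct fifo_status_ops usp_fifo  = { usp_ff_full, usp_ff_empty };

/* Poll-style check as in sirfsoc_uart_console_putchar(): TX FIFO full? */
static int tx_fifo_full(const struct fifo_status_ops *ops, int line,
			u32 fifo_status_reg)
{
	return (fifo_status_reg & ops->ff_full(line)) != 0;
}
```

The console putchar loop then becomes a single `while (tx_fifo_full(...)) cpu_relax();` regardless of hardware variant, which is the point of the rework.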
+932
drivers/tty/serial/st-asc.c
···
+/*
+ * st-asc.c: ST Asynchronous serial controller (ASC) driver
+ *
+ * Copyright (C) 2003-2013 STMicroelectronics (R&D) Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+
+#if defined(CONFIG_SERIAL_ST_ASC_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/module.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/platform_device.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/delay.h>
+#include <linux/spinlock.h>
+#include <linux/pm_runtime.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/serial_core.h>
+#include <linux/clk.h>
+
+#define DRIVER_NAME	"st-asc"
+#define ASC_SERIAL_NAME	"ttyAS"
+#define ASC_FIFO_SIZE	16
+#define ASC_MAX_PORTS	8
+
+struct asc_port {
+	struct uart_port port;
+	struct clk *clk;
+	unsigned int hw_flow_control:1;
+	unsigned int force_m1:1;
+};
+
+static struct asc_port asc_ports[ASC_MAX_PORTS];
+static struct uart_driver asc_uart_driver;
+
+/*---- UART Register definitions ------------------------------*/
+
+/* Register offsets */
+
+#define ASC_BAUDRATE	0x00
+#define ASC_TXBUF	0x04
+#define ASC_RXBUF	0x08
+#define ASC_CTL		0x0C
+#define ASC_INTEN	0x10
+#define ASC_STA		0x14
+#define ASC_GUARDTIME	0x18
+#define ASC_TIMEOUT	0x1C
+#define ASC_TXRESET	0x20
+#define ASC_RXRESET	0x24
+#define ASC_RETRIES	0x28
+
+/* ASC_RXBUF */
+#define ASC_RXBUF_PE	0x100
+#define ASC_RXBUF_FE	0x200
+/*
+ * Some of the status bits come from the higher bits of the character and
+ * some come from the status register. Combine both of them into a single
+ * status using dummy bits.
+ */
+#define ASC_RXBUF_DUMMY_RX	0x10000
+#define ASC_RXBUF_DUMMY_BE	0x20000
+#define ASC_RXBUF_DUMMY_OE	0x40000
+
+/* ASC_CTL */
+
+#define ASC_CTL_MODE_MSK	0x0007
+#define ASC_CTL_MODE_8BIT	0x0001
+#define ASC_CTL_MODE_7BIT_PAR	0x0003
+#define ASC_CTL_MODE_9BIT	0x0004
+#define ASC_CTL_MODE_8BIT_WKUP	0x0005
+#define ASC_CTL_MODE_8BIT_PAR	0x0007
+#define ASC_CTL_STOP_MSK	0x0018
+#define ASC_CTL_STOP_HALFBIT	0x0000
+#define ASC_CTL_STOP_1BIT	0x0008
+#define ASC_CTL_STOP_1_HALFBIT	0x0010
+#define ASC_CTL_STOP_2BIT	0x0018
+#define ASC_CTL_PARITYODD	0x0020
+#define ASC_CTL_LOOPBACK	0x0040
+#define ASC_CTL_RUN		0x0080
+#define ASC_CTL_RXENABLE	0x0100
+#define ASC_CTL_SCENABLE	0x0200
+#define ASC_CTL_FIFOENABLE	0x0400
+#define ASC_CTL_CTSENABLE	0x0800
+#define ASC_CTL_BAUDMODE	0x1000
+
+/* ASC_GUARDTIME */
+
+#define ASC_GUARDTIME_MSK	0x00FF
+
+/* ASC_INTEN */
+
+#define ASC_INTEN_RBE	0x0001
+#define ASC_INTEN_TE	0x0002
+#define ASC_INTEN_THE	0x0004
+#define ASC_INTEN_PE	0x0008
+#define ASC_INTEN_FE	0x0010
+#define ASC_INTEN_OE	0x0020
+#define ASC_INTEN_TNE	0x0040
+#define ASC_INTEN_TOI	0x0080
+#define ASC_INTEN_RHF	0x0100
+
+/* ASC_RETRIES */
+
+#define ASC_RETRIES_MSK	0x00FF
+
+/* ASC_RXBUF */
+
+#define ASC_RXBUF_MSK	0x03FF
+
+/* ASC_STA */
+
+#define ASC_STA_RBF	0x0001
+#define ASC_STA_TE	0x0002
+#define ASC_STA_THE	0x0004
+#define ASC_STA_PE	0x0008
+#define ASC_STA_FE	0x0010
+#define ASC_STA_OE	0x0020
+#define ASC_STA_TNE	0x0040
+#define ASC_STA_TOI	0x0080
+#define ASC_STA_RHF	0x0100
+#define ASC_STA_TF	0x0200
+#define ASC_STA_NKD	0x0400
+
+/* ASC_TIMEOUT */
+
+#define ASC_TIMEOUT_MSK	0x00FF
+
+/* ASC_TXBUF */
+
+#define ASC_TXBUF_MSK	0x01FF
+
+/*---- Inline function definitions ---------------------------*/
+
+static inline struct asc_port *to_asc_port(struct uart_port *port)
+{
+	return container_of(port, struct asc_port, port);
+}
+
+static inline u32 asc_in(struct uart_port *port, u32 offset)
+{
+	return readl(port->membase + offset);
+}
+
+static inline void asc_out(struct uart_port *port, u32 offset, u32 value)
+{
+	writel(value, port->membase + offset);
+}
+
+/*
+ * Some simple utility functions to enable and disable interrupts.
+ * Note that these need to be called with interrupts disabled.
+ */
+static inline void asc_disable_tx_interrupts(struct uart_port *port)
+{
+	u32 intenable = asc_in(port, ASC_INTEN) & ~ASC_INTEN_THE;
+	asc_out(port, ASC_INTEN, intenable);
+	(void)asc_in(port, ASC_INTEN);	/* Defeat bus write posting */
+}
+
+static inline void asc_enable_tx_interrupts(struct uart_port *port)
+{
+	u32 intenable = asc_in(port, ASC_INTEN) | ASC_INTEN_THE;
+	asc_out(port, ASC_INTEN, intenable);
+}
+
+static inline void asc_disable_rx_interrupts(struct uart_port *port)
+{
+	u32 intenable = asc_in(port, ASC_INTEN) & ~ASC_INTEN_RBE;
+	asc_out(port, ASC_INTEN, intenable);
+	(void)asc_in(port, ASC_INTEN);	/* Defeat bus write posting */
+}
+
+static inline void asc_enable_rx_interrupts(struct uart_port *port)
+{
+	u32 intenable = asc_in(port, ASC_INTEN) | ASC_INTEN_RBE;
+	asc_out(port, ASC_INTEN, intenable);
+}
+
+static inline u32 asc_txfifo_is_empty(struct uart_port *port)
+{
+	return asc_in(port, ASC_STA) & ASC_STA_TE;
+}
+
+static inline int asc_txfifo_is_full(struct uart_port *port)
+{
+	return asc_in(port, ASC_STA) & ASC_STA_TF;
+}
+
+static inline const char *asc_port_name(struct uart_port *port)
+{
+	return to_platform_device(port->dev)->name;
+}
+
+/*----------------------------------------------------------------------*/
+
+/*
+ * This section contains code to support the use of the ASC as a
+ * generic serial port.
+ */
+
+static inline unsigned asc_hw_txroom(struct uart_port *port)
+{
+	u32 status = asc_in(port, ASC_STA);
+
+	if (status & ASC_STA_THE)
+		return port->fifosize / 2;
+	else if (!(status & ASC_STA_TF))
+		return 1;
+
+	return 0;
+}
+
+/*
+ * Start transmitting chars.
+ * This is called from both interrupt and task level.
+ * Either way interrupts are disabled.
+ */
+static void asc_transmit_chars(struct uart_port *port)
+{
+	struct circ_buf *xmit = &port->state->xmit;
+	int txroom;
+	unsigned char c;
+
+	txroom = asc_hw_txroom(port);
+
+	if ((txroom != 0) && port->x_char) {
+		c = port->x_char;
+		port->x_char = 0;
+		asc_out(port, ASC_TXBUF, c);
+		port->icount.tx++;
+		txroom = asc_hw_txroom(port);
+	}
+
+	if (uart_tx_stopped(port)) {
+		/*
+		 * We should try and stop the hardware here, but I
+		 * don't think the ASC has any way to do that.
+		 */
+		asc_disable_tx_interrupts(port);
+		return;
+	}
+
+	if (uart_circ_empty(xmit)) {
+		asc_disable_tx_interrupts(port);
+		return;
+	}
+
+	if (txroom == 0)
+		return;
+
+	do {
+		c = xmit->buf[xmit->tail];
+		xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+		asc_out(port, ASC_TXBUF, c);
+		port->icount.tx++;
+		txroom--;
+	} while ((txroom > 0) && (!uart_circ_empty(xmit)));
+
+	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+		uart_write_wakeup(port);
+
+	if (uart_circ_empty(xmit))
+		asc_disable_tx_interrupts(port);
+}
+
+static void asc_receive_chars(struct uart_port *port)
+{
+	struct tty_port *tport = &port->state->port;
+	unsigned long status;
+	unsigned long c = 0;
+	char flag;
+
+	if (port->irq_wake)
+		pm_wakeup_event(tport->tty->dev, 0);
+
+	while ((status = asc_in(port, ASC_STA)) & ASC_STA_RBF) {
+		c = asc_in(port, ASC_RXBUF) | ASC_RXBUF_DUMMY_RX;
+		flag = TTY_NORMAL;
+		port->icount.rx++;
+
+		if ((c & (ASC_RXBUF_FE | ASC_RXBUF_PE)) ||
+			status & ASC_STA_OE) {
+
+			if (c & ASC_RXBUF_FE) {
+				if (c == ASC_RXBUF_FE) {
+					port->icount.brk++;
+					if (uart_handle_break(port))
+						continue;
+					c |= ASC_RXBUF_DUMMY_BE;
+				} else {
+					port->icount.frame++;
+				}
+			} else if (c & ASC_RXBUF_PE) {
+				port->icount.parity++;
+			}
+			/*
+			 * Reading any data from the RX FIFO clears the
+			 * overflow error condition.
+			 */
+			if (status & ASC_STA_OE) {
+				port->icount.overrun++;
+				c |= ASC_RXBUF_DUMMY_OE;
+			}
+
+			c &= port->read_status_mask;
+
+			if (c & ASC_RXBUF_DUMMY_BE)
+				flag = TTY_BREAK;
+			else if (c & ASC_RXBUF_PE)
+				flag = TTY_PARITY;
+			else if (c & ASC_RXBUF_FE)
+				flag = TTY_FRAME;
+		}
+
+		if (uart_handle_sysrq_char(port, c))
+			continue;
+
+		uart_insert_char(port, c, ASC_RXBUF_DUMMY_OE, c & 0xff, flag);
+	}
+
+	/* Tell the rest of the system the news. New characters! */
+	tty_flip_buffer_push(tport);
+}
+
+static irqreturn_t asc_interrupt(int irq, void *ptr)
+{
+	struct uart_port *port = ptr;
+	u32 status;
+
+	spin_lock(&port->lock);
+
+	status = asc_in(port, ASC_STA);
+
+	if (status & ASC_STA_RBF) {
+		/* Receive FIFO not empty */
+		asc_receive_chars(port);
+	}
+
+	if ((status & ASC_STA_THE) &&
+		(asc_in(port, ASC_INTEN) & ASC_INTEN_THE)) {
+		/* Transmitter FIFO at least half empty */
+		asc_transmit_chars(port);
+	}
+
+	spin_unlock(&port->lock);
+
+	return IRQ_HANDLED;
+}
+
+/*----------------------------------------------------------------------*/
+
+/*
+ * UART Functions
+ */
+
+static unsigned int asc_tx_empty(struct uart_port *port)
+{
+	return asc_txfifo_is_empty(port) ? TIOCSER_TEMT : 0;
+}
+
+static void asc_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+	/*
+	 * This routine is used for setting signals of: DTR, DCD, CTS/RTS.
+	 * We use the ASC's hardware for CTS/RTS, so we don't need any for that.
+	 * Some boards have DTR and DCD implemented using PIO pins; code to
+	 * do this should be hooked in here.
+	 */
+}
+
+static unsigned int asc_get_mctrl(struct uart_port *port)
+{
+	/*
+	 * This routine is used for getting signals of: DTR, DCD, DSR, RI,
+	 * and CTS/RTS
+	 */
+	return TIOCM_CAR | TIOCM_DSR | TIOCM_CTS;
+}
+
+/* There are probably characters waiting to be transmitted. */
+static void asc_start_tx(struct uart_port *port)
+{
+	struct circ_buf *xmit = &port->state->xmit;
+
+	if (!uart_circ_empty(xmit))
+		asc_enable_tx_interrupts(port);
+}
+
+/* Transmit stop */
+static void asc_stop_tx(struct uart_port *port)
+{
+	asc_disable_tx_interrupts(port);
+}
+
+/* Receive stop */
+static void asc_stop_rx(struct uart_port *port)
+{
+	asc_disable_rx_interrupts(port);
+}
+
+/* Force modem status interrupts on */
+static void asc_enable_ms(struct uart_port *port)
+{
+	/* Nothing here yet .. */
+}
+
+/* Handle breaks - ignored by us */
+static void asc_break_ctl(struct uart_port *port, int break_state)
+{
+	/* Nothing here yet .. */
+}
+
+/*
+ * Enable port for reception.
+ */
+static int asc_startup(struct uart_port *port)
+{
+	if (request_irq(port->irq, asc_interrupt, IRQF_NO_SUSPEND,
+			asc_port_name(port), port)) {
+		dev_err(port->dev, "cannot allocate irq.\n");
+		return -ENODEV;
+	}
+
+	asc_transmit_chars(port);
+	asc_enable_rx_interrupts(port);
+
+	return 0;
+}
+
+static void asc_shutdown(struct uart_port *port)
+{
+	asc_disable_tx_interrupts(port);
+	asc_disable_rx_interrupts(port);
+	free_irq(port->irq, port);
+}
+
+static void asc_pm(struct uart_port *port, unsigned int state,
+		unsigned int oldstate)
+{
+	struct asc_port *ascport = to_asc_port(port);
+	unsigned long flags = 0;
+	u32 ctl;
+
+	switch (state) {
+	case UART_PM_STATE_ON:
+		clk_prepare_enable(ascport->clk);
+		break;
+	case UART_PM_STATE_OFF:
+		/*
+		 * Disable the ASC baud rate generator, which is as close as
+		 * we can come to turning it off. Note this is not called with
+		 * the port spinlock held.
+		 */
+		spin_lock_irqsave(&port->lock, flags);
+		ctl = asc_in(port, ASC_CTL) & ~ASC_CTL_RUN;
+		asc_out(port, ASC_CTL, ctl);
+		spin_unlock_irqrestore(&port->lock, flags);
+		clk_disable_unprepare(ascport->clk);
+		break;
+	}
+}
+
+static void asc_set_termios(struct uart_port *port, struct ktermios *termios,
+			struct ktermios *old)
+{
+	struct asc_port *ascport = to_asc_port(port);
+	unsigned int baud;
+	u32 ctrl_val;
+	tcflag_t cflag;
+	unsigned long flags;
+
+	/* Update termios to reflect hardware capabilities */
+	termios->c_cflag &= ~(CMSPAR |
+		(ascport->hw_flow_control ?
0 : CRTSCTS)); 488 + 489 + port->uartclk = clk_get_rate(ascport->clk); 490 + 491 + baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16); 492 + cflag = termios->c_cflag; 493 + 494 + spin_lock_irqsave(&port->lock, flags); 495 + 496 + /* read control register */ 497 + ctrl_val = asc_in(port, ASC_CTL); 498 + 499 + /* stop serial port and reset value */ 500 + asc_out(port, ASC_CTL, (ctrl_val & ~ASC_CTL_RUN)); 501 + ctrl_val = ASC_CTL_RXENABLE | ASC_CTL_FIFOENABLE; 502 + 503 + /* reset fifo rx & tx */ 504 + asc_out(port, ASC_TXRESET, 1); 505 + asc_out(port, ASC_RXRESET, 1); 506 + 507 + /* set character length */ 508 + if ((cflag & CSIZE) == CS7) { 509 + ctrl_val |= ASC_CTL_MODE_7BIT_PAR; 510 + } else { 511 + ctrl_val |= (cflag & PARENB) ? ASC_CTL_MODE_8BIT_PAR : 512 + ASC_CTL_MODE_8BIT; 513 + } 514 + 515 + /* set stop bit */ 516 + ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT; 517 + 518 + /* odd parity */ 519 + if (cflag & PARODD) 520 + ctrl_val |= ASC_CTL_PARITYODD; 521 + 522 + /* hardware flow control */ 523 + if ((cflag & CRTSCTS)) 524 + ctrl_val |= ASC_CTL_CTSENABLE; 525 + 526 + if ((baud < 19200) && !ascport->force_m1) { 527 + asc_out(port, ASC_BAUDRATE, (port->uartclk / (16 * baud))); 528 + } else { 529 + /* 530 + * MODE 1: recommended for high bit rates (above 19.2K) 531 + * 532 + * baudrate * 16 * 2^16 533 + * ASCBaudRate = ------------------------ 534 + * inputclock 535 + * 536 + * However to keep the maths inside 32bits we divide top and 537 + * bottom by 64. The +1 is to avoid a divide by zero if the 538 + * input clock rate is something unexpected. 
539 + */ 540 + u32 counter = (baud * 16384) / ((port->uartclk / 64) + 1); 541 + asc_out(port, ASC_BAUDRATE, counter); 542 + ctrl_val |= ASC_CTL_BAUDMODE; 543 + } 544 + 545 + uart_update_timeout(port, cflag, baud); 546 + 547 + ascport->port.read_status_mask = ASC_RXBUF_DUMMY_OE; 548 + if (termios->c_iflag & INPCK) 549 + ascport->port.read_status_mask |= ASC_RXBUF_FE | ASC_RXBUF_PE; 550 + if (termios->c_iflag & (BRKINT | PARMRK)) 551 + ascport->port.read_status_mask |= ASC_RXBUF_DUMMY_BE; 552 + 553 + /* 554 + * Characters to ignore 555 + */ 556 + ascport->port.ignore_status_mask = 0; 557 + if (termios->c_iflag & IGNPAR) 558 + ascport->port.ignore_status_mask |= ASC_RXBUF_FE | ASC_RXBUF_PE; 559 + if (termios->c_iflag & IGNBRK) { 560 + ascport->port.ignore_status_mask |= ASC_RXBUF_DUMMY_BE; 561 + /* 562 + * If we're ignoring parity and break indicators, 563 + * ignore overruns too (for real raw support). 564 + */ 565 + if (termios->c_iflag & IGNPAR) 566 + ascport->port.ignore_status_mask |= ASC_RXBUF_DUMMY_OE; 567 + } 568 + 569 + /* 570 + * Ignore all characters if CREAD is not set. 571 + */ 572 + if (!(termios->c_cflag & CREAD)) 573 + ascport->port.ignore_status_mask |= ASC_RXBUF_DUMMY_RX; 574 + 575 + /* Set the timeout */ 576 + asc_out(port, ASC_TIMEOUT, 20); 577 + 578 + /* write final value and enable port */ 579 + asc_out(port, ASC_CTL, (ctrl_val | ASC_CTL_RUN)); 580 + 581 + spin_unlock_irqrestore(&port->lock, flags); 582 + } 583 + 584 + static const char *asc_type(struct uart_port *port) 585 + { 586 + return (port->type == PORT_ASC) ? 
DRIVER_NAME : NULL; 587 + } 588 + 589 + static void asc_release_port(struct uart_port *port) 590 + { 591 + } 592 + 593 + static int asc_request_port(struct uart_port *port) 594 + { 595 + return 0; 596 + } 597 + 598 + /* 599 + * Called when the port is opened, and UPF_BOOT_AUTOCONF flag is set 600 + * Set type field if successful 601 + */ 602 + static void asc_config_port(struct uart_port *port, int flags) 603 + { 604 + if ((flags & UART_CONFIG_TYPE)) 605 + port->type = PORT_ASC; 606 + } 607 + 608 + static int 609 + asc_verify_port(struct uart_port *port, struct serial_struct *ser) 610 + { 611 + /* No user changeable parameters */ 612 + return -EINVAL; 613 + } 614 + 615 + #ifdef CONFIG_CONSOLE_POLL 616 + /* 617 + * Console polling routines for writing and reading from the uart while 618 + * in an interrupt or debug context (i.e. kgdb). 619 + */ 620 + 621 + static int asc_get_poll_char(struct uart_port *port) 622 + { 623 + if (!(asc_in(port, ASC_STA) & ASC_STA_RBF)) 624 + return NO_POLL_CHAR; 625 + 626 + return asc_in(port, ASC_RXBUF); 627 + } 628 + 629 + static void asc_put_poll_char(struct uart_port *port, unsigned char c) 630 + { 631 + while (asc_txfifo_is_full(port)) 632 + cpu_relax(); 633 + asc_out(port, ASC_TXBUF, c); 634 + } 635 + 636 + #endif /* CONFIG_CONSOLE_POLL */ 637 + 638 + /*---------------------------------------------------------------------*/ 639 + 640 + static struct uart_ops asc_uart_ops = { 641 + .tx_empty = asc_tx_empty, 642 + .set_mctrl = asc_set_mctrl, 643 + .get_mctrl = asc_get_mctrl, 644 + .start_tx = asc_start_tx, 645 + .stop_tx = asc_stop_tx, 646 + .stop_rx = asc_stop_rx, 647 + .enable_ms = asc_enable_ms, 648 + .break_ctl = asc_break_ctl, 649 + .startup = asc_startup, 650 + .shutdown = asc_shutdown, 651 + .set_termios = asc_set_termios, 652 + .type = asc_type, 653 + .release_port = asc_release_port, 654 + .request_port = asc_request_port, 655 + .config_port = asc_config_port, 656 + .verify_port = asc_verify_port, 657 + .pm = asc_pm, 658 + 
#ifdef CONFIG_CONSOLE_POLL 659 + .poll_get_char = asc_get_poll_char, 660 + .poll_put_char = asc_put_poll_char, 661 + #endif /* CONFIG_CONSOLE_POLL */ 662 + }; 663 + 664 + static int asc_init_port(struct asc_port *ascport, 665 + struct platform_device *pdev) 666 + { 667 + struct uart_port *port = &ascport->port; 668 + struct resource *res; 669 + 670 + port->iotype = UPIO_MEM; 671 + port->flags = UPF_BOOT_AUTOCONF; 672 + port->ops = &asc_uart_ops; 673 + port->fifosize = ASC_FIFO_SIZE; 674 + port->dev = &pdev->dev; 675 + port->irq = platform_get_irq(pdev, 0); 676 + 677 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 678 + port->membase = devm_ioremap_resource(&pdev->dev, res); 679 + if (IS_ERR(port->membase)) 680 + return PTR_ERR(port->membase); 681 + port->mapbase = res->start; 682 + 683 + spin_lock_init(&port->lock); 684 + 685 + ascport->clk = devm_clk_get(&pdev->dev, NULL); 686 + 687 + if (WARN_ON(IS_ERR(ascport->clk))) 688 + return -EINVAL; 689 + /* ensure that clk rate is correct by enabling the clk */ 690 + clk_prepare_enable(ascport->clk); 691 + ascport->port.uartclk = clk_get_rate(ascport->clk); 692 + WARN_ON(ascport->port.uartclk == 0); 693 + clk_disable_unprepare(ascport->clk); 694 + 695 + return 0; 696 + } 697 + 698 + static struct asc_port *asc_of_get_asc_port(struct platform_device *pdev) 699 + { 700 + struct device_node *np = pdev->dev.of_node; 701 + int id; 702 + 703 + if (!np) 704 + return NULL; 705 + 706 + id = of_alias_get_id(np, ASC_SERIAL_NAME); 707 + 708 + if (id < 0) 709 + id = 0; 710 + 711 + if (WARN_ON(id >= ASC_MAX_PORTS)) 712 + return NULL; 713 + 714 + asc_ports[id].hw_flow_control = of_property_read_bool(np, 715 + "st,hw-flow-control"); 716 + asc_ports[id].force_m1 = of_property_read_bool(np, "st,force_m1"); 717 + asc_ports[id].port.line = id; 718 + return &asc_ports[id]; 719 + } 720 + 721 + #ifdef CONFIG_OF 722 + static struct of_device_id asc_match[] = { 723 + { .compatible = "st,asc", }, 724 + {}, 725 + }; 726 + 727 + 
MODULE_DEVICE_TABLE(of, asc_match); 728 + #endif 729 + 730 + static int asc_serial_probe(struct platform_device *pdev) 731 + { 732 + int ret; 733 + struct asc_port *ascport; 734 + 735 + ascport = asc_of_get_asc_port(pdev); 736 + if (!ascport) 737 + return -ENODEV; 738 + 739 + ret = asc_init_port(ascport, pdev); 740 + if (ret) 741 + return ret; 742 + 743 + ret = uart_add_one_port(&asc_uart_driver, &ascport->port); 744 + if (ret) 745 + return ret; 746 + 747 + platform_set_drvdata(pdev, &ascport->port); 748 + 749 + return 0; 750 + } 751 + 752 + static int asc_serial_remove(struct platform_device *pdev) 753 + { 754 + struct uart_port *port = platform_get_drvdata(pdev); 755 + 756 + return uart_remove_one_port(&asc_uart_driver, port); 757 + } 758 + 759 + #ifdef CONFIG_PM_SLEEP 760 + static int asc_serial_suspend(struct device *dev) 761 + { 762 + struct platform_device *pdev = to_platform_device(dev); 763 + struct uart_port *port = platform_get_drvdata(pdev); 764 + 765 + return uart_suspend_port(&asc_uart_driver, port); 766 + } 767 + 768 + static int asc_serial_resume(struct device *dev) 769 + { 770 + struct platform_device *pdev = to_platform_device(dev); 771 + struct uart_port *port = platform_get_drvdata(pdev); 772 + 773 + return uart_resume_port(&asc_uart_driver, port); 774 + } 775 + 776 + #endif /* CONFIG_PM_SLEEP */ 777 + 778 + /*----------------------------------------------------------------------*/ 779 + 780 + #ifdef CONFIG_SERIAL_ST_ASC_CONSOLE 781 + static void asc_console_putchar(struct uart_port *port, int ch) 782 + { 783 + unsigned int timeout = 1000000; 784 + 785 + /* Wait for upto 1 second in case flow control is stopping us. */ 786 + while (--timeout && asc_txfifo_is_full(port)) 787 + udelay(1); 788 + 789 + asc_out(port, ASC_TXBUF, ch); 790 + } 791 + 792 + /* 793 + * Print a string to the serial port trying not to disturb 794 + * any possible real use of the port... 
795 + */ 796 + 797 + static void asc_console_write(struct console *co, const char *s, unsigned count) 798 + { 799 + struct uart_port *port = &asc_ports[co->index].port; 800 + unsigned long flags; 801 + unsigned long timeout = 1000000; 802 + int locked = 1; 803 + u32 intenable; 804 + 805 + local_irq_save(flags); 806 + if (port->sysrq) 807 + locked = 0; /* asc_interrupt has already claimed the lock */ 808 + else if (oops_in_progress) 809 + locked = spin_trylock(&port->lock); 810 + else 811 + spin_lock(&port->lock); 812 + 813 + /* 814 + * Disable interrupts so we don't get the IRQ line bouncing 815 + * up and down while interrupts are disabled. 816 + */ 817 + intenable = asc_in(port, ASC_INTEN); 818 + asc_out(port, ASC_INTEN, 0); 819 + (void)asc_in(port, ASC_INTEN); /* Defeat bus write posting */ 820 + 821 + uart_console_write(port, s, count, asc_console_putchar); 822 + 823 + while (--timeout && !asc_txfifo_is_empty(port)) 824 + udelay(1); 825 + 826 + asc_out(port, ASC_INTEN, intenable); 827 + 828 + if (locked) 829 + spin_unlock(&port->lock); 830 + local_irq_restore(flags); 831 + } 832 + 833 + static int asc_console_setup(struct console *co, char *options) 834 + { 835 + struct asc_port *ascport; 836 + int baud = 9600; 837 + int bits = 8; 838 + int parity = 'n'; 839 + int flow = 'n'; 840 + 841 + if (co->index >= ASC_MAX_PORTS) 842 + return -ENODEV; 843 + 844 + ascport = &asc_ports[co->index]; 845 + 846 + /* 847 + * This driver does not support early console initialization 848 + * (use ARM early printk support instead), so we only expect 849 + * this to be called during the uart port registration when the 850 + * driver gets probed and the port should be mapped at that point. 
851 + */ 852 + BUG_ON(ascport->port.mapbase == 0 || ascport->port.membase == NULL); 853 + 854 + if (options) 855 + uart_parse_options(options, &baud, &parity, &bits, &flow); 856 + 857 + return uart_set_options(&ascport->port, co, baud, parity, bits, flow); 858 + } 859 + 860 + static struct console asc_console = { 861 + .name = ASC_SERIAL_NAME, 862 + .device = uart_console_device, 863 + .write = asc_console_write, 864 + .setup = asc_console_setup, 865 + .flags = CON_PRINTBUFFER, 866 + .index = -1, 867 + .data = &asc_uart_driver, 868 + }; 869 + 870 + #define ASC_SERIAL_CONSOLE (&asc_console) 871 + 872 + #else 873 + #define ASC_SERIAL_CONSOLE NULL 874 + #endif /* CONFIG_SERIAL_ST_ASC_CONSOLE */ 875 + 876 + static struct uart_driver asc_uart_driver = { 877 + .owner = THIS_MODULE, 878 + .driver_name = DRIVER_NAME, 879 + .dev_name = ASC_SERIAL_NAME, 880 + .major = 0, 881 + .minor = 0, 882 + .nr = ASC_MAX_PORTS, 883 + .cons = ASC_SERIAL_CONSOLE, 884 + }; 885 + 886 + static const struct dev_pm_ops asc_serial_pm_ops = { 887 + SET_SYSTEM_SLEEP_PM_OPS(asc_serial_suspend, asc_serial_resume) 888 + }; 889 + 890 + static struct platform_driver asc_serial_driver = { 891 + .probe = asc_serial_probe, 892 + .remove = asc_serial_remove, 893 + .driver = { 894 + .name = DRIVER_NAME, 895 + .pm = &asc_serial_pm_ops, 896 + .owner = THIS_MODULE, 897 + .of_match_table = of_match_ptr(asc_match), 898 + }, 899 + }; 900 + 901 + static int __init asc_init(void) 902 + { 903 + int ret; 904 + static char banner[] __initdata = 905 + KERN_INFO "STMicroelectronics ASC driver initialized\n"; 906 + 907 + printk(banner); 908 + 909 + ret = uart_register_driver(&asc_uart_driver); 910 + if (ret) 911 + return ret; 912 + 913 + ret = platform_driver_register(&asc_serial_driver); 914 + if (ret) 915 + uart_unregister_driver(&asc_uart_driver); 916 + 917 + return ret; 918 + } 919 + 920 + static void __exit asc_exit(void) 921 + { 922 + platform_driver_unregister(&asc_serial_driver); 923 + 
uart_unregister_driver(&asc_uart_driver); 924 + } 925 + 926 + module_init(asc_init); 927 + module_exit(asc_exit); 928 + 929 + MODULE_ALIAS("platform:" DRIVER_NAME); 930 + MODULE_AUTHOR("STMicroelectronics (R&D) Limited"); 931 + MODULE_DESCRIPTION("STMicroelectronics ASC serial port driver"); 932 + MODULE_LICENSE("GPL");
+2 -2
drivers/tty/serial/timbuart.c
··· 162 162 dev_dbg(port->dev, "%s - leaving\n", __func__); 163 163 } 164 164 165 - void timbuart_handle_rx_port(struct uart_port *port, u32 isr, u32 *ier) 165 + static void timbuart_handle_rx_port(struct uart_port *port, u32 isr, u32 *ier) 166 166 { 167 167 if (isr & RXFLAGS) { 168 168 /* Some RX status is set */ ··· 184 184 dev_dbg(port->dev, "%s - leaving\n", __func__); 185 185 } 186 186 187 - void timbuart_tasklet(unsigned long arg) 187 + static void timbuart_tasklet(unsigned long arg) 188 188 { 189 189 struct timbuart_port *uart = (struct timbuart_port *)arg; 190 190 u32 isr, ier = 0;
+1 -1
drivers/tty/serial/vr41xx_siu.c
··· 705 705 { 706 706 struct uart_port *port; 707 707 struct resource *res; 708 - int *type = pdev->dev.platform_data; 708 + int *type = dev_get_platdata(&pdev->dev); 709 709 int i; 710 710 711 711 if (!type)
+2 -1
drivers/tty/serial/vt8500_serial.c
··· 170 170 tty_insert_flip_char(tport, c, flag); 171 171 } 172 172 173 + spin_unlock(&port->lock); 173 174 tty_flip_buffer_push(tport); 175 + spin_lock(&port->lock); 174 176 } 175 177 176 178 static void handle_tx(struct uart_port *port) ··· 632 630 { 633 631 struct vt8500_port *vt8500_port = platform_get_drvdata(pdev); 634 632 635 - platform_set_drvdata(pdev, NULL); 636 633 clk_disable_unprepare(vt8500_port->clk); 637 634 uart_remove_one_port(&vt8500_uart_driver, &vt8500_port->uart); 638 635
+65 -65
drivers/tty/synclink.c
··· 577 577 578 578 #define SICR_RXC_ACTIVE BIT15 579 579 #define SICR_RXC_INACTIVE BIT14 580 - #define SICR_RXC (BIT15+BIT14) 580 + #define SICR_RXC (BIT15|BIT14) 581 581 #define SICR_TXC_ACTIVE BIT13 582 582 #define SICR_TXC_INACTIVE BIT12 583 - #define SICR_TXC (BIT13+BIT12) 583 + #define SICR_TXC (BIT13|BIT12) 584 584 #define SICR_RI_ACTIVE BIT11 585 585 #define SICR_RI_INACTIVE BIT10 586 - #define SICR_RI (BIT11+BIT10) 586 + #define SICR_RI (BIT11|BIT10) 587 587 #define SICR_DSR_ACTIVE BIT9 588 588 #define SICR_DSR_INACTIVE BIT8 589 - #define SICR_DSR (BIT9+BIT8) 589 + #define SICR_DSR (BIT9|BIT8) 590 590 #define SICR_DCD_ACTIVE BIT7 591 591 #define SICR_DCD_INACTIVE BIT6 592 - #define SICR_DCD (BIT7+BIT6) 592 + #define SICR_DCD (BIT7|BIT6) 593 593 #define SICR_CTS_ACTIVE BIT5 594 594 #define SICR_CTS_INACTIVE BIT4 595 - #define SICR_CTS (BIT5+BIT4) 595 + #define SICR_CTS (BIT5|BIT4) 596 596 #define SICR_RCC_UNDERFLOW BIT3 597 597 #define SICR_DPLL_NO_SYNC BIT2 598 598 #define SICR_BRG1_ZERO BIT1 ··· 1161 1161 { 1162 1162 u16 status = usc_InReg( info, RCSR ); 1163 1163 1164 - if ( debug_level >= DEBUG_LEVEL_ISR ) 1164 + if ( debug_level >= DEBUG_LEVEL_ISR ) 1165 1165 printk("%s(%d):mgsl_isr_receive_status status=%04X\n", 1166 1166 __FILE__,__LINE__,status); 1167 1167 ··· 1181 1181 (usc_InReg(info, RICR) & ~RXSTATUS_ABORT_RECEIVED)); 1182 1182 } 1183 1183 1184 - if (status & (RXSTATUS_EXITED_HUNT + RXSTATUS_IDLE_RECEIVED)) { 1184 + if (status & (RXSTATUS_EXITED_HUNT | RXSTATUS_IDLE_RECEIVED)) { 1185 1185 if (status & RXSTATUS_EXITED_HUNT) 1186 1186 info->icount.exithunt++; 1187 1187 if (status & RXSTATUS_IDLE_RECEIVED) ··· 1463 1463 1464 1464 /* get the status of the received byte */ 1465 1465 status = usc_InReg(info, RCSR); 1466 - if ( status & (RXSTATUS_FRAMING_ERROR + RXSTATUS_PARITY_ERROR + 1467 - RXSTATUS_OVERRUN + RXSTATUS_BREAK_RECEIVED) ) 1466 + if ( status & (RXSTATUS_FRAMING_ERROR | RXSTATUS_PARITY_ERROR | 1467 + RXSTATUS_OVERRUN | 
RXSTATUS_BREAK_RECEIVED) ) 1468 1468 usc_UnlatchRxstatusBits(info,RXSTATUS_ALL); 1469 1469 1470 1470 icount->rx++; 1471 1471 1472 1472 flag = 0; 1473 - if ( status & (RXSTATUS_FRAMING_ERROR + RXSTATUS_PARITY_ERROR + 1474 - RXSTATUS_OVERRUN + RXSTATUS_BREAK_RECEIVED) ) { 1475 - printk("rxerr=%04X\n",status); 1473 + if ( status & (RXSTATUS_FRAMING_ERROR | RXSTATUS_PARITY_ERROR | 1474 + RXSTATUS_OVERRUN | RXSTATUS_BREAK_RECEIVED) ) { 1475 + printk("rxerr=%04X\n",status); 1476 1476 /* update error statistics */ 1477 1477 if ( status & RXSTATUS_BREAK_RECEIVED ) { 1478 - status &= ~(RXSTATUS_FRAMING_ERROR + RXSTATUS_PARITY_ERROR); 1478 + status &= ~(RXSTATUS_FRAMING_ERROR | RXSTATUS_PARITY_ERROR); 1479 1479 icount->brk++; 1480 - } else if (status & RXSTATUS_PARITY_ERROR) 1480 + } else if (status & RXSTATUS_PARITY_ERROR) 1481 1481 icount->parity++; 1482 1482 else if (status & RXSTATUS_FRAMING_ERROR) 1483 1483 icount->frame++; ··· 1488 1488 icount->overrun++; 1489 1489 } 1490 1490 1491 - /* discard char if tty control flags say so */ 1491 + /* discard char if tty control flags say so */ 1492 1492 if (status & info->ignore_status_mask) 1493 1493 continue; 1494 1494 ··· 1545 1545 usc_EnableReceiver(info,DISABLE_UNCONDITIONAL); 1546 1546 usc_DmaCmd(info, DmaCmd_ResetRxChannel); 1547 1547 usc_UnlatchRxstatusBits(info, RXSTATUS_ALL); 1548 - usc_ClearIrqPendingBits(info, RECEIVE_DATA + RECEIVE_STATUS); 1549 - usc_DisableInterrupts(info, RECEIVE_DATA + RECEIVE_STATUS); 1548 + usc_ClearIrqPendingBits(info, RECEIVE_DATA | RECEIVE_STATUS); 1549 + usc_DisableInterrupts(info, RECEIVE_DATA | RECEIVE_STATUS); 1550 1550 1551 1551 /* schedule BH handler to restart receiver */ 1552 1552 info->pending_bh |= BH_RECEIVE; ··· 1595 1595 u16 status; 1596 1596 1597 1597 /* clear interrupt pending and IUS bit for Rx DMA IRQ */ 1598 - usc_OutDmaReg( info, CDIR, BIT9+BIT1 ); 1598 + usc_OutDmaReg( info, CDIR, BIT9 | BIT1 ); 1599 1599 1600 1600 /* Read the receive DMA status to identify interrupt 
type. */ 1601 1601 /* This also clears the status bits. */ ··· 1639 1639 u16 status; 1640 1640 1641 1641 /* clear interrupt pending and IUS bit for Tx DMA IRQ */ 1642 - usc_OutDmaReg(info, CDIR, BIT8+BIT0 ); 1642 + usc_OutDmaReg(info, CDIR, BIT8 | BIT0 ); 1643 1643 1644 1644 /* Read the transmit DMA status to identify interrupt type. */ 1645 1645 /* This also clears the status bits. */ ··· 1832 1832 usc_DisableMasterIrqBit(info); 1833 1833 usc_stop_receiver(info); 1834 1834 usc_stop_transmitter(info); 1835 - usc_DisableInterrupts(info,RECEIVE_DATA + RECEIVE_STATUS + 1836 - TRANSMIT_DATA + TRANSMIT_STATUS + IO_PIN + MISC ); 1835 + usc_DisableInterrupts(info,RECEIVE_DATA | RECEIVE_STATUS | 1836 + TRANSMIT_DATA | TRANSMIT_STATUS | IO_PIN | MISC ); 1837 1837 usc_DisableDmaInterrupts(info,DICR_MASTER + DICR_TRANSMIT + DICR_RECEIVE); 1838 1838 1839 1839 /* Disable DMAEN (Port 7, Bit 14) */ ··· 1886 1886 info->ri_chkcount = 0; 1887 1887 info->dsr_chkcount = 0; 1888 1888 1889 - usc_EnableStatusIrqs(info,SICR_CTS+SICR_DSR+SICR_DCD+SICR_RI); 1889 + usc_EnableStatusIrqs(info,SICR_CTS+SICR_DSR+SICR_DCD+SICR_RI); 1890 1890 usc_EnableInterrupts(info, IO_PIN); 1891 1891 usc_get_serial_signals(info); 1892 1892 ··· 2773 2773 if (!waitqueue_active(&info->event_wait_q)) { 2774 2774 /* disable enable exit hunt mode/idle rcvd IRQs */ 2775 2775 usc_OutReg(info, RICR, usc_InReg(info,RICR) & 2776 - ~(RXSTATUS_EXITED_HUNT + RXSTATUS_IDLE_RECEIVED)); 2776 + ~(RXSTATUS_EXITED_HUNT | RXSTATUS_IDLE_RECEIVED)); 2777 2777 } 2778 2778 spin_unlock_irqrestore(&info->irq_spinlock,flags); 2779 2779 } ··· 3092 3092 printk("%s(%d):mgsl_close(%s) entry, count=%d\n", 3093 3093 __FILE__,__LINE__, info->device_name, info->port.count); 3094 3094 3095 - if (tty_port_close_start(&info->port, tty, filp) == 0) 3095 + if (tty_port_close_start(&info->port, tty, filp) == 0) 3096 3096 goto cleanup; 3097 3097 3098 3098 mutex_lock(&info->port.mutex); ··· 4297 4297 spin_lock_init(&info->irq_spinlock); 4298 4298 
spin_lock_init(&info->netlock); 4299 4299 memcpy(&info->params,&default_params,sizeof(MGSL_PARAMS)); 4300 - info->idle_mode = HDLC_TXIDLE_FLAGS; 4300 + info->idle_mode = HDLC_TXIDLE_FLAGS; 4301 4301 info->num_tx_dma_buffers = 1; 4302 4302 info->num_tx_holding_buffers = 0; 4303 4303 } ··· 4722 4722 else if ( info->params.flags & HDLC_FLAG_UNDERRUN_FLAG ) 4723 4723 RegValue |= BIT15; 4724 4724 else if ( info->params.flags & HDLC_FLAG_UNDERRUN_CRC ) 4725 - RegValue |= BIT15 + BIT14; 4725 + RegValue |= BIT15 | BIT14; 4726 4726 } 4727 4727 4728 4728 if ( info->params.preamble != HDLC_PREAMBLE_PATTERN_NONE ) ··· 4763 4763 switch ( info->params.encoding ) { 4764 4764 case HDLC_ENCODING_NRZB: RegValue |= BIT13; break; 4765 4765 case HDLC_ENCODING_NRZI_MARK: RegValue |= BIT14; break; 4766 - case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT14 + BIT13; break; 4766 + case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT14 | BIT13; break; 4767 4767 case HDLC_ENCODING_BIPHASE_MARK: RegValue |= BIT15; break; 4768 - case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT15 + BIT13; break; 4769 - case HDLC_ENCODING_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14; break; 4770 - case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14 + BIT13; break; 4768 + case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT15 | BIT13; break; 4769 + case HDLC_ENCODING_BIPHASE_LEVEL: RegValue |= BIT15 | BIT14; break; 4770 + case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT15 | BIT14 | BIT13; break; 4771 4771 } 4772 4772 4773 4773 if ( (info->params.crc_type & HDLC_CRC_MASK) == HDLC_CRC_16_CCITT ) ··· 4838 4838 switch ( info->params.encoding ) { 4839 4839 case HDLC_ENCODING_NRZB: RegValue |= BIT13; break; 4840 4840 case HDLC_ENCODING_NRZI_MARK: RegValue |= BIT14; break; 4841 - case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT14 + BIT13; break; 4841 + case HDLC_ENCODING_NRZI_SPACE: RegValue |= BIT14 | BIT13; break; 4842 4842 case HDLC_ENCODING_BIPHASE_MARK: RegValue |= BIT15; break; 4843 - case 
HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT15 + BIT13; break; 4844 - case HDLC_ENCODING_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14; break; 4845 - case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT15 + BIT14 + BIT13; break; 4843 + case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT15 | BIT13; break; 4844 + case HDLC_ENCODING_BIPHASE_LEVEL: RegValue |= BIT15 | BIT14; break; 4845 + case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT15 | BIT14 | BIT13; break; 4846 4846 } 4847 4847 4848 4848 if ( (info->params.crc_type & HDLC_CRC_MASK) == HDLC_CRC_16_CCITT ) 4849 - RegValue |= BIT9 + BIT8; 4849 + RegValue |= BIT9 | BIT8; 4850 4850 else if ( (info->params.crc_type & HDLC_CRC_MASK) == HDLC_CRC_32_CCITT ) 4851 4851 RegValue |= ( BIT12 | BIT10 | BIT9 | BIT8); 4852 4852 ··· 4957 4957 4958 4958 RegValue = 0x0000; 4959 4959 4960 - if ( info->params.flags & (HDLC_FLAG_RXC_DPLL + HDLC_FLAG_TXC_DPLL) ) { 4960 + if ( info->params.flags & (HDLC_FLAG_RXC_DPLL | HDLC_FLAG_TXC_DPLL) ) { 4961 4961 u32 XtalSpeed; 4962 4962 u32 DpllDivisor; 4963 4963 u16 Tc; ··· 5019 5019 case HDLC_ENCODING_BIPHASE_MARK: 5020 5020 case HDLC_ENCODING_BIPHASE_SPACE: RegValue |= BIT9; break; 5021 5021 case HDLC_ENCODING_BIPHASE_LEVEL: 5022 - case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT9 + BIT8; break; 5022 + case HDLC_ENCODING_DIFF_BIPHASE_LEVEL: RegValue |= BIT9 | BIT8; break; 5023 5023 } 5024 5024 } 5025 5025 ··· 5056 5056 /* enable Master Interrupt Enable bit (MIE) */ 5057 5057 usc_EnableMasterIrqBit( info ); 5058 5058 5059 - usc_ClearIrqPendingBits( info, RECEIVE_STATUS + RECEIVE_DATA + 5060 - TRANSMIT_STATUS + TRANSMIT_DATA + MISC); 5059 + usc_ClearIrqPendingBits( info, RECEIVE_STATUS | RECEIVE_DATA | 5060 + TRANSMIT_STATUS | TRANSMIT_DATA | MISC); 5061 5061 5062 5062 /* arm RCC underflow interrupt */ 5063 5063 usc_OutReg(info, SICR, (u16)(usc_InReg(info,SICR) | BIT3)); ··· 5175 5175 switch ( info->params.preamble_length ) { 5176 5176 case HDLC_PREAMBLE_LENGTH_16BITS: RegValue |= BIT10; 
break; 5177 5177 case HDLC_PREAMBLE_LENGTH_32BITS: RegValue |= BIT11; break; 5178 - case HDLC_PREAMBLE_LENGTH_64BITS: RegValue |= BIT11 + BIT10; break; 5178 + case HDLC_PREAMBLE_LENGTH_64BITS: RegValue |= BIT11 | BIT10; break; 5179 5179 } 5180 5180 5181 5181 switch ( info->params.preamble ) { 5182 - case HDLC_PREAMBLE_PATTERN_FLAGS: RegValue |= BIT8 + BIT12; break; 5182 + case HDLC_PREAMBLE_PATTERN_FLAGS: RegValue |= BIT8 | BIT12; break; 5183 5183 case HDLC_PREAMBLE_PATTERN_ONES: RegValue |= BIT8; break; 5184 5184 case HDLC_PREAMBLE_PATTERN_10: RegValue |= BIT9; break; 5185 - case HDLC_PREAMBLE_PATTERN_01: RegValue |= BIT9 + BIT8; break; 5185 + case HDLC_PREAMBLE_PATTERN_01: RegValue |= BIT9 | BIT8; break; 5186 5186 } 5187 5187 5188 5188 usc_OutReg( info, CCR, RegValue ); ··· 5221 5221 { 5222 5222 if (enable) { 5223 5223 /* blank external TXD output */ 5224 - usc_OutReg(info,IOCR,usc_InReg(info,IOCR) | (BIT7+BIT6)); 5224 + usc_OutReg(info,IOCR,usc_InReg(info,IOCR) | (BIT7 | BIT6)); 5225 5225 5226 5226 /* Clock mode Control Register (CMCR) 5227 5227 * ··· 5260 5260 outw( 0x0300, info->io_base + CCAR ); 5261 5261 } else { 5262 5262 /* enable external TXD output */ 5263 - usc_OutReg(info,IOCR,usc_InReg(info,IOCR) & ~(BIT7+BIT6)); 5263 + usc_OutReg(info,IOCR,usc_InReg(info,IOCR) & ~(BIT7 | BIT6)); 5264 5264 5265 5265 /* clear Internal Data loopback mode */ 5266 5266 info->loopback_bits = 0; ··· 5447 5447 usc_OutDmaReg( info, NRARU, (u16)(phys_addr >> 16) ); 5448 5448 5449 5449 usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); 5450 - usc_ClearIrqPendingBits( info, RECEIVE_DATA + RECEIVE_STATUS ); 5450 + usc_ClearIrqPendingBits( info, RECEIVE_DATA | RECEIVE_STATUS ); 5451 5451 usc_EnableInterrupts( info, RECEIVE_STATUS ); 5452 5452 5453 5453 /* 1. Arm End of Buffer (EOB) Receive DMA Interrupt (BIT2 of RDIAR) */ 5454 5454 /* 2. 
Enable Receive DMA Interrupts (BIT1 of DICR) */ 5455 5455 5456 - usc_OutDmaReg( info, RDIAR, BIT3 + BIT2 ); 5456 + usc_OutDmaReg( info, RDIAR, BIT3 | BIT2 ); 5457 5457 usc_OutDmaReg( info, DICR, (u16)(usc_InDmaReg(info,DICR) | BIT1) ); 5458 5458 usc_DmaCmd( info, DmaCmd_InitRxChannel ); 5459 5459 if ( info->params.flags & HDLC_FLAG_AUTO_DCD ) ··· 5488 5488 usc_DmaCmd( info, DmaCmd_ResetRxChannel ); 5489 5489 5490 5490 usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); 5491 - usc_ClearIrqPendingBits( info, RECEIVE_DATA + RECEIVE_STATUS ); 5492 - usc_DisableInterrupts( info, RECEIVE_DATA + RECEIVE_STATUS ); 5491 + usc_ClearIrqPendingBits( info, RECEIVE_DATA | RECEIVE_STATUS ); 5492 + usc_DisableInterrupts( info, RECEIVE_DATA | RECEIVE_STATUS ); 5493 5493 5494 5494 usc_EnableReceiver(info,DISABLE_UNCONDITIONAL); 5495 5495 ··· 5536 5536 usc_OutDmaReg( info, NRARU, (u16)(phys_addr >> 16) ); 5537 5537 5538 5538 usc_UnlatchRxstatusBits( info, RXSTATUS_ALL ); 5539 - usc_ClearIrqPendingBits( info, RECEIVE_DATA + RECEIVE_STATUS ); 5539 + usc_ClearIrqPendingBits( info, RECEIVE_DATA | RECEIVE_STATUS ); 5540 5540 usc_EnableInterrupts( info, RECEIVE_STATUS ); 5541 5541 5542 5542 /* 1. Arm End of Buffer (EOB) Receive DMA Interrupt (BIT2 of RDIAR) */ 5543 5543 /* 2. 
Enable Receive DMA Interrupts (BIT1 of DICR) */ 5544 5544 5545 - usc_OutDmaReg( info, RDIAR, BIT3 + BIT2 ); 5545 + usc_OutDmaReg( info, RDIAR, BIT3 | BIT2 ); 5546 5546 usc_OutDmaReg( info, DICR, (u16)(usc_InDmaReg(info,DICR) | BIT1) ); 5547 5547 usc_DmaCmd( info, DmaCmd_InitRxChannel ); 5548 5548 if ( info->params.flags & HDLC_FLAG_AUTO_DCD ) ··· 5551 5551 usc_EnableReceiver(info,ENABLE_UNCONDITIONAL); 5552 5552 } else { 5553 5553 usc_UnlatchRxstatusBits(info, RXSTATUS_ALL); 5554 - usc_ClearIrqPendingBits(info, RECEIVE_DATA + RECEIVE_STATUS); 5554 + usc_ClearIrqPendingBits(info, RECEIVE_DATA | RECEIVE_STATUS); 5555 5555 usc_EnableInterrupts(info, RECEIVE_DATA); 5556 5556 5557 5557 usc_RTCmd( info, RTCmd_PurgeRxFifo ); ··· 5925 5925 RegValue = 0; 5926 5926 5927 5927 if ( info->params.data_bits != 8 ) 5928 - RegValue |= BIT4+BIT3+BIT2; 5928 + RegValue |= BIT4 | BIT3 | BIT2; 5929 5929 5930 5930 if ( info->params.parity != ASYNC_PARITY_NONE ) { 5931 5931 RegValue |= BIT5; ··· 5982 5982 RegValue = 0; 5983 5983 5984 5984 if ( info->params.data_bits != 8 ) 5985 - RegValue |= BIT4+BIT3+BIT2; 5985 + RegValue |= BIT4 | BIT3 | BIT2; 5986 5986 5987 5987 if ( info->params.parity != ASYNC_PARITY_NONE ) { 5988 5988 RegValue |= BIT5; ··· 6129 6129 6130 6130 /* WAIT FOR RECEIVE COMPLETE */ 6131 6131 for (i=0 ; i<1000 ; i++) 6132 - if (usc_InReg( info, RCSR ) & (BIT8 + BIT4 + BIT3 + BIT1)) 6132 + if (usc_InReg( info, RCSR ) & (BIT8 | BIT4 | BIT3 | BIT1)) 6133 6133 break; 6134 6134 6135 6135 /* clear Internal Data loopback mode */ ··· 6579 6579 6580 6580 status = info->rx_buffer_list[EndIndex].status; 6581 6581 6582 - if ( status & (RXSTATUS_SHORT_FRAME + RXSTATUS_OVERRUN + 6583 - RXSTATUS_CRC_ERROR + RXSTATUS_ABORT) ) { 6582 + if ( status & (RXSTATUS_SHORT_FRAME | RXSTATUS_OVERRUN | 6583 + RXSTATUS_CRC_ERROR | RXSTATUS_ABORT) ) { 6584 6584 if ( status & RXSTATUS_SHORT_FRAME ) 6585 6585 info->icount.rxshort++; 6586 6586 else if ( status & RXSTATUS_ABORT ) ··· 6762 6762 6763 6763 
status = info->rx_buffer_list[CurrentIndex].status; 6764 6764 6765 - if ( status & (RXSTATUS_SHORT_FRAME + RXSTATUS_OVERRUN + 6766 - RXSTATUS_CRC_ERROR + RXSTATUS_ABORT) ) { 6765 + if ( status & (RXSTATUS_SHORT_FRAME | RXSTATUS_OVERRUN | 6766 + RXSTATUS_CRC_ERROR | RXSTATUS_ABORT) ) { 6767 6767 if ( status & RXSTATUS_SHORT_FRAME ) 6768 6768 info->icount.rxshort++; 6769 6769 else if ( status & RXSTATUS_ABORT ) ··· 6899 6899 /* set CMR:13 to start transmit when 6900 6900 * next GoAhead (abort) is received 6901 6901 */ 6902 - info->cmr_value |= BIT13; 6902 + info->cmr_value |= BIT13; 6903 6903 } 6904 6904 6905 6905 /* begin loading the frame in the next available tx dma ··· 7278 7278 7279 7279 spin_unlock_irqrestore(&info->irq_spinlock,flags); 7280 7280 7281 - 7281 + 7282 7282 /******************************/ 7283 7283 /* WAIT FOR TRANSMIT COMPLETE */ 7284 7284 /******************************/ ··· 7292 7292 status = usc_InReg( info, TCSR ); 7293 7293 spin_unlock_irqrestore(&info->irq_spinlock,flags); 7294 7294 7295 - while ( !(status & (BIT6+BIT5+BIT4+BIT2+BIT1)) ) { 7295 + while ( !(status & (BIT6 | BIT5 | BIT4 | BIT2 | BIT1)) ) { 7296 7296 if (time_after(jiffies, EndTime)) { 7297 7297 rc = false; 7298 7298 break; ··· 7307 7307 7308 7308 if ( rc ){ 7309 7309 /* CHECK FOR TRANSMIT ERRORS */ 7310 - if ( status & (BIT5 + BIT1) ) 7310 + if ( status & (BIT5 | BIT1) ) 7311 7311 rc = false; 7312 7312 } 7313 7313 ··· 7333 7333 /* CHECK FOR RECEIVE ERRORS */ 7334 7334 status = info->rx_buffer_list[0].status; 7335 7335 7336 - if ( status & (BIT8 + BIT3 + BIT1) ) { 7336 + if ( status & (BIT8 | BIT3 | BIT1) ) { 7337 7337 /* receive error has occurred */ 7338 7338 rc = false; 7339 7339 } else { ··· 7605 7605 { 7606 7606 info->loopmode_send_done_requested = false; 7607 7607 /* clear CMR:13 to 0 to start echoing RxData to TxData */ 7608 - info->cmr_value &= ~BIT13; 7608 + info->cmr_value &= ~BIT13; 7609 7609 usc_OutReg(info, CMR, info->cmr_value); 7610 7610 } 7611 7611
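The synclink hunks above repeatedly replace `+` with `|` when combining `BITn` masks. The two operators happen to agree while every bit is distinct and used once, but `|` is the correct operator for masks: it is idempotent, while `+` carries into a neighboring bit if a mask is ever repeated. A minimal userspace sketch (the `BITn` macros here are hypothetical stand-ins for the driver's constants):

```c
#include <stdint.h>

/* Hypothetical single-bit masks, mirroring the kernel's BITn constants. */
#define BIT1 (1u << 1)
#define BIT2 (1u << 2)
#define BIT3 (1u << 3)

/* Combine with bitwise OR: idempotent, always yields a valid mask. */
static uint16_t combine_or(uint16_t a, uint16_t b)  { return a | b; }

/* Combine with addition: only coincidentally correct for distinct bits;
 * a repeated mask carries into the next bit position. */
static uint16_t combine_add(uint16_t a, uint16_t b) { return a + b; }
```

For `BIT3` and `BIT2` both forms produce the same register value, which is why the old code worked; the fix removes the latent hazard rather than changing behavior.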
+202 -211
drivers/tty/tty_buffer.c
··· 18 18 #include <linux/module.h> 19 19 #include <linux/ratelimit.h> 20 20 21 + 22 + #define MIN_TTYB_SIZE 256 23 + #define TTYB_ALIGN_MASK 255 24 + 25 + /* 26 + * Byte threshold to limit memory consumption for flip buffers. 27 + * The actual memory limit is > 2x this amount. 28 + */ 29 + #define TTYB_MEM_LIMIT 65536 30 + 31 + /* 32 + * We default to dicing tty buffer allocations to this many characters 33 + * in order to avoid multiple page allocations. We know the size of 34 + * tty_buffer itself but it must also be taken into account that the 35 + * the buffer is 256 byte aligned. See tty_buffer_find for the allocation 36 + * logic this must match 37 + */ 38 + 39 + #define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF) 40 + 41 + 42 + /** 43 + * tty_buffer_lock_exclusive - gain exclusive access to buffer 44 + * tty_buffer_unlock_exclusive - release exclusive access 45 + * 46 + * @port - tty_port owning the flip buffer 47 + * 48 + * Guarantees safe use of the line discipline's receive_buf() method by 49 + * excluding the buffer work and any pending flush from using the flip 50 + * buffer. Data can continue to be added concurrently to the flip buffer 51 + * from the driver side. 
52 + * 53 + * On release, the buffer work is restarted if there is data in the 54 + * flip buffer 55 + */ 56 + 57 + void tty_buffer_lock_exclusive(struct tty_port *port) 58 + { 59 + struct tty_bufhead *buf = &port->buf; 60 + 61 + atomic_inc(&buf->priority); 62 + mutex_lock(&buf->lock); 63 + } 64 + 65 + void tty_buffer_unlock_exclusive(struct tty_port *port) 66 + { 67 + struct tty_bufhead *buf = &port->buf; 68 + int restart; 69 + 70 + restart = buf->head->commit != buf->head->read; 71 + 72 + atomic_dec(&buf->priority); 73 + mutex_unlock(&buf->lock); 74 + if (restart) 75 + queue_work(system_unbound_wq, &buf->work); 76 + } 77 + 78 + /** 79 + * tty_buffer_space_avail - return unused buffer space 80 + * @port - tty_port owning the flip buffer 81 + * 82 + * Returns the # of bytes which can be written by the driver without 83 + * reaching the buffer limit. 84 + * 85 + * Note: this does not guarantee that memory is available to write 86 + * the returned # of bytes (use tty_prepare_flip_string_xxx() to 87 + * pre-allocate if memory guarantee is required). 88 + */ 89 + 90 + int tty_buffer_space_avail(struct tty_port *port) 91 + { 92 + int space = TTYB_MEM_LIMIT - atomic_read(&port->buf.memory_used); 93 + return max(space, 0); 94 + } 95 + 96 + static void tty_buffer_reset(struct tty_buffer *p, size_t size) 97 + { 98 + p->used = 0; 99 + p->size = size; 100 + p->next = NULL; 101 + p->commit = 0; 102 + p->read = 0; 103 + } 104 + 21 105 /** 22 106 * tty_buffer_free_all - free buffers used by a tty 23 107 * @tty: tty to free from 24 108 * 25 109 * Remove all the buffers pending on a tty whether queued with data 26 110 * or in the free ring. 
Must be called when the tty is no longer in use 27 - * 28 - * Locking: none 29 111 */ 30 112 31 113 void tty_buffer_free_all(struct tty_port *port) 32 114 { 33 115 struct tty_bufhead *buf = &port->buf; 34 - struct tty_buffer *thead; 116 + struct tty_buffer *p, *next; 117 + struct llist_node *llist; 35 118 36 - while ((thead = buf->head) != NULL) { 37 - buf->head = thead->next; 38 - kfree(thead); 119 + while ((p = buf->head) != NULL) { 120 + buf->head = p->next; 121 + if (p->size > 0) 122 + kfree(p); 39 123 } 40 - while ((thead = buf->free) != NULL) { 41 - buf->free = thead->next; 42 - kfree(thead); 43 - } 44 - buf->tail = NULL; 45 - buf->memory_used = 0; 124 + llist = llist_del_all(&buf->free); 125 + llist_for_each_entry_safe(p, next, llist, free) 126 + kfree(p); 127 + 128 + tty_buffer_reset(&buf->sentinel, 0); 129 + buf->head = &buf->sentinel; 130 + buf->tail = &buf->sentinel; 131 + 132 + atomic_set(&buf->memory_used, 0); 46 133 } 47 134 48 135 /** ··· 138 51 * @size: desired size (characters) 139 52 * 140 53 * Allocate a new tty buffer to hold the desired number of characters. 54 + * We round our buffers off in 256 character chunks to get better 55 + * allocation behaviour. 141 56 * Return NULL if out of memory or the allocation would exceed the 142 57 * per device queue 143 - * 144 - * Locking: Caller must hold tty->buf.lock 145 58 */ 146 59 147 60 static struct tty_buffer *tty_buffer_alloc(struct tty_port *port, size_t size) 148 61 { 62 + struct llist_node *free; 149 63 struct tty_buffer *p; 150 64 151 - if (port->buf.memory_used + size > 65536) 65 + /* Round the buffer size out */ 66 + size = __ALIGN_MASK(size, TTYB_ALIGN_MASK); 67 + 68 + if (size <= MIN_TTYB_SIZE) { 69 + free = llist_del_first(&port->buf.free); 70 + if (free) { 71 + p = llist_entry(free, struct tty_buffer, free); 72 + goto found; 73 + } 74 + } 75 + 76 + /* Should possibly check if this fails for the largest buffer we 77 + have queued and recycle that ? 
*/ 78 + if (atomic_read(&port->buf.memory_used) > TTYB_MEM_LIMIT) 152 79 return NULL; 153 80 p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC); 154 81 if (p == NULL) 155 82 return NULL; 156 - p->used = 0; 157 - p->size = size; 158 - p->next = NULL; 159 - p->commit = 0; 160 - p->read = 0; 161 - p->char_buf_ptr = (char *)(p->data); 162 - p->flag_buf_ptr = (unsigned char *)p->char_buf_ptr + size; 163 - port->buf.memory_used += size; 83 + 84 + found: 85 + tty_buffer_reset(p, size); 86 + atomic_add(size, &port->buf.memory_used); 164 87 return p; 165 88 } 166 89 ··· 181 84 * 182 85 * Free a tty buffer, or add it to the free list according to our 183 86 * internal strategy 184 - * 185 - * Locking: Caller must hold tty->buf.lock 186 87 */ 187 88 188 89 static void tty_buffer_free(struct tty_port *port, struct tty_buffer *b) ··· 188 93 struct tty_bufhead *buf = &port->buf; 189 94 190 95 /* Dumb strategy for now - should keep some stats */ 191 - buf->memory_used -= b->size; 192 - WARN_ON(buf->memory_used < 0); 96 + WARN_ON(atomic_sub_return(b->size, &buf->memory_used) < 0); 193 97 194 - if (b->size >= 512) 98 + if (b->size > MIN_TTYB_SIZE) 195 99 kfree(b); 196 - else { 197 - b->next = buf->free; 198 - buf->free = b; 199 - } 200 - } 201 - 202 - /** 203 - * __tty_buffer_flush - flush full tty buffers 204 - * @tty: tty to flush 205 - * 206 - * flush all the buffers containing receive data. Caller must 207 - * hold the buffer lock and must have ensured no parallel flush to 208 - * ldisc is running. 
209 - * 210 - * Locking: Caller must hold tty->buf.lock 211 - */ 212 - 213 - static void __tty_buffer_flush(struct tty_port *port) 214 - { 215 - struct tty_bufhead *buf = &port->buf; 216 - struct tty_buffer *thead; 217 - 218 - if (unlikely(buf->head == NULL)) 219 - return; 220 - while ((thead = buf->head->next) != NULL) { 221 - tty_buffer_free(port, buf->head); 222 - buf->head = thead; 223 - } 224 - WARN_ON(buf->head != buf->tail); 225 - buf->head->read = buf->head->commit; 100 + else if (b->size > 0) 101 + llist_add(&b->free, &buf->free); 226 102 } 227 103 228 104 /** ··· 204 138 * being processed by flush_to_ldisc then we defer the processing 205 139 * to that function 206 140 * 207 - * Locking: none 141 + * Locking: takes buffer lock to ensure single-threaded flip buffer 142 + * 'consumer' 208 143 */ 209 144 210 145 void tty_buffer_flush(struct tty_struct *tty) 211 146 { 212 147 struct tty_port *port = tty->port; 213 148 struct tty_bufhead *buf = &port->buf; 214 - unsigned long flags; 149 + struct tty_buffer *next; 215 150 216 - spin_lock_irqsave(&buf->lock, flags); 151 + atomic_inc(&buf->priority); 217 152 218 - /* If the data is being pushed to the tty layer then we can't 219 - process it here. Instead set a flag and the flush_to_ldisc 220 - path will process the flush request before it exits */ 221 - if (test_bit(TTYP_FLUSHING, &port->iflags)) { 222 - set_bit(TTYP_FLUSHPENDING, &port->iflags); 223 - spin_unlock_irqrestore(&buf->lock, flags); 224 - wait_event(tty->read_wait, 225 - test_bit(TTYP_FLUSHPENDING, &port->iflags) == 0); 226 - return; 227 - } else 228 - __tty_buffer_flush(port); 229 - spin_unlock_irqrestore(&buf->lock, flags); 230 - } 231 - 232 - /** 233 - * tty_buffer_find - find a free tty buffer 234 - * @tty: tty owning the buffer 235 - * @size: characters wanted 236 - * 237 - * Locate an existing suitable tty buffer or if we are lacking one then 238 - * allocate a new one. 
We round our buffers off in 256 character chunks 239 - * to get better allocation behaviour. 240 - * 241 - * Locking: Caller must hold tty->buf.lock 242 - */ 243 - 244 - static struct tty_buffer *tty_buffer_find(struct tty_port *port, size_t size) 245 - { 246 - struct tty_buffer **tbh = &port->buf.free; 247 - while ((*tbh) != NULL) { 248 - struct tty_buffer *t = *tbh; 249 - if (t->size >= size) { 250 - *tbh = t->next; 251 - t->next = NULL; 252 - t->used = 0; 253 - t->commit = 0; 254 - t->read = 0; 255 - port->buf.memory_used += t->size; 256 - return t; 257 - } 258 - tbh = &((*tbh)->next); 153 + mutex_lock(&buf->lock); 154 + while ((next = buf->head->next) != NULL) { 155 + tty_buffer_free(port, buf->head); 156 + buf->head = next; 259 157 } 260 - /* Round the buffer size out */ 261 - size = (size + 0xFF) & ~0xFF; 262 - return tty_buffer_alloc(port, size); 263 - /* Should possibly check if this fails for the largest buffer we 264 - have queued and recycle that ? */ 158 + buf->head->read = buf->head->commit; 159 + atomic_dec(&buf->priority); 160 + mutex_unlock(&buf->lock); 265 161 } 162 + 266 163 /** 267 164 * tty_buffer_request_room - grow tty buffer if needed 268 165 * @tty: tty structure ··· 233 204 * 234 205 * Make at least size bytes of linear space available for the tty 235 206 * buffer. If we fail return the size we managed to find. 236 - * 237 - * Locking: Takes port->buf.lock 238 207 */ 239 208 int tty_buffer_request_room(struct tty_port *port, size_t size) 240 209 { 241 210 struct tty_bufhead *buf = &port->buf; 242 211 struct tty_buffer *b, *n; 243 212 int left; 244 - unsigned long flags; 245 - spin_lock_irqsave(&buf->lock, flags); 246 - /* OPTIMISATION: We could keep a per tty "zero" sized buffer to 247 - remove this conditional if its worth it. 
This would be invisible 248 - to the callers */ 213 + 249 214 b = buf->tail; 250 - if (b != NULL) 251 - left = b->size - b->used; 252 - else 253 - left = 0; 215 + left = b->size - b->used; 254 216 255 217 if (left < size) { 256 218 /* This is the slow path - looking for new buffers to use */ 257 - if ((n = tty_buffer_find(port, size)) != NULL) { 258 - if (b != NULL) { 259 - b->next = n; 260 - b->commit = b->used; 261 - } else 262 - buf->head = n; 219 + if ((n = tty_buffer_alloc(port, size)) != NULL) { 263 220 buf->tail = n; 221 + b->commit = b->used; 222 + smp_mb(); 223 + b->next = n; 264 224 } else 265 225 size = left; 266 226 } 267 - spin_unlock_irqrestore(&buf->lock, flags); 268 227 return size; 269 228 } 270 229 EXPORT_SYMBOL_GPL(tty_buffer_request_room); ··· 266 249 * 267 250 * Queue a series of bytes to the tty buffering. All the characters 268 251 * passed are marked with the supplied flag. Returns the number added. 269 - * 270 - * Locking: Called functions may take port->buf.lock 271 252 */ 272 253 273 254 int tty_insert_flip_string_fixed_flag(struct tty_port *port, ··· 276 261 int goal = min_t(size_t, size - copied, TTY_BUFFER_PAGE); 277 262 int space = tty_buffer_request_room(port, goal); 278 263 struct tty_buffer *tb = port->buf.tail; 279 - /* If there is no space then tb may be NULL */ 280 - if (unlikely(space == 0)) { 264 + if (unlikely(space == 0)) 281 265 break; 282 - } 283 - memcpy(tb->char_buf_ptr + tb->used, chars, space); 284 - memset(tb->flag_buf_ptr + tb->used, flag, space); 266 + memcpy(char_buf_ptr(tb, tb->used), chars, space); 267 + memset(flag_buf_ptr(tb, tb->used), flag, space); 285 268 tb->used += space; 286 269 copied += space; 287 270 chars += space; ··· 300 287 * Queue a series of bytes to the tty buffering. For each character 301 288 * the flags array indicates the status of the character. Returns the 302 289 * number added. 
303 - * 304 - * Locking: Called functions may take port->buf.lock 305 290 */ 306 291 307 292 int tty_insert_flip_string_flags(struct tty_port *port, ··· 310 299 int goal = min_t(size_t, size - copied, TTY_BUFFER_PAGE); 311 300 int space = tty_buffer_request_room(port, goal); 312 301 struct tty_buffer *tb = port->buf.tail; 313 - /* If there is no space then tb may be NULL */ 314 - if (unlikely(space == 0)) { 302 + if (unlikely(space == 0)) 315 303 break; 316 - } 317 - memcpy(tb->char_buf_ptr + tb->used, chars, space); 318 - memcpy(tb->flag_buf_ptr + tb->used, flags, space); 304 + memcpy(char_buf_ptr(tb, tb->used), chars, space); 305 + memcpy(flag_buf_ptr(tb, tb->used), flags, space); 319 306 tb->used += space; 320 307 copied += space; 321 308 chars += space; ··· 334 325 * processing by the line discipline. 335 326 * Note that this function can only be used when the low_latency flag 336 327 * is unset. Otherwise the workqueue won't be flushed. 337 - * 338 - * Locking: Takes port->buf.lock 339 328 */ 340 329 341 330 void tty_schedule_flip(struct tty_port *port) 342 331 { 343 332 struct tty_bufhead *buf = &port->buf; 344 - unsigned long flags; 345 333 WARN_ON(port->low_latency); 346 334 347 - spin_lock_irqsave(&buf->lock, flags); 348 - if (buf->tail != NULL) 349 - buf->tail->commit = buf->tail->used; 350 - spin_unlock_irqrestore(&buf->lock, flags); 335 + buf->tail->commit = buf->tail->used; 351 336 schedule_work(&buf->work); 352 337 } 353 338 EXPORT_SYMBOL(tty_schedule_flip); ··· 357 354 * accounted for as ready for normal characters. This is used for drivers 358 355 * that need their own block copy routines into the buffer. There is no 359 356 * guarantee the buffer is a DMA target! 
360 - * 361 - * Locking: May call functions taking port->buf.lock 362 357 */ 363 358 364 359 int tty_prepare_flip_string(struct tty_port *port, unsigned char **chars, ··· 365 364 int space = tty_buffer_request_room(port, size); 366 365 if (likely(space)) { 367 366 struct tty_buffer *tb = port->buf.tail; 368 - *chars = tb->char_buf_ptr + tb->used; 369 - memset(tb->flag_buf_ptr + tb->used, TTY_NORMAL, space); 367 + *chars = char_buf_ptr(tb, tb->used); 368 + memset(flag_buf_ptr(tb, tb->used), TTY_NORMAL, space); 370 369 tb->used += space; 371 370 } 372 371 return space; ··· 385 384 * accounted for as ready for characters. This is used for drivers 386 385 * that need their own block copy routines into the buffer. There is no 387 386 * guarantee the buffer is a DMA target! 388 - * 389 - * Locking: May call functions taking port->buf.lock 390 387 */ 391 388 392 389 int tty_prepare_flip_string_flags(struct tty_port *port, ··· 393 394 int space = tty_buffer_request_room(port, size); 394 395 if (likely(space)) { 395 396 struct tty_buffer *tb = port->buf.tail; 396 - *chars = tb->char_buf_ptr + tb->used; 397 - *flags = tb->flag_buf_ptr + tb->used; 397 + *chars = char_buf_ptr(tb, tb->used); 398 + *flags = flag_buf_ptr(tb, tb->used); 398 399 tb->used += space; 399 400 } 400 401 return space; ··· 402 403 EXPORT_SYMBOL_GPL(tty_prepare_flip_string_flags); 403 404 404 405 406 + static int 407 + receive_buf(struct tty_struct *tty, struct tty_buffer *head, int count) 408 + { 409 + struct tty_ldisc *disc = tty->ldisc; 410 + unsigned char *p = char_buf_ptr(head, head->read); 411 + char *f = flag_buf_ptr(head, head->read); 412 + 413 + if (disc->ops->receive_buf2) 414 + count = disc->ops->receive_buf2(tty, p, f, count); 415 + else { 416 + count = min_t(int, count, tty->receive_room); 417 + if (count) 418 + disc->ops->receive_buf(tty, p, f, count); 419 + } 420 + head->read += count; 421 + return count; 422 + } 405 423 406 424 /** 407 425 * flush_to_ldisc ··· 427 411 * This routine is 
called out of the software interrupt to flush data 428 412 * from the buffer chain to the line discipline. 429 413 * 430 - * Locking: holds tty->buf.lock to guard buffer list. Drops the lock 431 - * while invoking the line discipline receive_buf method. The 432 - * receive_buf method is single threaded for each tty instance. 414 + * The receive_buf method is single threaded for each tty instance. 415 + * 416 + * Locking: takes buffer lock to ensure single-threaded flip buffer 417 + * 'consumer' 433 418 */ 434 419 435 420 static void flush_to_ldisc(struct work_struct *work) ··· 438 421 struct tty_port *port = container_of(work, struct tty_port, buf.work); 439 422 struct tty_bufhead *buf = &port->buf; 440 423 struct tty_struct *tty; 441 - unsigned long flags; 442 424 struct tty_ldisc *disc; 443 425 444 426 tty = port->itty; ··· 445 429 return; 446 430 447 431 disc = tty_ldisc_ref(tty); 448 - if (disc == NULL) /* !TTY_LDISC */ 432 + if (disc == NULL) 449 433 return; 450 434 451 - spin_lock_irqsave(&buf->lock, flags); 435 + mutex_lock(&buf->lock); 452 436 453 - if (!test_and_set_bit(TTYP_FLUSHING, &port->iflags)) { 454 - struct tty_buffer *head; 455 - while ((head = buf->head) != NULL) { 456 - int count; 457 - char *char_buf; 458 - unsigned char *flag_buf; 437 + while (1) { 438 + struct tty_buffer *head = buf->head; 439 + int count; 459 440 460 - count = head->commit - head->read; 461 - if (!count) { 462 - if (head->next == NULL) 463 - break; 464 - buf->head = head->next; 465 - tty_buffer_free(port, head); 466 - continue; 467 - } 468 - if (!tty->receive_room) 441 + /* Ldisc or user is trying to gain exclusive access */ 442 + if (atomic_read(&buf->priority)) 443 + break; 444 + 445 + count = head->commit - head->read; 446 + if (!count) { 447 + if (head->next == NULL) 469 448 break; 470 - if (count > tty->receive_room) 471 - count = tty->receive_room; 472 - char_buf = head->char_buf_ptr + head->read; 473 - flag_buf = head->flag_buf_ptr + head->read; 474 - head->read += 
count; 475 - spin_unlock_irqrestore(&buf->lock, flags); 476 - disc->ops->receive_buf(tty, char_buf, 477 - flag_buf, count); 478 - spin_lock_irqsave(&buf->lock, flags); 479 - /* Ldisc or user is trying to flush the buffers. 480 - We may have a deferred request to flush the 481 - input buffer, if so pull the chain under the lock 482 - and empty the queue */ 483 - if (test_bit(TTYP_FLUSHPENDING, &port->iflags)) { 484 - __tty_buffer_flush(port); 485 - clear_bit(TTYP_FLUSHPENDING, &port->iflags); 486 - wake_up(&tty->read_wait); 487 - break; 488 - } 449 + buf->head = head->next; 450 + tty_buffer_free(port, head); 451 + continue; 489 452 } 490 - clear_bit(TTYP_FLUSHING, &port->iflags); 453 + 454 + count = receive_buf(tty, head, count); 455 + if (!count) 456 + break; 491 457 } 492 458 493 - spin_unlock_irqrestore(&buf->lock, flags); 459 + mutex_unlock(&buf->lock); 494 460 495 461 tty_ldisc_deref(disc); 496 462 } ··· 501 503 * 502 504 * In the event of the queue being busy for flipping the work will be 503 505 * held off and retried later. 504 - * 505 - * Locking: tty buffer lock. Driver locks in low latency mode. 506 506 */ 507 507 508 508 void tty_flip_buffer_push(struct tty_port *port) 509 509 { 510 510 struct tty_bufhead *buf = &port->buf; 511 - unsigned long flags; 512 511 513 - spin_lock_irqsave(&buf->lock, flags); 514 - if (buf->tail != NULL) 515 - buf->tail->commit = buf->tail->used; 516 - spin_unlock_irqrestore(&buf->lock, flags); 512 + buf->tail->commit = buf->tail->used; 517 513 518 514 if (port->low_latency) 519 515 flush_to_ldisc(&buf->work); ··· 522 530 * 523 531 * Set up the initial state of the buffer management for a tty device. 524 532 * Must be called before the other tty buffer functions are used. 
525 - * 526 - * Locking: none 527 533 */ 528 534 529 535 void tty_buffer_init(struct tty_port *port) 530 536 { 531 537 struct tty_bufhead *buf = &port->buf; 532 538 533 - spin_lock_init(&buf->lock); 534 - buf->head = NULL; 535 - buf->tail = NULL; 536 - buf->free = NULL; 537 - buf->memory_used = 0; 539 + mutex_init(&buf->lock); 540 + tty_buffer_reset(&buf->sentinel, 0); 541 + buf->head = &buf->sentinel; 542 + buf->tail = &buf->sentinel; 543 + init_llist_head(&buf->free); 544 + atomic_set(&buf->memory_used, 0); 545 + atomic_set(&buf->priority, 0); 538 546 INIT_WORK(&buf->work, flush_to_ldisc); 539 547 } 540 -
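The reworked tty_buffer.c drops the flip-buffer spinlock by making the buffer chain a single-producer/single-consumer structure: in `tty_buffer_request_room()` the driver side commits the old tail, issues `smp_mb()`, and only then links the new buffer via `next`, so the consumer never observes a linked buffer whose `commit` is stale. A simplified userspace sketch of that publish ordering, using a C11 fence as a stand-in for `smp_mb()` (all names here are illustrative, not the kernel's):

```c
#include <stdatomic.h>
#include <string.h>

/* Simplified stand-in for struct tty_buffer: a node in the flip chain. */
struct buf {
    struct buf *next;   /* published last, after commit is visible */
    int used;           /* bytes the producer has written */
    int commit;         /* bytes made visible to the consumer */
    int read;           /* bytes the consumer has delivered */
    char data[256];
};

/* Producer side: commit the old tail, fence, then publish the new node.
 * The fence mirrors the smp_mb() between "b->commit = b->used" and
 * "b->next = n" in tty_buffer_request_room(). */
static void publish(struct buf *tail, struct buf *n)
{
    tail->commit = tail->used;
    atomic_thread_fence(memory_order_seq_cst);
    tail->next = n;
}

/* Consumer side: only the commit..read window is safe to consume,
 * as in flush_to_ldisc(). */
static int available(const struct buf *b)
{
    return b->commit - b->read;
}

/* Single-threaded demonstration of the publish path. */
static int demo_publish(void)
{
    struct buf a, b;
    memset(&a, 0, sizeof(a));
    memset(&b, 0, sizeof(b));
    a.used = 10;
    publish(&a, &b);
    return (a.next == &b) ? available(&a) : -1;
}
```

The `sentinel` buffer in the diff serves the same simplification: `head`/`tail` are never NULL, so the old `b != NULL` conditionals disappear.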
+19 -14
drivers/tty/tty_io.c
··· 603 603 * BTM 604 604 * redirect lock for undoing redirection 605 605 * file list lock for manipulating list of ttys 606 - * tty_ldisc_lock from called functions 607 - * termios_mutex resetting termios data 606 + * tty_ldiscs_lock from called functions 607 + * termios_rwsem resetting termios data 608 608 * tasklist_lock to walk task list for hangup event 609 609 * ->siglock to protect ->signal/->sighand 610 610 */ ··· 628 628 spin_unlock(&redirect_lock); 629 629 630 630 tty_lock(tty); 631 + 632 + if (test_bit(TTY_HUPPED, &tty->flags)) { 633 + tty_unlock(tty); 634 + return; 635 + } 631 636 632 637 /* some functions below drop BTM, so we need this bit */ 633 638 set_bit(TTY_HUPPING, &tty->flags); ··· 669 664 670 665 spin_lock_irq(&tty->ctrl_lock); 671 666 clear_bit(TTY_THROTTLED, &tty->flags); 672 - clear_bit(TTY_PUSH, &tty->flags); 673 667 clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); 674 668 put_pid(tty->session); 675 669 put_pid(tty->pgrp); ··· 1392 1388 struct tty_driver *driver = tty->driver; 1393 1389 1394 1390 if (test_bit(TTY_CLOSING, &tty->flags) || 1395 - test_bit(TTY_HUPPING, &tty->flags) || 1396 - test_bit(TTY_LDISC_CHANGING, &tty->flags)) 1391 + test_bit(TTY_HUPPING, &tty->flags)) 1397 1392 return -EIO; 1398 1393 1399 1394 if (driver->type == TTY_DRIVER_TYPE_PTY && ··· 1408 1405 } 1409 1406 tty->count++; 1410 1407 1411 - WARN_ON(!test_bit(TTY_LDISC, &tty->flags)); 1408 + WARN_ON(!tty->ldisc); 1412 1409 1413 1410 return 0; 1414 1411 } ··· 2205 2202 * FIXME: does not honour flow control ?? 2206 2203 * 2207 2204 * Locking: 2208 - * Called functions take tty_ldisc_lock 2205 + * Called functions take tty_ldiscs_lock 2209 2206 * current->signal->tty check is safe without locks 2210 2207 * 2211 2208 * FIXME: may race normal receive processing ··· 2234 2231 * 2235 2232 * Copies the kernel idea of the window size into the user buffer. 
2236 2233 * 2237 - * Locking: tty->termios_mutex is taken to ensure the winsize data 2234 + * Locking: tty->winsize_mutex is taken to ensure the winsize data 2238 2235 * is consistent. 2239 2236 */ 2240 2237 ··· 2242 2239 { 2243 2240 int err; 2244 2241 2245 - mutex_lock(&tty->termios_mutex); 2242 + mutex_lock(&tty->winsize_mutex); 2246 2243 err = copy_to_user(arg, &tty->winsize, sizeof(*arg)); 2247 - mutex_unlock(&tty->termios_mutex); 2244 + mutex_unlock(&tty->winsize_mutex); 2248 2245 2249 2246 return err ? -EFAULT: 0; 2250 2247 } ··· 2265 2262 unsigned long flags; 2266 2263 2267 2264 /* Lock the tty */ 2268 - mutex_lock(&tty->termios_mutex); 2265 + mutex_lock(&tty->winsize_mutex); 2269 2266 if (!memcmp(ws, &tty->winsize, sizeof(*ws))) 2270 2267 goto done; 2271 2268 /* Get the PID values and reference them so we can ··· 2280 2277 2281 2278 tty->winsize = *ws; 2282 2279 done: 2283 - mutex_unlock(&tty->termios_mutex); 2280 + mutex_unlock(&tty->winsize_mutex); 2284 2281 return 0; 2285 2282 } 2286 2283 EXPORT_SYMBOL(tty_do_resize); ··· 3019 3016 tty->session = NULL; 3020 3017 tty->pgrp = NULL; 3021 3018 mutex_init(&tty->legacy_mutex); 3022 - mutex_init(&tty->termios_mutex); 3023 - mutex_init(&tty->ldisc_mutex); 3019 + mutex_init(&tty->throttle_mutex); 3020 + init_rwsem(&tty->termios_rwsem); 3021 + mutex_init(&tty->winsize_mutex); 3022 + init_ldsem(&tty->ldisc_sem); 3024 3023 init_waitqueue_head(&tty->write_wait); 3025 3024 init_waitqueue_head(&tty->read_wait); 3026 3025 INIT_WORK(&tty->hangup_work, do_tty_hangup);
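The `TTY_HUPPED` check added at the top of the hangup path makes re-entry a no-op: once a hangup has completed, a second caller unlocks and returns instead of tearing down twice. A hedged sketch of that idempotent-guard shape with a C11 atomic flag (the struct and function names are hypothetical, not the kernel's):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-device state mirroring the TTY_HUPPED early return
 * added to the hangup path in tty_io.c. */
struct dev {
    atomic_bool hupped;
    int hangup_count;
};

/* Returns true if this call performed the hangup, false if it had
 * already been done (the early-return path in the diff). */
static bool do_hangup(struct dev *d)
{
    if (atomic_load(&d->hupped))
        return false;            /* test_bit(TTY_HUPPED) short-circuit */
    d->hangup_count++;           /* ... teardown work would go here ... */
    atomic_store(&d->hupped, true);
    return true;
}

/* Demonstration: the second call must be a no-op. */
static int demo_hangup(void)
{
    struct dev d = { .hangup_count = 0 };
    atomic_init(&d.hupped, false);
    do_hangup(&d);
    do_hangup(&d);
    return d.hangup_count;
}
```

In the kernel the real guard sits under `tty_lock()`, so a plain `test_bit` suffices there; the atomic here only keeps the sketch self-contained.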
+45 -45
drivers/tty/tty_ioctl.c
··· 94 94 * @tty: terminal 95 95 * 96 96 * Indicate that a tty should stop transmitting data down the stack. 97 - * Takes the termios mutex to protect against parallel throttle/unthrottle 97 + * Takes the termios rwsem to protect against parallel throttle/unthrottle 98 98 * and also to ensure the driver can consistently reference its own 99 99 * termios data at this point when implementing software flow control. 100 100 */ 101 101 102 102 void tty_throttle(struct tty_struct *tty) 103 103 { 104 - mutex_lock(&tty->termios_mutex); 104 + down_write(&tty->termios_rwsem); 105 105 /* check TTY_THROTTLED first so it indicates our state */ 106 106 if (!test_and_set_bit(TTY_THROTTLED, &tty->flags) && 107 107 tty->ops->throttle) 108 108 tty->ops->throttle(tty); 109 109 tty->flow_change = 0; 110 - mutex_unlock(&tty->termios_mutex); 110 + up_write(&tty->termios_rwsem); 111 111 } 112 112 EXPORT_SYMBOL(tty_throttle); 113 113 ··· 116 116 * @tty: terminal 117 117 * 118 118 * Indicate that a tty may continue transmitting data down the stack. 119 - * Takes the termios mutex to protect against parallel throttle/unthrottle 119 + * Takes the termios rwsem to protect against parallel throttle/unthrottle 120 120 * and also to ensure the driver can consistently reference its own 121 121 * termios data at this point when implementing software flow control. 
122 122 * ··· 126 126 127 127 void tty_unthrottle(struct tty_struct *tty) 128 128 { 129 - mutex_lock(&tty->termios_mutex); 129 + down_write(&tty->termios_rwsem); 130 130 if (test_and_clear_bit(TTY_THROTTLED, &tty->flags) && 131 131 tty->ops->unthrottle) 132 132 tty->ops->unthrottle(tty); 133 133 tty->flow_change = 0; 134 - mutex_unlock(&tty->termios_mutex); 134 + up_write(&tty->termios_rwsem); 135 135 } 136 136 EXPORT_SYMBOL(tty_unthrottle); 137 137 ··· 151 151 { 152 152 int ret = 0; 153 153 154 - mutex_lock(&tty->termios_mutex); 154 + mutex_lock(&tty->throttle_mutex); 155 155 if (!test_bit(TTY_THROTTLED, &tty->flags)) { 156 156 if (tty->flow_change != TTY_THROTTLE_SAFE) 157 157 ret = 1; ··· 161 161 tty->ops->throttle(tty); 162 162 } 163 163 } 164 - mutex_unlock(&tty->termios_mutex); 164 + mutex_unlock(&tty->throttle_mutex); 165 165 166 166 return ret; 167 167 } ··· 182 182 { 183 183 int ret = 0; 184 184 185 - mutex_lock(&tty->termios_mutex); 185 + mutex_lock(&tty->throttle_mutex); 186 186 if (test_bit(TTY_THROTTLED, &tty->flags)) { 187 187 if (tty->flow_change != TTY_UNTHROTTLE_SAFE) 188 188 ret = 1; ··· 192 192 tty->ops->unthrottle(tty); 193 193 } 194 194 } 195 - mutex_unlock(&tty->termios_mutex); 195 + mutex_unlock(&tty->throttle_mutex); 196 196 197 197 return ret; 198 198 } ··· 468 468 * @obad: output baud rate 469 469 * 470 470 * Update the current termios data for the tty with the new speed 471 - * settings. The caller must hold the termios_mutex for the tty in 471 + * settings. The caller must hold the termios_rwsem for the tty in 472 472 * question. 473 473 */ 474 474 ··· 528 528 * is a bit of layering violation here with n_tty in terms of the 529 529 * internal knowledge of this function. 
530 530 * 531 - * Locking: termios_mutex 531 + * Locking: termios_rwsem 532 532 */ 533 533 534 534 int tty_set_termios(struct tty_struct *tty, struct ktermios *new_termios) ··· 544 544 545 545 /* FIXME: we need to decide on some locking/ordering semantics 546 546 for the set_termios notification eventually */ 547 - mutex_lock(&tty->termios_mutex); 547 + down_write(&tty->termios_rwsem); 548 548 old_termios = tty->termios; 549 549 tty->termios = *new_termios; 550 550 unset_locked_termios(&tty->termios, &old_termios, &tty->termios_locked); ··· 586 586 (ld->ops->set_termios)(tty, &old_termios); 587 587 tty_ldisc_deref(ld); 588 588 } 589 - mutex_unlock(&tty->termios_mutex); 589 + up_write(&tty->termios_rwsem); 590 590 return 0; 591 591 } 592 592 EXPORT_SYMBOL_GPL(tty_set_termios); ··· 601 601 * functions before using tty_set_termios to do the actual changes. 602 602 * 603 603 * Locking: 604 - * Called functions take ldisc and termios_mutex locks 604 + * Called functions take ldisc and termios_rwsem locks 605 605 */ 606 606 607 607 static int set_termios(struct tty_struct *tty, void __user *arg, int opt) ··· 613 613 if (retval) 614 614 return retval; 615 615 616 - mutex_lock(&tty->termios_mutex); 616 + down_read(&tty->termios_rwsem); 617 617 tmp_termios = tty->termios; 618 - mutex_unlock(&tty->termios_mutex); 618 + up_read(&tty->termios_rwsem); 619 619 620 620 if (opt & TERMIOS_TERMIO) { 621 621 if (user_termio_to_kernel_termios(&tmp_termios, ··· 667 667 668 668 static void copy_termios(struct tty_struct *tty, struct ktermios *kterm) 669 669 { 670 - mutex_lock(&tty->termios_mutex); 670 + down_read(&tty->termios_rwsem); 671 671 *kterm = tty->termios; 672 - mutex_unlock(&tty->termios_mutex); 672 + up_read(&tty->termios_rwsem); 673 673 } 674 674 675 675 static void copy_termios_locked(struct tty_struct *tty, struct ktermios *kterm) 676 676 { 677 - mutex_lock(&tty->termios_mutex); 677 + down_read(&tty->termios_rwsem); 678 678 *kterm = tty->termios_locked; 679 - 
mutex_unlock(&tty->termios_mutex); 679 + up_read(&tty->termios_rwsem); 680 680 } 681 681 682 682 static int get_termio(struct tty_struct *tty, struct termio __user *termio) ··· 723 723 return -ERESTARTSYS; 724 724 } 725 725 726 - mutex_lock(&tty->termios_mutex); 726 + down_write(&tty->termios_rwsem); 727 727 if (tty->ops->set_termiox) 728 728 tty->ops->set_termiox(tty, &tnew); 729 - mutex_unlock(&tty->termios_mutex); 729 + up_write(&tty->termios_rwsem); 730 730 return 0; 731 731 } 732 732 ··· 761 761 { 762 762 struct sgttyb tmp; 763 763 764 - mutex_lock(&tty->termios_mutex); 764 + down_read(&tty->termios_rwsem); 765 765 tmp.sg_ispeed = tty->termios.c_ispeed; 766 766 tmp.sg_ospeed = tty->termios.c_ospeed; 767 767 tmp.sg_erase = tty->termios.c_cc[VERASE]; 768 768 tmp.sg_kill = tty->termios.c_cc[VKILL]; 769 769 tmp.sg_flags = get_sgflags(tty); 770 - mutex_unlock(&tty->termios_mutex); 770 + up_read(&tty->termios_rwsem); 771 771 772 772 return copy_to_user(sgttyb, &tmp, sizeof(tmp)) ? -EFAULT : 0; 773 773 } ··· 806 806 * Updates a terminal from the legacy BSD style terminal information 807 807 * structure. 
808 808 * 809 - * Locking: termios_mutex 809 + * Locking: termios_rwsem 810 810 */ 811 811 812 812 static int set_sgttyb(struct tty_struct *tty, struct sgttyb __user *sgttyb) ··· 822 822 if (copy_from_user(&tmp, sgttyb, sizeof(tmp))) 823 823 return -EFAULT; 824 824 825 - mutex_lock(&tty->termios_mutex); 825 + down_write(&tty->termios_rwsem); 826 826 termios = tty->termios; 827 827 termios.c_cc[VERASE] = tmp.sg_erase; 828 828 termios.c_cc[VKILL] = tmp.sg_kill; ··· 832 832 tty_termios_encode_baud_rate(&termios, termios.c_ispeed, 833 833 termios.c_ospeed); 834 834 #endif 835 - mutex_unlock(&tty->termios_mutex); 835 + up_write(&tty->termios_rwsem); 836 836 tty_set_termios(tty, &termios); 837 837 return 0; 838 838 } ··· 843 843 { 844 844 struct tchars tmp; 845 845 846 - mutex_lock(&tty->termios_mutex); 846 + down_read(&tty->termios_rwsem); 847 847 tmp.t_intrc = tty->termios.c_cc[VINTR]; 848 848 tmp.t_quitc = tty->termios.c_cc[VQUIT]; 849 849 tmp.t_startc = tty->termios.c_cc[VSTART]; 850 850 tmp.t_stopc = tty->termios.c_cc[VSTOP]; 851 851 tmp.t_eofc = tty->termios.c_cc[VEOF]; 852 852 tmp.t_brkc = tty->termios.c_cc[VEOL2]; /* what is brkc anyway? */ 853 - mutex_unlock(&tty->termios_mutex); 853 + up_read(&tty->termios_rwsem); 854 854 return copy_to_user(tchars, &tmp, sizeof(tmp)) ? -EFAULT : 0; 855 855 } 856 856 ··· 860 860 861 861 if (copy_from_user(&tmp, tchars, sizeof(tmp))) 862 862 return -EFAULT; 863 - mutex_lock(&tty->termios_mutex); 863 + down_write(&tty->termios_rwsem); 864 864 tty->termios.c_cc[VINTR] = tmp.t_intrc; 865 865 tty->termios.c_cc[VQUIT] = tmp.t_quitc; 866 866 tty->termios.c_cc[VSTART] = tmp.t_startc; 867 867 tty->termios.c_cc[VSTOP] = tmp.t_stopc; 868 868 tty->termios.c_cc[VEOF] = tmp.t_eofc; 869 869 tty->termios.c_cc[VEOL2] = tmp.t_brkc; /* what is brkc anyway? 
*/ 870 - mutex_unlock(&tty->termios_mutex); 870 + up_write(&tty->termios_rwsem); 871 871 return 0; 872 872 } 873 873 #endif ··· 877 877 { 878 878 struct ltchars tmp; 879 879 880 - mutex_lock(&tty->termios_mutex); 880 + down_read(&tty->termios_rwsem); 881 881 tmp.t_suspc = tty->termios.c_cc[VSUSP]; 882 882 /* what is dsuspc anyway? */ 883 883 tmp.t_dsuspc = tty->termios.c_cc[VSUSP]; ··· 886 886 tmp.t_flushc = tty->termios.c_cc[VEOL2]; 887 887 tmp.t_werasc = tty->termios.c_cc[VWERASE]; 888 888 tmp.t_lnextc = tty->termios.c_cc[VLNEXT]; 889 - mutex_unlock(&tty->termios_mutex); 889 + up_read(&tty->termios_rwsem); 890 890 return copy_to_user(ltchars, &tmp, sizeof(tmp)) ? -EFAULT : 0; 891 891 } 892 892 ··· 897 897 if (copy_from_user(&tmp, ltchars, sizeof(tmp))) 898 898 return -EFAULT; 899 899 900 - mutex_lock(&tty->termios_mutex); 900 + down_write(&tty->termios_rwsem); 901 901 tty->termios.c_cc[VSUSP] = tmp.t_suspc; 902 902 /* what is dsuspc anyway? */ 903 903 tty->termios.c_cc[VEOL2] = tmp.t_dsuspc; ··· 906 906 tty->termios.c_cc[VEOL2] = tmp.t_flushc; 907 907 tty->termios.c_cc[VWERASE] = tmp.t_werasc; 908 908 tty->termios.c_cc[VLNEXT] = tmp.t_lnextc; 909 - mutex_unlock(&tty->termios_mutex); 909 + up_write(&tty->termios_rwsem); 910 910 return 0; 911 911 } 912 912 #endif ··· 946 946 * @arg: enable/disable CLOCAL 947 947 * 948 948 * Perform a change to the CLOCAL state and call into the driver 949 - * layer to make it visible. All done with the termios mutex 949 + * layer to make it visible. All done with the termios rwsem 950 950 */ 951 951 952 952 static int tty_change_softcar(struct tty_struct *tty, int arg) ··· 955 955 int bit = arg ? 
CLOCAL : 0; 956 956 struct ktermios old; 957 957 958 - mutex_lock(&tty->termios_mutex); 958 + down_write(&tty->termios_rwsem); 959 959 old = tty->termios; 960 960 tty->termios.c_cflag &= ~CLOCAL; 961 961 tty->termios.c_cflag |= bit; ··· 963 963 tty->ops->set_termios(tty, &old); 964 964 if ((tty->termios.c_cflag & CLOCAL) != bit) 965 965 ret = -EINVAL; 966 - mutex_unlock(&tty->termios_mutex); 966 + up_write(&tty->termios_rwsem); 967 967 return ret; 968 968 } 969 969 ··· 1066 1066 if (user_termios_to_kernel_termios(&kterm, 1067 1067 (struct termios __user *) arg)) 1068 1068 return -EFAULT; 1069 - mutex_lock(&real_tty->termios_mutex); 1069 + down_write(&real_tty->termios_rwsem); 1070 1070 real_tty->termios_locked = kterm; 1071 - mutex_unlock(&real_tty->termios_mutex); 1071 + up_write(&real_tty->termios_rwsem); 1072 1072 return 0; 1073 1073 #else 1074 1074 case TIOCGLCKTRMIOS: ··· 1083 1083 if (user_termios_to_kernel_termios_1(&kterm, 1084 1084 (struct termios __user *) arg)) 1085 1085 return -EFAULT; 1086 - mutex_lock(&real_tty->termios_mutex); 1086 + down_write(&real_tty->termios_rwsem); 1087 1087 real_tty->termios_locked = kterm; 1088 - mutex_unlock(&real_tty->termios_mutex); 1088 + up_write(&real_tty->termios_rwsem); 1089 1089 return ret; 1090 1090 #endif 1091 1091 #ifdef TCGETX ··· 1093 1093 struct termiox ktermx; 1094 1094 if (real_tty->termiox == NULL) 1095 1095 return -EINVAL; 1096 - mutex_lock(&real_tty->termios_mutex); 1096 + down_read(&real_tty->termios_rwsem); 1097 1097 memcpy(&ktermx, real_tty->termiox, sizeof(struct termiox)); 1098 - mutex_unlock(&real_tty->termios_mutex); 1098 + up_read(&real_tty->termios_rwsem); 1099 1099 if (copy_to_user(p, &ktermx, sizeof(struct termiox))) 1100 1100 ret = -EFAULT; 1101 1101 return ret;
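The tty_ioctl.c hunks above systematically replace `termios_mutex` with `termios_rwsem`: pure readers such as `get_tchars()` and `get_sgttyb()` now take `down_read()` and can run concurrently, while mutators such as `set_sgttyb()` and `tty_change_softcar()` take `down_write()` and stay exclusive. A minimal single-threaded toy model of that admission rule (illustration only — the kernel's `rw_semaphore` is a sleeping lock, and all names below are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of an rwsem's admission rule: readers may overlap,
 * a writer excludes everyone (including other writers). */
struct toy_rwsem {
    int readers;        /* active readers */
    bool writer;        /* active writer? */
};

static bool toy_down_read_trylock(struct toy_rwsem *s)
{
    if (s->writer)
        return false;   /* an active writer excludes all readers */
    s->readers++;
    return true;
}

static void toy_up_read(struct toy_rwsem *s)
{
    s->readers--;
}

static bool toy_down_write_trylock(struct toy_rwsem *s)
{
    if (s->writer || s->readers)
        return false;   /* a writer needs the semaphore to itself */
    s->writer = true;
    return true;
}

static void toy_up_write(struct toy_rwsem *s)
{
    s->writer = false;
}
```

The payoff in the diff is exactly the reader-overlap case: multiple processes issuing TCGETS-style queries no longer serialize on a single mutex, while termios updates keep the old exclusive semantics.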
+156 -309
drivers/tty/tty_ldisc.c
··· 31 31 #define tty_ldisc_debug(tty, f, args...) 32 32 #endif 33 33 34 + /* lockdep nested classes for tty->ldisc_sem */ 35 + enum { 36 + LDISC_SEM_NORMAL, 37 + LDISC_SEM_OTHER, 38 + }; 39 + 40 + 34 41 /* 35 42 * This guards the refcounted line discipline lists. The lock 36 43 * must be taken with irqs off because there are hangup path 37 44 * callers who will do ldisc lookups and cannot sleep. 38 45 */ 39 46 40 - static DEFINE_RAW_SPINLOCK(tty_ldisc_lock); 41 - static DECLARE_WAIT_QUEUE_HEAD(tty_ldisc_wait); 47 + static DEFINE_RAW_SPINLOCK(tty_ldiscs_lock); 42 48 /* Line disc dispatch table */ 43 49 static struct tty_ldisc_ops *tty_ldiscs[NR_LDISCS]; 44 50 ··· 58 52 * from this point onwards. 59 53 * 60 54 * Locking: 61 - * takes tty_ldisc_lock to guard against ldisc races 55 + * takes tty_ldiscs_lock to guard against ldisc races 62 56 */ 63 57 64 58 int tty_register_ldisc(int disc, struct tty_ldisc_ops *new_ldisc) ··· 69 63 if (disc < N_TTY || disc >= NR_LDISCS) 70 64 return -EINVAL; 71 65 72 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 66 + raw_spin_lock_irqsave(&tty_ldiscs_lock, flags); 73 67 tty_ldiscs[disc] = new_ldisc; 74 68 new_ldisc->num = disc; 75 69 new_ldisc->refcount = 0; 76 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 70 + raw_spin_unlock_irqrestore(&tty_ldiscs_lock, flags); 77 71 78 72 return ret; 79 73 } ··· 88 82 * currently in use. 
89 83 * 90 84 * Locking: 91 - * takes tty_ldisc_lock to guard against ldisc races 85 + * takes tty_ldiscs_lock to guard against ldisc races 92 86 */ 93 87 94 88 int tty_unregister_ldisc(int disc) ··· 99 93 if (disc < N_TTY || disc >= NR_LDISCS) 100 94 return -EINVAL; 101 95 102 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 96 + raw_spin_lock_irqsave(&tty_ldiscs_lock, flags); 103 97 if (tty_ldiscs[disc]->refcount) 104 98 ret = -EBUSY; 105 99 else 106 100 tty_ldiscs[disc] = NULL; 107 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 101 + raw_spin_unlock_irqrestore(&tty_ldiscs_lock, flags); 108 102 109 103 return ret; 110 104 } ··· 115 109 unsigned long flags; 116 110 struct tty_ldisc_ops *ldops, *ret; 117 111 118 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 112 + raw_spin_lock_irqsave(&tty_ldiscs_lock, flags); 119 113 ret = ERR_PTR(-EINVAL); 120 114 ldops = tty_ldiscs[disc]; 121 115 if (ldops) { ··· 125 119 ret = ldops; 126 120 } 127 121 } 128 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 122 + raw_spin_unlock_irqrestore(&tty_ldiscs_lock, flags); 129 123 return ret; 130 124 } 131 125 ··· 133 127 { 134 128 unsigned long flags; 135 129 136 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 130 + raw_spin_lock_irqsave(&tty_ldiscs_lock, flags); 137 131 ldops->refcount--; 138 132 module_put(ldops->owner); 139 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 133 + raw_spin_unlock_irqrestore(&tty_ldiscs_lock, flags); 140 134 } 141 135 142 136 /** ··· 149 143 * available 150 144 * 151 145 * Locking: 152 - * takes tty_ldisc_lock to guard against ldisc races 146 + * takes tty_ldiscs_lock to guard against ldisc races 153 147 */ 154 148 155 - static struct tty_ldisc *tty_ldisc_get(int disc) 149 + static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc) 156 150 { 157 151 struct tty_ldisc *ld; 158 152 struct tty_ldisc_ops *ldops; ··· 179 173 } 180 174 181 175 ld->ops = ldops; 182 - atomic_set(&ld->users, 1); 183 - 
init_waitqueue_head(&ld->wq_idle); 176 + ld->tty = tty; 184 177 185 178 return ld; 186 179 } ··· 191 186 */ 192 187 static inline void tty_ldisc_put(struct tty_ldisc *ld) 193 188 { 194 - unsigned long flags; 195 - 196 189 if (WARN_ON_ONCE(!ld)) 197 190 return; 198 191 199 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 200 - 201 - /* unreleased reader reference(s) will cause this WARN */ 202 - WARN_ON(!atomic_dec_and_test(&ld->users)); 203 - 204 - ld->ops->refcount--; 205 - module_put(ld->ops->owner); 192 + put_ldops(ld->ops); 206 193 kfree(ld); 207 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 208 194 } 209 195 210 196 static void *tty_ldiscs_seq_start(struct seq_file *m, loff_t *pos) ··· 247 251 }; 248 252 249 253 /** 250 - * tty_ldisc_try - internal helper 251 - * @tty: the tty 252 - * 253 - * Make a single attempt to grab and bump the refcount on 254 - * the tty ldisc. Return 0 on failure or 1 on success. This is 255 - * used to implement both the waiting and non waiting versions 256 - * of tty_ldisc_ref 257 - * 258 - * Locking: takes tty_ldisc_lock 259 - */ 260 - 261 - static struct tty_ldisc *tty_ldisc_try(struct tty_struct *tty) 262 - { 263 - unsigned long flags; 264 - struct tty_ldisc *ld; 265 - 266 - /* FIXME: this allows reference acquire after TTY_LDISC is cleared */ 267 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 268 - ld = NULL; 269 - if (test_bit(TTY_LDISC, &tty->flags) && tty->ldisc) { 270 - ld = tty->ldisc; 271 - atomic_inc(&ld->users); 272 - } 273 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 274 - return ld; 275 - } 276 - 277 - /** 278 254 * tty_ldisc_ref_wait - wait for the tty ldisc 279 255 * @tty: tty device 280 256 * ··· 259 291 * against a discipline change, such as an existing ldisc reference 260 292 * (which we check for) 261 293 * 262 - * Locking: call functions take tty_ldisc_lock 294 + * Note: only callable from a file_operations routine (which 295 + * guarantees tty->ldisc != NULL when the lock is acquired). 
263 296 */ 264 297 265 298 struct tty_ldisc *tty_ldisc_ref_wait(struct tty_struct *tty) 266 299 { 267 - struct tty_ldisc *ld; 268 - 269 - /* wait_event is a macro */ 270 - wait_event(tty_ldisc_wait, (ld = tty_ldisc_try(tty)) != NULL); 271 - return ld; 300 + ldsem_down_read(&tty->ldisc_sem, MAX_SCHEDULE_TIMEOUT); 301 + WARN_ON(!tty->ldisc); 302 + return tty->ldisc; 272 303 } 273 304 EXPORT_SYMBOL_GPL(tty_ldisc_ref_wait); 274 305 ··· 278 311 * Dereference the line discipline for the terminal and take a 279 312 * reference to it. If the line discipline is in flux then 280 313 * return NULL. Can be called from IRQ and timer functions. 281 - * 282 - * Locking: called functions take tty_ldisc_lock 283 314 */ 284 315 285 316 struct tty_ldisc *tty_ldisc_ref(struct tty_struct *tty) 286 317 { 287 - return tty_ldisc_try(tty); 318 + struct tty_ldisc *ld = NULL; 319 + 320 + if (ldsem_down_read_trylock(&tty->ldisc_sem)) { 321 + ld = tty->ldisc; 322 + if (!ld) 323 + ldsem_up_read(&tty->ldisc_sem); 324 + } 325 + return ld; 288 326 } 289 327 EXPORT_SYMBOL_GPL(tty_ldisc_ref); 290 328 ··· 299 327 * 300 328 * Undoes the effect of tty_ldisc_ref or tty_ldisc_ref_wait. May 301 329 * be called in IRQ context. 
302 - * 303 - * Locking: takes tty_ldisc_lock 304 330 */ 305 331 306 332 void tty_ldisc_deref(struct tty_ldisc *ld) 307 333 { 308 - unsigned long flags; 309 - 310 - if (WARN_ON_ONCE(!ld)) 311 - return; 312 - 313 - raw_spin_lock_irqsave(&tty_ldisc_lock, flags); 314 - /* 315 - * WARNs if one-too-many reader references were released 316 - * - the last reference must be released with tty_ldisc_put 317 - */ 318 - WARN_ON(atomic_dec_and_test(&ld->users)); 319 - raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags); 320 - 321 - if (waitqueue_active(&ld->wq_idle)) 322 - wake_up(&ld->wq_idle); 334 + ldsem_up_read(&ld->tty->ldisc_sem); 323 335 } 324 336 EXPORT_SYMBOL_GPL(tty_ldisc_deref); 325 337 326 - /** 327 - * tty_ldisc_enable - allow ldisc use 328 - * @tty: terminal to activate ldisc on 329 - * 330 - * Set the TTY_LDISC flag when the line discipline can be called 331 - * again. Do necessary wakeups for existing sleepers. Clear the LDISC 332 - * changing flag to indicate any ldisc change is now over. 333 - * 334 - * Note: nobody should set the TTY_LDISC bit except via this function. 335 - * Clearing directly is allowed. 
336 - */ 337 338 338 - static void tty_ldisc_enable(struct tty_struct *tty) 339 + static inline int __lockfunc 340 + tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout) 341 + { 342 + return ldsem_down_write(&tty->ldisc_sem, timeout); 343 + } 344 + 345 + static inline int __lockfunc 346 + tty_ldisc_lock_nested(struct tty_struct *tty, unsigned long timeout) 347 + { 348 + return ldsem_down_write_nested(&tty->ldisc_sem, 349 + LDISC_SEM_OTHER, timeout); 350 + } 351 + 352 + static inline void tty_ldisc_unlock(struct tty_struct *tty) 353 + { 354 + return ldsem_up_write(&tty->ldisc_sem); 355 + } 356 + 357 + static int __lockfunc 358 + tty_ldisc_lock_pair_timeout(struct tty_struct *tty, struct tty_struct *tty2, 359 + unsigned long timeout) 360 + { 361 + int ret; 362 + 363 + if (tty < tty2) { 364 + ret = tty_ldisc_lock(tty, timeout); 365 + if (ret) { 366 + ret = tty_ldisc_lock_nested(tty2, timeout); 367 + if (!ret) 368 + tty_ldisc_unlock(tty); 369 + } 370 + } else { 371 + /* if this is possible, it has lots of implications */ 372 + WARN_ON_ONCE(tty == tty2); 373 + if (tty2 && tty != tty2) { 374 + ret = tty_ldisc_lock(tty2, timeout); 375 + if (ret) { 376 + ret = tty_ldisc_lock_nested(tty, timeout); 377 + if (!ret) 378 + tty_ldisc_unlock(tty2); 379 + } 380 + } else 381 + ret = tty_ldisc_lock(tty, timeout); 382 + } 383 + 384 + if (!ret) 385 + return -EBUSY; 386 + 387 + set_bit(TTY_LDISC_HALTED, &tty->flags); 388 + if (tty2) 389 + set_bit(TTY_LDISC_HALTED, &tty2->flags); 390 + return 0; 391 + } 392 + 393 + static void __lockfunc 394 + tty_ldisc_lock_pair(struct tty_struct *tty, struct tty_struct *tty2) 395 + { 396 + tty_ldisc_lock_pair_timeout(tty, tty2, MAX_SCHEDULE_TIMEOUT); 397 + } 398 + 399 + static void __lockfunc tty_ldisc_unlock_pair(struct tty_struct *tty, 400 + struct tty_struct *tty2) 401 + { 402 + tty_ldisc_unlock(tty); 403 + if (tty2) 404 + tty_ldisc_unlock(tty2); 405 + } 406 + 407 + static void __lockfunc tty_ldisc_enable_pair(struct tty_struct *tty, 408 + 
struct tty_struct *tty2) 339 409 { 340 410 clear_bit(TTY_LDISC_HALTED, &tty->flags); 341 - set_bit(TTY_LDISC, &tty->flags); 342 - clear_bit(TTY_LDISC_CHANGING, &tty->flags); 343 - wake_up(&tty_ldisc_wait); 411 + if (tty2) 412 + clear_bit(TTY_LDISC_HALTED, &tty2->flags); 413 + 414 + tty_ldisc_unlock_pair(tty, tty2); 344 415 } 345 416 346 417 /** ··· 415 400 * they are not on hot paths so a little discipline won't do 416 401 * any harm. 417 402 * 418 - * Locking: takes termios_mutex 403 + * Locking: takes termios_rwsem 419 404 */ 420 405 421 406 static void tty_set_termios_ldisc(struct tty_struct *tty, int num) 422 407 { 423 - mutex_lock(&tty->termios_mutex); 408 + down_write(&tty->termios_rwsem); 424 409 tty->termios.c_line = num; 425 - mutex_unlock(&tty->termios_mutex); 410 + up_write(&tty->termios_rwsem); 426 411 } 427 412 428 413 /** ··· 483 468 int r; 484 469 485 470 /* There is an outstanding reference here so this is safe */ 486 - old = tty_ldisc_get(old->ops->num); 471 + old = tty_ldisc_get(tty, old->ops->num); 487 472 WARN_ON(IS_ERR(old)); 488 473 tty->ldisc = old; 489 474 tty_set_termios_ldisc(tty, old->ops->num); 490 475 if (tty_ldisc_open(tty, old) < 0) { 491 476 tty_ldisc_put(old); 492 477 /* This driver is always present */ 493 - new_ldisc = tty_ldisc_get(N_TTY); 478 + new_ldisc = tty_ldisc_get(tty, N_TTY); 494 479 if (IS_ERR(new_ldisc)) 495 480 panic("n_tty: get"); 496 481 tty->ldisc = new_ldisc; ··· 504 489 } 505 490 506 491 /** 507 - * tty_ldisc_wait_idle - wait for the ldisc to become idle 508 - * @tty: tty to wait for 509 - * @timeout: for how long to wait at most 510 - * 511 - * Wait for the line discipline to become idle. The discipline must 512 - * have been halted for this to guarantee it remains idle. 513 - */ 514 - static int tty_ldisc_wait_idle(struct tty_struct *tty, long timeout) 515 - { 516 - long ret; 517 - ret = wait_event_timeout(tty->ldisc->wq_idle, 518 - atomic_read(&tty->ldisc->users) == 1, timeout); 519 - return ret > 0 ? 
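`tty_ldisc_lock_pair_timeout()` above defends against ABBA deadlock on pty pairs by always acquiring the lower-addressed tty's `ldisc_sem` first (with a lockdep "nested" class, `LDISC_SEM_OTHER`, for the second acquisition). The ordering idea can be sketched outside the kernel; types and names here are hypothetical:

```c
#include <assert.h>

/* Sketch of address-ordered pair locking. Real code would use a true
 * lock; a flag is enough to show the ordering decision. */
struct obj {
    int locked;
};

static void lock_obj(struct obj *o)   { o->locked = 1; }
static void unlock_obj(struct obj *o) { o->locked = 0; }

/* Always take the lower-addressed object first, so two tasks locking
 * the same pair as (a, b) and as (b, a) agree on acquisition order
 * and cannot deadlock ABBA-style. Returns the object locked first. */
static struct obj *lock_pair(struct obj *a, struct obj *b)
{
    struct obj *first  = (a < b) ? a : b;
    struct obj *second = (a < b) ? b : a;

    lock_obj(first);
    lock_obj(second);
    return first;
}
```

Note the pointer comparison is well-defined here only for objects in the same array or allocation; the kernel relies on the equivalent comparison of tty_struct addresses being a stable total order in practice.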
0 : -EBUSY; 520 - } 521 - 522 - /** 523 - * tty_ldisc_halt - shut down the line discipline 524 - * @tty: tty device 525 - * @o_tty: paired pty device (can be NULL) 526 - * @timeout: # of jiffies to wait for ldisc refs to be released 527 - * 528 - * Shut down the line discipline and work queue for this tty device and 529 - * its paired pty (if exists). Clearing the TTY_LDISC flag ensures 530 - * no further references can be obtained, while waiting for existing 531 - * references to be released ensures no more data is fed to the ldisc. 532 - * 533 - * You need to do a 'flush_scheduled_work()' (outside the ldisc_mutex) 534 - * in order to make sure any currently executing ldisc work is also 535 - * flushed. 536 - */ 537 - 538 - static int tty_ldisc_halt(struct tty_struct *tty, struct tty_struct *o_tty, 539 - long timeout) 540 - { 541 - int retval; 542 - 543 - clear_bit(TTY_LDISC, &tty->flags); 544 - if (o_tty) 545 - clear_bit(TTY_LDISC, &o_tty->flags); 546 - 547 - retval = tty_ldisc_wait_idle(tty, timeout); 548 - if (!retval && o_tty) 549 - retval = tty_ldisc_wait_idle(o_tty, timeout); 550 - if (retval) 551 - return retval; 552 - 553 - set_bit(TTY_LDISC_HALTED, &tty->flags); 554 - if (o_tty) 555 - set_bit(TTY_LDISC_HALTED, &o_tty->flags); 556 - 557 - return 0; 558 - } 559 - 560 - /** 561 - * tty_ldisc_hangup_halt - halt the line discipline for hangup 562 - * @tty: tty being hung up 563 - * 564 - * Shut down the line discipline and work queue for the tty device 565 - * being hungup. Clear the TTY_LDISC flag to ensure no further 566 - * references can be obtained and wait for remaining references to be 567 - * released to ensure no more data is fed to this ldisc. 568 - * Caller must hold legacy and ->ldisc_mutex. 569 - * 570 - * NB: tty_set_ldisc() is prevented from changing the ldisc concurrently 571 - * with this function by checking the TTY_HUPPING flag. 
572 - */ 573 - static bool tty_ldisc_hangup_halt(struct tty_struct *tty) 574 - { 575 - char cur_n[TASK_COMM_LEN], tty_n[64]; 576 - long timeout = 3 * HZ; 577 - 578 - clear_bit(TTY_LDISC, &tty->flags); 579 - 580 - if (tty->ldisc) { /* Not yet closed */ 581 - tty_unlock(tty); 582 - 583 - while (tty_ldisc_wait_idle(tty, timeout) == -EBUSY) { 584 - timeout = MAX_SCHEDULE_TIMEOUT; 585 - printk_ratelimited(KERN_WARNING 586 - "%s: waiting (%s) for %s took too long, but we keep waiting...\n", 587 - __func__, get_task_comm(cur_n, current), 588 - tty_name(tty, tty_n)); 589 - } 590 - 591 - set_bit(TTY_LDISC_HALTED, &tty->flags); 592 - 593 - /* must reacquire both locks and preserve lock order */ 594 - mutex_unlock(&tty->ldisc_mutex); 595 - tty_lock(tty); 596 - mutex_lock(&tty->ldisc_mutex); 597 - } 598 - return !!tty->ldisc; 599 - } 600 - 601 - /** 602 492 * tty_set_ldisc - set line discipline 603 493 * @tty: the terminal to set 604 494 * @ldisc: the line discipline ··· 512 592 * context. The ldisc change logic has to protect itself against any 513 593 * overlapping ldisc change (including on the other end of pty pairs), 514 594 * the close of one side of a tty/pty pair, and eventually hangup. 515 - * 516 - * Locking: takes tty_ldisc_lock, termios_mutex 517 595 */ 518 596 519 597 int tty_set_ldisc(struct tty_struct *tty, int ldisc) 520 598 { 521 599 int retval; 522 - struct tty_ldisc *o_ldisc, *new_ldisc; 523 - struct tty_struct *o_tty; 600 + struct tty_ldisc *old_ldisc, *new_ldisc; 601 + struct tty_struct *o_tty = tty->link; 524 602 525 - new_ldisc = tty_ldisc_get(ldisc); 603 + new_ldisc = tty_ldisc_get(tty, ldisc); 526 604 if (IS_ERR(new_ldisc)) 527 605 return PTR_ERR(new_ldisc); 528 606 529 - tty_lock(tty); 530 - /* 531 - * We need to look at the tty locking here for pty/tty pairs 532 - * when both sides try to change in parallel. 
533 - */ 534 - 535 - o_tty = tty->link; /* o_tty is the pty side or NULL */ 536 - 607 + retval = tty_ldisc_lock_pair_timeout(tty, o_tty, 5 * HZ); 608 + if (retval) { 609 + tty_ldisc_put(new_ldisc); 610 + return retval; 611 + } 537 612 538 613 /* 539 614 * Check the no-op case 540 615 */ 541 616 542 617 if (tty->ldisc->ops->num == ldisc) { 543 - tty_unlock(tty); 618 + tty_ldisc_enable_pair(tty, o_tty); 544 619 tty_ldisc_put(new_ldisc); 545 620 return 0; 546 621 } 547 622 548 - mutex_lock(&tty->ldisc_mutex); 549 - 550 - /* 551 - * We could be midstream of another ldisc change which has 552 - * dropped the lock during processing. If so we need to wait. 553 - */ 554 - 555 - while (test_bit(TTY_LDISC_CHANGING, &tty->flags)) { 556 - mutex_unlock(&tty->ldisc_mutex); 557 - tty_unlock(tty); 558 - wait_event(tty_ldisc_wait, 559 - test_bit(TTY_LDISC_CHANGING, &tty->flags) == 0); 560 - tty_lock(tty); 561 - mutex_lock(&tty->ldisc_mutex); 562 - } 563 - 564 - set_bit(TTY_LDISC_CHANGING, &tty->flags); 565 - 566 - /* 567 - * No more input please, we are switching. The new ldisc 568 - * will update this value in the ldisc open function 569 - */ 570 - 571 - tty->receive_room = 0; 572 - 573 - o_ldisc = tty->ldisc; 574 - 575 - tty_unlock(tty); 576 - /* 577 - * Make sure we don't change while someone holds a 578 - * reference to the line discipline. The TTY_LDISC bit 579 - * prevents anyone taking a reference once it is clear. 580 - * We need the lock to avoid racing reference takers. 581 - * 582 - * We must clear the TTY_LDISC bit here to avoid a livelock 583 - * with a userspace app continually trying to use the tty in 584 - * parallel to the change and re-referencing the tty. 585 - */ 586 - 587 - retval = tty_ldisc_halt(tty, o_tty, 5 * HZ); 588 - 589 - /* 590 - * Wait for hangup to complete, if pending. 591 - * We must drop the mutex here in case a hangup is also in process. 
592 - */ 593 - 594 - mutex_unlock(&tty->ldisc_mutex); 595 - 596 - flush_work(&tty->hangup_work); 597 - 623 + old_ldisc = tty->ldisc; 598 624 tty_lock(tty); 599 - mutex_lock(&tty->ldisc_mutex); 600 625 601 - /* handle wait idle failure locked */ 602 - if (retval) { 603 - tty_ldisc_put(new_ldisc); 604 - goto enable; 605 - } 606 - 607 - if (test_bit(TTY_HUPPING, &tty->flags)) { 626 + if (test_bit(TTY_HUPPING, &tty->flags) || 627 + test_bit(TTY_HUPPED, &tty->flags)) { 608 628 /* We were raced by the hangup method. It will have stomped 609 629 the ldisc data and closed the ldisc down */ 610 - clear_bit(TTY_LDISC_CHANGING, &tty->flags); 611 - mutex_unlock(&tty->ldisc_mutex); 630 + tty_ldisc_enable_pair(tty, o_tty); 612 631 tty_ldisc_put(new_ldisc); 613 632 tty_unlock(tty); 614 633 return -EIO; 615 634 } 616 635 617 - /* Shutdown the current discipline. */ 618 - tty_ldisc_close(tty, o_ldisc); 636 + /* Shutdown the old discipline. */ 637 + tty_ldisc_close(tty, old_ldisc); 619 638 620 639 /* Now set up the new line discipline. */ 621 640 tty->ldisc = new_ldisc; ··· 564 705 if (retval < 0) { 565 706 /* Back to the old one or N_TTY if we can't */ 566 707 tty_ldisc_put(new_ldisc); 567 - tty_ldisc_restore(tty, o_ldisc); 708 + tty_ldisc_restore(tty, old_ldisc); 568 709 } 569 710 570 - /* At this point we hold a reference to the new ldisc and a 571 - a reference to the old ldisc. If we ended up flipping back 572 - to the existing ldisc we have two references to it */ 573 - 574 - if (tty->ldisc->ops->num != o_ldisc->ops->num && tty->ops->set_ldisc) 711 + if (tty->ldisc->ops->num != old_ldisc->ops->num && tty->ops->set_ldisc) 575 712 tty->ops->set_ldisc(tty); 576 713 577 - tty_ldisc_put(o_ldisc); 714 + /* At this point we hold a reference to the new ldisc and a 715 + reference to the old ldisc, or we hold two references to 716 + the old ldisc (if it was restored as part of error cleanup 717 + above). In either case, releasing a single reference from 718 + the old ldisc is correct. 
*/ 578 719 579 - enable: 720 + tty_ldisc_put(old_ldisc); 721 + 580 722 /* 581 723 * Allow ldisc referencing to occur again 582 724 */ 583 - 584 - tty_ldisc_enable(tty); 585 - if (o_tty) 586 - tty_ldisc_enable(o_tty); 725 + tty_ldisc_enable_pair(tty, o_tty); 587 726 588 727 /* Restart the work queue in case no characters kick it off. Safe if 589 728 already running */ ··· 589 732 if (o_tty) 590 733 schedule_work(&o_tty->port->buf.work); 591 734 592 - mutex_unlock(&tty->ldisc_mutex); 593 735 tty_unlock(tty); 594 736 return retval; 595 737 } ··· 602 746 603 747 static void tty_reset_termios(struct tty_struct *tty) 604 748 { 605 - mutex_lock(&tty->termios_mutex); 749 + down_write(&tty->termios_rwsem); 606 750 tty->termios = tty->driver->init_termios; 607 751 tty->termios.c_ispeed = tty_termios_input_baud_rate(&tty->termios); 608 752 tty->termios.c_ospeed = tty_termios_baud_rate(&tty->termios); 609 - mutex_unlock(&tty->termios_mutex); 753 + up_write(&tty->termios_rwsem); 610 754 } 611 755 612 756 ··· 621 765 622 766 static int tty_ldisc_reinit(struct tty_struct *tty, int ldisc) 623 767 { 624 - struct tty_ldisc *ld = tty_ldisc_get(ldisc); 768 + struct tty_ldisc *ld = tty_ldisc_get(tty, ldisc); 625 769 626 770 if (IS_ERR(ld)) 627 771 return -1; ··· 660 804 661 805 tty_ldisc_debug(tty, "closing ldisc: %p\n", tty->ldisc); 662 806 663 - /* 664 - * FIXME! What are the locking issues here? This may me overdoing 665 - * things... This question is especially important now that we've 666 - * removed the irqlock. 
667 - */ 668 807 ld = tty_ldisc_ref(tty); 669 808 if (ld != NULL) { 670 - /* We may have no line discipline at this point */ 671 809 if (ld->ops->flush_buffer) 672 810 ld->ops->flush_buffer(tty); 673 811 tty_driver_flush_buffer(tty); ··· 672 822 ld->ops->hangup(tty); 673 823 tty_ldisc_deref(ld); 674 824 } 675 - /* 676 - * FIXME: Once we trust the LDISC code better we can wait here for 677 - * ldisc completion and fix the driver call race 678 - */ 825 + 679 826 wake_up_interruptible_poll(&tty->write_wait, POLLOUT); 680 827 wake_up_interruptible_poll(&tty->read_wait, POLLIN); 828 + 829 + tty_unlock(tty); 830 + 681 831 /* 682 832 * Shutdown the current line discipline, and reset it to 683 833 * N_TTY if need be. 684 834 * 685 835 * Avoid racing set_ldisc or tty_ldisc_release 686 836 */ 687 - mutex_lock(&tty->ldisc_mutex); 837 + tty_ldisc_lock_pair(tty, tty->link); 838 + tty_lock(tty); 688 839 689 - if (tty_ldisc_hangup_halt(tty)) { 840 + if (tty->ldisc) { 690 841 691 842 /* At this point we have a halted ldisc; we want to close it and 692 843 reopen a new ldisc. 
We could defer the reopen to the next ··· 706 855 BUG_ON(tty_ldisc_reinit(tty, N_TTY)); 707 856 WARN_ON(tty_ldisc_open(tty, tty->ldisc)); 708 857 } 709 - tty_ldisc_enable(tty); 710 858 } 711 - mutex_unlock(&tty->ldisc_mutex); 859 + tty_ldisc_enable_pair(tty, tty->link); 712 860 if (reset) 713 861 tty_reset_termios(tty); 714 862 ··· 739 889 tty_ldisc_close(tty, ld); 740 890 return retval; 741 891 } 742 - tty_ldisc_enable(o_tty); 743 892 } 744 - tty_ldisc_enable(tty); 745 893 return 0; 746 894 } 747 895 748 896 static void tty_ldisc_kill(struct tty_struct *tty) 749 897 { 750 - mutex_lock(&tty->ldisc_mutex); 751 898 /* 752 899 * Now kill off the ldisc 753 900 */ ··· 755 908 756 909 /* Ensure the next open requests the N_TTY ldisc */ 757 910 tty_set_termios_ldisc(tty, N_TTY); 758 - mutex_unlock(&tty->ldisc_mutex); 759 911 } 760 912 761 913 /** ··· 776 930 777 931 tty_ldisc_debug(tty, "closing ldisc: %p\n", tty->ldisc); 778 932 779 - tty_ldisc_halt(tty, o_tty, MAX_SCHEDULE_TIMEOUT); 780 - 933 + tty_ldisc_lock_pair(tty, o_tty); 781 934 tty_lock_pair(tty, o_tty); 782 - /* This will need doing differently if we need to lock */ 935 + 783 936 tty_ldisc_kill(tty); 784 937 if (o_tty) 785 938 tty_ldisc_kill(o_tty); 786 939 787 940 tty_unlock_pair(tty, o_tty); 941 + tty_ldisc_unlock_pair(tty, o_tty); 942 + 788 943 /* And the memory resources remaining (buffers, termios) will be 789 944 disposed of when the kref hits zero */ 790 945 ··· 802 955 803 956 void tty_ldisc_init(struct tty_struct *tty) 804 957 { 805 - struct tty_ldisc *ld = tty_ldisc_get(N_TTY); 958 + struct tty_ldisc *ld = tty_ldisc_get(tty, N_TTY); 806 959 if (IS_ERR(ld)) 807 960 panic("n_tty: init_tty"); 808 961 tty->ldisc = ld;
+1 -20
drivers/tty/vt/keyboard.c
··· 132 132 static unsigned char ledstate = 0xff; /* undefined */ 133 133 static unsigned char ledioctl; 134 134 135 - static struct ledptr { 136 - unsigned int *addr; 137 - unsigned int mask; 138 - unsigned char valid:1; 139 - } ledptrs[3]; 140 - 141 135 /* 142 136 * Notifier list for console keyboard events 143 137 */ ··· 988 994 static inline unsigned char getleds(void) 989 995 { 990 996 struct kbd_struct *kbd = kbd_table + fg_console; 991 - unsigned char leds; 992 - int i; 993 997 994 998 if (kbd->ledmode == LED_SHOW_IOCTL) 995 999 return ledioctl; 996 1000 997 - leds = kbd->ledflagstate; 998 - 999 - if (kbd->ledmode == LED_SHOW_MEM) { 1000 - for (i = 0; i < 3; i++) 1001 - if (ledptrs[i].valid) { 1002 - if (*ledptrs[i].addr & ledptrs[i].mask) 1003 - leds |= (1 << i); 1004 - else 1005 - leds &= ~(1 << i); 1006 - } 1007 - } 1008 - return leds; 1001 + return kbd->ledflagstate; 1009 1002 } 1010 1003 1011 1004 static int kbd_update_leds_helper(struct input_handle *handle, void *data)
+5 -3
drivers/tty/vt/selection.c
··· 24 24 #include <linux/selection.h> 25 25 #include <linux/tiocl.h> 26 26 #include <linux/console.h> 27 + #include <linux/tty_flip.h> 27 28 28 29 /* Don't take this from <ctype.h>: 011-015 on the screen aren't spaces */ 29 30 #define isspace(c) ((c) == ' ') ··· 347 346 console_unlock(); 348 347 349 348 ld = tty_ldisc_ref_wait(tty); 349 + tty_buffer_lock_exclusive(&vc->port); 350 350 351 - /* FIXME: this is completely unsafe */ 352 351 add_wait_queue(&vc->paste_wait, &wait); 353 352 while (sel_buffer && sel_buffer_lth > pasted) { 354 353 set_current_state(TASK_INTERRUPTIBLE); ··· 357 356 continue; 358 357 } 359 358 count = sel_buffer_lth - pasted; 360 - count = min(count, tty->receive_room); 361 - ld->ops->receive_buf(tty, sel_buffer + pasted, NULL, count); 359 + count = tty_ldisc_receive_buf(ld, sel_buffer + pasted, NULL, 360 + count); 362 361 pasted += count; 363 362 } 364 363 remove_wait_queue(&vc->paste_wait, &wait); 365 364 __set_current_state(TASK_RUNNING); 366 365 366 + tty_buffer_unlock_exclusive(&vc->port); 367 367 tty_ldisc_deref(ld); 368 368 return 0; 369 369 }
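The paste loop above now feeds data through `tty_ldisc_receive_buf()` under `tty_buffer_lock_exclusive()`, replacing the "completely unsafe" direct call. Its shape — offer the remainder, advance by however much the receiver actually accepted, sleep when nothing is accepted — is the generic flow-control pattern. A self-contained sketch with a hypothetical receiver standing in for the ldisc:

```c
#include <assert.h>
#include <stddef.h>

/* Toy receiver: accepts at most `room` bytes per call and returns the
 * count consumed, mimicking a receive_buf-style return value
 * (hypothetical stand-in, not the kernel API). */
static size_t toy_receive_buf(const char *p, size_t count, size_t room)
{
    (void)p;
    return count < room ? count : room;
}

/* Push a whole buffer through, advancing by whatever was consumed. */
static size_t paste_all(const char *buf, size_t len, size_t room)
{
    size_t pasted = 0;

    while (pasted < len) {
        size_t n = toy_receive_buf(buf + pasted, len - pasted, room);
        if (n == 0)
            break;      /* the real loop sleeps on paste_wait here */
        pasted += n;
    }
    return pasted;
}
```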
+5 -3
drivers/tty/vt/vt.c
··· 828 828 * If the caller passes a tty structure then update the termios winsize 829 829 * information and perform any necessary signal handling. 830 830 * 831 - * Caller must hold the console semaphore. Takes the termios mutex and 831 + * Caller must hold the console semaphore. Takes the termios rwsem and 832 832 * ctrl_lock of the tty IFF a tty is passed. 833 833 */ 834 834 ··· 972 972 * the actual work. 973 973 * 974 974 * Takes the console sem and the called methods then take the tty 975 - * termios_mutex and the tty ctrl_lock in that order. 975 + * termios_rwsem and the tty ctrl_lock in that order. 976 976 */ 977 977 static int vt_resize(struct tty_struct *tty, struct winsize *ws) 978 978 { ··· 2809 2809 console_unlock(); 2810 2810 } 2811 2811 2812 + static int default_color = 7; /* white */ 2812 2813 static int default_italic_color = 2; // green (ASCII) 2813 2814 static int default_underline_color = 3; // cyan (ASCII) 2815 + module_param_named(color, default_color, int, S_IRUGO | S_IWUSR); 2814 2816 module_param_named(italic, default_italic_color, int, S_IRUGO | S_IWUSR); 2815 2817 module_param_named(underline, default_underline_color, int, S_IRUGO | S_IWUSR); 2816 2818 ··· 2834 2832 vc->vc_palette[k++] = default_grn[j] ; 2835 2833 vc->vc_palette[k++] = default_blu[j] ; 2836 2834 } 2837 - vc->vc_def_color = 0x07; /* white */ 2835 + vc->vc_def_color = default_color; 2838 2836 vc->vc_ulcolor = default_underline_color; 2839 2837 vc->vc_itcolor = default_italic_color; 2840 2838 vc->vc_halfcolor = 0x08; /* grey */
+2
include/linux/atmel_serial.h
··· 124 124 #define ATMEL_US_NER 0x44 /* Number of Errors Register */ 125 125 #define ATMEL_US_IF 0x4c /* IrDA Filter Register */ 126 126 127 + #define ATMEL_US_NAME 0xf0 /* Ip Name */ 128 + 127 129 #endif
+1 -2
include/linux/kbd_kern.h
··· 36 36 #define VC_CTRLRLOCK KG_CTRLR /* ctrlr lock mode */ 37 37 unsigned char slockstate; /* for `sticky' Shift, Ctrl, etc. */ 38 38 39 - unsigned char ledmode:2; /* one 2-bit value */ 39 + unsigned char ledmode:1; 40 40 #define LED_SHOW_FLAGS 0 /* traditional state */ 41 41 #define LED_SHOW_IOCTL 1 /* only change leds upon ioctl */ 42 - #define LED_SHOW_MEM 2 /* `heartbeat': peek into memory */ 43 42 44 43 unsigned char ledflagstate:4; /* flags, not lights */ 45 44 unsigned char default_ledflagstate:4;
+23
include/linux/llist.h
··· 125 125 (pos) = llist_entry((pos)->member.next, typeof(*(pos)), member)) 126 126 127 127 /** 128 + * llist_for_each_entry_safe - iterate over some deleted entries of lock-less list of given type 129 + * safe against removal of list entry 130 + * @pos: the type * to use as a loop cursor. 131 + * @n: another type * to use as temporary storage 132 + * @node: the first entry of deleted list entries. 133 + * @member: the name of the llist_node with the struct. 134 + * 135 + * In general, some entries of the lock-less list can be traversed 136 + * safely only after being removed from list, so start with an entry 137 + * instead of list head. 138 + * 139 + * If being used on entries deleted from lock-less list directly, the 140 + * traverse order is from the newest to the oldest added entry. If 141 + * you want to traverse from the oldest to the newest, you must 142 + * reverse the order by yourself before traversing. 143 + */ 144 + #define llist_for_each_entry_safe(pos, n, node, member) \ 145 + for (pos = llist_entry((node), typeof(*pos), member); \ 146 + &pos->member != NULL && \ 147 + (n = llist_entry(pos->member.next, typeof(*n), member), true); \ 148 + pos = n) 149 + 150 + /** 128 151 * llist_empty - tests whether a lock-less list is empty 129 152 * @head: the list to test 130 153 *
+7
include/linux/of.h
··· 343 343 s; \ 344 344 s = of_prop_next_string(prop, s)) 345 345 346 + int of_device_is_stdout_path(struct device_node *dn); 347 + 346 348 #else /* CONFIG_OF */ 347 349 348 350 static inline const char* of_node_full_name(struct device_node *np) ··· 503 501 } 504 502 505 503 static inline int of_machine_is_compatible(const char *compat) 504 + { 505 + return 0; 506 + } 507 + 508 + static inline int of_device_is_stdout_path(struct device_node *dn) 506 509 { 507 510 return 0; 508 511 }
+2 -2
include/linux/pci_ids.h
··· 1311 1311 #define PCI_DEVICE_ID_IMS_TT128 0x9128 1312 1312 #define PCI_DEVICE_ID_IMS_TT3D 0x9135 1313 1313 1314 + #define PCI_VENDOR_ID_AMCC 0x10e8 1315 + 1314 1316 #define PCI_VENDOR_ID_INTERG 0x10ea 1315 1317 #define PCI_DEVICE_ID_INTERG_1682 0x1682 1316 1318 #define PCI_DEVICE_ID_INTERG_2000 0x2000 ··· 2258 2256 /* 2259 2257 * ADDI-DATA GmbH communication cards <info@addi-data.com> 2260 2258 */ 2261 - #define PCI_VENDOR_ID_ADDIDATA_OLD 0x10E8 2262 2259 #define PCI_VENDOR_ID_ADDIDATA 0x15B8 2263 2260 #define PCI_DEVICE_ID_ADDIDATA_APCI7500 0x7000 2264 2261 #define PCI_DEVICE_ID_ADDIDATA_APCI7420 0x7001 2265 2262 #define PCI_DEVICE_ID_ADDIDATA_APCI7300 0x7002 2266 - #define PCI_DEVICE_ID_ADDIDATA_APCI7800 0x818E 2267 2263 #define PCI_DEVICE_ID_ADDIDATA_APCI7500_2 0x7009 2268 2264 #define PCI_DEVICE_ID_ADDIDATA_APCI7420_2 0x700A 2269 2265 #define PCI_DEVICE_ID_ADDIDATA_APCI7300_2 0x700B
+3 -6
include/linux/platform_data/max310x.h
··· 1 1 /* 2 - * Maxim (Dallas) MAX3107/8 serial driver 2 + * Maxim (Dallas) MAX3107/8/9, MAX14830 serial driver 3 3 * 4 4 * Copyright (C) 2012 Alexander Shiyan <shc_work@mail.ru> 5 5 * ··· 37 37 * }; 38 38 */ 39 39 40 - #define MAX310X_MAX_UARTS 1 40 + #define MAX310X_MAX_UARTS 4 41 41 42 42 /* MAX310X platform data structure */ 43 43 struct max310x_pdata { 44 44 /* Flags global to driver */ 45 - const u8 driver_flags:2; 45 + const u8 driver_flags; 46 46 #define MAX310X_EXT_CLK (0x00000001) /* External clock enable */ 47 - #define MAX310X_AUTOSLEEP (0x00000002) /* Enable AutoSleep mode */ 48 47 /* Flags global to UART port */ 49 48 const u8 uart_flags[MAX310X_MAX_UARTS]; 50 49 #define MAX310X_LOOPBACK (0x00000001) /* Loopback mode enable */ ··· 59 60 void (*init)(void); 60 61 /* Called before finish */ 61 62 void (*exit)(void); 62 - /* Suspend callback */ 63 - void (*suspend)(int do_suspend); 64 63 }; 65 64 66 65 #endif
-3
include/linux/platform_data/serial-sccnxp.h
··· 60 60 * }; 61 61 * 62 62 * static struct sccnxp_pdata sc2892_info = { 63 - * .frequency = 3686400, 64 63 * .mctrl_cfg[0] = MCTRL_SIG(DIR_OP, LINE_OP0), 65 64 * .mctrl_cfg[1] = MCTRL_SIG(DIR_OP, LINE_OP1), 66 65 * }; ··· 77 78 78 79 /* SCCNXP platform data structure */ 79 80 struct sccnxp_pdata { 80 - /* Frequency (extrenal clock or crystal) */ 81 - int frequency; 82 81 /* Shift for A0 line */ 83 82 const u8 reg_shift; 84 83 /* Modem control lines configuration */
+39 -27
include/linux/tty.h
··· 10 10 #include <linux/mutex.h> 11 11 #include <linux/tty_flags.h> 12 12 #include <uapi/linux/tty.h> 13 + #include <linux/rwsem.h> 14 + #include <linux/llist.h> 13 15 14 16 15 17 ··· 31 29 #define __DISABLED_CHAR '\0' 32 30 33 31 struct tty_buffer { 34 - struct tty_buffer *next; 35 - char *char_buf_ptr; 36 - unsigned char *flag_buf_ptr; 32 + union { 33 + struct tty_buffer *next; 34 + struct llist_node free; 35 + }; 37 36 int used; 38 37 int size; 39 38 int commit; ··· 43 40 unsigned long data[0]; 44 41 }; 45 42 46 - /* 47 - * We default to dicing tty buffer allocations to this many characters 48 - * in order to avoid multiple page allocations. We know the size of 49 - * tty_buffer itself but it must also be taken into account that the 50 - * the buffer is 256 byte aligned. See tty_buffer_find for the allocation 51 - * logic this must match 52 - */ 43 + static inline unsigned char *char_buf_ptr(struct tty_buffer *b, int ofs) 44 + { 45 + return ((unsigned char *)b->data) + ofs; 46 + } 53 47 54 - #define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF) 55 - 48 + static inline char *flag_buf_ptr(struct tty_buffer *b, int ofs) 49 + { 50 + return (char *)char_buf_ptr(b, ofs) + b->size; 51 + } 56 52 57 53 struct tty_bufhead { 58 - struct work_struct work; 59 - spinlock_t lock; 60 54 struct tty_buffer *head; /* Queue head */ 55 + struct work_struct work; 56 + struct mutex lock; 57 + atomic_t priority; 58 + struct tty_buffer sentinel; 59 + struct llist_head free; /* Free queue head */ 60 + atomic_t memory_used; /* In-use buffers excluding free list */ 61 61 struct tty_buffer *tail; /* Active buffer */ 62 - struct tty_buffer *free; /* Free queue head */ 63 - int memory_used; /* Buffer space used excluding 64 - free queue */ 65 62 }; 66 63 /* 67 64 * When a break, frame error, or parity error happens, these codes are ··· 202 199 wait_queue_head_t close_wait; /* Close waiters */ 203 200 wait_queue_head_t delta_msr_wait; /* Modem status change */ 204 
201 unsigned long flags; /* TTY flags ASY_*/ 205 - unsigned long iflags; /* TTYP_ internal flags */ 206 - #define TTYP_FLUSHING 1 /* Flushing to ldisc in progress */ 207 - #define TTYP_FLUSHPENDING 2 /* Queued buffer flush pending */ 208 202 unsigned char console:1, /* port is a console */ 209 203 low_latency:1; /* direct buffer flush */ 210 204 struct mutex mutex; /* Locking */ ··· 238 238 int index; 239 239 240 240 /* Protects ldisc changes: Lock tty not pty */ 241 - struct mutex ldisc_mutex; 241 + struct ld_semaphore ldisc_sem; 242 242 struct tty_ldisc *ldisc; 243 243 244 244 struct mutex atomic_write_lock; 245 245 struct mutex legacy_mutex; 246 - struct mutex termios_mutex; 246 + struct mutex throttle_mutex; 247 + struct rw_semaphore termios_rwsem; 248 + struct mutex winsize_mutex; 247 249 spinlock_t ctrl_lock; 248 - /* Termios values are protected by the termios mutex */ 250 + /* Termios values are protected by the termios rwsem */ 249 251 struct ktermios termios, termios_locked; 250 252 struct termiox *termiox; /* May be NULL for unsupported */ 251 253 char name[64]; ··· 255 253 struct pid *session; 256 254 unsigned long flags; 257 255 int count; 258 - struct winsize winsize; /* termios mutex */ 256 + struct winsize winsize; /* winsize_mutex */ 259 257 unsigned char stopped:1, hw_stopped:1, flow_stopped:1, packet:1; 260 258 unsigned char ctrl_status; /* ctrl_lock */ 261 259 unsigned int receive_room; /* Bytes free for queue */ ··· 305 303 #define TTY_EXCLUSIVE 3 /* Exclusive open mode */ 306 304 #define TTY_DEBUG 4 /* Debugging */ 307 305 #define TTY_DO_WRITE_WAKEUP 5 /* Call write_wakeup after queuing new */ 308 - #define TTY_PUSH 6 /* n_tty private */ 309 306 #define TTY_CLOSING 7 /* ->close() in progress */ 310 - #define TTY_LDISC 9 /* Line discipline attached */ 311 - #define TTY_LDISC_CHANGING 10 /* Line discipline changing */ 312 307 #define TTY_LDISC_OPEN 11 /* Line discipline is open */ 313 308 #define TTY_PTY_LOCK 16 /* pty private */ 314 309 #define 
TTY_NO_WRITE_SPLIT 17 /* Preserve write boundaries to driver */ ··· 557 558 extern void tty_ldisc_init(struct tty_struct *tty); 558 559 extern void tty_ldisc_deinit(struct tty_struct *tty); 559 560 extern void tty_ldisc_begin(void); 561 + 562 + static inline int tty_ldisc_receive_buf(struct tty_ldisc *ld, unsigned char *p, 563 + char *f, int count) 564 + { 565 + if (ld->ops->receive_buf2) 566 + count = ld->ops->receive_buf2(ld->tty, p, f, count); 567 + else { 568 + count = min_t(int, count, ld->tty->receive_room); 569 + if (count) 570 + ld->ops->receive_buf(ld->tty, p, f, count); 571 + } 572 + return count; 573 + } 560 574 561 575 562 576 /* n_tty.c */
+6 -2
include/linux/tty_flip.h
··· 1 1 #ifndef _LINUX_TTY_FLIP_H 2 2 #define _LINUX_TTY_FLIP_H 3 3 4 + extern int tty_buffer_space_avail(struct tty_port *port); 4 5 extern int tty_buffer_request_room(struct tty_port *port, size_t size); 5 6 extern int tty_insert_flip_string_flags(struct tty_port *port, 6 7 const unsigned char *chars, const char *flags, size_t size); ··· 19 18 { 20 19 struct tty_buffer *tb = port->buf.tail; 21 20 if (tb && tb->used < tb->size) { 22 - tb->flag_buf_ptr[tb->used] = flag; 23 - tb->char_buf_ptr[tb->used++] = ch; 21 + *flag_buf_ptr(tb, tb->used) = flag; 22 + *char_buf_ptr(tb, tb->used++) = ch; 24 23 return 1; 25 24 } 26 25 return tty_insert_flip_string_flags(port, &ch, &flag, 1); ··· 31 30 { 32 31 return tty_insert_flip_string_fixed_flag(port, chars, TTY_NORMAL, size); 33 32 } 33 + 34 + extern void tty_buffer_lock_exclusive(struct tty_port *port); 35 + extern void tty_buffer_unlock_exclusive(struct tty_port *port); 34 36 35 37 #endif /* _LINUX_TTY_FLIP_H */
+14 -2
include/linux/tty_ldisc.h
··· 109 109 * 
 110 110 * Tells the discipline that the DCD pin has changed its status. 
 111 111 * Used exclusively by the N_PPS (Pulse-Per-Second) line discipline. 
 112 + * 
 113 + * int (*receive_buf2)(struct tty_struct *, const unsigned char *cp, 
 114 + * char *fp, int count); 
 115 + * 
 116 + * This function is called by the low-level tty driver to send 
 117 + * characters received by the hardware to the line discipline for 
 118 + * processing. <cp> is a pointer to the buffer of input 
 119 + * characters received by the device. <fp> is a pointer to 
 120 + * the flag bytes which indicate whether a character was 
 121 + * received with a parity error, etc. 
 122 + * If assigned, prefer this function for automatic flow control. 
 112 123 */ 
 113 124 
 114 125 #include <linux/fs.h> 
 ··· 206 195 void (*write_wakeup)(struct tty_struct *); 
 207 196 void (*dcd_change)(struct tty_struct *, unsigned int); 
 208 197 void (*fasync)(struct tty_struct *tty, int on); 
 198 + int (*receive_buf2)(struct tty_struct *, const unsigned char *cp, 
 199 + char *fp, int count); 
 209 200 
 210 201 struct module *owner; 
 ··· 216 203 
 217 204 struct tty_ldisc { 
 218 205 struct tty_ldisc_ops *ops; 
 219 - atomic_t users; 
 220 - wait_queue_head_t wq_idle; 
 206 + struct tty_struct *tty; 
 221 207 }; 
 222 208 
 223 209 #define TTY_LDISC_MAGIC 0x5403
+3
include/uapi/linux/serial_core.h
··· 232 232 /* SH-SCI */ 233 233 #define PORT_HSCIF 104 234 234 235 + /* ST ASC type numbers */ 236 + #define PORT_ASC 105 237 + 235 238 #endif /* _UAPILINUX_SERIAL_CORE_H */
+7
kernel/printk/printk.c
··· 2226 2226 struct console *bcon = NULL; 2227 2227 struct console_cmdline *c; 2228 2228 2229 + if (console_drivers) 2230 + for_each_console(bcon) 2231 + if (WARN(bcon == newcon, 2232 + "console '%s%d' already registered\n", 2233 + bcon->name, bcon->index)) 2234 + return; 2235 + 2229 2236 /* 2230 2237 * before we register a new CON_BOOT console, make sure we don't 2231 2238 * already have a valid console