
Merge tag 'omap-for-v4.2/wakeirq-drivers-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/late

Merge "omap generic wakeirq for v4.2 merge window" from Tony Lindgren:

Omap driver changes for v4.2 to switch the omap_hsmmc, 8250_omap and
omap-serial drivers over to the Linux generic wake IRQ events.

The generic wake IRQs also fix potential IRQ re-entrancy issues in
these drivers, at least for serial-omap.

Note that because of dependencies and merge conflicts these are
based on Rafael's pm-wakeirq and Greg's tty-next branches.

* tag 'omap-for-v4.2/wakeirq-drivers-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap: (148 commits)
serial: 8250_omap: Move wake-up interrupt to generic wakeirq
serial: omap: Switch wake-up interrupt to generic wakeirq
tty: move linux/gsmmux.h to uapi
doc: dt: add documentation for nxp,lpc1850-uart
serial: 8250: add LPC18xx/43xx UART driver
serial: 8250_uniphier: add UniPhier serial driver
serial: 8250_dw: support ACPI platforms with integrated DMA engine
serial: of_serial: check the return value of clk_prepare_enable()
serial: of_serial: use devm_clk_get() instead of clk_get()
serial: earlycon: Add support for big-endian MMIO accesses
serial: sirf: use hrtimer for data rx
serial: sirf: correct the fifo empty_bit
serial: sirf: fix system hung on console log output
serial: 8250: remove return statements from void function
sc16is7xx: use kworker for RS-485 configuration
sc16is7xx: use kworker to update ier bits
sc16is7xx: use kworker for md_proc
sc16is7xx: move RTS delay to workqueue
sc16is7xx: use kthread_worker for tx_work and irq
sc16is7xx: use LSR_TEMT_BIT in .tx_empty()
...

+2894 -1892
+10
Documentation/devicetree/bindings/serial/arm_sbsa_uart.txt
··· 1 + * ARM SBSA defined generic UART 2 + This UART uses a subset of the PL011 registers and consequently lives 3 + in the PL011 driver. Its baud rate and other communication parameters 4 + cannot be adjusted at runtime, so it lacks a clock specifier here. 5 + 6 + Required properties: 7 + - compatible: must be "arm,sbsa-uart" 8 + - reg: exactly one register range 9 + - interrupts: exactly one interrupt specifier 10 + - current-speed: the (fixed) baud rate set by the firmware
+10 -2
Documentation/devicetree/bindings/serial/mtk-uart.txt
··· 14 14 15 15 - interrupts: A single interrupt specifier. 16 16 17 - - clocks: Clock driving the hardware. 17 + - clocks : Must contain an entry for each entry in clock-names. 18 + See ../clocks/clock-bindings.txt for details. 19 + - clock-names: 20 + - "baud": The clock the baudrate is derived from 21 + - "bus": The bus clock for register accesses (optional) 22 + 23 + For compatibility with older device trees an unnamed clock is used for the 24 + baud clock if the baudclk does not exist. Do not use this for new designs. 18 25 19 26 Example: 20 27 ··· 29 22 compatible = "mediatek,mt6589-uart", "mediatek,mt6577-uart"; 30 23 reg = <0x11006000 0x400>; 31 24 interrupts = <GIC_SPI 51 IRQ_TYPE_LEVEL_LOW>; 32 - clocks = <&uart_clk>; 25 + clocks = <&uart_clk>, <&bus_clk>; 26 + clock-names = "baud", "bus"; 33 27 };
+28
Documentation/devicetree/bindings/serial/nxp,lpc1850-uart.txt
··· 1 + * NXP LPC1850 UART 2 + 3 + Required properties: 4 + - compatible : "nxp,lpc1850-uart", "ns16550a". 5 + - reg : offset and length of the register set for the device. 6 + - interrupts : should contain uart interrupt. 7 + - clocks : phandle to the input clocks. 8 + - clock-names : required elements: "uartclk", "reg". 9 + 10 + Optional properties: 11 + - dmas : Two or more DMA channel specifiers following the 12 + convention outlined in bindings/dma/dma.txt 13 + - dma-names : Names for the dma channels, if present. There must 14 + be at least one channel named "tx" for transmit 15 + and named "rx" for receive. 16 + 17 + Since it's also possible to use the of_serial.c driver, all 18 + parameters from 8250.txt also apply but are optional. 19 + 20 + Example: 21 + uart0: serial@40081000 { 22 + compatible = "nxp,lpc1850-uart", "ns16550a"; 23 + reg = <0x40081000 0x1000>; 24 + reg-shift = <2>; 25 + interrupts = <24>; 26 + clocks = <&ccu2 CLK_APB0_UART0>, <&ccu1 CLK_CPU_UART0>; 27 + clock-names = "uartclk", "reg"; 28 + };
+37
Documentation/devicetree/bindings/serial/nxp,sc16is7xx.txt
··· 1 1 * NXP SC16IS7xx advanced Universal Asynchronous Receiver-Transmitter (UART) 2 + * i2c as bus 2 3 3 4 Required properties: 4 5 - compatible: Should be one of the following: ··· 32 31 gpio-controller; 33 32 #gpio-cells = <2>; 34 33 }; 34 + 35 + * spi as bus 36 + 37 + Required properties: 38 + - compatible: Should be one of the following: 39 + - "nxp,sc16is740" for NXP SC16IS740, 40 + - "nxp,sc16is741" for NXP SC16IS741, 41 + - "nxp,sc16is750" for NXP SC16IS750, 42 + - "nxp,sc16is752" for NXP SC16IS752, 43 + - "nxp,sc16is760" for NXP SC16IS760, 44 + - "nxp,sc16is762" for NXP SC16IS762. 45 + - reg: SPI chip select number. 46 + - interrupt-parent: The phandle for the interrupt controller that 47 + services interrupts for this IC. 48 + - interrupts: Specifies the interrupt source of the parent interrupt 49 + controller. The format of the interrupt specifier depends on the 50 + parent interrupt controller. 51 + - clocks: phandle to the IC source clock. 52 + 53 + Optional properties: 54 + - gpio-controller: Marks the device node as a GPIO controller. 55 + - #gpio-cells: Should be two. The first cell is the GPIO number and 56 + the second cell is used to specify the GPIO polarity: 57 + 0 = active high, 58 + 1 = active low. 59 + 60 + Example: 61 + sc16is750: sc16is750@0 { 62 + compatible = "nxp,sc16is750"; 63 + reg = <0>; 64 + clocks = <&clk20m>; 65 + interrupt-parent = <&gpio3>; 66 + interrupts = <7 IRQ_TYPE_EDGE_FALLING>; 67 + gpio-controller; 68 + #gpio-cells = <2>; 69 + };
+7
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
··· 44 44 Note: Each enabled SCIx UART should have an alias correctly numbered in the 45 45 "aliases" node. 46 46 47 + Optional properties: 48 + - dmas: Must contain a list of two references to DMA specifiers, one for 49 + transmission, and one for reception. 50 + - dma-names: Must contain a list of two DMA names, "tx" and "rx". 51 + 47 52 Example: 48 53 aliases { 49 54 serial0 = &scifa0; ··· 61 56 interrupts = <0 144 IRQ_TYPE_LEVEL_HIGH>; 62 57 clocks = <&mstp2_clks R8A7790_CLK_SCIFA0>; 63 58 clock-names = "sci_ick"; 59 + dmas = <&dmac0 0x21>, <&dmac0 0x22>; 60 + dma-names = "tx", "rx"; 64 61 };
+1 -14
Documentation/devicetree/bindings/serial/sirf-uart.txt
··· 2 2 3 3 Required properties: 4 4 - compatible : Should be "sirf,prima2-uart", "sirf, prima2-usp-uart", 5 - "sirf,atlas7-uart" or "sirf,atlas7-bt-uart" which means 6 - uart located in BT module and used for BT. 5 + "sirf,atlas7-uart" or "sirf,atlas7-usp-uart". 7 6 - reg : Offset and length of the register set for the device 8 7 - interrupts : Should contain uart interrupt 9 8 - fifosize : Should define hardware rx/tx fifo size ··· 32 33 rts-gpios = <&gpio 15 0>; 33 34 cts-gpios = <&gpio 46 0>; 34 35 }; 35 - 36 - for uart use in BT module, 37 - uart6: uart@11000000 { 38 - cell-index = <6>; 39 - compatible = "sirf,atlas7-bt-uart", "sirf,atlas7-uart"; 40 - reg = <0x11000000 0x1000>; 41 - interrupts = <0 100 0>; 42 - clocks = <&clks 138>, <&clks 140>, <&clks 141>; 43 - clock-names = "uart", "general", "noc"; 44 - fifosize = <128>; 45 - status = "disabled"; 46 - }
+5 -4
Documentation/kernel-parameters.txt
··· 959 959 uart[8250],io,<addr>[,options] 960 960 uart[8250],mmio,<addr>[,options] 961 961 uart[8250],mmio32,<addr>[,options] 962 + uart[8250],mmio32be,<addr>[,options] 962 963 uart[8250],0x<addr>[,options] 963 964 Start an early, polled-mode console on the 8250/16550 964 965 UART at the specified I/O port or MMIO address. 965 966 MMIO inter-register address stride is either 8-bit 966 - (mmio) or 32-bit (mmio32). 967 - If none of [io|mmio|mmio32], <addr> is assumed to be 968 - equivalent to 'mmio'. 'options' are specified in the 969 - same format described for "console=ttyS<n>"; if 967 + (mmio) or 32-bit (mmio32 or mmio32be). 968 + If none of [io|mmio|mmio32|mmio32be], <addr> is assumed 969 + to be equivalent to 'mmio'. 'options' are specified 970 + in the same format described for "console=ttyS<n>"; if 970 971 unspecified, the h/w is not initialized. 971 972 972 973 pl011,<addr>
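As a sketch of the new option described above (the MMIO address below is hypothetical, not taken from this patch), a big-endian 8250 early console could be requested with a boot argument such as:

```
earlycon=uart8250,mmio32be,0x1c020000,115200n8
```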
+6
Documentation/power/runtime_pm.txt
··· 556 556 should be used. Of course, for this purpose the device's runtime PM has to be 557 557 enabled earlier by calling pm_runtime_enable(). 558 558 559 + Note, if the device may execute pm_runtime calls during the probe (such as 560 + if it registers with a subsystem that may call back in) then the 561 + pm_runtime_get_sync() call paired with a pm_runtime_put() call will be 562 + appropriate to ensure that the device is not put back to sleep during the 563 + probe. This can happen with systems such as the network device layer. 564 + 559 565 It may be desirable to suspend the device once ->probe() has finished. 560 566 Therefore the driver core uses the asyncronous pm_request_idle() to submit a 561 567 request to execute the subsystem-level idle callback for the device at that
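A minimal sketch of the pattern the runtime_pm.txt note above describes (the foo_* names and the network-device registration are hypothetical placeholders, not part of this patch): hold a runtime PM usage count across a registration that may call back into the driver, then drop it once probe is done.

```c
static int foo_probe(struct platform_device *pdev)
{
	struct foo_priv *priv = platform_get_drvdata(pdev);	/* hypothetical */
	int err;

	pm_runtime_enable(&pdev->dev);

	/* Keep the device active while the subsystem may call back in */
	err = pm_runtime_get_sync(&pdev->dev);
	if (err < 0) {
		pm_runtime_put_noidle(&pdev->dev);
		pm_runtime_disable(&pdev->dev);
		return err;
	}

	err = register_netdev(priv->ndev);	/* may invoke driver callbacks */
	if (err) {
		pm_runtime_put(&pdev->dev);
		pm_runtime_disable(&pdev->dev);
		return err;
	}

	/* Registration done; the device may runtime suspend again */
	pm_runtime_put(&pdev->dev);
	return 0;
}
```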
+1 -1
arch/alpha/include/asm/serial.h
··· 13 13 #define BASE_BAUD ( 1843200 / 16 ) 14 14 15 15 /* Standard COM flags (except for COM4, because of the 8514 problem) */ 16 - #ifdef CONFIG_SERIAL_DETECT_IRQ 16 + #ifdef CONFIG_SERIAL_8250_DETECT_IRQ 17 17 #define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ) 18 18 #define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_AUTO_IRQ) 19 19 #else
+3
arch/arm/common/edma.c
··· 1350 1350 edma_shadow0_write_array(ctlr, SH_SECR, j, mask); 1351 1351 edma_write_array(ctlr, EDMA_EMCR, j, mask); 1352 1352 1353 + /* clear possibly pending completion interrupt */ 1354 + edma_shadow0_write_array(ctlr, SH_ICR, j, mask); 1355 + 1353 1356 pr_debug("EDMA: EER%d %08x\n", j, 1354 1357 edma_shadow0_read_array(ctlr, SH_EER, j)); 1355 1358
+4 -4
arch/blackfin/include/asm/bfin_serial.h
··· 22 22 defined(CONFIG_BFIN_UART2_CTSRTS) || \ 23 23 defined(CONFIG_BFIN_UART3_CTSRTS) 24 24 # if defined(BFIN_UART_BF54X_STYLE) || defined(BFIN_UART_BF60X_STYLE) 25 - # define CONFIG_SERIAL_BFIN_HARD_CTSRTS 25 + # define SERIAL_BFIN_HARD_CTSRTS 26 26 # else 27 - # define CONFIG_SERIAL_BFIN_CTSRTS 27 + # define SERIAL_BFIN_CTSRTS 28 28 # endif 29 29 #endif 30 30 ··· 50 50 #elif ANOMALY_05000363 51 51 unsigned int anomaly_threshold; 52 52 #endif 53 - #if defined(CONFIG_SERIAL_BFIN_CTSRTS) || \ 54 - defined(CONFIG_SERIAL_BFIN_HARD_CTSRTS) 53 + #if defined(SERIAL_BFIN_CTSRTS) || \ 54 + defined(SERIAL_BFIN_HARD_CTSRTS) 55 55 int cts_pin; 56 56 int rts_pin; 57 57 #endif
+1 -1
arch/m68k/include/asm/serial.h
··· 17 17 #define BASE_BAUD ( 1843200 / 16 ) 18 18 19 19 /* Standard COM flags (except for COM4, because of the 8514 problem) */ 20 - #ifdef CONFIG_SERIAL_DETECT_IRQ 20 + #ifdef CONFIG_SERIAL_8250_DETECT_IRQ 21 21 #define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ) 22 22 #define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_AUTO_IRQ) 23 23 #else
+2 -2
arch/mn10300/include/asm/serial.h
··· 13 13 #define _ASM_SERIAL_H 14 14 15 15 /* Standard COM flags (except for COM4, because of the 8514 problem) */ 16 - #ifdef CONFIG_SERIAL_DETECT_IRQ 16 + #ifdef CONFIG_SERIAL_8250_DETECT_IRQ 17 17 #define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ) 18 18 #define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_AUTO_IRQ) 19 19 #else ··· 21 21 #define STD_COM4_FLAGS ASYNC_BOOT_AUTOCONF 22 22 #endif 23 23 24 - #ifdef CONFIG_SERIAL_MANY_PORTS 24 + #ifdef CONFIG_SERIAL_8250_MANY_PORTS 25 25 #define FOURPORT_FLAGS ASYNC_FOURPORT 26 26 #define ACCENT_FLAGS 0 27 27 #define BOCA_FLAGS 0
+1 -1
arch/x86/include/asm/serial.h
··· 11 11 #define BASE_BAUD (1843200/16) 12 12 13 13 /* Standard COM flags (except for COM4, because of the 8514 problem) */ 14 - #ifdef CONFIG_SERIAL_DETECT_IRQ 14 + #ifdef CONFIG_SERIAL_8250_DETECT_IRQ 15 15 # define STD_COMX_FLAGS (UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_AUTO_IRQ) 16 16 # define STD_COM4_FLAGS (UPF_BOOT_AUTOCONF | 0 | UPF_AUTO_IRQ) 17 17 #else
+1 -1
drivers/base/power/Makefile
··· 1 - obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o 1 + obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o 2 2 obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o 3 3 obj-$(CONFIG_PM_TRACE_RTC) += trace.o 4 4 obj-$(CONFIG_PM_OPP) += opp.o
+3
drivers/base/power/main.c
··· 24 24 #include <linux/pm.h> 25 25 #include <linux/pm_runtime.h> 26 26 #include <linux/pm-trace.h> 27 + #include <linux/pm_wakeirq.h> 27 28 #include <linux/interrupt.h> 28 29 #include <linux/sched.h> 29 30 #include <linux/async.h> ··· 588 587 async_synchronize_full(); 589 588 dpm_show_time(starttime, state, "noirq"); 590 589 resume_device_irqs(); 590 + device_wakeup_disarm_wake_irqs(); 591 591 cpuidle_resume(); 592 592 trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false); 593 593 } ··· 1106 1104 1107 1105 trace_suspend_resume(TPS("dpm_suspend_noirq"), state.event, true); 1108 1106 cpuidle_pause(); 1107 + device_wakeup_arm_wake_irqs(); 1109 1108 suspend_device_irqs(); 1110 1109 mutex_lock(&dpm_list_mtx); 1111 1110 pm_transition = state;
+48
drivers/base/power/power.h
··· 20 20 extern void pm_runtime_init(struct device *dev); 21 21 extern void pm_runtime_remove(struct device *dev); 22 22 23 + struct wake_irq { 24 + struct device *dev; 25 + int irq; 26 + bool dedicated_irq:1; 27 + }; 28 + 29 + extern void dev_pm_arm_wake_irq(struct wake_irq *wirq); 30 + extern void dev_pm_disarm_wake_irq(struct wake_irq *wirq); 31 + 32 + #ifdef CONFIG_PM_SLEEP 33 + 34 + extern int device_wakeup_attach_irq(struct device *dev, 35 + struct wake_irq *wakeirq); 36 + extern void device_wakeup_detach_irq(struct device *dev); 37 + extern void device_wakeup_arm_wake_irqs(void); 38 + extern void device_wakeup_disarm_wake_irqs(void); 39 + 40 + #else 41 + 42 + static inline int 43 + device_wakeup_attach_irq(struct device *dev, 44 + struct wake_irq *wakeirq) 45 + { 46 + return 0; 47 + } 48 + 49 + static inline void device_wakeup_detach_irq(struct device *dev) 50 + { 51 + } 52 + 53 + static inline void device_wakeup_arm_wake_irqs(void) 54 + { 55 + } 56 + 57 + static inline void device_wakeup_disarm_wake_irqs(void) 58 + { 59 + } 60 + 61 + #endif /* CONFIG_PM_SLEEP */ 62 + 23 63 /* 24 64 * sysfs.c 25 65 */ ··· 91 51 static inline void wakeup_sysfs_remove(struct device *dev) {} 92 52 static inline int pm_qos_sysfs_add(struct device *dev) { return 0; } 93 53 static inline void pm_qos_sysfs_remove(struct device *dev) {} 54 + 55 + static inline void dev_pm_arm_wake_irq(struct wake_irq *wirq) 56 + { 57 + } 58 + 59 + static inline void dev_pm_disarm_wake_irq(struct wake_irq *wirq) 60 + { 61 + } 94 62 95 63 #endif 96 64
+6
drivers/base/power/runtime.c
··· 10 10 #include <linux/sched.h> 11 11 #include <linux/export.h> 12 12 #include <linux/pm_runtime.h> 13 + #include <linux/pm_wakeirq.h> 13 14 #include <trace/events/rpm.h> 14 15 #include "power.h" 15 16 ··· 515 514 516 515 callback = RPM_GET_CALLBACK(dev, runtime_suspend); 517 516 517 + dev_pm_enable_wake_irq(dev); 518 518 retval = rpm_callback(callback, dev); 519 519 if (retval) 520 520 goto fail; ··· 554 552 return retval; 555 553 556 554 fail: 555 + dev_pm_disable_wake_irq(dev); 557 556 __update_runtime_status(dev, RPM_ACTIVE); 558 557 dev->power.deferred_resume = false; 559 558 wake_up_all(&dev->power.wait_queue); ··· 737 734 738 735 callback = RPM_GET_CALLBACK(dev, runtime_resume); 739 736 737 + dev_pm_disable_wake_irq(dev); 740 738 retval = rpm_callback(callback, dev); 741 739 if (retval) { 742 740 __update_runtime_status(dev, RPM_SUSPENDED); 743 741 pm_runtime_cancel_pending(dev); 742 + dev_pm_enable_wake_irq(dev); 744 743 } else { 745 744 no_callback: 746 745 __update_runtime_status(dev, RPM_ACTIVE); 746 + pm_runtime_mark_last_busy(dev); 747 747 if (parent) 748 748 atomic_inc(&parent->power.child_count); 749 749 }
+273
drivers/base/power/wakeirq.c
··· 1 + /* 2 + * wakeirq.c - Device wakeirq helper functions 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 9 + * kind, whether express or implied; without even the implied warranty 10 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #include <linux/device.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/irq.h> 17 + #include <linux/slab.h> 18 + #include <linux/pm_runtime.h> 19 + #include <linux/pm_wakeirq.h> 20 + 21 + #include "power.h" 22 + 23 + /** 24 + * dev_pm_attach_wake_irq - Attach device interrupt as a wake IRQ 25 + * @dev: Device entry 26 + * @irq: Device wake-up capable interrupt 27 + * @wirq: Wake irq specific data 28 + * 29 + * Internal function to attach either a device IO interrupt or a 30 + * dedicated wake-up interrupt as a wake IRQ. 31 + */ 32 + static int dev_pm_attach_wake_irq(struct device *dev, int irq, 33 + struct wake_irq *wirq) 34 + { 35 + unsigned long flags; 36 + int err; 37 + 38 + if (!dev || !wirq) 39 + return -EINVAL; 40 + 41 + spin_lock_irqsave(&dev->power.lock, flags); 42 + if (dev_WARN_ONCE(dev, dev->power.wakeirq, 43 + "wake irq already initialized\n")) { 44 + spin_unlock_irqrestore(&dev->power.lock, flags); 45 + return -EEXIST; 46 + } 47 + 48 + dev->power.wakeirq = wirq; 49 + spin_unlock_irqrestore(&dev->power.lock, flags); 50 + 51 + err = device_wakeup_attach_irq(dev, wirq); 52 + if (err) 53 + return err; 54 + 55 + return 0; 56 + } 57 + 58 + /** 59 + * dev_pm_set_wake_irq - Attach device IO interrupt as wake IRQ 60 + * @dev: Device entry 61 + * @irq: Device IO interrupt 62 + * 63 + * Attach a device IO interrupt as a wake IRQ. 
The wake IRQ gets 64 + * automatically configured for wake-up from suspend based 65 + * on the device specific sysfs wakeup entry. Typically called 66 + * during driver probe after calling device_init_wakeup(). 67 + */ 68 + int dev_pm_set_wake_irq(struct device *dev, int irq) 69 + { 70 + struct wake_irq *wirq; 71 + int err; 72 + 73 + wirq = kzalloc(sizeof(*wirq), GFP_KERNEL); 74 + if (!wirq) 75 + return -ENOMEM; 76 + 77 + wirq->dev = dev; 78 + wirq->irq = irq; 79 + 80 + err = dev_pm_attach_wake_irq(dev, irq, wirq); 81 + if (err) 82 + kfree(wirq); 83 + 84 + return err; 85 + } 86 + EXPORT_SYMBOL_GPL(dev_pm_set_wake_irq); 87 + 88 + /** 89 + * dev_pm_clear_wake_irq - Detach a device IO interrupt wake IRQ 90 + * @dev: Device entry 91 + * 92 + * Detach a device wake IRQ and free resources. 93 + * 94 + * Note that it's OK for drivers to call this without calling 95 + * dev_pm_set_wake_irq() as not all driver instances may have 96 + * a wake IRQ configured. This avoids adding wake IRQ specific 97 + * checks into the drivers. 98 + */ 99 + void dev_pm_clear_wake_irq(struct device *dev) 100 + { 101 + struct wake_irq *wirq = dev->power.wakeirq; 102 + unsigned long flags; 103 + 104 + if (!wirq) 105 + return; 106 + 107 + spin_lock_irqsave(&dev->power.lock, flags); 108 + dev->power.wakeirq = NULL; 109 + spin_unlock_irqrestore(&dev->power.lock, flags); 110 + 111 + device_wakeup_detach_irq(dev); 112 + if (wirq->dedicated_irq) 113 + free_irq(wirq->irq, wirq); 114 + kfree(wirq); 115 + } 116 + EXPORT_SYMBOL_GPL(dev_pm_clear_wake_irq); 117 + 118 + /** 119 + * handle_threaded_wake_irq - Handler for dedicated wake-up interrupts 120 + * @irq: Device specific dedicated wake-up interrupt 121 + * @_wirq: Wake IRQ data 122 + * 123 + * Some devices have a separate wake-up interrupt in addition to the 124 + * device IO interrupt. The wake-up interrupt signals that a device 125 + * should be woken up from its idle state. 
This handler uses device 126 + * specific pm_runtime functions to wake the device, and then it's 127 + * up to the device to do whatever it needs to. Note that as the 128 + * device may need to restore context and start up regulators, we 129 + * use a threaded IRQ. 130 + * 131 + * Also note that we are not resending the lost device interrupts. 132 + * We assume that the wake-up interrupt just needs to wake-up the 133 + * device, and then device's pm_runtime_resume() can deal with the 134 + * situation. 135 + */ 136 + static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq) 137 + { 138 + struct wake_irq *wirq = _wirq; 139 + int res; 140 + 141 + /* We don't want RPM_ASYNC or RPM_NOWAIT here */ 142 + res = pm_runtime_resume(wirq->dev); 143 + if (res < 0) 144 + dev_warn(wirq->dev, 145 + "wake IRQ with no resume: %i\n", res); 146 + 147 + return IRQ_HANDLED; 148 + } 149 + 150 + /** 151 + * dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt 152 + * @dev: Device entry 153 + * @irq: Device wake-up interrupt 154 + * 155 + * Unless your hardware has separate wake-up interrupts in addition 156 + * to the device IO interrupts, you don't need this. 157 + * 158 + * Sets up a threaded interrupt handler for a device that has 159 + * a dedicated wake-up interrupt in addition to the device IO 160 + * interrupt. 161 + * 162 + * The interrupt starts disabled, and needs to be managed for 163 + * the device by the bus code or the device driver using 164 + * dev_pm_enable_wake_irq() and dev_pm_disable_wake_irq() 165 + * functions. 
166 + */ 167 + int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq) 168 + { 169 + struct wake_irq *wirq; 170 + int err; 171 + 172 + wirq = kzalloc(sizeof(*wirq), GFP_KERNEL); 173 + if (!wirq) 174 + return -ENOMEM; 175 + 176 + wirq->dev = dev; 177 + wirq->irq = irq; 178 + wirq->dedicated_irq = true; 179 + irq_set_status_flags(irq, IRQ_NOAUTOEN); 180 + 181 + /* 182 + * Consumer device may need to power up and restore state 183 + * so we use a threaded irq. 184 + */ 185 + err = request_threaded_irq(irq, NULL, handle_threaded_wake_irq, 186 + IRQF_ONESHOT, dev_name(dev), wirq); 187 + if (err) 188 + goto err_free; 189 + 190 + err = dev_pm_attach_wake_irq(dev, irq, wirq); 191 + if (err) 192 + goto err_free_irq; 193 + 194 + return err; 195 + 196 + err_free_irq: 197 + free_irq(irq, wirq); 198 + err_free: 199 + kfree(wirq); 200 + 201 + return err; 202 + } 203 + EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq); 204 + 205 + /** 206 + * dev_pm_enable_wake_irq - Enable device wake-up interrupt 207 + * @dev: Device 208 + * 209 + * Called from the bus code or the device driver for 210 + * runtime_suspend() to enable the wake-up interrupt while 211 + * the device is running. 212 + * 213 + * Note that for runtime_suspend() the wake-up interrupts 214 + * should be enabled unconditionally, unlike for suspend() 215 + * where enabling is conditional. 216 + */ 217 + void dev_pm_enable_wake_irq(struct device *dev) 218 + { 219 + struct wake_irq *wirq = dev->power.wakeirq; 220 + 221 + if (wirq && wirq->dedicated_irq) 222 + enable_irq(wirq->irq); 223 + } 224 + EXPORT_SYMBOL_GPL(dev_pm_enable_wake_irq); 225 + 226 + /** 227 + * dev_pm_disable_wake_irq - Disable device wake-up interrupt 228 + * @dev: Device 229 + * 230 + * Called from the bus code or the device driver for 231 + * runtime_resume() to disable the wake-up interrupt while 232 + * the device is running. 
233 + */ 234 + void dev_pm_disable_wake_irq(struct device *dev) 235 + { 236 + struct wake_irq *wirq = dev->power.wakeirq; 237 + 238 + if (wirq && wirq->dedicated_irq) 239 + disable_irq_nosync(wirq->irq); 240 + } 241 + EXPORT_SYMBOL_GPL(dev_pm_disable_wake_irq); 242 + 243 + /** 244 + * dev_pm_arm_wake_irq - Arm device wake-up 245 + * @wirq: Device wake-up interrupt 246 + * 247 + * Sets up the wake-up event conditionally based on the 248 + * device_may_wakeup(). 249 + */ 250 + void dev_pm_arm_wake_irq(struct wake_irq *wirq) 251 + { 252 + if (!wirq) 253 + return; 254 + 255 + if (device_may_wakeup(wirq->dev)) 256 + enable_irq_wake(wirq->irq); 257 + } 258 + 259 + /** 260 + * dev_pm_disarm_wake_irq - Disarm device wake-up 261 + * @wirq: Device wake-up interrupt 262 + * 263 + * Clears up the wake-up event conditionally based on the 264 + * device_may_wakeup(). 265 + */ 266 + void dev_pm_disarm_wake_irq(struct wake_irq *wirq) 267 + { 268 + if (!wirq) 269 + return; 270 + 271 + if (device_may_wakeup(wirq->dev)) 272 + disable_irq_wake(wirq->irq); 273 + }
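Putting the new wakeirq helpers together, a consumer driver would use them roughly as follows (a sketch with hypothetical foo_* names and interrupt naming; the omap_hsmmc changes in this series are the real in-tree example): enable wakeup, hand the dedicated interrupt to the PM core, and let runtime PM arm and disarm it.

```c
static int foo_probe(struct platform_device *pdev)
{
	int irq, err;

	irq = platform_get_irq_byname(pdev, "wakeup");	/* hypothetical name */
	if (irq < 0)
		return irq;

	device_init_wakeup(&pdev->dev, true);

	/* The PM core now owns the threaded handler and enable/disable */
	err = dev_pm_set_dedicated_wake_irq(&pdev->dev, irq);
	if (err) {
		device_init_wakeup(&pdev->dev, false);
		return err;
	}
	return 0;
}

static int foo_remove(struct platform_device *pdev)
{
	dev_pm_clear_wake_irq(&pdev->dev);
	device_init_wakeup(&pdev->dev, false);
	return 0;
}
```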
+92
drivers/base/power/wakeup.c
··· 14 14 #include <linux/suspend.h> 15 15 #include <linux/seq_file.h> 16 16 #include <linux/debugfs.h> 17 + #include <linux/pm_wakeirq.h> 17 18 #include <trace/events/power.h> 18 19 19 20 #include "power.h" ··· 238 237 return ret; 239 238 } 240 239 EXPORT_SYMBOL_GPL(device_wakeup_enable); 240 + 241 + /** 242 + * device_wakeup_attach_irq - Attach a wakeirq to a wakeup source 243 + * @dev: Device to handle 244 + * @wakeirq: Device specific wakeirq entry 245 + * 246 + * Attach a device wakeirq to the wakeup source so the device 247 + * wake IRQ can be configured automatically for suspend and 248 + * resume. 249 + */ 250 + int device_wakeup_attach_irq(struct device *dev, 251 + struct wake_irq *wakeirq) 252 + { 253 + struct wakeup_source *ws; 254 + int ret = 0; 255 + 256 + spin_lock_irq(&dev->power.lock); 257 + ws = dev->power.wakeup; 258 + if (!ws) { 259 + dev_err(dev, "forgot to call device_init_wakeup?\n"); 260 + ret = -EINVAL; 261 + goto unlock; 262 + } 263 + 264 + if (ws->wakeirq) { 265 + ret = -EEXIST; 266 + goto unlock; 267 + } 268 + 269 + ws->wakeirq = wakeirq; 270 + 271 + unlock: 272 + spin_unlock_irq(&dev->power.lock); 273 + 274 + return ret; 275 + } 276 + 277 + /** 278 + * device_wakeup_detach_irq - Detach a wakeirq from a wakeup source 279 + * @dev: Device to handle 280 + * 281 + * Removes a device wakeirq from the wakeup source. 282 + */ 283 + void device_wakeup_detach_irq(struct device *dev) 284 + { 285 + struct wakeup_source *ws; 286 + 287 + spin_lock_irq(&dev->power.lock); 288 + ws = dev->power.wakeup; 289 + if (!ws) 290 + goto unlock; 291 + 292 + ws->wakeirq = NULL; 293 + 294 + unlock: 295 + spin_unlock_irq(&dev->power.lock); 296 + } 297 + 298 + /** 299 + * device_wakeup_arm_wake_irqs(void) 300 + * 301 + * Iterates over the list of device wakeirqs to arm them. 
302 + */ 303 + void device_wakeup_arm_wake_irqs(void) 304 + { 305 + struct wakeup_source *ws; 306 + 307 + rcu_read_lock(); 308 + list_for_each_entry_rcu(ws, &wakeup_sources, entry) { 309 + if (ws->wakeirq) 310 + dev_pm_arm_wake_irq(ws->wakeirq); 311 + } 312 + rcu_read_unlock(); 313 + } 314 + 315 + /** 316 + * device_wakeup_disarm_wake_irqs(void) 317 + * 318 + * Iterates over the list of device wakeirqs to disarm them. 319 + */ 320 + void device_wakeup_disarm_wake_irqs(void) 321 + { 322 + struct wakeup_source *ws; 323 + 324 + rcu_read_lock(); 325 + list_for_each_entry_rcu(ws, &wakeup_sources, entry) { 326 + if (ws->wakeirq) 327 + dev_pm_disarm_wake_irq(ws->wakeirq); 328 + } 329 + rcu_read_unlock(); 330 + } 241 331 242 332 /** 243 333 * device_wakeup_detach - Detach a device's wakeup source object from it.
+1 -6
drivers/dma/edma.c
··· 300 300 { 301 301 struct edma_chan *echan = to_edma_chan(chan); 302 302 303 - /* Pause/Resume only allowed with cyclic mode */ 304 - if (!echan->edesc || !echan->edesc->cyclic) 303 + if (!echan->edesc) 305 304 return -EINVAL; 306 305 307 306 edma_pause(echan->ch_num); ··· 310 311 static int edma_dma_resume(struct dma_chan *chan) 311 312 { 312 313 struct edma_chan *echan = to_edma_chan(chan); 313 - 314 - /* Pause/Resume only allowed with cyclic mode */ 315 - if (!echan->edesc->cyclic) 316 - return -EINVAL; 317 314 318 315 edma_resume(echan->ch_num); 319 316 return 0;
+2 -3
drivers/input/serio/serport.c
··· 167 167 { 168 168 struct serport *serport = (struct serport*) tty->disc_data; 169 169 struct serio *serio; 170 - char name[64]; 171 170 172 171 if (test_and_set_bit(SERPORT_BUSY, &serport->flags)) 173 172 return -EBUSY; ··· 176 177 return -ENOMEM; 177 178 178 179 strlcpy(serio->name, "Serial port", sizeof(serio->name)); 179 - snprintf(serio->phys, sizeof(serio->phys), "%s/serio0", tty_name(tty, name)); 180 + snprintf(serio->phys, sizeof(serio->phys), "%s/serio0", tty_name(tty)); 180 181 serio->id = serport->id; 181 182 serio->id.type = SERIO_RS232; 182 183 serio->write = serport_serio_write; ··· 186 187 serio->dev.parent = tty->dev; 187 188 188 189 serio_register_port(serport->serio); 189 - printk(KERN_INFO "serio: Serial port %s\n", tty_name(tty, name)); 190 + printk(KERN_INFO "serio: Serial port %s\n", tty_name(tty)); 190 191 191 192 wait_event_interruptible(serport->wait, test_bit(SERPORT_DEAD, &serport->flags)); 192 193 serio_unregister_port(serport->serio);
+6 -43
drivers/mmc/host/omap_hsmmc.c
··· 43 43 #include <linux/regulator/consumer.h> 44 44 #include <linux/pinctrl/consumer.h> 45 45 #include <linux/pm_runtime.h> 46 + #include <linux/pm_wakeirq.h> 46 47 #include <linux/platform_data/hsmmc-omap.h> 47 48 48 49 /* OMAP HSMMC Host Controller Registers */ ··· 219 218 unsigned int flags; 220 219 #define AUTO_CMD23 (1 << 0) /* Auto CMD23 support */ 221 220 #define HSMMC_SDIO_IRQ_ENABLED (1 << 1) /* SDIO irq enabled */ 222 - #define HSMMC_WAKE_IRQ_ENABLED (1 << 2) 223 221 struct omap_hsmmc_next next_data; 224 222 struct omap_hsmmc_platform_data *pdata; 225 223 ··· 1117 1117 return IRQ_HANDLED; 1118 1118 } 1119 1119 1120 - static irqreturn_t omap_hsmmc_wake_irq(int irq, void *dev_id) 1121 - { 1122 - struct omap_hsmmc_host *host = dev_id; 1123 - 1124 - /* cirq is level triggered, disable to avoid infinite loop */ 1125 - spin_lock(&host->irq_lock); 1126 - if (host->flags & HSMMC_WAKE_IRQ_ENABLED) { 1127 - disable_irq_nosync(host->wake_irq); 1128 - host->flags &= ~HSMMC_WAKE_IRQ_ENABLED; 1129 - } 1130 - spin_unlock(&host->irq_lock); 1131 - pm_request_resume(host->dev); /* no use counter */ 1132 - 1133 - return IRQ_HANDLED; 1134 - } 1135 - 1136 1120 static void set_sd_bus_power(struct omap_hsmmc_host *host) 1137 1121 { 1138 1122 unsigned long i; ··· 1649 1665 1650 1666 static int omap_hsmmc_configure_wake_irq(struct omap_hsmmc_host *host) 1651 1667 { 1652 - struct mmc_host *mmc = host->mmc; 1653 1668 int ret; 1654 1669 1655 1670 /* ··· 1660 1677 if (!host->dev->of_node || !host->wake_irq) 1661 1678 return -ENODEV; 1662 1679 1663 - /* Prevent auto-enabling of IRQ */ 1664 - irq_set_status_flags(host->wake_irq, IRQ_NOAUTOEN); 1665 - ret = devm_request_irq(host->dev, host->wake_irq, omap_hsmmc_wake_irq, 1666 - IRQF_TRIGGER_LOW | IRQF_ONESHOT, 1667 - mmc_hostname(mmc), host); 1680 + ret = dev_pm_set_dedicated_wake_irq(host->dev, host->wake_irq); 1668 1681 if (ret) { 1669 1682 dev_err(mmc_dev(host->mmc), "Unable to request wake IRQ\n"); 1670 1683 goto err; ··· 1697 
1718 return 0; 1698 1719 1699 1720 err_free_irq: 1700 - devm_free_irq(host->dev, host->wake_irq, host); 1721 + dev_pm_clear_wake_irq(host->dev); 1701 1722 err: 1702 1723 dev_warn(host->dev, "no SDIO IRQ support, falling back to polling\n"); 1703 1724 host->wake_irq = 0; ··· 1986 2007 omap_hsmmc_ops.multi_io_quirk = omap_hsmmc_multi_io_quirk; 1987 2008 } 1988 2009 2010 + device_init_wakeup(&pdev->dev, true); 1989 2011 pm_runtime_enable(host->dev); 1990 2012 pm_runtime_get_sync(host->dev); 1991 2013 pm_runtime_set_autosuspend_delay(host->dev, MMC_AUTOSUSPEND_DELAY); ··· 2127 2147 if (host->use_reg) 2128 2148 omap_hsmmc_reg_put(host); 2129 2149 err_irq: 2150 + device_init_wakeup(&pdev->dev, false); 2130 2151 if (host->tx_chan) 2131 2152 dma_release_channel(host->tx_chan); 2132 2153 if (host->rx_chan) ··· 2159 2178 2160 2179 pm_runtime_put_sync(host->dev); 2161 2180 pm_runtime_disable(host->dev); 2181 + device_init_wakeup(&pdev->dev, false); 2162 2182 if (host->dbclk) 2163 2183 clk_disable_unprepare(host->dbclk); 2164 2184 ··· 2186 2204 OMAP_HSMMC_READ(host->base, HCTL) & ~SDBP); 2187 2205 } 2188 2206 2189 - /* do not wake up due to sdio irq */ 2190 - if ((host->mmc->caps & MMC_CAP_SDIO_IRQ) && 2191 - !(host->mmc->pm_flags & MMC_PM_WAKE_SDIO_IRQ)) 2192 - disable_irq(host->wake_irq); 2193 - 2194 2207 if (host->dbclk) 2195 2208 clk_disable_unprepare(host->dbclk); 2196 2209 ··· 2210 2233 omap_hsmmc_conf_bus_power(host); 2211 2234 2212 2235 omap_hsmmc_protect_card(host); 2213 - 2214 - if ((host->mmc->caps & MMC_CAP_SDIO_IRQ) && 2215 - !(host->mmc->pm_flags & MMC_PM_WAKE_SDIO_IRQ)) 2216 - enable_irq(host->wake_irq); 2217 - 2218 2236 pm_runtime_mark_last_busy(host->dev); 2219 2237 pm_runtime_put_autosuspend(host->dev); 2220 2238 return 0; ··· 2249 2277 } 2250 2278 2251 2279 pinctrl_pm_select_idle_state(dev); 2252 - 2253 - WARN_ON(host->flags & HSMMC_WAKE_IRQ_ENABLED); 2254 - enable_irq(host->wake_irq); 2255 - host->flags |= HSMMC_WAKE_IRQ_ENABLED; 2256 2280 } else { 2257 
2281 pinctrl_pm_select_idle_state(dev); 2258 2282 } ··· 2270 2302 spin_lock_irqsave(&host->irq_lock, flags); 2271 2303 if ((host->mmc->caps & MMC_CAP_SDIO_IRQ) && 2272 2304 (host->flags & HSMMC_SDIO_IRQ_ENABLED)) { 2273 - /* sdio irq flag can't change while in runtime suspend */ 2274 - if (host->flags & HSMMC_WAKE_IRQ_ENABLED) { 2275 - disable_irq_nosync(host->wake_irq); 2276 - host->flags &= ~HSMMC_WAKE_IRQ_ENABLED; 2277 - } 2278 2305 2279 2306 pinctrl_pm_select_default_state(host->dev); 2280 2307
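The omap_hsmmc changes above replace a hand-rolled wake IRQ handler and the HSMMC_WAKE_IRQ_ENABLED flag bookkeeping with the generic wake IRQ helpers: set a dedicated wake IRQ when configuring, clear it on the error path and at teardown. The sketch below is a self-contained userspace model of that lifecycle; the `fake_*` helpers and `struct fake_dev` are stand-ins, not the real `dev_pm_set_dedicated_wake_irq()`/`dev_pm_clear_wake_irq()` kernel API.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for struct device wake IRQ state. */
struct fake_dev {
	int wake_irq;
	bool armed;
};

/* Models dev_pm_set_dedicated_wake_irq(): arm a dedicated wake IRQ. */
static int fake_set_dedicated_wake_irq(struct fake_dev *d, int irq)
{
	if (irq <= 0)
		return -22;	/* -EINVAL: no usable wake IRQ */
	d->wake_irq = irq;
	d->armed = true;
	return 0;
}

/* Models dev_pm_clear_wake_irq(): disarm and forget the wake IRQ. */
static void fake_clear_wake_irq(struct fake_dev *d)
{
	d->wake_irq = 0;
	d->armed = false;
}

/*
 * Mirrors the shape of omap_hsmmc_configure_wake_irq() after the patch:
 * on failure the wake IRQ is cleared and the driver falls back to polling.
 */
static int configure_wake_irq(struct fake_dev *d, int irq)
{
	int ret = fake_set_dedicated_wake_irq(d, irq);

	if (ret) {
		fake_clear_wake_irq(d);	/* err_free_irq path */
		return ret;
	}
	return 0;
}
```

The same set/clear pairing also appears in the 8250_omap diff further down; the point of the conversion is that enable/disable across suspend transitions now lives in the PM core instead of each driver.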
+4 -7
drivers/tty/amiserial.c
··· 966 966 struct serial_state *info = tty->driver_data; 967 967 unsigned long flags; 968 968 #ifdef SERIAL_DEBUG_THROTTLE 969 - char buf[64]; 970 - 971 - printk("throttle %s: %d....\n", tty_name(tty, buf), 969 + printk("throttle %s: %d....\n", tty_name(tty), 972 970 tty->ldisc.chars_in_buffer(tty)); 973 971 #endif 974 972 ··· 989 991 struct serial_state *info = tty->driver_data; 990 992 unsigned long flags; 991 993 #ifdef SERIAL_DEBUG_THROTTLE 992 - char buf[64]; 993 - 994 - printk("unthrottle %s: %d....\n", tty_name(tty, buf), 994 + printk("unthrottle %s: %d....\n", tty_name(tty), 995 995 tty->ldisc.chars_in_buffer(tty)); 996 996 #endif 997 997 ··· 1782 1786 struct serial_state *state = platform_get_drvdata(pdev); 1783 1787 1784 1788 /* printk("Unloading %s: version %s\n", serial_name, serial_version); */ 1785 - if ((error = tty_unregister_driver(serial_driver))) 1789 + error = tty_unregister_driver(serial_driver); 1790 + if (error) 1786 1791 printk("SERIAL: failed to unregister serial driver (%d)\n", 1787 1792 error); 1788 1793 put_tty_driver(serial_driver);
+2 -6
drivers/tty/cyclades.c
··· 2861 2861 unsigned long flags; 2862 2862 2863 2863 #ifdef CY_DEBUG_THROTTLE 2864 - char buf[64]; 2865 - 2866 - printk(KERN_DEBUG "cyc:throttle %s: %ld...ttyC%d\n", tty_name(tty, buf), 2864 + printk(KERN_DEBUG "cyc:throttle %s: %ld...ttyC%d\n", tty_name(tty), 2867 2865 tty->ldisc.chars_in_buffer(tty), info->line); 2868 2866 #endif 2869 2867 ··· 2900 2902 unsigned long flags; 2901 2903 2902 2904 #ifdef CY_DEBUG_THROTTLE 2903 - char buf[64]; 2904 - 2905 2905 printk(KERN_DEBUG "cyc:unthrottle %s: %ld...ttyC%d\n", 2906 - tty_name(tty, buf), tty_chars_in_buffer(tty), info->line); 2906 + tty_name(tty), tty_chars_in_buffer(tty), info->line); 2907 2907 #endif 2908 2908 2909 2909 if (serial_paranoia_check(info, tty->name, "cy_unthrottle"))
-7
drivers/tty/hvc/Kconfig
··· 42 42 help 43 43 IBM Console device driver which makes use of RTAS 44 44 45 - config HVC_BEAT 46 - bool "Toshiba's Beat Hypervisor Console support" 47 - depends on PPC_CELLEB 48 - select HVC_DRIVER 49 - help 50 - Toshiba's Cell Reference Set Beat Console device driver 51 - 52 45 config HVC_IUCV 53 46 bool "z/VM IUCV Hypervisor console support (VM only)" 54 47 depends on S390
-1
drivers/tty/hvc/Makefile
··· 4 4 obj-$(CONFIG_HVC_RTAS) += hvc_rtas.o 5 5 obj-$(CONFIG_HVC_TILE) += hvc_tile.o 6 6 obj-$(CONFIG_HVC_DCC) += hvc_dcc.o 7 - obj-$(CONFIG_HVC_BEAT) += hvc_beat.o 8 7 obj-$(CONFIG_HVC_DRIVER) += hvc_console.o 9 8 obj-$(CONFIG_HVC_IRQ) += hvc_irq.o 10 9 obj-$(CONFIG_HVC_XEN) += hvc_xen.o
-134
drivers/tty/hvc/hvc_beat.c
··· 1 - /* 2 - * Beat hypervisor console driver 3 - * 4 - * (C) Copyright 2006 TOSHIBA CORPORATION 5 - * 6 - * This code is based on drivers/char/hvc_rtas.c: 7 - * (C) Copyright IBM Corporation 2001-2005 8 - * (C) Copyright Red Hat, Inc. 2005 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License as published by 12 - * the Free Software Foundation; either version 2 of the License, or 13 - * (at your option) any later version. 14 - * 15 - * This program is distributed in the hope that it will be useful, 16 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 - * GNU General Public License for more details. 19 - * 20 - * You should have received a copy of the GNU General Public License along 21 - * with this program; if not, write to the Free Software Foundation, Inc., 22 - * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
23 - */ 24 - 25 - #include <linux/module.h> 26 - #include <linux/init.h> 27 - #include <linux/err.h> 28 - #include <linux/string.h> 29 - #include <linux/console.h> 30 - #include <asm/prom.h> 31 - #include <asm/hvconsole.h> 32 - #include <asm/firmware.h> 33 - 34 - #include "hvc_console.h" 35 - 36 - extern int64_t beat_get_term_char(uint64_t, uint64_t *, uint64_t *, uint64_t *); 37 - extern int64_t beat_put_term_char(uint64_t, uint64_t, uint64_t, uint64_t); 38 - 39 - struct hvc_struct *hvc_beat_dev = NULL; 40 - 41 - /* bug: only one queue is available regardless of vtermno */ 42 - static int hvc_beat_get_chars(uint32_t vtermno, char *buf, int cnt) 43 - { 44 - static unsigned char q[sizeof(unsigned long) * 2] 45 - __attribute__((aligned(sizeof(unsigned long)))); 46 - static int qlen = 0; 47 - u64 got; 48 - 49 - again: 50 - if (qlen) { 51 - if (qlen > cnt) { 52 - memcpy(buf, q, cnt); 53 - qlen -= cnt; 54 - memmove(q + cnt, q, qlen); 55 - return cnt; 56 - } else { /* qlen <= cnt */ 57 - int r; 58 - 59 - memcpy(buf, q, qlen); 60 - r = qlen; 61 - qlen = 0; 62 - return r; 63 - } 64 - } 65 - if (beat_get_term_char(vtermno, &got, 66 - ((u64 *)q), ((u64 *)q) + 1) == 0) { 67 - qlen = got; 68 - goto again; 69 - } 70 - return 0; 71 - } 72 - 73 - static int hvc_beat_put_chars(uint32_t vtermno, const char *buf, int cnt) 74 - { 75 - unsigned long kb[2]; 76 - int rest, nlen; 77 - 78 - for (rest = cnt; rest > 0; rest -= nlen) { 79 - nlen = (rest > 16) ? 
16 : rest; 80 - memcpy(kb, buf, nlen); 81 - beat_put_term_char(vtermno, nlen, kb[0], kb[1]); 82 - buf += nlen; 83 - } 84 - return cnt; 85 - } 86 - 87 - static const struct hv_ops hvc_beat_get_put_ops = { 88 - .get_chars = hvc_beat_get_chars, 89 - .put_chars = hvc_beat_put_chars, 90 - }; 91 - 92 - static int hvc_beat_useit = 1; 93 - 94 - static int hvc_beat_config(char *p) 95 - { 96 - hvc_beat_useit = simple_strtoul(p, NULL, 0); 97 - return 0; 98 - } 99 - 100 - static int __init hvc_beat_console_init(void) 101 - { 102 - if (hvc_beat_useit && of_machine_is_compatible("Beat")) { 103 - hvc_instantiate(0, 0, &hvc_beat_get_put_ops); 104 - } 105 - return 0; 106 - } 107 - 108 - /* temp */ 109 - static int __init hvc_beat_init(void) 110 - { 111 - struct hvc_struct *hp; 112 - 113 - if (!firmware_has_feature(FW_FEATURE_BEAT)) 114 - return -ENODEV; 115 - 116 - hp = hvc_alloc(0, 0, &hvc_beat_get_put_ops, 16); 117 - if (IS_ERR(hp)) 118 - return PTR_ERR(hp); 119 - hvc_beat_dev = hp; 120 - return 0; 121 - } 122 - 123 - static void __exit hvc_beat_exit(void) 124 - { 125 - if (hvc_beat_dev) 126 - hvc_remove(hvc_beat_dev); 127 - } 128 - 129 - module_init(hvc_beat_init); 130 - module_exit(hvc_beat_exit); 131 - 132 - __setup("hvc_beat=", hvc_beat_config); 133 - 134 - console_initcall(hvc_beat_console_init);
+2 -1
drivers/tty/hvc/hvc_console.c
··· 319 319 int rc; 320 320 321 321 /* Auto increments kref reference if found. */ 322 - if (!(hp = hvc_get_by_index(tty->index))) 322 + hp = hvc_get_by_index(tty->index); 323 + if (!hp) 323 324 return -ENODEV; 324 325 325 326 tty->driver_data = hp;
+2 -2
drivers/tty/hvc/hvcs.c
··· 1044 1044 * It is possible that the vty-server was removed between the time that 1045 1045 * the conn was registered and now. 1046 1046 */ 1047 - if (!(rc = request_irq(irq, &hvcs_handle_interrupt, 1048 - 0, "ibmhvcs", hvcsd))) { 1047 + rc = request_irq(irq, &hvcs_handle_interrupt, 0, "ibmhvcs", hvcsd); 1048 + if (!rc) { 1049 1049 /* 1050 1050 * It is possible the vty-server was removed after the irq was 1051 1051 * requested but before we have time to enable interrupts.
+2 -3
drivers/tty/n_gsm.c
··· 161 161 struct net_device *net; /* network interface, if created */ 162 162 }; 163 163 164 - /* DLCI 0, 62/63 are special or reseved see gsmtty_open */ 164 + /* DLCI 0, 62/63 are special or reserved see gsmtty_open */ 165 165 166 166 #define NUM_DLCI 64 167 167 ··· 2274 2274 const unsigned char *dp; 2275 2275 char *f; 2276 2276 int i; 2277 - char buf[64]; 2278 2277 char flags = TTY_NORMAL; 2279 2278 2280 2279 if (debug & 4) ··· 2295 2296 break; 2296 2297 default: 2297 2298 WARN_ONCE(1, "%s: unknown flag %d\n", 2298 - tty_name(tty, buf), flags); 2299 + tty_name(tty), flags); 2299 2300 break; 2300 2301 } 2301 2302 }
+2 -5
drivers/tty/n_tty.c
··· 1190 1190 static void n_tty_receive_overrun(struct tty_struct *tty) 1191 1191 { 1192 1192 struct n_tty_data *ldata = tty->disc_data; 1193 - char buf[64]; 1194 1193 1195 1194 ldata->num_overrun++; 1196 1195 if (time_after(jiffies, ldata->overrun_time + HZ) || 1197 1196 time_after(ldata->overrun_time, jiffies)) { 1198 1197 printk(KERN_WARNING "%s: %d input overrun(s)\n", 1199 - tty_name(tty, buf), 1198 + tty_name(tty), 1200 1199 ldata->num_overrun); 1201 1200 ldata->overrun_time = jiffies; 1202 1201 ldata->num_overrun = 0; ··· 1470 1471 static void 1471 1472 n_tty_receive_char_flagged(struct tty_struct *tty, unsigned char c, char flag) 1472 1473 { 1473 - char buf[64]; 1474 - 1475 1474 switch (flag) { 1476 1475 case TTY_BREAK: 1477 1476 n_tty_receive_break(tty); ··· 1483 1486 break; 1484 1487 default: 1485 1488 printk(KERN_ERR "%s: unknown flag %d\n", 1486 - tty_name(tty, buf), flag); 1489 + tty_name(tty), flag); 1487 1490 break; 1488 1491 } 1489 1492 }
+4 -4
drivers/tty/nozomi.c
··· 140 140 #define R_FCR 0x0000 /* Flow Control Register */ 141 141 #define R_IER 0x0004 /* Interrupt Enable Register */ 142 142 143 - #define CONFIG_MAGIC 0xEFEFFEFE 144 - #define TOGGLE_VALID 0x0000 143 + #define NOZOMI_CONFIG_MAGIC 0xEFEFFEFE 144 + #define TOGGLE_VALID 0x0000 145 145 146 146 /* Definition of interrupt tokens */ 147 147 #define MDM_DL1 0x0001 ··· 660 660 read_mem32((u32 *) &dc->config_table, dc->base_addr + 0, 661 661 sizeof(struct config_table)); 662 662 663 - if (dc->config_table.signature != CONFIG_MAGIC) { 663 + if (dc->config_table.signature != NOZOMI_CONFIG_MAGIC) { 664 664 dev_err(&dc->pdev->dev, "ConfigTable Bad! 0x%08X != 0x%08X\n", 665 - dc->config_table.signature, CONFIG_MAGIC); 665 + dc->config_table.signature, NOZOMI_CONFIG_MAGIC); 666 666 return 0; 667 667 } 668 668
+1 -1
drivers/tty/rocket.h
··· 44 44 #define ROCKET_HUP_NOTIFY 0x00000004 45 45 #define ROCKET_SPLIT_TERMIOS 0x00000008 46 46 #define ROCKET_SPD_MASK 0x00000070 47 - #define ROCKET_SPD_HI 0x00000010 /* Use 56000 instead of 38400 bps */ 47 + #define ROCKET_SPD_HI 0x00000010 /* Use 57600 instead of 38400 bps */ 48 48 #define ROCKET_SPD_VHI 0x00000020 /* Use 115200 instead of 38400 bps */ 49 49 #define ROCKET_SPD_SHI 0x00000030 /* Use 230400 instead of 38400 bps */ 50 50 #define ROCKET_SPD_WARP 0x00000040 /* Use 460800 instead of 38400 bps */
+2 -1
drivers/tty/serial/68328serial.c
··· 508 508 int i; 509 509 510 510 cflag = tty->termios.c_cflag; 511 - if (!(port = info->port)) 511 + port = info->port; 512 + if (!port) 512 513 return; 513 514 514 515 ustcnt = uart->ustcnt;
+10 -17
drivers/tty/serial/8250/8250_core.c
··· 85 85 #define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE) 86 86 87 87 88 - #ifdef CONFIG_SERIAL_8250_DETECT_IRQ 89 - #define CONFIG_SERIAL_DETECT_IRQ 1 90 - #endif 91 - #ifdef CONFIG_SERIAL_8250_MANY_PORTS 92 - #define CONFIG_SERIAL_MANY_PORTS 1 93 - #endif 94 - 95 - /* 96 - * HUB6 is always on. This will be removed once the header 97 - * files have been cleaned. 98 - */ 99 - #define CONFIG_HUB6 1 100 - 101 88 #include <asm/serial.h> 102 89 /* 103 90 * SERIAL_PORT_DFNS tells us about built-in ports that have no ··· 2006 2019 static void serial8250_set_mctrl(struct uart_port *port, unsigned int mctrl) 2007 2020 { 2008 2021 if (port->set_mctrl) 2009 - return port->set_mctrl(port, mctrl); 2010 - return serial8250_do_set_mctrl(port, mctrl); 2022 + port->set_mctrl(port, mctrl); 2023 + else 2024 + serial8250_do_set_mctrl(port, mctrl); 2011 2025 } 2012 2026 2013 2027 static void serial8250_break_ctl(struct uart_port *port, int break_state) ··· 3536 3548 3537 3549 static int __init univ8250_console_init(void) 3538 3550 { 3551 + if (nr_uarts == 0) 3552 + return -ENODEV; 3553 + 3539 3554 serial8250_isa_init_ports(); 3540 3555 register_console(&univ8250_console); 3541 3556 return 0; ··· 3569 3578 { 3570 3579 struct uart_port *p; 3571 3580 3572 - if (port->line >= ARRAY_SIZE(serial8250_ports)) 3581 + if (port->line >= ARRAY_SIZE(serial8250_ports) || nr_uarts == 0) 3573 3582 return -ENODEV; 3574 3583 3575 3584 serial8250_isa_init_ports(); ··· 3841 3850 uart->port.mapbase = up->port.mapbase; 3842 3851 uart->port.mapsize = up->port.mapsize; 3843 3852 uart->port.private_data = up->port.private_data; 3844 - uart->port.fifosize = up->port.fifosize; 3845 3853 uart->tx_loadsz = up->tx_loadsz; 3846 3854 uart->capabilities = up->capabilities; 3847 3855 uart->port.throttle = up->port.throttle; ··· 3934 3944 static int __init serial8250_init(void) 3935 3945 { 3936 3946 int ret; 3947 + 3948 + if (nr_uarts == 0) 3949 + return -ENODEV; 3937 3950 3938 3951 serial8250_isa_init_ports(); 3939 
3952
+18 -1
drivers/tty/serial/8250/8250_dw.c
··· 377 377 return 0; 378 378 } 379 379 380 + static bool dw8250_idma_filter(struct dma_chan *chan, void *param) 381 + { 382 + struct device *dev = param; 383 + 384 + if (dev != chan->device->dev->parent) 385 + return false; 386 + 387 + return true; 388 + } 389 + 380 390 static int dw8250_probe_acpi(struct uart_8250_port *up, 381 391 struct dw8250_data *data) 382 392 { ··· 399 389 p->serial_out = dw8250_serial_out32; 400 390 p->regshift = 2; 401 391 402 - up->dma = &data->dma; 392 + /* Platforms with iDMA */ 393 + if (platform_get_resource_byname(to_platform_device(up->port.dev), 394 + IORESOURCE_MEM, "lpss_priv")) { 395 + data->dma.rx_param = up->port.dev->parent; 396 + data->dma.tx_param = up->port.dev->parent; 397 + data->dma.fn = dw8250_idma_filter; 398 + } 403 399 400 + up->dma = &data->dma; 404 401 up->dma->rxconf.src_maxburst = p->fifosize / 4; 405 402 up->dma->txconf.dst_maxburst = p->fifosize / 4; 406 403
+1 -1
drivers/tty/serial/8250/8250_early.c
··· 131 131 serial8250_early_out(port, UART_LCR, c & ~UART_LCR_DLAB); 132 132 } 133 133 134 - static int __init early_serial8250_setup(struct earlycon_device *device, 134 + int __init early_serial8250_setup(struct earlycon_device *device, 135 135 const char *options) 136 136 { 137 137 if (!(device->port.membase || device->port.iobase))
+230
drivers/tty/serial/8250/8250_lpc18xx.c
··· 1 + /* 2 + * Serial port driver for NXP LPC18xx/43xx UART 3 + * 4 + * Copyright (C) 2015 Joachim Eastwood <manabian@gmail.com> 5 + * 6 + * Based on 8250_mtk.c: 7 + * Copyright (c) 2014 MundoReader S.L. 8 + * Matthias Brugger <matthias.bgg@gmail.com> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + * 14 + */ 15 + 16 + #include <linux/clk.h> 17 + #include <linux/io.h> 18 + #include <linux/module.h> 19 + #include <linux/of.h> 20 + #include <linux/platform_device.h> 21 + 22 + #include "8250.h" 23 + 24 + /* Additional LPC18xx/43xx 8250 registers and bits */ 25 + #define LPC18XX_UART_RS485CTRL (0x04c / sizeof(u32)) 26 + #define LPC18XX_UART_RS485CTRL_NMMEN BIT(0) 27 + #define LPC18XX_UART_RS485CTRL_DCTRL BIT(4) 28 + #define LPC18XX_UART_RS485CTRL_OINV BIT(5) 29 + #define LPC18XX_UART_RS485DLY (0x054 / sizeof(u32)) 30 + #define LPC18XX_UART_RS485DLY_MAX 255 31 + 32 + struct lpc18xx_uart_data { 33 + struct uart_8250_dma dma; 34 + struct clk *clk_uart; 35 + struct clk *clk_reg; 36 + int line; 37 + }; 38 + 39 + static int lpc18xx_rs485_config(struct uart_port *port, 40 + struct serial_rs485 *rs485) 41 + { 42 + struct uart_8250_port *up = up_to_u8250p(port); 43 + u32 rs485_ctrl_reg = 0; 44 + u32 rs485_dly_reg = 0; 45 + unsigned baud_clk; 46 + 47 + if (rs485->flags & SER_RS485_ENABLED) 48 + memset(rs485->padding, 0, sizeof(rs485->padding)); 49 + else 50 + memset(rs485, 0, sizeof(*rs485)); 51 + 52 + rs485->flags &= SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | 53 + SER_RS485_RTS_AFTER_SEND; 54 + 55 + if (rs485->flags & SER_RS485_ENABLED) { 56 + rs485_ctrl_reg |= LPC18XX_UART_RS485CTRL_NMMEN | 57 + LPC18XX_UART_RS485CTRL_DCTRL; 58 + 59 + if (rs485->flags & SER_RS485_RTS_ON_SEND) { 60 + rs485_ctrl_reg |= LPC18XX_UART_RS485CTRL_OINV; 61 + rs485->flags &= ~SER_RS485_RTS_AFTER_SEND; 62 + } else { 63 + rs485->flags |= 
SER_RS485_RTS_AFTER_SEND; 64 + } 65 + } 66 + 67 + if (rs485->delay_rts_after_send) { 68 + baud_clk = port->uartclk / up->dl_read(up); 69 + rs485_dly_reg = DIV_ROUND_UP(rs485->delay_rts_after_send 70 + * baud_clk, MSEC_PER_SEC); 71 + 72 + if (rs485_dly_reg > LPC18XX_UART_RS485DLY_MAX) 73 + rs485_dly_reg = LPC18XX_UART_RS485DLY_MAX; 74 + 75 + /* Calculate the resulting delay in ms */ 76 + rs485->delay_rts_after_send = (rs485_dly_reg * MSEC_PER_SEC) 77 + / baud_clk; 78 + } 79 + 80 + /* Delay RTS before send not supported */ 81 + rs485->delay_rts_before_send = 0; 82 + 83 + serial_out(up, LPC18XX_UART_RS485CTRL, rs485_ctrl_reg); 84 + serial_out(up, LPC18XX_UART_RS485DLY, rs485_dly_reg); 85 + 86 + port->rs485 = *rs485; 87 + 88 + return 0; 89 + } 90 + 91 + static void lpc18xx_uart_serial_out(struct uart_port *p, int offset, int value) 92 + { 93 + /* 94 + * For DMA mode one must ensure that the UART_FCR_DMA_SELECT 95 + * bit is set when FIFO is enabled. Even if DMA is not used 96 + * setting this bit doesn't seem to affect anything. 
97 + */ 98 + if (offset == UART_FCR && (value & UART_FCR_ENABLE_FIFO)) 99 + value |= UART_FCR_DMA_SELECT; 100 + 101 + offset = offset << p->regshift; 102 + writel(value, p->membase + offset); 103 + } 104 + 105 + static int lpc18xx_serial_probe(struct platform_device *pdev) 106 + { 107 + struct lpc18xx_uart_data *data; 108 + struct uart_8250_port uart; 109 + struct resource *res; 110 + int irq, ret; 111 + 112 + irq = platform_get_irq(pdev, 0); 113 + if (irq < 0) { 114 + dev_err(&pdev->dev, "irq not found"); 115 + return irq; 116 + } 117 + 118 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 119 + if (!res) { 120 + dev_err(&pdev->dev, "memory resource not found"); 121 + return -EINVAL; 122 + } 123 + 124 + memset(&uart, 0, sizeof(uart)); 125 + 126 + uart.port.membase = devm_ioremap(&pdev->dev, res->start, 127 + resource_size(res)); 128 + if (!uart.port.membase) 129 + return -ENOMEM; 130 + 131 + data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 132 + if (!data) 133 + return -ENOMEM; 134 + 135 + data->clk_uart = devm_clk_get(&pdev->dev, "uartclk"); 136 + if (IS_ERR(data->clk_uart)) { 137 + dev_err(&pdev->dev, "uart clock not found\n"); 138 + return PTR_ERR(data->clk_uart); 139 + } 140 + 141 + data->clk_reg = devm_clk_get(&pdev->dev, "reg"); 142 + if (IS_ERR(data->clk_reg)) { 143 + dev_err(&pdev->dev, "reg clock not found\n"); 144 + return PTR_ERR(data->clk_reg); 145 + } 146 + 147 + ret = clk_prepare_enable(data->clk_reg); 148 + if (ret) { 149 + dev_err(&pdev->dev, "unable to enable reg clock\n"); 150 + return ret; 151 + } 152 + 153 + ret = clk_prepare_enable(data->clk_uart); 154 + if (ret) { 155 + dev_err(&pdev->dev, "unable to enable uart clock\n"); 156 + goto dis_clk_reg; 157 + } 158 + 159 + ret = of_alias_get_id(pdev->dev.of_node, "serial"); 160 + if (ret >= 0) 161 + uart.port.line = ret; 162 + 163 + data->dma.rx_param = data; 164 + data->dma.tx_param = data; 165 + 166 + spin_lock_init(&uart.port.lock); 167 + uart.port.dev = &pdev->dev; 168 + 
uart.port.irq = irq; 169 + uart.port.iotype = UPIO_MEM32; 170 + uart.port.mapbase = res->start; 171 + uart.port.regshift = 2; 172 + uart.port.type = PORT_16550A; 173 + uart.port.flags = UPF_FIXED_PORT | UPF_FIXED_TYPE | UPF_SKIP_TEST; 174 + uart.port.uartclk = clk_get_rate(data->clk_uart); 175 + uart.port.private_data = data; 176 + uart.port.rs485_config = lpc18xx_rs485_config; 177 + uart.port.serial_out = lpc18xx_uart_serial_out; 178 + 179 + uart.dma = &data->dma; 180 + uart.dma->rxconf.src_maxburst = 1; 181 + uart.dma->txconf.dst_maxburst = 1; 182 + 183 + ret = serial8250_register_8250_port(&uart); 184 + if (ret < 0) { 185 + dev_err(&pdev->dev, "unable to register 8250 port\n"); 186 + goto dis_uart_clk; 187 + } 188 + 189 + data->line = ret; 190 + platform_set_drvdata(pdev, data); 191 + 192 + return 0; 193 + 194 + dis_uart_clk: 195 + clk_disable_unprepare(data->clk_uart); 196 + dis_clk_reg: 197 + clk_disable_unprepare(data->clk_reg); 198 + return ret; 199 + } 200 + 201 + static int lpc18xx_serial_remove(struct platform_device *pdev) 202 + { 203 + struct lpc18xx_uart_data *data = platform_get_drvdata(pdev); 204 + 205 + serial8250_unregister_port(data->line); 206 + clk_disable_unprepare(data->clk_uart); 207 + clk_disable_unprepare(data->clk_reg); 208 + 209 + return 0; 210 + } 211 + 212 + static const struct of_device_id lpc18xx_serial_match[] = { 213 + { .compatible = "nxp,lpc1850-uart" }, 214 + { }, 215 + }; 216 + MODULE_DEVICE_TABLE(of, lpc18xx_serial_match); 217 + 218 + static struct platform_driver lpc18xx_serial_driver = { 219 + .probe = lpc18xx_serial_probe, 220 + .remove = lpc18xx_serial_remove, 221 + .driver = { 222 + .name = "lpc18xx-uart", 223 + .of_match_table = lpc18xx_serial_match, 224 + }, 225 + }; 226 + module_platform_driver(lpc18xx_serial_driver); 227 + 228 + MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>"); 229 + MODULE_DESCRIPTION("Serial port driver NXP LPC18xx/43xx devices"); 230 + MODULE_LICENSE("GPL v2");
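The RS-485 handling in the new lpc18xx driver quantizes the requested `delay_rts_after_send` (milliseconds) into baud-clock ticks, clamps it to the 8-bit RS485DLY register, and writes the achievable delay back to userspace. A minimal standalone sketch of that arithmetic, where `baud_clk` stands for `port->uartclk / dl_read(up)`:

```c
#include <assert.h>

#define LPC18XX_UART_RS485DLY_MAX 255
#define MSEC_PER_SEC 1000u

/* Convert a requested delay in ms to the RS485DLY register value,
 * rounding up (DIV_ROUND_UP) and clamping to the register width. */
static unsigned int rs485_delay_to_reg(unsigned int delay_ms,
				       unsigned int baud_clk)
{
	unsigned int reg = (delay_ms * baud_clk + MSEC_PER_SEC - 1)
			   / MSEC_PER_SEC;

	if (reg > LPC18XX_UART_RS485DLY_MAX)
		reg = LPC18XX_UART_RS485DLY_MAX;
	return reg;
}

/* The delay actually achieved, in ms, as reported back to the caller. */
static unsigned int rs485_reg_to_delay(unsigned int reg,
				       unsigned int baud_clk)
{
	return reg * MSEC_PER_SEC / baud_clk;
}
```

At a 9600 Hz baud clock a 10 ms request maps exactly to 96 ticks; at high baud clocks even a 1 ms request saturates at 255 ticks, so the reported delay rounds down to what the hardware can do.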
+75 -44
drivers/tty/serial/8250/8250_mtk.c
··· 34 34 struct mtk8250_data { 35 35 int line; 36 36 struct clk *uart_clk; 37 + struct clk *bus_clk; 37 38 }; 38 39 39 40 static void ··· 116 115 tty_termios_encode_baud_rate(termios, baud, baud); 117 116 } 118 117 118 + static int mtk8250_runtime_suspend(struct device *dev) 119 + { 120 + struct mtk8250_data *data = dev_get_drvdata(dev); 121 + 122 + clk_disable_unprepare(data->uart_clk); 123 + clk_disable_unprepare(data->bus_clk); 124 + 125 + return 0; 126 + } 127 + 128 + static int mtk8250_runtime_resume(struct device *dev) 129 + { 130 + struct mtk8250_data *data = dev_get_drvdata(dev); 131 + int err; 132 + 133 + err = clk_prepare_enable(data->uart_clk); 134 + if (err) { 135 + dev_warn(dev, "Can't enable clock\n"); 136 + return err; 137 + } 138 + 139 + err = clk_prepare_enable(data->bus_clk); 140 + if (err) { 141 + dev_warn(dev, "Can't enable bus clock\n"); 142 + return err; 143 + } 144 + 145 + return 0; 146 + } 147 + 119 148 static void 120 149 mtk8250_do_pm(struct uart_port *port, unsigned int state, unsigned int old) 121 150 { ··· 161 130 static int mtk8250_probe_of(struct platform_device *pdev, struct uart_port *p, 162 131 struct mtk8250_data *data) 163 132 { 164 - int err; 165 - struct device_node *np = pdev->dev.of_node; 166 - 167 - data->uart_clk = of_clk_get(np, 0); 133 + data->uart_clk = devm_clk_get(&pdev->dev, "baud"); 168 134 if (IS_ERR(data->uart_clk)) { 169 - dev_warn(&pdev->dev, "Can't get timer clock\n"); 170 - return PTR_ERR(data->uart_clk); 135 + /* 136 + * For compatibility with older device trees try unnamed 137 + * clk when no baud clk can be found. 
138 + */ 139 + data->uart_clk = devm_clk_get(&pdev->dev, NULL); 140 + if (IS_ERR(data->uart_clk)) { 141 + dev_warn(&pdev->dev, "Can't get uart clock\n"); 142 + return PTR_ERR(data->uart_clk); 143 + } 144 + 145 + return 0; 171 146 } 172 147 173 - err = clk_prepare_enable(data->uart_clk); 174 - if (err) { 175 - dev_warn(&pdev->dev, "Can't prepare clock\n"); 176 - clk_put(data->uart_clk); 177 - return err; 178 - } 179 - p->uartclk = clk_get_rate(data->uart_clk); 148 + data->bus_clk = devm_clk_get(&pdev->dev, "bus"); 149 + if (IS_ERR(data->bus_clk)) 150 + return PTR_ERR(data->bus_clk); 180 151 181 152 return 0; 182 153 } ··· 223 190 uart.port.regshift = 2; 224 191 uart.port.private_data = data; 225 192 uart.port.set_termios = mtk8250_set_termios; 193 + uart.port.uartclk = clk_get_rate(data->uart_clk); 226 194 227 195 /* Disable Rate Fix function */ 228 196 writel(0x0, uart.port.membase + 229 197 (MTK_UART_RATE_FIX << uart.port.regshift)); 230 198 199 + platform_set_drvdata(pdev, data); 200 + 201 + pm_runtime_enable(&pdev->dev); 202 + if (!pm_runtime_enabled(&pdev->dev)) { 203 + err = mtk8250_runtime_resume(&pdev->dev); 204 + if (err) 205 + return err; 206 + } 207 + 231 208 data->line = serial8250_register_8250_port(&uart); 232 209 if (data->line < 0) 233 210 return data->line; 234 - 235 - platform_set_drvdata(pdev, data); 236 - 237 - pm_runtime_set_active(&pdev->dev); 238 - pm_runtime_enable(&pdev->dev); 239 211 240 212 return 0; 241 213 } ··· 252 214 pm_runtime_get_sync(&pdev->dev); 253 215 254 216 serial8250_unregister_port(data->line); 255 - if (!IS_ERR(data->uart_clk)) { 256 - clk_disable_unprepare(data->uart_clk); 257 - clk_put(data->uart_clk); 258 - } 259 217 260 218 pm_runtime_disable(&pdev->dev); 261 219 pm_runtime_put_noidle(&pdev->dev); 220 + 221 + if (!pm_runtime_status_suspended(&pdev->dev)) 222 + mtk8250_runtime_suspend(&pdev->dev); 223 + 262 224 return 0; 263 225 } 264 226 ··· 282 244 } 283 245 #endif /* CONFIG_PM_SLEEP */ 284 246 285 - #ifdef CONFIG_PM 
286 - static int mtk8250_runtime_suspend(struct device *dev) 287 - { 288 - struct mtk8250_data *data = dev_get_drvdata(dev); 289 - 290 - if (!IS_ERR(data->uart_clk)) 291 - clk_disable_unprepare(data->uart_clk); 292 - 293 - return 0; 294 - } 295 - 296 - static int mtk8250_runtime_resume(struct device *dev) 297 - { 298 - struct mtk8250_data *data = dev_get_drvdata(dev); 299 - 300 - if (!IS_ERR(data->uart_clk)) 301 - clk_prepare_enable(data->uart_clk); 302 - 303 - return 0; 304 - } 305 - #endif 306 - 307 247 static const struct dev_pm_ops mtk8250_pm_ops = { 308 248 SET_SYSTEM_SLEEP_PM_OPS(mtk8250_suspend, mtk8250_resume) 309 249 SET_RUNTIME_PM_OPS(mtk8250_runtime_suspend, mtk8250_runtime_resume, ··· 304 288 .remove = mtk8250_remove, 305 289 }; 306 290 module_platform_driver(mtk8250_platform_driver); 291 + 292 + #ifdef CONFIG_SERIAL_8250_CONSOLE 293 + static int __init early_mtk8250_setup(struct earlycon_device *device, 294 + const char *options) 295 + { 296 + if (!device->port.membase) 297 + return -ENODEV; 298 + 299 + device->port.iotype = UPIO_MEM32; 300 + 301 + return early_serial8250_setup(device, NULL); 302 + } 303 + 304 + OF_EARLYCON_DECLARE(mtk8250, "mediatek,mt6577-uart", early_mtk8250_setup); 305 + #endif 307 306 308 307 MODULE_AUTHOR("Matthias Brugger"); 309 308 MODULE_LICENSE("GPL");
+53 -70
drivers/tty/serial/8250/8250_omap.c
··· 22 22 #include <linux/pm_runtime.h> 23 23 #include <linux/console.h> 24 24 #include <linux/pm_qos.h> 25 + #include <linux/pm_wakeirq.h> 25 26 #include <linux/dma-mapping.h> 26 27 27 28 #include "8250.h" ··· 99 98 struct pm_qos_request pm_qos_request; 100 99 struct work_struct qos_work; 101 100 struct uart_8250_dma omap8250_dma; 101 + spinlock_t rx_dma_lock; 102 102 }; 103 103 104 104 static u32 uart_read(struct uart_8250_port *up, u32 reg) ··· 553 551 pm_qos_update_request(&priv->pm_qos_request, priv->latency); 554 552 } 555 553 556 - static irqreturn_t omap_wake_irq(int irq, void *dev_id) 557 - { 558 - struct uart_port *port = dev_id; 559 - int ret; 560 - 561 - ret = port->handle_irq(port); 562 - if (ret) 563 - return IRQ_HANDLED; 564 - return IRQ_NONE; 565 - } 566 - 567 554 #ifdef CONFIG_SERIAL_8250_DMA 568 555 static int omap_8250_dma_handle_irq(struct uart_port *port); 569 556 #endif ··· 586 595 int ret; 587 596 588 597 if (priv->wakeirq) { 589 - ret = request_irq(priv->wakeirq, omap_wake_irq, 590 - port->irqflags, "uart wakeup irq", port); 598 + ret = dev_pm_set_dedicated_wake_irq(port->dev, priv->wakeirq); 591 599 if (ret) 592 600 return ret; 593 - disable_irq(priv->wakeirq); 594 601 } 595 602 596 603 pm_runtime_get_sync(port->dev); ··· 637 648 err: 638 649 pm_runtime_mark_last_busy(port->dev); 639 650 pm_runtime_put_autosuspend(port->dev); 640 - if (priv->wakeirq) 641 - free_irq(priv->wakeirq, port); 651 + dev_pm_clear_wake_irq(port->dev); 642 652 return ret; 643 653 } 644 654 ··· 669 681 670 682 pm_runtime_mark_last_busy(port->dev); 671 683 pm_runtime_put_autosuspend(port->dev); 672 - 673 684 free_irq(port->irq, port); 674 - if (priv->wakeirq) 675 - free_irq(priv->wakeirq, port); 685 + dev_pm_clear_wake_irq(port->dev); 676 686 } 677 687 678 688 static void omap_8250_throttle(struct uart_port *port) ··· 712 726 713 727 static void __dma_rx_do_complete(struct uart_8250_port *p, bool error) 714 728 { 729 + struct omap8250_priv *priv = p->port.private_data; 
715 730 struct uart_8250_dma *dma = p->dma; 716 731 struct tty_port *tty_port = &p->port.state->port; 717 732 struct dma_tx_state state; 718 733 int count; 734 + unsigned long flags; 719 735 720 736 dma_sync_single_for_cpu(dma->rxchan->device->dev, dma->rx_addr, 721 737 dma->rx_size, DMA_FROM_DEVICE); 738 + 739 + spin_lock_irqsave(&priv->rx_dma_lock, flags); 740 + 741 + if (!dma->rx_running) 742 + goto unlock; 722 743 723 744 dma->rx_running = 0; 724 745 dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); ··· 735 742 736 743 tty_insert_flip_string(tty_port, dma->rx_buf, count); 737 744 p->port.icount.rx += count; 745 + unlock: 746 + spin_unlock_irqrestore(&priv->rx_dma_lock, flags); 747 + 738 748 if (!error) 739 749 omap_8250_rx_dma(p, 0); 740 750 ··· 749 753 __dma_rx_do_complete(param, false); 750 754 } 751 755 756 + static void omap_8250_rx_dma_flush(struct uart_8250_port *p) 757 + { 758 + struct omap8250_priv *priv = p->port.private_data; 759 + struct uart_8250_dma *dma = p->dma; 760 + unsigned long flags; 761 + 762 + spin_lock_irqsave(&priv->rx_dma_lock, flags); 763 + 764 + if (!dma->rx_running) { 765 + spin_unlock_irqrestore(&priv->rx_dma_lock, flags); 766 + return; 767 + } 768 + 769 + dmaengine_pause(dma->rxchan); 770 + 771 + spin_unlock_irqrestore(&priv->rx_dma_lock, flags); 772 + 773 + __dma_rx_do_complete(p, true); 774 + } 775 + 752 776 static int omap_8250_rx_dma(struct uart_8250_port *p, unsigned int iir) 753 777 { 778 + struct omap8250_priv *priv = p->port.private_data; 754 779 struct uart_8250_dma *dma = p->dma; 780 + int err = 0; 755 781 struct dma_async_tx_descriptor *desc; 782 + unsigned long flags; 756 783 757 784 switch (iir & 0x3f) { 758 785 case UART_IIR_RLSI: 759 786 /* 8250_core handles errors and break interrupts */ 760 - if (dma->rx_running) { 761 - dmaengine_pause(dma->rxchan); 762 - __dma_rx_do_complete(p, true); 763 - } 787 + omap_8250_rx_dma_flush(p); 764 788 return -EIO; 765 789 case UART_IIR_RX_TIMEOUT: 766 790 /* 767 791 * If 
RCVR FIFO trigger level was not reached, complete the 768 792 * transfer and let 8250_core copy the remaining data. 769 793 */ 770 - if (dma->rx_running) { 771 - dmaengine_pause(dma->rxchan); 772 - __dma_rx_do_complete(p, true); 773 - } 794 + omap_8250_rx_dma_flush(p); 774 795 return -ETIMEDOUT; 775 796 case UART_IIR_RDI: 776 797 /* ··· 799 786 * the DMA won't do anything soon so we have to cancel the DMA 800 787 * transfer and purge the FIFO manually. 801 788 */ 802 - if (dma->rx_running) { 803 - dmaengine_pause(dma->rxchan); 804 - __dma_rx_do_complete(p, true); 805 - } 789 + omap_8250_rx_dma_flush(p); 806 790 return -ETIMEDOUT; 807 791 808 792 default: 809 793 break; 810 794 } 811 795 796 + spin_lock_irqsave(&priv->rx_dma_lock, flags); 797 + 812 798 if (dma->rx_running) 813 - return 0; 799 + goto out; 814 800 815 801 desc = dmaengine_prep_slave_single(dma->rxchan, dma->rx_addr, 816 802 dma->rx_size, DMA_DEV_TO_MEM, 817 803 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 818 - if (!desc) 819 - return -EBUSY; 804 + if (!desc) { 805 + err = -EBUSY; 806 + goto out; 807 + } 820 808 821 809 dma->rx_running = 1; 822 810 desc->callback = __dma_rx_complete; ··· 829 815 dma->rx_size, DMA_FROM_DEVICE); 830 816 831 817 dma_async_issue_pending(dma->rxchan); 832 - return 0; 818 + out: 819 + spin_unlock_irqrestore(&priv->rx_dma_lock, flags); 820 + return err; 833 821 } 834 822 835 823 static int omap_8250_tx_dma(struct uart_8250_port *p); ··· 1145 1129 priv->latency); 1146 1130 INIT_WORK(&priv->qos_work, omap8250_uart_qos_work); 1147 1131 1132 + spin_lock_init(&priv->rx_dma_lock); 1133 + 1148 1134 device_init_wakeup(&pdev->dev, true); 1149 1135 pm_runtime_use_autosuspend(&pdev->dev); 1150 1136 pm_runtime_set_autosuspend_delay(&pdev->dev, -1); ··· 1211 1193 return 0; 1212 1194 } 1213 1195 1214 - #ifdef CONFIG_PM 1215 - 1216 - static inline void omap8250_enable_wakeirq(struct omap8250_priv *priv, 1217 - bool enable) 1218 - { 1219 - if (!priv->wakeirq) 1220 - return; 1221 - 1222 - if (enable) 
1223 - enable_irq(priv->wakeirq); 1224 - else 1225 - disable_irq_nosync(priv->wakeirq); 1226 - } 1227 - 1228 - static void omap8250_enable_wakeup(struct omap8250_priv *priv, 1229 - bool enable) 1230 - { 1231 - if (enable == priv->wakeups_enabled) 1232 - return; 1233 - 1234 - omap8250_enable_wakeirq(priv, enable); 1235 - priv->wakeups_enabled = enable; 1236 - } 1237 - #endif 1238 - 1239 1196 #ifdef CONFIG_PM_SLEEP 1240 1197 static int omap8250_prepare(struct device *dev) 1241 1198 { ··· 1237 1244 1238 1245 serial8250_suspend_port(priv->line); 1239 1246 flush_work(&priv->qos_work); 1240 - 1241 - if (device_may_wakeup(dev)) 1242 - omap8250_enable_wakeup(priv, true); 1243 - else 1244 - omap8250_enable_wakeup(priv, false); 1245 1247 return 0; 1246 1248 } 1247 1249 1248 1250 static int omap8250_resume(struct device *dev) 1249 1251 { 1250 1252 struct omap8250_priv *priv = dev_get_drvdata(dev); 1251 - 1252 - if (device_may_wakeup(dev)) 1253 - omap8250_enable_wakeup(priv, false); 1254 1253 1255 1254 serial8250_resume_port(priv->line); 1256 1255 return 0; ··· 1285 1300 return -EBUSY; 1286 1301 } 1287 1302 1288 - omap8250_enable_wakeup(priv, true); 1289 1303 if (up->dma) 1290 1304 omap_8250_rx_dma(up, UART_IIR_RX_TIMEOUT); 1291 1305 ··· 1305 1321 return 0; 1306 1322 1307 1323 up = serial8250_get_port(priv->line); 1308 - omap8250_enable_wakeup(priv, false); 1309 1324 loss_cntx = omap8250_lost_context(up); 1310 1325 1311 1326 if (loss_cntx)
+257
drivers/tty/serial/8250/8250_uniphier.c
··· 1 + /* 2 + * Copyright (C) 2015 Masahiro Yamada <yamada.masahiro@socionext.com> 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/clk.h> 16 + #include <linux/io.h> 17 + #include <linux/module.h> 18 + #include <linux/of.h> 19 + #include <linux/platform_device.h> 20 + 21 + #include "8250.h" 22 + 23 + /* Most (but not all) of UniPhier UART devices have 64-depth FIFO. */ 24 + #define UNIPHIER_UART_DEFAULT_FIFO_SIZE 64 25 + 26 + #define UNIPHIER_UART_CHAR_FCR 3 /* Character / FIFO Control Register */ 27 + #define UNIPHIER_UART_LCR_MCR 4 /* Line/Modem Control Register */ 28 + #define UNIPHIER_UART_LCR_SHIFT 8 29 + #define UNIPHIER_UART_DLR 9 /* Divisor Latch Register */ 30 + 31 + struct uniphier8250_priv { 32 + int line; 33 + struct clk *clk; 34 + spinlock_t atomic_write_lock; 35 + }; 36 + 37 + /* 38 + * The register map is slightly different from that of 8250. 39 + * IO callbacks must be overridden for correct access to FCR, LCR, and MCR. 
40 + */ 41 + static unsigned int uniphier_serial_in(struct uart_port *p, int offset) 42 + { 43 + unsigned int valshift = 0; 44 + 45 + switch (offset) { 46 + case UART_LCR: 47 + valshift = UNIPHIER_UART_LCR_SHIFT; 48 + /* fall through */ 49 + case UART_MCR: 50 + offset = UNIPHIER_UART_LCR_MCR; 51 + break; 52 + default: 53 + break; 54 + } 55 + 56 + offset <<= p->regshift; 57 + 58 + /* 59 + * The return value must be masked with 0xff because LCR and MCR reside 60 + * in the same register that must be accessed by 32-bit write/read. 61 + * 8- or 16-bit access to this hardware results in unexpected behavior. 62 + */ 63 + return (readl(p->membase + offset) >> valshift) & 0xff; 64 + } 65 + 66 + static void uniphier_serial_out(struct uart_port *p, int offset, int value) 67 + { 68 + unsigned int valshift = 0; 69 + bool normal = false; 70 + 71 + switch (offset) { 72 + case UART_FCR: 73 + offset = UNIPHIER_UART_CHAR_FCR; 74 + break; 75 + case UART_LCR: 76 + valshift = UNIPHIER_UART_LCR_SHIFT; 77 + /* Divisor latch access bit does not exist. */ 78 + value &= ~(UART_LCR_DLAB << valshift); 79 + /* fall through */ 80 + case UART_MCR: 81 + offset = UNIPHIER_UART_LCR_MCR; 82 + break; 83 + default: 84 + normal = true; 85 + break; 86 + } 87 + 88 + offset <<= p->regshift; 89 + 90 + if (normal) { 91 + writel(value, p->membase + offset); 92 + } else { 93 + /* 94 + * Special case: two registers share the same address and 95 + * must be 32-bit accessed. As this is no longer atomic-safe, 96 + * take a lock just in case. 97 + */ 98 + struct uniphier8250_priv *priv = p->private_data; 99 + unsigned long flags; 100 + u32 tmp; 101 + 102 + spin_lock_irqsave(&priv->atomic_write_lock, flags); 103 + tmp = readl(p->membase + offset); 104 + tmp &= ~(0xff << valshift); 105 + tmp |= value << valshift; 106 + writel(tmp, p->membase + offset); 107 + spin_unlock_irqrestore(&priv->atomic_write_lock, flags); 108 + } 109 + } 110 + 111 + /* 112 + * This hardware does not have the divisor latch access bit. 
113 + * The divisor latch register exists at different address. 114 + * Override dl_read/write callbacks. 115 + */ 116 + static int uniphier_serial_dl_read(struct uart_8250_port *up) 117 + { 118 + return readl(up->port.membase + UNIPHIER_UART_DLR); 119 + } 120 + 121 + static void uniphier_serial_dl_write(struct uart_8250_port *up, int value) 122 + { 123 + writel(value, up->port.membase + UNIPHIER_UART_DLR); 124 + } 125 + 126 + static int uniphier_of_serial_setup(struct device *dev, struct uart_port *port, 127 + struct uniphier8250_priv *priv) 128 + { 129 + int ret; 130 + u32 prop; 131 + struct device_node *np = dev->of_node; 132 + 133 + ret = of_alias_get_id(np, "serial"); 134 + if (ret < 0) { 135 + dev_err(dev, "failed to get alias id\n"); 136 + return ret; 137 + } 138 + port->line = priv->line = ret; 139 + 140 + /* Get clk rate through clk driver */ 141 + priv->clk = devm_clk_get(dev, NULL); 142 + if (IS_ERR(priv->clk)) { 143 + dev_err(dev, "failed to get clock\n"); 144 + return PTR_ERR(priv->clk); 145 + } 146 + 147 + ret = clk_prepare_enable(priv->clk); 148 + if (ret < 0) 149 + return ret; 150 + 151 + port->uartclk = clk_get_rate(priv->clk); 152 + 153 + /* Check for fifo size */ 154 + if (of_property_read_u32(np, "fifo-size", &prop) == 0) 155 + port->fifosize = prop; 156 + else 157 + port->fifosize = UNIPHIER_UART_DEFAULT_FIFO_SIZE; 158 + 159 + return 0; 160 + } 161 + 162 + static int uniphier_uart_probe(struct platform_device *pdev) 163 + { 164 + struct device *dev = &pdev->dev; 165 + struct uart_8250_port up; 166 + struct uniphier8250_priv *priv; 167 + struct resource *regs; 168 + void __iomem *membase; 169 + int irq; 170 + int ret; 171 + 172 + regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 173 + if (!regs) { 174 + dev_err(dev, "failed to get memory resource"); 175 + return -EINVAL; 176 + } 177 + 178 + membase = devm_ioremap(dev, regs->start, resource_size(regs)); 179 + if (!membase) 180 + return -ENOMEM; 181 + 182 + irq = platform_get_irq(pdev, 0); 
183 + if (irq < 0) { 184 + dev_err(dev, "failed to get IRQ number"); 185 + return irq; 186 + } 187 + 188 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 189 + if (!priv) 190 + return -ENOMEM; 191 + 192 + memset(&up, 0, sizeof(up)); 193 + 194 + ret = uniphier_of_serial_setup(dev, &up.port, priv); 195 + if (ret < 0) 196 + return ret; 197 + 198 + spin_lock_init(&priv->atomic_write_lock); 199 + 200 + up.port.dev = dev; 201 + up.port.private_data = priv; 202 + up.port.mapbase = regs->start; 203 + up.port.mapsize = resource_size(regs); 204 + up.port.membase = membase; 205 + up.port.irq = irq; 206 + 207 + up.port.type = PORT_16550A; 208 + up.port.iotype = UPIO_MEM32; 209 + up.port.regshift = 2; 210 + up.port.flags = UPF_FIXED_PORT | UPF_FIXED_TYPE; 211 + up.capabilities = UART_CAP_FIFO; 212 + 213 + up.port.serial_in = uniphier_serial_in; 214 + up.port.serial_out = uniphier_serial_out; 215 + up.dl_read = uniphier_serial_dl_read; 216 + up.dl_write = uniphier_serial_dl_write; 217 + 218 + ret = serial8250_register_8250_port(&up); 219 + if (ret < 0) { 220 + dev_err(dev, "failed to register 8250 port\n"); 221 + return ret; 222 + } 223 + 224 + platform_set_drvdata(pdev, priv); 225 + 226 + return 0; 227 + } 228 + 229 + static int uniphier_uart_remove(struct platform_device *pdev) 230 + { 231 + struct uniphier8250_priv *priv = platform_get_drvdata(pdev); 232 + 233 + serial8250_unregister_port(priv->line); 234 + clk_disable_unprepare(priv->clk); 235 + 236 + return 0; 237 + } 238 + 239 + static const struct of_device_id uniphier_uart_match[] = { 240 + { .compatible = "socionext,uniphier-uart" }, 241 + { /* sentinel */ } 242 + }; 243 + MODULE_DEVICE_TABLE(of, uniphier_uart_match); 244 + 245 + static struct platform_driver uniphier_uart_platform_driver = { 246 + .probe = uniphier_uart_probe, 247 + .remove = uniphier_uart_remove, 248 + .driver = { 249 + .name = "uniphier-uart", 250 + .of_match_table = uniphier_uart_match, 251 + }, 252 + }; 253 + 
module_platform_driver(uniphier_uart_platform_driver); 254 + 255 + MODULE_AUTHOR("Masahiro Yamada <yamada.masahiro@socionext.com>"); 256 + MODULE_DESCRIPTION("UniPhier UART driver"); 257 + MODULE_LICENSE("GPL");
+15
drivers/tty/serial/8250/Kconfig
··· 336 336 LPC to 4 UART. This device has some RS485 functionality not available 337 337 through the PNP driver. If unsure, say N. 338 338 339 + config SERIAL_8250_LPC18XX 340 + bool "NXP LPC18xx/43xx serial port support" 341 + depends on SERIAL_8250 && OF && (ARCH_LPC18XX || COMPILE_TEST) 342 + default ARCH_LPC18XX 343 + help 344 + If you have a LPC18xx/43xx based board and want to use the 345 + serial port, say Y to this option. If unsure, say Y. 346 + 339 347 config SERIAL_8250_MT6577 340 348 bool "Mediatek serial port support" 341 349 depends on SERIAL_8250 && ARCH_MEDIATEK 342 350 help 343 351 If you have a Mediatek based board and want to use the 344 352 serial port, say Y to this option. If unsure, say N. 353 + 354 + config SERIAL_8250_UNIPHIER 355 + tristate "Support for UniPhier on-chip UART" 356 + depends on SERIAL_8250 && ARCH_UNIPHIER 357 + help 358 + If you have a UniPhier based board and want to use the on-chip 359 + serial ports, say Y to this option. If unsure, say N.
+2
drivers/tty/serial/8250/Makefile
··· 22 22 obj-$(CONFIG_SERIAL_8250_EM) += 8250_em.o 23 23 obj-$(CONFIG_SERIAL_8250_OMAP) += 8250_omap.o 24 24 obj-$(CONFIG_SERIAL_8250_FINTEK) += 8250_fintek.o 25 + obj-$(CONFIG_SERIAL_8250_LPC18XX) += 8250_lpc18xx.o 25 26 obj-$(CONFIG_SERIAL_8250_MT6577) += 8250_mtk.o 27 + obj-$(CONFIG_SERIAL_8250_UNIPHIER) += 8250_uniphier.o
+37 -18
drivers/tty/serial/Kconfig
··· 241 241 tristate "Samsung SoC serial support" 242 242 depends on PLAT_SAMSUNG || ARCH_EXYNOS 243 243 select SERIAL_CORE 244 - select SERIAL_EARLYCON 245 244 help 246 245 Support for the on-chip UARTs on the Samsung S3C24XX series CPUs, 247 246 providing /dev/ttySAC0, 1 and 2 (note, some machines may not ··· 276 277 bool "Support for console on Samsung SoC serial port" 277 278 depends on SERIAL_SAMSUNG=y 278 279 select SERIAL_CORE_CONSOLE 280 + select SERIAL_EARLYCON 279 281 help 280 282 Allow selection of the S3C24XX on-board serial ports for use as 281 283 a virtual console. ··· 1179 1179 help 1180 1180 Support for console on SCCNXP serial ports. 1181 1181 1182 + config SERIAL_SC16IS7XX_CORE 1183 + tristate 1184 + 1182 1185 config SERIAL_SC16IS7XX 1183 - tristate "SC16IS7xx serial support" 1184 - depends on I2C 1185 - select SERIAL_CORE 1186 - select REGMAP_I2C if I2C 1187 - help 1188 - This selects support for SC16IS7xx serial ports. 1189 - Supported ICs are SC16IS740, SC16IS741, SC16IS750, SC16IS752, 1190 - SC16IS760 and SC16IS762. 1186 + tristate "SC16IS7xx serial support" 1187 + select SERIAL_CORE 1188 + depends on I2C || SPI_MASTER 1189 + help 1190 + This selects support for SC16IS7xx serial ports. 1191 + Supported ICs are SC16IS740, SC16IS741, SC16IS750, SC16IS752, 1192 + SC16IS760 and SC16IS762. Select supported buses using options below. 1193 + 1194 + config SERIAL_SC16IS7XX_I2C 1195 + bool "SC16IS7xx for I2C interface" 1196 + depends on SERIAL_SC16IS7XX 1197 + depends on I2C 1198 + select SERIAL_SC16IS7XX_CORE if SERIAL_SC16IS7XX 1199 + select REGMAP_I2C if I2C 1200 + default y 1201 + help 1202 + Enable the SC16IS7xx driver on the I2C bus. Say y here if your 1203 + device is connected via I2C; this option is enabled by default 1204 + so that existing (oldconfig) setups keep working. 1205 + You must select at least one bus for the driver to be built. 
1206 + 1207 + config SERIAL_SC16IS7XX_SPI 1208 + bool "SC16IS7xx for SPI interface" 1209 + depends on SERIAL_SC16IS7XX 1210 + depends on SPI_MASTER 1211 + select SERIAL_SC16IS7XX_CORE if SERIAL_SC16IS7XX 1212 + select REGMAP_SPI if SPI_MASTER 1213 + help 1214 + Enable the SC16IS7xx driver on the SPI bus. Say y here if your 1215 + device is connected via SPI; this adds bus support on top of 1216 + the existing driver. 1217 + You must select at least one bus for the driver to be built. 1191 1218 1192 1219 config SERIAL_BFIN_SPORT 1193 1220 tristate "Blackfin SPORT emulate UART" ··· 1376 1349 1377 1350 config SERIAL_IFX6X60 1378 1351 tristate "SPI protocol driver for Infineon 6x60 modem (EXPERIMENTAL)" 1379 - depends on GPIOLIB && SPI 1352 + depends on GPIOLIB && SPI && HAS_DMA 1380 1353 help 1381 1354 Support for the IFX6x60 modem devices on Intel MID platforms. 1382 1355 ··· 1404 1377 Say Y here if you wish to use the PCH UART as the system console 1405 1378 (the system console is the device which receives all kernel messages and 1406 1379 warnings and which allows logins in single user mode). 1407 - 1408 - config SERIAL_MSM_SMD 1409 - bool "Enable tty device interface for some SMD ports" 1410 - default n 1411 - depends on MSM_SMD 1412 - help 1413 - Enables userspace clients to read and write to some streaming SMD 1414 - ports via tty device interface for MSM chipset. 1415 1380 1416 1381 config SERIAL_MXS_AUART 1417 1382 depends on ARCH_MXS
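The SC16IS7xx Kconfig change above uses a common bus-glue pattern: a hidden tristate core symbol drives the Makefile, while visible per-bus bool options select it together with the matching regmap backend, so the object is built iff at least one bus is enabled. A sketch of the same shape for a hypothetical driver (every symbol name here is made up for illustration):

```kconfig
# Hidden symbol: the object file is built iff some bus option selects it.
config SERIAL_FOO_CORE
	tristate

config SERIAL_FOO
	tristate "FOO serial support"
	depends on I2C || SPI_MASTER
	select SERIAL_CORE

config SERIAL_FOO_I2C
	bool "FOO on I2C"
	depends on SERIAL_FOO && I2C
	select SERIAL_FOO_CORE
	select REGMAP_I2C

config SERIAL_FOO_SPI
	bool "FOO on SPI"
	depends on SERIAL_FOO && SPI_MASTER
	select SERIAL_FOO_CORE
	select REGMAP_SPI
```

The Makefile then keys off the hidden symbol rather than the visible one (`obj-$(CONFIG_SERIAL_FOO_CORE) += foo.o`), which is exactly what the serial Makefile hunk in this merge does for SERIAL_SC16IS7XX_CORE.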
+1 -2
drivers/tty/serial/Makefile
··· 53 53 obj-$(CONFIG_ETRAX_SERIAL) += crisv10.o 54 54 obj-$(CONFIG_SERIAL_ETRAXFS) += etraxfs-uart.o 55 55 obj-$(CONFIG_SERIAL_SCCNXP) += sccnxp.o 56 - obj-$(CONFIG_SERIAL_SC16IS7XX) += sc16is7xx.o 56 + obj-$(CONFIG_SERIAL_SC16IS7XX_CORE) += sc16is7xx.o 57 57 obj-$(CONFIG_SERIAL_JSM) += jsm/ 58 58 obj-$(CONFIG_SERIAL_TXX9) += serial_txx9.o 59 59 obj-$(CONFIG_SERIAL_VR41XX) += vr41xx_siu.o ··· 79 79 obj-$(CONFIG_SERIAL_VT8500) += vt8500_serial.o 80 80 obj-$(CONFIG_SERIAL_IFX6X60) += ifx6x60.o 81 81 obj-$(CONFIG_SERIAL_PCH_UART) += pch_uart.o 82 - obj-$(CONFIG_SERIAL_MSM_SMD) += msm_smd_tty.o 83 82 obj-$(CONFIG_SERIAL_MXS_AUART) += mxs-auart.o 84 83 obj-$(CONFIG_SERIAL_LANTIQ) += lantiq.o 85 84 obj-$(CONFIG_SERIAL_XILINX_PS_UART) += xilinx_uartps.o
+1 -1
drivers/tty/serial/altera_jtaguart.c
··· 387 387 388 388 #define ALTERA_JTAGUART_CONSOLE NULL 389 389 390 - #endif /* CONFIG_ALTERA_JTAGUART_CONSOLE */ 390 + #endif /* CONFIG_SERIAL_ALTERA_JTAGUART_CONSOLE */ 391 391 392 392 static struct uart_driver altera_jtaguart_driver = { 393 393 .owner = THIS_MODULE,
+1 -1
drivers/tty/serial/altera_uart.c
··· 493 493 494 494 #define ALTERA_UART_CONSOLE NULL 495 495 496 - #endif /* CONFIG_ALTERA_UART_CONSOLE */ 496 + #endif /* CONFIG_SERIAL_ALTERA_UART_CONSOLE */ 497 497 498 498 /* 499 499 * Define the altera_uart UART driver structure.
+425 -238
drivers/tty/serial/amba-pl011.c
··· 58 58 #include <linux/pinctrl/consumer.h> 59 59 #include <linux/sizes.h> 60 60 #include <linux/io.h> 61 - #include <linux/workqueue.h> 61 + #include <linux/acpi.h> 62 62 63 63 #define UART_NR 14 64 64 ··· 79 79 bool oversampling; 80 80 bool dma_threshold; 81 81 bool cts_event_workaround; 82 + bool always_enabled; 83 + bool fixed_options; 82 84 83 85 unsigned int (*get_fifosize)(struct amba_device *dev); 84 86 }; ··· 97 95 .oversampling = false, 98 96 .dma_threshold = false, 99 97 .cts_event_workaround = false, 98 + .always_enabled = false, 99 + .fixed_options = false, 100 100 .get_fifosize = get_fifosize_arm, 101 + }; 102 + 103 + static struct vendor_data vendor_sbsa = { 104 + .oversampling = false, 105 + .dma_threshold = false, 106 + .cts_event_workaround = false, 107 + .always_enabled = true, 108 + .fixed_options = true, 101 109 }; 102 110 103 111 static unsigned int get_fifosize_st(struct amba_device *dev) ··· 122 110 .oversampling = true, 123 111 .dma_threshold = true, 124 112 .cts_event_workaround = true, 113 + .always_enabled = false, 114 + .fixed_options = false, 125 115 .get_fifosize = get_fifosize_st, 126 116 }; 127 117 ··· 171 157 unsigned int lcrh_tx; /* vendor-specific */ 172 158 unsigned int lcrh_rx; /* vendor-specific */ 173 159 unsigned int old_cr; /* state during shutdown */ 174 - struct delayed_work tx_softirq_work; 175 160 bool autorts; 176 - unsigned int tx_irq_seen; /* 0=none, 1=1, 2=2 or more */ 161 + unsigned int fixed_baud; /* vendor-set fixed baud rate */ 177 162 char type[12]; 178 163 #ifdef CONFIG_DMA_ENGINE 179 164 /* DMA stuff */ ··· 1185 1172 pl011_dma_tx_stop(uap); 1186 1173 } 1187 1174 1188 - static bool pl011_tx_chars(struct uart_amba_port *uap); 1175 + static void pl011_tx_chars(struct uart_amba_port *uap, bool from_irq); 1189 1176 1190 1177 /* Start TX with programmed I/O only (no DMA) */ 1191 1178 static void pl011_start_tx_pio(struct uart_amba_port *uap) 1192 1179 { 1193 1180 uap->im |= UART011_TXIM; 1194 1181 writew(uap->im, 
uap->port.membase + UART011_IMSC); 1195 - if (!uap->tx_irq_seen) 1196 - pl011_tx_chars(uap); 1182 + pl011_tx_chars(uap, false); 1197 1183 } 1198 1184 1199 1185 static void pl011_start_tx(struct uart_port *port) ··· 1259 1247 spin_lock(&uap->port.lock); 1260 1248 } 1261 1249 1262 - /* 1263 - * Transmit a character 1264 - * 1265 - * Returns true if the character was successfully queued to the FIFO. 1266 - * Returns false otherwise. 1267 - */ 1268 - static bool pl011_tx_char(struct uart_amba_port *uap, unsigned char c) 1250 + static bool pl011_tx_char(struct uart_amba_port *uap, unsigned char c, 1251 + bool from_irq) 1269 1252 { 1270 - if (readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF) 1253 + if (unlikely(!from_irq) && 1254 + readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF) 1271 1255 return false; /* unable to transmit character */ 1272 1256 1273 1257 writew(c, uap->port.membase + UART01x_DR); ··· 1272 1264 return true; 1273 1265 } 1274 1266 1275 - static bool pl011_tx_chars(struct uart_amba_port *uap) 1267 + static void pl011_tx_chars(struct uart_amba_port *uap, bool from_irq) 1276 1268 { 1277 1269 struct circ_buf *xmit = &uap->port.state->xmit; 1278 - int count; 1279 - 1280 - if (unlikely(uap->tx_irq_seen < 2)) 1281 - /* 1282 - * Initial FIFO fill level unknown: we must check TXFF 1283 - * after each write, so just try to fill up the FIFO. 1284 - */ 1285 - count = uap->fifosize; 1286 - else /* tx_irq_seen >= 2 */ 1287 - /* 1288 - * FIFO initially at least half-empty, so we can simply 1289 - * write half the FIFO without polling TXFF. 1290 - 1291 - * Note: the *first* TX IRQ can still race with 1292 - * pl011_start_tx_pio(), which can result in the FIFO 1293 - * being fuller than expected in that case. 
1294 - */ 1295 - count = uap->fifosize >> 1; 1296 - 1297 - /* 1298 - * If the FIFO is full we're guaranteed a TX IRQ at some later point, 1299 - * and can't transmit immediately in any case: 1300 - */ 1301 - if (unlikely(uap->tx_irq_seen < 2 && 1302 - readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF)) 1303 - return false; 1270 + int count = uap->fifosize >> 1; 1304 1271 1305 1272 if (uap->port.x_char) { 1306 - if (!pl011_tx_char(uap, uap->port.x_char)) 1307 - goto done; 1273 + if (!pl011_tx_char(uap, uap->port.x_char, from_irq)) 1274 + return; 1308 1275 uap->port.x_char = 0; 1309 1276 --count; 1310 1277 } 1311 1278 if (uart_circ_empty(xmit) || uart_tx_stopped(&uap->port)) { 1312 1279 pl011_stop_tx(&uap->port); 1313 - goto done; 1280 + return; 1314 1281 } 1315 1282 1316 1283 /* If we are using DMA mode, try to send some characters. */ 1317 1284 if (pl011_dma_tx_irq(uap)) 1318 - goto done; 1285 + return; 1319 1286 1320 - while (count-- > 0 && pl011_tx_char(uap, xmit->buf[xmit->tail])) { 1321 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 1322 - if (uart_circ_empty(xmit)) 1287 + do { 1288 + if (likely(from_irq) && count-- == 0) 1323 1289 break; 1324 - } 1290 + 1291 + if (!pl011_tx_char(uap, xmit->buf[xmit->tail], from_irq)) 1292 + break; 1293 + 1294 + xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 1295 + } while (!uart_circ_empty(xmit)); 1325 1296 1326 1297 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1327 1298 uart_write_wakeup(&uap->port); 1328 1299 1329 - if (uart_circ_empty(xmit)) { 1300 + if (uart_circ_empty(xmit)) 1330 1301 pl011_stop_tx(&uap->port); 1331 - goto done; 1332 - } 1333 - 1334 - if (unlikely(!uap->tx_irq_seen)) 1335 - schedule_delayed_work(&uap->tx_softirq_work, uap->port.timeout); 1336 - 1337 - done: 1338 - return false; 1339 1302 } 1340 1303 1341 1304 static void pl011_modem_status(struct uart_amba_port *uap) ··· 1333 1354 wake_up_interruptible(&uap->port.state->port.delta_msr_wait); 1334 1355 } 1335 1356 1336 - static 
void pl011_tx_softirq(struct work_struct *work) 1357 + static void check_apply_cts_event_workaround(struct uart_amba_port *uap) 1337 1358 { 1338 - struct delayed_work *dwork = to_delayed_work(work); 1339 - struct uart_amba_port *uap = 1340 - container_of(dwork, struct uart_amba_port, tx_softirq_work); 1359 + unsigned int dummy_read; 1341 1360 1342 - spin_lock(&uap->port.lock); 1343 - while (pl011_tx_chars(uap)) ; 1344 - spin_unlock(&uap->port.lock); 1345 - } 1346 - 1347 - static void pl011_tx_irq_seen(struct uart_amba_port *uap) 1348 - { 1349 - if (likely(uap->tx_irq_seen > 1)) 1361 + if (!uap->vendor->cts_event_workaround) 1350 1362 return; 1351 1363 1352 - uap->tx_irq_seen++; 1353 - if (uap->tx_irq_seen < 2) 1354 - /* first TX IRQ */ 1355 - cancel_delayed_work(&uap->tx_softirq_work); 1364 + /* workaround to make sure that all bits are unlocked.. */ 1365 + writew(0x00, uap->port.membase + UART011_ICR); 1366 + 1367 + /* 1368 + * WA: introduce 26ns(1 uart clk) delay before W1C; 1369 + * single apb access will incur 2 pclk(133.12Mhz) delay, 1370 + * so add 2 dummy reads 1371 + */ 1372 + dummy_read = readw(uap->port.membase + UART011_ICR); 1373 + dummy_read = readw(uap->port.membase + UART011_ICR); 1356 1374 } 1357 1375 1358 1376 static irqreturn_t pl011_int(int irq, void *dev_id) ··· 1357 1381 struct uart_amba_port *uap = dev_id; 1358 1382 unsigned long flags; 1359 1383 unsigned int status, pass_counter = AMBA_ISR_PASS_LIMIT; 1384 + u16 imsc; 1360 1385 int handled = 0; 1361 - unsigned int dummy_read; 1362 1386 1363 1387 spin_lock_irqsave(&uap->port.lock, flags); 1364 - status = readw(uap->port.membase + UART011_MIS); 1388 + imsc = readw(uap->port.membase + UART011_IMSC); 1389 + status = readw(uap->port.membase + UART011_RIS) & imsc; 1365 1390 if (status) { 1366 1391 do { 1367 - if (uap->vendor->cts_event_workaround) { 1368 - /* workaround to make sure that all bits are unlocked.. 
*/ 1369 - writew(0x00, uap->port.membase + UART011_ICR); 1370 - 1371 - /* 1372 - * WA: introduce 26ns(1 uart clk) delay before W1C; 1373 - * single apb access will incur 2 pclk(133.12Mhz) delay, 1374 - * so add 2 dummy reads 1375 - */ 1376 - dummy_read = readw(uap->port.membase + UART011_ICR); 1377 - dummy_read = readw(uap->port.membase + UART011_ICR); 1378 - } 1392 + check_apply_cts_event_workaround(uap); 1379 1393 1380 1394 writew(status & ~(UART011_TXIS|UART011_RTIS| 1381 1395 UART011_RXIS), ··· 1380 1414 if (status & (UART011_DSRMIS|UART011_DCDMIS| 1381 1415 UART011_CTSMIS|UART011_RIMIS)) 1382 1416 pl011_modem_status(uap); 1383 - if (status & UART011_TXIS) { 1384 - pl011_tx_irq_seen(uap); 1385 - pl011_tx_chars(uap); 1386 - } 1417 + if (status & UART011_TXIS) 1418 + pl011_tx_chars(uap, true); 1387 1419 1388 1420 if (pass_counter-- == 0) 1389 1421 break; 1390 1422 1391 - status = readw(uap->port.membase + UART011_MIS); 1423 + status = readw(uap->port.membase + UART011_RIS) & imsc; 1392 1424 } while (status != 0); 1393 1425 handled = 1; 1394 1426 } ··· 1581 1617 } 1582 1618 } 1583 1619 1620 + static int pl011_allocate_irq(struct uart_amba_port *uap) 1621 + { 1622 + writew(uap->im, uap->port.membase + UART011_IMSC); 1623 + 1624 + return request_irq(uap->port.irq, pl011_int, 0, "uart-pl011", uap); 1625 + } 1626 + 1627 + /* 1628 + * Enable interrupts, only timeouts when using DMA 1629 + * if initial RX DMA job failed, start in interrupt mode 1630 + * as well. 
1631 + */ 1632 + static void pl011_enable_interrupts(struct uart_amba_port *uap) 1633 + { 1634 + spin_lock_irq(&uap->port.lock); 1635 + 1636 + /* Clear out any spuriously appearing RX interrupts */ 1637 + writew(UART011_RTIS | UART011_RXIS, 1638 + uap->port.membase + UART011_ICR); 1639 + uap->im = UART011_RTIM; 1640 + if (!pl011_dma_rx_running(uap)) 1641 + uap->im |= UART011_RXIM; 1642 + writew(uap->im, uap->port.membase + UART011_IMSC); 1643 + spin_unlock_irq(&uap->port.lock); 1644 + } 1645 + 1584 1646 static int pl011_startup(struct uart_port *port) 1585 1647 { 1586 1648 struct uart_amba_port *uap = ··· 1618 1628 if (retval) 1619 1629 goto clk_dis; 1620 1630 1621 - writew(uap->im, uap->port.membase + UART011_IMSC); 1622 - 1623 - /* 1624 - * Allocate the IRQ 1625 - */ 1626 - retval = request_irq(uap->port.irq, pl011_int, 0, "uart-pl011", uap); 1631 + retval = pl011_allocate_irq(uap); 1627 1632 if (retval) 1628 1633 goto clk_dis; 1629 1634 1630 1635 writew(uap->vendor->ifls, uap->port.membase + UART011_IFLS); 1631 - 1632 - /* Assume that TX IRQ doesn't work until we see one: */ 1633 - uap->tx_irq_seen = 0; 1634 1636 1635 1637 spin_lock_irq(&uap->port.lock); 1636 1638 ··· 1641 1659 /* Startup DMA */ 1642 1660 pl011_dma_startup(uap); 1643 1661 1644 - /* 1645 - * Finally, enable interrupts, only timeouts when using DMA 1646 - * if initial RX DMA job failed, start in interrupt mode 1647 - * as well. 
1648 - */ 1649 - spin_lock_irq(&uap->port.lock); 1650 - /* Clear out any spuriously appearing RX interrupts */ 1651 - writew(UART011_RTIS | UART011_RXIS, 1652 - uap->port.membase + UART011_ICR); 1653 - uap->im = UART011_RTIM; 1654 - if (!pl011_dma_rx_running(uap)) 1655 - uap->im |= UART011_RXIM; 1656 - writew(uap->im, uap->port.membase + UART011_IMSC); 1657 - spin_unlock_irq(&uap->port.lock); 1662 + pl011_enable_interrupts(uap); 1658 1663 1659 1664 return 0; 1660 1665 1661 1666 clk_dis: 1662 1667 clk_disable_unprepare(uap->clk); 1663 1668 return retval; 1669 + } 1670 + 1671 + static int sbsa_uart_startup(struct uart_port *port) 1672 + { 1673 + struct uart_amba_port *uap = 1674 + container_of(port, struct uart_amba_port, port); 1675 + int retval; 1676 + 1677 + retval = pl011_hwinit(port); 1678 + if (retval) 1679 + return retval; 1680 + 1681 + retval = pl011_allocate_irq(uap); 1682 + if (retval) 1683 + return retval; 1684 + 1685 + /* The SBSA UART does not support any modem status lines. */ 1686 + uap->old_status = 0; 1687 + 1688 + pl011_enable_interrupts(uap); 1689 + 1690 + return 0; 1664 1691 } 1665 1692 1666 1693 static void pl011_shutdown_channel(struct uart_amba_port *uap, ··· 1682 1691 writew(val, uap->port.membase + lcrh); 1683 1692 } 1684 1693 1685 - static void pl011_shutdown(struct uart_port *port) 1694 + /* 1695 + * disable the port. It should not disable RTS and DTR. 1696 + * Also RTS and DTR state should be preserved to restore 1697 + * it during startup(). 
1698 + */ 1699 + static void pl011_disable_uart(struct uart_amba_port *uap) 1686 1700 { 1687 - struct uart_amba_port *uap = 1688 - container_of(port, struct uart_amba_port, port); 1689 1701 unsigned int cr; 1690 1702 1691 - cancel_delayed_work_sync(&uap->tx_softirq_work); 1692 - 1693 - /* 1694 - * disable all interrupts 1695 - */ 1696 - spin_lock_irq(&uap->port.lock); 1697 - uap->im = 0; 1698 - writew(uap->im, uap->port.membase + UART011_IMSC); 1699 - writew(0xffff, uap->port.membase + UART011_ICR); 1700 - spin_unlock_irq(&uap->port.lock); 1701 - 1702 - pl011_dma_shutdown(uap); 1703 - 1704 - /* 1705 - * Free the interrupt 1706 - */ 1707 - free_irq(uap->port.irq, uap); 1708 - 1709 - /* 1710 - * disable the port 1711 - * disable the port. It should not disable RTS and DTR. 1712 - * Also RTS and DTR state should be preserved to restore 1713 - * it during startup(). 1714 - */ 1715 1703 uap->autorts = false; 1716 1704 spin_lock_irq(&uap->port.lock); 1717 1705 cr = readw(uap->port.membase + UART011_CR); ··· 1706 1736 pl011_shutdown_channel(uap, uap->lcrh_rx); 1707 1737 if (uap->lcrh_rx != uap->lcrh_tx) 1708 1738 pl011_shutdown_channel(uap, uap->lcrh_tx); 1739 + } 1740 + 1741 + static void pl011_disable_interrupts(struct uart_amba_port *uap) 1742 + { 1743 + spin_lock_irq(&uap->port.lock); 1744 + 1745 + /* mask all interrupts and clear all pending ones */ 1746 + uap->im = 0; 1747 + writew(uap->im, uap->port.membase + UART011_IMSC); 1748 + writew(0xffff, uap->port.membase + UART011_ICR); 1749 + 1750 + spin_unlock_irq(&uap->port.lock); 1751 + } 1752 + 1753 + static void pl011_shutdown(struct uart_port *port) 1754 + { 1755 + struct uart_amba_port *uap = 1756 + container_of(port, struct uart_amba_port, port); 1757 + 1758 + pl011_disable_interrupts(uap); 1759 + 1760 + pl011_dma_shutdown(uap); 1761 + 1762 + free_irq(uap->port.irq, uap); 1763 + 1764 + pl011_disable_uart(uap); 1709 1765 1710 1766 /* 1711 1767 * Shut down the clock producer ··· 1750 1754 1751 1755 if 
(uap->port.ops->flush_buffer) 1752 1756 uap->port.ops->flush_buffer(port); 1757 + } 1758 + 1759 + static void sbsa_uart_shutdown(struct uart_port *port) 1760 + { 1761 + struct uart_amba_port *uap = 1762 + container_of(port, struct uart_amba_port, port); 1763 + 1764 + pl011_disable_interrupts(uap); 1765 + 1766 + free_irq(uap->port.irq, uap); 1767 + 1768 + if (uap->port.ops->flush_buffer) 1769 + uap->port.ops->flush_buffer(port); 1770 + } 1771 + 1772 + static void 1773 + pl011_setup_status_masks(struct uart_port *port, struct ktermios *termios) 1774 + { 1775 + port->read_status_mask = UART011_DR_OE | 255; 1776 + if (termios->c_iflag & INPCK) 1777 + port->read_status_mask |= UART011_DR_FE | UART011_DR_PE; 1778 + if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK)) 1779 + port->read_status_mask |= UART011_DR_BE; 1780 + 1781 + /* 1782 + * Characters to ignore 1783 + */ 1784 + port->ignore_status_mask = 0; 1785 + if (termios->c_iflag & IGNPAR) 1786 + port->ignore_status_mask |= UART011_DR_FE | UART011_DR_PE; 1787 + if (termios->c_iflag & IGNBRK) { 1788 + port->ignore_status_mask |= UART011_DR_BE; 1789 + /* 1790 + * If we're ignoring parity and break indicators, 1791 + * ignore overruns too (for real raw support). 1792 + */ 1793 + if (termios->c_iflag & IGNPAR) 1794 + port->ignore_status_mask |= UART011_DR_OE; 1795 + } 1796 + 1797 + /* 1798 + * Ignore all characters if CREAD is not set. 
1799 + */ 1800 + if ((termios->c_cflag & CREAD) == 0) 1801 + port->ignore_status_mask |= UART_DUMMY_DR_RX; 1753 1802 } 1754 1803 1755 1804 static void ··· 1861 1820 */ 1862 1821 uart_update_timeout(port, termios->c_cflag, baud); 1863 1822 1864 - port->read_status_mask = UART011_DR_OE | 255; 1865 - if (termios->c_iflag & INPCK) 1866 - port->read_status_mask |= UART011_DR_FE | UART011_DR_PE; 1867 - if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK)) 1868 - port->read_status_mask |= UART011_DR_BE; 1869 - 1870 - /* 1871 - * Characters to ignore 1872 - */ 1873 - port->ignore_status_mask = 0; 1874 - if (termios->c_iflag & IGNPAR) 1875 - port->ignore_status_mask |= UART011_DR_FE | UART011_DR_PE; 1876 - if (termios->c_iflag & IGNBRK) { 1877 - port->ignore_status_mask |= UART011_DR_BE; 1878 - /* 1879 - * If we're ignoring parity and break indicators, 1880 - * ignore overruns too (for real raw support). 1881 - */ 1882 - if (termios->c_iflag & IGNPAR) 1883 - port->ignore_status_mask |= UART011_DR_OE; 1884 - } 1885 - 1886 - /* 1887 - * Ignore all characters if CREAD is not set. 1888 - */ 1889 - if ((termios->c_cflag & CREAD) == 0) 1890 - port->ignore_status_mask |= UART_DUMMY_DR_RX; 1823 + pl011_setup_status_masks(port, termios); 1891 1824 1892 1825 if (UART_ENABLE_MS(port, termios->c_cflag)) 1893 1826 pl011_enable_ms(port); ··· 1913 1898 pl011_write_lcr_h(uap, lcr_h); 1914 1899 writew(old_cr, port->membase + UART011_CR); 1915 1900 1901 + spin_unlock_irqrestore(&port->lock, flags); 1902 + } 1903 + 1904 + static void 1905 + sbsa_uart_set_termios(struct uart_port *port, struct ktermios *termios, 1906 + struct ktermios *old) 1907 + { 1908 + struct uart_amba_port *uap = 1909 + container_of(port, struct uart_amba_port, port); 1910 + unsigned long flags; 1911 + 1912 + tty_termios_encode_baud_rate(termios, uap->fixed_baud, uap->fixed_baud); 1913 + 1914 + /* The SBSA UART only supports 8n1 without hardware flow control. 
*/ 1915 + termios->c_cflag &= ~(CSIZE | CSTOPB | PARENB | PARODD); 1916 + termios->c_cflag &= ~(CMSPAR | CRTSCTS); 1917 + termios->c_cflag |= CS8 | CLOCAL; 1918 + 1919 + spin_lock_irqsave(&port->lock, flags); 1920 + uart_update_timeout(port, CS8, uap->fixed_baud); 1921 + pl011_setup_status_masks(port, termios); 1916 1922 spin_unlock_irqrestore(&port->lock, flags); 1917 1923 } 1918 1924 ··· 2012 1976 #endif 2013 1977 }; 2014 1978 1979 + static void sbsa_uart_set_mctrl(struct uart_port *port, unsigned int mctrl) 1980 + { 1981 + } 1982 + 1983 + static unsigned int sbsa_uart_get_mctrl(struct uart_port *port) 1984 + { 1985 + return 0; 1986 + } 1987 + 1988 + static const struct uart_ops sbsa_uart_pops = { 1989 + .tx_empty = pl011_tx_empty, 1990 + .set_mctrl = sbsa_uart_set_mctrl, 1991 + .get_mctrl = sbsa_uart_get_mctrl, 1992 + .stop_tx = pl011_stop_tx, 1993 + .start_tx = pl011_start_tx, 1994 + .stop_rx = pl011_stop_rx, 1995 + .startup = sbsa_uart_startup, 1996 + .shutdown = sbsa_uart_shutdown, 1997 + .set_termios = sbsa_uart_set_termios, 1998 + .type = pl011_type, 1999 + .release_port = pl011_release_port, 2000 + .request_port = pl011_request_port, 2001 + .config_port = pl011_config_port, 2002 + .verify_port = pl011_verify_port, 2003 + #ifdef CONFIG_CONSOLE_POLL 2004 + .poll_init = pl011_hwinit, 2005 + .poll_get_char = pl011_get_poll_char, 2006 + .poll_put_char = pl011_put_poll_char, 2007 + #endif 2008 + }; 2009 + 2015 2010 static struct uart_amba_port *amba_ports[UART_NR]; 2016 2011 2017 2012 #ifdef CONFIG_SERIAL_AMBA_PL011_CONSOLE ··· 2061 1994 pl011_console_write(struct console *co, const char *s, unsigned int count) 2062 1995 { 2063 1996 struct uart_amba_port *uap = amba_ports[co->index]; 2064 - unsigned int status, old_cr, new_cr; 1997 + unsigned int status, old_cr = 0, new_cr; 2065 1998 unsigned long flags; 2066 1999 int locked = 1; 2067 2000 ··· 2078 2011 /* 2079 2012 * First save the CR then disable the interrupts 2080 2013 */ 2081 - old_cr = 
readw(uap->port.membase + UART011_CR); 2082 - new_cr = old_cr & ~UART011_CR_CTSEN; 2083 - new_cr |= UART01x_CR_UARTEN | UART011_CR_TXE; 2084 - writew(new_cr, uap->port.membase + UART011_CR); 2014 + if (!uap->vendor->always_enabled) { 2015 + old_cr = readw(uap->port.membase + UART011_CR); 2016 + new_cr = old_cr & ~UART011_CR_CTSEN; 2017 + new_cr |= UART01x_CR_UARTEN | UART011_CR_TXE; 2018 + writew(new_cr, uap->port.membase + UART011_CR); 2019 + } 2085 2020 2086 2021 uart_console_write(&uap->port, s, count, pl011_console_putchar); 2087 2022 ··· 2094 2025 do { 2095 2026 status = readw(uap->port.membase + UART01x_FR); 2096 2027 } while (status & UART01x_FR_BUSY); 2097 - writew(old_cr, uap->port.membase + UART011_CR); 2028 + if (!uap->vendor->always_enabled) 2029 + writew(old_cr, uap->port.membase + UART011_CR); 2098 2030 2099 2031 if (locked) 2100 2032 spin_unlock(&uap->port.lock); ··· 2176 2106 2177 2107 uap->port.uartclk = clk_get_rate(uap->clk); 2178 2108 2179 - if (options) 2180 - uart_parse_options(options, &baud, &parity, &bits, &flow); 2181 - else 2182 - pl011_console_get_options(uap, &baud, &parity, &bits); 2109 + if (uap->vendor->fixed_options) { 2110 + baud = uap->fixed_baud; 2111 + } else { 2112 + if (options) 2113 + uart_parse_options(options, 2114 + &baud, &parity, &bits, &flow); 2115 + else 2116 + pl011_console_get_options(uap, &baud, &parity, &bits); 2117 + } 2183 2118 2184 2119 return uart_set_options(&uap->port, co, baud, parity, bits, flow); 2185 2120 } ··· 2276 2201 return ret; 2277 2202 } 2278 2203 2204 + /* unregisters the driver also if no more ports are left */ 2205 + static void pl011_unregister_port(struct uart_amba_port *uap) 2206 + { 2207 + int i; 2208 + bool busy = false; 2209 + 2210 + for (i = 0; i < ARRAY_SIZE(amba_ports); i++) { 2211 + if (amba_ports[i] == uap) 2212 + amba_ports[i] = NULL; 2213 + else if (amba_ports[i]) 2214 + busy = true; 2215 + } 2216 + pl011_dma_remove(uap); 2217 + if (!busy) 2218 + uart_unregister_driver(&amba_reg); 
2219 + } 2220 + 2221 + static int pl011_find_free_port(void) 2222 + { 2223 + int i; 2224 + 2225 + for (i = 0; i < ARRAY_SIZE(amba_ports); i++) 2226 + if (amba_ports[i] == NULL) 2227 + return i; 2228 + 2229 + return -EBUSY; 2230 + } 2231 + 2232 + static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap, 2233 + struct resource *mmiobase, int index) 2234 + { 2235 + void __iomem *base; 2236 + 2237 + base = devm_ioremap_resource(dev, mmiobase); 2238 + if (IS_ERR(base)) 2239 + return PTR_ERR(base); 2240 + 2241 + index = pl011_probe_dt_alias(index, dev); 2242 + 2243 + uap->old_cr = 0; 2244 + uap->port.dev = dev; 2245 + uap->port.mapbase = mmiobase->start; 2246 + uap->port.membase = base; 2247 + uap->port.iotype = UPIO_MEM; 2248 + uap->port.fifosize = uap->fifosize; 2249 + uap->port.flags = UPF_BOOT_AUTOCONF; 2250 + uap->port.line = index; 2251 + 2252 + amba_ports[index] = uap; 2253 + 2254 + return 0; 2255 + } 2256 + 2257 + static int pl011_register_port(struct uart_amba_port *uap) 2258 + { 2259 + int ret; 2260 + 2261 + /* Ensure interrupts from this UART are masked and cleared */ 2262 + writew(0, uap->port.membase + UART011_IMSC); 2263 + writew(0xffff, uap->port.membase + UART011_ICR); 2264 + 2265 + if (!amba_reg.state) { 2266 + ret = uart_register_driver(&amba_reg); 2267 + if (ret < 0) { 2268 + dev_err(uap->port.dev, 2269 + "Failed to register AMBA-PL011 driver\n"); 2270 + return ret; 2271 + } 2272 + } 2273 + 2274 + ret = uart_add_one_port(&amba_reg, &uap->port); 2275 + if (ret) 2276 + pl011_unregister_port(uap); 2277 + 2278 + return ret; 2279 + } 2280 + 2279 2281 static int pl011_probe(struct amba_device *dev, const struct amba_id *id) 2280 2282 { 2281 2283 struct uart_amba_port *uap; 2282 2284 struct vendor_data *vendor = id->data; 2283 - void __iomem *base; 2284 - int i, ret; 2285 + int portnr, ret; 2285 2286 2286 - for (i = 0; i < ARRAY_SIZE(amba_ports); i++) 2287 - if (amba_ports[i] == NULL) 2288 - break; 2289 - 2290 - if (i == ARRAY_SIZE(amba_ports)) 2291 -
return -EBUSY; 2287 + portnr = pl011_find_free_port(); 2288 + if (portnr < 0) 2289 + return portnr; 2292 2290 2293 2291 uap = devm_kzalloc(&dev->dev, sizeof(struct uart_amba_port), 2294 2292 GFP_KERNEL); 2295 - if (uap == NULL) 2296 - return -ENOMEM; 2297 - 2298 - i = pl011_probe_dt_alias(i, &dev->dev); 2299 - 2300 - base = devm_ioremap(&dev->dev, dev->res.start, 2301 - resource_size(&dev->res)); 2302 - if (!base) 2293 + if (!uap) 2303 2294 return -ENOMEM; 2304 2295 2305 2296 uap->clk = devm_clk_get(&dev->dev, NULL); ··· 2375 2234 uap->vendor = vendor; 2376 2235 uap->lcrh_rx = vendor->lcrh_rx; 2377 2236 uap->lcrh_tx = vendor->lcrh_tx; 2378 - uap->old_cr = 0; 2379 2237 uap->fifosize = vendor->get_fifosize(dev); 2380 - uap->port.dev = &dev->dev; 2381 - uap->port.mapbase = dev->res.start; 2382 - uap->port.membase = base; 2383 - uap->port.iotype = UPIO_MEM; 2384 2238 uap->port.irq = dev->irq[0]; 2385 - uap->port.fifosize = uap->fifosize; 2386 2239 uap->port.ops = &amba_pl011_pops; 2387 - uap->port.flags = UPF_BOOT_AUTOCONF; 2388 - uap->port.line = i; 2389 - INIT_DELAYED_WORK(&uap->tx_softirq_work, pl011_tx_softirq); 2390 - 2391 - /* Ensure interrupts from this UART are masked and cleared */ 2392 - writew(0, uap->port.membase + UART011_IMSC); 2393 - writew(0xffff, uap->port.membase + UART011_ICR); 2394 2240 2395 2241 snprintf(uap->type, sizeof(uap->type), "PL011 rev%u", amba_rev(dev)); 2396 2242 2397 - amba_ports[i] = uap; 2243 + ret = pl011_setup_port(&dev->dev, uap, &dev->res, portnr); 2244 + if (ret) 2245 + return ret; 2398 2246 2399 2247 amba_set_drvdata(dev, uap); 2400 2248 2401 - if (!amba_reg.state) { 2402 - ret = uart_register_driver(&amba_reg); 2403 - if (ret < 0) { 2404 - dev_err(&dev->dev, 2405 - "Failed to register AMBA-PL011 driver\n"); 2406 - return ret; 2407 - } 2408 - } 2409 - 2410 - ret = uart_add_one_port(&amba_reg, &uap->port); 2411 - if (ret) { 2412 - amba_ports[i] = NULL; 2413 - uart_unregister_driver(&amba_reg); 2414 - } 2415 - 2416 - return ret; 
2249 + return pl011_register_port(uap); 2417 2250 } 2418 2251 2419 2252 static int pl011_remove(struct amba_device *dev) 2420 2253 { 2421 2254 struct uart_amba_port *uap = amba_get_drvdata(dev); 2422 - bool busy = false; 2423 - int i; 2424 2255 2425 2256 uart_remove_one_port(&amba_reg, &uap->port); 2426 - 2427 - for (i = 0; i < ARRAY_SIZE(amba_ports); i++) 2428 - if (amba_ports[i] == uap) 2429 - amba_ports[i] = NULL; 2430 - else if (amba_ports[i]) 2431 - busy = true; 2432 - 2433 - pl011_dma_remove(uap); 2434 - if (!busy) 2435 - uart_unregister_driver(&amba_reg); 2257 + pl011_unregister_port(uap); 2436 2258 return 0; 2437 2259 } 2438 2260 ··· 2422 2318 #endif 2423 2319 2424 2320 static SIMPLE_DEV_PM_OPS(pl011_dev_pm_ops, pl011_suspend, pl011_resume); 2321 + 2322 + static int sbsa_uart_probe(struct platform_device *pdev) 2323 + { 2324 + struct uart_amba_port *uap; 2325 + struct resource *r; 2326 + int portnr, ret; 2327 + int baudrate; 2328 + 2329 + /* 2330 + * Check the mandatory baud rate parameter in the DT node early 2331 + * so that we can easily exit with the error. 
2332 + */ 2333 + if (pdev->dev.of_node) { 2334 + struct device_node *np = pdev->dev.of_node; 2335 + 2336 + ret = of_property_read_u32(np, "current-speed", &baudrate); 2337 + if (ret) 2338 + return ret; 2339 + } else { 2340 + baudrate = 115200; 2341 + } 2342 + 2343 + portnr = pl011_find_free_port(); 2344 + if (portnr < 0) 2345 + return portnr; 2346 + 2347 + uap = devm_kzalloc(&pdev->dev, sizeof(struct uart_amba_port), 2348 + GFP_KERNEL); 2349 + if (!uap) 2350 + return -ENOMEM; 2351 + 2352 + uap->vendor = &vendor_sbsa; 2353 + uap->fifosize = 32; 2354 + uap->port.irq = platform_get_irq(pdev, 0); 2355 + uap->port.ops = &sbsa_uart_pops; 2356 + uap->fixed_baud = baudrate; 2357 + 2358 + snprintf(uap->type, sizeof(uap->type), "SBSA"); 2359 + 2360 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2361 + 2362 + ret = pl011_setup_port(&pdev->dev, uap, r, portnr); 2363 + if (ret) 2364 + return ret; 2365 + 2366 + platform_set_drvdata(pdev, uap); 2367 + 2368 + return pl011_register_port(uap); 2369 + } 2370 + 2371 + static int sbsa_uart_remove(struct platform_device *pdev) 2372 + { 2373 + struct uart_amba_port *uap = platform_get_drvdata(pdev); 2374 + 2375 + uart_remove_one_port(&amba_reg, &uap->port); 2376 + pl011_unregister_port(uap); 2377 + return 0; 2378 + } 2379 + 2380 + static const struct of_device_id sbsa_uart_of_match[] = { 2381 + { .compatible = "arm,sbsa-uart", }, 2382 + {}, 2383 + }; 2384 + MODULE_DEVICE_TABLE(of, sbsa_uart_of_match); 2385 + 2386 + static const struct acpi_device_id sbsa_uart_acpi_match[] = { 2387 + { "ARMH0011", 0 }, 2388 + {}, 2389 + }; 2390 + MODULE_DEVICE_TABLE(acpi, sbsa_uart_acpi_match); 2391 + 2392 + static struct platform_driver arm_sbsa_uart_platform_driver = { 2393 + .probe = sbsa_uart_probe, 2394 + .remove = sbsa_uart_remove, 2395 + .driver = { 2396 + .name = "sbsa-uart", 2397 + .of_match_table = of_match_ptr(sbsa_uart_of_match), 2398 + .acpi_match_table = ACPI_PTR(sbsa_uart_acpi_match), 2399 + }, 2400 + }; 2425 2401 2426 2402 static 
struct amba_id pl011_ids[] = { 2427 2403 { ··· 2533 2349 { 2534 2350 printk(KERN_INFO "Serial: AMBA PL011 UART driver\n"); 2535 2351 2352 + if (platform_driver_register(&arm_sbsa_uart_platform_driver)) 2353 + pr_warn("could not register SBSA UART platform driver\n"); 2536 2354 return amba_driver_register(&pl011_driver); 2537 2355 } 2538 2356 2539 2357 static void __exit pl011_exit(void) 2540 2358 { 2359 + platform_driver_unregister(&arm_sbsa_uart_platform_driver); 2541 2360 amba_driver_unregister(&pl011_driver); 2542 2361 } 2543 2362
+10 -12
drivers/tty/serial/atmel_serial.c
··· 165 165 struct tasklet_struct tasklet; 166 166 unsigned int irq_status; 167 167 unsigned int irq_status_prev; 168 + unsigned int status_change; 168 169 169 170 struct circ_buf rx_ring; 170 171 ··· 316 315 if (rs485conf->flags & SER_RS485_ENABLED) { 317 316 dev_dbg(port->dev, "Setting UART to RS485\n"); 318 317 atmel_port->tx_done_mask = ATMEL_US_TXEMPTY; 319 - if ((rs485conf->delay_rts_after_send) > 0) 320 - UART_PUT_TTGR(port, rs485conf->delay_rts_after_send); 318 + UART_PUT_TTGR(port, rs485conf->delay_rts_after_send); 321 319 mode |= ATMEL_US_USMODE_RS485; 322 320 } else { 323 321 dev_dbg(port->dev, "Setting UART to RS232\n"); ··· 354 354 355 355 /* override mode to RS485 if needed, otherwise keep the current mode */ 356 356 if (port->rs485.flags & SER_RS485_ENABLED) { 357 - if ((port->rs485.delay_rts_after_send) > 0) 358 - UART_PUT_TTGR(port, port->rs485.delay_rts_after_send); 357 + UART_PUT_TTGR(port, port->rs485.delay_rts_after_send); 359 358 mode &= ~ATMEL_US_USMODE; 360 359 mode |= ATMEL_US_USMODE_RS485; 361 360 } ··· 1176 1177 if (pending & (ATMEL_US_RIIC | ATMEL_US_DSRIC | ATMEL_US_DCDIC 1177 1178 | ATMEL_US_CTSIC)) { 1178 1179 atmel_port->irq_status = status; 1180 + atmel_port->status_change = atmel_port->irq_status ^ 1181 + atmel_port->irq_status_prev; 1182 + atmel_port->irq_status_prev = status; 1179 1183 tasklet_schedule(&atmel_port->tasklet); 1180 1184 } 1181 1185 } ··· 1525 1523 { 1526 1524 struct uart_port *port = (struct uart_port *)data; 1527 1525 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1528 - unsigned int status; 1529 - unsigned int status_change; 1526 + unsigned int status = atmel_port->irq_status; 1527 + unsigned int status_change = atmel_port->status_change; 1530 1528 1531 1529 /* The interrupt handler does not take the lock */ 1532 1530 spin_lock(&port->lock); 1533 1531 1534 1532 atmel_port->schedule_tx(port); 1535 - 1536 - status = atmel_port->irq_status; 1537 - status_change = status ^ atmel_port->irq_status_prev; 
1538 1533 1539 1534 if (status_change & (ATMEL_US_RI | ATMEL_US_DSR 1540 1535 | ATMEL_US_DCD | ATMEL_US_CTS)) { ··· 1547 1548 1548 1549 wake_up_interruptible(&port->state->port.delta_msr_wait); 1549 1550 1550 - atmel_port->irq_status_prev = status; 1551 + atmel_port->status_change = 0; 1551 1552 } 1552 1553 1553 1554 atmel_port->schedule_rx(port); ··· 2060 2061 2061 2062 /* mode */ 2062 2063 if (port->rs485.flags & SER_RS485_ENABLED) { 2063 - if ((port->rs485.delay_rts_after_send) > 0) 2064 - UART_PUT_TTGR(port, port->rs485.delay_rts_after_send); 2064 + UART_PUT_TTGR(port, port->rs485.delay_rts_after_send); 2065 2065 mode |= ATMEL_US_USMODE_RS485; 2066 2066 } else if (termios->c_cflag & CRTSCTS) { 2067 2067 /* RS232 with hardware handshake (RTS/CTS) */
+12 -12
drivers/tty/serial/bfin_uart.c
··· 74 74 75 75 static void bfin_serial_reset_irda(struct uart_port *port); 76 76 77 - #if defined(SERIAL_BFIN_CTSRTS) || \ 78 - defined(SERIAL_BFIN_HARD_CTSRTS) 77 + #if defined(CONFIG_SERIAL_BFIN_CTSRTS) || \ 78 + defined(CONFIG_SERIAL_BFIN_HARD_CTSRTS) 79 79 static unsigned int bfin_serial_get_mctrl(struct uart_port *port) 80 80 { 81 81 struct bfin_serial_port *uart = (struct bfin_serial_port *)port; ··· 110 110 struct bfin_serial_port *uart = dev_id; 111 111 struct uart_port *uport = &uart->port; 112 112 unsigned int status = bfin_serial_get_mctrl(uport); 113 - #ifdef SERIAL_BFIN_HARD_CTSRTS 113 + #ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS 114 114 115 115 UART_CLEAR_SCTS(uart); 116 116 if (uport->hw_stopped) { ··· 700 700 # endif 701 701 #endif 702 702 703 - #ifdef SERIAL_BFIN_CTSRTS 703 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 704 704 if (uart->cts_pin >= 0) { 705 705 if (request_irq(gpio_to_irq(uart->cts_pin), 706 706 bfin_serial_mctrl_cts_int, ··· 718 718 gpio_direction_output(uart->rts_pin, 0); 719 719 } 720 720 #endif 721 - #ifdef SERIAL_BFIN_HARD_CTSRTS 721 + #ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS 722 722 if (uart->cts_pin >= 0) { 723 723 if (request_irq(uart->status_irq, bfin_serial_mctrl_cts_int, 724 724 0, "BFIN_UART_MODEM_STATUS", uart)) { ··· 766 766 free_irq(uart->tx_irq, uart); 767 767 #endif 768 768 769 - #ifdef SERIAL_BFIN_CTSRTS 769 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 770 770 if (uart->cts_pin >= 0) 771 771 free_irq(gpio_to_irq(uart->cts_pin), uart); 772 772 if (uart->rts_pin >= 0) 773 773 gpio_free(uart->rts_pin); 774 774 #endif 775 - #ifdef SERIAL_BFIN_HARD_CTSRTS 775 + #ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS 776 776 if (uart->cts_pin >= 0) 777 777 free_irq(uart->status_irq, uart); 778 778 #endif ··· 788 788 unsigned int ier, lcr = 0; 789 789 unsigned long timeout; 790 790 791 - #ifdef SERIAL_BFIN_CTSRTS 791 + #ifdef CONFIG_SERIAL_BFIN_CTSRTS 792 792 if (old == NULL && uart->cts_pin != -1) 793 793 termios->c_cflag |= CRTSCTS; 794 794 else if (uart->cts_pin
== -1) ··· 1110 1110 int baud = 57600; 1111 1111 int bits = 8; 1112 1112 int parity = 'n'; 1113 - # if defined(SERIAL_BFIN_CTSRTS) || \ 1114 - defined(SERIAL_BFIN_HARD_CTSRTS) 1113 + # if defined(CONFIG_SERIAL_BFIN_CTSRTS) || \ 1114 + defined(CONFIG_SERIAL_BFIN_HARD_CTSRTS) 1115 1115 int flow = 'r'; 1116 1116 # else 1117 1117 int flow = 'n'; ··· 1322 1322 init_timer(&(uart->rx_dma_timer)); 1323 1323 #endif 1324 1324 1325 - #if defined(SERIAL_BFIN_CTSRTS) || \ 1326 - defined(SERIAL_BFIN_HARD_CTSRTS) 1325 + #if defined(CONFIG_SERIAL_BFIN_CTSRTS) || \ 1326 + defined(CONFIG_SERIAL_BFIN_HARD_CTSRTS) 1327 1327 res = platform_get_resource(pdev, IORESOURCE_IO, 0); 1328 1328 if (res == NULL) 1329 1329 uart->cts_pin = -1;
+14 -92
drivers/tty/serial/crisv10.c
··· 56 56 #error "RX_TIMEOUT_TICKS == 0 not allowed, use 1" 57 57 #endif 58 58 59 - #if defined(CONFIG_ETRAX_RS485_ON_PA) && defined(CONFIG_ETRAX_RS485_ON_PORT_G) 60 - #error "Disable either CONFIG_ETRAX_RS485_ON_PA or CONFIG_ETRAX_RS485_ON_PORT_G" 61 - #endif 62 - 63 59 /* 64 60 * All of the compatibilty code so we can compile serial.c against 65 61 * older kernels is hidden in serial_compat.h ··· 451 455 static struct fast_timer fast_timers[NR_PORTS]; 452 456 #endif 453 457 454 - #ifdef CONFIG_ETRAX_SERIAL_PROC_ENTRY 455 - #define PROCSTAT(x) x 456 - struct ser_statistics_type { 457 - int overrun_cnt; 458 - int early_errors_cnt; 459 - int ser_ints_ok_cnt; 460 - int errors_cnt; 461 - unsigned long int processing_flip; 462 - unsigned long processing_flip_still_room; 463 - unsigned long int timeout_flush_cnt; 464 - int rx_dma_ints; 465 - int tx_dma_ints; 466 - int rx_tot; 467 - int tx_tot; 468 - }; 469 - 470 - static struct ser_statistics_type ser_stat[NR_PORTS]; 471 - 472 - #else 473 - 474 - #define PROCSTAT(x) 475 - 476 - #endif /* CONFIG_ETRAX_SERIAL_PROC_ENTRY */ 477 - 478 458 /* RS-485 */ 479 459 #if defined(CONFIG_ETRAX_RS485) 480 460 #ifdef CONFIG_ETRAX_FAST_TIMER ··· 458 486 #endif 459 487 #if defined(CONFIG_ETRAX_RS485_ON_PA) 460 488 static int rs485_pa_bit = CONFIG_ETRAX_RS485_ON_PA_BIT; 461 - #endif 462 - #if defined(CONFIG_ETRAX_RS485_ON_PORT_G) 463 - static int rs485_port_g_bit = CONFIG_ETRAX_RS485_ON_PORT_G_BIT; 464 489 #endif 465 490 #endif 466 491 ··· 708 739 defined(CONFIG_ETRAX_SER1_DTR_RI_DSR_CD_MIXED) || \ 709 740 defined(CONFIG_ETRAX_SER2_DTR_RI_DSR_CD_MIXED) || \ 710 741 defined(CONFIG_ETRAX_SER3_DTR_RI_DSR_CD_MIXED) 711 - #define CONFIG_ETRAX_SERX_DTR_RI_DSR_CD_MIXED 742 + #define ETRAX_SERX_DTR_RI_DSR_CD_MIXED 712 743 #endif 713 744 714 - #ifdef CONFIG_ETRAX_SERX_DTR_RI_DSR_CD_MIXED 745 + #ifdef ETRAX_SERX_DTR_RI_DSR_CD_MIXED 715 746 /* The pins can be mixed on PA and PB */ 716 747 #define CONTROL_PINS_PORT_NOT_USED(line) \ 717 748 
&dummy_ser[line], &dummy_ser[line], \ ··· 804 835 #endif 805 836 } 806 837 }; 807 - #else /* CONFIG_ETRAX_SERX_DTR_RI_DSR_CD_MIXED */ 838 + #else /* ETRAX_SERX_DTR_RI_DSR_CD_MIXED */ 808 839 809 840 /* All pins are on either PA or PB for each serial port */ 810 841 #define CONTROL_PINS_PORT_NOT_USED(line) \ ··· 886 917 #endif 887 918 } 888 919 }; 889 - #endif /* !CONFIG_ETRAX_SERX_DTR_RI_DSR_CD_MIXED */ 920 + #endif /* !ETRAX_SERX_DTR_RI_DSR_CD_MIXED */ 890 921 891 922 #define E100_RTS_MASK 0x20 892 923 #define E100_CTS_MASK 0x40 ··· 1336 1367 #if defined(CONFIG_ETRAX_RS485_ON_PA) 1337 1368 *R_PORT_PA_DATA = port_pa_data_shadow |= (1 << rs485_pa_bit); 1338 1369 #endif 1339 - #if defined(CONFIG_ETRAX_RS485_ON_PORT_G) 1340 - REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 1341 - rs485_port_g_bit, 1); 1342 - #endif 1343 - #if defined(CONFIG_ETRAX_RS485_LTC1387) 1344 - REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 1345 - CONFIG_ETRAX_RS485_LTC1387_DXEN_PORT_G_BIT, 1); 1346 - REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 1347 - CONFIG_ETRAX_RS485_LTC1387_RXEN_PORT_G_BIT, 1); 1348 - #endif 1349 1370 1350 1371 info->rs485 = *r; 1351 1372 ··· 1635 1676 { 1636 1677 struct etrax_recv_buffer *buffer; 1637 1678 1638 - if (!(buffer = kmalloc(sizeof *buffer + size, GFP_ATOMIC))) 1679 + buffer = kmalloc(sizeof *buffer + size, GFP_ATOMIC); 1680 + if (!buffer) 1639 1681 return NULL; 1640 1682 1641 1683 buffer->next = NULL; ··· 1672 1712 { 1673 1713 struct etrax_recv_buffer *buffer; 1674 1714 if (info->uses_dma_in) { 1675 - if (!(buffer = alloc_recv_buffer(4))) 1715 + buffer = alloc_recv_buffer(4); 1716 + if (!buffer) 1676 1717 return 0; 1677 1718 1678 1719 buffer->length = 1; ··· 1711 1750 1712 1751 append_recv_buffer(info, buffer); 1713 1752 1714 - if (!(buffer = alloc_recv_buffer(SERIAL_DESCR_BUF_SIZE))) 1753 + buffer = alloc_recv_buffer(SERIAL_DESCR_BUF_SIZE); 1754 + if (!buffer) 1715 1755 panic("%s: Failed to allocate memory for receive buffer!\n", __func__); 1716 
1756 1717 1757 descr->buf = virt_to_phys(buffer->buffer); ··· 1803 1841 */ 1804 1842 unsigned char data = info->ioport[REG_DATA]; 1805 1843 1806 - PROCSTAT(ser_stat[info->line].errors_cnt++); 1807 1844 DEBUG_LOG(info->line, "#dERR: s d 0x%04X\n", 1808 1845 ((rstat & SER_ERROR_MASK) << 8) | data); 1809 1846 ··· 1828 1867 1829 1868 /* Set up the receiving descriptors */ 1830 1869 for (i = 0; i < SERIAL_RECV_DESCRIPTORS; i++) { 1831 - if (!(buffer = alloc_recv_buffer(SERIAL_DESCR_BUF_SIZE))) 1870 + buffer = alloc_recv_buffer(SERIAL_DESCR_BUF_SIZE); 1871 + if (!buffer) 1832 1872 panic("%s: Failed to allocate memory for receive buffer!\n", __func__); 1833 1873 1834 1874 descr[i].ctrl = d_int; ··· 1905 1943 /* Read jiffies_usec first, 1906 1944 * we want this time to be as late as possible 1907 1945 */ 1908 - PROCSTAT(ser_stat[info->line].tx_dma_ints++); 1909 1946 info->last_tx_active_usec = GET_JIFFIES_USEC(); 1910 1947 info->last_tx_active = jiffies; 1911 1948 transmit_chars_dma(info); ··· 1983 2022 */ 1984 2023 if (!info->forced_eop) { 1985 2024 info->forced_eop = 1; 1986 - PROCSTAT(ser_stat[info->line].timeout_flush_cnt++); 1987 2025 TIMERD(DEBUG_LOG(info->line, "timeout EOP %i\n", info->line)); 1988 2026 FORCE_EOP(info); 1989 2027 } ··· 2334 2374 DEBUG_LOG(info->line, "#iERR s d %04X\n", 2335 2375 ((rstat & SER_ERROR_MASK) << 8) | data); 2336 2376 } 2337 - PROCSTAT(ser_stat[info->line].early_errors_cnt++); 2338 2377 } else { /* It was a valid byte, now let the DMA do the rest */ 2339 2378 unsigned long curr_time_u = GET_JIFFIES_USEC(); 2340 2379 unsigned long curr_time = jiffies; ··· 2366 2407 DINTR2(DEBUG_LOG(info->line, "ser_rx OK %d\n", info->line)); 2367 2408 info->break_detected_cnt = 0; 2368 2409 2369 - PROCSTAT(ser_stat[info->line].ser_ints_ok_cnt++); 2370 2410 } 2371 2411 /* Restarting the DMA never hurts */ 2372 2412 *info->icmdadr = IO_STATE(R_DMA_CH6_CMD, cmd, restart); ··· 2825 2867 *R_SERIAL_PRESCALE = divisor; 2826 2868 info->baud = 
SERIAL_PRESCALE_BASE/divisor; 2827 2869 } 2828 - #ifdef CONFIG_ETRAX_EXTERN_PB6CLK_ENABLED 2829 - else if ((info->baud_base==CONFIG_ETRAX_EXTERN_PB6CLK_FREQ/8 && 2830 - info->custom_divisor == 1) || 2831 - (info->baud_base==CONFIG_ETRAX_EXTERN_PB6CLK_FREQ && 2832 - info->custom_divisor == 8)) { 2833 - /* ext_clk selected */ 2834 - alt_source = 2835 - IO_STATE(R_ALT_SER_BAUDRATE, ser0_rec, extern) | 2836 - IO_STATE(R_ALT_SER_BAUDRATE, ser0_tr, extern); 2837 - DBAUD(printk("using external baudrate: %lu\n", CONFIG_ETRAX_EXTERN_PB6CLK_FREQ/8)); 2838 - info->baud = CONFIG_ETRAX_EXTERN_PB6CLK_FREQ/8; 2839 - } 2840 - #endif 2841 2870 else 2842 2871 { 2843 2872 /* Bad baudbase, we don't support using timer0 ··· 3161 3216 { 3162 3217 struct e100_serial *info = (struct e100_serial *)tty->driver_data; 3163 3218 #ifdef SERIAL_DEBUG_THROTTLE 3164 - char buf[64]; 3165 - 3166 - printk("throttle %s: %lu....\n", tty_name(tty, buf), 3219 + printk("throttle %s: %lu....\n", tty_name(tty), 3167 3220 (unsigned long)tty->ldisc.chars_in_buffer(tty)); 3168 3221 #endif 3169 3222 DFLOW(DEBUG_LOG(info->line,"rs_throttle %lu\n", tty->ldisc.chars_in_buffer(tty))); ··· 3181 3238 { 3182 3239 struct e100_serial *info = (struct e100_serial *)tty->driver_data; 3183 3240 #ifdef SERIAL_DEBUG_THROTTLE 3184 - char buf[64]; 3185 - 3186 - printk("unthrottle %s: %lu....\n", tty_name(tty, buf), 3241 + printk("unthrottle %s: %lu....\n", tty_name(tty), 3187 3242 (unsigned long)tty->ldisc.chars_in_buffer(tty)); 3188 3243 #endif 3189 3244 DFLOW(DEBUG_LOG(info->line,"rs_unthrottle ldisc %d\n", tty->ldisc.chars_in_buffer(tty))); ··· 3665 3724 info->rs485.flags &= ~(SER_RS485_ENABLED); 3666 3725 #if defined(CONFIG_ETRAX_RS485_ON_PA) 3667 3726 *R_PORT_PA_DATA = port_pa_data_shadow &= ~(1 << rs485_pa_bit); 3668 - #endif 3669 - #if defined(CONFIG_ETRAX_RS485_ON_PORT_G) 3670 - REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 3671 - rs485_port_g_bit, 0); 3672 - #endif 3673 - #if defined(CONFIG_ETRAX_RS485_LTC1387) 
3674 - REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 3675 - CONFIG_ETRAX_RS485_LTC1387_DXEN_PORT_G_BIT, 0); 3676 - REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 3677 - CONFIG_ETRAX_RS485_LTC1387_RXEN_PORT_G_BIT, 0); 3678 3727 #endif 3679 3728 } 3680 3729 #endif ··· 4188 4257 #if defined(CONFIG_ETRAX_RS485_ON_PA) 4189 4258 if (cris_io_interface_allocate_pins(if_serial_0, 'a', rs485_pa_bit, 4190 4259 rs485_pa_bit)) { 4191 - printk(KERN_ERR "ETRAX100LX serial: Could not allocate " 4192 - "RS485 pin\n"); 4193 - put_tty_driver(driver); 4194 - return -EBUSY; 4195 - } 4196 - #endif 4197 - #if defined(CONFIG_ETRAX_RS485_ON_PORT_G) 4198 - if (cris_io_interface_allocate_pins(if_serial_0, 'g', rs485_pa_bit, 4199 - rs485_port_g_bit)) { 4200 4260 printk(KERN_ERR "ETRAX100LX serial: Could not allocate " 4201 4261 "RS485 pin\n"); 4202 4262 put_tty_driver(driver);
+6 -3
drivers/tty/serial/earlycon.c
··· 72 72 73 73 switch (port->iotype) { 74 74 case UPIO_MEM32: 75 + case UPIO_MEM32BE: 75 76 port->regshift = 2; /* fall-through */ 76 77 case UPIO_MEM: 77 78 port->mapbase = addr; ··· 91 90 strlcpy(device->options, options, length); 92 91 } 93 92 94 - if (port->iotype == UPIO_MEM || port->iotype == UPIO_MEM32) 93 + if (port->iotype == UPIO_MEM || port->iotype == UPIO_MEM32 || 94 + port->iotype == UPIO_MEM32BE) 95 95 pr_info("Early serial console at MMIO%s 0x%llx (options '%s')\n", 96 - (port->iotype == UPIO_MEM32) ? "32" : "", 96 + (port->iotype == UPIO_MEM) ? "" : 97 + (port->iotype == UPIO_MEM32) ? "32" : "32be", 97 98 (unsigned long long)port->mapbase, 98 99 device->options); 99 100 else ··· 136 133 * 137 134 * Registers the earlycon console matching the earlycon specified 138 135 * in the param string @buf. Acceptable param strings are of the form 139 - * <name>,io|mmio|mmio32,<addr>,<options> 136 + * <name>,io|mmio|mmio32|mmio32be,<addr>,<options> 140 137 * <name>,0x<addr>,<options> 141 138 * <name>,<options> 142 139 * <name>
+6 -5
drivers/tty/serial/icom.c
··· 1504 1504 return retval; 1505 1505 } 1506 1506 1507 - if ( (retval = pci_request_regions(dev, "icom"))) { 1507 + retval = pci_request_regions(dev, "icom"); 1508 + if (retval) { 1508 1509 dev_err(&dev->dev, "pci_request_regions FAILED\n"); 1509 1510 pci_disable_device(dev); 1510 1511 return retval; ··· 1513 1512 1514 1513 pci_set_master(dev); 1515 1514 1516 - if ( (retval = pci_read_config_dword(dev, PCI_COMMAND, &command_reg))) { 1515 + retval = pci_read_config_dword(dev, PCI_COMMAND, &command_reg); 1516 + if (retval) { 1517 1517 dev_err(&dev->dev, "PCI Config read FAILED\n"); 1518 1518 return retval; 1519 1519 } ··· 1558 1556 } 1559 1557 1560 1558 /* save off irq and request irq line */ 1561 - if ( (retval = request_irq(dev->irq, icom_interrupt, 1562 - IRQF_SHARED, ICOM_DRIVER_NAME, 1563 - (void *) icom_adapter))) { 1559 + retval = request_irq(dev->irq, icom_interrupt, IRQF_SHARED, ICOM_DRIVER_NAME, (void *)icom_adapter); 1560 + if (retval) { 1564 1561 goto probe_exit2; 1565 1562 } 1566 1563
+9 -10
drivers/tty/serial/ifx6x60.c
··· 1175 1175 ret = request_irq(gpio_to_irq(ifx_dev->gpio.reset_out), 1176 1176 ifx_spi_reset_interrupt, 1177 1177 IRQF_TRIGGER_RISING|IRQF_TRIGGER_FALLING, DRVNAME, 1178 - (void *)ifx_dev); 1178 + ifx_dev); 1179 1179 if (ret) { 1180 1180 dev_err(&spi->dev, "Unable to get irq %x\n", 1181 1181 gpio_to_irq(ifx_dev->gpio.reset_out)); ··· 1185 1185 ret = ifx_spi_reset(ifx_dev); 1186 1186 1187 1187 ret = request_irq(gpio_to_irq(ifx_dev->gpio.srdy), 1188 - ifx_spi_srdy_interrupt, 1189 - IRQF_TRIGGER_RISING, DRVNAME, 1190 - (void *)ifx_dev); 1188 + ifx_spi_srdy_interrupt, IRQF_TRIGGER_RISING, DRVNAME, 1189 + ifx_dev); 1191 1190 if (ret) { 1192 1191 dev_err(&spi->dev, "Unable to get irq %x", 1193 1192 gpio_to_irq(ifx_dev->gpio.srdy)); ··· 1211 1212 return 0; 1212 1213 1213 1214 error_ret7: 1214 - free_irq(gpio_to_irq(ifx_dev->gpio.reset_out), (void *)ifx_dev); 1215 + free_irq(gpio_to_irq(ifx_dev->gpio.reset_out), ifx_dev); 1215 1216 error_ret6: 1216 1217 gpio_free(ifx_dev->gpio.srdy); 1217 1218 error_ret5: ··· 1242 1243 /* stop activity */ 1243 1244 tasklet_kill(&ifx_dev->io_work_tasklet); 1244 1245 /* free irq */ 1245 - free_irq(gpio_to_irq(ifx_dev->gpio.reset_out), (void *)ifx_dev); 1246 - free_irq(gpio_to_irq(ifx_dev->gpio.srdy), (void *)ifx_dev); 1246 + free_irq(gpio_to_irq(ifx_dev->gpio.reset_out), ifx_dev); 1247 + free_irq(gpio_to_irq(ifx_dev->gpio.srdy), ifx_dev); 1247 1248 1248 1249 gpio_free(ifx_dev->gpio.srdy); 1249 1250 gpio_free(ifx_dev->gpio.mrdy); ··· 1380 1381 /* unregister */ 1381 1382 tty_unregister_driver(tty_drv); 1382 1383 put_tty_driver(tty_drv); 1383 - spi_unregister_driver((void *)&ifx_spi_driver); 1384 + spi_unregister_driver(&ifx_spi_driver); 1384 1385 unregister_reboot_notifier(&ifx_modem_reboot_notifier_block); 1385 1386 } 1386 1387 ··· 1419 1420 goto err_free_tty; 1420 1421 } 1421 1422 1422 - result = spi_register_driver((void *)&ifx_spi_driver); 1423 + result = spi_register_driver(&ifx_spi_driver); 1423 1424 if (result) { 1424 1425 pr_err("%s: 
spi_register_driver failed(%d)", 1425 1426 DRVNAME, result); ··· 1435 1436 1436 1437 return 0; 1437 1438 err_unreg_spi: 1438 - spi_unregister_driver((void *)&ifx_spi_driver); 1439 + spi_unregister_driver(&ifx_spi_driver); 1439 1440 err_unreg_tty: 1440 1441 tty_unregister_driver(tty_drv); 1441 1442 err_free_tty:
+8 -10
drivers/tty/serial/imx.c
··· 239 239 }, 240 240 }; 241 241 242 - static struct platform_device_id imx_uart_devtype[] = { 242 + static const struct platform_device_id imx_uart_devtype[] = { 243 243 { 244 244 .name = "imx1-uart", 245 245 .driver_data = (kernel_ulong_t) &imx_uart_devdata[IMX1_UART], ··· 853 853 #define TXTL 2 /* reset default */ 854 854 #define RXTL 1 /* reset default */ 855 855 856 - static int imx_setup_ufcr(struct imx_port *sport, unsigned int mode) 856 + static void imx_setup_ufcr(struct imx_port *sport, unsigned int mode) 857 857 { 858 858 unsigned int val; 859 859 ··· 861 861 val = readl(sport->port.membase + UFCR) & (UFCR_RFDIV | UFCR_DCEDTE); 862 862 val |= TXTL << UFCR_TXTL_SHF | RXTL; 863 863 writel(val, sport->port.membase + UFCR); 864 - return 0; 865 864 } 866 865 867 866 #define RX_BUF_SIZE (PAGE_SIZE) ··· 1121 1122 1122 1123 writel(temp & ~UCR4_DREN, sport->port.membase + UCR4); 1123 1124 1125 + /* Can we enable the DMA support? */ 1126 + if (is_imx6q_uart(sport) && !uart_console(port) && 1127 + !sport->dma_is_inited) 1128 + imx_uart_dma_init(sport); 1129 + 1130 + spin_lock_irqsave(&sport->port.lock, flags); 1124 1131 /* Reset fifo's and state machines */ 1125 1132 i = 100; 1126 1133 ··· 1136 1131 1137 1132 while (!(readl(sport->port.membase + UCR2) & UCR2_SRST) && (--i > 0)) 1138 1133 udelay(1); 1139 - 1140 - /* Can we enable the DMA support? */ 1141 - if (is_imx6q_uart(sport) && !uart_console(port) && 1142 - !sport->dma_is_inited) 1143 - imx_uart_dma_init(sport); 1144 - 1145 - spin_lock_irqsave(&sport->port.lock, flags); 1146 1134 1147 1135 /* 1148 1136 * Finally, clear and enable interrupts
+2 -1
drivers/tty/serial/ioc3_serial.c
··· 2137 2137 2138 2138 /* register port with the serial core */ 2139 2139 2140 - if ((ret = ioc3_serial_core_attach(is, idd))) 2140 + ret = ioc3_serial_core_attach(is, idd); 2141 + if (ret) 2141 2142 goto out4; 2142 2143 2143 2144 Num_of_ioc3_cards++;
+6 -3
drivers/tty/serial/ioc4_serial.c
··· 1011 1011 */ 1012 1012 for (xx = 0; xx < num_intrs; xx++) { 1013 1013 intr_info = &soft->is_intr_type[intr_type].is_intr_info[xx]; 1014 - if ((this_mir = this_ir & intr_info->sd_bits)) { 1014 + this_mir = this_ir & intr_info->sd_bits; 1015 + if (this_mir) { 1015 1016 /* Disable owned interrupts, call handler */ 1016 1017 handled++; 1017 1018 write_ireg(soft, intr_info->sd_bits, IOC4_W_IEC, ··· 2866 2865 2867 2866 /* register port with the serial core - 1 rs232, 1 rs422 */ 2868 2867 2869 - if ((ret = ioc4_serial_core_attach(idd->idd_pdev, PROTO_RS232))) 2868 + ret = ioc4_serial_core_attach(idd->idd_pdev, PROTO_RS232); 2869 + if (ret) 2870 2870 goto out4; 2871 2871 2872 - if ((ret = ioc4_serial_core_attach(idd->idd_pdev, PROTO_RS422))) 2872 + ret = ioc4_serial_core_attach(idd->idd_pdev, PROTO_RS422); 2873 + if (ret) 2873 2874 goto out5; 2874 2875 2875 2876 Num_of_ioc4_cards++;
+3 -3
drivers/tty/serial/kgdb_nmi.c
··· 173 173 bool kgdb_nmi_poll_knock(void) 174 174 { 175 175 if (kgdb_nmi_knock < 0) 176 - return 1; 176 + return true; 177 177 178 178 while (1) { 179 179 int ret; 180 180 181 181 ret = kgdb_nmi_poll_one_knock(); 182 182 if (ret == NO_POLL_CHAR) 183 - return 0; 183 + return false; 184 184 else if (ret == 1) 185 185 break; 186 186 } 187 - return 1; 187 + return true; 188 188 } 189 189 190 190 /*
+1 -1
drivers/tty/serial/mcf.c
··· 597 597 #define MCF_CONSOLE NULL 598 598 599 599 /****************************************************************************/ 600 - #endif /* CONFIG_MCF_CONSOLE */ 600 + #endif /* CONFIG_SERIAL_MCF_CONSOLE */ 601 601 /****************************************************************************/ 602 602 603 603 /*
+1 -1
drivers/tty/serial/meson_uart.c
··· 370 370 static void meson_uart_release_port(struct uart_port *port) 371 371 { 372 372 if (port->flags & UPF_IOREMAP) { 373 - iounmap(port->membase); 373 + devm_iounmap(port->dev, port->membase); 374 374 port->membase = NULL; 375 375 } 376 376 }
+1 -1
drivers/tty/serial/mpc52xx_uart.c
··· 405 405 .get_mr1 = mpc52xx_psc_get_mr1, 406 406 }; 407 407 408 - #endif /* CONFIG_MPC52xx */ 408 + #endif /* CONFIG_PPC_MPC52xx */ 409 409 410 410 #ifdef CONFIG_PPC_MPC512x 411 411 #define FIFO_512x(port) ((struct mpc512x_psc_fifo __iomem *)(PSC(port)+1))
+16 -9
drivers/tty/serial/mpsc.c
··· 913 913 914 914 if (!pi->ready) { 915 915 mpsc_init_hw(pi); 916 - if ((rc = mpsc_alloc_ring_mem(pi))) 916 + rc = mpsc_alloc_ring_mem(pi); 917 + if (rc) 917 918 return rc; 918 919 mpsc_init_rings(pi); 919 920 pi->ready = 1; ··· 1896 1895 int rc = -ENODEV; 1897 1896 1898 1897 if (dev->id == 0) { 1899 - if (!(rc = mpsc_shared_map_regs(dev))) { 1898 + rc = mpsc_shared_map_regs(dev); 1899 + if (!rc) { 1900 1900 pdata = (struct mpsc_shared_pdata *) 1901 1901 dev_get_platdata(&dev->dev); 1902 1902 ··· 2083 2081 if (dev->id < MPSC_NUM_CTLRS) { 2084 2082 pi = &mpsc_ports[dev->id]; 2085 2083 2086 - if (!(rc = mpsc_drv_map_regs(pi, dev))) { 2084 + rc = mpsc_drv_map_regs(pi, dev); 2085 + if (!rc) { 2087 2086 mpsc_drv_get_platform_data(pi, dev, dev->id); 2088 2087 pi->port.dev = &dev->dev; 2089 2088 2090 - if (!(rc = mpsc_make_ready(pi))) { 2089 + rc = mpsc_make_ready(pi); 2090 + if (!rc) { 2091 2091 spin_lock_init(&pi->tx_lock); 2092 - if (!(rc = uart_add_one_port(&mpsc_reg, 2093 - &pi->port))) { 2092 + rc = uart_add_one_port(&mpsc_reg, &pi->port); 2093 + if (!rc) { 2094 2094 rc = 0; 2095 2095 } else { 2096 2096 mpsc_release_port((struct uart_port *) ··· 2140 2136 memset(mpsc_ports, 0, sizeof(mpsc_ports)); 2141 2137 memset(&mpsc_shared_regs, 0, sizeof(mpsc_shared_regs)); 2142 2138 2143 - if (!(rc = uart_register_driver(&mpsc_reg))) { 2144 - if (!(rc = platform_driver_register(&mpsc_shared_driver))) { 2145 - if ((rc = platform_driver_register(&mpsc_driver))) { 2139 + rc = uart_register_driver(&mpsc_reg); 2140 + if (!rc) { 2141 + rc = platform_driver_register(&mpsc_shared_driver); 2142 + if (!rc) { 2143 + rc = platform_driver_register(&mpsc_driver); 2144 + if (rc) { 2146 2145 platform_driver_unregister(&mpsc_shared_driver); 2147 2146 uart_unregister_driver(&mpsc_reg); 2148 2147 }
-232
drivers/tty/serial/msm_smd_tty.c
··· 1 - /* 2 - * Copyright (C) 2007 Google, Inc. 3 - * Copyright (c) 2011, Code Aurora Forum. All rights reserved. 4 - * Author: Brian Swetland <swetland@google.com> 5 - * 6 - * This software is licensed under the terms of the GNU General Public 7 - * License version 2, as published by the Free Software Foundation, and 8 - * may be copied, distributed, and modified under those terms. 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 - * 15 - */ 16 - 17 - #include <linux/module.h> 18 - #include <linux/fs.h> 19 - #include <linux/cdev.h> 20 - #include <linux/device.h> 21 - #include <linux/wait.h> 22 - 23 - #include <linux/tty.h> 24 - #include <linux/tty_driver.h> 25 - #include <linux/tty_flip.h> 26 - 27 - #include <mach/msm_smd.h> 28 - 29 - #define MAX_SMD_TTYS 32 30 - 31 - struct smd_tty_info { 32 - struct tty_port port; 33 - smd_channel_t *ch; 34 - }; 35 - 36 - struct smd_tty_channel_desc { 37 - int id; 38 - const char *name; 39 - }; 40 - 41 - static struct smd_tty_info smd_tty[MAX_SMD_TTYS]; 42 - 43 - static const struct smd_tty_channel_desc smd_default_tty_channels[] = { 44 - { .id = 0, .name = "SMD_DS" }, 45 - { .id = 27, .name = "SMD_GPSNMEA" }, 46 - }; 47 - 48 - static const struct smd_tty_channel_desc *smd_tty_channels = 49 - smd_default_tty_channels; 50 - static int smd_tty_channels_len = ARRAY_SIZE(smd_default_tty_channels); 51 - 52 - static void smd_tty_notify(void *priv, unsigned event) 53 - { 54 - unsigned char *ptr; 55 - int avail; 56 - struct smd_tty_info *info = priv; 57 - struct tty_struct *tty; 58 - 59 - if (event != SMD_EVENT_DATA) 60 - return; 61 - 62 - tty = tty_port_tty_get(&info->port); 63 - if (!tty) 64 - return; 65 - 66 - for (;;) { 67 - if (test_bit(TTY_THROTTLED, &tty->flags)) 68 - break; 69 - avail = 
smd_read_avail(info->ch); 70 - if (avail == 0) 71 - break; 72 - 73 - avail = tty_prepare_flip_string(&info->port, &ptr, avail); 74 - 75 - if (smd_read(info->ch, ptr, avail) != avail) { 76 - /* shouldn't be possible since we're in interrupt 77 - ** context here and nobody else could 'steal' our 78 - ** characters. 79 - */ 80 - pr_err("OOPS - smd_tty_buffer mismatch?!"); 81 - } 82 - 83 - tty_flip_buffer_push(&info->port); 84 - } 85 - 86 - /* XXX only when writable and necessary */ 87 - tty_wakeup(tty); 88 - tty_kref_put(tty); 89 - } 90 - 91 - static int smd_tty_port_activate(struct tty_port *tport, struct tty_struct *tty) 92 - { 93 - struct smd_tty_info *info = container_of(tport, struct smd_tty_info, 94 - port); 95 - int i, res = 0; 96 - const char *name = NULL; 97 - 98 - for (i = 0; i < smd_tty_channels_len; i++) { 99 - if (smd_tty_channels[i].id == tty->index) { 100 - name = smd_tty_channels[i].name; 101 - break; 102 - } 103 - } 104 - if (!name) 105 - return -ENODEV; 106 - 107 - if (info->ch) 108 - smd_kick(info->ch); 109 - else 110 - res = smd_open(name, &info->ch, info, smd_tty_notify); 111 - 112 - if (!res) 113 - tty->driver_data = info; 114 - 115 - return res; 116 - } 117 - 118 - static void smd_tty_port_shutdown(struct tty_port *tport) 119 - { 120 - struct smd_tty_info *info = container_of(tport, struct smd_tty_info, 121 - port); 122 - 123 - if (info->ch) { 124 - smd_close(info->ch); 125 - info->ch = 0; 126 - } 127 - } 128 - 129 - static int smd_tty_open(struct tty_struct *tty, struct file *f) 130 - { 131 - struct smd_tty_info *info = smd_tty + tty->index; 132 - 133 - return tty_port_open(&info->port, tty, f); 134 - } 135 - 136 - static void smd_tty_close(struct tty_struct *tty, struct file *f) 137 - { 138 - struct smd_tty_info *info = tty->driver_data; 139 - 140 - tty_port_close(&info->port, tty, f); 141 - } 142 - 143 - static int smd_tty_write(struct tty_struct *tty, 144 - const unsigned char *buf, int len) 145 - { 146 - struct smd_tty_info *info = 
tty->driver_data; 147 - int avail; 148 - 149 - /* if we're writing to a packet channel we will 150 - ** never be able to write more data than there 151 - ** is currently space for 152 - */ 153 - avail = smd_write_avail(info->ch); 154 - if (len > avail) 155 - len = avail; 156 - 157 - return smd_write(info->ch, buf, len); 158 - } 159 - 160 - static int smd_tty_write_room(struct tty_struct *tty) 161 - { 162 - struct smd_tty_info *info = tty->driver_data; 163 - return smd_write_avail(info->ch); 164 - } 165 - 166 - static int smd_tty_chars_in_buffer(struct tty_struct *tty) 167 - { 168 - struct smd_tty_info *info = tty->driver_data; 169 - return smd_read_avail(info->ch); 170 - } 171 - 172 - static void smd_tty_unthrottle(struct tty_struct *tty) 173 - { 174 - struct smd_tty_info *info = tty->driver_data; 175 - smd_kick(info->ch); 176 - } 177 - 178 - static const struct tty_port_operations smd_tty_port_ops = { 179 - .shutdown = smd_tty_port_shutdown, 180 - .activate = smd_tty_port_activate, 181 - }; 182 - 183 - static const struct tty_operations smd_tty_ops = { 184 - .open = smd_tty_open, 185 - .close = smd_tty_close, 186 - .write = smd_tty_write, 187 - .write_room = smd_tty_write_room, 188 - .chars_in_buffer = smd_tty_chars_in_buffer, 189 - .unthrottle = smd_tty_unthrottle, 190 - }; 191 - 192 - static struct tty_driver *smd_tty_driver; 193 - 194 - static int __init smd_tty_init(void) 195 - { 196 - int ret, i; 197 - 198 - smd_tty_driver = alloc_tty_driver(MAX_SMD_TTYS); 199 - if (smd_tty_driver == 0) 200 - return -ENOMEM; 201 - 202 - smd_tty_driver->driver_name = "smd_tty_driver"; 203 - smd_tty_driver->name = "smd"; 204 - smd_tty_driver->major = 0; 205 - smd_tty_driver->minor_start = 0; 206 - smd_tty_driver->type = TTY_DRIVER_TYPE_SERIAL; 207 - smd_tty_driver->subtype = SERIAL_TYPE_NORMAL; 208 - smd_tty_driver->init_termios = tty_std_termios; 209 - smd_tty_driver->init_termios.c_iflag = 0; 210 - smd_tty_driver->init_termios.c_oflag = 0; 211 - 
smd_tty_driver->init_termios.c_cflag = B38400 | CS8 | CREAD; 212 - smd_tty_driver->init_termios.c_lflag = 0; 213 - smd_tty_driver->flags = TTY_DRIVER_RESET_TERMIOS | 214 - TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV; 215 - tty_set_operations(smd_tty_driver, &smd_tty_ops); 216 - 217 - ret = tty_register_driver(smd_tty_driver); 218 - if (ret) 219 - return ret; 220 - 221 - for (i = 0; i < smd_tty_channels_len; i++) { 222 - struct tty_port *port = &smd_tty[smd_tty_channels[i].id].port; 223 - tty_port_init(port); 224 - port->ops = &smd_tty_port_ops; 225 - tty_port_register_device(port, smd_tty_driver, 226 - smd_tty_channels[i].id, NULL); 227 - } 228 - 229 - return 0; 230 - } 231 - 232 - module_init(smd_tty_init);
+1 -1
drivers/tty/serial/mxs-auart.c
··· 169 169 bool ms_irq_enabled; 170 170 }; 171 171 172 - static struct platform_device_id mxs_auart_devtype[] = { 172 + static const struct platform_device_id mxs_auart_devtype[] = { 173 173 { .name = "mxs-auart-imx23", .driver_data = IMX23_AUART }, 174 174 { .name = "mxs-auart-imx28", .driver_data = IMX28_AUART }, 175 175 { /* sentinel */ }
+5 -3
drivers/tty/serial/of_serial.c
··· 67 67 if (of_property_read_u32(np, "clock-frequency", &clk)) { 68 68 69 69 /* Get clk rate through clk driver if present */ 70 - info->clk = clk_get(&ofdev->dev, NULL); 70 + info->clk = devm_clk_get(&ofdev->dev, NULL); 71 71 if (IS_ERR(info->clk)) { 72 72 dev_warn(&ofdev->dev, 73 73 "clk or clock-frequency not defined\n"); 74 74 return PTR_ERR(info->clk); 75 75 } 76 76 77 - clk_prepare_enable(info->clk); 77 + ret = clk_prepare_enable(info->clk); 78 + if (ret < 0) 79 + return ret; 80 + 78 81 clk = clk_get_rate(info->clk); 79 82 } 80 83 /* If current-speed was set, then try not to change it. */ ··· 191 188 { 192 189 struct uart_8250_port port8250; 193 190 memset(&port8250, 0, sizeof(port8250)); 194 - port.type = port_type; 195 191 port8250.port = port; 196 192 197 193 if (port.fifosize)
+4 -31
drivers/tty/serial/omap-serial.c
··· 38 38 #include <linux/serial_core.h> 39 39 #include <linux/irq.h> 40 40 #include <linux/pm_runtime.h> 41 + #include <linux/pm_wakeirq.h> 41 42 #include <linux/of.h> 42 43 #include <linux/of_irq.h> 43 44 #include <linux/gpio.h> ··· 161 160 unsigned long port_activity; 162 161 int context_loss_cnt; 163 162 u32 errata; 164 - u8 wakeups_enabled; 165 163 u32 features; 166 164 167 165 int rts_gpio; ··· 209 209 return pdata->get_context_loss_count(up->dev); 210 210 } 211 211 212 - static inline void serial_omap_enable_wakeirq(struct uart_omap_port *up, 213 - bool enable) 214 - { 215 - if (!up->wakeirq) 216 - return; 217 - 218 - if (enable) 219 - enable_irq(up->wakeirq); 220 - else 221 - disable_irq_nosync(up->wakeirq); 222 - } 223 - 212 + /* REVISIT: Remove this when omap3 boots in device tree only mode */ 224 213 static void serial_omap_enable_wakeup(struct uart_omap_port *up, bool enable) 225 214 { 226 215 struct omap_uart_port_info *pdata = dev_get_platdata(up->dev); 227 - 228 - if (enable == up->wakeups_enabled) 229 - return; 230 - 231 - serial_omap_enable_wakeirq(up, enable); 232 - up->wakeups_enabled = enable; 233 216 234 217 if (!pdata || !pdata->enable_wakeup) 235 218 return; ··· 733 750 734 751 /* Optional wake-up IRQ */ 735 752 if (up->wakeirq) { 736 - retval = request_irq(up->wakeirq, serial_omap_irq, 737 - up->port.irqflags, up->name, up); 753 + retval = dev_pm_set_dedicated_wake_irq(up->dev, up->wakeirq); 738 754 if (retval) { 739 755 free_irq(up->port.irq, up); 740 756 return retval; 741 757 } 742 - disable_irq(up->wakeirq); 743 758 } 744 759 745 760 dev_dbg(up->port.dev, "serial_omap_startup+%d\n", up->port.line); ··· 826 845 pm_runtime_mark_last_busy(up->dev); 827 846 pm_runtime_put_autosuspend(up->dev); 828 847 free_irq(up->port.irq, up); 829 - if (up->wakeirq) 830 - free_irq(up->wakeirq, up); 848 + dev_pm_clear_wake_irq(up->dev); 831 849 } 832 850 833 851 static void serial_omap_uart_qos_work(struct work_struct *work) ··· 1118 1138 serial_out(up, 
UART_LCR, UART_LCR_CONF_MODE_B); 1119 1139 serial_out(up, UART_EFR, efr); 1120 1140 serial_out(up, UART_LCR, 0); 1121 - 1122 - if (!device_may_wakeup(up->dev)) { 1123 - if (!state) 1124 - pm_runtime_forbid(up->dev); 1125 - else 1126 - pm_runtime_allow(up->dev); 1127 - } 1128 1141 1129 1142 pm_runtime_mark_last_busy(up->dev); 1130 1143 pm_runtime_put_autosuspend(up->dev);
+2 -2
drivers/tty/serial/samsung.c
··· 348 348 s3c24xx_serial_start_tx_dma(ourport, count); 349 349 } 350 350 351 - void s3c24xx_serial_start_tx(struct uart_port *port) 351 + static void s3c24xx_serial_start_tx(struct uart_port *port) 352 352 { 353 353 struct s3c24xx_uart_port *ourport = to_ourport(port); 354 354 struct circ_buf *xmit = &port->state->xmit; ··· 2337 2337 #define EXYNOS5433_SERIAL_DRV_DATA (kernel_ulong_t)NULL 2338 2338 #endif 2339 2339 2340 - static struct platform_device_id s3c24xx_serial_driver_ids[] = { 2340 + static const struct platform_device_id s3c24xx_serial_driver_ids[] = { 2341 2341 { 2342 2342 .name = "s3c2410-uart", 2343 2343 .driver_data = S3C2410_SERIAL_DRV_DATA,
+241 -83
drivers/tty/serial/sc16is7xx.c
··· 25 25 #include <linux/serial.h> 26 26 #include <linux/tty.h> 27 27 #include <linux/tty_flip.h> 28 + #include <linux/spi/spi.h> 28 29 #include <linux/uaccess.h> 29 30 30 31 #define SC16IS7XX_NAME "sc16is7xx" ··· 301 300 int nr_uart; 302 301 }; 303 302 303 + #define SC16IS7XX_RECONF_MD (1 << 0) 304 + #define SC16IS7XX_RECONF_IER (1 << 1) 305 + #define SC16IS7XX_RECONF_RS485 (1 << 2) 306 + 307 + struct sc16is7xx_one_config { 308 + unsigned int flags; 309 + u8 ier_clear; 310 + }; 311 + 304 312 struct sc16is7xx_one { 305 313 struct uart_port port; 306 - struct work_struct tx_work; 307 - struct work_struct md_work; 314 + struct kthread_work tx_work; 315 + struct kthread_work reg_work; 316 + struct sc16is7xx_one_config config; 308 317 }; 309 318 310 319 struct sc16is7xx_port { 311 320 struct uart_driver uart; 312 321 struct sc16is7xx_devtype *devtype; 313 322 struct regmap *regmap; 314 - struct mutex mutex; 315 323 struct clk *clk; 316 324 #ifdef CONFIG_GPIOLIB 317 325 struct gpio_chip gpio; 318 326 #endif 319 327 unsigned char buf[SC16IS7XX_FIFO_SIZE]; 328 + struct kthread_worker kworker; 329 + struct task_struct *kworker_task; 330 + struct kthread_work irq_work; 320 331 struct sc16is7xx_one p[0]; 321 332 }; 322 333 334 + #define to_sc16is7xx_port(p,e) ((container_of((p), struct sc16is7xx_port, e))) 323 335 #define to_sc16is7xx_one(p,e) ((container_of((p), struct sc16is7xx_one, e))) 324 336 325 337 static u8 sc16is7xx_port_read(struct uart_port *port, u8 reg) ··· 629 615 !!(msr & SC16IS7XX_MSR_CTS_BIT)); 630 616 break; 631 617 case SC16IS7XX_IIR_THRI_SRC: 632 - mutex_lock(&s->mutex); 633 618 sc16is7xx_handle_tx(port); 634 - mutex_unlock(&s->mutex); 635 619 break; 636 620 default: 637 621 dev_err_ratelimited(port->dev, ··· 640 628 } while (1); 641 629 } 642 630 643 - static irqreturn_t sc16is7xx_ist(int irq, void *dev_id) 631 + static void sc16is7xx_ist(struct kthread_work *ws) 644 632 { 645 - struct sc16is7xx_port *s = (struct sc16is7xx_port *)dev_id; 633 + struct 
sc16is7xx_port *s = to_sc16is7xx_port(ws, irq_work); 646 634 int i; 647 635 648 636 for (i = 0; i < s->uart.nr; ++i) 649 637 sc16is7xx_port_irq(s, i); 638 + } 639 + 640 + static irqreturn_t sc16is7xx_irq(int irq, void *dev_id) 641 + { 642 + struct sc16is7xx_port *s = (struct sc16is7xx_port *)dev_id; 643 + 644 + queue_kthread_work(&s->kworker, &s->irq_work); 650 645 651 646 return IRQ_HANDLED; 652 647 } 653 648 654 - static void sc16is7xx_wq_proc(struct work_struct *ws) 649 + static void sc16is7xx_tx_proc(struct kthread_work *ws) 655 650 { 656 - struct sc16is7xx_one *one = to_sc16is7xx_one(ws, tx_work); 657 - struct sc16is7xx_port *s = dev_get_drvdata(one->port.dev); 651 + struct uart_port *port = &(to_sc16is7xx_one(ws, tx_work)->port); 658 652 659 - mutex_lock(&s->mutex); 660 - sc16is7xx_handle_tx(&one->port); 661 - mutex_unlock(&s->mutex); 653 + if ((port->rs485.flags & SER_RS485_ENABLED) && 654 + (port->rs485.delay_rts_before_send > 0)) 655 + msleep(port->rs485.delay_rts_before_send); 656 + 657 + sc16is7xx_handle_tx(port); 662 658 } 663 659 664 - static void sc16is7xx_stop_tx(struct uart_port* port) 660 + static void sc16is7xx_reconf_rs485(struct uart_port *port) 665 661 { 666 - struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 667 - struct circ_buf *xmit = &one->port.state->xmit; 662 + const u32 mask = SC16IS7XX_EFCR_AUTO_RS485_BIT | 663 + SC16IS7XX_EFCR_RTS_INVERT_BIT; 664 + u32 efcr = 0; 665 + struct serial_rs485 *rs485 = &port->rs485; 666 + unsigned long irqflags; 668 667 669 - /* handle rs485 */ 670 - if (port->rs485.flags & SER_RS485_ENABLED) { 671 - /* do nothing if current tx not yet completed */ 672 - int lsr = sc16is7xx_port_read(port, SC16IS7XX_LSR_REG); 673 - if (!(lsr & SC16IS7XX_LSR_TEMT_BIT)) 674 - return; 668 + spin_lock_irqsave(&port->lock, irqflags); 669 + if (rs485->flags & SER_RS485_ENABLED) { 670 + efcr |= SC16IS7XX_EFCR_AUTO_RS485_BIT; 675 671 676 - if (uart_circ_empty(xmit) && 677 - (port->rs485.delay_rts_after_send > 0)) 678 - 
mdelay(port->rs485.delay_rts_after_send); 672 + if (rs485->flags & SER_RS485_RTS_AFTER_SEND) 673 + efcr |= SC16IS7XX_EFCR_RTS_INVERT_BIT; 679 674 } 675 + spin_unlock_irqrestore(&port->lock, irqflags); 680 676 681 - sc16is7xx_port_update(port, SC16IS7XX_IER_REG, 682 - SC16IS7XX_IER_THRI_BIT, 683 - 0); 677 + sc16is7xx_port_update(port, SC16IS7XX_EFCR_REG, mask, efcr); 684 678 } 685 679 686 - static void sc16is7xx_stop_rx(struct uart_port* port) 680 + static void sc16is7xx_reg_proc(struct kthread_work *ws) 687 681 { 682 + struct sc16is7xx_one *one = to_sc16is7xx_one(ws, reg_work); 683 + struct sc16is7xx_one_config config; 684 + unsigned long irqflags; 685 + 686 + spin_lock_irqsave(&one->port.lock, irqflags); 687 + config = one->config; 688 + memset(&one->config, 0, sizeof(one->config)); 689 + spin_unlock_irqrestore(&one->port.lock, irqflags); 690 + 691 + if (config.flags & SC16IS7XX_RECONF_MD) 692 + sc16is7xx_port_update(&one->port, SC16IS7XX_MCR_REG, 693 + SC16IS7XX_MCR_LOOP_BIT, 694 + (one->port.mctrl & TIOCM_LOOP) ? 
695 + SC16IS7XX_MCR_LOOP_BIT : 0); 696 + 697 + if (config.flags & SC16IS7XX_RECONF_IER) 698 + sc16is7xx_port_update(&one->port, SC16IS7XX_IER_REG, 699 + config.ier_clear, 0); 700 + 701 + if (config.flags & SC16IS7XX_RECONF_RS485) 702 + sc16is7xx_reconf_rs485(&one->port); 703 + } 704 + 705 + static void sc16is7xx_ier_clear(struct uart_port *port, u8 bit) 706 + { 707 + struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 688 708 struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 689 709 690 - one->port.read_status_mask &= ~SC16IS7XX_LSR_DR_BIT; 691 - sc16is7xx_port_update(port, SC16IS7XX_IER_REG, 692 - SC16IS7XX_LSR_DR_BIT, 693 - 0); 710 + one->config.flags |= SC16IS7XX_RECONF_IER; 711 + one->config.ier_clear |= bit; 712 + queue_kthread_work(&s->kworker, &one->reg_work); 713 + } 714 + 715 + static void sc16is7xx_stop_tx(struct uart_port *port) 716 + { 717 + sc16is7xx_ier_clear(port, SC16IS7XX_IER_THRI_BIT); 718 + } 719 + 720 + static void sc16is7xx_stop_rx(struct uart_port *port) 721 + { 722 + sc16is7xx_ier_clear(port, SC16IS7XX_IER_RDI_BIT); 694 723 } 695 724 696 725 static void sc16is7xx_start_tx(struct uart_port *port) 697 726 { 727 + struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 698 728 struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 699 729 700 - /* handle rs485 */ 701 - if ((port->rs485.flags & SER_RS485_ENABLED) && 702 - (port->rs485.delay_rts_before_send > 0)) { 703 - mdelay(port->rs485.delay_rts_before_send); 704 - } 705 - 706 - if (!work_pending(&one->tx_work)) 707 - schedule_work(&one->tx_work); 730 + queue_kthread_work(&s->kworker, &one->tx_work); 708 731 } 709 732 710 733 static unsigned int sc16is7xx_tx_empty(struct uart_port *port) 711 734 { 712 - unsigned int lvl, lsr; 735 + unsigned int lsr; 713 736 714 - lvl = sc16is7xx_port_read(port, SC16IS7XX_TXLVL_REG); 715 737 lsr = sc16is7xx_port_read(port, SC16IS7XX_LSR_REG); 716 738 717 - return ((lsr & SC16IS7XX_LSR_THRE_BIT) && !lvl) ? 
TIOCSER_TEMT : 0; 739 + return (lsr & SC16IS7XX_LSR_TEMT_BIT) ? TIOCSER_TEMT : 0; 718 740 } 719 741 720 742 static unsigned int sc16is7xx_get_mctrl(struct uart_port *port) ··· 759 713 return TIOCM_DSR | TIOCM_CAR; 760 714 } 761 715 762 - static void sc16is7xx_md_proc(struct work_struct *ws) 763 - { 764 - struct sc16is7xx_one *one = to_sc16is7xx_one(ws, md_work); 765 - 766 - sc16is7xx_port_update(&one->port, SC16IS7XX_MCR_REG, 767 - SC16IS7XX_MCR_LOOP_BIT, 768 - (one->port.mctrl & TIOCM_LOOP) ? 769 - SC16IS7XX_MCR_LOOP_BIT : 0); 770 - } 771 - 772 716 static void sc16is7xx_set_mctrl(struct uart_port *port, unsigned int mctrl) 773 717 { 718 + struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 774 719 struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 775 720 776 - schedule_work(&one->md_work); 721 + one->config.flags |= SC16IS7XX_RECONF_MD; 722 + queue_kthread_work(&s->kworker, &one->reg_work); 777 723 } 778 724 779 725 static void sc16is7xx_break_ctl(struct uart_port *port, int break_state) ··· 869 831 static int sc16is7xx_config_rs485(struct uart_port *port, 870 832 struct serial_rs485 *rs485) 871 833 { 872 - const u32 mask = SC16IS7XX_EFCR_AUTO_RS485_BIT | 873 - SC16IS7XX_EFCR_RTS_INVERT_BIT; 874 - u32 efcr = 0; 834 + struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 835 + struct sc16is7xx_one *one = to_sc16is7xx_one(port, port); 875 836 876 837 if (rs485->flags & SER_RS485_ENABLED) { 877 838 bool rts_during_rx, rts_during_tx; ··· 878 841 rts_during_rx = rs485->flags & SER_RS485_RTS_AFTER_SEND; 879 842 rts_during_tx = rs485->flags & SER_RS485_RTS_ON_SEND; 880 843 881 - efcr |= SC16IS7XX_EFCR_AUTO_RS485_BIT; 882 - 883 - if (!rts_during_rx && rts_during_tx) 884 - /* default */; 885 - else if (rts_during_rx && !rts_during_tx) 886 - efcr |= SC16IS7XX_EFCR_RTS_INVERT_BIT; 887 - else 844 + if (rts_during_rx == rts_during_tx) 888 845 dev_err(port->dev, 889 846 "unsupported RTS signalling on_send:%d after_send:%d - exactly one of RS485 RTS flags should be 
set\n", 890 847 rts_during_tx, rts_during_rx); 848 + 849 + /* 850 + * RTS signal is handled by HW, it's timing can't be influenced. 851 + * However, it's sometimes useful to delay TX even without RTS 852 + * control therefore we try to handle .delay_rts_before_send. 853 + */ 854 + if (rs485->delay_rts_after_send) 855 + return -EINVAL; 891 856 } 892 857 893 - sc16is7xx_port_update(port, SC16IS7XX_EFCR_REG, mask, efcr); 894 - 895 858 port->rs485 = *rs485; 859 + one->config.flags |= SC16IS7XX_RECONF_RS485; 860 + queue_kthread_work(&s->kworker, &one->reg_work); 896 861 897 862 return 0; 898 863 } ··· 955 916 956 917 static void sc16is7xx_shutdown(struct uart_port *port) 957 918 { 919 + struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 920 + 958 921 /* Disable all interrupts */ 959 922 sc16is7xx_port_write(port, SC16IS7XX_IER_REG, 0); 960 923 /* Disable TX/RX */ ··· 967 926 SC16IS7XX_EFCR_TXDISABLE_BIT); 968 927 969 928 sc16is7xx_power(port, 0); 929 + 930 + flush_kthread_worker(&s->kworker); 970 931 } 971 932 972 933 static const char *sc16is7xx_type(struct uart_port *port) ··· 1086 1043 struct sc16is7xx_devtype *devtype, 1087 1044 struct regmap *regmap, int irq, unsigned long flags) 1088 1045 { 1046 + struct sched_param sched_param = { .sched_priority = MAX_RT_PRIO / 2 }; 1089 1047 unsigned long freq, *pfreq = dev_get_platdata(dev); 1090 1048 int i, ret; 1091 1049 struct sc16is7xx_port *s; ··· 1128 1084 goto out_clk; 1129 1085 } 1130 1086 1087 + init_kthread_worker(&s->kworker); 1088 + init_kthread_work(&s->irq_work, sc16is7xx_ist); 1089 + s->kworker_task = kthread_run(kthread_worker_fn, &s->kworker, 1090 + "sc16is7xx"); 1091 + if (IS_ERR(s->kworker_task)) { 1092 + ret = PTR_ERR(s->kworker_task); 1093 + goto out_uart; 1094 + } 1095 + sched_setscheduler(s->kworker_task, SCHED_FIFO, &sched_param); 1096 + 1131 1097 #ifdef CONFIG_GPIOLIB 1132 1098 if (devtype->nr_gpio) { 1133 1099 /* Setup GPIO cotroller */ ··· 1153 1099 s->gpio.can_sleep = 1; 1154 1100 ret = 
gpiochip_add(&s->gpio); 1155 1101 if (ret) 1156 - goto out_uart; 1102 + goto out_thread; 1157 1103 } 1158 1104 #endif 1159 - 1160 - mutex_init(&s->mutex); 1161 1105 1162 1106 for (i = 0; i < devtype->nr_uart; ++i) { 1163 1107 /* Initialize port data */ ··· 1175 1123 sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_EFCR_REG, 1176 1124 SC16IS7XX_EFCR_RXDISABLE_BIT | 1177 1125 SC16IS7XX_EFCR_TXDISABLE_BIT); 1178 - /* Initialize queue for start TX */ 1179 - INIT_WORK(&s->p[i].tx_work, sc16is7xx_wq_proc); 1180 - /* Initialize queue for changing mode */ 1181 - INIT_WORK(&s->p[i].md_work, sc16is7xx_md_proc); 1126 + /* Initialize kthread work structs */ 1127 + init_kthread_work(&s->p[i].tx_work, sc16is7xx_tx_proc); 1128 + init_kthread_work(&s->p[i].reg_work, sc16is7xx_reg_proc); 1182 1129 /* Register port */ 1183 1130 uart_add_one_port(&s->uart, &s->p[i].port); 1184 1131 /* Go to suspend mode */ ··· 1185 1134 } 1186 1135 1187 1136 /* Setup interrupt */ 1188 - ret = devm_request_threaded_irq(dev, irq, NULL, sc16is7xx_ist, 1189 - IRQF_ONESHOT | flags, dev_name(dev), s); 1137 + ret = devm_request_irq(dev, irq, sc16is7xx_irq, 1138 + IRQF_ONESHOT | flags, dev_name(dev), s); 1190 1139 if (!ret) 1191 1140 return 0; 1192 1141 1193 1142 for (i = 0; i < s->uart.nr; i++) 1194 1143 uart_remove_one_port(&s->uart, &s->p[i].port); 1195 1144 1196 - mutex_destroy(&s->mutex); 1197 - 1198 1145 #ifdef CONFIG_GPIOLIB 1199 1146 if (devtype->nr_gpio) 1200 1147 gpiochip_remove(&s->gpio); 1201 1148 1202 - out_uart: 1149 + out_thread: 1203 1150 #endif 1151 + kthread_stop(s->kworker_task); 1152 + 1153 + out_uart: 1204 1154 uart_unregister_driver(&s->uart); 1205 1155 1206 1156 out_clk: ··· 1222 1170 #endif 1223 1171 1224 1172 for (i = 0; i < s->uart.nr; i++) { 1225 - cancel_work_sync(&s->p[i].tx_work); 1226 - cancel_work_sync(&s->p[i].md_work); 1227 1173 uart_remove_one_port(&s->uart, &s->p[i].port); 1228 1174 sc16is7xx_power(&s->p[i].port, 0); 1229 1175 } 1230 1176 1231 - mutex_destroy(&s->mutex); 1177 
+ flush_kthread_worker(&s->kworker); 1178 + kthread_stop(s->kworker_task); 1179 + 1232 1180 uart_unregister_driver(&s->uart); 1233 1181 if (!IS_ERR(s->clk)) 1234 1182 clk_disable_unprepare(s->clk); ··· 1256 1204 .precious_reg = sc16is7xx_regmap_precious, 1257 1205 }; 1258 1206 1207 + #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1208 + static int sc16is7xx_spi_probe(struct spi_device *spi) 1209 + { 1210 + struct sc16is7xx_devtype *devtype; 1211 + unsigned long flags = 0; 1212 + struct regmap *regmap; 1213 + int ret; 1214 + 1215 + /* Setup SPI bus */ 1216 + spi->bits_per_word = 8; 1217 + /* only supports mode 0 on SC16IS762 */ 1218 + spi->mode = spi->mode ? : SPI_MODE_0; 1219 + spi->max_speed_hz = spi->max_speed_hz ? : 15000000; 1220 + ret = spi_setup(spi); 1221 + if (ret) 1222 + return ret; 1223 + 1224 + if (spi->dev.of_node) { 1225 + const struct of_device_id *of_id = 1226 + of_match_device(sc16is7xx_dt_ids, &spi->dev); 1227 + 1228 + devtype = (struct sc16is7xx_devtype *)of_id->data; 1229 + } else { 1230 + const struct spi_device_id *id_entry = spi_get_device_id(spi); 1231 + 1232 + devtype = (struct sc16is7xx_devtype *)id_entry->driver_data; 1233 + flags = IRQF_TRIGGER_FALLING; 1234 + } 1235 + 1236 + regcfg.max_register = (0xf << SC16IS7XX_REG_SHIFT) | 1237 + (devtype->nr_uart - 1); 1238 + regmap = devm_regmap_init_spi(spi, &regcfg); 1239 + 1240 + return sc16is7xx_probe(&spi->dev, devtype, regmap, spi->irq, flags); 1241 + } 1242 + 1243 + static int sc16is7xx_spi_remove(struct spi_device *spi) 1244 + { 1245 + return sc16is7xx_remove(&spi->dev); 1246 + } 1247 + 1248 + static const struct spi_device_id sc16is7xx_spi_id_table[] = { 1249 + { "sc16is74x", (kernel_ulong_t)&sc16is74x_devtype, }, 1250 + { "sc16is740", (kernel_ulong_t)&sc16is74x_devtype, }, 1251 + { "sc16is741", (kernel_ulong_t)&sc16is74x_devtype, }, 1252 + { "sc16is750", (kernel_ulong_t)&sc16is750_devtype, }, 1253 + { "sc16is752", (kernel_ulong_t)&sc16is752_devtype, }, 1254 + { "sc16is760", 
(kernel_ulong_t)&sc16is760_devtype, }, 1255 + { "sc16is762", (kernel_ulong_t)&sc16is762_devtype, }, 1256 + { } 1257 + }; 1258 + 1259 + MODULE_DEVICE_TABLE(spi, sc16is7xx_spi_id_table); 1260 + 1261 + static struct spi_driver sc16is7xx_spi_uart_driver = { 1262 + .driver = { 1263 + .name = SC16IS7XX_NAME, 1264 + .owner = THIS_MODULE, 1265 + .of_match_table = of_match_ptr(sc16is7xx_dt_ids), 1266 + }, 1267 + .probe = sc16is7xx_spi_probe, 1268 + .remove = sc16is7xx_spi_remove, 1269 + .id_table = sc16is7xx_spi_id_table, 1270 + }; 1271 + 1272 + MODULE_ALIAS("spi:sc16is7xx"); 1273 + #endif 1274 + 1275 + #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1259 1276 static int sc16is7xx_i2c_probe(struct i2c_client *i2c, 1260 1277 const struct i2c_device_id *id) 1261 1278 { ··· 1356 1235 1357 1236 static const struct i2c_device_id sc16is7xx_i2c_id_table[] = { 1358 1237 { "sc16is74x", (kernel_ulong_t)&sc16is74x_devtype, }, 1238 + { "sc16is740", (kernel_ulong_t)&sc16is74x_devtype, }, 1239 + { "sc16is741", (kernel_ulong_t)&sc16is74x_devtype, }, 1359 1240 { "sc16is750", (kernel_ulong_t)&sc16is750_devtype, }, 1360 1241 { "sc16is752", (kernel_ulong_t)&sc16is752_devtype, }, 1361 1242 { "sc16is760", (kernel_ulong_t)&sc16is760_devtype, }, ··· 1376 1253 .remove = sc16is7xx_i2c_remove, 1377 1254 .id_table = sc16is7xx_i2c_id_table, 1378 1255 }; 1379 - module_i2c_driver(sc16is7xx_i2c_uart_driver); 1256 + 1380 1257 MODULE_ALIAS("i2c:sc16is7xx"); 1258 + #endif 1259 + 1260 + static int __init sc16is7xx_init(void) 1261 + { 1262 + int ret = 0; 1263 + #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1264 + ret = i2c_add_driver(&sc16is7xx_i2c_uart_driver); 1265 + if (ret < 0) { 1266 + pr_err("failed to init sc16is7xx i2c --> %d\n", ret); 1267 + return ret; 1268 + } 1269 + #endif 1270 + 1271 + #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1272 + ret = spi_register_driver(&sc16is7xx_spi_uart_driver); 1273 + if (ret < 0) { 1274 + pr_err("failed to init sc16is7xx spi --> %d\n", ret); 1275 + return ret; 1276 + } 1277 + #endif 1278 + return 
ret; 1279 + } 1280 + module_init(sc16is7xx_init); 1281 + 1282 + static void __exit sc16is7xx_exit(void) 1283 + { 1284 + #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1285 + i2c_del_driver(&sc16is7xx_i2c_uart_driver); 1286 + #endif 1287 + 1288 + #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1289 + spi_unregister_driver(&sc16is7xx_spi_uart_driver); 1290 + #endif 1291 + } 1292 + module_exit(sc16is7xx_exit); 1381 1293 1382 1294 MODULE_LICENSE("GPL"); 1383 1295 MODULE_AUTHOR("Jon Ringle <jringle@gridpoint.com>");
+97 -61
drivers/tty/serial/serial-tegra.c
··· 131 131 struct dma_async_tx_descriptor *rx_dma_desc; 132 132 dma_cookie_t tx_cookie; 133 133 dma_cookie_t rx_cookie; 134 - int tx_bytes_requested; 135 - int rx_bytes_requested; 134 + unsigned int tx_bytes_requested; 135 + unsigned int rx_bytes_requested; 136 136 }; 137 137 138 138 static void tegra_uart_start_next_tx(struct tegra_uart_port *tup); ··· 234 234 tup->lcr_shadow = lcr; 235 235 } 236 236 237 + /** 238 + * tegra_uart_wait_cycle_time: Wait for N UART clock periods 239 + * 240 + * @tup: Tegra serial port data structure. 241 + * @cycles: Number of clock periods to wait. 242 + * 243 + * Tegra UARTs are clocked at 16X the baud/bit rate and hence the UART 244 + * clock speed is 16X the current baud rate. 245 + */ 246 + static void tegra_uart_wait_cycle_time(struct tegra_uart_port *tup, 247 + unsigned int cycles) 248 + { 249 + if (tup->current_baud) 250 + udelay(DIV_ROUND_UP(cycles * 1000000, tup->current_baud * 16)); 251 + } 252 + 237 253 /* Wait for a symbol-time. */ 238 254 static void tegra_uart_wait_sym_time(struct tegra_uart_port *tup, 239 255 unsigned int syms) ··· 279 263 /* Dummy read to ensure the write is posted */ 280 264 tegra_uart_read(tup, UART_SCR); 281 265 282 - /* Wait for the flush to propagate. */ 283 - tegra_uart_wait_sym_time(tup, 1); 266 + /* 267 + * For all tegra devices (up to t210), there is a hardware issue that 268 + * requires software to wait for 32 UART clock periods for the flush 269 + * to propagate, otherwise data could be lost. 
270 + */ 271 + tegra_uart_wait_cycle_time(tup, 32); 284 272 } 285 273 286 274 static int tegra_set_baudrate(struct tegra_uart_port *tup, unsigned int baud) ··· 408 388 struct circ_buf *xmit = &tup->uport.state->xmit; 409 389 struct dma_tx_state state; 410 390 unsigned long flags; 411 - int count; 391 + unsigned int count; 412 392 413 - dmaengine_tx_status(tup->tx_dma_chan, tup->rx_cookie, &state); 393 + dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state); 414 394 count = tup->tx_bytes_requested - state.residue; 415 395 async_tx_ack(tup->tx_dma_desc); 416 396 spin_lock_irqsave(&tup->uport.lock, flags); ··· 500 480 struct tegra_uart_port *tup = to_tegra_uport(u); 501 481 struct circ_buf *xmit = &tup->uport.state->xmit; 502 482 struct dma_tx_state state; 503 - int count; 483 + unsigned int count; 504 484 505 485 if (tup->tx_in_progress != TEGRA_UART_TX_DMA) 506 486 return; ··· 550 530 } 551 531 552 532 static void tegra_uart_copy_rx_to_tty(struct tegra_uart_port *tup, 553 - struct tty_port *tty, int count) 533 + struct tty_port *tty, 534 + unsigned int count) 554 535 { 555 536 int copied; 537 + 538 + /* If count is zero, then there is no data to be copied */ 539 + if (!count) 540 + return; 556 541 557 542 tup->uport.icount.rx += count; 558 543 if (!tty) { ··· 580 555 { 581 556 struct tegra_uart_port *tup = args; 582 557 struct uart_port *u = &tup->uport; 583 - int count = tup->rx_bytes_requested; 558 + unsigned int count = tup->rx_bytes_requested; 584 559 struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port); 585 560 struct tty_port *port = &u->state->port; 586 561 unsigned long flags; 562 + struct dma_tx_state state; 563 + enum dma_status status; 564 + 565 + spin_lock_irqsave(&u->lock, flags); 566 + 567 + status = dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state); 568 + 569 + if (status == DMA_IN_PROGRESS) { 570 + dev_dbg(tup->uport.dev, "RX DMA is in progress\n"); 571 + goto done; 572 + } 587 573 588 574 
async_tx_ack(tup->rx_dma_desc); 589 - spin_lock_irqsave(&u->lock, flags); 590 575 591 576 /* Deactivate flow control to stop sender */ 592 577 if (tup->rts_active) 593 578 set_rts(tup, false); 594 579 595 580 /* If we are here, DMA is stopped */ 596 - if (count) 597 - tegra_uart_copy_rx_to_tty(tup, port, count); 581 + tegra_uart_copy_rx_to_tty(tup, port, count); 598 582 599 583 tegra_uart_handle_rx_pio(tup, port); 600 584 if (tty) { ··· 618 584 if (tup->rts_active) 619 585 set_rts(tup, true); 620 586 587 + done: 621 588 spin_unlock_irqrestore(&u->lock, flags); 622 589 } 623 590 ··· 629 594 struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port); 630 595 struct tty_port *port = &tup->uport.state->port; 631 596 struct uart_port *u = &tup->uport; 632 - int count; 597 + unsigned int count; 633 598 634 599 /* Deactivate flow control to stop sender */ 635 600 if (tup->rts_active) ··· 641 606 count = tup->rx_bytes_requested - state.residue; 642 607 643 608 /* If we are here, DMA is stopped */ 644 - if (count) 645 - tegra_uart_copy_rx_to_tty(tup, port, count); 609 + tegra_uart_copy_rx_to_tty(tup, port, count); 646 610 647 611 tegra_uart_handle_rx_pio(tup, port); 648 612 if (tty) { ··· 899 865 tup->fcr_shadow |= TEGRA_UART_TX_TRIG_16B; 900 866 tegra_uart_write(tup, tup->fcr_shadow, UART_FCR); 901 867 868 + /* Dummy read to ensure the write is posted */ 869 + tegra_uart_read(tup, UART_SCR); 870 + 871 + /* 872 + * For all tegra devices (up to t210), there is a hardware issue that 873 + * requires software to wait for 3 UART clock periods after enabling 874 + * the TX fifo, otherwise data could be lost. 
875 + */ 876 + tegra_uart_wait_cycle_time(tup, 3); 877 + 902 878 /* 903 879 * Initialize the UART with default configuration 904 880 * (115200, N, 8, 1) so that the receive DMA buffer may be ··· 949 905 return 0; 950 906 } 951 907 908 + static void tegra_uart_dma_channel_free(struct tegra_uart_port *tup, 909 + bool dma_to_memory) 910 + { 911 + if (dma_to_memory) { 912 + dmaengine_terminate_all(tup->rx_dma_chan); 913 + dma_release_channel(tup->rx_dma_chan); 914 + dma_free_coherent(tup->uport.dev, TEGRA_UART_RX_DMA_BUFFER_SIZE, 915 + tup->rx_dma_buf_virt, tup->rx_dma_buf_phys); 916 + tup->rx_dma_chan = NULL; 917 + tup->rx_dma_buf_phys = 0; 918 + tup->rx_dma_buf_virt = NULL; 919 + } else { 920 + dmaengine_terminate_all(tup->tx_dma_chan); 921 + dma_release_channel(tup->tx_dma_chan); 922 + dma_unmap_single(tup->uport.dev, tup->tx_dma_buf_phys, 923 + UART_XMIT_SIZE, DMA_TO_DEVICE); 924 + tup->tx_dma_chan = NULL; 925 + tup->tx_dma_buf_phys = 0; 926 + tup->tx_dma_buf_virt = NULL; 927 + } 928 + } 929 + 952 930 static int tegra_uart_dma_channel_allocate(struct tegra_uart_port *tup, 953 931 bool dma_to_memory) 954 932 { ··· 999 933 dma_release_channel(dma_chan); 1000 934 return -ENOMEM; 1001 935 } 936 + dma_sconfig.src_addr = tup->uport.mapbase; 937 + dma_sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 938 + dma_sconfig.src_maxburst = 4; 939 + tup->rx_dma_chan = dma_chan; 940 + tup->rx_dma_buf_virt = dma_buf; 941 + tup->rx_dma_buf_phys = dma_phys; 1002 942 } else { 1003 943 dma_phys = dma_map_single(tup->uport.dev, 1004 944 tup->uport.state->xmit.buf, UART_XMIT_SIZE, 1005 945 DMA_TO_DEVICE); 946 + if (dma_mapping_error(tup->uport.dev, dma_phys)) { 947 + dev_err(tup->uport.dev, "dma_map_single tx failed\n"); 948 + dma_release_channel(dma_chan); 949 + return -ENOMEM; 950 + } 1006 951 dma_buf = tup->uport.state->xmit.buf; 1007 - } 1008 - 1009 - if (dma_to_memory) { 1010 - dma_sconfig.src_addr = tup->uport.mapbase; 1011 - dma_sconfig.src_addr_width = 
DMA_SLAVE_BUSWIDTH_1_BYTE; 1012 - dma_sconfig.src_maxburst = 4; 1013 - } else { 1014 952 dma_sconfig.dst_addr = tup->uport.mapbase; 1015 953 dma_sconfig.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1016 954 dma_sconfig.dst_maxburst = 16; 955 + tup->tx_dma_chan = dma_chan; 956 + tup->tx_dma_buf_virt = dma_buf; 957 + tup->tx_dma_buf_phys = dma_phys; 1017 958 } 1018 959 1019 960 ret = dmaengine_slave_config(dma_chan, &dma_sconfig); 1020 961 if (ret < 0) { 1021 962 dev_err(tup->uport.dev, 1022 963 "Dma slave config failed, err = %d\n", ret); 1023 - goto scrub; 964 + tegra_uart_dma_channel_free(tup, dma_to_memory); 965 + return ret; 1024 966 } 1025 967 1026 - if (dma_to_memory) { 1027 - tup->rx_dma_chan = dma_chan; 1028 - tup->rx_dma_buf_virt = dma_buf; 1029 - tup->rx_dma_buf_phys = dma_phys; 1030 - } else { 1031 - tup->tx_dma_chan = dma_chan; 1032 - tup->tx_dma_buf_virt = dma_buf; 1033 - tup->tx_dma_buf_phys = dma_phys; 1034 - } 1035 968 return 0; 1036 - 1037 - scrub: 1038 - dma_release_channel(dma_chan); 1039 - return ret; 1040 - } 1041 - 1042 - static void tegra_uart_dma_channel_free(struct tegra_uart_port *tup, 1043 - bool dma_to_memory) 1044 - { 1045 - struct dma_chan *dma_chan; 1046 - 1047 - if (dma_to_memory) { 1048 - dma_free_coherent(tup->uport.dev, TEGRA_UART_RX_DMA_BUFFER_SIZE, 1049 - tup->rx_dma_buf_virt, tup->rx_dma_buf_phys); 1050 - dma_chan = tup->rx_dma_chan; 1051 - tup->rx_dma_chan = NULL; 1052 - tup->rx_dma_buf_phys = 0; 1053 - tup->rx_dma_buf_virt = NULL; 1054 - } else { 1055 - dma_unmap_single(tup->uport.dev, tup->tx_dma_buf_phys, 1056 - UART_XMIT_SIZE, DMA_TO_DEVICE); 1057 - dma_chan = tup->tx_dma_chan; 1058 - tup->tx_dma_chan = NULL; 1059 - tup->tx_dma_buf_phys = 0; 1060 - tup->tx_dma_buf_virt = NULL; 1061 - } 1062 - dma_release_channel(dma_chan); 1063 969 } 1064 970 1065 971 static int tegra_uart_startup(struct uart_port *u) ··· 1098 1060 tegra_uart_dma_channel_free(tup, true); 1099 1061 tegra_uart_dma_channel_free(tup, false); 1100 1062 
free_irq(u->irq, tup); 1101 - 1102 - tegra_uart_flush_buffer(u); 1103 1063 } 1104 1064 1105 1065 static void tegra_uart_enable_ms(struct uart_port *u)
+6 -5
drivers/tty/serial/serial_core.c
··· 894 894 * need to rate-limit; it's CAP_SYS_ADMIN only. 895 895 */ 896 896 if (uport->flags & UPF_SPD_MASK) { 897 - char buf[64]; 898 - 899 897 dev_notice(uport->dev, 900 898 "%s sets custom speed on %s. This is deprecated.\n", 901 899 current->comm, 902 - tty_name(port->tty, buf)); 900 + tty_name(port->tty)); 903 901 } 904 902 uart_change_speed(tty, state, NULL); 905 903 } ··· 1814 1816 * @options: ptr for <options> field; NULL if not present (out) 1815 1817 * 1816 1818 * Decodes earlycon kernel command line parameters of the form 1817 - * earlycon=<name>,io|mmio|mmio32,<addr>,<options> 1818 - * console=<name>,io|mmio|mmio32,<addr>,<options> 1819 + * earlycon=<name>,io|mmio|mmio32|mmio32be,<addr>,<options> 1820 + * console=<name>,io|mmio|mmio32|mmio32be,<addr>,<options> 1819 1821 * 1820 1822 * The optional form 1821 1823 * earlycon=<name>,0x<addr>,<options> ··· 1833 1835 } else if (strncmp(p, "mmio32,", 7) == 0) { 1834 1836 *iotype = UPIO_MEM32; 1835 1837 p += 7; 1838 + } else if (strncmp(p, "mmio32be,", 9) == 0) { 1839 + *iotype = UPIO_MEM32BE; 1840 + p += 9; 1836 1841 } else if (strncmp(p, "io,", 3) == 0) { 1837 1842 *iotype = UPIO_PORT; 1838 1843 p += 3;
+2 -3
drivers/tty/serial/serial_mctrl_gpio.c
··· 49 49 unsigned int count = 0; 50 50 51 51 for (i = 0; i < UART_GPIO_MAX; i++) 52 - if (!IS_ERR_OR_NULL(gpios->gpio[i]) && 53 - mctrl_gpios_desc[i].dir_out) { 52 + if (gpios->gpio[i] && mctrl_gpios_desc[i].dir_out) { 54 53 desc_array[count] = gpios->gpio[i]; 55 54 value_array[count] = !!(mctrl & mctrl_gpios_desc[i].mctrl); 56 55 count++; ··· 117 118 enum mctrl_gpio_idx i; 118 119 119 120 for (i = 0; i < UART_GPIO_MAX; i++) 120 - if (!IS_ERR_OR_NULL(gpios->gpio[i])) 121 + if (gpios->gpio[i]) 121 122 devm_gpiod_put(dev, gpios->gpio[i]); 122 123 devm_kfree(dev, gpios); 123 124 }
+51 -45
drivers/tty/serial/sh-sci.c
··· 81 81 82 82 /* Platform configuration */ 83 83 struct plat_sci_port *cfg; 84 - int overrun_bit; 84 + unsigned int overrun_reg; 85 + unsigned int overrun_mask; 85 86 unsigned int error_mask; 86 87 unsigned int sampling_rate; 87 88 ··· 169 168 [SCSPTR] = sci_reg_invalid, 170 169 [SCLSR] = sci_reg_invalid, 171 170 [HSSRR] = sci_reg_invalid, 171 + [SCPCR] = sci_reg_invalid, 172 + [SCPDR] = sci_reg_invalid, 172 173 }, 173 174 174 175 /* ··· 191 188 [SCSPTR] = sci_reg_invalid, 192 189 [SCLSR] = sci_reg_invalid, 193 190 [HSSRR] = sci_reg_invalid, 191 + [SCPCR] = sci_reg_invalid, 192 + [SCPDR] = sci_reg_invalid, 194 193 }, 195 194 196 195 /* ··· 212 207 [SCSPTR] = sci_reg_invalid, 213 208 [SCLSR] = sci_reg_invalid, 214 209 [HSSRR] = sci_reg_invalid, 210 + [SCPCR] = { 0x30, 16 }, 211 + [SCPDR] = { 0x34, 16 }, 215 212 }, 216 213 217 214 /* ··· 233 226 [SCSPTR] = sci_reg_invalid, 234 227 [SCLSR] = sci_reg_invalid, 235 228 [HSSRR] = sci_reg_invalid, 229 + [SCPCR] = { 0x30, 16 }, 230 + [SCPDR] = { 0x34, 16 }, 236 231 }, 237 232 238 233 /* ··· 255 246 [SCSPTR] = { 0x20, 16 }, 256 247 [SCLSR] = { 0x24, 16 }, 257 248 [HSSRR] = sci_reg_invalid, 249 + [SCPCR] = sci_reg_invalid, 250 + [SCPDR] = sci_reg_invalid, 258 251 }, 259 252 260 253 /* ··· 276 265 [SCSPTR] = sci_reg_invalid, 277 266 [SCLSR] = sci_reg_invalid, 278 267 [HSSRR] = sci_reg_invalid, 268 + [SCPCR] = sci_reg_invalid, 269 + [SCPDR] = sci_reg_invalid, 279 270 }, 280 271 281 272 /* ··· 297 284 [SCSPTR] = { 0x20, 16 }, 298 285 [SCLSR] = { 0x24, 16 }, 299 286 [HSSRR] = sci_reg_invalid, 287 + [SCPCR] = sci_reg_invalid, 288 + [SCPDR] = sci_reg_invalid, 300 289 }, 301 290 302 291 /* ··· 318 303 [SCSPTR] = { 0x20, 16 }, 319 304 [SCLSR] = { 0x24, 16 }, 320 305 [HSSRR] = { 0x40, 16 }, 306 + [SCPCR] = sci_reg_invalid, 307 + [SCPDR] = sci_reg_invalid, 321 308 }, 322 309 323 310 /* ··· 340 323 [SCSPTR] = sci_reg_invalid, 341 324 [SCLSR] = { 0x24, 16 }, 342 325 [HSSRR] = sci_reg_invalid, 326 + [SCPCR] = sci_reg_invalid, 327 + 
[SCPDR] = sci_reg_invalid, 343 328 }, 344 329 345 330 /* ··· 362 343 [SCSPTR] = { 0x24, 16 }, 363 344 [SCLSR] = { 0x28, 16 }, 364 345 [HSSRR] = sci_reg_invalid, 346 + [SCPCR] = sci_reg_invalid, 347 + [SCPDR] = sci_reg_invalid, 365 348 }, 366 349 367 350 /* ··· 384 363 [SCSPTR] = sci_reg_invalid, 385 364 [SCLSR] = sci_reg_invalid, 386 365 [HSSRR] = sci_reg_invalid, 366 + [SCPCR] = sci_reg_invalid, 367 + [SCPDR] = sci_reg_invalid, 387 368 }, 388 369 }; 389 370 ··· 804 781 struct sci_port *s = to_sci_port(port); 805 782 806 783 /* Handle overruns */ 807 - if (status & (1 << s->overrun_bit)) { 784 + if (status & s->overrun_mask) { 808 785 port->icount.overrun++; 809 786 810 787 /* overrun error */ ··· 867 844 struct tty_port *tport = &port->state->port; 868 845 struct sci_port *s = to_sci_port(port); 869 846 struct plat_sci_reg *reg; 870 - int copied = 0, offset; 871 - u16 status, bit; 847 + int copied = 0; 848 + u16 status; 872 849 873 - switch (port->type) { 874 - case PORT_SCIF: 875 - case PORT_HSCIF: 876 - offset = SCLSR; 877 - break; 878 - case PORT_SCIFA: 879 - case PORT_SCIFB: 880 - offset = SCxSR; 881 - break; 882 - default: 883 - return 0; 884 - } 885 - 886 - reg = sci_getreg(port, offset); 850 + reg = sci_getreg(port, s->overrun_reg); 887 851 if (!reg->size) 888 852 return 0; 889 853 890 - status = serial_port_in(port, offset); 891 - bit = 1 << s->overrun_bit; 892 - 893 - if (status & bit) { 894 - status &= ~bit; 895 - serial_port_out(port, offset, status); 854 + status = serial_port_in(port, s->overrun_reg); 855 + if (status & s->overrun_mask) { 856 + status &= ~s->overrun_mask; 857 + serial_port_out(port, s->overrun_reg, status); 896 858 897 859 port->icount.overrun++; 898 860 ··· 1029 1021 1030 1022 ssr_status = serial_port_in(port, SCxSR); 1031 1023 scr_status = serial_port_in(port, SCSCR); 1032 - switch (port->type) { 1033 - case PORT_SCIF: 1034 - case PORT_HSCIF: 1035 - orer_status = serial_port_in(port, SCLSR); 1036 - break; 1037 - case PORT_SCIFA: 
1038 - case PORT_SCIFB: 1024 + if (s->overrun_reg == SCxSR) 1039 1025 orer_status = ssr_status; 1040 - break; 1026 + else { 1027 + if (sci_getreg(port, s->overrun_reg)->size) 1028 + orer_status = serial_port_in(port, s->overrun_reg); 1041 1029 } 1042 1030 1043 1031 err_enabled = scr_status & port_rx_irq_mask(port); ··· 1063 1059 ret = sci_br_interrupt(irq, ptr); 1064 1060 1065 1061 /* Overrun Interrupt */ 1066 - if (orer_status & (1 << s->overrun_bit)) 1062 + if (orer_status & s->overrun_mask) 1067 1063 sci_handle_fifo_overrun(port); 1068 1064 1069 1065 return ret; ··· 2238 2234 switch (p->type) { 2239 2235 case PORT_SCIFB: 2240 2236 port->fifosize = 256; 2241 - sci_port->overrun_bit = 9; 2237 + sci_port->overrun_reg = SCxSR; 2238 + sci_port->overrun_mask = SCIFA_ORER; 2242 2239 sampling_rate = 16; 2243 2240 break; 2244 2241 case PORT_HSCIF: 2245 2242 port->fifosize = 128; 2246 2243 sampling_rate = 0; 2247 - sci_port->overrun_bit = 0; 2244 + sci_port->overrun_reg = SCLSR; 2245 + sci_port->overrun_mask = SCLSR_ORER; 2248 2246 break; 2249 2247 case PORT_SCIFA: 2250 2248 port->fifosize = 64; 2251 - sci_port->overrun_bit = 9; 2249 + sci_port->overrun_reg = SCxSR; 2250 + sci_port->overrun_mask = SCIFA_ORER; 2252 2251 sampling_rate = 16; 2253 2252 break; 2254 2253 case PORT_SCIF: 2255 2254 port->fifosize = 16; 2256 2255 if (p->regtype == SCIx_SH7705_SCIF_REGTYPE) { 2257 - sci_port->overrun_bit = 9; 2256 + sci_port->overrun_reg = SCxSR; 2257 + sci_port->overrun_mask = SCIFA_ORER; 2258 2258 sampling_rate = 16; 2259 2259 } else { 2260 - sci_port->overrun_bit = 0; 2260 + sci_port->overrun_reg = SCLSR; 2261 + sci_port->overrun_mask = SCLSR_ORER; 2261 2262 sampling_rate = 32; 2262 2263 } 2263 2264 break; 2264 2265 default: 2265 2266 port->fifosize = 1; 2266 - sci_port->overrun_bit = 5; 2267 + sci_port->overrun_reg = SCxSR; 2268 + sci_port->overrun_mask = SCI_ORER; 2267 2269 sampling_rate = 32; 2268 2270 break; 2269 2271 } ··· 2315 2305 SCI_DEFAULT_ERROR_MASK : 
SCIF_DEFAULT_ERROR_MASK; 2316 2306 2317 2307 /* 2318 - * Establish sensible defaults for the overrun detection, unless 2319 - * the part has explicitly disabled support for it. 2320 - */ 2321 - 2322 - /* 2323 2308 * Make the error mask inclusive of overrun detection, if 2324 2309 * supported. 2325 2310 */ 2326 - sci_port->error_mask |= 1 << sci_port->overrun_bit; 2311 + if (sci_port->overrun_reg == SCxSR) 2312 + sci_port->error_mask |= sci_port->overrun_mask; 2327 2313 2328 2314 port->type = p->type; 2329 2315 port->flags = UPF_FIXED_PORT | p->flags;
+124 -16
drivers/tty/serial/sh-sci.h
··· 1 + #include <linux/bitops.h> 1 2 #include <linux/serial_core.h> 2 3 #include <linux/io.h> 3 4 #include <linux/gpio.h> 5 + 6 + #define SCI_MAJOR 204 7 + #define SCI_MINOR_START 8 8 + 9 + 10 + /* 11 + * SCI register subset common for all port types. 12 + * Not all registers will exist on all parts. 13 + */ 14 + enum { 15 + SCSMR, /* Serial Mode Register */ 16 + SCBRR, /* Bit Rate Register */ 17 + SCSCR, /* Serial Control Register */ 18 + SCxSR, /* Serial Status Register */ 19 + SCFCR, /* FIFO Control Register */ 20 + SCFDR, /* FIFO Data Count Register */ 21 + SCxTDR, /* Transmit (FIFO) Data Register */ 22 + SCxRDR, /* Receive (FIFO) Data Register */ 23 + SCLSR, /* Line Status Register */ 24 + SCTFDR, /* Transmit FIFO Data Count Register */ 25 + SCRFDR, /* Receive FIFO Data Count Register */ 26 + SCSPTR, /* Serial Port Register */ 27 + HSSRR, /* Sampling Rate Register */ 28 + SCPCR, /* Serial Port Control Register */ 29 + SCPDR, /* Serial Port Data Register */ 30 + 31 + SCIx_NR_REGS, 32 + }; 33 + 34 + 35 + /* SCSMR (Serial Mode Register) */ 36 + #define SCSMR_CHR BIT(6) /* 7-bit Character Length */ 37 + #define SCSMR_PE BIT(5) /* Parity Enable */ 38 + #define SCSMR_ODD BIT(4) /* Odd Parity */ 39 + #define SCSMR_STOP BIT(3) /* Stop Bit Length */ 40 + #define SCSMR_CKS 0x0003 /* Clock Select */ 41 + 42 + /* Serial Control Register, SCIFA/SCIFB only bits */ 43 + #define SCSCR_TDRQE BIT(15) /* Tx Data Transfer Request Enable */ 44 + #define SCSCR_RDRQE BIT(14) /* Rx Data Transfer Request Enable */ 45 + 46 + /* SCxSR (Serial Status Register) on SCI */ 47 + #define SCI_TDRE BIT(7) /* Transmit Data Register Empty */ 48 + #define SCI_RDRF BIT(6) /* Receive Data Register Full */ 49 + #define SCI_ORER BIT(5) /* Overrun Error */ 50 + #define SCI_FER BIT(4) /* Framing Error */ 51 + #define SCI_PER BIT(3) /* Parity Error */ 52 + #define SCI_TEND BIT(2) /* Transmit End */ 53 + #define SCI_RESERVED 0x03 /* All reserved bits */ 54 + 55 + #define SCI_DEFAULT_ERROR_MASK (SCI_PER | 
SCI_FER) 56 + 57 + #define SCI_RDxF_CLEAR ~(SCI_RESERVED | SCI_RDRF) 58 + #define SCI_ERROR_CLEAR ~(SCI_RESERVED | SCI_PER | SCI_FER | SCI_ORER) 59 + #define SCI_TDxE_CLEAR ~(SCI_RESERVED | SCI_TEND | SCI_TDRE) 60 + #define SCI_BREAK_CLEAR ~(SCI_RESERVED | SCI_PER | SCI_FER | SCI_ORER) 61 + 62 + /* SCxSR (Serial Status Register) on SCIF, SCIFA, SCIFB, HSCIF */ 63 + #define SCIF_ER BIT(7) /* Receive Error */ 64 + #define SCIF_TEND BIT(6) /* Transmission End */ 65 + #define SCIF_TDFE BIT(5) /* Transmit FIFO Data Empty */ 66 + #define SCIF_BRK BIT(4) /* Break Detect */ 67 + #define SCIF_FER BIT(3) /* Framing Error */ 68 + #define SCIF_PER BIT(2) /* Parity Error */ 69 + #define SCIF_RDF BIT(1) /* Receive FIFO Data Full */ 70 + #define SCIF_DR BIT(0) /* Receive Data Ready */ 71 + /* SCIF only (optional) */ 72 + #define SCIF_PERC 0xf000 /* Number of Parity Errors */ 73 + #define SCIF_FERC 0x0f00 /* Number of Framing Errors */ 74 + /*SCIFA/SCIFB and SCIF on SH7705/SH7720/SH7721 only */ 75 + #define SCIFA_ORER BIT(9) /* Overrun Error */ 76 + 77 + #define SCIF_DEFAULT_ERROR_MASK (SCIF_PER | SCIF_FER | SCIF_BRK | SCIF_ER) 78 + 79 + #define SCIF_RDxF_CLEAR ~(SCIF_DR | SCIF_RDF) 80 + #define SCIF_ERROR_CLEAR ~(SCIFA_ORER | SCIF_PER | SCIF_FER | SCIF_ER) 81 + #define SCIF_TDxE_CLEAR ~(SCIF_TDFE) 82 + #define SCIF_BREAK_CLEAR ~(SCIF_PER | SCIF_FER | SCIF_BRK) 83 + 84 + /* SCFCR (FIFO Control Register) */ 85 + #define SCFCR_MCE BIT(3) /* Modem Control Enable */ 86 + #define SCFCR_TFRST BIT(2) /* Transmit FIFO Data Register Reset */ 87 + #define SCFCR_RFRST BIT(1) /* Receive FIFO Data Register Reset */ 88 + #define SCFCR_LOOP BIT(0) /* Loopback Test */ 89 + 90 + /* SCLSR (Line Status Register) on (H)SCIF */ 91 + #define SCLSR_ORER BIT(0) /* Overrun Error */ 92 + 93 + /* SCSPTR (Serial Port Register), optional */ 94 + #define SCSPTR_RTSIO BIT(7) /* Serial Port RTS Pin Input/Output */ 95 + #define SCSPTR_RTSDT BIT(6) /* Serial Port RTS Pin Data */ 96 + #define SCSPTR_CTSIO BIT(5) /* 
Serial Port CTS Pin Input/Output */ 97 + #define SCSPTR_CTSDT BIT(4) /* Serial Port CTS Pin Data */ 98 + #define SCSPTR_SPB2IO BIT(1) /* Serial Port Break Input/Output */ 99 + #define SCSPTR_SPB2DT BIT(0) /* Serial Port Break Data */ 100 + 101 + /* HSSRR HSCIF */ 102 + #define HSCIF_SRE BIT(15) /* Sampling Rate Register Enable */ 103 + 104 + /* SCPCR (Serial Port Control Register), SCIFA/SCIFB only */ 105 + #define SCPCR_RTSC BIT(4) /* Serial Port RTS Pin / Output Pin */ 106 + #define SCPCR_CTSC BIT(3) /* Serial Port CTS Pin / Input Pin */ 107 + 108 + /* SCPDR (Serial Port Data Register), SCIFA/SCIFB only */ 109 + #define SCPDR_RTSD BIT(4) /* Serial Port RTS Output Pin Data */ 110 + #define SCPDR_CTSD BIT(3) /* Serial Port CTS Input Pin Data */ 111 + 4 112 5 113 #define SCxSR_TEND(port) (((port)->type == PORT_SCI) ? SCI_TEND : SCIF_TEND) 6 114 #define SCxSR_RDxF(port) (((port)->type == PORT_SCI) ? SCI_RDRF : SCIF_RDF) ··· 123 15 defined(CONFIG_CPU_SUBTYPE_SH7720) || \ 124 16 defined(CONFIG_CPU_SUBTYPE_SH7721) || \ 125 17 defined(CONFIG_ARCH_SH73A0) || \ 126 - defined(CONFIG_ARCH_SH7372) || \ 127 18 defined(CONFIG_ARCH_R8A7740) 128 19 129 - # define SCxSR_RDxF_CLEAR(port) (serial_port_in(port, SCxSR) & 0xfffc) 130 - # define SCxSR_ERROR_CLEAR(port) (serial_port_in(port, SCxSR) & 0xfd73) 131 - # define SCxSR_TDxE_CLEAR(port) (serial_port_in(port, SCxSR) & 0xffdf) 132 - # define SCxSR_BREAK_CLEAR(port) (serial_port_in(port, SCxSR) & 0xffe3) 20 + # define SCxSR_RDxF_CLEAR(port) \ 21 + (serial_port_in(port, SCxSR) & SCIF_RDxF_CLEAR) 22 + # define SCxSR_ERROR_CLEAR(port) \ 23 + (serial_port_in(port, SCxSR) & SCIF_ERROR_CLEAR) 24 + # define SCxSR_TDxE_CLEAR(port) \ 25 + (serial_port_in(port, SCxSR) & SCIF_TDxE_CLEAR) 26 + # define SCxSR_BREAK_CLEAR(port) \ 27 + (serial_port_in(port, SCxSR) & SCIF_BREAK_CLEAR) 133 28 #else 134 - # define SCxSR_RDxF_CLEAR(port) (((port)->type == PORT_SCI) ? 0xbc : 0x00fc) 135 - # define SCxSR_ERROR_CLEAR(port) (((port)->type == PORT_SCI) ? 
0xc4 : 0x0073) 136 - # define SCxSR_TDxE_CLEAR(port) (((port)->type == PORT_SCI) ? 0x78 : 0x00df) 137 - # define SCxSR_BREAK_CLEAR(port) (((port)->type == PORT_SCI) ? 0xc4 : 0x00e3) 29 + # define SCxSR_RDxF_CLEAR(port) \ 30 + ((((port)->type == PORT_SCI) ? SCI_RDxF_CLEAR : SCIF_RDxF_CLEAR) & 0xff) 31 + # define SCxSR_ERROR_CLEAR(port) \ 32 + ((((port)->type == PORT_SCI) ? SCI_ERROR_CLEAR : SCIF_ERROR_CLEAR) & 0xff) 33 + # define SCxSR_TDxE_CLEAR(port) \ 34 + ((((port)->type == PORT_SCI) ? SCI_TDxE_CLEAR : SCIF_TDxE_CLEAR) & 0xff) 35 + # define SCxSR_BREAK_CLEAR(port) \ 36 + ((((port)->type == PORT_SCI) ? SCI_BREAK_CLEAR : SCIF_BREAK_CLEAR) & 0xff) 138 37 #endif 139 38 140 - /* SCFCR */ 141 - #define SCFCR_RFRST 0x0002 142 - #define SCFCR_TFRST 0x0004 143 - #define SCFCR_MCE 0x0008 144 - 145 - #define SCI_MAJOR 204 146 - #define SCI_MINOR_START 8
+278 -338
drivers/tty/serial/sirfsoc_uart.c
··· 36 36 static struct uart_driver sirfsoc_uart_drv; 37 37 38 38 static void sirfsoc_uart_tx_dma_complete_callback(void *param); 39 - static void sirfsoc_uart_start_next_rx_dma(struct uart_port *port); 40 - static void sirfsoc_uart_rx_dma_complete_callback(void *param); 41 39 static const struct sirfsoc_baudrate_to_regv baudrate_to_regv[] = { 42 40 {4000000, 2359296}, 43 41 {3500000, 1310721}, ··· 57 59 {9600, 1114979}, 58 60 }; 59 61 60 - static struct sirfsoc_uart_port sirfsoc_uart_ports[SIRFSOC_UART_NR] = { 61 - [0] = { 62 - .port = { 63 - .iotype = UPIO_MEM, 64 - .flags = UPF_BOOT_AUTOCONF, 65 - .line = 0, 66 - }, 67 - }, 68 - [1] = { 69 - .port = { 70 - .iotype = UPIO_MEM, 71 - .flags = UPF_BOOT_AUTOCONF, 72 - .line = 1, 73 - }, 74 - }, 75 - [2] = { 76 - .port = { 77 - .iotype = UPIO_MEM, 78 - .flags = UPF_BOOT_AUTOCONF, 79 - .line = 2, 80 - }, 81 - }, 82 - [3] = { 83 - .port = { 84 - .iotype = UPIO_MEM, 85 - .flags = UPF_BOOT_AUTOCONF, 86 - .line = 3, 87 - }, 88 - }, 89 - [4] = { 90 - .port = { 91 - .iotype = UPIO_MEM, 92 - .flags = UPF_BOOT_AUTOCONF, 93 - .line = 4, 94 - }, 95 - }, 96 - [5] = { 97 - .port = { 98 - .iotype = UPIO_MEM, 99 - .flags = UPF_BOOT_AUTOCONF, 100 - .line = 5, 101 - }, 102 - }, 103 - }; 62 + static struct sirfsoc_uart_port *sirf_ports[SIRFSOC_UART_NR]; 104 63 105 64 static inline struct sirfsoc_uart_port *to_sirfport(struct uart_port *port) 106 65 { ··· 71 116 struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 72 117 struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status; 73 118 reg = rd_regl(port, ureg->sirfsoc_tx_fifo_status); 74 - 75 - return (reg & ufifo_st->ff_empty(port->line)) ? TIOCSER_TEMT : 0; 119 + return (reg & ufifo_st->ff_empty(port)) ? TIOCSER_TEMT : 0; 76 120 } 77 121 78 122 static unsigned int sirfsoc_uart_get_mctrl(struct uart_port *port) ··· 106 152 unsigned int val = assert ? 
SIRFUART_AFC_CTRL_RX_THD : 0x0; 107 153 unsigned int current_val; 108 154 155 + if (mctrl & TIOCM_LOOP) { 156 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) 157 + wr_regl(port, ureg->sirfsoc_line_ctrl, 158 + rd_regl(port, ureg->sirfsoc_line_ctrl) | 159 + SIRFUART_LOOP_BACK); 160 + else 161 + wr_regl(port, ureg->sirfsoc_mode1, 162 + rd_regl(port, ureg->sirfsoc_mode1) | 163 + SIRFSOC_USP_LOOP_BACK_CTRL); 164 + } else { 165 + if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) 166 + wr_regl(port, ureg->sirfsoc_line_ctrl, 167 + rd_regl(port, ureg->sirfsoc_line_ctrl) & 168 + ~SIRFUART_LOOP_BACK); 169 + else 170 + wr_regl(port, ureg->sirfsoc_mode1, 171 + rd_regl(port, ureg->sirfsoc_mode1) & 172 + ~SIRFSOC_USP_LOOP_BACK_CTRL); 173 + } 174 + 109 175 if (!sirfport->hw_flow_ctrl || !sirfport->ms_enabled) 110 176 return; 111 177 if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { ··· 156 182 rd_regl(port, ureg->sirfsoc_int_en_reg) & 157 183 ~uint_en->sirfsoc_txfifo_empty_en); 158 184 else 159 - wr_regl(port, SIRFUART_INT_EN_CLR, 185 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, 160 186 uint_en->sirfsoc_txfifo_empty_en); 161 187 } 162 188 } else { 189 + if (sirfport->uart_reg->uart_type == SIRF_USP_UART) 190 + wr_regl(port, ureg->sirfsoc_tx_rx_en, rd_regl(port, 191 + ureg->sirfsoc_tx_rx_en) & ~SIRFUART_TX_EN); 163 192 if (!sirfport->is_atlas7) 164 193 wr_regl(port, ureg->sirfsoc_int_en_reg, 165 194 rd_regl(port, ureg->sirfsoc_int_en_reg) & 166 195 ~uint_en->sirfsoc_txfifo_empty_en); 167 196 else 168 - wr_regl(port, SIRFUART_INT_EN_CLR, 197 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, 169 198 uint_en->sirfsoc_txfifo_empty_en); 170 199 } 171 200 } ··· 199 222 rd_regl(port, ureg->sirfsoc_int_en_reg)& 200 223 ~(uint_en->sirfsoc_txfifo_empty_en)); 201 224 else 202 - wr_regl(port, SIRFUART_INT_EN_CLR, 225 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, 203 226 uint_en->sirfsoc_txfifo_empty_en); 204 227 /* 205 228 * DMA requires buffer address and buffer length are both 
aligned with ··· 267 290 if (sirfport->tx_dma_chan) 268 291 sirfsoc_uart_tx_with_dma(sirfport); 269 292 else { 270 - sirfsoc_uart_pio_tx_chars(sirfport, 271 - SIRFSOC_UART_IO_TX_REASONABLE_CNT); 293 + if (sirfport->uart_reg->uart_type == SIRF_USP_UART) 294 + wr_regl(port, ureg->sirfsoc_tx_rx_en, rd_regl(port, 295 + ureg->sirfsoc_tx_rx_en) | SIRFUART_TX_EN); 296 + wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_STOP); 297 + sirfsoc_uart_pio_tx_chars(sirfport, port->fifosize); 272 298 wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_START); 273 299 if (!sirfport->is_atlas7) 274 300 wr_regl(port, ureg->sirfsoc_int_en_reg, ··· 294 314 if (!sirfport->is_atlas7) 295 315 wr_regl(port, ureg->sirfsoc_int_en_reg, 296 316 rd_regl(port, ureg->sirfsoc_int_en_reg) & 297 - ~(SIRFUART_RX_DMA_INT_EN(port, uint_en) | 317 + ~(SIRFUART_RX_DMA_INT_EN(uint_en, 318 + sirfport->uart_reg->uart_type) | 298 319 uint_en->sirfsoc_rx_done_en)); 299 320 else 300 - wr_regl(port, SIRFUART_INT_EN_CLR, 301 - SIRFUART_RX_DMA_INT_EN(port, uint_en)| 302 - uint_en->sirfsoc_rx_done_en); 321 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, 322 + SIRFUART_RX_DMA_INT_EN(uint_en, 323 + sirfport->uart_reg->uart_type)| 324 + uint_en->sirfsoc_rx_done_en); 303 325 dmaengine_terminate_all(sirfport->rx_dma_chan); 304 326 } else { 305 327 if (!sirfport->is_atlas7) 306 328 wr_regl(port, ureg->sirfsoc_int_en_reg, 307 329 rd_regl(port, ureg->sirfsoc_int_en_reg)& 308 - ~(SIRFUART_RX_IO_INT_EN(port, uint_en))); 330 + ~(SIRFUART_RX_IO_INT_EN(uint_en, 331 + sirfport->uart_reg->uart_type))); 309 332 else 310 - wr_regl(port, SIRFUART_INT_EN_CLR, 311 - SIRFUART_RX_IO_INT_EN(port, uint_en)); 333 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, 334 + SIRFUART_RX_IO_INT_EN(uint_en, 335 + sirfport->uart_reg->uart_type)); 312 336 } 313 337 } 314 338 ··· 333 349 rd_regl(port, ureg->sirfsoc_int_en_reg)& 334 350 ~uint_en->sirfsoc_cts_en); 335 351 else 336 - wr_regl(port, SIRFUART_INT_EN_CLR, 352 + wr_regl(port, 
ureg->sirfsoc_int_en_clr_reg, 337 353 uint_en->sirfsoc_cts_en); 338 354 } else 339 355 disable_irq(gpio_to_irq(sirfport->cts_gpio)); ··· 363 379 if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 364 380 wr_regl(port, ureg->sirfsoc_afc_ctrl, 365 381 rd_regl(port, ureg->sirfsoc_afc_ctrl) | 366 - SIRFUART_AFC_TX_EN | SIRFUART_AFC_RX_EN); 382 + SIRFUART_AFC_TX_EN | SIRFUART_AFC_RX_EN | 383 + SIRFUART_AFC_CTRL_RX_THD); 367 384 if (!sirfport->is_atlas7) 368 385 wr_regl(port, ureg->sirfsoc_int_en_reg, 369 386 rd_regl(port, ureg->sirfsoc_int_en_reg) ··· 402 417 if (!tty) 403 418 return -ENODEV; 404 419 while (!(rd_regl(port, ureg->sirfsoc_rx_fifo_status) & 405 - ufifo_st->ff_empty(port->line))) { 420 + ufifo_st->ff_empty(port))) { 406 421 ch = rd_regl(port, ureg->sirfsoc_rx_fifo_data) | 407 422 SIRFUART_DUMMY_READ; 408 423 if (unlikely(uart_handle_sysrq_char(port, ch))) ··· 429 444 unsigned int num_tx = 0; 430 445 while (!uart_circ_empty(xmit) && 431 446 !(rd_regl(port, ureg->sirfsoc_tx_fifo_status) & 432 - ufifo_st->ff_full(port->line)) && 447 + ufifo_st->ff_full(port)) && 433 448 count--) { 434 449 wr_regl(port, ureg->sirfsoc_tx_fifo_data, 435 450 xmit->buf[xmit->tail]); ··· 463 478 spin_unlock_irqrestore(&port->lock, flags); 464 479 } 465 480 466 - static void sirfsoc_uart_insert_rx_buf_to_tty( 467 - struct sirfsoc_uart_port *sirfport, int count) 468 - { 469 - struct uart_port *port = &sirfport->port; 470 - struct tty_port *tport = &port->state->port; 471 - int inserted; 472 - 473 - inserted = tty_insert_flip_string(tport, 474 - sirfport->rx_dma_items[sirfport->rx_completed].xmit.buf, count); 475 - port->icount.rx += inserted; 476 - } 477 - 478 - static void sirfsoc_rx_submit_one_dma_desc(struct uart_port *port, int index) 479 - { 480 - struct sirfsoc_uart_port *sirfport = to_sirfport(port); 481 - 482 - sirfport->rx_dma_items[index].xmit.tail = 483 - sirfport->rx_dma_items[index].xmit.head = 0; 484 - sirfport->rx_dma_items[index].desc = 485 - 
dmaengine_prep_slave_single(sirfport->rx_dma_chan, 486 - sirfport->rx_dma_items[index].dma_addr, SIRFSOC_RX_DMA_BUF_SIZE, 487 - DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT); 488 - if (!sirfport->rx_dma_items[index].desc) { 489 - dev_err(port->dev, "DMA slave single fail\n"); 490 - return; 491 - } 492 - sirfport->rx_dma_items[index].desc->callback = 493 - sirfsoc_uart_rx_dma_complete_callback; 494 - sirfport->rx_dma_items[index].desc->callback_param = sirfport; 495 - sirfport->rx_dma_items[index].cookie = 496 - dmaengine_submit(sirfport->rx_dma_items[index].desc); 497 - dma_async_issue_pending(sirfport->rx_dma_chan); 498 - } 499 - 500 - static void sirfsoc_rx_tmo_process_tl(unsigned long param) 501 - { 502 - struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 503 - struct uart_port *port = &sirfport->port; 504 - struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 505 - struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 506 - struct sirfsoc_int_status *uint_st = &sirfport->uart_reg->uart_int_st; 507 - unsigned int count; 508 - unsigned long flags; 509 - struct dma_tx_state tx_state; 510 - 511 - spin_lock_irqsave(&port->lock, flags); 512 - while (DMA_COMPLETE == dmaengine_tx_status(sirfport->rx_dma_chan, 513 - sirfport->rx_dma_items[sirfport->rx_completed].cookie, &tx_state)) { 514 - sirfsoc_uart_insert_rx_buf_to_tty(sirfport, 515 - SIRFSOC_RX_DMA_BUF_SIZE); 516 - sirfport->rx_completed++; 517 - sirfport->rx_completed %= SIRFSOC_RX_LOOP_BUF_CNT; 518 - } 519 - count = CIRC_CNT(sirfport->rx_dma_items[sirfport->rx_issued].xmit.head, 520 - sirfport->rx_dma_items[sirfport->rx_issued].xmit.tail, 521 - SIRFSOC_RX_DMA_BUF_SIZE); 522 - if (count > 0) 523 - sirfsoc_uart_insert_rx_buf_to_tty(sirfport, count); 524 - wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 525 - rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) | 526 - SIRFUART_IO_MODE); 527 - sirfsoc_uart_pio_rx_chars(port, 4 - sirfport->rx_io_count); 528 - if (sirfport->rx_io_count == 4) { 529 
- sirfport->rx_io_count = 0; 530 - wr_regl(port, ureg->sirfsoc_int_st_reg, 531 - uint_st->sirfsoc_rx_done); 532 - if (!sirfport->is_atlas7) 533 - wr_regl(port, ureg->sirfsoc_int_en_reg, 534 - rd_regl(port, ureg->sirfsoc_int_en_reg) & 535 - ~(uint_en->sirfsoc_rx_done_en)); 536 - else 537 - wr_regl(port, SIRFUART_INT_EN_CLR, 538 - uint_en->sirfsoc_rx_done_en); 539 - sirfsoc_uart_start_next_rx_dma(port); 540 - } else { 541 - wr_regl(port, ureg->sirfsoc_int_st_reg, 542 - uint_st->sirfsoc_rx_done); 543 - if (!sirfport->is_atlas7) 544 - wr_regl(port, ureg->sirfsoc_int_en_reg, 545 - rd_regl(port, ureg->sirfsoc_int_en_reg) | 546 - (uint_en->sirfsoc_rx_done_en)); 547 - else 548 - wr_regl(port, ureg->sirfsoc_int_en_reg, 549 - uint_en->sirfsoc_rx_done_en); 550 - } 551 - spin_unlock_irqrestore(&port->lock, flags); 552 - tty_flip_buffer_push(&port->state->port); 553 - } 554 - 555 - static void sirfsoc_uart_handle_rx_tmo(struct sirfsoc_uart_port *sirfport) 556 - { 557 - struct uart_port *port = &sirfport->port; 558 - struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 559 - struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 560 - struct dma_tx_state tx_state; 561 - dmaengine_tx_status(sirfport->rx_dma_chan, 562 - sirfport->rx_dma_items[sirfport->rx_issued].cookie, &tx_state); 563 - dmaengine_terminate_all(sirfport->rx_dma_chan); 564 - sirfport->rx_dma_items[sirfport->rx_issued].xmit.head = 565 - SIRFSOC_RX_DMA_BUF_SIZE - tx_state.residue; 566 - if (!sirfport->is_atlas7) 567 - wr_regl(port, ureg->sirfsoc_int_en_reg, 568 - rd_regl(port, ureg->sirfsoc_int_en_reg) & 569 - ~(uint_en->sirfsoc_rx_timeout_en)); 570 - else 571 - wr_regl(port, SIRFUART_INT_EN_CLR, 572 - uint_en->sirfsoc_rx_timeout_en); 573 - tasklet_schedule(&sirfport->rx_tmo_process_tasklet); 574 - } 575 - 576 - static void sirfsoc_uart_handle_rx_done(struct sirfsoc_uart_port *sirfport) 577 - { 578 - struct uart_port *port = &sirfport->port; 579 - struct sirfsoc_register *ureg = 
&sirfport->uart_reg->uart_reg; 580 - struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 581 - struct sirfsoc_int_status *uint_st = &sirfport->uart_reg->uart_int_st; 582 - 583 - sirfsoc_uart_pio_rx_chars(port, 4 - sirfport->rx_io_count); 584 - if (sirfport->rx_io_count == 4) { 585 - sirfport->rx_io_count = 0; 586 - if (!sirfport->is_atlas7) 587 - wr_regl(port, ureg->sirfsoc_int_en_reg, 588 - rd_regl(port, ureg->sirfsoc_int_en_reg) & 589 - ~(uint_en->sirfsoc_rx_done_en)); 590 - else 591 - wr_regl(port, SIRFUART_INT_EN_CLR, 592 - uint_en->sirfsoc_rx_done_en); 593 - wr_regl(port, ureg->sirfsoc_int_st_reg, 594 - uint_st->sirfsoc_rx_timeout); 595 - sirfsoc_uart_start_next_rx_dma(port); 596 - } 597 - } 598 - 599 481 static irqreturn_t sirfsoc_uart_isr(int irq, void *dev_id) 600 482 { 601 483 unsigned long intr_status; ··· 480 628 intr_status = rd_regl(port, ureg->sirfsoc_int_st_reg); 481 629 wr_regl(port, ureg->sirfsoc_int_st_reg, intr_status); 482 630 intr_status &= rd_regl(port, ureg->sirfsoc_int_en_reg); 483 - if (unlikely(intr_status & (SIRFUART_ERR_INT_STAT(port, uint_st)))) { 631 + if (unlikely(intr_status & (SIRFUART_ERR_INT_STAT(uint_st, 632 + sirfport->uart_reg->uart_type)))) { 484 633 if (intr_status & uint_st->sirfsoc_rxd_brk) { 485 634 port->icount.brk++; 486 635 if (uart_handle_break(port)) 487 636 goto recv_char; 488 637 } 489 - if (intr_status & uint_st->sirfsoc_rx_oflow) 638 + if (intr_status & uint_st->sirfsoc_rx_oflow) { 490 639 port->icount.overrun++; 640 + flag = TTY_OVERRUN; 641 + } 491 642 if (intr_status & uint_st->sirfsoc_frm_err) { 492 643 port->icount.frame++; 493 644 flag = TTY_FRAME; 494 645 } 495 - if (intr_status & uint_st->sirfsoc_parity_err) 646 + if (intr_status & uint_st->sirfsoc_parity_err) { 647 + port->icount.parity++; 496 648 flag = TTY_PARITY; 649 + } 497 650 wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_RESET); 498 651 wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 499 652 wr_regl(port, ureg->sirfsoc_rx_fifo_op, 
SIRFUART_FIFO_START); ··· 519 662 uart_handle_cts_change(port, cts_status); 520 663 wake_up_interruptible(&state->port.delta_msr_wait); 521 664 } 522 - if (sirfport->rx_dma_chan) { 523 - if (intr_status & uint_st->sirfsoc_rx_timeout) 524 - sirfsoc_uart_handle_rx_tmo(sirfport); 525 - if (intr_status & uint_st->sirfsoc_rx_done) 526 - sirfsoc_uart_handle_rx_done(sirfport); 527 - } else { 528 - if (intr_status & SIRFUART_RX_IO_INT_ST(uint_st)) 529 - sirfsoc_uart_pio_rx_chars(port, 530 - SIRFSOC_UART_IO_RX_MAX_CNT); 665 + if (!sirfport->rx_dma_chan && 666 + (intr_status & SIRFUART_RX_IO_INT_ST(uint_st))) { 667 + /* 668 + * The chip raises RX_TIMEOUT interrupts continuously while 669 + * the RXFIFO sits empty, but not while data keeps arriving 670 + * within the timeout window, so the original RX_TIMEOUT 671 + * method generated many useless interrupts on an empty 672 + * RXFIFO. RX_DONE fires once a byte reaches the RXFIFO: 673 + * use RX_DONE to wait for data to arrive, RX_THD/RX_FULL 674 + * for bulk reception, and RX_TIMEOUT for the trailing 675 + * bytes.
676 + */ 677 + if (intr_status & uint_st->sirfsoc_rx_done) { 678 + if (!sirfport->is_atlas7) { 679 + wr_regl(port, ureg->sirfsoc_int_en_reg, 680 + rd_regl(port, ureg->sirfsoc_int_en_reg) 681 + & ~(uint_en->sirfsoc_rx_done_en)); 682 + wr_regl(port, ureg->sirfsoc_int_en_reg, 683 + rd_regl(port, ureg->sirfsoc_int_en_reg) 684 + | (uint_en->sirfsoc_rx_timeout_en)); 685 + } else { 686 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, 687 + uint_en->sirfsoc_rx_done_en); 688 + wr_regl(port, ureg->sirfsoc_int_en_reg, 689 + uint_en->sirfsoc_rx_timeout_en); 690 + } 691 + } else { 692 + if (intr_status & uint_st->sirfsoc_rx_timeout) { 693 + if (!sirfport->is_atlas7) { 694 + wr_regl(port, ureg->sirfsoc_int_en_reg, 695 + rd_regl(port, ureg->sirfsoc_int_en_reg) 696 + & ~(uint_en->sirfsoc_rx_timeout_en)); 697 + wr_regl(port, ureg->sirfsoc_int_en_reg, 698 + rd_regl(port, ureg->sirfsoc_int_en_reg) 699 + | (uint_en->sirfsoc_rx_done_en)); 700 + } else { 701 + wr_regl(port, 702 + ureg->sirfsoc_int_en_clr_reg, 703 + uint_en->sirfsoc_rx_timeout_en); 704 + wr_regl(port, ureg->sirfsoc_int_en_reg, 705 + uint_en->sirfsoc_rx_done_en); 706 + } 707 + } 708 + sirfsoc_uart_pio_rx_chars(port, port->fifosize); 709 + } 531 710 } 532 711 spin_unlock(&port->lock); 533 712 tty_flip_buffer_push(&state->port); ··· 577 684 return IRQ_HANDLED; 578 685 } else { 579 686 sirfsoc_uart_pio_tx_chars(sirfport, 580 - SIRFSOC_UART_IO_TX_REASONABLE_CNT); 687 + port->fifosize); 581 688 if ((uart_circ_empty(xmit)) && 582 689 (rd_regl(port, ureg->sirfsoc_tx_fifo_status) & 583 - ufifo_st->ff_empty(port->line))) 690 + ufifo_st->ff_empty(port))) 584 691 sirfsoc_uart_stop_tx(port); 585 692 } 586 693 } ··· 590 697 return IRQ_HANDLED; 591 698 } 592 699 593 - static void sirfsoc_uart_rx_dma_complete_tl(unsigned long param) 594 - { 595 - struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 596 - struct uart_port *port = &sirfport->port; 597 - struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 598 - 
struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 599 - unsigned long flags; 600 - struct dma_tx_state tx_state; 601 - spin_lock_irqsave(&port->lock, flags); 602 - while (DMA_COMPLETE == dmaengine_tx_status(sirfport->rx_dma_chan, 603 - sirfport->rx_dma_items[sirfport->rx_completed].cookie, &tx_state)) { 604 - sirfsoc_uart_insert_rx_buf_to_tty(sirfport, 605 - SIRFSOC_RX_DMA_BUF_SIZE); 606 - if (rd_regl(port, ureg->sirfsoc_int_en_reg) & 607 - uint_en->sirfsoc_rx_timeout_en) 608 - sirfsoc_rx_submit_one_dma_desc(port, 609 - sirfport->rx_completed++); 610 - else 611 - sirfport->rx_completed++; 612 - sirfport->rx_completed %= SIRFSOC_RX_LOOP_BUF_CNT; 613 - } 614 - spin_unlock_irqrestore(&port->lock, flags); 615 - tty_flip_buffer_push(&port->state->port); 616 - } 617 - 618 700 static void sirfsoc_uart_rx_dma_complete_callback(void *param) 619 701 { 620 - struct sirfsoc_uart_port *sirfport = (struct sirfsoc_uart_port *)param; 621 - unsigned long flags; 622 - 623 - spin_lock_irqsave(&sirfport->port.lock, flags); 624 - sirfport->rx_issued++; 625 - sirfport->rx_issued %= SIRFSOC_RX_LOOP_BUF_CNT; 626 - tasklet_schedule(&sirfport->rx_dma_complete_tasklet); 627 - spin_unlock_irqrestore(&sirfport->port.lock, flags); 628 702 } 629 703 630 704 /* submit rx dma task into dmaengine */ ··· 600 740 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 601 741 struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 602 742 struct sirfsoc_int_en *uint_en = &sirfport->uart_reg->uart_int_en; 603 - int i; 604 743 sirfport->rx_io_count = 0; 605 744 wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 606 745 rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) & 607 746 ~SIRFUART_IO_MODE); 608 - for (i = 0; i < SIRFSOC_RX_LOOP_BUF_CNT; i++) 609 - sirfsoc_rx_submit_one_dma_desc(port, i); 610 - sirfport->rx_completed = sirfport->rx_issued = 0; 747 + sirfport->rx_dma_items.xmit.tail = 748 + sirfport->rx_dma_items.xmit.head = 0; 749 + sirfport->rx_dma_items.desc = 750 + 
dmaengine_prep_dma_cyclic(sirfport->rx_dma_chan, 751 + sirfport->rx_dma_items.dma_addr, SIRFSOC_RX_DMA_BUF_SIZE, 752 + SIRFSOC_RX_DMA_BUF_SIZE / 2, 753 + DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT); 754 + if (IS_ERR_OR_NULL(sirfport->rx_dma_items.desc)) { 755 + dev_err(port->dev, "DMA slave single fail\n"); 756 + return; 757 + } 758 + sirfport->rx_dma_items.desc->callback = 759 + sirfsoc_uart_rx_dma_complete_callback; 760 + sirfport->rx_dma_items.desc->callback_param = sirfport; 761 + sirfport->rx_dma_items.cookie = 762 + dmaengine_submit(sirfport->rx_dma_items.desc); 763 + dma_async_issue_pending(sirfport->rx_dma_chan); 611 764 if (!sirfport->is_atlas7) 612 765 wr_regl(port, ureg->sirfsoc_int_en_reg, 613 766 rd_regl(port, ureg->sirfsoc_int_en_reg) | 614 - SIRFUART_RX_DMA_INT_EN(port, uint_en)); 767 + SIRFUART_RX_DMA_INT_EN(uint_en, 768 + sirfport->uart_reg->uart_type)); 615 769 else 616 770 wr_regl(port, ureg->sirfsoc_int_en_reg, 617 - SIRFUART_RX_DMA_INT_EN(port, uint_en)); 771 + SIRFUART_RX_DMA_INT_EN(uint_en, 772 + sirfport->uart_reg->uart_type)); 618 773 } 619 774 620 775 static void sirfsoc_uart_start_rx(struct uart_port *port) ··· 648 773 if (!sirfport->is_atlas7) 649 774 wr_regl(port, ureg->sirfsoc_int_en_reg, 650 775 rd_regl(port, ureg->sirfsoc_int_en_reg) | 651 - SIRFUART_RX_IO_INT_EN(port, uint_en)); 776 + SIRFUART_RX_IO_INT_EN(uint_en, 777 + sirfport->uart_reg->uart_type)); 652 778 else 653 779 wr_regl(port, ureg->sirfsoc_int_en_reg, 654 - SIRFUART_RX_IO_INT_EN(port, uint_en)); 780 + SIRFUART_RX_IO_INT_EN(uint_en, 781 + sirfport->uart_reg->uart_type)); 655 782 } 656 783 } 657 784 ··· 666 789 unsigned long ioclk_div = 0; 667 790 unsigned long temp_delta; 668 791 669 - for (sample_div = SIRF_MIN_SAMPLE_DIV; 792 + for (sample_div = SIRF_USP_MIN_SAMPLE_DIV; 670 793 sample_div <= SIRF_MAX_SAMPLE_DIV; sample_div++) { 671 794 temp_delta = ioclk_rate - 672 795 (ioclk_rate + (set_rate * sample_div) / 2) ··· 787 910 config_reg |= SIRFUART_STICK_BIT_MARK; 788 911 else 789 
912 config_reg |= SIRFUART_STICK_BIT_SPACE; 790 - } else if (termios->c_cflag & PARODD) { 791 - config_reg |= SIRFUART_STICK_BIT_ODD; 792 913 } else { 793 - config_reg |= SIRFUART_STICK_BIT_EVEN; 914 + if (termios->c_cflag & PARODD) 915 + config_reg |= SIRFUART_STICK_BIT_ODD; 916 + else 917 + config_reg |= SIRFUART_STICK_BIT_EVEN; 794 918 } 795 919 } 796 920 } else { ··· 854 976 wr_regl(port, ureg->sirfsoc_tx_fifo_op, 855 977 (txfifo_op_reg & ~SIRFUART_FIFO_START)); 856 978 if (sirfport->uart_reg->uart_type == SIRF_REAL_UART) { 857 - config_reg |= SIRFUART_RECV_TIMEOUT(port, rx_time_out); 979 + config_reg |= SIRFUART_UART_RECV_TIMEOUT(rx_time_out); 858 980 wr_regl(port, ureg->sirfsoc_line_ctrl, config_reg); 859 981 } else { 860 982 /*tx frame ctrl*/ ··· 877 999 wr_regl(port, ureg->sirfsoc_rx_frame_ctrl, len_val); 878 1000 /*async param*/ 879 1001 wr_regl(port, ureg->sirfsoc_async_param_reg, 880 - (SIRFUART_RECV_TIMEOUT(port, rx_time_out)) | 1002 + (SIRFUART_USP_RECV_TIMEOUT(rx_time_out)) | 881 1003 (sample_div_reg & SIRFSOC_USP_ASYNC_DIV2_MASK) << 882 1004 SIRFSOC_USP_ASYNC_DIV2_OFFSET); 883 1005 } ··· 889 1011 wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, SIRFUART_DMA_MODE); 890 1012 else 891 1013 wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, SIRFUART_IO_MODE); 1014 + sirfport->rx_period_time = 20000000; 892 1015 /* Reset Rx/Tx FIFO Threshold level for proper baudrate */ 893 1016 if (set_baud < 1000000) 894 1017 threshold_div = 1; ··· 911 1032 unsigned int oldstate) 912 1033 { 913 1034 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 914 - if (!state) { 915 - if (sirfport->is_bt_uart) { 916 - clk_prepare_enable(sirfport->clk_noc); 917 - clk_prepare_enable(sirfport->clk_general); 918 - } 1035 + if (!state) 919 1036 clk_prepare_enable(sirfport->clk); 920 - } else { 1037 + else 921 1038 clk_disable_unprepare(sirfport->clk); 922 - if (sirfport->is_bt_uart) { 923 - clk_disable_unprepare(sirfport->clk_general); 924 - clk_disable_unprepare(sirfport->clk_noc); 925 - } 
926 - } 927 1039 } 928 1040 929 1041 static int sirfsoc_uart_startup(struct uart_port *port) ··· 934 1064 index, port->irq); 935 1065 goto irq_err; 936 1066 } 937 - 938 1067 /* initial hardware settings */ 939 1068 wr_regl(port, ureg->sirfsoc_tx_dma_io_ctrl, 940 1069 rd_regl(port, ureg->sirfsoc_tx_dma_io_ctrl) | ··· 941 1072 wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 942 1073 rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) | 943 1074 SIRFUART_IO_MODE); 1075 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 1076 + rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) & 1077 + ~SIRFUART_RX_DMA_FLUSH); 944 1078 wr_regl(port, ureg->sirfsoc_tx_dma_io_len, 0); 945 1079 wr_regl(port, ureg->sirfsoc_rx_dma_io_len, 0); 946 1080 wr_regl(port, ureg->sirfsoc_tx_rx_en, SIRFUART_RX_EN | SIRFUART_TX_EN); ··· 952 1080 SIRFSOC_USP_ENDIAN_CTRL_LSBF | 953 1081 SIRFSOC_USP_EN); 954 1082 wr_regl(port, ureg->sirfsoc_tx_fifo_op, SIRFUART_FIFO_RESET); 955 - wr_regl(port, ureg->sirfsoc_tx_fifo_op, 0); 956 1083 wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_RESET); 957 1084 wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 958 1085 wr_regl(port, ureg->sirfsoc_tx_fifo_ctrl, SIRFUART_FIFO_THD(port)); ··· 981 1110 goto init_rx_err; 982 1111 } 983 1112 } 984 - 985 1113 enable_irq(port->irq); 1114 + if (sirfport->rx_dma_chan && !sirfport->is_hrt_enabled) { 1115 + sirfport->is_hrt_enabled = true; 1116 + sirfport->rx_period_time = 20000000; 1117 + sirfport->rx_dma_items.xmit.tail = 1118 + sirfport->rx_dma_items.xmit.head = 0; 1119 + hrtimer_start(&sirfport->hrt, 1120 + ns_to_ktime(sirfport->rx_period_time), 1121 + HRTIMER_MODE_REL); 1122 + } 986 1123 987 1124 return 0; 988 1125 init_rx_err: ··· 1006 1127 if (!sirfport->is_atlas7) 1007 1128 wr_regl(port, ureg->sirfsoc_int_en_reg, 0); 1008 1129 else 1009 - wr_regl(port, SIRFUART_INT_EN_CLR, ~0UL); 1130 + wr_regl(port, ureg->sirfsoc_int_en_clr_reg, ~0UL); 1010 1131 1011 1132 free_irq(port->irq, sirfport); 1012 1133 if (sirfport->ms_enabled) ··· 1018 1139 } 1019 1140 
if (sirfport->tx_dma_chan) 1020 1141 sirfport->tx_dma_state = TX_DMA_IDLE; 1142 + if (sirfport->rx_dma_chan && sirfport->is_hrt_enabled) { 1143 + while ((rd_regl(port, ureg->sirfsoc_rx_fifo_status) & 1144 + SIRFUART_RX_FIFO_MASK) > 0) 1145 + ; 1146 + sirfport->is_hrt_enabled = false; 1147 + hrtimer_cancel(&sirfport->hrt); 1148 + } 1021 1149 } 1022 1150 1023 1151 static const char *sirfsoc_uart_type(struct uart_port *port) ··· 1082 1196 unsigned int bits = 8; 1083 1197 unsigned int parity = 'n'; 1084 1198 unsigned int flow = 'n'; 1085 - struct uart_port *port = &sirfsoc_uart_ports[co->index].port; 1086 - struct sirfsoc_uart_port *sirfport = to_sirfport(port); 1087 - struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 1199 + struct sirfsoc_uart_port *sirfport; 1200 + struct sirfsoc_register *ureg; 1088 1201 if (co->index < 0 || co->index >= SIRFSOC_UART_NR) 1089 - return -EINVAL; 1090 - 1091 - if (!port->mapbase) 1202 + co->index = 1; 1203 + sirfport = sirf_ports[co->index]; 1204 + if (!sirfport) 1205 + return -ENODEV; 1206 + ureg = &sirfport->uart_reg->uart_reg; 1207 + if (!sirfport->port.mapbase) 1092 1208 return -ENODEV; 1093 1209 1094 1210 /* enable usp in mode1 register */ 1095 1211 if (sirfport->uart_reg->uart_type == SIRF_USP_UART) 1096 - wr_regl(port, ureg->sirfsoc_mode1, SIRFSOC_USP_EN | 1212 + wr_regl(&sirfport->port, ureg->sirfsoc_mode1, SIRFSOC_USP_EN | 1097 1213 SIRFSOC_USP_ENDIAN_CTRL_LSBF); 1098 1214 if (options) 1099 1215 uart_parse_options(options, &baud, &parity, &bits, &flow); 1100 - port->cons = co; 1216 + sirfport->port.cons = co; 1101 1217 1102 1218 /* default console tx/rx transfer using io mode */ 1103 1219 sirfport->rx_dma_chan = NULL; 1104 1220 sirfport->tx_dma_chan = NULL; 1105 - return uart_set_options(port, co, baud, parity, bits, flow); 1221 + return uart_set_options(&sirfport->port, co, baud, parity, bits, flow); 1106 1222 } 1107 1223 1108 1224 static void sirfsoc_uart_console_putchar(struct uart_port *port, int ch) ··· 1112 
1224 struct sirfsoc_uart_port *sirfport = to_sirfport(port); 1113 1225 struct sirfsoc_register *ureg = &sirfport->uart_reg->uart_reg; 1114 1226 struct sirfsoc_fifo_status *ufifo_st = &sirfport->uart_reg->fifo_status; 1115 - while (rd_regl(port, 1116 - ureg->sirfsoc_tx_fifo_status) & ufifo_st->ff_full(port->line)) 1227 + while (rd_regl(port, ureg->sirfsoc_tx_fifo_status) & 1228 + ufifo_st->ff_full(port)) 1117 1229 cpu_relax(); 1118 1230 wr_regl(port, ureg->sirfsoc_tx_fifo_data, ch); 1119 1231 } ··· 1121 1233 static void sirfsoc_uart_console_write(struct console *co, const char *s, 1122 1234 unsigned int count) 1123 1235 { 1124 - struct uart_port *port = &sirfsoc_uart_ports[co->index].port; 1125 - uart_console_write(port, s, count, sirfsoc_uart_console_putchar); 1236 + struct sirfsoc_uart_port *sirfport = sirf_ports[co->index]; 1237 + 1238 + uart_console_write(&sirfport->port, s, count, 1239 + sirfsoc_uart_console_putchar); 1126 1240 } 1127 1241 1128 1242 static struct console sirfsoc_uart_console = { ··· 1159 1269 #endif 1160 1270 }; 1161 1271 1162 - static const struct of_device_id sirfsoc_uart_ids[] = { 1272 + static enum hrtimer_restart 1273 + sirfsoc_uart_rx_dma_hrtimer_callback(struct hrtimer *hrt) 1274 + { 1275 + struct sirfsoc_uart_port *sirfport; 1276 + struct uart_port *port; 1277 + int count, inserted; 1278 + struct dma_tx_state tx_state; 1279 + struct tty_struct *tty; 1280 + struct sirfsoc_register *ureg; 1281 + struct circ_buf *xmit; 1282 + 1283 + sirfport = container_of(hrt, struct sirfsoc_uart_port, hrt); 1284 + port = &sirfport->port; 1285 + inserted = 0; 1286 + tty = port->state->port.tty; 1287 + ureg = &sirfport->uart_reg->uart_reg; 1288 + xmit = &sirfport->rx_dma_items.xmit; 1289 + dmaengine_tx_status(sirfport->rx_dma_chan, 1290 + sirfport->rx_dma_items.cookie, &tx_state); 1291 + xmit->head = SIRFSOC_RX_DMA_BUF_SIZE - tx_state.residue; 1292 + count = CIRC_CNT_TO_END(xmit->head, xmit->tail, 1293 + SIRFSOC_RX_DMA_BUF_SIZE); 1294 + while (count > 0) { 
1295 + inserted = tty_insert_flip_string(tty->port, 1296 + (const unsigned char *)&xmit->buf[xmit->tail], count); 1297 + if (!inserted) 1298 + goto next_hrt; 1299 + port->icount.rx += inserted; 1300 + xmit->tail = (xmit->tail + inserted) & 1301 + (SIRFSOC_RX_DMA_BUF_SIZE - 1); 1302 + count = CIRC_CNT_TO_END(xmit->head, xmit->tail, 1303 + SIRFSOC_RX_DMA_BUF_SIZE); 1304 + tty_flip_buffer_push(tty->port); 1305 + } 1306 + /* 1307 + * if RX DMA buffer data have all push into tty buffer, and there is 1308 + * only little data(less than a dma transfer unit) left in rxfifo, 1309 + * fetch it out in pio mode and switch back to dma immediately 1310 + */ 1311 + if (!inserted && !count && 1312 + ((rd_regl(port, ureg->sirfsoc_rx_fifo_status) & 1313 + SIRFUART_RX_FIFO_MASK) > 0)) { 1314 + /* switch to pio mode */ 1315 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 1316 + rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) | 1317 + SIRFUART_IO_MODE); 1318 + while ((rd_regl(port, ureg->sirfsoc_rx_fifo_status) & 1319 + SIRFUART_RX_FIFO_MASK) > 0) { 1320 + if (sirfsoc_uart_pio_rx_chars(port, 16) > 0) 1321 + tty_flip_buffer_push(tty->port); 1322 + } 1323 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_RESET); 1324 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, 0); 1325 + wr_regl(port, ureg->sirfsoc_rx_fifo_op, SIRFUART_FIFO_START); 1326 + /* switch back to dma mode */ 1327 + wr_regl(port, ureg->sirfsoc_rx_dma_io_ctrl, 1328 + rd_regl(port, ureg->sirfsoc_rx_dma_io_ctrl) & 1329 + ~SIRFUART_IO_MODE); 1330 + } 1331 + next_hrt: 1332 + hrtimer_forward_now(hrt, ns_to_ktime(sirfport->rx_period_time)); 1333 + return HRTIMER_RESTART; 1334 + } 1335 + 1336 + static struct of_device_id sirfsoc_uart_ids[] = { 1163 1337 { .compatible = "sirf,prima2-uart", .data = &sirfsoc_uart,}, 1164 1338 { .compatible = "sirf,atlas7-uart", .data = &sirfsoc_uart}, 1165 1339 { .compatible = "sirf,prima2-usp-uart", .data = &sirfsoc_usp}, 1340 + { .compatible = "sirf,atlas7-usp-uart", .data = &sirfsoc_usp}, 1166 1341 {} 
1167 1342 }; 1168 1343 MODULE_DEVICE_TABLE(of, sirfsoc_uart_ids); ··· 1238 1283 struct uart_port *port; 1239 1284 struct resource *res; 1240 1285 int ret; 1241 - int i, j; 1242 1286 struct dma_slave_config slv_cfg = { 1243 1287 .src_maxburst = 2, 1244 1288 }; ··· 1247 1293 const struct of_device_id *match; 1248 1294 1249 1295 match = of_match_node(sirfsoc_uart_ids, pdev->dev.of_node); 1250 - if (of_property_read_u32(pdev->dev.of_node, "cell-index", &pdev->id)) { 1251 - dev_err(&pdev->dev, 1252 - "Unable to find cell-index in uart node.\n"); 1253 - ret = -EFAULT; 1296 + sirfport = devm_kzalloc(&pdev->dev, sizeof(*sirfport), GFP_KERNEL); 1297 + if (!sirfport) { 1298 + ret = -ENOMEM; 1254 1299 goto err; 1255 1300 } 1256 - if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-usp-uart")) 1257 - pdev->id += ((struct sirfsoc_uart_register *) 1258 - match->data)->uart_param.register_uart_nr; 1259 - sirfport = &sirfsoc_uart_ports[pdev->id]; 1301 + sirfport->port.line = of_alias_get_id(pdev->dev.of_node, "serial"); 1302 + sirf_ports[sirfport->port.line] = sirfport; 1303 + sirfport->port.iotype = UPIO_MEM; 1304 + sirfport->port.flags = UPF_BOOT_AUTOCONF; 1260 1305 port = &sirfport->port; 1261 1306 port->dev = &pdev->dev; 1262 1307 port->private_data = sirfport; ··· 1263 1310 1264 1311 sirfport->hw_flow_ctrl = of_property_read_bool(pdev->dev.of_node, 1265 1312 "sirf,uart-has-rtscts"); 1266 - if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-uart")) 1313 + if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-uart") || 1314 + of_device_is_compatible(pdev->dev.of_node, "sirf,atlas7-uart")) 1267 1315 sirfport->uart_reg->uart_type = SIRF_REAL_UART; 1268 - if (of_device_is_compatible(pdev->dev.of_node, "sirf,prima2-usp-uart")) { 1316 + if (of_device_is_compatible(pdev->dev.of_node, 1317 + "sirf,prima2-usp-uart") || of_device_is_compatible( 1318 + pdev->dev.of_node, "sirf,atlas7-usp-uart")) { 1269 1319 sirfport->uart_reg->uart_type = SIRF_USP_UART; 1270 1320 
if (!sirfport->hw_flow_ctrl) 1271 1321 goto usp_no_flow_control; ··· 1306 1350 gpio_direction_output(sirfport->rts_gpio, 1); 1307 1351 } 1308 1352 usp_no_flow_control: 1309 - if (of_device_is_compatible(pdev->dev.of_node, "sirf,atlas7-uart")) 1353 + if (of_device_is_compatible(pdev->dev.of_node, "sirf,atlas7-uart") || 1354 + of_device_is_compatible(pdev->dev.of_node, "sirf,atlas7-usp-uart")) 1310 1355 sirfport->is_atlas7 = true; 1311 1356 1312 1357 if (of_property_read_u32(pdev->dev.of_node, ··· 1325 1368 ret = -EFAULT; 1326 1369 goto err; 1327 1370 } 1328 - tasklet_init(&sirfport->rx_dma_complete_tasklet, 1329 - sirfsoc_uart_rx_dma_complete_tl, (unsigned long)sirfport); 1330 - tasklet_init(&sirfport->rx_tmo_process_tasklet, 1331 - sirfsoc_rx_tmo_process_tl, (unsigned long)sirfport); 1332 1371 port->mapbase = res->start; 1333 - port->membase = devm_ioremap(&pdev->dev, res->start, resource_size(res)); 1372 + port->membase = devm_ioremap(&pdev->dev, 1373 + res->start, resource_size(res)); 1334 1374 if (!port->membase) { 1335 1375 dev_err(&pdev->dev, "Cannot remap resource.\n"); 1336 1376 ret = -ENOMEM; ··· 1347 1393 goto err; 1348 1394 } 1349 1395 port->uartclk = clk_get_rate(sirfport->clk); 1350 - if (of_device_is_compatible(pdev->dev.of_node, "sirf,atlas7-bt-uart")) { 1351 - sirfport->clk_general = devm_clk_get(&pdev->dev, "general"); 1352 - if (IS_ERR(sirfport->clk_general)) { 1353 - ret = PTR_ERR(sirfport->clk_general); 1354 - goto err; 1355 - } 1356 - sirfport->clk_noc = devm_clk_get(&pdev->dev, "noc"); 1357 - if (IS_ERR(sirfport->clk_noc)) { 1358 - ret = PTR_ERR(sirfport->clk_noc); 1359 - goto err; 1360 - } 1361 - sirfport->is_bt_uart = true; 1362 - } else 1363 - sirfport->is_bt_uart = false; 1364 1396 1365 1397 port->ops = &sirfsoc_uart_ops; 1366 1398 spin_lock_init(&port->lock); ··· 1359 1419 } 1360 1420 1361 1421 sirfport->rx_dma_chan = dma_request_slave_channel(port->dev, "rx"); 1362 - for (i = 0; sirfport->rx_dma_chan && i < SIRFSOC_RX_LOOP_BUF_CNT; i++) { 
1363 - sirfport->rx_dma_items[i].xmit.buf = 1364 - dma_alloc_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 1365 - &sirfport->rx_dma_items[i].dma_addr, GFP_KERNEL); 1366 - if (!sirfport->rx_dma_items[i].xmit.buf) { 1367 - dev_err(port->dev, "Uart alloc bufa failed\n"); 1368 - ret = -ENOMEM; 1369 - goto alloc_coherent_err; 1370 - } 1371 - sirfport->rx_dma_items[i].xmit.head = 1372 - sirfport->rx_dma_items[i].xmit.tail = 0; 1422 + sirfport->rx_dma_items.xmit.buf = 1423 + dma_alloc_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 1424 + &sirfport->rx_dma_items.dma_addr, GFP_KERNEL); 1425 + if (!sirfport->rx_dma_items.xmit.buf) { 1426 + dev_err(port->dev, "Uart alloc bufa failed\n"); 1427 + ret = -ENOMEM; 1428 + goto alloc_coherent_err; 1373 1429 } 1430 + sirfport->rx_dma_items.xmit.head = 1431 + sirfport->rx_dma_items.xmit.tail = 0; 1374 1432 if (sirfport->rx_dma_chan) 1375 1433 dmaengine_slave_config(sirfport->rx_dma_chan, &slv_cfg); 1376 1434 sirfport->tx_dma_chan = dma_request_slave_channel(port->dev, "tx"); 1377 1435 if (sirfport->tx_dma_chan) 1378 1436 dmaengine_slave_config(sirfport->tx_dma_chan, &tx_slv_cfg); 1437 + if (sirfport->rx_dma_chan) { 1438 + hrtimer_init(&sirfport->hrt, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 1439 + sirfport->hrt.function = sirfsoc_uart_rx_dma_hrtimer_callback; 1440 + sirfport->is_hrt_enabled = false; 1441 + } 1379 1442 1380 1443 return 0; 1381 1444 alloc_coherent_err: 1382 - for (j = 0; j < i; j++) 1383 - dma_free_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 1384 - sirfport->rx_dma_items[j].xmit.buf, 1385 - sirfport->rx_dma_items[j].dma_addr); 1445 + dma_free_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 1446 + sirfport->rx_dma_items.xmit.buf, 1447 + sirfport->rx_dma_items.dma_addr); 1386 1448 dma_release_channel(sirfport->rx_dma_chan); 1387 1449 err: 1388 1450 return ret; ··· 1396 1454 struct uart_port *port = &sirfport->port; 1397 1455 uart_remove_one_port(&sirfsoc_uart_drv, port); 1398 1456 if (sirfport->rx_dma_chan) { 1399 - int i; 1400 
1457 dmaengine_terminate_all(sirfport->rx_dma_chan); 1401 1458 dma_release_channel(sirfport->rx_dma_chan); 1402 - for (i = 0; i < SIRFSOC_RX_LOOP_BUF_CNT; i++) 1403 - dma_free_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 1404 - sirfport->rx_dma_items[i].xmit.buf, 1405 - sirfport->rx_dma_items[i].dma_addr); 1459 + dma_free_coherent(port->dev, SIRFSOC_RX_DMA_BUF_SIZE, 1460 + sirfport->rx_dma_items.xmit.buf, 1461 + sirfport->rx_dma_items.dma_addr); 1406 1462 } 1407 1463 if (sirfport->tx_dma_chan) { 1408 1464 dmaengine_terminate_all(sirfport->tx_dma_chan);
+54 -66
drivers/tty/serial/sirfsoc_uart.h
··· 6 6 * Licensed under GPLv2 or later. 7 7 */ 8 8 #include <linux/bitops.h> 9 + #include <linux/log2.h> 10 + #include <linux/hrtimer.h> 9 11 struct sirfsoc_uart_param { 10 12 const char *uart_name; 11 13 const char *port_name; 12 - u32 uart_nr; 13 - u32 register_uart_nr; 14 14 }; 15 15 16 16 struct sirfsoc_register { ··· 21 21 u32 sirfsoc_tx_rx_en; 22 22 u32 sirfsoc_int_en_reg; 23 23 u32 sirfsoc_int_st_reg; 24 + u32 sirfsoc_int_en_clr_reg; 24 25 u32 sirfsoc_tx_dma_io_ctrl; 25 26 u32 sirfsoc_tx_dma_io_len; 26 27 u32 sirfsoc_tx_fifo_ctrl; ··· 46 45 u32 sirfsoc_async_param_reg; 47 46 }; 48 47 49 - typedef u32 (*fifo_full_mask)(int line); 50 - typedef u32 (*fifo_empty_mask)(int line); 48 + typedef u32 (*fifo_full_mask)(struct uart_port *port); 49 + typedef u32 (*fifo_empty_mask)(struct uart_port *port); 51 50 52 51 struct sirfsoc_fifo_status { 53 52 fifo_full_mask ff_full; ··· 106 105 enum sirfsoc_uart_type uart_type; 107 106 }; 108 107 109 - u32 usp_ff_full(int line) 108 + u32 uart_usp_ff_full_mask(struct uart_port *port) 110 109 { 111 - return 0x80; 110 + u32 full_bit; 111 + 112 + full_bit = ilog2(port->fifosize); 113 + return (1 << full_bit); 112 114 } 113 - u32 usp_ff_empty(int line) 115 + 116 + u32 uart_usp_ff_empty_mask(struct uart_port *port) 114 117 { 115 - return 0x100; 116 - } 117 - u32 uart_ff_full(int line) 118 - { 119 - return (line == 1) ? (0x20) : (0x80); 120 - } 121 - u32 uart_ff_empty(int line) 122 - { 123 - return (line == 1) ? 
(0x40) : (0x100); 118 + u32 empty_bit; 119 + 120 + empty_bit = ilog2(port->fifosize) + 1; 121 + return (1 << empty_bit); 124 122 } 125 123 struct sirfsoc_uart_register sirfsoc_usp = { 126 124 .uart_reg = { ··· 145 145 .sirfsoc_rx_fifo_op = 0x0130, 146 146 .sirfsoc_rx_fifo_status = 0x0134, 147 147 .sirfsoc_rx_fifo_data = 0x0138, 148 + .sirfsoc_int_en_clr_reg = 0x140, 148 149 }, 149 150 .uart_int_en = { 150 151 .sirfsoc_rx_done_en = BIT(0), ··· 178 177 .sirfsoc_rxd_brk = BIT(15), 179 178 }, 180 179 .fifo_status = { 181 - .ff_full = usp_ff_full, 182 - .ff_empty = usp_ff_empty, 180 + .ff_full = uart_usp_ff_full_mask, 181 + .ff_empty = uart_usp_ff_empty_mask, 183 182 }, 184 183 .uart_param = { 185 184 .uart_name = "ttySiRF", 186 185 .port_name = "sirfsoc-uart", 187 - .uart_nr = 2, 188 - .register_uart_nr = 3, 189 186 }, 190 187 }; 191 188 ··· 194 195 .sirfsoc_divisor = 0x0050, 195 196 .sirfsoc_int_en_reg = 0x0054, 196 197 .sirfsoc_int_st_reg = 0x0058, 198 + .sirfsoc_int_en_clr_reg = 0x0060, 197 199 .sirfsoc_tx_dma_io_ctrl = 0x0100, 198 200 .sirfsoc_tx_dma_io_len = 0x0104, 199 201 .sirfsoc_tx_fifo_ctrl = 0x0108, ··· 249 249 .sirfsoc_rts = BIT(15), 250 250 }, 251 251 .fifo_status = { 252 - .ff_full = uart_ff_full, 253 - .ff_empty = uart_ff_empty, 252 + .ff_full = uart_usp_ff_full_mask, 253 + .ff_empty = uart_usp_ff_empty_mask, 254 254 }, 255 255 .uart_param = { 256 256 .uart_name = "ttySiRF", 257 257 .port_name = "sirfsoc_uart", 258 - .uart_nr = 3, 259 - .register_uart_nr = 0, 260 258 }, 261 259 }; 262 260 /* uart io ctrl */ ··· 294 296 295 297 #define SIRFUART_IO_MODE BIT(0) 296 298 #define SIRFUART_DMA_MODE 0x0 299 + #define SIRFUART_RX_DMA_FLUSH 0x4 297 300 298 - /* Macro Specific*/ 299 - #define SIRFUART_INT_EN_CLR 0x0060 300 301 /* Baud Rate Calculation */ 302 + #define SIRF_USP_MIN_SAMPLE_DIV 0x1 301 303 #define SIRF_MIN_SAMPLE_DIV 0xf 302 304 #define SIRF_MAX_SAMPLE_DIV 0x3f 303 305 #define SIRF_IOCLK_DIV_MAX 0xffff ··· 324 326 #define 
SIRFSOC_USP_RX_CLK_DIVISOR_OFFSET 24 325 327 #define SIRFSOC_USP_ASYNC_DIV2_MASK 0x3f 326 328 #define SIRFSOC_USP_ASYNC_DIV2_OFFSET 16 327 - 329 + #define SIRFSOC_USP_LOOP_BACK_CTRL BIT(2) 328 330 /* USP-UART Common */ 329 331 #define SIRFSOC_UART_RX_TIMEOUT(br, to) (((br) * (((to) + 999) / 1000)) / 1000) 330 332 #define SIRFUART_RECV_TIMEOUT_VALUE(x) \ 331 333 (((x) > 0xFFFF) ? 0xFFFF : ((x) & 0xFFFF)) 332 - #define SIRFUART_RECV_TIMEOUT(port, x) \ 333 - (((port)->line > 2) ? (x & 0xFFFF) : ((x) & 0xFFFF) << 16) 334 + #define SIRFUART_USP_RECV_TIMEOUT(x) (x & 0xFFFF) 335 + #define SIRFUART_UART_RECV_TIMEOUT(x) ((x & 0xFFFF) << 16) 334 336 335 - #define SIRFUART_FIFO_THD(port) ((port->line) == 1 ? 16 : 64) 336 - #define SIRFUART_ERR_INT_STAT(port, unit_st) \ 337 + #define SIRFUART_FIFO_THD(port) (port->fifosize >> 1) 338 + #define SIRFUART_ERR_INT_STAT(unit_st, uart_type) \ 337 339 (uint_st->sirfsoc_rx_oflow | \ 338 340 uint_st->sirfsoc_frm_err | \ 339 341 uint_st->sirfsoc_rxd_brk | \ 340 - ((port->line > 2) ? 0 : uint_st->sirfsoc_parity_err)) 341 - #define SIRFUART_RX_IO_INT_EN(port, uint_en) \ 342 - (uint_en->sirfsoc_rx_timeout_en |\ 342 + ((uart_type != SIRF_REAL_UART) ? \ 343 + 0 : uint_st->sirfsoc_parity_err)) 344 + #define SIRFUART_RX_IO_INT_EN(uint_en, uart_type) \ 345 + (uint_en->sirfsoc_rx_done_en |\ 343 346 uint_en->sirfsoc_rxfifo_thd_en |\ 344 347 uint_en->sirfsoc_rxfifo_full_en |\ 345 348 uint_en->sirfsoc_frm_err_en |\ 346 349 uint_en->sirfsoc_rx_oflow_en |\ 347 350 uint_en->sirfsoc_rxd_brk_en |\ 348 - ((port->line > 2) ? 0 : uint_en->sirfsoc_parity_err_en)) 351 + ((uart_type != SIRF_REAL_UART) ? 
\ 352 + 0 : uint_en->sirfsoc_parity_err_en)) 349 353 #define SIRFUART_RX_IO_INT_ST(uint_st) \ 350 - (uint_st->sirfsoc_rx_timeout |\ 351 - uint_st->sirfsoc_rxfifo_thd |\ 352 - uint_st->sirfsoc_rxfifo_full) 354 + (uint_st->sirfsoc_rxfifo_thd |\ 355 + uint_st->sirfsoc_rxfifo_full|\ 356 + uint_st->sirfsoc_rx_done |\ 357 + uint_st->sirfsoc_rx_timeout) 353 358 #define SIRFUART_CTS_INT_ST(uint_st) (uint_st->sirfsoc_cts) 354 - #define SIRFUART_RX_DMA_INT_EN(port, uint_en) \ 355 - (uint_en->sirfsoc_rx_timeout_en |\ 356 - uint_en->sirfsoc_frm_err_en |\ 359 + #define SIRFUART_RX_DMA_INT_EN(uint_en, uart_type) \ 360 + (uint_en->sirfsoc_frm_err_en |\ 357 361 uint_en->sirfsoc_rx_oflow_en |\ 358 362 uint_en->sirfsoc_rxd_brk_en |\ 359 - ((port->line > 2) ? 0 : uint_en->sirfsoc_parity_err_en)) 363 + ((uart_type != SIRF_REAL_UART) ? \ 364 + 0 : uint_en->sirfsoc_parity_err_en)) 360 365 /* Generic Definitions */ 361 366 #define SIRFSOC_UART_NAME "ttySiRF" 362 367 #define SIRFSOC_UART_MAJOR 0 363 368 #define SIRFSOC_UART_MINOR 0 364 369 #define SIRFUART_PORT_NAME "sirfsoc-uart" 365 370 #define SIRFUART_MAP_SIZE 0x200 366 - #define SIRFSOC_UART_NR 6 371 + #define SIRFSOC_UART_NR 11 367 372 #define SIRFSOC_PORT_TYPE 0xa5 368 373 369 374 /* Uart Common Use Macro*/ 370 - #define SIRFSOC_RX_DMA_BUF_SIZE 256 375 + #define SIRFSOC_RX_DMA_BUF_SIZE (1024 * 32) 371 376 #define BYTES_TO_ALIGN(dma_addr) ((unsigned long)(dma_addr) & 0x3) 372 - #define LOOP_DMA_BUFA_FILL 1 373 - #define LOOP_DMA_BUFB_FILL 2 374 - #define TX_TRAN_PIO 1 375 - #define TX_TRAN_DMA 2 376 377 /* Uart Fifo Level Chk */ 377 378 #define SIRFUART_TX_FIFO_SC_OFFSET 0 378 379 #define SIRFUART_TX_FIFO_LC_OFFSET 10 ··· 386 389 #define SIRFUART_RX_FIFO_CHK_SC SIRFUART_TX_FIFO_CHK_SC 387 390 #define SIRFUART_RX_FIFO_CHK_LC SIRFUART_TX_FIFO_CHK_LC 388 391 #define SIRFUART_RX_FIFO_CHK_HC SIRFUART_TX_FIFO_CHK_HC 392 + #define SIRFUART_RX_FIFO_MASK 0x7f 389 393 /* Indicate how many buffers used */ 390 - #define SIRFSOC_RX_LOOP_BUF_CNT 
2 391 394 392 395 /* For Fast Baud Rate Calculation */ 393 396 struct sirfsoc_baudrate_to_regv { ··· 401 404 TX_DMA_PAUSE, 402 405 }; 403 406 404 - struct sirfsoc_loop_buffer { 407 + struct sirfsoc_rx_buffer { 405 408 struct circ_buf xmit; 406 409 dma_cookie_t cookie; 407 410 struct dma_async_tx_descriptor *desc; ··· 414 417 415 418 struct uart_port port; 416 419 struct clk *clk; 417 - /* UART6 for BT usage in A7DA platform need multi-clock source */ 418 - bool is_bt_uart; 419 - struct clk *clk_general; 420 - struct clk *clk_noc; 421 420 /* for SiRFatlas7, there are SET/CLR for UART_INT_EN */ 422 421 bool is_atlas7; 423 422 struct sirfsoc_uart_register *uart_reg; ··· 421 428 struct dma_chan *tx_dma_chan; 422 429 dma_addr_t tx_dma_addr; 423 430 struct dma_async_tx_descriptor *tx_dma_desc; 424 - struct tasklet_struct rx_dma_complete_tasklet; 425 - struct tasklet_struct rx_tmo_process_tasklet; 426 431 unsigned int rx_io_count; 427 432 unsigned long transfer_size; 428 433 enum sirfsoc_tx_state tx_dma_state; 429 434 unsigned int cts_gpio; 430 435 unsigned int rts_gpio; 431 436 432 - struct sirfsoc_loop_buffer rx_dma_items[SIRFSOC_RX_LOOP_BUF_CNT]; 433 - int rx_completed; 434 - int rx_issued; 437 + struct sirfsoc_rx_buffer rx_dma_items; 438 + struct hrtimer hrt; 439 + bool is_hrt_enabled; 440 + unsigned long rx_period_time; 435 441 }; 436 442 437 443 /* Register Access Control */ ··· 439 447 #define wr_regl(port, reg, val) __raw_writel(val, portaddr(port, reg)) 440 448 441 449 /* UART Port Mask */ 442 - #define SIRFUART_FIFOLEVEL_MASK(port) ((port->line == 1) ? (0x1f) : (0x7f)) 443 - #define SIRFUART_FIFOFULL_MASK(port) ((port->line == 1) ? (0x20) : (0x80)) 444 - #define SIRFUART_FIFOEMPTY_MASK(port) ((port->line == 1) ? 
(0x40) : (0x100)) 445 - 446 - /* I/O Mode */ 447 - #define SIRFSOC_UART_IO_RX_MAX_CNT 256 448 - #define SIRFSOC_UART_IO_TX_REASONABLE_CNT 256 450 + #define SIRFUART_FIFOLEVEL_MASK(port) ((port->fifosize - 1) & 0xFFF) 451 + #define SIRFUART_FIFOFULL_MASK(port) (port->fifosize & 0xFFF) 452 + #define SIRFUART_FIFOEMPTY_MASK(port) ((port->fifosize & 0xFFF) << 1)
+2 -1
drivers/tty/serial/xilinx_uartps.c
··· 1075 1075 writel(ch, port->membase + CDNS_UART_FIFO_OFFSET); 1076 1076 } 1077 1077 1078 - static void cdns_early_write(struct console *con, const char *s, unsigned n) 1078 + static void __init cdns_early_write(struct console *con, const char *s, 1079 + unsigned n) 1079 1080 { 1080 1081 struct earlycon_device *dev = con->data; 1081 1082
+10 -5
drivers/tty/synclink.c
··· 4410 4410 printk("Unloading %s: %s\n", driver_name, driver_version); 4411 4411 4412 4412 if (serial_driver) { 4413 - if ((rc = tty_unregister_driver(serial_driver))) 4413 + rc = tty_unregister_driver(serial_driver); 4414 + if (rc) 4414 4415 printk("%s(%d) failed to unregister tty driver err=%d\n", 4415 4416 __FILE__,__LINE__,rc); 4416 4417 put_tty_driver(serial_driver); ··· 7752 7751 printk("%s:hdlcdev_open(%s)\n",__FILE__,dev->name); 7753 7752 7754 7753 /* generic HDLC layer open processing */ 7755 - if ((rc = hdlc_open(dev))) 7754 + rc = hdlc_open(dev); 7755 + if (rc) 7756 7756 return rc; 7757 7757 7758 7758 /* arbitrate between network and tty opens */ ··· 8020 8018 8021 8019 /* allocate and initialize network and HDLC layer objects */ 8022 8020 8023 - if (!(dev = alloc_hdlcdev(info))) { 8021 + dev = alloc_hdlcdev(info); 8022 + if (!dev) { 8024 8023 printk(KERN_ERR "%s:hdlc device allocation failure\n",__FILE__); 8025 8024 return -ENOMEM; 8026 8025 } ··· 8042 8039 hdlc->xmit = hdlcdev_xmit; 8043 8040 8044 8041 /* register objects with HDLC layer */ 8045 - if ((rc = register_hdlc_device(dev))) { 8042 + rc = register_hdlc_device(dev); 8043 + if (rc) { 8046 8044 printk(KERN_WARNING "%s:unable to register hdlc device\n",__FILE__); 8047 8045 free_netdev(dev); 8048 8046 return rc; ··· 8079 8075 return -EIO; 8080 8076 } 8081 8077 8082 - if (!(info = mgsl_allocate_device())) { 8078 + info = mgsl_allocate_device(); 8079 + if (!info) { 8083 8080 printk("can't allocate device instance data.\n"); 8084 8081 return -EIO; 8085 8082 }
+8 -4
drivers/tty/synclinkmp.c
··· 1655 1655 printk("%s:hdlcdev_open(%s)\n",__FILE__,dev->name); 1656 1656 1657 1657 /* generic HDLC layer open processing */ 1658 - if ((rc = hdlc_open(dev))) 1658 + rc = hdlc_open(dev); 1659 + if (rc) 1659 1660 return rc; 1660 1661 1661 1662 /* arbitrate between network and tty opens */ ··· 1923 1922 1924 1923 /* allocate and initialize network and HDLC layer objects */ 1925 1924 1926 - if (!(dev = alloc_hdlcdev(info))) { 1925 + dev = alloc_hdlcdev(info); 1926 + if (!dev) { 1927 1927 printk(KERN_ERR "%s:hdlc device allocation failure\n",__FILE__); 1928 1928 return -ENOMEM; 1929 1929 } ··· 1945 1943 hdlc->xmit = hdlcdev_xmit; 1946 1944 1947 1945 /* register objects with HDLC layer */ 1948 - if ((rc = register_hdlc_device(dev))) { 1946 + rc = register_hdlc_device(dev); 1947 + if (rc) { 1949 1948 printk(KERN_WARNING "%s:unable to register hdlc device\n",__FILE__); 1950 1949 free_netdev(dev); 1951 1950 return rc; ··· 3923 3920 printk("Unloading %s %s\n", driver_name, driver_version); 3924 3921 3925 3922 if (serial_driver) { 3926 - if ((rc = tty_unregister_driver(serial_driver))) 3923 + rc = tty_unregister_driver(serial_driver); 3924 + if (rc) 3927 3925 printk("%s(%d) failed to unregister tty driver err=%d\n", 3928 3926 __FILE__,__LINE__,rc); 3929 3927 put_tty_driver(serial_driver);
+1 -18
drivers/tty/sysrq.c
··· 55 55 static int __read_mostly sysrq_enabled = CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE; 56 56 static bool __read_mostly sysrq_always_enabled; 57 57 58 - unsigned short platform_sysrq_reset_seq[] __weak = { KEY_RESERVED }; 59 - int sysrq_reset_downtime_ms __weak; 60 - 61 58 static bool sysrq_on(void) 62 59 { 63 60 return sysrq_enabled || sysrq_always_enabled; ··· 566 569 EXPORT_SYMBOL(handle_sysrq); 567 570 568 571 #ifdef CONFIG_INPUT 572 + static int sysrq_reset_downtime_ms; 569 573 570 574 /* Simple translation table for the SysRq keys */ 571 575 static const unsigned char sysrq_xlate[KEY_CNT] = ··· 947 949 948 950 static inline void sysrq_register_handler(void) 949 951 { 950 - unsigned short key; 951 952 int error; 952 - int i; 953 953 954 - /* First check if a __weak interface was instantiated. */ 955 - for (i = 0; i < ARRAY_SIZE(sysrq_reset_seq); i++) { 956 - key = platform_sysrq_reset_seq[i]; 957 - if (key == KEY_RESERVED || key > KEY_MAX) 958 - break; 959 - 960 - sysrq_reset_seq[sysrq_reset_seq_len++] = key; 961 - } 962 - 963 - /* 964 - * DT configuration takes precedence over anything that would 965 - * have been defined via the __weak interface. 966 - */ 967 954 sysrq_of_get_keyreset_config(); 968 955 969 956 error = input_register_handler(&sysrq_handler);
+2 -1
drivers/tty/tty_buffer.c
··· 286 286 change = (b->flags & TTYB_NORMAL) && (~flags & TTYB_NORMAL); 287 287 if (change || left < size) { 288 288 /* This is the slow path - looking for new buffers to use */ 289 - if ((n = tty_buffer_alloc(port, size)) != NULL) { 289 + n = tty_buffer_alloc(port, size); 290 + if (n != NULL) { 290 291 n->flags = flags; 291 292 buf->tail = n; 292 293 b->commit = b->used;
+13 -21
drivers/tty/tty_io.c
··· 235 235 /** 236 236 * tty_name - return tty naming 237 237 * @tty: tty structure 238 - * @buf: buffer for output 239 238 * 240 239 * Convert a tty structure into a name. The name reflects the kernel 241 240 * naming policy and if udev is in use may not reflect user space ··· 242 243 * Locking: none 243 244 */ 244 245 245 - char *tty_name(struct tty_struct *tty, char *buf) 246 + const char *tty_name(const struct tty_struct *tty) 246 247 { 247 248 if (!tty) /* Hmm. NULL pointer. That's fun. */ 248 - strcpy(buf, "NULL tty"); 249 - else 250 - strcpy(buf, tty->name); 251 - return buf; 249 + return "NULL tty"; 250 + return tty->name; 252 251 } 253 252 254 253 EXPORT_SYMBOL(tty_name); ··· 767 770 void tty_hangup(struct tty_struct *tty) 768 771 { 769 772 #ifdef TTY_DEBUG_HANGUP 770 - char buf[64]; 771 - printk(KERN_DEBUG "%s hangup...\n", tty_name(tty, buf)); 773 + printk(KERN_DEBUG "%s hangup...\n", tty_name(tty)); 772 774 #endif 773 775 schedule_work(&tty->hangup_work); 774 776 } ··· 786 790 void tty_vhangup(struct tty_struct *tty) 787 791 { 788 792 #ifdef TTY_DEBUG_HANGUP 789 - char buf[64]; 790 - 791 - printk(KERN_DEBUG "%s vhangup...\n", tty_name(tty, buf)); 793 + printk(KERN_DEBUG "%s vhangup...\n", tty_name(tty)); 792 794 #endif 793 795 __tty_hangup(tty, 0); 794 796 } ··· 825 831 static void tty_vhangup_session(struct tty_struct *tty) 826 832 { 827 833 #ifdef TTY_DEBUG_HANGUP 828 - char buf[64]; 829 - 830 - printk(KERN_DEBUG "%s vhangup session...\n", tty_name(tty, buf)); 834 + printk(KERN_DEBUG "%s vhangup session...\n", tty_name(tty)); 831 835 #endif 832 836 __tty_hangup(tty, 1); 833 837 } ··· 1761 1769 struct tty_struct *o_tty = NULL; 1762 1770 int do_sleep, final; 1763 1771 int idx; 1764 - char buf[64]; 1765 1772 long timeout = 0; 1766 1773 int once = 1; 1767 1774 ··· 1784 1793 1785 1794 #ifdef TTY_DEBUG_HANGUP 1786 1795 printk(KERN_DEBUG "%s: %s (tty count=%d)...\n", __func__, 1787 - tty_name(tty, buf), tty->count); 1796 + tty_name(tty), tty->count); 1788 
1797 #endif 1789 1798 1790 1799 if (tty->ops->close) ··· 1835 1844 if (once) { 1836 1845 once = 0; 1837 1846 printk(KERN_WARNING "%s: %s: read/write wait queue active!\n", 1838 - __func__, tty_name(tty, buf)); 1847 + __func__, tty_name(tty)); 1839 1848 } 1840 1849 schedule_timeout_killable(timeout); 1841 1850 if (timeout < 120 * HZ) ··· 1847 1856 if (o_tty) { 1848 1857 if (--o_tty->count < 0) { 1849 1858 printk(KERN_WARNING "%s: bad pty slave count (%d) for %s\n", 1850 - __func__, o_tty->count, tty_name(o_tty, buf)); 1859 + __func__, o_tty->count, tty_name(o_tty)); 1851 1860 o_tty->count = 0; 1852 1861 } 1853 1862 } 1854 1863 if (--tty->count < 0) { 1855 1864 printk(KERN_WARNING "%s: bad tty->count (%d) for %s\n", 1856 - __func__, tty->count, tty_name(tty, buf)); 1865 + __func__, tty->count, tty_name(tty)); 1857 1866 tty->count = 0; 1858 1867 } 1859 1868 ··· 1896 1905 return 0; 1897 1906 1898 1907 #ifdef TTY_DEBUG_HANGUP 1899 - printk(KERN_DEBUG "%s: %s: final close\n", __func__, tty_name(tty, buf)); 1908 + printk(KERN_DEBUG "%s: %s: final close\n", __func__, tty_name(tty)); 1900 1909 #endif 1901 1910 /* 1902 1911 * Ask the line discipline code to release its structures ··· 1907 1916 tty_flush_works(tty); 1908 1917 1909 1918 #ifdef TTY_DEBUG_HANGUP 1910 - printk(KERN_DEBUG "%s: %s: freeing structure...\n", __func__, tty_name(tty, buf)); 1919 + printk(KERN_DEBUG "%s: %s: freeing structure...\n", __func__, 1920 + tty_name(tty)); 1911 1921 #endif 1912 1922 /* 1913 1923 * The release_tty function takes care of the details of clearing
+1 -3
drivers/tty/tty_ioctl.c
··· 211 211 void tty_wait_until_sent(struct tty_struct *tty, long timeout) 212 212 { 213 213 #ifdef TTY_DEBUG_WAIT_UNTIL_SENT 214 - char buf[64]; 215 - 216 - printk(KERN_DEBUG "%s wait until sent...\n", tty_name(tty, buf)); 214 + printk(KERN_DEBUG "%s wait until sent...\n", tty_name(tty)); 217 215 #endif 218 216 if (!timeout) 219 217 timeout = MAX_SCHEDULE_TIMEOUT;
+3 -5
drivers/tty/tty_ldisc.c
··· 22 22 #undef LDISC_DEBUG_HANGUP 23 23 24 24 #ifdef LDISC_DEBUG_HANGUP 25 - #define tty_ldisc_debug(tty, f, args...) ({ \ 26 - char __b[64]; \ 27 - printk(KERN_DEBUG "%s: %s: " f, __func__, tty_name(tty, __b), ##args); \ 25 + #define tty_ldisc_debug(tty, f, args...) ({ \ 26 + printk(KERN_DEBUG "%s: %s: " f, __func__, tty_name(tty), ##args); \ 28 27 }) 29 28 #else 30 29 #define tty_ldisc_debug(tty, f, args...) ··· 482 483 483 484 static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old) 484 485 { 485 - char buf[64]; 486 486 struct tty_ldisc *new_ldisc; 487 487 int r; 488 488 ··· 502 504 if (r < 0) 503 505 panic("Couldn't open N_TTY ldisc for " 504 506 "%s --- error %d.", 505 - tty_name(tty, buf), r); 507 + tty_name(tty), r); 506 508 } 507 509 } 508 510
+2 -1
drivers/tty/tty_ldsem.c
··· 299 299 timeout = schedule_timeout(timeout); 300 300 raw_spin_lock_irq(&sem->wait_lock); 301 301 set_task_state(tsk, TASK_UNINTERRUPTIBLE); 302 - if ((locked = writer_trylock(sem))) 302 + locked = writer_trylock(sem); 303 + if (locked) 303 304 break; 304 305 } 305 306
+37 -23
drivers/tty/vt/consolemap.c
··· 261 261 int m; 262 262 if (glyph < 0 || glyph >= MAX_GLYPH) 263 263 return 0; 264 - else if (!(p = *conp->vc_uni_pagedir_loc)) 265 - return glyph; 266 - else if (use_unicode) { 267 - if (!p->inverse_trans_unicode) 264 + else { 265 + p = *conp->vc_uni_pagedir_loc; 266 + if (!p) 268 267 return glyph; 269 - else 270 - return p->inverse_trans_unicode[glyph]; 271 - } else { 272 - m = inv_translate[conp->vc_num]; 273 - if (!p->inverse_translations[m]) 274 - return glyph; 275 - else 276 - return p->inverse_translations[m][glyph]; 268 + else if (use_unicode) { 269 + if (!p->inverse_trans_unicode) 270 + return glyph; 271 + else 272 + return p->inverse_trans_unicode[glyph]; 273 + } else { 274 + m = inv_translate[conp->vc_num]; 275 + if (!p->inverse_translations[m]) 276 + return glyph; 277 + else 278 + return p->inverse_translations[m][glyph]; 279 + } 277 280 } 278 281 } 279 282 EXPORT_SYMBOL_GPL(inverse_translate); ··· 400 397 401 398 if (p == dflt) dflt = NULL; 402 399 for (i = 0; i < 32; i++) { 403 - if ((p1 = p->uni_pgdir[i]) != NULL) { 400 + p1 = p->uni_pgdir[i]; 401 + if (p1 != NULL) { 404 402 for (j = 0; j < 32; j++) 405 403 kfree(p1[j]); 406 404 kfree(p1); ··· 477 473 int i, n; 478 474 u16 **p1, *p2; 479 475 480 - if (!(p1 = p->uni_pgdir[n = unicode >> 11])) { 476 + p1 = p->uni_pgdir[n = unicode >> 11]; 477 + if (!p1) { 481 478 p1 = p->uni_pgdir[n] = kmalloc(32*sizeof(u16 *), GFP_KERNEL); 482 479 if (!p1) return -ENOMEM; 483 480 for (i = 0; i < 32; i++) 484 481 p1[i] = NULL; 485 482 } 486 483 487 - if (!(p2 = p1[n = (unicode >> 6) & 0x1f])) { 484 + p2 = p1[n = (unicode >> 6) & 0x1f]; 485 + if (!p2) { 488 486 p2 = p1[n] = kmalloc(64*sizeof(u16), GFP_KERNEL); 489 487 if (!p2) return -ENOMEM; 490 488 memset(p2, 0xff, 64*sizeof(u16)); /* No glyphs for the characters (yet) */ ··· 575 569 * entries from "p" (old) to "q" (new). 
576 570 */ 577 571 l = 0; /* unicode value */ 578 - for (i = 0; i < 32; i++) 579 - if ((p1 = p->uni_pgdir[i])) 580 - for (j = 0; j < 32; j++) 581 - if ((p2 = p1[j])) { 572 + for (i = 0; i < 32; i++) { 573 + p1 = p->uni_pgdir[i]; 574 + if (p1) 575 + for (j = 0; j < 32; j++) { 576 + p2 = p1[j]; 577 + if (p2) { 582 578 for (k = 0; k < 64; k++, l++) 583 579 if (p2[k] != 0xffff) { 584 580 /* ··· 601 593 /* Account for row of 64 empty entries */ 602 594 l += 64; 603 595 } 596 + } 604 597 else 605 598 /* Account for empty table */ 606 599 l += 32 * 64; 600 + } 607 601 608 602 /* 609 603 * Finished copying font table, set vc_uni_pagedir to new table ··· 745 735 ect = 0; 746 736 if (*vc->vc_uni_pagedir_loc) { 747 737 p = *vc->vc_uni_pagedir_loc; 748 - for (i = 0; i < 32; i++) 749 - if ((p1 = p->uni_pgdir[i])) 750 - for (j = 0; j < 32; j++) 751 - if ((p2 = *(p1++))) 738 + for (i = 0; i < 32; i++) { 739 + p1 = p->uni_pgdir[i]; 740 + if (p1) 741 + for (j = 0; j < 32; j++) { 742 + p2 = *(p1++); 743 + if (p2) 752 744 for (k = 0; k < 64; k++) { 753 745 if (*p2 < MAX_GLYPH && ect++ < ct) { 754 746 __put_user((u_short)((i<<11)+(j<<6)+k), ··· 761 749 } 762 750 p2++; 763 751 } 752 + } 753 + } 764 754 } 765 755 __put_user(ect, uct); 766 756 console_unlock();
+62 -30
drivers/tty/vt/vt.c
··· 108 108 #define CON_DRIVER_FLAG_MODULE 1 109 109 #define CON_DRIVER_FLAG_INIT 2 110 110 #define CON_DRIVER_FLAG_ATTR 4 111 + #define CON_DRIVER_FLAG_ZOMBIE 8 111 112 112 113 struct con_driver { 113 114 const struct consw *con; ··· 136 135 */ 137 136 #define DEFAULT_BELL_PITCH 750 138 137 #define DEFAULT_BELL_DURATION (HZ/8) 138 + #define DEFAULT_CURSOR_BLINK_MS 200 139 139 140 140 struct vc vc_cons [MAX_NR_CONSOLES]; 141 141 ··· 155 153 static void set_cursor(struct vc_data *vc); 156 154 static void hide_cursor(struct vc_data *vc); 157 155 static void console_callback(struct work_struct *ignored); 156 + static void con_driver_unregister_callback(struct work_struct *ignored); 158 157 static void blank_screen_t(unsigned long dummy); 159 158 static void set_palette(struct vc_data *vc); 160 159 ··· 185 182 core_param(consoleblank, blankinterval, int, 0444); 186 183 187 184 static DECLARE_WORK(console_work, console_callback); 185 + static DECLARE_WORK(con_driver_unregister_work, con_driver_unregister_callback); 188 186 189 187 /* 190 188 * fg_console is the current virtual console, ··· 1594 1590 case 15: /* activate the previous console */ 1595 1591 set_console(last_console); 1596 1592 break; 1593 + case 16: /* set cursor blink duration in msec */ 1594 + if (vc->vc_npar >= 1 && vc->vc_par[1] >= 50 && 1595 + vc->vc_par[1] <= USHRT_MAX) 1596 + vc->vc_cur_blink_ms = vc->vc_par[1]; 1597 + else 1598 + vc->vc_cur_blink_ms = DEFAULT_CURSOR_BLINK_MS; 1599 + break; 1597 1600 } 1598 1601 } 1599 1602 ··· 1728 1717 1729 1718 vc->vc_bell_pitch = DEFAULT_BELL_PITCH; 1730 1719 vc->vc_bell_duration = DEFAULT_BELL_DURATION; 1720 + vc->vc_cur_blink_ms = DEFAULT_CURSOR_BLINK_MS; 1731 1721 1732 1722 gotoxy(vc, 0, 0); 1733 1723 save_cur(vc); ··· 3204 3192 3205 3193 3206 3194 #ifdef CONFIG_VT_HW_CONSOLE_BINDING 3207 - static int con_is_graphics(const struct consw *csw, int first, int last) 3208 - { 3209 - int i, retval = 0; 3210 - 3211 - for (i = first; i <= last; i++) { 3212 - struct 
vc_data *vc = vc_cons[i].d; 3213 - 3214 - if (vc && vc->vc_mode == KD_GRAPHICS) { 3215 - retval = 1; 3216 - break; 3217 - } 3218 - } 3219 - 3220 - return retval; 3221 - } 3222 - 3223 3195 /* unlocked version of unbind_con_driver() */ 3224 3196 int do_unbind_con_driver(const struct consw *csw, int first, int last, int deflt) 3225 3197 { ··· 3289 3293 const struct consw *defcsw = NULL, *csw = NULL; 3290 3294 int i, more = 1, first = -1, last = -1, deflt = 0; 3291 3295 3292 - if (!con->con || !(con->flag & CON_DRIVER_FLAG_MODULE) || 3293 - con_is_graphics(con->con, con->first, con->last)) 3296 + if (!con->con || !(con->flag & CON_DRIVER_FLAG_MODULE)) 3294 3297 goto err; 3295 3298 3296 3299 csw = con->con; ··· 3340 3345 int i, more = 1, first = -1, last = -1, deflt = 0; 3341 3346 int ret; 3342 3347 3343 - if (!con->con || !(con->flag & CON_DRIVER_FLAG_MODULE) || 3344 - con_is_graphics(con->con, con->first, con->last)) 3348 + if (!con->con || !(con->flag & CON_DRIVER_FLAG_MODULE)) 3345 3349 goto err; 3346 3350 3347 3351 csw = con->con; ··· 3590 3596 for (i = 0; i < MAX_NR_CON_DRIVER; i++) { 3591 3597 con_driver = &registered_con_driver[i]; 3592 3598 3593 - if (con_driver->con == NULL) { 3599 + if (con_driver->con == NULL && 3600 + !(con_driver->flag & CON_DRIVER_FLAG_ZOMBIE)) { 3594 3601 con_driver->con = csw; 3595 3602 con_driver->desc = desc; 3596 3603 con_driver->node = i; ··· 3653 3658 struct con_driver *con_driver = &registered_con_driver[i]; 3654 3659 3655 3660 if (con_driver->con == csw) { 3656 - vtconsole_deinit_device(con_driver); 3657 - device_destroy(vtconsole_class, 3658 - MKDEV(0, con_driver->node)); 3661 + /* 3662 + * Defer the removal of the sysfs entries since that 3663 + * will acquire the kernfs s_active lock and we can't 3664 + * acquire this lock while holding the console lock: 3665 + * the unbind sysfs entry imposes already the opposite 3666 + * order. 
Reset con already here to prevent any later 3667 + * lookup to succeed and mark this slot as zombie, so 3668 + * it won't get reused until we complete the removal 3669 + * in the deferred work. 3670 + */ 3659 3671 con_driver->con = NULL; 3660 - con_driver->desc = NULL; 3661 - con_driver->dev = NULL; 3662 - con_driver->node = 0; 3663 - con_driver->flag = 0; 3664 - con_driver->first = 0; 3665 - con_driver->last = 0; 3672 + con_driver->flag = CON_DRIVER_FLAG_ZOMBIE; 3673 + schedule_work(&con_driver_unregister_work); 3674 + 3666 3675 return 0; 3667 3676 } 3668 3677 } ··· 3674 3675 return -ENODEV; 3675 3676 } 3676 3677 EXPORT_SYMBOL_GPL(do_unregister_con_driver); 3678 + 3679 + static void con_driver_unregister_callback(struct work_struct *ignored) 3680 + { 3681 + int i; 3682 + 3683 + console_lock(); 3684 + 3685 + for (i = 0; i < MAX_NR_CON_DRIVER; i++) { 3686 + struct con_driver *con_driver = &registered_con_driver[i]; 3687 + 3688 + if (!(con_driver->flag & CON_DRIVER_FLAG_ZOMBIE)) 3689 + continue; 3690 + 3691 + console_unlock(); 3692 + 3693 + vtconsole_deinit_device(con_driver); 3694 + device_destroy(vtconsole_class, MKDEV(0, con_driver->node)); 3695 + 3696 + console_lock(); 3697 + 3698 + if (WARN_ON_ONCE(con_driver->con)) 3699 + con_driver->con = NULL; 3700 + con_driver->desc = NULL; 3701 + con_driver->dev = NULL; 3702 + con_driver->node = 0; 3703 + WARN_ON_ONCE(con_driver->flag != CON_DRIVER_FLAG_ZOMBIE); 3704 + con_driver->flag = 0; 3705 + con_driver->first = 0; 3706 + con_driver->last = 0; 3707 + } 3708 + 3709 + console_unlock(); 3710 + } 3677 3711 3678 3712 /* 3679 3713 * If we support more console drivers, this function is used
+3 -2
drivers/video/console/fbcon.c
··· 402 402 struct fbcon_ops *ops = info->fbcon_par; 403 403 404 404 queue_work(system_power_efficient_wq, &info->queue); 405 - mod_timer(&ops->cursor_timer, jiffies + HZ/5); 405 + mod_timer(&ops->cursor_timer, jiffies + ops->cur_blink_jiffies); 406 406 } 407 407 408 408 static void fbcon_add_cursor_timer(struct fb_info *info) ··· 417 417 418 418 init_timer(&ops->cursor_timer); 419 419 ops->cursor_timer.function = cursor_timer_handler; 420 - ops->cursor_timer.expires = jiffies + HZ / 5; 420 + ops->cursor_timer.expires = jiffies + ops->cur_blink_jiffies; 421 421 ops->cursor_timer.data = (unsigned long ) info; 422 422 add_timer(&ops->cursor_timer); 423 423 ops->flags |= FBCON_FLAGS_CURSOR_TIMER; ··· 1309 1309 if (fbcon_is_inactive(vc, info) || vc->vc_deccm != 1) 1310 1310 return; 1311 1311 1312 + ops->cur_blink_jiffies = msecs_to_jiffies(vc->vc_cur_blink_ms); 1312 1313 if (vc->vc_cursor_type & 0x10) 1313 1314 fbcon_del_cursor_timer(info); 1314 1315 else
+1
drivers/video/console/fbcon.h
··· 70 70 struct fb_cursor cursor_state; 71 71 struct display *p; 72 72 int currcon; /* Current VC. */ 73 + int cur_blink_jiffies; 73 74 int cursor_flash; 74 75 int cursor_reset; 75 76 int blank_state;
+1
include/linux/console_struct.h
··· 104 104 unsigned int vc_resize_user; /* resize request from user */ 105 105 unsigned int vc_bell_pitch; /* Console bell pitch */ 106 106 unsigned int vc_bell_duration; /* Console bell duration */ 107 + unsigned short vc_cur_blink_ms; /* Cursor blink duration */ 107 108 struct vc_data **vc_display_fg; /* [!] Ptr to var holding fg console for this display */ 108 109 struct uni_pagedir *vc_uni_pagedir; 109 110 struct uni_pagedir **vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+3
include/linux/gsmmux.h include/uapi/linux/gsmmux.h
··· 1 1 #ifndef _LINUX_GSMMUX_H 2 2 #define _LINUX_GSMMUX_H 3 3 4 + #include <linux/if.h> 5 + #include <linux/ioctl.h> 6 + 4 7 struct gsm_config 5 8 { 6 9 unsigned int adaption;
+2
include/linux/pm.h
··· 529 529 }; 530 530 531 531 struct wakeup_source; 532 + struct wake_irq; 532 533 struct pm_domain_data; 533 534 534 535 struct pm_subsys_data { ··· 569 568 unsigned long timer_expires; 570 569 struct work_struct work; 571 570 wait_queue_head_t wait_queue; 571 + struct wake_irq *wakeirq; 572 572 atomic_t usage_count; 573 573 atomic_t child_count; 574 574 unsigned int disable_depth:3;
+52
include/linux/pm_wakeirq.h
··· 1 + /* 2 + * pm_wakeirq.h - Device wakeirq helper functions 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 9 + * kind, whether express or implied; without even the implied warranty 10 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + 14 + #ifndef _LINUX_PM_WAKEIRQ_H 15 + #define _LINUX_PM_WAKEIRQ_H 16 + 17 + #ifdef CONFIG_PM 18 + 19 + extern int dev_pm_set_wake_irq(struct device *dev, int irq); 20 + extern int dev_pm_set_dedicated_wake_irq(struct device *dev, 21 + int irq); 22 + extern void dev_pm_clear_wake_irq(struct device *dev); 23 + extern void dev_pm_enable_wake_irq(struct device *dev); 24 + extern void dev_pm_disable_wake_irq(struct device *dev); 25 + 26 + #else /* !CONFIG_PM */ 27 + 28 + static inline int dev_pm_set_wake_irq(struct device *dev, int irq) 29 + { 30 + return 0; 31 + } 32 + 33 + static inline int dev_pm_set_dedicated_wake_irq(struct device *dev, 34 + int irq) 35 + { 36 + return 0; 37 + } 38 + 39 + static inline void dev_pm_clear_wake_irq(struct device *dev) 40 + { 41 + } 42 + 43 + static inline void dev_pm_enable_wake_irq(struct device *dev) 44 + { 45 + } 46 + 47 + static inline void dev_pm_disable_wake_irq(struct device *dev) 48 + { 49 + } 50 + 51 + #endif /* CONFIG_PM */ 52 + #endif /* _LINUX_PM_WAKEIRQ_H */
+9
include/linux/pm_wakeup.h
··· 28 28 29 29 #include <linux/types.h> 30 30 31 + struct wake_irq; 32 + 31 33 /** 32 34 * struct wakeup_source - Representation of wakeup sources 33 35 * 36 + * @name: Name of the wakeup source 37 + * @entry: Wakeup source list entry 38 + * @lock: Wakeup source lock 39 + * @wakeirq: Optional device specific wakeirq 40 + * @timer: Wakeup timer list 41 + * @timer_expires: Wakeup timer expiration 34 42 * @total_time: Total time this wakeup source has been active. 35 43 * @max_time: Maximum time this wakeup source has been continuously active. 36 44 * @last_time: Monotonic clock when the wakeup source's was touched last time. ··· 55 47 const char *name; 56 48 struct list_head entry; 57 49 spinlock_t lock; 50 + struct wake_irq *wakeirq; 58 51 struct timer_list timer; 59 52 unsigned long timer_expires; 60 53 ktime_t total_time;
+3
include/linux/serial_8250.h
··· 12 12 #define _LINUX_SERIAL_8250_H 13 13 14 14 #include <linux/serial_core.h> 15 + #include <linux/serial_reg.h> 15 16 #include <linux/platform_device.h> 16 17 17 18 /* ··· 138 137 139 138 extern unsigned int serial8250_early_in(struct uart_port *port, int offset); 140 139 extern void serial8250_early_out(struct uart_port *port, int offset, int value); 140 + extern int early_serial8250_setup(struct earlycon_device *device, 141 + const char *options); 141 142 extern void serial8250_do_set_termios(struct uart_port *port, 142 143 struct ktermios *termios, struct ktermios *old); 143 144 extern int serial8250_do_startup(struct uart_port *port);
+1 -1
include/linux/serial_core.h
··· 35 35 #define uart_console(port) \ 36 36 ((port)->cons && (port)->cons->index == (port)->line) 37 37 #else 38 - #define uart_console(port) (0) 38 + #define uart_console(port) ({ (void)port; 0; }) 39 39 #endif 40 40 41 41 struct uart_port;
+10 -74
include/linux/serial_sci.h
··· 1 1 #ifndef __LINUX_SERIAL_SCI_H
2 2 #define __LINUX_SERIAL_SCI_H
3 3 
4 + #include <linux/bitops.h>
4 5 #include <linux/serial_core.h>
5 6 #include <linux/sh_dma.h>
6 7 
··· 11 10 
12 11 #define SCIx_NOT_SUPPORTED (-1)
13 12 
14 - /* SCSMR (Serial Mode Register) */
15 - #define SCSMR_CHR (1 << 6) /* 7-bit Character Length */
16 - #define SCSMR_PE (1 << 5) /* Parity Enable */
17 - #define SCSMR_ODD (1 << 4) /* Odd Parity */
18 - #define SCSMR_STOP (1 << 3) /* Stop Bit Length */
19 - #define SCSMR_CKS 0x0003 /* Clock Select */
20 - 
21 13 /* Serial Control Register (@ = not supported by all parts) */
22 - #define SCSCR_TIE (1 << 7) /* Transmit Interrupt Enable */
23 - #define SCSCR_RIE (1 << 6) /* Receive Interrupt Enable */
24 - #define SCSCR_TE (1 << 5) /* Transmit Enable */
25 - #define SCSCR_RE (1 << 4) /* Receive Enable */
26 - #define SCSCR_REIE (1 << 3) /* Receive Error Interrupt Enable @ */
27 - #define SCSCR_TOIE (1 << 2) /* Timeout Interrupt Enable @ */
28 - #define SCSCR_CKE1 (1 << 1) /* Clock Enable 1 */
29 - #define SCSCR_CKE0 (1 << 0) /* Clock Enable 0 */
30 - /* SCIFA/SCIFB only */
31 - #define SCSCR_TDRQE (1 << 15) /* Tx Data Transfer Request Enable */
32 - #define SCSCR_RDRQE (1 << 14) /* Rx Data Transfer Request Enable */
14 + #define SCSCR_TIE BIT(7) /* Transmit Interrupt Enable */
15 + #define SCSCR_RIE BIT(6) /* Receive Interrupt Enable */
16 + #define SCSCR_TE BIT(5) /* Transmit Enable */
17 + #define SCSCR_RE BIT(4) /* Receive Enable */
18 + #define SCSCR_REIE BIT(3) /* Receive Error Interrupt Enable @ */
19 + #define SCSCR_TOIE BIT(2) /* Timeout Interrupt Enable @ */
20 + #define SCSCR_CKE1 BIT(1) /* Clock Enable 1 */
21 + #define SCSCR_CKE0 BIT(0) /* Clock Enable 0 */
33 22 
34 - /* SCxSR (Serial Status Register) on SCI */
35 - #define SCI_TDRE 0x80 /* Transmit Data Register Empty */
36 - #define SCI_RDRF 0x40 /* Receive Data Register Full */
37 - #define SCI_ORER 0x20 /* Overrun Error */
38 - #define SCI_FER 0x10 /* Framing Error */
39 - #define SCI_PER 0x08 /* Parity Error */
40 - #define SCI_TEND 0x04 /* Transmit End */
41 - 
42 - #define SCI_DEFAULT_ERROR_MASK (SCI_PER | SCI_FER)
43 - 
44 - /* SCxSR (Serial Status Register) on SCIF, HSCIF */
45 - #define SCIF_ER 0x0080 /* Receive Error */
46 - #define SCIF_TEND 0x0040 /* Transmission End */
47 - #define SCIF_TDFE 0x0020 /* Transmit FIFO Data Empty */
48 - #define SCIF_BRK 0x0010 /* Break Detect */
49 - #define SCIF_FER 0x0008 /* Framing Error */
50 - #define SCIF_PER 0x0004 /* Parity Error */
51 - #define SCIF_RDF 0x0002 /* Receive FIFO Data Full */
52 - #define SCIF_DR 0x0001 /* Receive Data Ready */
53 - 
54 - #define SCIF_DEFAULT_ERROR_MASK (SCIF_PER | SCIF_FER | SCIF_ER | SCIF_BRK)
55 - 
56 - /* SCFCR (FIFO Control Register) */
57 - #define SCFCR_LOOP (1 << 0) /* Loopback Test */
58 - 
59 - /* SCSPTR (Serial Port Register), optional */
60 - #define SCSPTR_RTSIO (1 << 7) /* Serial Port RTS Pin Input/Output */
61 - #define SCSPTR_CTSIO (1 << 5) /* Serial Port CTS Pin Input/Output */
62 - #define SCSPTR_SPB2IO (1 << 1) /* Serial Port Break Input/Output */
63 - #define SCSPTR_SPB2DT (1 << 0) /* Serial Port Break Data */
64 - 
65 - /* HSSRR HSCIF */
66 - #define HSCIF_SRE 0x8000 /* Sampling Rate Register Enable */
67 23 
68 24 enum {
69 25 SCIx_PROBE_REGTYPE,
··· 40 82 SCIx_NR_REGTYPES,
41 83 };
42 84 
43 - /*
44 - * SCI register subset common for all port types.
45 - * Not all registers will exist on all parts.
46 - */
47 - enum {
48 - SCSMR, /* Serial Mode Register */
49 - SCBRR, /* Bit Rate Register */
50 - SCSCR, /* Serial Control Register */
51 - SCxSR, /* Serial Status Register */
52 - SCFCR, /* FIFO Control Register */
53 - SCFDR, /* FIFO Data Count Register */
54 - SCxTDR, /* Transmit (FIFO) Data Register */
55 - SCxRDR, /* Receive (FIFO) Data Register */
56 - SCLSR, /* Line Status Register */
57 - SCTFDR, /* Transmit FIFO Data Count Register */
58 - SCRFDR, /* Receive FIFO Data Count Register */
59 - SCSPTR, /* Serial Port Register */
60 - HSSRR, /* Sampling Rate Register */
61 - 
62 - SCIx_NR_REGS,
63 - };
64 - 
65 85 struct device;
66 86 
67 87 struct plat_sci_port_ops {
··· 49 113 /*
50 114 * Port-specific capabilities
51 115 */
52 - #define SCIx_HAVE_RTSCTS (1 << 0)
116 + #define SCIx_HAVE_RTSCTS BIT(0)
53 117 
54 118 /*
55 119 * Platform device specific platform_data struct
+1 -1
include/linux/tty.h
··· 422 422 
423 423 extern int tty_paranoia_check(struct tty_struct *tty, struct inode *inode,
424 424 const char *routine);
425 - extern char *tty_name(struct tty_struct *tty, char *buf);
425 + extern const char *tty_name(const struct tty_struct *tty);
426 426 extern void tty_wait_until_sent(struct tty_struct *tty, long timeout);
427 427 extern int tty_check_change(struct tty_struct *tty);
428 428 extern void __stop_tty(struct tty_struct *tty);
+1
include/uapi/linux/Kbuild
··· 138 138 header-y += gen_stats.h
139 139 header-y += gfs2_ondisk.h
140 140 header-y += gigaset_dev.h
141 + header-y += gsmmux.h
141 142 header-y += hdlcdrv.h
142 143 header-y += hdlc.h
143 144 header-y += hdreg.h
+1 -1
include/uapi/linux/tty_flags.h
··· 15 15 #define ASYNCB_FOURPORT 1 /* Set OU1, OUT2 per AST Fourport settings */
16 16 #define ASYNCB_SAK 2 /* Secure Attention Key (Orange book) */
17 17 #define ASYNCB_SPLIT_TERMIOS 3 /* [x] Separate termios for dialin/callout */
18 - #define ASYNCB_SPD_HI 4 /* Use 56000 instead of 38400 bps */
18 + #define ASYNCB_SPD_HI 4 /* Use 57600 instead of 38400 bps */
19 19 #define ASYNCB_SPD_VHI 5 /* Use 115200 instead of 38400 bps */
20 20 #define ASYNCB_SKIP_TEST 6 /* Skip UART test during autoconfiguration */
21 21 #define ASYNCB_AUTO_IRQ 7 /* Do automatic IRQ during