Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/cadence/macb.c

Overlapping changes in macb driver, mostly fixes and cleanups
in 'net' overlapping with the integration of at91_ether into
macb in 'net-next'.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3138 -1730
+27
Documentation/CodeOfConflict
···
 1 + Code of Conflict
 2 + ----------------
 3 +
 4 + The Linux kernel development effort is a very personal process compared
 5 + to "traditional" ways of developing software. Your code and ideas
 6 + behind it will be carefully reviewed, often resulting in critique and
 7 + criticism. The review will almost always require improvements to the
 8 + code before it can be included in the kernel. Know that this happens
 9 + because everyone involved wants to see the best possible solution for
10 + the overall success of Linux. This development process has been proven
11 + to create the most robust operating system kernel ever, and we do not
12 + want to do anything to cause the quality of submission and eventual
13 + result to ever decrease.
14 +
15 + If however, anyone feels personally abused, threatened, or otherwise
16 + uncomfortable due to this process, that is not acceptable. If so,
17 + please contact the Linux Foundation's Technical Advisory Board at
18 + <tab@lists.linux-foundation.org>, or the individual members, and they
19 + will work to resolve the issue to the best of their ability. For more
20 + information on who is on the Technical Advisory Board and what their
21 + role is, please see:
22 + http://www.linuxfoundation.org/programs/advisory-councils/tab
23 +
24 + As a reviewer of code, please strive to keep things civil and focused on
25 + the technical issues involved. We are all humans, and frustrations can
26 + be high on both sides of the process. Try to keep in mind the immortal
27 + words of Bill and Ted, "Be excellent to each other."
+1
Documentation/devicetree/bindings/i2c/i2c-imx.txt
···
 7  7 - "fsl,vf610-i2c" for I2C compatible with the one integrated on Vybrid vf610 SoC
 8  8 - reg : Should contain I2C/HS-I2C registers location and length
 9  9 - interrupts : Should contain I2C/HS-I2C interrupt
   10 + - clocks : Should contain the I2C/HS-I2C clock specifier
10 11
11 12 Optional properties:
12 13 - clock-frequency : Constains desired I2C/HS-I2C bus clock frequency in Hz.
+4 -1
Documentation/devicetree/bindings/net/apm-xgene-enet.txt
···
 4  4 APM X-Gene SoC.
 5  5
 6  6 Required properties for all the ethernet interfaces:
 7    - - compatible: Should be "apm,xgene-enet"
    7 + - compatible: Should state binding information from the following list,
    8 + - "apm,xgene-enet": RGMII based 1G interface
    9 + - "apm,xgene1-sgenet": SGMII based 1G interface
   10 + - "apm,xgene1-xgenet": XFI based 10G interface
 8 11 - reg: Address and length of the register set for the device. It contains the
 9 12 information of registers in the same order as described by reg-names
10 13 - reg-names: Should contain the register set names
+16
Documentation/devicetree/bindings/serial/snps-dw-apb-uart.txt
···
21 21 - reg-io-width : the size (in bytes) of the IO accesses that should be
22 22   performed on the device. If this property is not present then single byte
23 23   accesses are used.
   24 + - dcd-override : Override the DCD modem status signal. This signal will always
   25 +   be reported as active instead of being obtained from the modem status
   26 +   register. Define this if your serial port does not use this pin.
   27 + - dsr-override : Override the DSR modem status signal. This signal will always
   28 +   be reported as active instead of being obtained from the modem status
   29 +   register. Define this if your serial port does not use this pin.
   30 + - cts-override : Override the CTS modem status signal. This signal will always
   31 +   be reported as active instead of being obtained from the modem status
   32 +   register. Define this if your serial port does not use this pin.
   33 + - ri-override : Override the RI modem status signal. This signal will always be
   34 +   reported as inactive instead of being obtained from the modem status register.
   35 +   Define this if your serial port does not use this pin.
24 36
25 37 Example:
26 38
···
43 31   interrupts = <10>;
44 32   reg-shift = <2>;
45 33   reg-io-width = <4>;
   34 +   dcd-override;
   35 +   dsr-override;
   36 +   cts-override;
   37 +   ri-override;
46 38 };
47 39
48 40 Example with one clock:
+17 -5
Documentation/power/suspend-and-interrupts.txt
···
 40  40
 41  41 The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
 42  42 requesting a special-purpose interrupt. It causes suspend_device_irqs() to
 43     - leave the corresponding IRQ enabled so as to allow the interrupt to work all
 44     - the time as expected.
     43 + leave the corresponding IRQ enabled so as to allow the interrupt to work as
     44 + expected during the suspend-resume cycle, but does not guarantee that the
     45 + interrupt will wake the system from a suspended state -- for such cases it is
     46 + necessary to use enable_irq_wake().
 45  47
 46  48 Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
 47  49 user of it. Thus, if the IRQ is shared, all of the interrupt handlers installed
···
112 110
113 111 IRQF_NO_SUSPEND and enable_irq_wake()
114 112 -------------------------------------
115     - There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
116     - flag on the same IRQ.
    113 + There are very few valid reasons to use both enable_irq_wake() and the
    114 + IRQF_NO_SUSPEND flag on the same IRQ, and it is never valid to use both for the
    115 + same device.
117 116
118 117 First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
119 118 interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
···
123 120
124 121 Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
125 122 to individual interrupt handlers, so sharing an IRQ between a system wakeup
126     - interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
    123 + interrupt source and an IRQF_NO_SUSPEND interrupt source does not generally
    124 + make sense.
    125 +
    126 + In rare cases an IRQ can be shared between a wakeup device driver and an
    127 + IRQF_NO_SUSPEND user. In order for this to be safe, the wakeup device driver
    128 + must be able to discern spurious IRQs from genuine wakeup events (signalling
    129 + the latter to the core with pm_system_wakeup()), must use enable_irq_wake() to
    130 + ensure that the IRQ will function as a wakeup source, and must request the IRQ
    131 + with IRQF_COND_SUSPEND to tell the core that it meets these requirements. If
    132 + these requirements are not met, it is not valid to use IRQF_COND_SUSPEND.
+11 -2
MAINTAINERS
··· 2369 2369 2370 2370 CAN NETWORK LAYER 2371 2371 M: Oliver Hartkopp <socketcan@hartkopp.net> 2372 + M: Marc Kleine-Budde <mkl@pengutronix.de> 2372 2373 L: linux-can@vger.kernel.org 2373 - W: http://gitorious.org/linux-can 2374 + W: https://github.com/linux-can 2374 2375 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git 2375 2376 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git 2376 2377 S: Maintained ··· 2387 2386 M: Wolfgang Grandegger <wg@grandegger.com> 2388 2387 M: Marc Kleine-Budde <mkl@pengutronix.de> 2389 2388 L: linux-can@vger.kernel.org 2390 - W: http://gitorious.org/linux-can 2389 + W: https://github.com/linux-can 2391 2390 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git 2392 2391 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git 2393 2392 S: Maintained ··· 8479 8478 S: Supported 8480 8479 L: netdev@vger.kernel.org 8481 8480 F: drivers/net/ethernet/samsung/sxgbe/ 8481 + 8482 + SAMSUNG THERMAL DRIVER 8483 + M: Lukasz Majewski <l.majewski@samsung.com> 8484 + L: linux-pm@vger.kernel.org 8485 + L: linux-samsung-soc@vger.kernel.org 8486 + S: Supported 8487 + T: https://github.com/lmajewski/linux-samsung-thermal.git 8488 + F: drivers/thermal/samsung/ 8482 8489 8483 8490 SAMSUNG USB2 PHY DRIVER 8484 8491 M: Kamil Debski <k.debski@samsung.com>
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 0 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc3 5 5 NAME = Hurr durr I'ma sheep 6 6 7 7 # *DOCUMENTATION*
+7 -7
arch/arc/include/asm/processor.h
··· 47 47 /* Forward declaration, a strange C thing */ 48 48 struct task_struct; 49 49 50 - /* Return saved PC of a blocked thread */ 51 - unsigned long thread_saved_pc(struct task_struct *t); 52 - 53 50 #define task_pt_regs(p) \ 54 51 ((struct pt_regs *)(THREAD_SIZE + (void *)task_stack_page(p)) - 1) 55 52 ··· 69 72 #define release_segments(mm) do { } while (0) 70 73 71 74 #define KSTK_EIP(tsk) (task_pt_regs(tsk)->ret) 75 + #define KSTK_ESP(tsk) (task_pt_regs(tsk)->sp) 72 76 73 77 /* 74 78 * Where abouts of Task's sp, fp, blink when it was last seen in kernel mode. 75 79 * Look in process.c for details of kernel stack layout 76 80 */ 77 - #define KSTK_ESP(tsk) (tsk->thread.ksp) 81 + #define TSK_K_ESP(tsk) (tsk->thread.ksp) 78 82 79 - #define KSTK_REG(tsk, off) (*((unsigned int *)(KSTK_ESP(tsk) + \ 83 + #define TSK_K_REG(tsk, off) (*((unsigned int *)(TSK_K_ESP(tsk) + \ 80 84 sizeof(struct callee_regs) + off))) 81 85 82 - #define KSTK_BLINK(tsk) KSTK_REG(tsk, 4) 83 - #define KSTK_FP(tsk) KSTK_REG(tsk, 0) 86 + #define TSK_K_BLINK(tsk) TSK_K_REG(tsk, 4) 87 + #define TSK_K_FP(tsk) TSK_K_REG(tsk, 0) 88 + 89 + #define thread_saved_pc(tsk) TSK_K_BLINK(tsk) 84 90 85 91 extern void start_thread(struct pt_regs * regs, unsigned long pc, 86 92 unsigned long usp);
+37
arch/arc/include/asm/stacktrace.h
··· 1 + /* 2 + * Copyright (C) 2014-15 Synopsys, Inc. (www.synopsys.com) 3 + * Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + */ 9 + 10 + #ifndef __ASM_STACKTRACE_H 11 + #define __ASM_STACKTRACE_H 12 + 13 + #include <linux/sched.h> 14 + 15 + /** 16 + * arc_unwind_core - Unwind the kernel mode stack for an execution context 17 + * @tsk: NULL for current task, specific task otherwise 18 + * @regs: pt_regs used to seed the unwinder {SP, FP, BLINK, PC} 19 + * If NULL, use pt_regs of @tsk (if !NULL) otherwise 20 + * use the current values of {SP, FP, BLINK, PC} 21 + * @consumer_fn: Callback invoked for each frame unwound 22 + * Returns 0 to continue unwinding, -1 to stop 23 + * @arg: Arg to callback 24 + * 25 + * Returns the address of first function in stack 26 + * 27 + * Semantics: 28 + * - synchronous unwinding (e.g. dump_stack): @tsk NULL, @regs NULL 29 + * - Asynchronous unwinding of sleeping task: @tsk !NULL, @regs NULL 30 + * - Asynchronous unwinding of intr/excp etc: @tsk !NULL, @regs !NULL 31 + */ 32 + notrace noinline unsigned int arc_unwind_core( 33 + struct task_struct *tsk, struct pt_regs *regs, 34 + int (*consumer_fn) (unsigned int, void *), 35 + void *arg); 36 + 37 + #endif /* __ASM_STACKTRACE_H */
-23
arch/arc/kernel/process.c
··· 192 192 return 0; 193 193 } 194 194 195 - /* 196 - * API: expected by schedular Code: If thread is sleeping where is that. 197 - * What is this good for? it will be always the scheduler or ret_from_fork. 198 - * So we hard code that anyways. 199 - */ 200 - unsigned long thread_saved_pc(struct task_struct *t) 201 - { 202 - struct pt_regs *regs = task_pt_regs(t); 203 - unsigned long blink = 0; 204 - 205 - /* 206 - * If the thread being queried for in not itself calling this, then it 207 - * implies it is not executing, which in turn implies it is sleeping, 208 - * which in turn implies it got switched OUT by the schedular. 209 - * In that case, it's kernel mode blink can reliably retrieved as per 210 - * the picture above (right above pt_regs). 211 - */ 212 - if (t != current && t->state != TASK_RUNNING) 213 - blink = *((unsigned int *)regs - 1); 214 - 215 - return blink; 216 - } 217 - 218 195 int elf_check_arch(const struct elf32_hdr *x) 219 196 { 220 197 unsigned int eflags;
+17 -4
arch/arc/kernel/stacktrace.c
··· 43 43 struct pt_regs *regs, 44 44 struct unwind_frame_info *frame_info) 45 45 { 46 + /* 47 + * synchronous unwinding (e.g. dump_stack) 48 + * - uses current values of SP and friends 49 + */ 46 50 if (tsk == NULL && regs == NULL) { 47 51 unsigned long fp, sp, blink, ret; 48 52 frame_info->task = current; ··· 65 61 frame_info->regs.r63 = ret; 66 62 frame_info->call_frame = 0; 67 63 } else if (regs == NULL) { 64 + /* 65 + * Asynchronous unwinding of sleeping task 66 + * - Gets SP etc from task's pt_regs (saved bottom of kernel 67 + * mode stack of task) 68 + */ 68 69 69 70 frame_info->task = tsk; 70 71 71 - frame_info->regs.r27 = KSTK_FP(tsk); 72 - frame_info->regs.r28 = KSTK_ESP(tsk); 73 - frame_info->regs.r31 = KSTK_BLINK(tsk); 72 + frame_info->regs.r27 = TSK_K_FP(tsk); 73 + frame_info->regs.r28 = TSK_K_ESP(tsk); 74 + frame_info->regs.r31 = TSK_K_BLINK(tsk); 74 75 frame_info->regs.r63 = (unsigned int)__switch_to; 75 76 76 77 /* In the prologue of __switch_to, first FP is saved on stack ··· 92 83 frame_info->call_frame = 0; 93 84 94 85 } else { 86 + /* 87 + * Asynchronous unwinding of intr/exception 88 + * - Just uses the pt_regs passed 89 + */ 95 90 frame_info->task = tsk; 96 91 97 92 frame_info->regs.r27 = regs->fp; ··· 108 95 109 96 #endif 110 97 111 - static noinline unsigned int 98 + notrace noinline unsigned int 112 99 arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs, 113 100 int (*consumer_fn) (unsigned int, void *), void *arg) 114 101 {
+2
arch/arc/kernel/unaligned.c
··· 12 12 */ 13 13 14 14 #include <linux/types.h> 15 + #include <linux/perf_event.h> 15 16 #include <linux/ptrace.h> 16 17 #include <linux/uaccess.h> 17 18 #include <asm/disasm.h> ··· 254 253 } 255 254 } 256 255 256 + perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, address); 257 257 return 0; 258 258 259 259 fault:
+10 -2
arch/arc/mm/fault.c
··· 14 14 #include <linux/ptrace.h> 15 15 #include <linux/uaccess.h> 16 16 #include <linux/kdebug.h> 17 + #include <linux/perf_event.h> 17 18 #include <asm/pgalloc.h> 18 19 #include <asm/mmu.h> 19 20 ··· 140 139 return; 141 140 } 142 141 142 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); 143 + 143 144 if (likely(!(fault & VM_FAULT_ERROR))) { 144 145 if (flags & FAULT_FLAG_ALLOW_RETRY) { 145 146 /* To avoid updating stats twice for retry case */ 146 - if (fault & VM_FAULT_MAJOR) 147 + if (fault & VM_FAULT_MAJOR) { 147 148 tsk->maj_flt++; 148 - else 149 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, 150 + regs, address); 151 + } else { 149 152 tsk->min_flt++; 153 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, 154 + regs, address); 155 + } 150 156 151 157 if (fault & VM_FAULT_RETRY) { 152 158 flags &= ~FAULT_FLAG_ALLOW_RETRY;
+1 -1
arch/arm/include/asm/kvm_mmu.h
··· 207 207 208 208 bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached; 209 209 210 - VM_BUG_ON(size & PAGE_MASK); 210 + VM_BUG_ON(size & ~PAGE_MASK); 211 211 212 212 if (!need_flush && !icache_is_pipt()) 213 213 goto vipt_cache;
+1 -1
arch/arm/kvm/arm.c
··· 540 540 541 541 vcpu->mode = OUTSIDE_GUEST_MODE; 542 542 kvm_guest_exit(); 543 - trace_kvm_exit(*vcpu_pc(vcpu)); 543 + trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu)); 544 544 /* 545 545 * We may have taken a host interrupt in HYP mode (ie 546 546 * while executing the guest). This interrupt is still
+7 -3
arch/arm/kvm/trace.h
··· 25 25 ); 26 26 27 27 TRACE_EVENT(kvm_exit, 28 - TP_PROTO(unsigned long vcpu_pc), 29 - TP_ARGS(vcpu_pc), 28 + TP_PROTO(unsigned int exit_reason, unsigned long vcpu_pc), 29 + TP_ARGS(exit_reason, vcpu_pc), 30 30 31 31 TP_STRUCT__entry( 32 + __field( unsigned int, exit_reason ) 32 33 __field( unsigned long, vcpu_pc ) 33 34 ), 34 35 35 36 TP_fast_assign( 37 + __entry->exit_reason = exit_reason; 36 38 __entry->vcpu_pc = vcpu_pc; 37 39 ), 38 40 39 - TP_printk("PC: 0x%08lx", __entry->vcpu_pc) 41 + TP_printk("HSR_EC: 0x%04x, PC: 0x%08lx", 42 + __entry->exit_reason, 43 + __entry->vcpu_pc) 40 44 ); 41 45 42 46 TRACE_EVENT(kvm_guest_fault,
+1
arch/arm/mach-pxa/idp.c
··· 36 36 #include <linux/platform_data/video-pxafb.h> 37 37 #include <mach/bitfield.h> 38 38 #include <linux/platform_data/mmc-pxamci.h> 39 + #include <linux/smc91x.h> 39 40 40 41 #include "generic.h" 41 42 #include "devices.h"
+1 -1
arch/arm/mach-pxa/lpd270.c
··· 195 195 }; 196 196 197 197 struct smc91x_platdata smc91x_platdata = { 198 - .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT; 198 + .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT, 199 199 }; 200 200 201 201 static struct platform_device smc91x_device = {
+2 -2
arch/arm/mach-sa1100/neponset.c
··· 268 268 .id = 0, 269 269 .res = smc91x_resources, 270 270 .num_res = ARRAY_SIZE(smc91x_resources), 271 - .data = &smc91c_platdata, 272 - .size_data = sizeof(smc91c_platdata), 271 + .data = &smc91x_platdata, 272 + .size_data = sizeof(smc91x_platdata), 273 273 }; 274 274 int ret, irq; 275 275
+1 -1
arch/arm/mach-sa1100/pleb.c
··· 54 54 .num_resources = ARRAY_SIZE(smc91x_resources), 55 55 .resource = smc91x_resources, 56 56 .dev = { 57 - .platform_data = &smc91c_platdata, 57 + .platform_data = &smc91x_platdata, 58 58 }, 59 59 }; 60 60
+2 -2
arch/arm64/boot/dts/apm/apm-storm.dtsi
··· 622 622 }; 623 623 624 624 sgenet0: ethernet@1f210000 { 625 - compatible = "apm,xgene-enet"; 625 + compatible = "apm,xgene1-sgenet"; 626 626 status = "disabled"; 627 627 reg = <0x0 0x1f210000 0x0 0xd100>, 628 628 <0x0 0x1f200000 0x0 0Xc300>, ··· 636 636 }; 637 637 638 638 xgenet: ethernet@1f610000 { 639 - compatible = "apm,xgene-enet"; 639 + compatible = "apm,xgene1-xgenet"; 640 640 status = "disabled"; 641 641 reg = <0x0 0x1f610000 0x0 0xd100>, 642 642 <0x0 0x1f600000 0x0 0Xc300>,
+4 -1
arch/arm64/mm/pageattr.c
··· 51 51 WARN_ON_ONCE(1); 52 52 } 53 53 54 - if (!is_module_address(start) || !is_module_address(end - 1)) 54 + if (start < MODULES_VADDR || start >= MODULES_END) 55 + return -EINVAL; 56 + 57 + if (end < MODULES_VADDR || end >= MODULES_END) 55 58 return -EINVAL; 56 59 57 60 data.set_mask = set_mask;
+1
arch/mips/kvm/tlb.c
··· 216 216 if (idx > current_cpu_data.tlbsize) { 217 217 kvm_err("%s: Invalid Index: %d\n", __func__, idx); 218 218 kvm_mips_dump_host_tlbs(); 219 + local_irq_restore(flags); 219 220 return -1; 220 221 } 221 222
+3 -3
arch/mips/kvm/trace.h
··· 24 24 TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), 25 25 TP_ARGS(vcpu, reason), 26 26 TP_STRUCT__entry( 27 - __field(struct kvm_vcpu *, vcpu) 27 + __field(unsigned long, pc) 28 28 __field(unsigned int, reason) 29 29 ), 30 30 31 31 TP_fast_assign( 32 - __entry->vcpu = vcpu; 32 + __entry->pc = vcpu->arch.pc; 33 33 __entry->reason = reason; 34 34 ), 35 35 36 36 TP_printk("[%s]PC: 0x%08lx", 37 37 kvm_mips_exit_types_str[__entry->reason], 38 - __entry->vcpu->arch.pc) 38 + __entry->pc) 39 39 ); 40 40 41 41 #endif /* _TRACE_KVM_H */
+6
arch/powerpc/include/asm/iommu.h
··· 113 113 int pci_domain_number, unsigned long pe_num); 114 114 extern int iommu_add_device(struct device *dev); 115 115 extern void iommu_del_device(struct device *dev); 116 + extern int __init tce_iommu_bus_notifier_init(void); 116 117 #else 117 118 static inline void iommu_register_group(struct iommu_table *tbl, 118 119 int pci_domain_number, ··· 128 127 129 128 static inline void iommu_del_device(struct device *dev) 130 129 { 130 + } 131 + 132 + static inline int __init tce_iommu_bus_notifier_init(void) 133 + { 134 + return 0; 131 135 } 132 136 #endif /* !CONFIG_IOMMU_API */ 133 137
+9
arch/powerpc/include/asm/irq_work.h
··· 1 + #ifndef _ASM_POWERPC_IRQ_WORK_H 2 + #define _ASM_POWERPC_IRQ_WORK_H 3 + 4 + static inline bool arch_irq_work_has_interrupt(void) 5 + { 6 + return true; 7 + } 8 + 9 + #endif /* _ASM_POWERPC_IRQ_WORK_H */
+26
arch/powerpc/kernel/iommu.c
··· 1175 1175 } 1176 1176 EXPORT_SYMBOL_GPL(iommu_del_device); 1177 1177 1178 + static int tce_iommu_bus_notifier(struct notifier_block *nb, 1179 + unsigned long action, void *data) 1180 + { 1181 + struct device *dev = data; 1182 + 1183 + switch (action) { 1184 + case BUS_NOTIFY_ADD_DEVICE: 1185 + return iommu_add_device(dev); 1186 + case BUS_NOTIFY_DEL_DEVICE: 1187 + if (dev->iommu_group) 1188 + iommu_del_device(dev); 1189 + return 0; 1190 + default: 1191 + return 0; 1192 + } 1193 + } 1194 + 1195 + static struct notifier_block tce_iommu_bus_nb = { 1196 + .notifier_call = tce_iommu_bus_notifier, 1197 + }; 1198 + 1199 + int __init tce_iommu_bus_notifier_init(void) 1200 + { 1201 + bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb); 1202 + return 0; 1203 + } 1178 1204 #endif /* CONFIG_IOMMU_API */
+2 -2
arch/powerpc/kernel/smp.c
··· 541 541 if (smp_ops->give_timebase) 542 542 smp_ops->give_timebase(); 543 543 544 - /* Wait until cpu puts itself in the online map */ 545 - while (!cpu_online(cpu)) 544 + /* Wait until cpu puts itself in the online & active maps */ 545 + while (!cpu_online(cpu) || !cpu_active(cpu)) 546 546 cpu_relax(); 547 547 548 548 return 0;
-26
arch/powerpc/platforms/powernv/pci.c
··· 836 836 #endif 837 837 } 838 838 839 - static int tce_iommu_bus_notifier(struct notifier_block *nb, 840 - unsigned long action, void *data) 841 - { 842 - struct device *dev = data; 843 - 844 - switch (action) { 845 - case BUS_NOTIFY_ADD_DEVICE: 846 - return iommu_add_device(dev); 847 - case BUS_NOTIFY_DEL_DEVICE: 848 - if (dev->iommu_group) 849 - iommu_del_device(dev); 850 - return 0; 851 - default: 852 - return 0; 853 - } 854 - } 855 - 856 - static struct notifier_block tce_iommu_bus_nb = { 857 - .notifier_call = tce_iommu_bus_notifier, 858 - }; 859 - 860 - static int __init tce_iommu_bus_notifier_init(void) 861 - { 862 - bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb); 863 - return 0; 864 - } 865 839 machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
+2
arch/powerpc/platforms/pseries/iommu.c
··· 1340 1340 } 1341 1341 1342 1342 __setup("multitce=", disable_multitce); 1343 + 1344 + machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
+6 -6
arch/s390/include/asm/kvm_host.h
··· 515 515 #define S390_ARCH_FAC_MASK_SIZE_U64 \ 516 516 (S390_ARCH_FAC_MASK_SIZE_BYTE / sizeof(u64)) 517 517 518 - struct s390_model_fac { 519 - /* facilities used in SIE context */ 520 - __u64 sie[S390_ARCH_FAC_LIST_SIZE_U64]; 521 - /* subset enabled by kvm */ 522 - __u64 kvm[S390_ARCH_FAC_LIST_SIZE_U64]; 518 + struct kvm_s390_fac { 519 + /* facility list requested by guest */ 520 + __u64 list[S390_ARCH_FAC_LIST_SIZE_U64]; 521 + /* facility mask supported by kvm & hosting machine */ 522 + __u64 mask[S390_ARCH_FAC_LIST_SIZE_U64]; 523 523 }; 524 524 525 525 struct kvm_s390_cpu_model { 526 - struct s390_model_fac *fac; 526 + struct kvm_s390_fac *fac; 527 527 struct cpuid cpu_id; 528 528 unsigned short ibc; 529 529 };
+1 -1
arch/s390/include/asm/mmu_context.h
··· 62 62 { 63 63 int cpu = smp_processor_id(); 64 64 65 + S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 65 66 if (prev == next) 66 67 return; 67 68 if (MACHINE_HAS_TLB_LC) ··· 74 73 atomic_dec(&prev->context.attach_count); 75 74 if (MACHINE_HAS_TLB_LC) 76 75 cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask); 77 - S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 78 76 } 79 77 80 78 #define finish_arch_post_lock_switch finish_arch_post_lock_switch
+1 -10
arch/s390/include/asm/page.h
··· 37 37 #endif 38 38 } 39 39 40 - static inline void clear_page(void *page) 41 - { 42 - register unsigned long reg1 asm ("1") = 0; 43 - register void *reg2 asm ("2") = page; 44 - register unsigned long reg3 asm ("3") = 4096; 45 - asm volatile( 46 - " mvcl 2,0" 47 - : "+d" (reg2), "+d" (reg3) : "d" (reg1) 48 - : "memory", "cc"); 49 - } 40 + #define clear_page(page) memset((page), 0, PAGE_SIZE) 50 41 51 42 /* 52 43 * copy_page uses the mvcl instruction with 0xb0 padding byte in order to
+8 -4
arch/s390/kernel/jump_label.c
··· 36 36 insn->offset = (entry->target - entry->code) >> 1; 37 37 } 38 38 39 - static void jump_label_bug(struct jump_entry *entry, struct insn *insn) 39 + static void jump_label_bug(struct jump_entry *entry, struct insn *expected, 40 + struct insn *new) 40 41 { 41 42 unsigned char *ipc = (unsigned char *)entry->code; 42 - unsigned char *ipe = (unsigned char *)insn; 43 + unsigned char *ipe = (unsigned char *)expected; 44 + unsigned char *ipn = (unsigned char *)new; 43 45 44 46 pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc); 45 47 pr_emerg("Found: %02x %02x %02x %02x %02x %02x\n", 46 48 ipc[0], ipc[1], ipc[2], ipc[3], ipc[4], ipc[5]); 47 49 pr_emerg("Expected: %02x %02x %02x %02x %02x %02x\n", 48 50 ipe[0], ipe[1], ipe[2], ipe[3], ipe[4], ipe[5]); 51 + pr_emerg("New: %02x %02x %02x %02x %02x %02x\n", 52 + ipn[0], ipn[1], ipn[2], ipn[3], ipn[4], ipn[5]); 49 53 panic("Corrupted kernel text"); 50 54 } 51 55 ··· 73 69 } 74 70 if (init) { 75 71 if (memcmp((void *)entry->code, &orignop, sizeof(orignop))) 76 - jump_label_bug(entry, &old); 72 + jump_label_bug(entry, &orignop, &new); 77 73 } else { 78 74 if (memcmp((void *)entry->code, &old, sizeof(old))) 79 - jump_label_bug(entry, &old); 75 + jump_label_bug(entry, &old, &new); 80 76 } 81 77 probe_kernel_write((void *)entry->code, &new, sizeof(new)); 82 78 }
+1
arch/s390/kernel/module.c
··· 436 436 const Elf_Shdr *sechdrs, 437 437 struct module *me) 438 438 { 439 + jump_label_apply_nops(me); 439 440 vfree(me->arch.syminfo); 440 441 me->arch.syminfo = NULL; 441 442 return 0;
+1 -1
arch/s390/kernel/processor.c
··· 18 18 19 19 static DEFINE_PER_CPU(struct cpuid, cpu_id); 20 20 21 - void cpu_relax(void) 21 + void notrace cpu_relax(void) 22 22 { 23 23 if (!smp_cpu_mtid && MACHINE_HAS_DIAG44) 24 24 asm volatile("diag 0,0,0x44");
+31 -37
arch/s390/kvm/kvm-s390.c
··· 522 522 memcpy(&kvm->arch.model.cpu_id, &proc->cpuid, 523 523 sizeof(struct cpuid)); 524 524 kvm->arch.model.ibc = proc->ibc; 525 - memcpy(kvm->arch.model.fac->kvm, proc->fac_list, 525 + memcpy(kvm->arch.model.fac->list, proc->fac_list, 526 526 S390_ARCH_FAC_LIST_SIZE_BYTE); 527 527 } else 528 528 ret = -EFAULT; ··· 556 556 } 557 557 memcpy(&proc->cpuid, &kvm->arch.model.cpu_id, sizeof(struct cpuid)); 558 558 proc->ibc = kvm->arch.model.ibc; 559 - memcpy(&proc->fac_list, kvm->arch.model.fac->kvm, S390_ARCH_FAC_LIST_SIZE_BYTE); 559 + memcpy(&proc->fac_list, kvm->arch.model.fac->list, S390_ARCH_FAC_LIST_SIZE_BYTE); 560 560 if (copy_to_user((void __user *)attr->addr, proc, sizeof(*proc))) 561 561 ret = -EFAULT; 562 562 kfree(proc); ··· 576 576 } 577 577 get_cpu_id((struct cpuid *) &mach->cpuid); 578 578 mach->ibc = sclp_get_ibc(); 579 - memcpy(&mach->fac_mask, kvm_s390_fac_list_mask, 580 - kvm_s390_fac_list_mask_size() * sizeof(u64)); 579 + memcpy(&mach->fac_mask, kvm->arch.model.fac->mask, 580 + S390_ARCH_FAC_LIST_SIZE_BYTE); 581 581 memcpy((unsigned long *)&mach->fac_list, S390_lowcore.stfle_fac_list, 582 - S390_ARCH_FAC_LIST_SIZE_U64); 582 + S390_ARCH_FAC_LIST_SIZE_BYTE); 583 583 if (copy_to_user((void __user *)attr->addr, mach, sizeof(*mach))) 584 584 ret = -EFAULT; 585 585 kfree(mach); ··· 778 778 static int kvm_s390_query_ap_config(u8 *config) 779 779 { 780 780 u32 fcn_code = 0x04000000UL; 781 - u32 cc; 781 + u32 cc = 0; 782 782 783 + memset(config, 0, 128); 783 784 asm volatile( 784 785 "lgr 0,%1\n" 785 786 "lgr 2,%2\n" 786 787 ".long 0xb2af0000\n" /* PQAP(QCI) */ 787 - "ipm %0\n" 788 + "0: ipm %0\n" 788 789 "srl %0,28\n" 789 - : "=r" (cc) 790 + "1:\n" 791 + EX_TABLE(0b, 1b) 792 + : "+r" (cc) 790 793 : "r" (fcn_code), "r" (config) 791 794 : "cc", "0", "2", "memory" 792 795 ); ··· 842 839 843 840 kvm_s390_set_crycb_format(kvm); 844 841 845 - /* Disable AES/DEA protected key functions by default */ 846 - kvm->arch.crypto.aes_kw = 0; 847 - kvm->arch.crypto.dea_kw = 0; 842 + /* Enable AES/DEA protected key functions by default */ 843 + kvm->arch.crypto.aes_kw = 1; 844 + kvm->arch.crypto.dea_kw = 1; 845 + get_random_bytes(kvm->arch.crypto.crycb->aes_wrapping_key_mask, 846 + sizeof(kvm->arch.crypto.crycb->aes_wrapping_key_mask)); 847 + get_random_bytes(kvm->arch.crypto.crycb->dea_wrapping_key_mask, 848 + sizeof(kvm->arch.crypto.crycb->dea_wrapping_key_mask)); 848 849 849 850 return 0; 850 851 } ··· 893 886 /* 894 887 * The architectural maximum amount of facilities is 16 kbit. To store 895 888 * this amount, 2 kbyte of memory is required. Thus we need a full 896 - * page to hold the active copy (arch.model.fac->sie) and the current 897 - * facilities set (arch.model.fac->kvm). Its address size has to be 889 + * page to hold the guest facility list (arch.model.fac->list) and the 890 + * facility mask (arch.model.fac->mask). Its address size has to be 898 891 * 31 bits and word aligned. 899 892 */ 900 893 kvm->arch.model.fac = 901 - (struct s390_model_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 894 + (struct kvm_s390_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 902 895 if (!kvm->arch.model.fac) 903 896 goto out_nofac; 904 897 905 - memcpy(kvm->arch.model.fac->kvm, S390_lowcore.stfle_fac_list, 906 - S390_ARCH_FAC_LIST_SIZE_U64); 907 - 908 - /* 909 - * If this KVM host runs *not* in a LPAR, relax the facility bits 910 - * of the kvm facility mask by all missing facilities. This will allow 911 - * to determine the right CPU model by means of the remaining facilities. 912 - * Live guest migration must prohibit the migration of KVMs running in 913 - * a LPAR to non LPAR hosts. 914 - */ 915 - if (!MACHINE_IS_LPAR) 916 - for (i = 0; i < kvm_s390_fac_list_mask_size(); i++) 917 - kvm_s390_fac_list_mask[i] &= kvm->arch.model.fac->kvm[i]; 918 - 919 - /* 920 - * Apply the kvm facility mask to limit the kvm supported/tolerated 921 - * facility list. 922 - */ 898 + /* Populate the facility mask initially. */ 899 + memcpy(kvm->arch.model.fac->mask, S390_lowcore.stfle_fac_list, 900 + S390_ARCH_FAC_LIST_SIZE_BYTE); 923 901 for (i = 0; i < S390_ARCH_FAC_LIST_SIZE_U64; i++) { 924 902 if (i < kvm_s390_fac_list_mask_size()) 925 - kvm->arch.model.fac->kvm[i] &= kvm_s390_fac_list_mask[i]; 903 + kvm->arch.model.fac->mask[i] &= kvm_s390_fac_list_mask[i]; 926 904 else 927 - kvm->arch.model.fac->kvm[i] = 0UL; 905 + kvm->arch.model.fac->mask[i] = 0UL; 928 906 } 907 + 908 + /* Populate the facility list initially. */ 909 + memcpy(kvm->arch.model.fac->list, kvm->arch.model.fac->mask, 910 + S390_ARCH_FAC_LIST_SIZE_BYTE); 929 911 930 912 kvm_s390_get_cpu_id(&kvm->arch.model.cpu_id); 931 913 kvm->arch.model.ibc = sclp_get_ibc() & 0x0fff; ··· 1161 1165 1162 1166 mutex_lock(&vcpu->kvm->lock); 1163 1167 vcpu->arch.cpu_id = vcpu->kvm->arch.model.cpu_id; 1164 - memcpy(vcpu->kvm->arch.model.fac->sie, vcpu->kvm->arch.model.fac->kvm, 1165 - S390_ARCH_FAC_LIST_SIZE_BYTE); 1166 1168 vcpu->arch.sie_block->ibc = vcpu->kvm->arch.model.ibc; 1167 1169 mutex_unlock(&vcpu->kvm->lock); ··· 1206 1212 vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca; 1207 1213 set_bit(63 - id, (unsigned long *) &kvm->arch.sca->mcn); 1208 1214 } 1209 - vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->sie; 1215 + vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->list; 1210 1216 1211 1217 spin_lock_init(&vcpu->arch.local_int.lock); 1212 1218 vcpu->arch.local_int.float_int = &kvm->arch.float_int;
+2 -1
arch/s390/kvm/kvm-s390.h
··· 128 128 /* test availability of facility in a kvm intance */ 129 129 static inline int test_kvm_facility(struct kvm *kvm, unsigned long nr) 130 130 { 131 - return __test_facility(nr, kvm->arch.model.fac->kvm); 131 + return __test_facility(nr, kvm->arch.model.fac->mask) && 132 + __test_facility(nr, kvm->arch.model.fac->list); 132 133 } 133 134 134 135 /* are cpu states controlled by user space */
+1 -1
arch/s390/kvm/priv.c
··· 348 348 * We need to shift the lower 32 facility bits (bit 0-31) from a u64 349 349 * into a u32 memory representation. They will remain bits 0-31. 350 350 */ 351 - fac = *vcpu->kvm->arch.model.fac->sie >> 32; 351 + fac = *vcpu->kvm->arch.model.fac->list >> 32; 352 352 rc = write_guest_lc(vcpu, offsetof(struct _lowcore, stfl_fac_list), 353 353 &fac, sizeof(fac)); 354 354 if (rc)
+16 -12
arch/s390/pci/pci.c
···
  	addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48);
  	return (void __iomem *) addr + offset;
  }
- EXPORT_SYMBOL_GPL(pci_iomap_range);
+ EXPORT_SYMBOL(pci_iomap_range);
  
  void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
  {
···
  	}
  	spin_unlock(&zpci_iomap_lock);
  }
- EXPORT_SYMBOL_GPL(pci_iounmap);
+ EXPORT_SYMBOL(pci_iounmap);
  
  static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
  		    int size, u32 *val)
···
  	airq_iv_free_bit(zpci_aisb_iv, zdev->aisb);
  }
  
- static void zpci_map_resources(struct zpci_dev *zdev)
+ static void zpci_map_resources(struct pci_dev *pdev)
  {
- 	struct pci_dev *pdev = zdev->pdev;
  	resource_size_t len;
  	int i;
  
···
  	}
  }
  
- static void zpci_unmap_resources(struct zpci_dev *zdev)
+ static void zpci_unmap_resources(struct pci_dev *pdev)
  {
- 	struct pci_dev *pdev = zdev->pdev;
  	resource_size_t len;
  	int i;
  
···
  
  	zdev->pdev = pdev;
  	pdev->dev.groups = zpci_attr_groups;
- 	zpci_map_resources(zdev);
+ 	zpci_map_resources(pdev);
  
  	for (i = 0; i < PCI_BAR_COUNT; i++) {
  		res = &pdev->resource[i];
···
  	return 0;
  }
  
+ void pcibios_release_device(struct pci_dev *pdev)
+ {
+ 	zpci_unmap_resources(pdev);
+ }
+ 
  int pcibios_enable_device(struct pci_dev *pdev, int mask)
  {
  	struct zpci_dev *zdev = get_zdev(pdev);
···
  	zdev->pdev = pdev;
  	zpci_debug_init_device(zdev);
  	zpci_fmb_enable_device(zdev);
- 	zpci_map_resources(zdev);
  
  	return pci_enable_resources(pdev, mask);
  }
···
  {
  	struct zpci_dev *zdev = get_zdev(pdev);
  
- 	zpci_unmap_resources(zdev);
  	zpci_fmb_disable_device(zdev);
  	zpci_debug_exit_device(zdev);
  	zdev->pdev = NULL;
···
  #ifdef CONFIG_HIBERNATE_CALLBACKS
  static int zpci_restore(struct device *dev)
  {
- 	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct zpci_dev *zdev = get_zdev(pdev);
  	int ret = 0;
  
  	if (zdev->state != ZPCI_FN_STATE_ONLINE)
···
  	if (ret)
  		goto out;
  
- 	zpci_map_resources(zdev);
+ 	zpci_map_resources(pdev);
  	zpci_register_ioat(zdev, 0, zdev->start_dma + PAGE_OFFSET,
  			   zdev->start_dma + zdev->iommu_size - 1,
  			   (u64) zdev->dma_table);
···
  
  static int zpci_freeze(struct device *dev)
  {
- 	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct zpci_dev *zdev = get_zdev(pdev);
  
  	if (zdev->state != ZPCI_FN_STATE_ONLINE)
  		return 0;
  
  	zpci_unregister_ioat(zdev, 0);
+ 	zpci_unmap_resources(pdev);
  	return clp_disable_fh(zdev);
  }
  
+8 -9
arch/s390/pci/pci_mmio.c
···
  	if (copy_from_user(buf, user_buffer, length))
  		goto out;
  
- 	memcpy_toio(io_addr, buf, length);
- 	ret = 0;
+ 	ret = zpci_memcpy_toio(io_addr, buf, length);
  out:
  	if (buf != local_buf)
  		kfree(buf);
···
  		goto out;
  	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK));
  
- 	ret = -EFAULT;
- 	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE)
+ 	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) {
+ 		ret = -EFAULT;
  		goto out;
- 
- 	memcpy_fromio(buf, io_addr, length);
- 
+ 	}
+ 	ret = zpci_memcpy_fromio(buf, io_addr, length);
+ 	if (ret)
+ 		goto out;
  	if (copy_to_user(user_buffer, buf, length))
- 		goto out;
+ 		ret = -EFAULT;
  
- 	ret = 0;
  out:
  	if (buf != local_buf)
  		kfree(buf);
+1
arch/x86/Kconfig
···
  	depends on X86_IO_APIC
  	select IOSF_MBI
  	select INTEL_IMR
+ 	select COMMON_CLK
  	---help---
  	  Select to include support for Quark X1000 SoC.
  	  Say Y here if you have a Quark based system such as the Arduino
+11 -17
arch/x86/include/asm/xsave.h
···
  	if (boot_cpu_has(X86_FEATURE_XSAVES))
  		asm volatile("1:"XSAVES"\n\t"
  			"2:\n\t"
- 			: : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
+ 			xstate_fault
+ 			: "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
  			: "memory");
  	else
  		asm volatile("1:"XSAVE"\n\t"
  			"2:\n\t"
- 			: : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
+ 			xstate_fault
+ 			: "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
  			: "memory");
- 
- 	asm volatile(xstate_fault
- 		     : "0" (0)
- 		     : "memory");
- 
  	return err;
  }
···
  	if (boot_cpu_has(X86_FEATURE_XSAVES))
  		asm volatile("1:"XRSTORS"\n\t"
  			"2:\n\t"
- 			: : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
+ 			xstate_fault
+ 			: "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
  			: "memory");
  	else
  		asm volatile("1:"XRSTOR"\n\t"
  			"2:\n\t"
- 			: : "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
+ 			xstate_fault
+ 			: "D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
  			: "memory");
- 
- 	asm volatile(xstate_fault
- 		     : "0" (0)
- 		     : "memory");
- 
  	return err;
  }
···
  	 */
  	alternative_input_2(
  		"1:"XSAVE,
- 		"1:"XSAVEOPT,
+ 		XSAVEOPT,
  		X86_FEATURE_XSAVEOPT,
- 		"1:"XSAVES,
+ 		XSAVES,
  		X86_FEATURE_XSAVES,
  		[fx] "D" (fx), "a" (lmask), "d" (hmask) :
  		"memory");
···
  	 */
  	alternative_input(
  		"1: " XRSTOR,
- 		"1: " XRSTORS,
+ 		XRSTORS,
  		X86_FEATURE_XSAVES,
  		"D" (fx), "m" (*fx), "a" (lmask), "d" (hmask)
  		: "memory");
+8 -5
arch/x86/kernel/entry_64.S
···
  	testl $3, CS-ARGOFFSET(%rsp)	# from kernel_thread?
  	jz 1f
  
- 	testl $_TIF_IA32, TI_flags(%rcx)	# 32-bit compat task needs IRET
- 	jnz int_ret_from_sys_call
- 
- 	RESTORE_TOP_OF_STACK %rdi, -ARGOFFSET
- 	jmp ret_from_sys_call		# go to the SYSRET fastpath
+ 	/*
+ 	 * By the time we get here, we have no idea whether our pt_regs,
+ 	 * ti flags, and ti status came from the 64-bit SYSCALL fast path,
+ 	 * the slow path, or one of the ia32entry paths.
+ 	 * Use int_ret_from_sys_call to return, since it can safely handle
+ 	 * all of the above.
+ 	 */
+ 	jmp int_ret_from_sys_call
  
  1:
  	subq $REST_SKIP, %rsp	# leave space for volatiles
+2 -1
arch/x86/kvm/emulate.c
···
  			goto done;
  		}
  	}
- 	ctxt->dst.orig_val = ctxt->dst.val;
+ 	/* Copy full 64-bit value for CMPXCHG8B. */
+ 	ctxt->dst.orig_val64 = ctxt->dst.val64;
  
  special_insn:
+2 -2
arch/x86/kvm/lapic.c
···
  		apic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
  	}
  	apic->irr_pending = kvm_apic_vid_enabled(vcpu->kvm);
- 	apic->isr_count = kvm_apic_vid_enabled(vcpu->kvm);
+ 	apic->isr_count = kvm_x86_ops->hwapic_isr_update ? 1 : 0;
  	apic->highest_isr_cache = -1;
  	update_divide_count(apic);
  	atomic_set(&apic->lapic_timer.pending, 0);
···
  	update_divide_count(apic);
  	start_apic_timer(apic);
  	apic->irr_pending = true;
- 	apic->isr_count = kvm_apic_vid_enabled(vcpu->kvm) ?
+ 	apic->isr_count = kvm_x86_ops->hwapic_isr_update ?
  		1 : count_vectors(apic->regs + APIC_ISR);
  	apic->highest_isr_cache = -1;
  	if (kvm_x86_ops->hwapic_irr_update)
-6
arch/x86/kvm/svm.c
···
  	return;
  }
  
- static void svm_hwapic_isr_update(struct kvm *kvm, int isr)
- {
- 	return;
- }
- 
  static void svm_sync_pir_to_irr(struct kvm_vcpu *vcpu)
  {
  	return;
···
  	.set_virtual_x2apic_mode = svm_set_virtual_x2apic_mode,
  	.vm_has_apicv = svm_vm_has_apicv,
  	.load_eoi_exitmap = svm_load_eoi_exitmap,
- 	.hwapic_isr_update = svm_hwapic_isr_update,
  	.sync_pir_to_irr = svm_sync_pir_to_irr,
  
  	.set_tss_addr = svm_set_tss_addr,
+14 -9
arch/x86/kvm/vmx.c
···
  	return 0;
  }
  
+ static inline bool kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu)
+ {
+ #ifdef CONFIG_SMP
+ 	if (vcpu->mode == IN_GUEST_MODE) {
+ 		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
+ 				POSTED_INTR_VECTOR);
+ 		return true;
+ 	}
+ #endif
+ 	return false;
+ }
+ 
  static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
  						int vector)
  {
···
  	if (is_guest_mode(vcpu) &&
  	    vector == vmx->nested.posted_intr_nv) {
  		/* the PIR and ON have been set by L1. */
- 		if (vcpu->mode == IN_GUEST_MODE)
- 			apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
- 				POSTED_INTR_VECTOR);
+ 		kvm_vcpu_trigger_posted_interrupt(vcpu);
  		/*
  		 * If a posted intr is not recognized by hardware,
  		 * we will accomplish it in the next vmentry.
···
  
  	r = pi_test_and_set_on(&vmx->pi_desc);
  	kvm_make_request(KVM_REQ_EVENT, vcpu);
- #ifdef CONFIG_SMP
- 	if (!r && (vcpu->mode == IN_GUEST_MODE))
- 		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),
- 				POSTED_INTR_VECTOR);
- 	else
- #endif
+ 	if (r || !kvm_vcpu_trigger_posted_interrupt(vcpu))
  		kvm_vcpu_kick(vcpu);
  }
+8 -3
arch/x86/pci/acpi.c
···
  			    struct list_head *list)
  {
  	int ret;
- 	struct resource_entry *entry;
+ 	struct resource_entry *entry, *tmp;
  
  	sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
  	info->bridge = device;
···
  		dev_dbg(&device->dev,
  			"no IO and memory resources present in _CRS\n");
  	else
- 		resource_list_for_each_entry(entry, list)
- 			entry->res->name = info->name;
+ 		resource_list_for_each_entry_safe(entry, tmp, list) {
+ 			if ((entry->res->flags & IORESOURCE_WINDOW) == 0 ||
+ 			    (entry->res->flags & IORESOURCE_DISABLED))
+ 				resource_list_destroy_entry(entry);
+ 			else
+ 				entry->res->name = info->name;
+ 		}
  }
  
  struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+3 -1
drivers/acpi/resource.c
···
  	 * CHECKME: len might be required to check versus a minimum
  	 * length as well. 1 for io is fine, but for memory it does
  	 * not make any sense at all.
+ 	 * Note: some BIOSes report incorrect length for ACPI address space
+ 	 * descriptor, so remove check of 'reslen == len' to avoid regression.
  	 */
- 	if (len && reslen && reslen == len && start <= end)
+ 	if (len && reslen && start <= end)
  		return true;
  
  	pr_debug("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
+16 -4
drivers/acpi/video.c
···
  
  int acpi_video_register(void)
  {
- 	int result = 0;
+ 	int ret;
+ 
  	if (register_count) {
  		/*
  		 * if the function of acpi_video_register is already called,
···
  	mutex_init(&video_list_lock);
  	INIT_LIST_HEAD(&video_bus_head);
  
- 	result = acpi_bus_register_driver(&acpi_video_bus);
- 	if (result < 0)
- 		return -ENODEV;
+ 	ret = acpi_bus_register_driver(&acpi_video_bus);
+ 	if (ret)
+ 		return ret;
  
  	/*
  	 * When the acpi_video_bus is loaded successfully, increase
···
  
  static int __init acpi_video_init(void)
  {
+ 	/*
+ 	 * Let the module load even if ACPI is disabled (e.g. due to
+ 	 * a broken BIOS) so that i915.ko can still be loaded on such
+ 	 * old systems without an AcpiOpRegion.
+ 	 *
+ 	 * acpi_video_register() will report -ENODEV later as well due
+ 	 * to acpi_disabled when i915.ko tries to register itself afterwards.
+ 	 */
+ 	if (acpi_disabled)
+ 		return 0;
+ 
  	dmi_check_system(video_dmi_table);
  
  	if (intel_opregion_present())
+5 -5
drivers/android/binder.c
···
  {
  	void *page_addr;
  	unsigned long user_page_addr;
- 	struct vm_struct tmp_area;
  	struct page **page;
  	struct mm_struct *mm;
···
  			       proc->pid, page_addr);
  			goto err_alloc_page_failed;
  		}
- 		tmp_area.addr = page_addr;
- 		tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
- 		ret = map_vm_area(&tmp_area, PAGE_KERNEL, page);
- 		if (ret) {
+ 		ret = map_kernel_range_noflush((unsigned long)page_addr,
+ 					PAGE_SIZE, PAGE_KERNEL, page);
+ 		flush_cache_vmap((unsigned long)page_addr,
+ 				(unsigned long)page_addr + PAGE_SIZE);
+ 		if (ret != 1) {
  			pr_err("%d: binder_alloc_buf failed to map page at %p in kernel\n",
  			       proc->pid, page_addr);
  			goto err_map_kernel_failed;
+2
drivers/ata/sata_fsl.c
···
  	 */
  	ata_msleep(ap, 1);
  
+ 	sata_set_spd(link);
+ 
  	/*
  	 * Now, bring the host controller online again, this can take time
  	 * as PHY reset and communication establishment, 1st D2H FIS and
+12 -12
drivers/base/power/domain.c
···
  }
  
  static int pm_genpd_summary_one(struct seq_file *s,
- 		struct generic_pm_domain *gpd)
+ 		struct generic_pm_domain *genpd)
  {
  	static const char * const status_lookup[] = {
  		[GPD_STATE_ACTIVE] = "on",
···
  	struct gpd_link *link;
  	int ret;
  
- 	ret = mutex_lock_interruptible(&gpd->lock);
+ 	ret = mutex_lock_interruptible(&genpd->lock);
  	if (ret)
  		return -ERESTARTSYS;
  
- 	if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup)))
+ 	if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup)))
  		goto exit;
- 	seq_printf(s, "%-30s %-15s ", gpd->name, status_lookup[gpd->status]);
+ 	seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]);
  
  	/*
  	 * Modifications on the list require holding locks on both
  	 * master and slave, so we are safe.
- 	 * Also gpd->name is immutable.
+ 	 * Also genpd->name is immutable.
  	 */
- 	list_for_each_entry(link, &gpd->master_links, master_node) {
+ 	list_for_each_entry(link, &genpd->master_links, master_node) {
  		seq_printf(s, "%s", link->slave->name);
- 		if (!list_is_last(&link->master_node, &gpd->master_links))
+ 		if (!list_is_last(&link->master_node, &genpd->master_links))
  			seq_puts(s, ", ");
  	}
  
- 	list_for_each_entry(pm_data, &gpd->dev_list, list_node) {
+ 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
  		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
  		if (kobj_path == NULL)
  			continue;
···
  
  	seq_puts(s, "\n");
  exit:
- 	mutex_unlock(&gpd->lock);
+ 	mutex_unlock(&genpd->lock);
  
  	return 0;
  }
  
  static int pm_genpd_summary_show(struct seq_file *s, void *data)
  {
- 	struct generic_pm_domain *gpd;
+ 	struct generic_pm_domain *genpd;
  	int ret = 0;
  
  	seq_puts(s, "    domain                      status         slaves\n");
···
  	if (ret)
  		return -ERESTARTSYS;
  
- 	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
- 		ret = pm_genpd_summary_one(s, gpd);
+ 	list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
+ 		ret = pm_genpd_summary_one(s, genpd);
  		if (ret)
  			break;
  	}
+1
drivers/base/power/wakeup.c
···
  	pm_abort_suspend = true;
  	freeze_wake();
  }
+ EXPORT_SYMBOL_GPL(pm_system_wakeup);
  
  void pm_wakeup_clear(void)
  {
+19 -25
drivers/char/tpm/tpm-chip.c
···
  {
  	int rc;
  
- 	rc = device_add(&chip->dev);
- 	if (rc) {
- 		dev_err(&chip->dev,
- 			"unable to device_register() %s, major %d, minor %d, err=%d\n",
- 			chip->devname, MAJOR(chip->dev.devt),
- 			MINOR(chip->dev.devt), rc);
- 
- 		return rc;
- 	}
- 
  	rc = cdev_add(&chip->cdev, chip->dev.devt, 1);
  	if (rc) {
  		dev_err(&chip->dev,
···
  			MINOR(chip->dev.devt), rc);
  
  		device_unregister(&chip->dev);
+ 		return rc;
+ 	}
+ 
+ 	rc = device_add(&chip->dev);
+ 	if (rc) {
+ 		dev_err(&chip->dev,
+ 			"unable to device_register() %s, major %d, minor %d, err=%d\n",
+ 			chip->devname, MAJOR(chip->dev.devt),
+ 			MINOR(chip->dev.devt), rc);
+ 
  		return rc;
  	}
  
···
   * tpm_chip_register() - create a character device for the TPM chip
   * @chip: TPM chip to use.
   *
-  * Creates a character device for the TPM chip and adds sysfs interfaces for
-  * the device, PPI and TCPA. As the last step this function adds the
-  * chip to the list of TPM chips available for use.
+  * Creates a character device for the TPM chip and adds sysfs attributes for
+  * the device. As the last step this function adds the chip to the list of TPM
+  * chips available for in-kernel use.
   *
-  * NOTE: This function should be only called after the chip initialization
-  * is complete.
-  *
-  * Called from tpm_<specific>.c probe function only for devices
-  * the driver has determined it should claim. Prior to calling
-  * this function the specific probe function has called pci_enable_device
-  * upon errant exit from this function specific probe function should call
-  * pci_disable_device
+  * This function should be only called after the chip initialization is
+  * complete.
   */
  int tpm_chip_register(struct tpm_chip *chip)
  {
  	int rc;
- 
- 	rc = tpm_dev_add_device(chip);
- 	if (rc)
- 		return rc;
  
  	/* Populate sysfs for TPM1 devices. */
  	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
···
  
  		chip->bios_dir = tpm_bios_log_setup(chip->devname);
  	}
+ 
+ 	rc = tpm_dev_add_device(chip);
+ 	if (rc)
+ 		return rc;
  
  	/* Make the chip available. */
  	spin_lock(&driver_lock);
+5 -5
drivers/char/tpm/tpm_ibmvtpm.c
···
  {
  	struct ibmvtpm_dev *ibmvtpm;
  	struct ibmvtpm_crq crq;
- 	u64 *word = (u64 *) &crq;
+ 	__be64 *word = (__be64 *)&crq;
  	int rc;
  
  	ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip);
···
  	memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count);
  	crq.valid = (u8)IBMVTPM_VALID_CMD;
  	crq.msg = (u8)VTPM_TPM_COMMAND;
- 	crq.len = (u16)count;
- 	crq.data = ibmvtpm->rtce_dma_handle;
+ 	crq.len = cpu_to_be16(count);
+ 	crq.data = cpu_to_be32(ibmvtpm->rtce_dma_handle);
  
- 	rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(word[0]),
- 			      cpu_to_be64(word[1]));
+ 	rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]),
+ 			      be64_to_cpu(word[1]));
  	if (rc != H_SUCCESS) {
  		dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
  		rc = 0;
+3 -3
drivers/char/tpm/tpm_ibmvtpm.h
···
  struct ibmvtpm_crq {
  	u8 valid;
  	u8 msg;
- 	u16 len;
- 	u32 data;
- 	u64 reserved;
+ 	__be16 len;
+ 	__be32 data;
+ 	__be64 reserved;
  } __attribute__((packed, aligned(8)));
  
  struct ibmvtpm_crq_queue {
+19 -1
drivers/clk/at91/pmc.c
···
  	return 0;
  }
  
+ static void pmc_irq_suspend(struct irq_data *d)
+ {
+ 	struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
+ 
+ 	pmc->imr = pmc_read(pmc, AT91_PMC_IMR);
+ 	pmc_write(pmc, AT91_PMC_IDR, pmc->imr);
+ }
+ 
+ static void pmc_irq_resume(struct irq_data *d)
+ {
+ 	struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
+ 
+ 	pmc_write(pmc, AT91_PMC_IER, pmc->imr);
+ }
+ 
  static struct irq_chip pmc_irq = {
  	.name = "PMC",
  	.irq_disable = pmc_irq_mask,
  	.irq_mask = pmc_irq_mask,
  	.irq_unmask = pmc_irq_unmask,
  	.irq_set_type = pmc_irq_set_type,
+ 	.irq_suspend = pmc_irq_suspend,
+ 	.irq_resume = pmc_irq_resume,
  };
  
  static struct lock_class_key pmc_lock_class;
···
  		goto out_free_pmc;
  
  	pmc_write(pmc, AT91_PMC_IDR, 0xffffffff);
- 	if (request_irq(pmc->virq, pmc_irq_handler, IRQF_SHARED, "pmc", pmc))
+ 	if (request_irq(pmc->virq, pmc_irq_handler,
+ 			IRQF_SHARED | IRQF_COND_SUSPEND, "pmc", pmc))
  		goto out_remove_irqdomain;
  
  	return pmc;
+1
drivers/clk/at91/pmc.h
···
  	spinlock_t lock;
  	const struct at91_pmc_caps *caps;
  	struct irq_domain *irqdomain;
+ 	u32 imr;
  };
  
  static inline void pmc_lock(struct at91_pmc *pmc)
+6 -15
drivers/cpufreq/exynos-cpufreq.c
···
  
  static int exynos_cpufreq_probe(struct platform_device *pdev)
  {
- 	struct device_node *cpus, *np;
+ 	struct device_node *cpu0;
  	int ret = -EINVAL;
  
  	exynos_info = kzalloc(sizeof(*exynos_info), GFP_KERNEL);
···
  	if (ret)
  		goto err_cpufreq_reg;
  
- 	cpus = of_find_node_by_path("/cpus");
- 	if (!cpus) {
- 		pr_err("failed to find cpus node\n");
+ 	cpu0 = of_get_cpu_node(0, NULL);
+ 	if (!cpu0) {
+ 		pr_err("failed to find cpu0 node\n");
  		return 0;
  	}
  
- 	np = of_get_next_child(cpus, NULL);
- 	if (!np) {
- 		pr_err("failed to find cpus child node\n");
- 		of_node_put(cpus);
- 		return 0;
- 	}
- 
- 	if (of_find_property(np, "#cooling-cells", NULL)) {
- 		cdev = of_cpufreq_cooling_register(np,
+ 	if (of_find_property(cpu0, "#cooling-cells", NULL)) {
+ 		cdev = of_cpufreq_cooling_register(cpu0,
  						   cpu_present_mask);
  		if (IS_ERR(cdev))
  			pr_err("running cpufreq without cooling device: %ld\n",
  			       PTR_ERR(cdev));
  	}
- 	of_node_put(np);
- 	of_node_put(cpus);
  
  	return 0;
  
+2
drivers/cpufreq/ppc-corenet-cpufreq.c
···
  #include <linux/smp.h>
  #include <sysdev/fsl_soc.h>
  
+ #include <asm/smp.h>	/* for get_hard_smp_processor_id() in UP configs */
+ 
  /**
   * struct cpu_data - per CPU data struct
   * @parent: the parent node of cpu clock
+26 -35
drivers/cpuidle/cpuidle.c
···
  		off = 1;
  }
  
+ bool cpuidle_not_available(struct cpuidle_driver *drv,
+ 			   struct cpuidle_device *dev)
+ {
+ 	return off || !initialized || !drv || !dev || !dev->enabled;
+ }
+ 
  /**
   * cpuidle_play_dead - cpu off-lining
   *
···
  	return -ENODEV;
  }
  
- /**
-  * cpuidle_find_deepest_state - Find deepest state meeting specific conditions.
-  * @drv: cpuidle driver for the given CPU.
-  * @dev: cpuidle device for the given CPU.
-  * @freeze: Whether or not the state should be suitable for suspend-to-idle.
-  */
- static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
- 				      struct cpuidle_device *dev, bool freeze)
+ static int find_deepest_state(struct cpuidle_driver *drv,
+ 			      struct cpuidle_device *dev, bool freeze)
  {
  	unsigned int latency_req = 0;
  	int i, ret = freeze ? -1 : CPUIDLE_DRIVER_STATE_START - 1;
···
  		ret = i;
  	}
  	return ret;
+ }
+ 
+ /**
+  * cpuidle_find_deepest_state - Find the deepest available idle state.
+  * @drv: cpuidle driver for the given CPU.
+  * @dev: cpuidle device for the given CPU.
+  */
+ int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+ 			       struct cpuidle_device *dev)
+ {
+ 	return find_deepest_state(drv, dev, false);
  }
  
  static void enter_freeze_proper(struct cpuidle_driver *drv,
···
  
  /**
   * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.
+  * @drv: cpuidle driver for the given CPU.
+  * @dev: cpuidle device for the given CPU.
   *
   * If there are states with the ->enter_freeze callback, find the deepest of
-  * them and enter it with frozen tick. Otherwise, find the deepest state
-  * available and enter it normally.
+  * them and enter it with frozen tick.
   */
- void cpuidle_enter_freeze(void)
+ int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)
  {
- 	struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
- 	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
  	int index;
  
  	/*
···
  	 * that interrupts won't be enabled when it exits and allows the tick to
  	 * be frozen safely.
  	 */
- 	index = cpuidle_find_deepest_state(drv, dev, true);
- 	if (index >= 0) {
- 		enter_freeze_proper(drv, dev, index);
- 		return;
- 	}
- 
- 	/*
- 	 * It is not safe to freeze the tick, find the deepest state available
- 	 * at all and try to enter it normally.
- 	 */
- 	index = cpuidle_find_deepest_state(drv, dev, false);
+ 	index = find_deepest_state(drv, dev, true);
  	if (index >= 0)
- 		cpuidle_enter(drv, dev, index);
- 	else
- 		arch_cpu_idle();
+ 		enter_freeze_proper(drv, dev, index);
  
- 	/* Interrupts are enabled again here. */
- 	local_irq_disable();
+ 	return index;
  }
  
  /**
···
   */
  int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
  {
- 	if (off || !initialized)
- 		return -ENODEV;
- 
- 	if (!drv || !dev || !dev->enabled)
- 		return -EBUSY;
- 
  	return cpuidle_curr_governor->select(drv, dev);
  }
  
+3
drivers/dma-buf/fence.c
···
  	if (WARN_ON(timeout < 0))
  		return -EINVAL;
  
+ 	if (timeout == 0)
+ 		return fence_is_signaled(fence);
+ 
  	trace_fence_wait_start(fence);
  	ret = fence->ops->wait(fence, intr, timeout);
  	trace_fence_wait_end(fence);
+3 -2
drivers/dma-buf/reservation.c
···
  	unsigned seq, shared_count, i = 0;
  	long ret = timeout;
  
+ 	if (!timeout)
+ 		return reservation_object_test_signaled_rcu(obj, wait_all);
+ 
  retry:
  	fence = NULL;
  	shared_count = 0;
···
  	int ret = 1;
  
  	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
- 		int ret;
- 
  		fence = fence_get_rcu(lfence);
  		if (!fence)
  			return -1;
+3 -4
drivers/dma/at_xdmac.c
···
  	struct at_xdmac_desc	*first = NULL, *prev = NULL;
  	unsigned int		periods = buf_len / period_len;
  	int			i;
- 	u32			cfg;
  
  	dev_dbg(chan2dev(chan), "%s: buf_addr=%pad, buf_len=%zd, period_len=%zd, dir=%s, flags=0x%lx\n",
  		__func__, &buf_addr, buf_len, period_len,
···
  		if (direction == DMA_DEV_TO_MEM) {
  			desc->lld.mbr_sa = atchan->per_src_addr;
  			desc->lld.mbr_da = buf_addr + i * period_len;
- 			cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
+ 			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_DEV_TO_MEM_CFG];
  		} else {
  			desc->lld.mbr_sa = buf_addr + i * period_len;
  			desc->lld.mbr_da = atchan->per_dst_addr;
- 			cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
+ 			desc->lld.mbr_cfg = atchan->cfg[AT_XDMAC_MEM_TO_DEV_CFG];
  		}
  		desc->lld.mbr_ubc = AT_XDMAC_MBR_UBC_NDV1
  			| AT_XDMAC_MBR_UBC_NDEN
  			| AT_XDMAC_MBR_UBC_NSEN
  			| AT_XDMAC_MBR_UBC_NDE
- 			| period_len >> at_xdmac_get_dwidth(cfg);
+ 			| period_len >> at_xdmac_get_dwidth(desc->lld.mbr_cfg);
  
  		dev_dbg(chan2dev(chan),
  			"%s: lld: mbr_sa=%pad, mbr_da=%pad, mbr_ubc=0x%08x\n",
+1 -1
drivers/dma/dw/core.c
···
  	dev_vdbg(dw->dma.dev, "%s: status=0x%x\n", __func__, status);
  
  	/* Check if we have any interrupt from the DMAC */
- 	if (!status)
+ 	if (!status || !dw->in_use)
  		return IRQ_NONE;
  
  	/*
+4
drivers/dma/ioat/dma_v3.c
···
  	switch (pdev->device) {
  	case PCI_DEVICE_ID_INTEL_IOAT_BWD2:
  	case PCI_DEVICE_ID_INTEL_IOAT_BWD3:
+ 	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE0:
+ 	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE1:
+ 	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE2:
+ 	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE3:
  		return true;
  	default:
  		return false;
+10
drivers/dma/mmp_pdma.c
···
  
  	while (dint) {
  		i = __ffs(dint);
+ 		/* only handle interrupts belonging to pdma driver */
+ 		if (i >= pdev->dma_channels)
+ 			break;
  		dint &= (dint - 1);
  		phy = &pdev->phy[i];
  		ret = mmp_pdma_chan_handler(irq, phy);
···
  	struct resource *iores;
  	int i, ret, irq = 0;
  	int dma_channels = 0, irq_num = 0;
+ 	const enum dma_slave_buswidth widths =
+ 		DMA_SLAVE_BUSWIDTH_1_BYTE | DMA_SLAVE_BUSWIDTH_2_BYTES |
+ 		DMA_SLAVE_BUSWIDTH_4_BYTES;
  
  	pdev = devm_kzalloc(&op->dev, sizeof(*pdev), GFP_KERNEL);
  	if (!pdev)
···
  	pdev->device.device_config = mmp_pdma_config;
  	pdev->device.device_terminate_all = mmp_pdma_terminate_all;
  	pdev->device.copy_align = PDMA_ALIGNMENT;
+ 	pdev->device.src_addr_widths = widths;
+ 	pdev->device.dst_addr_widths = widths;
+ 	pdev->device.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
+ 	pdev->device.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
  
  	if (pdev->dev->coherent_dma_mask)
  		dma_set_mask(pdev->dev, pdev->dev->coherent_dma_mask);
+25 -6
drivers/dma/mmp_tdma.c
···
  	struct tasklet_struct		tasklet;
  
  	struct mmp_tdma_desc		*desc_arr;
- 	phys_addr_t			desc_arr_phys;
+ 	dma_addr_t			desc_arr_phys;
  	int				desc_num;
  	enum dma_transfer_direction	dir;
  	dma_addr_t			dev_addr;
···
  static int mmp_tdma_disable_chan(struct dma_chan *chan)
  {
  	struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
+ 	u32 tdcr;
  
- 	writel(readl(tdmac->reg_base + TDCR) & ~TDCR_CHANEN,
- 	       tdmac->reg_base + TDCR);
+ 	tdcr = readl(tdmac->reg_base + TDCR);
+ 	tdcr |= TDCR_ABR;
+ 	tdcr &= ~TDCR_CHANEN;
+ 	writel(tdcr, tdmac->reg_base + TDCR);
  
  	tdmac->status = DMA_COMPLETE;
···
  	return -EAGAIN;
  }
  
+ static size_t mmp_tdma_get_pos(struct mmp_tdma_chan *tdmac)
+ {
+ 	size_t reg;
+ 
+ 	if (tdmac->idx == 0) {
+ 		reg = __raw_readl(tdmac->reg_base + TDSAR);
+ 		reg -= tdmac->desc_arr[0].src_addr;
+ 	} else if (tdmac->idx == 1) {
+ 		reg = __raw_readl(tdmac->reg_base + TDDAR);
+ 		reg -= tdmac->desc_arr[0].dst_addr;
+ 	} else
+ 		return -EINVAL;
+ 
+ 	return reg;
+ }
+ 
  static irqreturn_t mmp_tdma_chan_handler(int irq, void *dev_id)
  {
  	struct mmp_tdma_chan *tdmac = dev_id;
  
  	if (mmp_tdma_clear_chan_irq(tdmac) == 0) {
- 		tdmac->pos = (tdmac->pos + tdmac->period_len) % tdmac->buf_len;
  		tasklet_schedule(&tdmac->tasklet);
  		return IRQ_HANDLED;
  	} else
···
  	int size = tdmac->desc_num * sizeof(struct mmp_tdma_desc);
  
  	gpool = tdmac->pool;
- 	if (tdmac->desc_arr)
+ 	if (gpool && tdmac->desc_arr)
  		gen_pool_free(gpool, (unsigned long)tdmac->desc_arr,
  			      size);
  	tdmac->desc_arr = NULL;
···
  {
  	struct mmp_tdma_chan *tdmac = to_mmp_tdma_chan(chan);
  
+ 	tdmac->pos = mmp_tdma_get_pos(tdmac);
  	dma_set_tx_state(txstate, chan->completed_cookie, chan->cookie,
  			 tdmac->buf_len - tdmac->pos);
···
  	int i, ret;
  	int irq = 0, irq_num = 0;
  	int chan_num = TDMA_CHANNEL_NUM;
- 	struct gen_pool *pool;
+ 	struct gen_pool *pool = NULL;
  
  	of_id = of_match_device(mmp_tdma_dt_ids, &pdev->dev);
  	if (of_id)
+7 -3
drivers/dma/qcom_bam_dma.c
···
  	[BAM_P_IRQ_STTS]	= { 0x1010, 0x1000, 0x00, 0x00 },
  	[BAM_P_IRQ_CLR]		= { 0x1014, 0x1000, 0x00, 0x00 },
  	[BAM_P_IRQ_EN]		= { 0x1018, 0x1000, 0x00, 0x00 },
- 	[BAM_P_EVNT_DEST_ADDR]	= { 0x102C, 0x00, 0x1000, 0x00 },
- 	[BAM_P_EVNT_REG]	= { 0x1018, 0x00, 0x1000, 0x00 },
- 	[BAM_P_SW_OFSTS]	= { 0x1000, 0x00, 0x1000, 0x00 },
+ 	[BAM_P_EVNT_DEST_ADDR]	= { 0x182C, 0x00, 0x1000, 0x00 },
+ 	[BAM_P_EVNT_REG]	= { 0x1818, 0x00, 0x1000, 0x00 },
+ 	[BAM_P_SW_OFSTS]	= { 0x1800, 0x00, 0x1000, 0x00 },
  	[BAM_P_DATA_FIFO_ADDR]	= { 0x1824, 0x00, 0x1000, 0x00 },
  	[BAM_P_DESC_FIFO_ADDR]	= { 0x181C, 0x00, 0x1000, 0x00 },
  	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1828, 0x00, 0x1000, 0x00 },
···
  	dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
  
  	/* initialize dmaengine apis */
+ 	bdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ 	bdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+ 	bdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 	bdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
  	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
  	bdev->common.device_free_chan_resources = bam_free_chan;
  	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
+7 -8
drivers/dma/sh/shdmac.c
··· 582 582 } 583 583 } 584 584 585 - static void sh_dmae_shutdown(struct platform_device *pdev) 586 - { 587 - struct sh_dmae_device *shdev = platform_get_drvdata(pdev); 588 - sh_dmae_ctl_stop(shdev); 589 - } 590 - 591 585 #ifdef CONFIG_PM 592 586 static int sh_dmae_runtime_suspend(struct device *dev) 593 587 { 588 + struct sh_dmae_device *shdev = dev_get_drvdata(dev); 589 + 590 + sh_dmae_ctl_stop(shdev); 594 591 return 0; 595 592 } 596 593 ··· 602 605 #ifdef CONFIG_PM_SLEEP 603 606 static int sh_dmae_suspend(struct device *dev) 604 607 { 608 + struct sh_dmae_device *shdev = dev_get_drvdata(dev); 609 + 610 + sh_dmae_ctl_stop(shdev); 605 611 return 0; 606 612 } 607 613 ··· 929 929 } 930 930 931 931 static struct platform_driver sh_dmae_driver = { 932 - .driver = { 932 + .driver = { 933 933 .pm = &sh_dmae_pm, 934 934 .name = SH_DMAE_DRV_NAME, 935 935 .of_match_table = sh_dmae_of_match, 936 936 }, 937 937 .remove = sh_dmae_remove, 938 - .shutdown = sh_dmae_shutdown, 939 938 }; 940 939 941 940 static int __init sh_dmae_init(void)
+9 -8
drivers/firmware/dmi_scan.c
··· 78 78 * We have to be cautious here. We have seen BIOSes with DMI pointers 79 79 * pointing to completely the wrong place for example 80 80 */ 81 - static void dmi_table(u8 *buf, int len, int num, 81 + static void dmi_table(u8 *buf, u32 len, int num, 82 82 void (*decode)(const struct dmi_header *, void *), 83 83 void *private_data) 84 84 { ··· 93 93 const struct dmi_header *dm = (const struct dmi_header *)data; 94 94 95 95 /* 96 - * 7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0] 97 - */ 98 - if (dm->type == DMI_ENTRY_END_OF_TABLE) 99 - break; 100 - 101 - /* 102 96 * We want to know the total length (formatted area and 103 97 * strings) before decoding to make sure we won't run off the 104 98 * table in dmi_decode or dmi_string ··· 102 108 data++; 103 109 if (data - buf < len - 1) 104 110 decode(dm, private_data); 111 + 112 + /* 113 + * 7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0] 114 + */ 115 + if (dm->type == DMI_ENTRY_END_OF_TABLE) 116 + break; 117 + 105 118 data += 2; 106 119 i++; 107 120 } 108 121 } 109 122 110 123 static phys_addr_t dmi_base; 111 - static u16 dmi_len; 124 + static u32 dmi_len; 112 125 static u16 dmi_num; 113 126 114 127 static int __init dmi_walk_early(void (*decode)(const struct dmi_header *,
+4 -4
drivers/firmware/efi/libstub/efi-stub-helper.c
··· 179 179 start = desc->phys_addr; 180 180 end = start + desc->num_pages * (1UL << EFI_PAGE_SHIFT); 181 181 182 - if ((start + size) > end || (start + size) > max) 183 - continue; 184 - 185 - if (end - size > max) 182 + if (end > max) 186 183 end = max; 184 + 185 + if ((start + size) > end) 186 + continue; 187 187 188 188 if (round_down(end - size, align) < start) 189 189 continue;
+79 -73
drivers/gpu/drm/drm_mm.c
··· 91 91 */ 92 92 93 93 static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm, 94 - unsigned long size, 94 + u64 size, 95 95 unsigned alignment, 96 96 unsigned long color, 97 97 enum drm_mm_search_flags flags); 98 98 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm, 99 - unsigned long size, 99 + u64 size, 100 100 unsigned alignment, 101 101 unsigned long color, 102 - unsigned long start, 103 - unsigned long end, 102 + u64 start, 103 + u64 end, 104 104 enum drm_mm_search_flags flags); 105 105 106 106 static void drm_mm_insert_helper(struct drm_mm_node *hole_node, 107 107 struct drm_mm_node *node, 108 - unsigned long size, unsigned alignment, 108 + u64 size, unsigned alignment, 109 109 unsigned long color, 110 110 enum drm_mm_allocator_flags flags) 111 111 { 112 112 struct drm_mm *mm = hole_node->mm; 113 - unsigned long hole_start = drm_mm_hole_node_start(hole_node); 114 - unsigned long hole_end = drm_mm_hole_node_end(hole_node); 115 - unsigned long adj_start = hole_start; 116 - unsigned long adj_end = hole_end; 113 + u64 hole_start = drm_mm_hole_node_start(hole_node); 114 + u64 hole_end = drm_mm_hole_node_end(hole_node); 115 + u64 adj_start = hole_start; 116 + u64 adj_end = hole_end; 117 117 118 118 BUG_ON(node->allocated); 119 119 ··· 124 124 adj_start = adj_end - size; 125 125 126 126 if (alignment) { 127 - unsigned tmp = adj_start % alignment; 128 - if (tmp) { 127 + u64 tmp = adj_start; 128 + unsigned rem; 129 + 130 + rem = do_div(tmp, alignment); 131 + if (rem) { 129 132 if (flags & DRM_MM_CREATE_TOP) 130 - adj_start -= tmp; 133 + adj_start -= rem; 131 134 else 132 - adj_start += alignment - tmp; 135 + adj_start += alignment - rem; 133 136 } 134 137 } 135 138 ··· 179 176 int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node) 180 177 { 181 178 struct drm_mm_node *hole; 182 - unsigned long end = node->start + node->size; 183 - unsigned long hole_start; 184 - unsigned long hole_end; 179 + 
u64 end = node->start + node->size; 180 + u64 hole_start; 181 + u64 hole_end; 185 182 186 183 BUG_ON(node == NULL); 187 184 ··· 230 227 * 0 on success, -ENOSPC if there's no suitable hole. 231 228 */ 232 229 int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node, 233 - unsigned long size, unsigned alignment, 230 + u64 size, unsigned alignment, 234 231 unsigned long color, 235 232 enum drm_mm_search_flags sflags, 236 233 enum drm_mm_allocator_flags aflags) ··· 249 246 250 247 static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node, 251 248 struct drm_mm_node *node, 252 - unsigned long size, unsigned alignment, 249 + u64 size, unsigned alignment, 253 250 unsigned long color, 254 - unsigned long start, unsigned long end, 251 + u64 start, u64 end, 255 252 enum drm_mm_allocator_flags flags) 256 253 { 257 254 struct drm_mm *mm = hole_node->mm; 258 - unsigned long hole_start = drm_mm_hole_node_start(hole_node); 259 - unsigned long hole_end = drm_mm_hole_node_end(hole_node); 260 - unsigned long adj_start = hole_start; 261 - unsigned long adj_end = hole_end; 255 + u64 hole_start = drm_mm_hole_node_start(hole_node); 256 + u64 hole_end = drm_mm_hole_node_end(hole_node); 257 + u64 adj_start = hole_start; 258 + u64 adj_end = hole_end; 262 259 263 260 BUG_ON(!hole_node->hole_follows || node->allocated); 264 261 ··· 274 271 mm->color_adjust(hole_node, color, &adj_start, &adj_end); 275 272 276 273 if (alignment) { 277 - unsigned tmp = adj_start % alignment; 278 - if (tmp) { 274 + u64 tmp = adj_start; 275 + unsigned rem; 276 + 277 + rem = do_div(tmp, alignment); 278 + if (rem) { 279 279 if (flags & DRM_MM_CREATE_TOP) 280 - adj_start -= tmp; 280 + adj_start -= rem; 281 281 else 282 - adj_start += alignment - tmp; 282 + adj_start += alignment - rem; 283 283 } 284 284 } 285 285 ··· 330 324 * 0 on success, -ENOSPC if there's no suitable hole. 
331 325 */ 332 326 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node, 333 - unsigned long size, unsigned alignment, 327 + u64 size, unsigned alignment, 334 328 unsigned long color, 335 - unsigned long start, unsigned long end, 329 + u64 start, u64 end, 336 330 enum drm_mm_search_flags sflags, 337 331 enum drm_mm_allocator_flags aflags) 338 332 { ··· 393 387 } 394 388 EXPORT_SYMBOL(drm_mm_remove_node); 395 389 396 - static int check_free_hole(unsigned long start, unsigned long end, 397 - unsigned long size, unsigned alignment) 390 + static int check_free_hole(u64 start, u64 end, u64 size, unsigned alignment) 398 391 { 399 392 if (end - start < size) 400 393 return 0; 401 394 402 395 if (alignment) { 403 - unsigned tmp = start % alignment; 396 + u64 tmp = start; 397 + unsigned rem; 398 + 399 + rem = do_div(tmp, alignment); 404 400 if (rem) 405 401 start += alignment - rem; 406 402 } 407 403 408 404 return end >= start + size; 409 405 } 410 406 411 407 static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm, 412 - unsigned long size, 408 + u64 size, 413 409 unsigned alignment, 414 410 unsigned long color, 415 411 enum drm_mm_search_flags flags) 416 412 { 417 413 struct drm_mm_node *entry; 418 414 struct drm_mm_node *best; 419 - unsigned long adj_start; 420 - unsigned long adj_end; 421 - unsigned long best_size; 415 + u64 adj_start; 416 + u64 adj_end; 417 + u64 best_size; 422 418 423 419 BUG_ON(mm->scanned_blocks); 424 420 ··· 429 421 430 422 __drm_mm_for_each_hole(entry, mm, adj_start, adj_end, 431 423 flags & DRM_MM_SEARCH_BELOW) { 432 - unsigned long hole_size = adj_end - adj_start; 424 + u64 hole_size = adj_end - adj_start; 433 425 434 426 if (mm->color_adjust) { 435 427 mm->color_adjust(entry, color, &adj_start, &adj_end); ··· 453 445 } 454 446 455 447 static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm, 456 - unsigned long size, 448 + u64 size,
457 449 unsigned alignment, 458 450 unsigned long color, 459 - unsigned long start, 460 - unsigned long end, 451 + u64 start, 452 + u64 end, 461 453 enum drm_mm_search_flags flags) 462 454 { 463 455 struct drm_mm_node *entry; 464 456 struct drm_mm_node *best; 465 - unsigned long adj_start; 466 - unsigned long adj_end; 467 - unsigned long best_size; 457 + u64 adj_start; 458 + u64 adj_end; 459 + u64 best_size; 468 460 469 461 BUG_ON(mm->scanned_blocks); 470 462 ··· 473 465 474 466 __drm_mm_for_each_hole(entry, mm, adj_start, adj_end, 475 467 flags & DRM_MM_SEARCH_BELOW) { 476 - unsigned long hole_size = adj_end - adj_start; 468 + u64 hole_size = adj_end - adj_start; 477 469 478 470 if (adj_start < start) 479 471 adj_start = start; ··· 569 561 * adding/removing nodes to/from the scan list are allowed. 570 562 */ 571 563 void drm_mm_init_scan(struct drm_mm *mm, 572 - unsigned long size, 564 + u64 size, 573 565 unsigned alignment, 574 566 unsigned long color) 575 567 { ··· 602 594 * adding/removing nodes to/from the scan list are allowed. 603 595 */ 604 596 void drm_mm_init_scan_with_range(struct drm_mm *mm, 605 - unsigned long size, 597 + u64 size, 606 598 unsigned alignment, 607 599 unsigned long color, 608 - unsigned long start, 609 - unsigned long end) 600 + u64 start, 601 + u64 end) 610 602 { 611 603 mm->scan_color = color; 612 604 mm->scan_alignment = alignment; ··· 635 627 { 636 628 struct drm_mm *mm = node->mm; 637 629 struct drm_mm_node *prev_node; 638 - unsigned long hole_start, hole_end; 639 - unsigned long adj_start, adj_end; 630 + u64 hole_start, hole_end; 631 + u64 adj_start, adj_end; 640 632 641 633 mm->scanned_blocks++; 642 634 ··· 739 731 * 740 732 * Note that @mm must be cleared to 0 before calling this function. 
741 733 */ 742 - void drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size) 734 + void drm_mm_init(struct drm_mm * mm, u64 start, u64 size) 743 735 { 744 736 INIT_LIST_HEAD(&mm->hole_stack); 745 737 mm->scanned_blocks = 0; ··· 774 766 } 775 767 EXPORT_SYMBOL(drm_mm_takedown); 776 768 777 - static unsigned long drm_mm_debug_hole(struct drm_mm_node *entry, 778 - const char *prefix) 769 + static u64 drm_mm_debug_hole(struct drm_mm_node *entry, 770 + const char *prefix) 779 771 { 780 - unsigned long hole_start, hole_end, hole_size; 772 + u64 hole_start, hole_end, hole_size; 781 773 782 774 if (entry->hole_follows) { 783 775 hole_start = drm_mm_hole_node_start(entry); 784 776 hole_end = drm_mm_hole_node_end(entry); 785 777 hole_size = hole_end - hole_start; 786 - printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n", 787 - prefix, hole_start, hole_end, 788 - hole_size); 778 + pr_debug("%s %#llx-%#llx: %llu: free\n", prefix, hole_start, 779 + hole_end, hole_size); 789 780 return hole_size; 790 781 } 791 782 ··· 799 792 void drm_mm_debug_table(struct drm_mm *mm, const char *prefix) 800 793 { 801 794 struct drm_mm_node *entry; 802 - unsigned long total_used = 0, total_free = 0, total = 0; 795 + u64 total_used = 0, total_free = 0, total = 0; 803 796 804 797 total_free += drm_mm_debug_hole(&mm->head_node, prefix); 805 798 806 799 drm_mm_for_each_node(entry, mm) { 807 - printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: used\n", 808 - prefix, entry->start, entry->start + entry->size, 809 - entry->size); 800 + pr_debug("%s %#llx-%#llx: %llu: used\n", prefix, entry->start, 801 + entry->start + entry->size, entry->size); 810 802 total_used += entry->size; 811 803 total_free += drm_mm_debug_hole(entry, prefix); 812 804 } 813 805 total = total_free + total_used; 814 806 815 - printk(KERN_DEBUG "%s total: %lu, used %lu free %lu\n", prefix, total, 816 - total_used, total_free); 807 + pr_debug("%s total: %llu, used %llu free %llu\n", prefix, total, 808 + total_used, 
total_free); 817 809 } 818 810 EXPORT_SYMBOL(drm_mm_debug_table); 819 811 820 812 #if defined(CONFIG_DEBUG_FS) 821 - static unsigned long drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry) 813 + static u64 drm_mm_dump_hole(struct seq_file *m, struct drm_mm_node *entry) 822 814 { 823 - unsigned long hole_start, hole_end, hole_size; 815 + u64 hole_start, hole_end, hole_size; 824 816 825 817 if (entry->hole_follows) { 826 818 hole_start = drm_mm_hole_node_start(entry); 827 819 hole_end = drm_mm_hole_node_end(entry); 828 820 hole_size = hole_end - hole_start; 829 - seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: free\n", 830 - hole_start, hole_end, hole_size); 821 + seq_printf(m, "%#llx-%#llx: %llu: free\n", hole_start, 822 + hole_end, hole_size); 831 823 return hole_size; 832 824 } 833 825 ··· 841 835 int drm_mm_dump_table(struct seq_file *m, struct drm_mm *mm) 842 836 { 843 837 struct drm_mm_node *entry; 844 - unsigned long total_used = 0, total_free = 0, total = 0; 838 + u64 total_used = 0, total_free = 0, total = 0; 845 839 846 840 total_free += drm_mm_dump_hole(m, &mm->head_node); 847 841 848 842 drm_mm_for_each_node(entry, mm) { 849 - seq_printf(m, "0x%08lx-0x%08lx: 0x%08lx: used\n", 850 - entry->start, entry->start + entry->size, 851 - entry->size); 843 + seq_printf(m, "%#016llx-%#016llx: %llu: used\n", entry->start, 844 + entry->start + entry->size, entry->size); 852 845 total_used += entry->size; 853 846 total_free += drm_mm_dump_hole(m, entry); 854 847 } 855 848 total = total_free + total_used; 856 849 857 - seq_printf(m, "total: %lu, used %lu free %lu\n", total, total_used, total_free); 850 + seq_printf(m, "total: %llu, used %llu free %llu\n", total, 851 + total_used, total_free); 858 852 return 0; 859 853 } 860 854 EXPORT_SYMBOL(drm_mm_dump_table);
+2 -2
drivers/gpu/drm/i915/i915_debugfs.c
··· 152 152 seq_puts(m, " (pp"); 153 153 else 154 154 seq_puts(m, " (g"); 155 - seq_printf(m, "gtt offset: %08lx, size: %08lx, type: %u)", 155 + seq_printf(m, "gtt offset: %08llx, size: %08llx, type: %u)", 156 156 vma->node.start, vma->node.size, 157 157 vma->ggtt_view.type); 158 158 } 159 159 if (obj->stolen) 160 - seq_printf(m, " (stolen: %08lx)", obj->stolen->start); 160 + seq_printf(m, " (stolen: %08llx)", obj->stolen->start); 161 161 if (obj->pin_mappable || obj->fault_mappable) { 162 162 char s[3], *t = s; 163 163 if (obj->pin_mappable)
+25 -5
drivers/gpu/drm/i915/i915_drv.c
··· 622 622 return 0; 623 623 } 624 624 625 - static int i915_drm_suspend_late(struct drm_device *drm_dev) 625 + static int i915_drm_suspend_late(struct drm_device *drm_dev, bool hibernation) 626 626 { 627 627 struct drm_i915_private *dev_priv = drm_dev->dev_private; 628 628 int ret; ··· 636 636 } 637 637 638 638 pci_disable_device(drm_dev->pdev); 639 - pci_set_power_state(drm_dev->pdev, PCI_D3hot); 639 + /* 640 + * During hibernation on some GEN4 platforms the BIOS may try to access 641 + * the device even though it's already in D3 and hang the machine. So 642 + * leave the device in D0 on those platforms and hope the BIOS will 643 + * power down the device properly. Platforms where this was seen: 644 + * Lenovo Thinkpad X301, X61s 645 + */ 646 + if (!(hibernation && 647 + drm_dev->pdev->subsystem_vendor == PCI_VENDOR_ID_LENOVO && 648 + INTEL_INFO(dev_priv)->gen == 4)) 649 + pci_set_power_state(drm_dev->pdev, PCI_D3hot); 640 650 641 651 return 0; 642 652 } ··· 672 662 if (error) 673 663 return error; 674 664 675 - return i915_drm_suspend_late(dev); 665 + return i915_drm_suspend_late(dev, false); 676 666 } 677 667 678 668 static int i915_drm_resume(struct drm_device *dev) ··· 960 950 if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF) 961 951 return 0; 962 952 963 - return i915_drm_suspend_late(drm_dev); 953 + return i915_drm_suspend_late(drm_dev, false); 954 + } 955 + 956 + static int i915_pm_poweroff_late(struct device *dev) 957 + { 958 + struct drm_device *drm_dev = dev_to_i915(dev)->dev; 959 + 960 + if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF) 961 + return 0; 962 + 963 + return i915_drm_suspend_late(drm_dev, true); 964 964 } 965 965 966 966 static int i915_pm_resume_early(struct device *dev) ··· 1540 1520 .thaw_early = i915_pm_resume_early, 1541 1521 .thaw = i915_pm_resume, 1542 1522 .poweroff = i915_pm_suspend, 1543 - .poweroff_late = i915_pm_suspend_late, 1523 + .poweroff_late = i915_pm_poweroff_late, 1544 1524 .restore_early = 
i915_pm_resume_early, 1545 1525 .restore = i915_pm_resume, 1546 1526
+3 -3
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 1145 1145 1146 1146 ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true); 1147 1147 1148 - DRM_DEBUG_DRIVER("Allocated pde space (%ldM) at GTT entry: %lx\n", 1148 + DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n", 1149 1149 ppgtt->node.size >> 20, 1150 1150 ppgtt->node.start / PAGE_SIZE); 1151 1151 ··· 1713 1713 1714 1714 static void i915_gtt_color_adjust(struct drm_mm_node *node, 1715 1715 unsigned long color, 1716 - unsigned long *start, 1717 - unsigned long *end) 1716 + u64 *start, 1717 + u64 *end) 1718 1718 { 1719 1719 if (node->color != color) 1720 1720 *start += 4096;
+7 -11
drivers/gpu/drm/i915/intel_fifo_underrun.c
··· 282 282 return ret; 283 283 } 284 284 285 - static bool 286 - __cpu_fifo_underrun_reporting_enabled(struct drm_i915_private *dev_priv, 287 - enum pipe pipe) 288 - { 289 - struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 290 - struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 291 - 292 - return !intel_crtc->cpu_fifo_underrun_disabled; 293 - } 294 - 295 285 /** 296 286 * intel_set_pch_fifo_underrun_reporting - set PCH fifo underrun reporting state 297 287 * @dev_priv: i915 device instance ··· 342 352 void intel_cpu_fifo_underrun_irq_handler(struct drm_i915_private *dev_priv, 343 353 enum pipe pipe) 344 354 { 355 + struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 356 + 357 + /* We may be called too early in init, thanks BIOS! */ 358 + if (crtc == NULL) 359 + return; 360 + 345 361 /* GMCH can't disable fifo underruns, filter them. */ 346 362 if (HAS_GMCH_DISPLAY(dev_priv->dev) && 347 - !__cpu_fifo_underrun_reporting_enabled(dev_priv, pipe)) 363 + to_intel_crtc(crtc)->cpu_fifo_underrun_disabled) 348 364 return; 349 365 350 366 if (intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false))
+31 -5
drivers/gpu/drm/imx/dw_hdmi-imx.c
··· 70 70 118800000, { 0x091c, 0x091c, 0x06dc }, 71 71 }, { 72 72 216000000, { 0x06dc, 0x0b5c, 0x091c }, 73 - } 73 + }, { 74 + ~0UL, { 0x0000, 0x0000, 0x0000 }, 75 + }, 74 76 }; 75 77 76 78 static const struct dw_hdmi_sym_term imx_sym_term[] = { ··· 138 136 .destroy = drm_encoder_cleanup, 139 137 }; 140 138 139 + static enum drm_mode_status imx6q_hdmi_mode_valid(struct drm_connector *con, 140 + struct drm_display_mode *mode) 141 + { 142 + if (mode->clock < 13500) 143 + return MODE_CLOCK_LOW; 144 + if (mode->clock > 266000) 145 + return MODE_CLOCK_HIGH; 146 + 147 + return MODE_OK; 148 + } 149 + 150 + static enum drm_mode_status imx6dl_hdmi_mode_valid(struct drm_connector *con, 151 + struct drm_display_mode *mode) 152 + { 153 + if (mode->clock < 13500) 154 + return MODE_CLOCK_LOW; 155 + if (mode->clock > 270000) 156 + return MODE_CLOCK_HIGH; 157 + 158 + return MODE_OK; 159 + } 160 + 141 161 static struct dw_hdmi_plat_data imx6q_hdmi_drv_data = { 142 - .mpll_cfg = imx_mpll_cfg, 143 - .cur_ctr = imx_cur_ctr, 144 - .sym_term = imx_sym_term, 145 - .dev_type = IMX6Q_HDMI, 162 + .mpll_cfg = imx_mpll_cfg, 163 + .cur_ctr = imx_cur_ctr, 164 + .sym_term = imx_sym_term, 165 + .dev_type = IMX6Q_HDMI, 166 + .mode_valid = imx6q_hdmi_mode_valid, 146 167 }; 147 168 148 169 static struct dw_hdmi_plat_data imx6dl_hdmi_drv_data = { ··· 173 148 .cur_ctr = imx_cur_ctr, 174 149 .sym_term = imx_sym_term, 175 150 .dev_type = IMX6DL_HDMI, 151 + .mode_valid = imx6dl_hdmi_mode_valid, 176 152 }; 177 153 178 154 static const struct of_device_id dw_hdmi_imx_dt_ids[] = {
+13 -15
drivers/gpu/drm/imx/imx-ldb.c
··· 163 163 { 164 164 struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder); 165 165 struct imx_ldb *ldb = imx_ldb_ch->ldb; 166 - struct drm_display_mode *mode = &encoder->crtc->hwmode; 167 166 u32 pixel_fmt; 168 - unsigned long serial_clk; 169 - unsigned long di_clk = mode->clock * 1000; 170 - int mux = imx_drm_encoder_get_mux_id(imx_ldb_ch->child, encoder); 171 - 172 - if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) { 173 - /* dual channel LVDS mode */ 174 - serial_clk = 3500UL * mode->clock; 175 - imx_ldb_set_clock(ldb, mux, 0, serial_clk, di_clk); 176 - imx_ldb_set_clock(ldb, mux, 1, serial_clk, di_clk); 177 - } else { 178 - serial_clk = 7000UL * mode->clock; 179 - imx_ldb_set_clock(ldb, mux, imx_ldb_ch->chno, serial_clk, 180 - di_clk); 181 - } 182 167 183 168 switch (imx_ldb_ch->chno) { 184 169 case 0: ··· 232 247 struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder); 233 248 struct imx_ldb *ldb = imx_ldb_ch->ldb; 234 249 int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN; 250 + unsigned long serial_clk; 251 + unsigned long di_clk = mode->clock * 1000; 252 + int mux = imx_drm_encoder_get_mux_id(imx_ldb_ch->child, encoder); 235 253 236 254 if (mode->clock > 170000) { 237 255 dev_warn(ldb->dev, ··· 243 255 if (mode->clock > 85000 && !dual) { 244 256 dev_warn(ldb->dev, 245 257 "%s: mode exceeds 85 MHz pixel clock\n", __func__); 258 + } 259 + 260 + if (dual) { 261 + serial_clk = 3500UL * mode->clock; 262 + imx_ldb_set_clock(ldb, mux, 0, serial_clk, di_clk); 263 + imx_ldb_set_clock(ldb, mux, 1, serial_clk, di_clk); 264 + } else { 265 + serial_clk = 7000UL * mode->clock; 266 + imx_ldb_set_clock(ldb, mux, imx_ldb_ch->chno, serial_clk, 267 + di_clk); 246 268 } 247 269 248 270 /* FIXME - assumes straight connections DI0 --> CH0, DI1 --> CH1 */
+4 -1
drivers/gpu/drm/imx/parallel-display.c
··· 236 236 } 237 237 238 238 panel_node = of_parse_phandle(np, "fsl,panel", 0); 239 - if (panel_node) 239 + if (panel_node) { 240 240 imxpd->panel = of_drm_find_panel(panel_node); 241 + if (!imxpd->panel) 242 + return -EPROBE_DEFER; 243 + } 241 244 242 245 imxpd->dev = dev; 243 246
+5
drivers/gpu/drm/msm/mdp/mdp4/mdp4_irq.c
··· 32 32 void mdp4_irq_preinstall(struct msm_kms *kms) 33 33 { 34 34 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); 35 + mdp4_enable(mdp4_kms); 35 36 mdp4_write(mdp4_kms, REG_MDP4_INTR_CLEAR, 0xffffffff); 37 + mdp4_write(mdp4_kms, REG_MDP4_INTR_ENABLE, 0x00000000); 38 + mdp4_disable(mdp4_kms); 36 39 } 37 40 38 41 int mdp4_irq_postinstall(struct msm_kms *kms) ··· 56 53 void mdp4_irq_uninstall(struct msm_kms *kms) 57 54 { 58 55 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); 56 + mdp4_enable(mdp4_kms); 59 57 mdp4_write(mdp4_kms, REG_MDP4_INTR_ENABLE, 0x00000000); 58 + mdp4_disable(mdp4_kms); 60 59 } 61 60 62 61 irqreturn_t mdp4_irq(struct msm_kms *kms)
+4 -11
drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
··· 8 8 git clone https://github.com/freedreno/envytools.git 9 9 10 10 The rules-ng-ng source files this header was generated from are: 11 - - /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 676 bytes, from 2014-12-05 15:34:49) 12 - - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27) 13 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20908 bytes, from 2014-12-08 16:13:00) 14 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2014-12-08 16:13:00) 15 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 27208 bytes, from 2015-01-13 23:56:11) 16 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43) 17 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32) 18 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-10-31 16:48:57) 19 - - /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12) 20 - - /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 26848 bytes, from 2015-01-13 23:55:57) 21 - - /home/robclark/src/freedreno/envytools/rnndb/edp/edp.xml ( 8253 bytes, from 2014-12-08 16:13:00) 11 + - /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp5.xml ( 27229 bytes, from 2015-02-10 17:00:41) 12 + - /local/mnt2/workspace2/sviau/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2014-06-02 18:31:15) 13 + - /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2015-01-23 16:20:19) 22 14 23 15 Copyright (C) 2013-2015 by the following authors: 24 16 - Rob Clark <robdclark@gmail.com> (robclark) ··· 902 910 case 2: return (mdp5_cfg->lm.base[2]); 903 911 case 3: return (mdp5_cfg->lm.base[3]); 904 912 case 4: return (mdp5_cfg->lm.base[4]); 913 + case 5: return (mdp5_cfg->lm.base[5]); 905 914 default: return INVALID_IDX(idx); 906 915 } 
907 916 }
+61 -38
drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
··· 62 62 63 63 /* current cursor being scanned out: */ 64 64 struct drm_gem_object *scanout_bo; 65 - uint32_t width; 66 - uint32_t height; 65 + uint32_t width, height; 66 + uint32_t x, y; 67 67 } cursor; 68 68 }; 69 69 #define to_mdp5_crtc(x) container_of(x, struct mdp5_crtc, base) ··· 103 103 struct drm_plane *plane; 104 104 uint32_t flush_mask = 0; 105 105 106 - /* we could have already released CTL in the disable path: */ 107 - if (!mdp5_crtc->ctl) 106 + /* this should not happen: */ 107 + if (WARN_ON(!mdp5_crtc->ctl)) 108 108 return; 109 109 110 110 drm_atomic_crtc_for_each_plane(plane, crtc) { ··· 142 142 143 143 drm_atomic_crtc_for_each_plane(plane, crtc) { 144 144 mdp5_plane_complete_flip(plane); 145 + } 146 + 147 + if (mdp5_crtc->ctl && !crtc->state->enable) { 148 + mdp5_ctl_release(mdp5_crtc->ctl); 149 + mdp5_crtc->ctl = NULL; 145 150 } 146 151 } 147 152 ··· 391 386 mdp5_crtc->event = crtc->state->event; 392 387 spin_unlock_irqrestore(&dev->event_lock, flags); 393 388 389 + /* 390 + * If no CTL has been allocated in mdp5_crtc_atomic_check(), 391 + * it means we are trying to flush a CRTC whose state is disabled: 392 + * nothing else needs to be done. 393 + */ 394 + if (unlikely(!mdp5_crtc->ctl)) 395 + return; 396 + 394 397 blend_setup(crtc); 395 398 crtc_flush_all(crtc); 396 399 request_pending(crtc, PENDING_FLIP); 397 - 398 - if (mdp5_crtc->ctl && !crtc->state->enable) { 399 - mdp5_ctl_release(mdp5_crtc->ctl); 400 - mdp5_crtc->ctl = NULL; 401 - } 402 400 } 403 401 404 402 static int mdp5_crtc_set_property(struct drm_crtc *crtc, ··· 409 401 { 410 402 // XXX 411 403 return -EINVAL; 404 + } 405 + 406 + static void get_roi(struct drm_crtc *crtc, uint32_t *roi_w, uint32_t *roi_h) 407 + { 408 + struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); 409 + uint32_t xres = crtc->mode.hdisplay; 410 + uint32_t yres = crtc->mode.vdisplay; 411 + 412 + /* 413 + * Cursor Region Of Interest (ROI) is a plane read from cursor 414 + * buffer to render. 
The ROI region is determined by the visibility of 415 + * the cursor point. In the default Cursor image the cursor point will 416 + * be at the top left of the cursor image, unless it is specified 417 + * otherwise using hotspot feature. 418 + * 419 + * If the cursor point reaches the right (xres - x < cursor.width) or 420 + * bottom (yres - y < cursor.height) boundary of the screen, then ROI 421 + * width and ROI height need to be evaluated to crop the cursor image 422 + * accordingly. 423 + * (xres-x) will be new cursor width when x > (xres - cursor.width) 424 + * (yres-y) will be new cursor height when y > (yres - cursor.height) 425 + */ 426 + *roi_w = min(mdp5_crtc->cursor.width, xres - 427 + mdp5_crtc->cursor.x); 428 + *roi_h = min(mdp5_crtc->cursor.height, yres - 429 + mdp5_crtc->cursor.y); 412 430 } 413 431 414 432 static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, ··· 450 416 unsigned int depth; 451 417 enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL; 452 418 uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); 419 + uint32_t roi_w, roi_h; 453 420 unsigned long flags; 454 421 455 422 if ((width > CURSOR_WIDTH) || (height > CURSOR_HEIGHT)) { ··· 481 446 spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); 482 447 old_bo = mdp5_crtc->cursor.scanout_bo; 483 448 449 + mdp5_crtc->cursor.scanout_bo = cursor_bo; 450 + mdp5_crtc->cursor.width = width; 451 + mdp5_crtc->cursor.height = height; 452 + 453 + get_roi(crtc, &roi_w, &roi_h); 454 + 484 455 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_STRIDE(lm), stride); 485 456 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_FORMAT(lm), 486 457 MDP5_LM_CURSOR_FORMAT_FORMAT(CURSOR_FMT_ARGB8888)); ··· 494 453 MDP5_LM_CURSOR_IMG_SIZE_SRC_H(height) | 495 454 MDP5_LM_CURSOR_IMG_SIZE_SRC_W(width)); 496 455 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(lm), 497 - MDP5_LM_CURSOR_SIZE_ROI_H(height) | 498 - MDP5_LM_CURSOR_SIZE_ROI_W(width)); 456 + MDP5_LM_CURSOR_SIZE_ROI_H(roi_h) | 457 + MDP5_LM_CURSOR_SIZE_ROI_W(roi_w)); 499 458 
mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_BASE_ADDR(lm), cursor_addr); 500 459 501 - 502 460 blendcfg = MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_EN; 503 - blendcfg |= MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_TRANSP_EN; 504 461 blendcfg |= MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_ALPHA_SEL(cur_alpha); 505 462 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_BLEND_CONFIG(lm), blendcfg); 506 463 507 - mdp5_crtc->cursor.scanout_bo = cursor_bo; 508 - mdp5_crtc->cursor.width = width; 509 - mdp5_crtc->cursor.height = height; 510 464 spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags); 511 465 512 466 ret = mdp5_ctl_set_cursor(mdp5_crtc->ctl, true); ··· 525 489 struct mdp5_kms *mdp5_kms = get_kms(crtc); 526 490 struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc); 527 491 uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0); 528 - uint32_t xres = crtc->mode.hdisplay; 529 - uint32_t yres = crtc->mode.vdisplay; 530 492 uint32_t roi_w; 531 493 uint32_t roi_h; 532 494 unsigned long flags; 533 495 534 - x = (x > 0) ? x : 0; 535 - y = (y > 0) ? y : 0; 496 + /* In case the CRTC is disabled, just drop the cursor update */ 497 + if (unlikely(!crtc->state->enable)) 498 + return 0; 536 499 537 - /* 538 - * Cursor Region Of Interest (ROI) is a plane read from cursor 539 - * buffer to render. The ROI region is determined by the visiblity of 540 - * the cursor point. In the default Cursor image the cursor point will 541 - * be at the top left of the cursor image, unless it is specified 542 - * otherwise using hotspot feature. 543 - * 544 - * If the cursor point reaches the right (xres - x < cursor.width) or 545 - * bottom (yres - y < cursor.height) boundary of the screen, then ROI 546 - * width and ROI height need to be evaluated to crop the cursor image 547 - * accordingly. 
548 - * (xres-x) will be new cursor width when x > (xres - cursor.width) 549 - * (yres-y) will be new cursor height when y > (yres - cursor.height) 550 - */ 551 - roi_w = min(mdp5_crtc->cursor.width, xres - x); 552 - roi_h = min(mdp5_crtc->cursor.height, yres - y); 500 + mdp5_crtc->cursor.x = x = max(x, 0); 501 + mdp5_crtc->cursor.y = y = max(y, 0); 502 + 503 + get_roi(crtc, &roi_w, &roi_h); 553 504 554 505 spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags); 555 506 mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(mdp5_crtc->lm), ··· 567 544 static const struct drm_crtc_helper_funcs mdp5_crtc_helper_funcs = { 568 545 .mode_fixup = mdp5_crtc_mode_fixup, 569 546 .mode_set_nofb = mdp5_crtc_mode_set_nofb, 570 - .prepare = mdp5_crtc_disable, 571 - .commit = mdp5_crtc_enable, 547 + .disable = mdp5_crtc_disable, 548 + .enable = mdp5_crtc_enable, 572 549 .atomic_check = mdp5_crtc_atomic_check, 573 550 .atomic_begin = mdp5_crtc_atomic_begin, 574 551 .atomic_flush = mdp5_crtc_atomic_flush,
+3 -3
drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
··· 267 267 mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intf), 1); 268 268 spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags); 269 269 270 - mdp5_encoder->enabled = false; 270 + mdp5_encoder->enabled = true; 271 271 } 272 272 273 273 static const struct drm_encoder_helper_funcs mdp5_encoder_helper_funcs = { 274 274 .mode_fixup = mdp5_encoder_mode_fixup, 275 275 .mode_set = mdp5_encoder_mode_set, 276 - .prepare = mdp5_encoder_disable, 277 - .commit = mdp5_encoder_enable, 276 + .disable = mdp5_encoder_disable, 277 + .enable = mdp5_encoder_enable, 278 278 }; 279 279 280 280 /* initialize encoder */
+5
drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
··· 34 34 void mdp5_irq_preinstall(struct msm_kms *kms) 35 35 { 36 36 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); 37 + mdp5_enable(mdp5_kms); 37 38 mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, 0xffffffff); 39 + mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000); 40 + mdp5_disable(mdp5_kms); 38 41 } 39 42 40 43 int mdp5_irq_postinstall(struct msm_kms *kms) ··· 60 57 void mdp5_irq_uninstall(struct msm_kms *kms) 61 58 { 62 59 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); 60 + mdp5_enable(mdp5_kms); 63 61 mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000); 62 + mdp5_disable(mdp5_kms); 64 63 } 65 64 66 65 static void mdp5_irq_mdp(struct mdp_kms *mdp_kms)
+3 -1
drivers/gpu/drm/msm/msm_atomic.c
··· 219 219 * mark our set of crtc's as busy: 220 220 */ 221 221 ret = start_atomic(dev->dev_private, c->crtc_mask); 222 - if (ret) 222 + if (ret) { 223 + kfree(c); 223 224 return ret; 225 + } 224 226 225 227 /* 226 228 * This is the point of no return - everything below never fails except
+1 -1
drivers/gpu/drm/nouveau/nouveau_fbcon.c
··· 418 418 nouveau_fbcon_zfill(dev, fbcon); 419 419 420 420 /* To allow resizeing without swapping buffers */ 421 - NV_INFO(drm, "allocated %dx%d fb: 0x%lx, bo %p\n", 421 + NV_INFO(drm, "allocated %dx%d fb: 0x%llx, bo %p\n", 422 422 nouveau_fb->base.width, nouveau_fb->base.height, 423 423 nvbo->bo.offset, nvbo); 424 424
+3
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1405 1405 (x << 16) | y); 1406 1406 viewport_w = crtc->mode.hdisplay; 1407 1407 viewport_h = (crtc->mode.vdisplay + 1) & ~1; 1408 + if ((rdev->family >= CHIP_BONAIRE) && 1409 + (crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)) 1410 + viewport_h *= 2; 1408 1411 WREG32(EVERGREEN_VIEWPORT_SIZE + radeon_crtc->crtc_offset, 1409 1412 (viewport_w << 16) | viewport_h); 1410 1413
+16 -14
drivers/gpu/drm/radeon/atombios_encoders.c
···
1626 1626 struct radeon_connector *radeon_connector = NULL;
1627 1627 struct radeon_connector_atom_dig *radeon_dig_connector = NULL;
1628 1628 bool travis_quirk = false;
1629 - int encoder_mode;
1630 1629
1631 1630 if (connector) {
1632 1631 radeon_connector = to_radeon_connector(connector);
···
1721 1722 }
1722 1723 break;
1723 1724 }
1724 -
1725 - encoder_mode = atombios_get_encoder_mode(encoder);
1726 - if (connector && (radeon_audio != 0) &&
1727 - ((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
1728 - (ENCODER_MODE_IS_DP(encoder_mode) &&
1729 - drm_detect_monitor_audio(radeon_connector_edid(connector)))))
1730 - radeon_audio_dpms(encoder, mode);
1731 1725 }
1732 1726
1733 1727 static void
···
1729 1737 struct drm_device *dev = encoder->dev;
1730 1738 struct radeon_device *rdev = dev->dev_private;
1731 1739 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
1740 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
1741 + int encoder_mode = atombios_get_encoder_mode(encoder);
1732 1742
1733 1743 DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n",
1734 1744 radeon_encoder->encoder_id, mode, radeon_encoder->devices,
1735 1745 radeon_encoder->active_device);
1746 +
1747 + if (connector && (radeon_audio != 0) &&
1748 + ((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
1749 + (ENCODER_MODE_IS_DP(encoder_mode) &&
1750 + drm_detect_monitor_audio(radeon_connector_edid(connector)))))
1751 + radeon_audio_dpms(encoder, mode);
1752 +
1736 1753 switch (radeon_encoder->encoder_id) {
1737 1754 case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
1738 1755 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
···
2171 2170 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
2172 2171 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
2173 2172 /* handled in dpms */
2174 - encoder_mode = atombios_get_encoder_mode(encoder);
2175 - if (connector && (radeon_audio != 0) &&
2176 - ((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
2177 - (ENCODER_MODE_IS_DP(encoder_mode) &&
2178 - drm_detect_monitor_audio(radeon_connector_edid(connector)))))
2179 - radeon_audio_mode_set(encoder, adjusted_mode);
2180 2173 break;
2181 2174 case ENCODER_OBJECT_ID_INTERNAL_DDI:
2182 2175 case ENCODER_OBJECT_ID_INTERNAL_DVO1:
···
2192 2197 }
2193 2198
2194 2199 atombios_apply_encoder_quirks(encoder, adjusted_mode);
2200 +
2201 + encoder_mode = atombios_get_encoder_mode(encoder);
2202 + if (connector && (radeon_audio != 0) &&
2203 + ((encoder_mode == ATOM_ENCODER_MODE_HDMI) ||
2204 + (ENCODER_MODE_IS_DP(encoder_mode) &&
2205 + drm_detect_monitor_audio(radeon_connector_edid(connector)))))
2206 + radeon_audio_mode_set(encoder, adjusted_mode);
2195 2207 }
2196 2208
2197 2209 static bool
+3
drivers/gpu/drm/radeon/cik.c
··· 7555 7555 WREG32(DC_HPD5_INT_CONTROL, hpd5); 7556 7556 WREG32(DC_HPD6_INT_CONTROL, hpd6); 7557 7557 7558 + /* posting read */ 7559 + RREG32(SRBM_STATUS); 7560 + 7558 7561 return 0; 7559 7562 } 7560 7563
+33 -35
drivers/gpu/drm/radeon/dce6_afmt.c
···
26 26 #include "radeon_audio.h"
27 27 #include "sid.h"
28 28
29 + #define DCE8_DCCG_AUDIO_DTO1_PHASE 0x05b8
30 + #define DCE8_DCCG_AUDIO_DTO1_MODULE 0x05bc
31 +
29 32 u32 dce6_endpoint_rreg(struct radeon_device *rdev,
30 33 u32 block_offset, u32 reg)
31 34 {
···
255 252 void dce6_hdmi_audio_set_dto(struct radeon_device *rdev,
256 253 struct radeon_crtc *crtc, unsigned int clock)
257 254 {
258 - /* Two dtos; generally use dto0 for HDMI */
255 + /* Two dtos; generally use dto0 for HDMI */
259 256 u32 value = 0;
260 257
261 - if (crtc)
258 + if (crtc)
262 259 value |= DCCG_AUDIO_DTO0_SOURCE_SEL(crtc->crtc_id);
263 260
264 261 WREG32(DCCG_AUDIO_DTO_SOURCE, value);
265 262
266 - /* Express [24MHz / target pixel clock] as an exact rational
267 - * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
268 - * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
269 - */
270 - WREG32(DCCG_AUDIO_DTO0_PHASE, 24000);
271 - WREG32(DCCG_AUDIO_DTO0_MODULE, clock);
263 + /* Express [24MHz / target pixel clock] as an exact rational
264 + * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
265 + * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
266 + */
267 + WREG32(DCCG_AUDIO_DTO0_PHASE, 24000);
268 + WREG32(DCCG_AUDIO_DTO0_MODULE, clock);
272 269 }
273 270
274 271 void dce6_dp_audio_set_dto(struct radeon_device *rdev,
275 272 struct radeon_crtc *crtc, unsigned int clock)
276 273 {
277 - /* Two dtos; generally use dto1 for DP */
274 + /* Two dtos; generally use dto1 for DP */
278 275 u32 value = 0;
279 276 value |= DCCG_AUDIO_DTO_SEL;
280 277
281 - if (crtc)
278 + if (crtc)
282 279 value |= DCCG_AUDIO_DTO0_SOURCE_SEL(crtc->crtc_id);
283 280
284 281 WREG32(DCCG_AUDIO_DTO_SOURCE, value);
285 282
286 - /* Express [24MHz / target pixel clock] as an exact rational
287 - * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
288 - * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
289 - */
290 - WREG32(DCCG_AUDIO_DTO1_PHASE, 24000);
291 - WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
283 + /* Express [24MHz / target pixel clock] as an exact rational
284 + * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
285 + * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
286 + */
287 + if (ASIC_IS_DCE8(rdev)) {
288 + WREG32(DCE8_DCCG_AUDIO_DTO1_PHASE, 24000);
289 + WREG32(DCE8_DCCG_AUDIO_DTO1_MODULE, clock);
290 + } else {
291 + WREG32(DCCG_AUDIO_DTO1_PHASE, 24000);
292 + WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
293 + }
292 294 }
293 295
294 - void dce6_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable)
296 + void dce6_dp_enable(struct drm_encoder *encoder, bool enable)
295 297 {
296 298 struct drm_device *dev = encoder->dev;
297 299 struct radeon_device *rdev = dev->dev_private;
298 300 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
299 301 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
300 - uint32_t offset;
301 302
302 303 if (!dig || !dig->afmt)
303 304 return;
304 305
305 - offset = dig->afmt->offset;
306 -
307 306 if (enable) {
308 - if (dig->afmt->enabled)
309 - return;
310 -
311 - WREG32(EVERGREEN_DP_SEC_TIMESTAMP + offset, EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
312 - WREG32(EVERGREEN_DP_SEC_CNTL + offset,
313 - EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */
314 - EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */
315 - EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */
316 - EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */
317 - radeon_audio_enable(rdev, dig->afmt->pin, true);
307 + WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
308 + EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
309 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset,
310 + EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */
311 + EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */
312 + EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */
313 + EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */
318 314 } else {
319 - if (!dig->afmt->enabled)
320 - return;
321 -
322 - WREG32(EVERGREEN_DP_SEC_CNTL + offset, 0);
323 - radeon_audio_enable(rdev, dig->afmt->pin, false);
315 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
324 316 }
325 317
326 318 dig->afmt->enabled = enable;
+3
drivers/gpu/drm/radeon/evergreen.c
··· 4593 4593 WREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET, afmt5); 4594 4594 WREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET, afmt6); 4595 4595 4596 + /* posting read */ 4597 + RREG32(SRBM_STATUS); 4598 + 4596 4599 return 0; 4597 4600 } 4598 4601
+21 -38
drivers/gpu/drm/radeon/evergreen_hdmi.c
···
272 272 }
273 273
274 274 void dce4_dp_audio_set_dto(struct radeon_device *rdev,
275 - struct radeon_crtc *crtc, unsigned int clock)
275 + struct radeon_crtc *crtc, unsigned int clock)
276 276 {
277 277 u32 value;
278 278
···
294 294 * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
295 295 */
296 296 WREG32(DCCG_AUDIO_DTO1_PHASE, 24000);
297 - WREG32(DCCG_AUDIO_DTO1_MODULE, rdev->clock.max_pixel_clock * 10);
297 + WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
298 298 }
299 299
300 300 void dce4_set_vbi_packet(struct drm_encoder *encoder, u32 offset)
···
350 350 struct drm_device *dev = encoder->dev;
351 351 struct radeon_device *rdev = dev->dev_private;
352 352
353 - WREG32(HDMI_INFOFRAME_CONTROL0 + offset,
354 - HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
355 - HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
356 -
357 353 WREG32(AFMT_INFOFRAME_CONTROL0 + offset,
358 354 AFMT_AUDIO_INFO_UPDATE); /* required for audio info values to be updated */
359 -
360 - WREG32(HDMI_INFOFRAME_CONTROL1 + offset,
361 - HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
362 -
363 - WREG32(HDMI_AUDIO_PACKET_CONTROL + offset,
364 - HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
365 - HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */
366 355
367 356 WREG32(AFMT_60958_0 + offset,
368 357 AFMT_60958_CS_CHANNEL_NUMBER_L(1));
···
397 408 if (!dig || !dig->afmt)
398 409 return;
399 410
400 - /* Silent, r600_hdmi_enable will raise WARN for us */
401 - if (enable && dig->afmt->enabled)
402 - return;
403 - if (!enable && !dig->afmt->enabled)
404 - return;
411 + if (enable) {
412 + WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset,
413 + HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
405 414
406 - if (!enable && dig->afmt->pin) {
407 - radeon_audio_enable(rdev, dig->afmt->pin, 0);
408 - dig->afmt->pin = NULL;
415 + WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset,
416 + HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
417 + HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */
418 +
419 + WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
420 + HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
421 + HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
422 + } else {
423 + WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0);
409 424 }
410 425
411 426 dig->afmt->enabled = enable;
···
418 425 enable ? "En" : "Dis", dig->afmt->offset, radeon_encoder->encoder_id);
419 426 }
420 427
421 - void evergreen_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable)
428 + void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
422 429 {
423 430 struct drm_device *dev = encoder->dev;
424 431 struct radeon_device *rdev = dev->dev_private;
425 432 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
426 433 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
427 - uint32_t offset;
428 434
429 435 if (!dig || !dig->afmt)
430 436 return;
431 -
432 - offset = dig->afmt->offset;
433 437
434 438 if (enable) {
435 439 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
···
434 444 struct radeon_connector_atom_dig *dig_connector;
435 445 uint32_t val;
436 446
437 - if (dig->afmt->enabled)
438 - return;
439 -
440 - WREG32(EVERGREEN_DP_SEC_TIMESTAMP + offset, EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
447 + WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
448 + EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
441 449
442 450 if (radeon_connector->con_priv) {
443 451 dig_connector = radeon_connector->con_priv;
444 452 val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset);
445 453 val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf);
446 454
447 455 if (dig_connector->dp_clock == 162000)
···
447 459 else
448 460 val |= EVERGREEN_DP_SEC_N_BASE_MULTIPLE(5);
449 461
450 - WREG32(EVERGREEN_DP_SEC_AUD_N + offset, val);
462 + WREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset, val);
451 463 }
452 464
453 - WREG32(EVERGREEN_DP_SEC_CNTL + offset,
465 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset,
454 466 EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */
455 467 EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */
456 468 EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */
457 469 EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */
458 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
459 470 } else {
460 - if (!dig->afmt->enabled)
461 - return;
462 -
463 - WREG32(EVERGREEN_DP_SEC_CNTL + offset, 0);
464 - radeon_audio_enable(rdev, dig->afmt->pin, 0);
471 + WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
465 472 }
466 473
467 474 dig->afmt->enabled = enable;
+4
drivers/gpu/drm/radeon/r100.c
··· 728 728 tmp |= RADEON_FP2_DETECT_MASK; 729 729 } 730 730 WREG32(RADEON_GEN_INT_CNTL, tmp); 731 + 732 + /* read back to post the write */ 733 + RREG32(RADEON_GEN_INT_CNTL); 734 + 731 735 return 0; 732 736 } 733 737
+3
drivers/gpu/drm/radeon/r600.c
··· 3784 3784 WREG32(RV770_CG_THERMAL_INT, thermal_int); 3785 3785 } 3786 3786 3787 + /* posting read */ 3788 + RREG32(R_000E50_SRBM_STATUS); 3789 + 3787 3790 return 0; 3788 3791 } 3789 3792
-11
drivers/gpu/drm/radeon/r600_hdmi.c
··· 476 476 if (!dig || !dig->afmt) 477 477 return; 478 478 479 - /* Silent, r600_hdmi_enable will raise WARN for us */ 480 - if (enable && dig->afmt->enabled) 481 - return; 482 - if (!enable && !dig->afmt->enabled) 483 - return; 484 - 485 - if (!enable && dig->afmt->pin) { 486 - radeon_audio_enable(rdev, dig->afmt->pin, 0); 487 - dig->afmt->pin = NULL; 488 - } 489 - 490 479 /* Older chipsets require setting HDMI and routing manually */ 491 480 if (!ASIC_IS_DCE3(rdev)) { 492 481 if (enable)
+24 -26
drivers/gpu/drm/radeon/radeon_audio.c
··· 101 101 struct drm_display_mode *mode); 102 102 void r600_hdmi_enable(struct drm_encoder *encoder, bool enable); 103 103 void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable); 104 - void evergreen_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable); 105 - void dce6_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable); 104 + void evergreen_dp_enable(struct drm_encoder *encoder, bool enable); 105 + void dce6_dp_enable(struct drm_encoder *encoder, bool enable); 106 106 107 107 static const u32 pin_offsets[7] = 108 108 { ··· 210 210 .set_avi_packet = evergreen_set_avi_packet, 211 211 .set_audio_packet = dce4_set_audio_packet, 212 212 .mode_set = radeon_audio_dp_mode_set, 213 - .dpms = evergreen_enable_dp_audio_packets, 213 + .dpms = evergreen_dp_enable, 214 214 }; 215 215 216 216 static struct radeon_audio_funcs dce6_hdmi_funcs = { ··· 240 240 .set_avi_packet = evergreen_set_avi_packet, 241 241 .set_audio_packet = dce4_set_audio_packet, 242 242 .mode_set = radeon_audio_dp_mode_set, 243 - .dpms = dce6_enable_dp_audio_packets, 243 + .dpms = dce6_dp_enable, 244 244 }; 245 245 246 246 static void radeon_audio_interface_init(struct radeon_device *rdev) ··· 452 452 } 453 453 454 454 void radeon_audio_detect(struct drm_connector *connector, 455 - enum drm_connector_status status) 455 + enum drm_connector_status status) 456 456 { 457 457 struct radeon_device *rdev; 458 458 struct radeon_encoder *radeon_encoder; ··· 483 483 else 484 484 radeon_encoder->audio = rdev->audio.hdmi_funcs; 485 485 486 - radeon_audio_write_speaker_allocation(connector->encoder); 487 - radeon_audio_write_sad_regs(connector->encoder); 488 - if (connector->encoder->crtc) 489 - radeon_audio_write_latency_fields(connector->encoder, 490 - &connector->encoder->crtc->mode); 486 + dig->afmt->pin = radeon_audio_get_pin(connector->encoder); 491 487 radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 492 488 } else { 493 489 radeon_audio_enable(rdev, dig->afmt->pin, 0); 
490 + dig->afmt->pin = NULL;
494 491 }
495 492 }
496 493
···
691 694 * update the info frames with the data from the current display mode
692 695 */
693 696 static void radeon_audio_hdmi_mode_set(struct drm_encoder *encoder,
694 - struct drm_display_mode *mode)
697 + struct drm_display_mode *mode)
695 698 {
696 - struct radeon_device *rdev = encoder->dev->dev_private;
697 699 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
698 700 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
699 701
700 702 if (!dig || !dig->afmt)
701 703 return;
702 704
703 - /* disable audio prior to setting up hw */
704 - dig->afmt->pin = radeon_audio_get_pin(encoder);
705 - radeon_audio_enable(rdev, dig->afmt->pin, 0);
705 + radeon_audio_set_mute(encoder, true);
706 706
707 + radeon_audio_write_speaker_allocation(encoder);
708 + radeon_audio_write_sad_regs(encoder);
709 + radeon_audio_write_latency_fields(encoder, mode);
707 710 radeon_audio_set_dto(encoder, mode->clock);
708 711 radeon_audio_set_vbi_packet(encoder);
709 712 radeon_hdmi_set_color_depth(encoder);
710 - radeon_audio_set_mute(encoder, false);
711 713 radeon_audio_update_acr(encoder, mode->clock);
712 714 radeon_audio_set_audio_packet(encoder);
713 715 radeon_audio_select_pin(encoder);
···
714 718 if (radeon_audio_set_avi_packet(encoder, mode) < 0)
715 719 return;
716 720
717 - /* enable audio after to setting up hw */
718 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
721 + radeon_audio_set_mute(encoder, false);
719 722 }
720 723
721 724 static void radeon_audio_dp_mode_set(struct drm_encoder *encoder,
···
724 729 struct radeon_device *rdev = dev->dev_private;
725 730 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
726 731 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
732 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
733 + struct radeon_connector *radeon_connector = to_radeon_connector(connector);
734 + struct radeon_connector_atom_dig *dig_connector =
735 + radeon_connector->con_priv;
727 736
728 737 if (!dig || !dig->afmt)
729 738 return;
730 739
731 - /* disable audio prior to setting up hw */
732 - dig->afmt->pin = radeon_audio_get_pin(encoder);
733 - radeon_audio_enable(rdev, dig->afmt->pin, 0);
734 -
735 - radeon_audio_set_dto(encoder, rdev->clock.default_dispclk * 10);
740 + radeon_audio_write_speaker_allocation(encoder);
741 + radeon_audio_write_sad_regs(encoder);
742 + radeon_audio_write_latency_fields(encoder, mode);
743 + if (rdev->clock.dp_extclk || ASIC_IS_DCE5(rdev))
744 + radeon_audio_set_dto(encoder, rdev->clock.default_dispclk * 10);
745 + else
746 + radeon_audio_set_dto(encoder, dig_connector->dp_clock);
736 747 radeon_audio_set_audio_packet(encoder);
737 748 radeon_audio_select_pin(encoder);
738 749
739 750 if (radeon_audio_set_avi_packet(encoder, mode) < 0)
740 751 return;
741 -
742 - /* enable audio after to setting up hw */
743 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
744 752 }
745 753
746 754 void radeon_audio_mode_set(struct drm_encoder *encoder,
+3 -1
drivers/gpu/drm/radeon/radeon_cs.c
··· 256 256 u32 ring = RADEON_CS_RING_GFX; 257 257 s32 priority = 0; 258 258 259 + INIT_LIST_HEAD(&p->validated); 260 + 259 261 if (!cs->num_chunks) { 260 262 return 0; 261 263 } 264 + 262 265 /* get chunks */ 263 - INIT_LIST_HEAD(&p->validated); 264 266 p->idx = 0; 265 267 p->ib.sa_bo = NULL; 266 268 p->const_ib.sa_bo = NULL;
+4
drivers/gpu/drm/radeon/rs600.c
··· 694 694 WREG32(R_007D18_DC_HOT_PLUG_DETECT2_INT_CONTROL, hpd2); 695 695 if (ASIC_IS_DCE2(rdev)) 696 696 WREG32(R_007408_HDMI0_AUDIO_PACKET_CONTROL, hdmi0); 697 + 698 + /* posting read */ 699 + RREG32(R_000040_GEN_INT_CNTL); 700 + 697 701 return 0; 698 702 } 699 703
+3
drivers/gpu/drm/radeon/si.c
··· 6203 6203 6204 6204 WREG32(CG_THERMAL_INT, thermal_int); 6205 6205 6206 + /* posting read */ 6207 + RREG32(SRBM_STATUS); 6208 + 6206 6209 return 0; 6207 6210 } 6208 6211
+2 -2
drivers/gpu/drm/radeon/sid.h
··· 912 912 913 913 #define DCCG_AUDIO_DTO0_PHASE 0x05b0 914 914 #define DCCG_AUDIO_DTO0_MODULE 0x05b4 915 - #define DCCG_AUDIO_DTO1_PHASE 0x05b8 916 - #define DCCG_AUDIO_DTO1_MODULE 0x05bc 915 + #define DCCG_AUDIO_DTO1_PHASE 0x05c0 916 + #define DCCG_AUDIO_DTO1_MODULE 0x05c4 917 917 918 918 #define AFMT_AUDIO_SRC_CONTROL 0x713c 919 919 #define AFMT_AUDIO_SRC_SELECT(x) (((x) & 7) << 0)
+1 -1
drivers/gpu/drm/ttm/ttm_bo.c
··· 74 74 pr_err(" has_type: %d\n", man->has_type); 75 75 pr_err(" use_type: %d\n", man->use_type); 76 76 pr_err(" flags: 0x%08X\n", man->flags); 77 - pr_err(" gpu_offset: 0x%08lX\n", man->gpu_offset); 77 + pr_err(" gpu_offset: 0x%08llX\n", man->gpu_offset); 78 78 pr_err(" size: %llu\n", man->size); 79 79 pr_err(" available_caching: 0x%08X\n", man->available_caching); 80 80 pr_err(" default_caching: 0x%08X\n", man->default_caching);
+2
drivers/gpu/ipu-v3/ipu-di.c
··· 459 459 460 460 clkrate = clk_get_rate(di->clk_ipu); 461 461 div = DIV_ROUND_CLOSEST(clkrate, sig->mode.pixelclock); 462 + if (div == 0) 463 + div = 1; 462 464 rate = clkrate / div; 463 465 464 466 error = rate / (sig->mode.pixelclock / 1000);
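The ipu-di hunk above guards against DIV_ROUND_CLOSEST() returning 0: when the requested pixel clock is more than about twice the IPU clock, round-to-nearest division yields a zero divider and the following `clkrate / div` would divide by zero. A quick sketch of the failure mode and the clamp (the clock values are made up for illustration, not taken from any board):

```python
def div_round_closest(x, divisor):
    # Model of the kernel's DIV_ROUND_CLOSEST() for non-negative integers.
    return (x + divisor // 2) // divisor

clkrate = 133_000_000      # hypothetical IPU source clock, Hz
pixelclock = 300_000_000   # requested pixel clock larger than the source

div = div_round_closest(clkrate, pixelclock)  # rounds down to 0
if div == 0:               # the clamp added by this patch
    div = 1
rate = clkrate // div      # safe now; without the clamp this would trap
print(div, rate)
```

With a sane request (say a 66 MHz pixel clock from the same source) the divider comes out nonzero and the clamp is a no-op, so only the degenerate case changes behavior.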
+21 -19
drivers/i2c/busses/i2c-designware-baytrail.c
···
17 17 #include <linux/acpi.h>
18 18 #include <linux/i2c.h>
19 19 #include <linux/interrupt.h>
20 +
20 21 #include <asm/iosf_mbi.h>
22 +
21 23 #include "i2c-designware-core.h"
22 24
23 25 #define SEMAPHORE_TIMEOUT 100
24 26 #define PUNIT_SEMAPHORE 0x7
27 + #define PUNIT_SEMAPHORE_BIT BIT(0)
28 + #define PUNIT_SEMAPHORE_ACQUIRE BIT(1)
25 29
26 30 static unsigned long acquired;
27 31
28 32 static int get_sem(struct device *dev, u32 *sem)
29 33 {
30 - u32 reg_val;
34 + u32 data;
31 35 int ret;
32 36
33 37 ret = iosf_mbi_read(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_READ, PUNIT_SEMAPHORE,
34 - &reg_val);
38 + &data);
35 39 if (ret) {
36 40 dev_err(dev, "iosf failed to read punit semaphore\n");
37 41 return ret;
38 42 }
39 43
40 - *sem = reg_val & 0x1;
44 + *sem = data & PUNIT_SEMAPHORE_BIT;
41 45
42 46 return 0;
43 47 }
···
56 52 return;
57 53 }
58 54
59 - data = data & 0xfffffffe;
55 + data &= ~PUNIT_SEMAPHORE_BIT;
60 56 if (iosf_mbi_write(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_WRITE,
61 - PUNIT_SEMAPHORE, data))
57 + PUNIT_SEMAPHORE, data))
62 58 dev_err(dev, "iosf failed to reset punit semaphore during write\n");
63 59 }
64 60
65 - int baytrail_i2c_acquire(struct dw_i2c_dev *dev)
61 + static int baytrail_i2c_acquire(struct dw_i2c_dev *dev)
66 62 {
67 - u32 sem = 0;
63 + u32 sem;
68 64 int ret;
69 65 unsigned long start, end;
66 +
67 + might_sleep();
70 68
71 69 if (!dev || !dev->dev)
72 70 return -ENODEV;
73 71
74 - if (!dev->acquire_lock)
72 + if (!dev->release_lock)
75 73 return 0;
76 74
77 - /* host driver writes 0x2 to side band semaphore register */
75 + /* host driver writes to side band semaphore register */
78 76 ret = iosf_mbi_write(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_WRITE,
79 - PUNIT_SEMAPHORE, 0x2);
77 + PUNIT_SEMAPHORE, PUNIT_SEMAPHORE_ACQUIRE);
80 78 if (ret) {
81 79 dev_err(dev->dev, "iosf punit semaphore request failed\n");
82 80 return ret;
···
87 81 /* host driver waits for bit 0 to be set in semaphore register */
88 82 start = jiffies;
89 83 end = start + msecs_to_jiffies(SEMAPHORE_TIMEOUT);
90 - while (!time_after(jiffies, end)) {
84 + do {
91 85 ret = get_sem(dev->dev, &sem);
92 86 if (!ret && sem) {
93 87 acquired = jiffies;
···
97 91 }
98 92
99 93 usleep_range(1000, 2000);
100 - }
94 + } while (time_before(jiffies, end));
101 95
102 96 dev_err(dev->dev, "punit semaphore timed out, resetting\n");
103 97 reset_semaphore(dev->dev);
104 98
105 99 ret = iosf_mbi_read(BT_MBI_UNIT_PMC, BT_MBI_BUNIT_READ,
106 - PUNIT_SEMAPHORE, &sem);
107 - if (!ret)
100 + PUNIT_SEMAPHORE, &sem);
101 + if (ret)
108 102 dev_err(dev->dev, "iosf failed to read punit semaphore\n");
109 103 else
110 104 dev_err(dev->dev, "PUNIT SEM: %d\n", sem);
···
113 107
114 108 return -ETIMEDOUT;
115 109 }
116 - EXPORT_SYMBOL(baytrail_i2c_acquire);
117 110
118 - void baytrail_i2c_release(struct dw_i2c_dev *dev)
111 + static void baytrail_i2c_release(struct dw_i2c_dev *dev)
119 112 {
120 113 if (!dev || !dev->dev)
121 114 return;
···
126 121 dev_dbg(dev->dev, "punit semaphore held for %ums\n",
127 122 jiffies_to_msecs(jiffies - acquired));
128 123 }
129 - EXPORT_SYMBOL(baytrail_i2c_release);
130 124
131 125 int i2c_dw_eval_lock_support(struct dw_i2c_dev *dev)
132 126 {
···
141 137 return 0;
142 138
143 139 status = acpi_evaluate_integer(handle, "_SEM", NULL, &shared_host);
144 -
145 140 if (ACPI_FAILURE(status))
146 141 return 0;
147 142
···
156 153
157 154 return 0;
158 155 }
159 - EXPORT_SYMBOL(i2c_dw_eval_lock_support);
160 156
161 157 MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>");
162 158 MODULE_DESCRIPTION("Baytrail I2C Semaphore driver");
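The loop rewrite above replaces a pre-tested `while (!time_after(jiffies, end))` with a post-tested `do { ... } while (time_before(jiffies, end))`. The practical difference: if the task is scheduled out long enough that the deadline has already passed before the first check, the old shape never reads the semaphore at all, while the new shape still polls at least once. A minimal model of the two shapes (the callables stand in for `get_sem()` and the jiffies comparison; none of this is driver code):

```python
def poll(read_sem, deadline_passed, post_test):
    """Count how many times the semaphore is read for each loop shape."""
    tries = 0
    if post_test:
        # do { ... } while (time_before(jiffies, end));
        while True:
            tries += 1
            if read_sem():
                return tries
            if deadline_passed():
                return tries
    else:
        # while (!time_after(jiffies, end)) { ... }
        while not deadline_passed():
            tries += 1
            if read_sem():
                return tries
        return tries

# Deadline already expired before the first check: the pre-test loop
# gives up without a single read, the post-test loop still tries once.
print(poll(lambda: True, lambda: True, post_test=False))
print(poll(lambda: True, lambda: True, post_test=True))
```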
+4 -13
drivers/iio/adc/mcp3422.c
··· 58 58 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SAMP_FREQ), \ 59 59 } 60 60 61 - /* LSB is in nV to eliminate floating point */ 62 - static const u32 rates_to_lsb[] = {1000000, 250000, 62500, 15625}; 63 - 64 - /* 65 - * scales calculated as: 66 - * rates_to_lsb[sample_rate] / (1 << pga); 67 - * pga is 1 for 0, 2 68 - */ 69 - 70 61 static const int mcp3422_scales[4][4] = { 71 - { 1000000, 250000, 62500, 15625 }, 72 - { 500000 , 125000, 31250, 7812 }, 73 - { 250000 , 62500 , 15625, 3906 }, 74 - { 125000 , 31250 , 7812 , 1953 } }; 62 + { 1000000, 500000, 250000, 125000 }, 63 + { 250000 , 125000, 62500 , 31250 }, 64 + { 62500 , 31250 , 15625 , 7812 }, 65 + { 15625 , 7812 , 3906 , 1953 } }; 75 66 76 67 /* Constant msleep times for data acquisitions */ 77 68 static const int mcp3422_read_times[4] = {
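The corrected mcp3422_scales[] table above still follows the rule stated in the deleted comment: each entry is the per-sample-rate LSB size (in nV, from the removed rates_to_lsb[] array) divided by the PGA gain; the old table had simply been filled in with the wrong row/column layout. A sketch reproducing the new table with integer division (values copied from the diff; this is not part of the driver):

```python
# LSB size in nV for each of the four sample-rate settings, as listed in
# the removed rates_to_lsb[] table.
rates_to_lsb = [1000000, 250000, 62500, 15625]
pga_gains = [1, 2, 4, 8]

# One row per sample rate, one column per PGA gain setting.
mcp3422_scales = [[lsb // gain for gain in pga_gains] for lsb in rates_to_lsb]

for row in mcp3422_scales:
    print(row)
```

The integer division also explains the truncated entries such as 7812 and 1953 (62500/8 and 15625/8 rounded down), matching the table in the patch exactly.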
+2 -1
drivers/iio/adc/qcom-spmi-iadc.c
··· 296 296 if (iadc->poll_eoc) { 297 297 ret = iadc_poll_wait_eoc(iadc, wait); 298 298 } else { 299 - ret = wait_for_completion_timeout(&iadc->complete, wait); 299 + ret = wait_for_completion_timeout(&iadc->complete, 300 + usecs_to_jiffies(wait)); 300 301 if (!ret) 301 302 ret = -ETIMEDOUT; 302 303 else
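The one-liner above fixes a unit mismatch: wait_for_completion_timeout() takes a timeout in jiffies, but `wait` here is in microseconds, so without the conversion the wait was wildly shorter than intended at typical tick rates. A rough model of the conversion, assuming round-up behavior and a hypothetical HZ of 100 (the real usecs_to_jiffies() has more cases; this only illustrates the magnitude of the bug):

```python
HZ = 100  # hypothetical tick rate, for illustration only

def usecs_to_jiffies(us, hz=HZ):
    # Round up so the resulting wait is never shorter than requested.
    return -(-us * hz // 1_000_000)

wait_us = 250_000
# Passing wait_us directly would mean 250000 jiffies (~42 min at HZ=100)
# being *interpreted* where 25 jiffies (250 ms) were intended -- or, for
# small values, a timeout far too short. The conversion restores intent:
print(usecs_to_jiffies(wait_us))
```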
+2
drivers/iio/common/ssp_sensors/ssp_dev.c
··· 640 640 return 0; 641 641 } 642 642 643 + #ifdef CONFIG_PM_SLEEP 643 644 static int ssp_suspend(struct device *dev) 644 645 { 645 646 int ret; ··· 689 688 690 689 return 0; 691 690 } 691 + #endif /* CONFIG_PM_SLEEP */ 692 692 693 693 static const struct dev_pm_ops ssp_pm_ops = { 694 694 SET_SYSTEM_SLEEP_PM_OPS(ssp_suspend, ssp_resume)
+1 -1
drivers/iio/dac/ad5686.c
··· 322 322 st = iio_priv(indio_dev); 323 323 spi_set_drvdata(spi, indio_dev); 324 324 325 - st->reg = devm_regulator_get(&spi->dev, "vcc"); 325 + st->reg = devm_regulator_get_optional(&spi->dev, "vcc"); 326 326 if (!IS_ERR(st->reg)) { 327 327 ret = regulator_enable(st->reg); 328 328 if (ret)
+41 -28
drivers/iio/humidity/dht11.c
··· 29 29 #include <linux/wait.h> 30 30 #include <linux/bitops.h> 31 31 #include <linux/completion.h> 32 + #include <linux/mutex.h> 32 33 #include <linux/delay.h> 33 34 #include <linux/gpio.h> 34 35 #include <linux/of_gpio.h> ··· 40 39 41 40 #define DHT11_DATA_VALID_TIME 2000000000 /* 2s in ns */ 42 41 43 - #define DHT11_EDGES_PREAMBLE 4 42 + #define DHT11_EDGES_PREAMBLE 2 44 43 #define DHT11_BITS_PER_READ 40 44 + /* 45 + * Note that when reading the sensor actually 84 edges are detected, but 46 + * since the last edge is not significant, we only store 83: 47 + */ 45 48 #define DHT11_EDGES_PER_READ (2*DHT11_BITS_PER_READ + DHT11_EDGES_PREAMBLE + 1) 46 49 47 50 /* Data transmission timing (nano seconds) */ ··· 62 57 int irq; 63 58 64 59 struct completion completion; 60 + struct mutex lock; 65 61 66 62 s64 timestamp; 67 63 int temperature; ··· 94 88 unsigned char temp_int, temp_dec, hum_int, hum_dec, checksum; 95 89 96 90 /* Calculate timestamp resolution */ 97 - for (i = 0; i < dht11->num_edges; ++i) { 91 + for (i = 1; i < dht11->num_edges; ++i) { 98 92 t = dht11->edges[i].ts - dht11->edges[i-1].ts; 99 93 if (t > 0 && t < timeres) 100 94 timeres = t; ··· 144 138 return 0; 145 139 } 146 140 141 + /* 142 + * IRQ handler called on GPIO edges 143 + */ 144 + static irqreturn_t dht11_handle_irq(int irq, void *data) 145 + { 146 + struct iio_dev *iio = data; 147 + struct dht11 *dht11 = iio_priv(iio); 148 + 149 + /* TODO: Consider making the handler safe for IRQ sharing */ 150 + if (dht11->num_edges < DHT11_EDGES_PER_READ && dht11->num_edges >= 0) { 151 + dht11->edges[dht11->num_edges].ts = iio_get_time_ns(); 152 + dht11->edges[dht11->num_edges++].value = 153 + gpio_get_value(dht11->gpio); 154 + 155 + if (dht11->num_edges >= DHT11_EDGES_PER_READ) 156 + complete(&dht11->completion); 157 + } 158 + 159 + return IRQ_HANDLED; 160 + } 161 + 147 162 static int dht11_read_raw(struct iio_dev *iio_dev, 148 163 const struct iio_chan_spec *chan, 149 164 int *val, int *val2, long m) ··· 
172 145 struct dht11 *dht11 = iio_priv(iio_dev); 173 146 int ret; 174 147 148 + mutex_lock(&dht11->lock); 175 149 if (dht11->timestamp + DHT11_DATA_VALID_TIME < iio_get_time_ns()) { 176 150 reinit_completion(&dht11->completion); 177 151 ··· 185 157 if (ret) 186 158 goto err; 187 159 160 + ret = request_irq(dht11->irq, dht11_handle_irq, 161 + IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 162 + iio_dev->name, iio_dev); 163 + if (ret) 164 + goto err; 165 + 188 166 ret = wait_for_completion_killable_timeout(&dht11->completion, 189 167 HZ); 168 + 169 + free_irq(dht11->irq, iio_dev); 170 + 190 171 if (ret == 0 && dht11->num_edges < DHT11_EDGES_PER_READ - 1) { 191 172 dev_err(&iio_dev->dev, 192 173 "Only %d signal edges detected\n", ··· 222 185 ret = -EINVAL; 223 186 err: 224 187 dht11->num_edges = -1; 188 + mutex_unlock(&dht11->lock); 225 189 return ret; 226 190 } 227 191 ··· 230 192 .driver_module = THIS_MODULE, 231 193 .read_raw = dht11_read_raw, 232 194 }; 233 - 234 - /* 235 - * IRQ handler called on GPIO edges 236 - */ 237 - static irqreturn_t dht11_handle_irq(int irq, void *data) 238 - { 239 - struct iio_dev *iio = data; 240 - struct dht11 *dht11 = iio_priv(iio); 241 - 242 - /* TODO: Consider making the handler safe for IRQ sharing */ 243 - if (dht11->num_edges < DHT11_EDGES_PER_READ && dht11->num_edges >= 0) { 244 - dht11->edges[dht11->num_edges].ts = iio_get_time_ns(); 245 - dht11->edges[dht11->num_edges++].value = 246 - gpio_get_value(dht11->gpio); 247 - 248 - if (dht11->num_edges >= DHT11_EDGES_PER_READ) 249 - complete(&dht11->completion); 250 - } 251 - 252 - return IRQ_HANDLED; 253 - } 254 195 255 196 static const struct iio_chan_spec dht11_chan_spec[] = { 256 197 { .type = IIO_TEMP, ··· 273 256 dev_err(dev, "GPIO %d has no interrupt\n", dht11->gpio); 274 257 return -EINVAL; 275 258 } 276 - ret = devm_request_irq(dev, dht11->irq, dht11_handle_irq, 277 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 278 - pdev->name, iio); 279 - if (ret) 280 - return ret; 281 259 
282 260 dht11->timestamp = iio_get_time_ns() - DHT11_DATA_VALID_TIME - 1; 283 261 dht11->num_edges = -1; ··· 280 268 platform_set_drvdata(pdev, iio); 281 269 282 270 init_completion(&dht11->completion); 271 + mutex_init(&dht11->lock); 283 272 iio->name = pdev->name; 284 273 iio->dev.parent = &pdev->dev; 285 274 iio->info = &dht11_iio_info;
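The dht11 fix above starts the timestamp-resolution scan at `i = 1`, since `edges[i-1]` is read inside the loop body. A standalone userspace sketch of that scan (names hypothetical, not driver code) finds the smallest positive gap between consecutive edge timestamps:

```c
#include <stdint.h>

/* Standalone sketch (not driver code): find the smallest positive delta
 * between consecutive edge timestamps. Starting the loop at i = 1 is the
 * point of the fix above -- edges[i - 1] must exist on the first pass. */
static int64_t edge_time_resolution(const int64_t *ts, int num_edges)
{
    int64_t timeres = INT64_MAX;

    for (int i = 1; i < num_edges; i++) {
        int64_t t = ts[i] - ts[i - 1];

        if (t > 0 && t < timeres)
            timeres = t;
    }
    return timeres;
}
```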
+3 -3
drivers/iio/humidity/si7020.c
··· 45 45 struct iio_chan_spec const *chan, int *val, 46 46 int *val2, long mask) 47 47 { 48 - struct i2c_client *client = iio_priv(indio_dev); 48 + struct i2c_client **client = iio_priv(indio_dev); 49 49 int ret; 50 50 51 51 switch (mask) { 52 52 case IIO_CHAN_INFO_RAW: 53 - ret = i2c_smbus_read_word_data(client, 53 + ret = i2c_smbus_read_word_data(*client, 54 54 chan->type == IIO_TEMP ? 55 55 SI7020CMD_TEMP_HOLD : 56 56 SI7020CMD_RH_HOLD); ··· 126 126 /* Wait the maximum power-up time after software reset. */ 127 127 msleep(15); 128 128 129 - indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*client)); 129 + indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data)); 130 130 if (!indio_dev) 131 131 return -ENOMEM; 132 132
+2 -1
drivers/iio/imu/adis16400_core.c
··· 26 26 #include <linux/list.h> 27 27 #include <linux/module.h> 28 28 #include <linux/debugfs.h> 29 + #include <linux/bitops.h> 29 30 30 31 #include <linux/iio/iio.h> 31 32 #include <linux/iio/sysfs.h> ··· 415 414 mutex_unlock(&indio_dev->mlock); 416 415 if (ret) 417 416 return ret; 418 - val16 = ((val16 & 0xFFF) << 4) >> 4; 417 + val16 = sign_extend32(val16, 11); 419 418 *val = val16; 420 419 return IIO_VAL_INT; 421 420 case IIO_CHAN_INFO_OFFSET:
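For reference, `sign_extend32(value, index)` treats bit `index` as the sign bit of the field; the open-coded `((val16 & 0xFFF) << 4) >> 4` it replaces did the same for a 12-bit field, but less readably. A minimal userspace equivalent of the kernel helper (assuming two's complement and arithmetic right shift, as the kernel does):

```c
#include <stdint.h>

/* Userspace sketch of the kernel's sign_extend32(): shift the field's sign
 * bit up to bit 31, then arithmetic-shift back down so it propagates. */
static int32_t sign_extend_32(uint32_t value, unsigned int index)
{
    unsigned int shift = 31 - index;

    return (int32_t)(value << shift) >> shift;
}
```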
+5 -1
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
··· 780 780 781 781 i2c_set_clientdata(client, indio_dev); 782 782 indio_dev->dev.parent = &client->dev; 783 - indio_dev->name = id->name; 783 + /* id will be NULL when enumerated via ACPI */ 784 + if (id) 785 + indio_dev->name = (char *)id->name; 786 + else 787 + indio_dev->name = (char *)dev_name(&client->dev); 784 788 indio_dev->channels = inv_mpu_channels; 785 789 indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels); 786 790
+2
drivers/iio/light/Kconfig
··· 73 73 config GP2AP020A00F
 74 74 tristate "Sharp GP2AP020A00F Proximity/ALS sensor"
 75 75 depends on I2C
 76 + select REGMAP_I2C
 76 77 select IIO_BUFFER
 77 78 select IIO_TRIGGERED_BUFFER
 78 79 select IRQ_WORK
··· 127 126 config JSA1212
 128 127 tristate "JSA1212 ALS and proximity sensor driver"
 129 128 depends on I2C
 129 + select REGMAP_I2C
 130 130 help
 131 131 Say Y here if you want to build an IIO driver for JSA1212
 132 132 proximity & ALS sensor device.
+2
drivers/iio/magnetometer/Kconfig
··· 18 18 19 19 config AK09911 20 20 tristate "Asahi Kasei AK09911 3-axis Compass" 21 + depends on I2C 22 + depends on GPIOLIB 21 23 select AK8975 22 24 help 23 25 Deprecated: AK09911 is now supported by AK8975 driver.
+3 -3
drivers/input/keyboard/tc3589x-keypad.c
··· 411 411 412 412 input_set_drvdata(input, keypad); 413 413 414 - error = request_threaded_irq(irq, NULL, 415 - tc3589x_keypad_irq, plat->irqtype, 416 - "tc3589x-keypad", keypad); 414 + error = request_threaded_irq(irq, NULL, tc3589x_keypad_irq, 415 + plat->irqtype | IRQF_ONESHOT, 416 + "tc3589x-keypad", keypad); 417 417 if (error < 0) { 418 418 dev_err(&pdev->dev, 419 419 "Could not allocate irq %d,error %d\n",
+1
drivers/input/misc/mma8450.c
··· 187 187 idev->private = m; 188 188 idev->input->name = MMA8450_DRV_NAME; 189 189 idev->input->id.bustype = BUS_I2C; 190 + idev->input->dev.parent = &c->dev; 190 191 idev->poll = mma8450_poll; 191 192 idev->poll_interval = POLL_INTERVAL; 192 193 idev->poll_interval_max = POLL_INTERVAL_MAX;
+3 -1
drivers/input/mouse/alps.c
··· 2605 2605 return -ENOMEM; 2606 2606 2607 2607 error = alps_identify(psmouse, priv); 2608 - if (error) 2608 + if (error) { 2609 + kfree(priv); 2609 2610 return error; 2611 + } 2610 2612 2611 2613 if (set_properties) { 2612 2614 psmouse->vendor = "ALPS";
+1 -1
drivers/input/mouse/cyapa_gen3.c
··· 20 20 #include <linux/input/mt.h> 21 21 #include <linux/module.h> 22 22 #include <linux/slab.h> 23 - #include <linux/unaligned/access_ok.h> 23 + #include <asm/unaligned.h> 24 24 #include "cyapa.h" 25 25 26 26
+2 -2
drivers/input/mouse/cyapa_gen5.c
··· 17 17 #include <linux/mutex.h> 18 18 #include <linux/completion.h> 19 19 #include <linux/slab.h> 20 - #include <linux/unaligned/access_ok.h> 20 + #include <asm/unaligned.h> 21 21 #include <linux/crc-itu-t.h> 22 22 #include "cyapa.h" 23 23 ··· 1926 1926 electrodes_tx = cyapa->electrodes_x; 1927 1927 max_element_cnt = ((cyapa->aligned_electrodes_rx + 7) & 1928 1928 ~7u) * electrodes_tx; 1929 - } else if (idac_data_type == GEN5_RETRIEVE_SELF_CAP_PWC_DATA) { 1929 + } else { 1930 1930 offset = 2; 1931 1931 max_element_cnt = cyapa->electrodes_x + 1932 1932 cyapa->electrodes_y;
+35 -15
drivers/input/mouse/focaltech.c
··· 67 67 68 68 #define FOC_MAX_FINGERS 5 69 69 70 - #define FOC_MAX_X 2431 71 - #define FOC_MAX_Y 1663 72 - 73 70 /* 74 71 * Current state of a single finger on the touchpad. 75 72 */ ··· 126 129 input_mt_slot(dev, i); 127 130 input_mt_report_slot_state(dev, MT_TOOL_FINGER, active); 128 131 if (active) { 129 - input_report_abs(dev, ABS_MT_POSITION_X, finger->x); 132 + unsigned int clamped_x, clamped_y; 133 + /* 134 + * The touchpad might report invalid data, so we clamp 135 + * the resulting values so that we do not confuse 136 + * userspace. 137 + */ 138 + clamped_x = clamp(finger->x, 0U, priv->x_max); 139 + clamped_y = clamp(finger->y, 0U, priv->y_max); 140 + input_report_abs(dev, ABS_MT_POSITION_X, clamped_x); 130 141 input_report_abs(dev, ABS_MT_POSITION_Y, 131 - FOC_MAX_Y - finger->y); 142 + priv->y_max - clamped_y); 132 143 } 133 144 } 134 145 input_mt_report_pointer_emulation(dev, true); ··· 184 179 } 185 180 186 181 state->pressed = (packet[0] >> 4) & 1; 187 - 188 - /* 189 - * packet[5] contains some kind of tool size in the most 190 - * significant nibble. 0xff is a special value (latching) that 191 - * signals a large contact area. 
192 - */
 193 - if (packet[5] == 0xff) {
 194 - state->fingers[finger].valid = false;
 195 - return;
 196 - }
 197 182
 198 183 state->fingers[finger].x = ((packet[1] & 0xf) << 8) | packet[2];
 199 184 state->fingers[finger].y = (packet[3] << 8) | packet[4];
··· 376 381
 377 382 return 0;
 378 383 }
 384 +
 385 + static void focaltech_set_resolution(struct psmouse *psmouse, unsigned int resolution)
 386 + {
 387 + /* not supported yet */
 388 + }
 389 +
 390 + static void focaltech_set_rate(struct psmouse *psmouse, unsigned int rate)
 391 + {
 392 + /* not supported yet */
 393 + }
 394 +
 395 + static void focaltech_set_scale(struct psmouse *psmouse,
 396 + enum psmouse_scale scale)
 397 + {
 398 + /* not supported yet */
 399 + }
 400 +
 379 401 int focaltech_init(struct psmouse *psmouse)
 380 402 {
 381 403 struct focaltech_data *priv;
··· 427 415 psmouse->cleanup = focaltech_reset;
 428 416 /* resync is not supported yet */
 429 417 psmouse->resync_time = 0;
 418 + /*
 419 + * rate/resolution/scale changes are not supported yet, and
 420 + * the generic implementations of these functions seem to
 421 + * confuse some touchpads
 422 + */
 423 + psmouse->set_resolution = focaltech_set_resolution;
 424 + psmouse->set_rate = focaltech_set_rate;
 425 + psmouse->set_scale = focaltech_set_scale;
 430 426
 431 427 return 0;
 432 428
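The clamp-and-invert reporting added to the focaltech driver above can be sketched in isolation (plain C, names illustrative): raw coordinates are bounded to `[0, max]` before being reported, and Y is mirrored against the device maximum:

```c
/* Sketch only: bound a raw value to [lo, hi], like the kernel's clamp(). */
static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
    if (v < lo)
        return lo;
    if (v > hi)
        return hi;
    return v;
}

/* Report-side Y handling as in the patch: clamp first, then mirror
 * against y_max so out-of-range hardware data cannot confuse userspace. */
static unsigned int reported_y(unsigned int raw_y, unsigned int y_max)
{
    return y_max - clamp_uint(raw_y, 0, y_max);
}
```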
+13 -1
drivers/input/mouse/psmouse-base.c
··· 454 454 } 455 455 456 456 /* 457 + * Here we set the mouse scaling. 458 + */ 459 + 460 + static void psmouse_set_scale(struct psmouse *psmouse, enum psmouse_scale scale) 461 + { 462 + ps2_command(&psmouse->ps2dev, NULL, 463 + scale == PSMOUSE_SCALE21 ? PSMOUSE_CMD_SETSCALE21 : 464 + PSMOUSE_CMD_SETSCALE11); 465 + } 466 + 467 + /* 457 468 * psmouse_poll() - default poll handler. Everyone except for ALPS uses it. 458 469 */ 459 470 ··· 700 689 701 690 psmouse->set_rate = psmouse_set_rate; 702 691 psmouse->set_resolution = psmouse_set_resolution; 692 + psmouse->set_scale = psmouse_set_scale; 703 693 psmouse->poll = psmouse_poll; 704 694 psmouse->protocol_handler = psmouse_process_byte; 705 695 psmouse->pktsize = 3; ··· 1172 1160 if (psmouse_max_proto != PSMOUSE_PS2) { 1173 1161 psmouse->set_rate(psmouse, psmouse->rate); 1174 1162 psmouse->set_resolution(psmouse, psmouse->resolution); 1175 - ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11); 1163 + psmouse->set_scale(psmouse, PSMOUSE_SCALE11); 1176 1164 } 1177 1165 } 1178 1166
+6
drivers/input/mouse/psmouse.h
··· 36 36 PSMOUSE_FULL_PACKET 37 37 } psmouse_ret_t; 38 38 39 + enum psmouse_scale { 40 + PSMOUSE_SCALE11, 41 + PSMOUSE_SCALE21 42 + }; 43 + 39 44 struct psmouse { 40 45 void *private; 41 46 struct input_dev *dev; ··· 72 67 psmouse_ret_t (*protocol_handler)(struct psmouse *psmouse); 73 68 void (*set_rate)(struct psmouse *psmouse, unsigned int rate); 74 69 void (*set_resolution)(struct psmouse *psmouse, unsigned int resolution); 70 + void (*set_scale)(struct psmouse *psmouse, enum psmouse_scale scale); 75 71 76 72 int (*reconnect)(struct psmouse *psmouse); 77 73 void (*disconnect)(struct psmouse *psmouse);
+1
drivers/input/touchscreen/Kconfig
··· 943 943 tristate "Allwinner sun4i resistive touchscreen controller support" 944 944 depends on ARCH_SUNXI || COMPILE_TEST 945 945 depends on HWMON 946 + depends on THERMAL || !THERMAL_OF 946 947 help 947 948 This selects support for the resistive touchscreen controller 948 949 found on Allwinner sunxi SoCs.
+2
drivers/misc/mei/init.c
··· 341 341 342 342 dev->dev_state = MEI_DEV_POWER_DOWN; 343 343 mei_reset(dev); 344 + /* move device to disabled state unconditionally */ 345 + dev->dev_state = MEI_DEV_DISABLED; 344 346 345 347 mutex_unlock(&dev->device_lock); 346 348
+8
drivers/net/can/dev.c
··· 579 579 skb->pkt_type = PACKET_BROADCAST; 580 580 skb->ip_summed = CHECKSUM_UNNECESSARY; 581 581 582 + skb_reset_mac_header(skb); 583 + skb_reset_network_header(skb); 584 + skb_reset_transport_header(skb); 585 + 582 586 can_skb_reserve(skb); 583 587 can_skb_prv(skb)->ifindex = dev->ifindex; 584 588 ··· 606 602 skb->protocol = htons(ETH_P_CANFD); 607 603 skb->pkt_type = PACKET_BROADCAST; 608 604 skb->ip_summed = CHECKSUM_UNNECESSARY; 605 + 606 + skb_reset_mac_header(skb); 607 + skb_reset_network_header(skb); 608 + skb_reset_transport_header(skb); 609 609 610 610 can_skb_reserve(skb); 611 611 can_skb_prv(skb)->ifindex = dev->ifindex;
+31 -17
drivers/net/can/usb/kvaser_usb.c
··· 14 14 * Copyright (C) 2015 Valeo S.A.
 15 15 */
 16 16
 17 + #include <linux/kernel.h>
 17 18 #include <linux/completion.h>
 18 19 #include <linux/module.h>
 19 20 #include <linux/netdevice.h>
··· 585 584 while (pos <= actual_len - MSG_HEADER_LEN) {
 586 585 tmp = buf + pos;
 587 586
 588 - if (!tmp->len)
 589 - break;
 587 + /* Handle messages crossing the USB endpoint max packet
 588 + * size boundary. Check kvaser_usb_read_bulk_callback()
 589 + * for further details.
 590 + */
 591 + if (tmp->len == 0) {
 592 + pos = round_up(pos,
 593 + dev->bulk_in->wMaxPacketSize);
 594 + continue;
 595 + }
 590 596
 591 597 if (pos + tmp->len > actual_len) {
 592 598 dev_err(dev->udev->dev.parent,
··· 795 787 netdev_err(netdev, "Error transmitting URB\n");
 796 788 usb_unanchor_urb(urb);
 797 789 usb_free_urb(urb);
 798 - kfree(buf);
 799 790 return err;
 800 791 }
 801 792
··· 1324 1317 while (pos <= urb->actual_length - MSG_HEADER_LEN) {
 1325 1318 msg = urb->transfer_buffer + pos;
 1326 1319
 1327 - if (!msg->len)
 1328 - break;
 1320 + /* The Kvaser firmware can only read and write messages that
 1321 + * do not cross the USB's endpoint wMaxPacketSize boundary.
 1322 + * If a follow-up command crosses such a boundary, firmware puts
 1323 + * a placeholder zero-length command in its place then aligns
 1324 + * the real command to the next max packet size.
 1325 + *
 1326 + * Handle such cases or we're going to miss a significant
 1327 + * number of events in case of a heavy rx load on the bus. 
1328 + */ 1329 + if (msg->len == 0) { 1330 + pos = round_up(pos, dev->bulk_in->wMaxPacketSize); 1331 + continue; 1332 + } 1329 1333 1330 1334 if (pos + msg->len > urb->actual_length) { 1331 1335 dev_err(dev->udev->dev.parent, "Format error\n"); ··· 1344 1326 } 1345 1327 1346 1328 kvaser_usb_handle_message(dev, msg); 1347 - 1348 1329 pos += msg->len; 1349 1330 } 1350 1331 ··· 1632 1615 struct urb *urb; 1633 1616 void *buf; 1634 1617 struct kvaser_msg *msg; 1635 - int i, err; 1636 - int ret = NETDEV_TX_OK; 1618 + int i, err, ret = NETDEV_TX_OK; 1637 1619 u8 *msg_tx_can_flags = NULL; /* GCC */ 1638 1620 1639 1621 if (can_dropped_invalid_skb(netdev, skb)) ··· 1650 1634 if (!buf) { 1651 1635 stats->tx_dropped++; 1652 1636 dev_kfree_skb(skb); 1653 - goto nobufmem; 1637 + goto freeurb; 1654 1638 } 1655 1639 1656 1640 msg = buf; ··· 1697 1681 /* This should never happen; it implies a flow control bug */ 1698 1682 if (!context) { 1699 1683 netdev_warn(netdev, "cannot find free context\n"); 1684 + 1685 + kfree(buf); 1700 1686 ret = NETDEV_TX_BUSY; 1701 - goto releasebuf; 1687 + goto freeurb; 1702 1688 } 1703 1689 1704 1690 context->priv = priv; ··· 1737 1719 else 1738 1720 netdev_warn(netdev, "Failed tx_urb %d\n", err); 1739 1721 1740 - goto releasebuf; 1722 + goto freeurb; 1741 1723 } 1742 1724 1743 - usb_free_urb(urb); 1725 + ret = NETDEV_TX_OK; 1744 1726 1745 - return NETDEV_TX_OK; 1746 - 1747 - releasebuf: 1748 - kfree(buf); 1749 - nobufmem: 1727 + freeurb: 1750 1728 usb_free_urb(urb); 1751 1729 return ret; 1752 1730 }
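The zero-length-placeholder handling in the kvaser_usb patch above relies on the kernel's `round_up()`, which for a power-of-two alignment (USB bulk `wMaxPacketSize` is a power of two, typically 64 or 512) reduces to a mask trick. A standalone sketch:

```c
/* Sketch of round_up(x, align) for power-of-two align, as used above to
 * skip to the next wMaxPacketSize boundary after a zero-length placeholder
 * command. Valid only when align is a power of two. */
static unsigned int round_up_pow2(unsigned int x, unsigned int align)
{
    return (x + align - 1) & ~(align - 1);
}
```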
+4
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 879 879 880 880 pdev->usb_if = ppdev->usb_if; 881 881 pdev->cmd_buffer_addr = ppdev->cmd_buffer_addr; 882 + 883 + /* do a copy of the ctrlmode[_supported] too */ 884 + dev->can.ctrlmode = ppdev->dev.can.ctrlmode; 885 + dev->can.ctrlmode_supported = ppdev->dev.can.ctrlmode_supported; 882 886 } 883 887 884 888 pdev->usb_if->dev[dev->ctrl_idx] = dev;
+1 -1
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
··· 593 593 if (!xgene_ring_mgr_init(pdata)) 594 594 return -ENODEV; 595 595 596 - if (!efi_enabled(EFI_BOOT)) { 596 + if (pdata->clk) { 597 597 clk_prepare_enable(pdata->clk); 598 598 clk_disable_unprepare(pdata->clk); 599 599 clk_prepare_enable(pdata->clk);
+4
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
··· 1025 1025 #ifdef CONFIG_ACPI 1026 1026 static const struct acpi_device_id xgene_enet_acpi_match[] = { 1027 1027 { "APMC0D05", }, 1028 + { "APMC0D30", }, 1029 + { "APMC0D31", }, 1028 1030 { } 1029 1031 }; 1030 1032 MODULE_DEVICE_TABLE(acpi, xgene_enet_acpi_match); ··· 1035 1033 #ifdef CONFIG_OF 1036 1034 static struct of_device_id xgene_enet_of_match[] = { 1037 1035 {.compatible = "apm,xgene-enet",}, 1036 + {.compatible = "apm,xgene1-sgenet",}, 1037 + {.compatible = "apm,xgene1-xgenet",}, 1038 1038 {}, 1039 1039 }; 1040 1040
+4 -4
drivers/net/ethernet/broadcom/bcm63xx_enet.c
··· 486 486 { 487 487 struct bcm_enet_priv *priv; 488 488 struct net_device *dev; 489 - int tx_work_done, rx_work_done; 489 + int rx_work_done; 490 490 491 491 priv = container_of(napi, struct bcm_enet_priv, napi); 492 492 dev = priv->net_dev; ··· 498 498 ENETDMAC_IR, priv->tx_chan); 499 499 500 500 /* reclaim sent skb */ 501 - tx_work_done = bcm_enet_tx_reclaim(dev, 0); 501 + bcm_enet_tx_reclaim(dev, 0); 502 502 503 503 spin_lock(&priv->rx_lock); 504 504 rx_work_done = bcm_enet_receive_queue(dev, budget); 505 505 spin_unlock(&priv->rx_lock); 506 506 507 - if (rx_work_done >= budget || tx_work_done > 0) { 508 - /* rx/tx queue is not yet empty/clean */ 507 + if (rx_work_done >= budget) { 508 + /* rx queue is not yet empty/clean */ 509 509 return rx_work_done; 510 510 } 511 511
-7
drivers/net/ethernet/broadcom/bgmac.c
··· 302 302 slot->skb = skb; 303 303 slot->dma_addr = dma_addr; 304 304 305 - if (slot->dma_addr & 0xC0000000) 306 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 307 - 308 305 return 0; 309 306 } 310 307 ··· 502 505 ring->mmio_base); 503 506 goto err_dma_free; 504 507 } 505 - if (ring->dma_base & 0xC0000000) 506 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 507 508 508 509 ring->unaligned = bgmac_dma_unaligned(bgmac, ring, 509 510 BGMAC_DMA_RING_TX); ··· 531 536 err = -ENOMEM; 532 537 goto err_dma_free; 533 538 } 534 - if (ring->dma_base & 0xC0000000) 535 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 536 539 537 540 ring->unaligned = bgmac_dma_unaligned(bgmac, ring, 538 541 BGMAC_DMA_RING_RX);
+3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12722 12722 pci_write_config_dword(bp->pdev, PCICFG_GRC_ADDRESS, 12723 12723 PCICFG_VENDOR_ID_OFFSET); 12724 12724 12725 + /* Set PCIe reset type to fundamental for EEH recovery */ 12726 + pdev->needs_freset = 1; 12727 + 12725 12728 /* AER (Advanced Error reporting) configuration */ 12726 12729 rc = pci_enable_pcie_error_reporting(pdev); 12727 12730 if (!rc)
+4 -2
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 73 73 if (wol->wolopts & ~(WAKE_MAGIC | WAKE_MAGICSECURE)) 74 74 return -EINVAL; 75 75 76 + reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 76 77 if (wol->wolopts & WAKE_MAGICSECURE) { 77 78 bcmgenet_umac_writel(priv, get_unaligned_be16(&wol->sopass[0]), 78 79 UMAC_MPD_PW_MS); 79 80 bcmgenet_umac_writel(priv, get_unaligned_be32(&wol->sopass[2]), 80 81 UMAC_MPD_PW_LS); 81 - reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 82 82 reg |= MPD_PW_EN; 83 - bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 83 + } else { 84 + reg &= ~MPD_PW_EN; 84 85 } 86 + bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 85 87 86 88 /* Flag the device and relevant IRQ as wakeup capable */ 87 89 if (wol->wolopts) {
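The bcmgenet change turns a set-only write into a full read-modify-write, so the password-enable bit is also cleared when WAKE_MAGICSECURE is dropped. The pattern in isolation (bit position illustrative, not the real bcmgenet register layout):

```c
#include <stdint.h>

#define PW_EN_BIT (1u << 27) /* illustrative position, not bcmgenet's */

/* Read-modify-write sketch: set or clear one enable bit in a cached
 * register value; the caller then writes the result back exactly once,
 * which is what the patch above restructures the code to do. */
static uint32_t update_mpd_ctrl(uint32_t reg, int magicsecure)
{
    if (magicsecure)
        reg |= PW_EN_BIT;
    else
        reg &= ~PW_EN_BIT;
    return reg;
}
```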
+21 -5
drivers/net/ethernet/cadence/macb.c
··· 2131 2131 */ 2132 2132 static void macb_configure_caps(struct macb *bp) 2133 2133 { 2134 + const struct of_device_id *match; 2135 + const struct macb_config *config; 2134 2136 u32 dcfg; 2137 + 2138 + if (bp->pdev->dev.of_node) { 2139 + match = of_match_node(macb_dt_ids, bp->pdev->dev.of_node); 2140 + if (match && match->data) { 2141 + config = match->data; 2142 + 2143 + bp->caps = config->caps; 2144 + /* 2145 + * As we have access to the matching node, configure 2146 + * DMA burst length as well 2147 + */ 2148 + bp->dma_burst_length = config->dma_burst_length; 2149 + } 2150 + } 2135 2151 2136 2152 if (MACB_BFEXT(IDNUM, macb_readl(bp, MID)) == 0x2) 2137 2153 bp->caps |= MACB_CAPS_MACB_IS_GEM; ··· 2652 2636 return err; 2653 2637 } 2654 2638 2655 - static struct macb_config at91sam9260_config = { 2639 + static const struct macb_config at91sam9260_config = { 2656 2640 .caps = MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII, 2657 2641 .init = macb_init, 2658 2642 }; 2659 2643 2660 - static struct macb_config pc302gem_config = { 2644 + static const struct macb_config pc302gem_config = { 2661 2645 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 2662 2646 .dma_burst_length = 16, 2663 2647 .init = macb_init, 2664 2648 }; 2665 2649 2666 - static struct macb_config sama5d3_config = { 2650 + static const struct macb_config sama5d3_config = { 2667 2651 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 2668 2652 .dma_burst_length = 16, 2669 2653 .init = macb_init, 2670 2654 }; 2671 2655 2672 - static struct macb_config sama5d4_config = { 2656 + static const struct macb_config sama5d4_config = { 2673 2657 .caps = 0, 2674 2658 .dma_burst_length = 4, 2675 2659 .init = macb_init, 2676 2660 }; 2677 2661 2678 - static struct macb_config emac_config = { 2662 + static const struct macb_config emac_config = { 2679 2663 .init = at91ether_init, 2680 2664 }; 2681 2665
+1 -1
drivers/net/ethernet/cadence/macb.h
··· 353 353 354 354 /* Bitfields in MID */ 355 355 #define MACB_IDNUM_OFFSET 16 356 - #define MACB_IDNUM_SIZE 16 356 + #define MACB_IDNUM_SIZE 12 357 357 #define MACB_REV_OFFSET 0 358 358 #define MACB_REV_SIZE 16 359 359
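The macb.h fix narrows MACB_IDNUM_SIZE from 16 to 12 bits, so a MACB_BFEXT()-style extraction no longer folds the adjacent revision bits into the ID. The extraction pattern, sketched standalone (the MID value below is hypothetical):

```c
#include <stdint.h>

/* Sketch of MACB_BFEXT(): pull `size` bits starting at `offset` out of a
 * register value. With offset 16 and the corrected size 12, bits 28-31
 * stay out of the extracted ID number. */
static uint32_t bf_ext(uint32_t reg, unsigned int offset, unsigned int size)
{
    return (reg >> offset) & ((1u << size) - 1u);
}
```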
+1 -2
drivers/net/ethernet/freescale/fec_main.c
··· 1597 1597 writel(int_events, fep->hwp + FEC_IEVENT); 1598 1598 fec_enet_collect_events(fep, int_events); 1599 1599 1600 - if (fep->work_tx || fep->work_rx) { 1600 + if ((fep->work_tx || fep->work_rx) && fep->link) { 1601 1601 ret = IRQ_HANDLED; 1602 1602 1603 1603 if (napi_schedule_prep(&fep->napi)) { ··· 3383 3383 regulator_disable(fep->reg_phy); 3384 3384 if (fep->ptp_clock) 3385 3385 ptp_clock_unregister(fep->ptp_clock); 3386 - fec_enet_clk_enable(ndev, false); 3387 3386 of_node_put(fep->phy_node); 3388 3387 free_netdev(ndev); 3389 3388
+17 -2
drivers/net/ethernet/freescale/gianfar.c
··· 747 747 return 0; 748 748 } 749 749 750 + static int gfar_of_group_count(struct device_node *np) 751 + { 752 + struct device_node *child; 753 + int num = 0; 754 + 755 + for_each_available_child_of_node(np, child) 756 + if (!of_node_cmp(child->name, "queue-group")) 757 + num++; 758 + 759 + return num; 760 + } 761 + 750 762 static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) 751 763 { 752 764 const char *model; ··· 796 784 num_rx_qs = 1; 797 785 } else { /* MQ_MG_MODE */ 798 786 /* get the actual number of supported groups */ 799 - unsigned int num_grps = of_get_available_child_count(np); 787 + unsigned int num_grps = gfar_of_group_count(np); 800 788 801 789 if (num_grps == 0 || num_grps > MAXGROUPS) { 802 790 dev_err(&ofdev->dev, "Invalid # of int groups(%d)\n", ··· 863 851 864 852 /* Parse and initialize group specific information */ 865 853 if (priv->mode == MQ_MG_MODE) { 866 - for_each_child_of_node(np, child) { 854 + for_each_available_child_of_node(np, child) { 855 + if (of_node_cmp(child->name, "queue-group")) 856 + continue; 857 + 867 858 err = gfar_parse_group(child, priv, model); 868 859 if (err) 869 860 goto err_grp_init;
+1
drivers/net/ethernet/smsc/smc91x.c
··· 92 92 #include "smc91x.h" 93 93 94 94 #if defined(CONFIG_ASSABET_NEPONSET) 95 + #include <mach/assabet.h> 95 96 #include <mach/neponset.h> 96 97 #endif 97 98
+36 -29
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 272 272 struct stmmac_priv *priv = NULL;
 273 273 struct plat_stmmacenet_data *plat_dat = NULL;
 274 274 const char *mac = NULL;
 275 + int irq, wol_irq, lpi_irq;
 276 +
 277 + /* Get IRQ information early to have the ability to ask for deferred
 278 + * probe if needed before we go too far with resource allocation.
 279 + */
 280 + irq = platform_get_irq_byname(pdev, "macirq");
 281 + if (irq < 0) {
 282 + if (irq != -EPROBE_DEFER) {
 283 + dev_err(dev,
 284 + "MAC IRQ configuration information not found\n");
 285 + }
 286 + return irq;
 287 + }
 288 +
 289 + /* On some platforms e.g. SPEAr the wake up irq differs from the mac irq
 290 + * The external wake up irq can be passed through the platform code
 291 + * named as "eth_wake_irq"
 292 + *
 293 + * In case the wake up interrupt is not passed from the platform
 294 + * so the driver will continue to use the mac irq (ndev->irq)
 295 + */
 296 + wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq");
 297 + if (wol_irq < 0) {
 298 + if (wol_irq == -EPROBE_DEFER)
 299 + return -EPROBE_DEFER;
 300 + wol_irq = irq;
 301 + }
 302 +
 303 + lpi_irq = platform_get_irq_byname(pdev, "eth_lpi");
 304 + if (lpi_irq == -EPROBE_DEFER)
 305 + return -EPROBE_DEFER;
 275 306
 276 307 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 277 308 addr = devm_ioremap_resource(dev, res);
··· 354 323 return PTR_ERR(priv);
 355 324 }
 356 325
 326 + /* Copy IRQ values to priv structure which is now available */
 327 + priv->dev->irq = irq;
 328 + priv->wol_irq = wol_irq;
 329 + priv->lpi_irq = lpi_irq;
 330 +
 357 331 /* Get MAC address if available (DT) */
 358 332 if (mac)
 359 333 memcpy(priv->dev->dev_addr, mac, ETH_ALEN);
 360 -
 361 - /* Get the MAC information */
 362 - priv->dev->irq = platform_get_irq_byname(pdev, "macirq");
 363 - if (priv->dev->irq < 0) {
 364 - if (priv->dev->irq != -EPROBE_DEFER) {
 365 - netdev_err(priv->dev,
 366 - "MAC IRQ configuration information not found\n");
 367 - }
 368 - return priv->dev->irq;
 369 - }
 370 -
 371 - /*
 372 - * On some 
platforms e.g. SPEAr the wake up irq differs from the mac irq 373 - * The external wake up irq can be passed through the platform code 374 - * named as "eth_wake_irq" 375 - * 376 - * In case the wake up interrupt is not passed from the platform 377 - * so the driver will continue to use the mac irq (ndev->irq) 378 - */ 379 - priv->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); 380 - if (priv->wol_irq < 0) { 381 - if (priv->wol_irq == -EPROBE_DEFER) 382 - return -EPROBE_DEFER; 383 - priv->wol_irq = priv->dev->irq; 384 - } 385 - 386 - priv->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); 387 - if (priv->lpi_irq == -EPROBE_DEFER) 388 - return -EPROBE_DEFER; 389 334 390 335 platform_set_drvdata(pdev, priv->dev); 391 336
+3 -3
drivers/net/team/team.c
··· 1730 1730 if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data)) 1731 1731 return -EADDRNOTAVAIL; 1732 1732 memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); 1733 - rcu_read_lock(); 1734 - list_for_each_entry_rcu(port, &team->port_list, list) 1733 + mutex_lock(&team->lock); 1734 + list_for_each_entry(port, &team->port_list, list) 1735 1735 if (team->ops.port_change_dev_addr) 1736 1736 team->ops.port_change_dev_addr(team, port); 1737 - rcu_read_unlock(); 1737 + mutex_unlock(&team->lock); 1738 1738 return 0; 1739 1739 } 1740 1740
+1 -2
drivers/net/xen-netback/interface.c
··· 340 340 unsigned int num_queues = vif->num_queues; 341 341 int i; 342 342 unsigned int queue_index; 343 - struct xenvif_stats *vif_stats; 344 343 345 344 for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) { 346 345 unsigned long accum = 0; 347 346 for (queue_index = 0; queue_index < num_queues; ++queue_index) { 348 - vif_stats = &vif->queues[queue_index].stats; 347 + void *vif_stats = &vif->queues[queue_index].stats; 349 348 accum += *(unsigned long *)(vif_stats + xenvif_stats[i].offset); 350 349 } 351 350 data[i] = accum;
+12 -10
drivers/net/xen-netback/netback.c
··· 1349 1349 { 1350 1350 unsigned int offset = skb_headlen(skb); 1351 1351 skb_frag_t frags[MAX_SKB_FRAGS]; 1352 - int i; 1352 + int i, f; 1353 1353 struct ubuf_info *uarg; 1354 1354 struct sk_buff *nskb = skb_shinfo(skb)->frag_list; 1355 1355 ··· 1389 1389 frags[i].page_offset = 0; 1390 1390 skb_frag_size_set(&frags[i], len); 1391 1391 } 1392 - /* swap out with old one */ 1393 - memcpy(skb_shinfo(skb)->frags, 1394 - frags, 1395 - i * sizeof(skb_frag_t)); 1396 - skb_shinfo(skb)->nr_frags = i; 1397 - skb->truesize += i * PAGE_SIZE; 1398 1392 1399 - /* remove traces of mapped pages and frag_list */ 1393 + /* Copied all the bits from the frag list -- free it. */ 1400 1394 skb_frag_list_init(skb); 1395 + xenvif_skb_zerocopy_prepare(queue, nskb); 1396 + kfree_skb(nskb); 1397 + 1398 + /* Release all the original (foreign) frags. */ 1399 + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) 1400 + skb_frag_unref(skb, f); 1401 1401 uarg = skb_shinfo(skb)->destructor_arg; 1402 1402 /* increase inflight counter to offset decrement in callback */ 1403 1403 atomic_inc(&queue->inflight_packets); 1404 1404 uarg->callback(uarg, true); 1405 1405 skb_shinfo(skb)->destructor_arg = NULL; 1406 1406 1407 - xenvif_skb_zerocopy_prepare(queue, nskb); 1408 - kfree_skb(nskb); 1407 + /* Fill the skb with the new (local) frags. */ 1408 + memcpy(skb_shinfo(skb)->frags, frags, i * sizeof(skb_frag_t)); 1409 + skb_shinfo(skb)->nr_frags = i; 1410 + skb->truesize += i * PAGE_SIZE; 1409 1411 1410 1412 return 0; 1411 1413 }
+1 -1
drivers/pci/host/pci-versatile.c
··· 80 80 if (err) 81 81 return err; 82 82 83 - resource_list_for_each_entry(win, res, list) { 83 + resource_list_for_each_entry(win, res) { 84 84 struct resource *parent, *res = win->res; 85 85 86 86 switch (resource_type(res)) {
-7
drivers/regulator/core.c
··· 3444 3444 if (attr == &dev_attr_requested_microamps.attr) 3445 3445 return rdev->desc->type == REGULATOR_CURRENT ? mode : 0; 3446 3446 3447 - /* all the other attributes exist to support constraints; 3448 - * don't show them if there are no constraints, or if the 3449 - * relevant supporting methods are missing. 3450 - */ 3451 - if (!rdev->constraints) 3452 - return 0; 3453 - 3454 3447 /* constraints need specific supporting methods */ 3455 3448 if (attr == &dev_attr_min_microvolts.attr || 3456 3449 attr == &dev_attr_max_microvolts.attr)
+9
drivers/regulator/da9210-regulator.c
··· 152 152 config.regmap = chip->regmap; 153 153 config.of_node = dev->of_node; 154 154 155 + /* Mask all interrupt sources to deassert interrupt line */ 156 + error = regmap_write(chip->regmap, DA9210_REG_MASK_A, ~0); 157 + if (!error) 158 + error = regmap_write(chip->regmap, DA9210_REG_MASK_B, ~0); 159 + if (error) { 160 + dev_err(&i2c->dev, "Failed to write to mask reg: %d\n", error); 161 + return error; 162 + } 163 + 155 164 rdev = devm_regulator_register(&i2c->dev, &da9210_reg, &config); 156 165 if (IS_ERR(rdev)) { 157 166 dev_err(&i2c->dev, "Failed to register DA9210 regulator\n");
+8
drivers/regulator/rk808-regulator.c
··· 235 235 .vsel_mask = RK808_LDO_VSEL_MASK, 236 236 .enable_reg = RK808_LDO_EN_REG, 237 237 .enable_mask = BIT(0), 238 + .enable_time = 400, 238 239 .owner = THIS_MODULE, 239 240 }, { 240 241 .name = "LDO_REG2", ··· 250 249 .vsel_mask = RK808_LDO_VSEL_MASK, 251 250 .enable_reg = RK808_LDO_EN_REG, 252 251 .enable_mask = BIT(1), 252 + .enable_time = 400, 253 253 .owner = THIS_MODULE, 254 254 }, { 255 255 .name = "LDO_REG3", ··· 265 263 .vsel_mask = RK808_BUCK4_VSEL_MASK, 266 264 .enable_reg = RK808_LDO_EN_REG, 267 265 .enable_mask = BIT(2), 266 + .enable_time = 400, 268 267 .owner = THIS_MODULE, 269 268 }, { 270 269 .name = "LDO_REG4", ··· 280 277 .vsel_mask = RK808_LDO_VSEL_MASK, 281 278 .enable_reg = RK808_LDO_EN_REG, 282 279 .enable_mask = BIT(3), 280 + .enable_time = 400, 283 281 .owner = THIS_MODULE, 284 282 }, { 285 283 .name = "LDO_REG5", ··· 295 291 .vsel_mask = RK808_LDO_VSEL_MASK, 296 292 .enable_reg = RK808_LDO_EN_REG, 297 293 .enable_mask = BIT(4), 294 + .enable_time = 400, 298 295 .owner = THIS_MODULE, 299 296 }, { 300 297 .name = "LDO_REG6", ··· 310 305 .vsel_mask = RK808_LDO_VSEL_MASK, 311 306 .enable_reg = RK808_LDO_EN_REG, 312 307 .enable_mask = BIT(5), 308 + .enable_time = 400, 313 309 .owner = THIS_MODULE, 314 310 }, { 315 311 .name = "LDO_REG7", ··· 325 319 .vsel_mask = RK808_LDO_VSEL_MASK, 326 320 .enable_reg = RK808_LDO_EN_REG, 327 321 .enable_mask = BIT(6), 322 + .enable_time = 400, 328 323 .owner = THIS_MODULE, 329 324 }, { 330 325 .name = "LDO_REG8", ··· 340 333 .vsel_mask = RK808_LDO_VSEL_MASK, 341 334 .enable_reg = RK808_LDO_EN_REG, 342 335 .enable_mask = BIT(7), 336 + .enable_time = 400, 343 337 .owner = THIS_MODULE, 344 338 }, { 345 339 .name = "SWITCH_REG1",
+48 -14
drivers/rtc/rtc-at91rm9200.c
··· 31 31 #include <linux/io.h>
 32 32 #include <linux/of.h>
 33 33 #include <linux/of_device.h>
 34 + #include <linux/suspend.h>
 34 35 #include <linux/uaccess.h>
 35 36
 36 37 #include "rtc-at91rm9200.h"
··· 55 54 static int irq;
 56 55 static DEFINE_SPINLOCK(at91_rtc_lock);
 57 56 static u32 at91_rtc_shadow_imr;
 57 + static bool suspended;
 58 + static DEFINE_SPINLOCK(suspended_lock);
 59 + static unsigned long cached_events;
 60 + static u32 at91_rtc_imr;
 58 61
 59 62 static void at91_rtc_write_ier(u32 mask)
 60 63 {
··· 295 290 struct rtc_device *rtc = platform_get_drvdata(pdev);
 296 291 unsigned int rtsr;
 297 292 unsigned long events = 0;
 293 + int ret = IRQ_NONE;
 298 294
 295 + spin_lock(&suspended_lock);
 299 296 rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr();
 300 297 if (rtsr) { /* this interrupt is shared! Is it ours? */
 301 298 if (rtsr & AT91_RTC_ALARM)
··· 311 304
 312 305 at91_rtc_write(AT91_RTC_SCCR, rtsr); /* clear status reg */
 313 306
 314 - rtc_update_irq(rtc, 1, events);
 307 + if (!suspended) {
 308 + rtc_update_irq(rtc, 1, events);
 315 309
 316 - dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", __func__,
 317 - events >> 8, events & 0x000000FF);
 310 + dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n",
 311 + __func__, events >> 8, events & 0x000000FF);
 312 + } else {
 313 + cached_events |= events;
 314 + at91_rtc_write_idr(at91_rtc_imr);
 315 + pm_system_wakeup();
 316 + }
 318 317
 319 - return IRQ_HANDLED;
 318 + ret = IRQ_HANDLED;
 320 319 }
 321 - return IRQ_NONE; /* not handled */
 320 + spin_unlock(&suspended_lock);
 321 +
 322 + return ret;
 322 323 }
 323 324
 324 325 static const struct at91_rtc_config at91rm9200_config = {
··· 416 401 AT91_RTC_CALEV);
 417 402
 418 403 ret = devm_request_irq(&pdev->dev, irq, at91_rtc_interrupt,
 419 - IRQF_SHARED,
 420 - "at91_rtc", pdev);
 404 + IRQF_SHARED | IRQF_COND_SUSPEND,
 405 + "at91_rtc", pdev);
 421 406 if (ret) {
 422 407 dev_err(&pdev->dev, "IRQ %d already in use.\n", irq);
 423 408 return ret;
··· 469 454
 470 455 
/* AT91RM9200 RTC Power management control */ 471 456 472 - static u32 at91_rtc_imr; 473 - 474 457 static int at91_rtc_suspend(struct device *dev) 475 458 { 476 459 /* this IRQ is shared with DBGU and other hardware which isn't ··· 477 464 at91_rtc_imr = at91_rtc_read_imr() 478 465 & (AT91_RTC_ALARM|AT91_RTC_SECEV); 479 466 if (at91_rtc_imr) { 480 - if (device_may_wakeup(dev)) 467 + if (device_may_wakeup(dev)) { 468 + unsigned long flags; 469 + 481 470 enable_irq_wake(irq); 482 - else 471 + 472 + spin_lock_irqsave(&suspended_lock, flags); 473 + suspended = true; 474 + spin_unlock_irqrestore(&suspended_lock, flags); 475 + } else { 483 476 at91_rtc_write_idr(at91_rtc_imr); 477 + } 484 478 } 485 479 return 0; 486 480 } 487 481 488 482 static int at91_rtc_resume(struct device *dev) 489 483 { 484 + struct rtc_device *rtc = dev_get_drvdata(dev); 485 + 490 486 if (at91_rtc_imr) { 491 - if (device_may_wakeup(dev)) 487 + if (device_may_wakeup(dev)) { 488 + unsigned long flags; 489 + 490 + spin_lock_irqsave(&suspended_lock, flags); 491 + 492 + if (cached_events) { 493 + rtc_update_irq(rtc, 1, cached_events); 494 + cached_events = 0; 495 + } 496 + 497 + suspended = false; 498 + spin_unlock_irqrestore(&suspended_lock, flags); 499 + 492 500 disable_irq_wake(irq); 493 - else 494 - at91_rtc_write_ier(at91_rtc_imr); 501 + } 502 + at91_rtc_write_ier(at91_rtc_imr); 495 503 } 496 504 return 0; 497 505 }
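The rtc-at91rm9200 hunks above implement a common wakeup-capable IRQ pattern: while the system is suspended, the handler caches the events, masks the interrupt, and calls pm_system_wakeup(); the resume path replays the cached events through rtc_update_irq(). A minimal userspace sketch of that pattern, with illustrative names (`deliver` stands in for rtc_update_irq(); locking is omitted):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the driver's state; names are illustrative. */
static bool suspended;
static unsigned long cached_events;
static unsigned long delivered_events;

/* What rtc_update_irq() would do: hand events to the RTC core. */
static void deliver(unsigned long events)
{
	delivered_events |= events;
}

/* Interrupt path: deliver immediately when awake, cache when suspended. */
static void handle_events(unsigned long events)
{
	if (!suspended)
		deliver(events);
	else
		cached_events |= events;	/* replayed later, on resume */
}

/* Resume path: replay anything that arrived while suspended. */
static void resume(void)
{
	if (cached_events) {
		deliver(cached_events);
		cached_events = 0;
	}
	suspended = false;
}
```

The point of the split is that rtc_update_irq() must not run from the handler while the system is mid-suspend, so the event is parked and a wakeup is requested instead.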
+63 -14
drivers/rtc/rtc-at91sam9.c
··· 23 23 #include <linux/io.h> 24 24 #include <linux/mfd/syscon.h> 25 25 #include <linux/regmap.h> 26 + #include <linux/suspend.h> 26 27 #include <linux/clk.h> 27 28 28 29 /* ··· 78 77 unsigned int gpbr_offset; 79 78 int irq; 80 79 struct clk *sclk; 80 + bool suspended; 81 + unsigned long events; 82 + spinlock_t lock; 81 83 }; 82 84 83 85 #define rtt_readl(rtc, field) \ ··· 275 271 return 0; 276 272 } 277 273 278 - /* 279 - * IRQ handler for the RTC 280 - */ 281 - static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc) 274 + static irqreturn_t at91_rtc_cache_events(struct sam9_rtc *rtc) 282 275 { 283 - struct sam9_rtc *rtc = _rtc; 284 276 u32 sr, mr; 285 - unsigned long events = 0; 286 277 287 278 /* Shared interrupt may be for another device. Note: reading 288 279 * SR clears it, so we must only read it in this irq handler! ··· 289 290 290 291 /* alarm status */ 291 292 if (sr & AT91_RTT_ALMS) 292 - events |= (RTC_AF | RTC_IRQF); 293 + rtc->events |= (RTC_AF | RTC_IRQF); 293 294 294 295 /* timer update/increment */ 295 296 if (sr & AT91_RTT_RTTINC) 296 - events |= (RTC_UF | RTC_IRQF); 297 - 298 - rtc_update_irq(rtc->rtcdev, 1, events); 299 - 300 - pr_debug("%s: num=%ld, events=0x%02lx\n", __func__, 301 - events >> 8, events & 0x000000FF); 297 + rtc->events |= (RTC_UF | RTC_IRQF); 302 298 303 299 return IRQ_HANDLED; 300 + } 301 + 302 + static void at91_rtc_flush_events(struct sam9_rtc *rtc) 303 + { 304 + if (!rtc->events) 305 + return; 306 + 307 + rtc_update_irq(rtc->rtcdev, 1, rtc->events); 308 + rtc->events = 0; 309 + 310 + pr_debug("%s: num=%ld, events=0x%02lx\n", __func__, 311 + rtc->events >> 8, rtc->events & 0x000000FF); 312 + } 313 + 314 + /* 315 + * IRQ handler for the RTC 316 + */ 317 + static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc) 318 + { 319 + struct sam9_rtc *rtc = _rtc; 320 + int ret; 321 + 322 + spin_lock(&rtc->lock); 323 + 324 + ret = at91_rtc_cache_events(rtc); 325 + 326 + /* We're called in suspended state */ 327 + if 
(rtc->suspended) { 328 + /* Mask irqs coming from this peripheral */ 329 + rtt_writel(rtc, MR, 330 + rtt_readl(rtc, MR) & 331 + ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN)); 332 + /* Trigger a system wakeup */ 333 + pm_system_wakeup(); 334 + } else { 335 + at91_rtc_flush_events(rtc); 336 + } 337 + 338 + spin_unlock(&rtc->lock); 339 + 340 + return ret; 304 341 } 305 342 306 343 static const struct rtc_class_ops at91_rtc_ops = { ··· 456 421 457 422 /* register irq handler after we know what name we'll use */ 458 423 ret = devm_request_irq(&pdev->dev, rtc->irq, at91_rtc_interrupt, 459 - IRQF_SHARED, dev_name(&rtc->rtcdev->dev), rtc); 424 + IRQF_SHARED | IRQF_COND_SUSPEND, 425 + dev_name(&rtc->rtcdev->dev), rtc); 460 426 if (ret) { 461 427 dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq); 462 428 return ret; ··· 518 482 rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN); 519 483 if (rtc->imr) { 520 484 if (device_may_wakeup(dev) && (mr & AT91_RTT_ALMIEN)) { 485 + unsigned long flags; 486 + 521 487 enable_irq_wake(rtc->irq); 488 + spin_lock_irqsave(&rtc->lock, flags); 489 + rtc->suspended = true; 490 + spin_unlock_irqrestore(&rtc->lock, flags); 522 491 /* don't let RTTINC cause wakeups */ 523 492 if (mr & AT91_RTT_RTTINCIEN) 524 493 rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN); ··· 540 499 u32 mr; 541 500 542 501 if (rtc->imr) { 502 + unsigned long flags; 503 + 543 504 if (device_may_wakeup(dev)) 544 505 disable_irq_wake(rtc->irq); 545 506 mr = rtt_readl(rtc, MR); 546 507 rtt_writel(rtc, MR, mr | rtc->imr); 508 + 509 + spin_lock_irqsave(&rtc->lock, flags); 510 + rtc->suspended = false; 511 + at91_rtc_cache_events(rtc); 512 + at91_rtc_flush_events(rtc); 513 + spin_unlock_irqrestore(&rtc->lock, flags); 547 514 } 548 515 549 516 return 0;
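The rtc-at91sam9 version splits the handler into at91_rtc_cache_events(), which translates status bits into accumulated events, and at91_rtc_flush_events(), which reports and clears them. A sketch of that decomposition, using illustrative bit values (the real AT91_RTT_* and RTC_* masks differ):

```c
#include <assert.h>

/* Illustrative bit values; the real AT91_RTT_* and RTC_* masks differ. */
#define SR_ALMS    0x1u		/* alarm status bit */
#define SR_RTTINC  0x2u		/* timer increment bit */
#define EV_ALARM   0x10u
#define EV_UPDATE  0x20u
#define EV_IRQ     0x80u

struct rtc_events {
	unsigned long events;	/* accumulated, not yet reported */
	unsigned long reported;	/* what rtc_update_irq() has been told */
};

/* Translate hardware status bits into RTC-core events and accumulate them
 * (the role of at91_rtc_cache_events() in the patch). */
static void cache_events(struct rtc_events *r, unsigned int sr)
{
	if (sr & SR_ALMS)
		r->events |= EV_ALARM | EV_IRQ;
	if (sr & SR_RTTINC)
		r->events |= EV_UPDATE | EV_IRQ;
}

/* Report accumulated events once, then clear them
 * (the role of at91_rtc_flush_events()). */
static void flush_events(struct rtc_events *r)
{
	if (!r->events)
		return;
	r->reported |= r->events;
	r->events = 0;
}
```

Splitting cache from flush is what lets the resume path reuse both halves: it re-reads the status register into the accumulator, then flushes everything in one call.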
+1 -1
drivers/s390/block/dcssblk.c
··· 547 547 * parse input 548 548 */ 549 549 num_of_segments = 0; 550 - for (i = 0; ((buf[i] != '\0') && (buf[i] != '\n') && i < count); i++) { 550 + for (i = 0; (i < count && (buf[i] != '\0') && (buf[i] != '\n')); i++) { 551 551 for (j = i; (buf[j] != ':') && 552 552 (buf[j] != '\0') && 553 553 (buf[j] != '\n') &&
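The dcssblk fix reorders the loop condition so the bounds test short-circuits before buf[i] is dereferenced; with the old order, a buffer exactly `count` bytes long with no terminator inside it was read one byte past its end. A small demonstration of the corrected ordering:

```c
#include <assert.h>
#include <stddef.h>

/* Count characters before a NUL or newline, never reading past `count`
 * bytes. The index check must come first: && short-circuits, so buf[i]
 * is only touched when i < count holds. */
static size_t scan_len(const char *buf, size_t count)
{
	size_t i;

	for (i = 0; i < count && buf[i] != '\0' && buf[i] != '\n'; i++)
		;
	return i;
}
```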
+1 -1
drivers/s390/block/scm_blk_cluster.c
··· 92 92 add = 0; 93 93 continue; 94 94 } 95 - for (pos = 0; pos <= iter->aob->request.msb_count; pos++) { 95 + for (pos = 0; pos < iter->aob->request.msb_count; pos++) { 96 96 if (clusters_intersect(req, iter->request[pos]) && 97 97 (rq_data_dir(req) == WRITE || 98 98 rq_data_dir(iter->request[pos]) == WRITE)) {
+6 -6
drivers/spi/spi-atmel.c
··· 764 764 (unsigned long long)xfer->rx_dma); 765 765 } 766 766 767 - /* REVISIT: We're waiting for ENDRX before we start the next 767 + /* REVISIT: We're waiting for RXBUFF before we start the next 768 768 * transfer because we need to handle some difficult timing 769 - * issues otherwise. If we wait for ENDTX in one transfer and 770 - * then starts waiting for ENDRX in the next, it's difficult 771 - * to tell the difference between the ENDRX interrupt we're 772 - * actually waiting for and the ENDRX interrupt of the 769 + * issues otherwise. If we wait for TXBUFE in one transfer and 770 + * then starts waiting for RXBUFF in the next, it's difficult 771 + * to tell the difference between the RXBUFF interrupt we're 772 + * actually waiting for and the RXBUFF interrupt of the 773 773 * previous transfer. 774 774 * 775 775 * It should be doable, though. Just not now... 776 776 */ 777 - spi_writel(as, IER, SPI_BIT(ENDRX) | SPI_BIT(OVRES)); 777 + spi_writel(as, IER, SPI_BIT(RXBUFF) | SPI_BIT(OVRES)); 778 778 spi_writel(as, PTCR, SPI_BIT(TXTEN) | SPI_BIT(RXTEN)); 779 779 } 780 780
+6
drivers/spi/spi-dw-mid.c
··· 139 139 1, 140 140 DMA_MEM_TO_DEV, 141 141 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 142 + if (!txdesc) 143 + return NULL; 144 + 142 145 txdesc->callback = dw_spi_dma_tx_done; 143 146 txdesc->callback_param = dws; 144 147 ··· 187 184 1, 188 185 DMA_DEV_TO_MEM, 189 186 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 187 + if (!rxdesc) 188 + return NULL; 189 + 190 190 rxdesc->callback = dw_spi_dma_rx_done; 191 191 rxdesc->callback_param = dws; 192 192
+2 -2
drivers/spi/spi-dw-pci.c
··· 36 36 37 37 static struct spi_pci_desc spi_pci_mid_desc_1 = { 38 38 .setup = dw_spi_mid_init, 39 - .num_cs = 32, 39 + .num_cs = 5, 40 40 .bus_num = 0, 41 41 }; 42 42 43 43 static struct spi_pci_desc spi_pci_mid_desc_2 = { 44 44 .setup = dw_spi_mid_init, 45 - .num_cs = 4, 45 + .num_cs = 2, 46 46 .bus_num = 1, 47 47 }; 48 48
+2 -2
drivers/spi/spi-dw.c
··· 621 621 if (!dws->fifo_len) { 622 622 u32 fifo; 623 623 624 - for (fifo = 2; fifo <= 256; fifo++) { 624 + for (fifo = 1; fifo < 256; fifo++) { 625 625 dw_writew(dws, DW_SPI_TXFLTR, fifo); 626 626 if (fifo != dw_readw(dws, DW_SPI_TXFLTR)) 627 627 break; 628 628 } 629 629 dw_writew(dws, DW_SPI_TXFLTR, 0); 630 630 631 - dws->fifo_len = (fifo == 2) ? 0 : fifo - 1; 631 + dws->fifo_len = (fifo == 1) ? 0 : fifo; 632 632 dev_dbg(dev, "Detected FIFO size: %u bytes\n", dws->fifo_len); 633 633 } 634 634 }
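The spi-dw change adjusts the FIFO-depth probe: thresholds are written starting at 1, and the first value that does not read back marks the size. A userspace simulation of that probe (the register model here, accepting values 0..depth-1 and ignoring the rest, is an assumption about the hardware, not taken from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated TX FIFO threshold register: accepts 0..depth-1, otherwise
 * leaves the previous value in place (assumed hardware behaviour). */
static unsigned int sim_depth;
static uint16_t sim_reg;

static void reg_write(uint16_t v)
{
	if (v < sim_depth)
		sim_reg = v;
}

static uint16_t reg_read(void)
{
	return sim_reg;
}

/* Probe the FIFO depth the way the fixed spi-dw loop does: the first
 * threshold that does not read back reveals the size; probing from 1
 * means a missing FIFO (nothing sticks) yields 0. */
static unsigned int detect_fifo_len(void)
{
	unsigned int fifo;

	for (fifo = 1; fifo < 256; fifo++) {
		reg_write(fifo);
		if (fifo != reg_read())
			break;
	}
	reg_write(0);		/* restore the threshold */
	return (fifo == 1) ? 0 : fifo;
}
```

The old loop started at 2 and reported `fifo - 1`, which both misdetected the depth by one and could not distinguish a 1-entry FIFO from none.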
+7
drivers/spi/spi-img-spfi.c
··· 459 459 unsigned long flags; 460 460 int ret; 461 461 462 + if (xfer->len > SPFI_TRANSACTION_TSIZE_MASK) { 463 + dev_err(spfi->dev, 464 + "Transfer length (%d) is greater than the max supported (%d)", 465 + xfer->len, SPFI_TRANSACTION_TSIZE_MASK); 466 + return -EINVAL; 467 + } 468 + 462 469 /* 463 470 * Stop all DMA and reset the controller if the previous transaction 464 471 * timed-out and never completed it's DMA.
+1 -1
drivers/spi/spi-pl022.c
··· 534 534 pl022->cur_msg = NULL; 535 535 pl022->cur_transfer = NULL; 536 536 pl022->cur_chip = NULL; 537 - spi_finalize_current_message(pl022->master); 538 537 539 538 /* disable the SPI/SSP operation */ 540 539 writew((readw(SSP_CR1(pl022->virtbase)) & 541 540 (~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase)); 542 541 542 + spi_finalize_current_message(pl022->master); 543 543 } 544 544 545 545 /**
+22
drivers/spi/spi-ti-qspi.c
··· 101 101 #define QSPI_FLEN(n) ((n - 1) << 0) 102 102 103 103 /* STATUS REGISTER */ 104 + #define BUSY 0x01 104 105 #define WC 0x02 105 106 106 107 /* INTERRUPT REGISTER */ ··· 200 199 ti_qspi_write(qspi, ctx_reg->clkctrl, QSPI_SPI_CLOCK_CNTRL_REG); 201 200 } 202 201 202 + static inline u32 qspi_is_busy(struct ti_qspi *qspi) 203 + { 204 + u32 stat; 205 + unsigned long timeout = jiffies + QSPI_COMPLETION_TIMEOUT; 206 + 207 + stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG); 208 + while ((stat & BUSY) && time_after(timeout, jiffies)) { 209 + cpu_relax(); 210 + stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG); 211 + } 212 + 213 + WARN(stat & BUSY, "qspi busy\n"); 214 + return stat & BUSY; 215 + } 216 + 203 217 static int qspi_write_msg(struct ti_qspi *qspi, struct spi_transfer *t) 204 218 { 205 219 int wlen, count; ··· 227 211 wlen = t->bits_per_word >> 3; /* in bytes */ 228 212 229 213 while (count) { 214 + if (qspi_is_busy(qspi)) 215 + return -EBUSY; 216 + 230 217 switch (wlen) { 231 218 case 1: 232 219 dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %02x\n", ··· 285 266 286 267 while (count) { 287 268 dev_dbg(qspi->dev, "rx cmd %08x dc %08x\n", cmd, qspi->dc); 269 + if (qspi_is_busy(qspi)) 270 + return -EBUSY; 271 + 288 272 ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 289 273 if (!wait_for_completion_timeout(&qspi->transfer_complete, 290 274 QSPI_COMPLETION_TIMEOUT)) {
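qspi_is_busy() above polls the status register until BUSY clears or a timeout expires, and the read/write paths bail out with -EBUSY rather than pushing a command at a busy controller. A sketch of the same poll-with-budget loop, with a simulated status register and a poll count standing in for the jiffies deadline:

```c
#include <assert.h>

#define BUSY 0x01u

/* Simulated status register: reports BUSY for the first `busy_reads`
 * reads, then idle. */
static int busy_reads;

static unsigned int read_status(void)
{
	return busy_reads-- > 0 ? BUSY : 0;
}

/* Poll until the controller leaves the BUSY state or the poll budget
 * runs out, mirroring qspi_is_busy(); a nonzero return means still
 * busy, and the caller gives up instead of issuing the command. */
static unsigned int wait_not_busy(unsigned int max_polls)
{
	unsigned int stat = read_status();

	while ((stat & BUSY) && max_polls--)
		stat = read_status();
	return stat & BUSY;
}
```

The bounded loop is the essential part: an unconditional `while (busy)` would hang the transfer path on a wedged controller, whereas returning busy lets the caller fail the transfer cleanly.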
+1 -2
drivers/staging/comedi/drivers/adv_pci1710.c
··· 426 426 unsigned int *data) 427 427 { 428 428 struct pci1710_private *devpriv = dev->private; 429 - unsigned int chan = CR_CHAN(insn->chanspec); 430 429 int ret = 0; 431 430 int i; 432 431 ··· 446 447 if (ret) 447 448 break; 448 449 449 - ret = pci171x_ai_read_sample(dev, s, chan, &val); 450 + ret = pci171x_ai_read_sample(dev, s, 0, &val); 450 451 if (ret) 451 452 break; 452 453
+3 -2
drivers/staging/comedi/drivers/comedi_isadma.c
··· 91 91 stalled++; 92 92 if (stalled > 10) 93 93 break; 94 + } else { 95 + residue = new_residue; 96 + stalled = 0; 94 97 } 95 - residue = new_residue; 96 - stalled = 0; 97 98 } 98 99 return residue; 99 100 }
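The comedi_isadma fix moves the residue/stalled updates into an else branch, so the stall counter only resets when the DMA residue actually changes; before, every pass through the loop reset it and the stall exit could never trigger. A simulation of the corrected loop over a recorded residue sequence:

```c
#include <assert.h>

/* Poll a sequence of simulated DMA residue readings and bail out when
 * the residue stops changing, mirroring the fixed comedi_isadma loop:
 * the stall counter is reset only when progress is observed. */
static int poll_residue(const int *samples, int n)
{
	int residue = samples[0];
	int stalled = 0;
	int i;

	for (i = 1; i < n; i++) {
		int new_residue = samples[i];

		if (new_residue == residue) {
			stalled++;
			if (stalled > 10)
				break;		/* DMA stalled: give up */
		} else {
			residue = new_residue;
			stalled = 0;	/* progress: restart stall detection */
		}
	}
	return residue;
}
```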
-71
drivers/staging/comedi/drivers/vmk80xx.c
··· 103 103 VMK8061_MODEL 104 104 }; 105 105 106 - struct firmware_version { 107 - unsigned char ic3_vers[32]; /* USB-Controller */ 108 - unsigned char ic6_vers[32]; /* CPU */ 109 - }; 110 - 111 106 static const struct comedi_lrange vmk8061_range = { 112 107 2, { 113 108 UNI_RANGE(5), ··· 151 156 struct vmk80xx_private { 152 157 struct usb_endpoint_descriptor *ep_rx; 153 158 struct usb_endpoint_descriptor *ep_tx; 154 - struct firmware_version fw; 155 159 struct semaphore limit_sem; 156 160 unsigned char *usb_rx_buf; 157 161 unsigned char *usb_tx_buf; 158 162 enum vmk80xx_model model; 159 163 }; 160 - 161 - static int vmk80xx_check_data_link(struct comedi_device *dev) 162 - { 163 - struct vmk80xx_private *devpriv = dev->private; 164 - struct usb_device *usb = comedi_to_usb_dev(dev); 165 - unsigned int tx_pipe; 166 - unsigned int rx_pipe; 167 - unsigned char tx[1]; 168 - unsigned char rx[2]; 169 - 170 - tx_pipe = usb_sndbulkpipe(usb, 0x01); 171 - rx_pipe = usb_rcvbulkpipe(usb, 0x81); 172 - 173 - tx[0] = VMK8061_CMD_RD_PWR_STAT; 174 - 175 - /* 176 - * Check that IC6 (PIC16F871) is powered and 177 - * running and the data link between IC3 and 178 - * IC6 is working properly 179 - */ 180 - usb_bulk_msg(usb, tx_pipe, tx, 1, NULL, devpriv->ep_tx->bInterval); 181 - usb_bulk_msg(usb, rx_pipe, rx, 2, NULL, HZ * 10); 182 - 183 - return (int)rx[1]; 184 - } 185 - 186 - static void vmk80xx_read_eeprom(struct comedi_device *dev, int flag) 187 - { 188 - struct vmk80xx_private *devpriv = dev->private; 189 - struct usb_device *usb = comedi_to_usb_dev(dev); 190 - unsigned int tx_pipe; 191 - unsigned int rx_pipe; 192 - unsigned char tx[1]; 193 - unsigned char rx[64]; 194 - int cnt; 195 - 196 - tx_pipe = usb_sndbulkpipe(usb, 0x01); 197 - rx_pipe = usb_rcvbulkpipe(usb, 0x81); 198 - 199 - tx[0] = VMK8061_CMD_RD_VERSION; 200 - 201 - /* 202 - * Read the firmware version info of IC3 and 203 - * IC6 from the internal EEPROM of the IC 204 - */ 205 - usb_bulk_msg(usb, tx_pipe, tx, 1, NULL, 
devpriv->ep_tx->bInterval); 206 - usb_bulk_msg(usb, rx_pipe, rx, 64, &cnt, HZ * 10); 207 - 208 - rx[cnt] = '\0'; 209 - 210 - if (flag & IC3_VERSION) 211 - strncpy(devpriv->fw.ic3_vers, rx + 1, 24); 212 - else /* IC6_VERSION */ 213 - strncpy(devpriv->fw.ic6_vers, rx + 25, 24); 214 - } 215 164 216 165 static void vmk80xx_do_bulk_msg(struct comedi_device *dev) 217 166 { ··· 816 877 sema_init(&devpriv->limit_sem, 8); 817 878 818 879 usb_set_intfdata(intf, devpriv); 819 - 820 - if (devpriv->model == VMK8061_MODEL) { 821 - vmk80xx_read_eeprom(dev, IC3_VERSION); 822 - dev_info(&intf->dev, "%s\n", devpriv->fw.ic3_vers); 823 - 824 - if (vmk80xx_check_data_link(dev)) { 825 - vmk80xx_read_eeprom(dev, IC6_VERSION); 826 - dev_info(&intf->dev, "%s\n", devpriv->fw.ic6_vers); 827 - } 828 - } 829 880 830 881 if (devpriv->model == VMK8055_MODEL) 831 882 vmk80xx_reset_device(dev);
+106 -103
drivers/staging/iio/adc/mxs-lradc.c
··· 214 214 unsigned long is_divided; 215 215 216 216 /* 217 - * Touchscreen LRADC channels receives a private slot in the CTRL4 218 - * register, the slot #7. Therefore only 7 slots instead of 8 in the 219 - * CTRL4 register can be mapped to LRADC channels when using the 220 - * touchscreen. 221 - * 217 + * When the touchscreen is enabled, we give it two private virtual 218 + * channels: #6 and #7. This means that only 6 virtual channels (instead 219 + * of 8) will be available for buffered capture. 220 + */ 221 + #define TOUCHSCREEN_VCHANNEL1 7 222 + #define TOUCHSCREEN_VCHANNEL2 6 223 + #define BUFFER_VCHANS_LIMITED 0x3f 224 + #define BUFFER_VCHANS_ALL 0xff 225 + u8 buffer_vchans; 226 + 227 + /* 222 228 * Furthermore, certain LRADC channels are shared between touchscreen 223 229 * and/or touch-buttons and generic LRADC block. Therefore when using 224 230 * either of these, these channels are not available for the regular ··· 348 342 #define LRADC_CTRL4 0x140 349 343 #define LRADC_CTRL4_LRADCSELECT_MASK(n) (0xf << ((n) * 4)) 350 344 #define LRADC_CTRL4_LRADCSELECT_OFFSET(n) ((n) * 4) 345 + #define LRADC_CTRL4_LRADCSELECT(n, x) \ 346 + (((x) << LRADC_CTRL4_LRADCSELECT_OFFSET(n)) & \ 347 + LRADC_CTRL4_LRADCSELECT_MASK(n)) 351 348 352 349 #define LRADC_RESOLUTION 12 353 350 #define LRADC_SINGLE_SAMPLE_MASK ((1 << LRADC_RESOLUTION) - 1) ··· 425 416 LRADC_STATUS_TOUCH_DETECT_RAW); 426 417 } 427 418 419 + static void mxs_lradc_map_channel(struct mxs_lradc *lradc, unsigned vch, 420 + unsigned ch) 421 + { 422 + mxs_lradc_reg_clear(lradc, LRADC_CTRL4_LRADCSELECT_MASK(vch), 423 + LRADC_CTRL4); 424 + mxs_lradc_reg_set(lradc, LRADC_CTRL4_LRADCSELECT(vch, ch), LRADC_CTRL4); 425 + } 426 + 428 427 static void mxs_lradc_setup_ts_channel(struct mxs_lradc *lradc, unsigned ch) 429 428 { 430 429 /* ··· 467 450 LRADC_DELAY_DELAY(lradc->over_sample_delay - 1), 468 451 LRADC_DELAY(3)); 469 452 470 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(2) | 471 - LRADC_CTRL1_LRADC_IRQ(3) | 
LRADC_CTRL1_LRADC_IRQ(4) | 472 - LRADC_CTRL1_LRADC_IRQ(5), LRADC_CTRL1); 453 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(ch), LRADC_CTRL1); 473 454 474 - /* wake us again, when the complete conversion is done */ 475 - mxs_lradc_reg_set(lradc, LRADC_CTRL1_LRADC_IRQ_EN(ch), LRADC_CTRL1); 476 455 /* 477 456 * after changing the touchscreen plates setting 478 457 * the signals need some initial time to settle. Start the ··· 522 509 LRADC_DELAY_DELAY(lradc->over_sample_delay - 1), 523 510 LRADC_DELAY(3)); 524 511 525 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(2) | 526 - LRADC_CTRL1_LRADC_IRQ(3) | LRADC_CTRL1_LRADC_IRQ(4) | 527 - LRADC_CTRL1_LRADC_IRQ(5), LRADC_CTRL1); 512 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(ch2), LRADC_CTRL1); 528 513 529 - /* wake us again, when the conversions are done */ 530 - mxs_lradc_reg_set(lradc, LRADC_CTRL1_LRADC_IRQ_EN(ch2), LRADC_CTRL1); 531 514 /* 532 515 * after changing the touchscreen plates setting 533 516 * the signals need some initial time to settle. 
Start the ··· 589 580 #define TS_CH_XM 4 590 581 #define TS_CH_YM 5 591 582 592 - static int mxs_lradc_read_ts_channel(struct mxs_lradc *lradc) 593 - { 594 - u32 reg; 595 - int val; 596 - 597 - reg = readl(lradc->base + LRADC_CTRL1); 598 - 599 - /* only channels 3 to 5 are of interest here */ 600 - if (reg & LRADC_CTRL1_LRADC_IRQ(TS_CH_YP)) { 601 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(TS_CH_YP) | 602 - LRADC_CTRL1_LRADC_IRQ(TS_CH_YP), LRADC_CTRL1); 603 - val = mxs_lradc_read_raw_channel(lradc, TS_CH_YP); 604 - } else if (reg & LRADC_CTRL1_LRADC_IRQ(TS_CH_XM)) { 605 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(TS_CH_XM) | 606 - LRADC_CTRL1_LRADC_IRQ(TS_CH_XM), LRADC_CTRL1); 607 - val = mxs_lradc_read_raw_channel(lradc, TS_CH_XM); 608 - } else if (reg & LRADC_CTRL1_LRADC_IRQ(TS_CH_YM)) { 609 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(TS_CH_YM) | 610 - LRADC_CTRL1_LRADC_IRQ(TS_CH_YM), LRADC_CTRL1); 611 - val = mxs_lradc_read_raw_channel(lradc, TS_CH_YM); 612 - } else { 613 - return -EIO; 614 - } 615 - 616 - mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(2)); 617 - mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(3)); 618 - 619 - return val; 620 - } 621 - 622 583 /* 623 584 * YP(open)--+-------------+ 624 585 * | |--+ ··· 632 653 mxs_lradc_reg_set(lradc, mxs_lradc_drive_x_plate(lradc), LRADC_CTRL0); 633 654 634 655 lradc->cur_plate = LRADC_SAMPLE_X; 635 - mxs_lradc_setup_ts_channel(lradc, TS_CH_YP); 656 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL1, TS_CH_YP); 657 + mxs_lradc_setup_ts_channel(lradc, TOUCHSCREEN_VCHANNEL1); 636 658 } 637 659 638 660 /* ··· 654 674 mxs_lradc_reg_set(lradc, mxs_lradc_drive_y_plate(lradc), LRADC_CTRL0); 655 675 656 676 lradc->cur_plate = LRADC_SAMPLE_Y; 657 - mxs_lradc_setup_ts_channel(lradc, TS_CH_XM); 677 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL1, TS_CH_XM); 678 + mxs_lradc_setup_ts_channel(lradc, TOUCHSCREEN_VCHANNEL1); 658 679 } 659 680 660 681 /* ··· 676 695 mxs_lradc_reg_set(lradc, 
mxs_lradc_drive_pressure(lradc), LRADC_CTRL0); 677 696 678 697 lradc->cur_plate = LRADC_SAMPLE_PRESSURE; 679 - mxs_lradc_setup_ts_pressure(lradc, TS_CH_XP, TS_CH_YM); 698 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL1, TS_CH_YM); 699 + mxs_lradc_map_channel(lradc, TOUCHSCREEN_VCHANNEL2, TS_CH_XP); 700 + mxs_lradc_setup_ts_pressure(lradc, TOUCHSCREEN_VCHANNEL2, 701 + TOUCHSCREEN_VCHANNEL1); 680 702 } 681 703 682 704 static void mxs_lradc_enable_touch_detection(struct mxs_lradc *lradc) ··· 690 706 mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ | 691 707 LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, LRADC_CTRL1); 692 708 mxs_lradc_reg_set(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, LRADC_CTRL1); 709 + } 710 + 711 + static void mxs_lradc_start_touch_event(struct mxs_lradc *lradc) 712 + { 713 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, 714 + LRADC_CTRL1); 715 + mxs_lradc_reg_set(lradc, 716 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL1), LRADC_CTRL1); 717 + /* 718 + * start with the Y-pos, because it uses nearly the same plate 719 + * settings like the touch detection 720 + */ 721 + mxs_lradc_prepare_y_pos(lradc); 693 722 } 694 723 695 724 static void mxs_lradc_report_ts_event(struct mxs_lradc *lradc) ··· 722 725 * start a dummy conversion to burn time to settle the signals 723 726 * note: we are not interested in the conversion's value 724 727 */ 725 - mxs_lradc_reg_wrt(lradc, 0, LRADC_CH(5)); 726 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ(5), LRADC_CTRL1); 727 - mxs_lradc_reg_set(lradc, LRADC_CTRL1_LRADC_IRQ_EN(5), LRADC_CTRL1); 728 - mxs_lradc_reg_wrt(lradc, LRADC_DELAY_TRIGGER(1 << 5) | 728 + mxs_lradc_reg_wrt(lradc, 0, LRADC_CH(TOUCHSCREEN_VCHANNEL1)); 729 + mxs_lradc_reg_clear(lradc, 730 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1) | 731 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL2), LRADC_CTRL1); 732 + mxs_lradc_reg_wrt(lradc, 733 + LRADC_DELAY_TRIGGER(1 << TOUCHSCREEN_VCHANNEL1) | 729 734 LRADC_DELAY_KICK | LRADC_DELAY_DELAY(10), /* 
waste 5 ms */ 730 735 LRADC_DELAY(2)); 731 736 } ··· 759 760 760 761 /* if it is released, wait for the next touch via IRQ */ 761 762 lradc->cur_plate = LRADC_TOUCH; 762 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ, LRADC_CTRL1); 763 + mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(2)); 764 + mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(3)); 765 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ | 766 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL1) | 767 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1), LRADC_CTRL1); 763 768 mxs_lradc_reg_set(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, LRADC_CTRL1); 764 769 } 765 770 766 771 /* touchscreen's state machine */ 767 772 static void mxs_lradc_handle_touch(struct mxs_lradc *lradc) 768 773 { 769 - int val; 770 - 771 774 switch (lradc->cur_plate) { 772 775 case LRADC_TOUCH: 773 - /* 774 - * start with the Y-pos, because it uses nearly the same plate 775 - * settings like the touch detection 776 - */ 777 - if (mxs_lradc_check_touch_event(lradc)) { 778 - mxs_lradc_reg_clear(lradc, 779 - LRADC_CTRL1_TOUCH_DETECT_IRQ_EN, 780 - LRADC_CTRL1); 781 - mxs_lradc_prepare_y_pos(lradc); 782 - } 776 + if (mxs_lradc_check_touch_event(lradc)) 777 + mxs_lradc_start_touch_event(lradc); 783 778 mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ, 784 779 LRADC_CTRL1); 785 780 return; 786 781 787 782 case LRADC_SAMPLE_Y: 788 - val = mxs_lradc_read_ts_channel(lradc); 789 - if (val < 0) { 790 - mxs_lradc_enable_touch_detection(lradc); /* re-start */ 791 - return; 792 - } 793 - lradc->ts_y_pos = val; 783 + lradc->ts_y_pos = mxs_lradc_read_raw_channel(lradc, 784 + TOUCHSCREEN_VCHANNEL1); 794 785 mxs_lradc_prepare_x_pos(lradc); 795 786 return; 796 787 797 788 case LRADC_SAMPLE_X: 798 - val = mxs_lradc_read_ts_channel(lradc); 799 - if (val < 0) { 800 - mxs_lradc_enable_touch_detection(lradc); /* re-start */ 801 - return; 802 - } 803 - lradc->ts_x_pos = val; 789 + lradc->ts_x_pos = mxs_lradc_read_raw_channel(lradc, 790 + 
TOUCHSCREEN_VCHANNEL1); 804 791 mxs_lradc_prepare_pressure(lradc); 805 792 return; 806 793 807 794 case LRADC_SAMPLE_PRESSURE: 808 - lradc->ts_pressure = 809 - mxs_lradc_read_ts_pressure(lradc, TS_CH_XP, TS_CH_YM); 795 + lradc->ts_pressure = mxs_lradc_read_ts_pressure(lradc, 796 + TOUCHSCREEN_VCHANNEL2, 797 + TOUCHSCREEN_VCHANNEL1); 810 798 mxs_lradc_complete_touch_event(lradc); 811 799 return; 812 800 813 801 case LRADC_SAMPLE_VALID: 814 - val = mxs_lradc_read_ts_channel(lradc); /* ignore the value */ 815 802 mxs_lradc_finish_touch_event(lradc, 1); 816 803 break; 817 804 } ··· 829 844 * used if doing raw sampling. 830 845 */ 831 846 if (lradc->soc == IMX28_LRADC) 832 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_MX28_LRADC_IRQ_EN_MASK, 847 + mxs_lradc_reg_clear(lradc, LRADC_CTRL1_LRADC_IRQ_EN(0), 833 848 LRADC_CTRL1); 834 - mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0); 849 + mxs_lradc_reg_clear(lradc, 0x1, LRADC_CTRL0); 835 850 836 851 /* Enable / disable the divider per requirement */ 837 852 if (test_bit(chan, &lradc->is_divided)) ··· 1075 1090 { 1076 1091 /* stop all interrupts from firing */ 1077 1092 mxs_lradc_reg_clear(lradc, LRADC_CTRL1_TOUCH_DETECT_IRQ_EN | 1078 - LRADC_CTRL1_LRADC_IRQ_EN(2) | LRADC_CTRL1_LRADC_IRQ_EN(3) | 1079 - LRADC_CTRL1_LRADC_IRQ_EN(4) | LRADC_CTRL1_LRADC_IRQ_EN(5), 1080 - LRADC_CTRL1); 1093 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL1) | 1094 + LRADC_CTRL1_LRADC_IRQ_EN(TOUCHSCREEN_VCHANNEL2), LRADC_CTRL1); 1081 1095 1082 1096 /* Power-down touchscreen touch-detect circuitry. 
*/ 1083 1097 mxs_lradc_reg_clear(lradc, mxs_lradc_plate_mask(lradc), LRADC_CTRL0); ··· 1142 1158 struct iio_dev *iio = data; 1143 1159 struct mxs_lradc *lradc = iio_priv(iio); 1144 1160 unsigned long reg = readl(lradc->base + LRADC_CTRL1); 1161 + uint32_t clr_irq = mxs_lradc_irq_mask(lradc); 1145 1162 const uint32_t ts_irq_mask = 1146 1163 LRADC_CTRL1_TOUCH_DETECT_IRQ | 1147 - LRADC_CTRL1_LRADC_IRQ(2) | 1148 - LRADC_CTRL1_LRADC_IRQ(3) | 1149 - LRADC_CTRL1_LRADC_IRQ(4) | 1150 - LRADC_CTRL1_LRADC_IRQ(5); 1164 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1) | 1165 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL2); 1151 1166 1152 1167 if (!(reg & mxs_lradc_irq_mask(lradc))) 1153 1168 return IRQ_NONE; 1154 1169 1155 - if (lradc->use_touchscreen && (reg & ts_irq_mask)) 1170 + if (lradc->use_touchscreen && (reg & ts_irq_mask)) { 1156 1171 mxs_lradc_handle_touch(lradc); 1157 1172 1158 - if (iio_buffer_enabled(iio)) 1159 - iio_trigger_poll(iio->trig); 1160 - else if (reg & LRADC_CTRL1_LRADC_IRQ(0)) 1161 - complete(&lradc->completion); 1173 + /* Make sure we don't clear the next conversion's interrupt. 
*/ 1174 + clr_irq &= ~(LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL1) | 1175 + LRADC_CTRL1_LRADC_IRQ(TOUCHSCREEN_VCHANNEL2)); 1176 + } 1162 1177 1163 - mxs_lradc_reg_clear(lradc, reg & mxs_lradc_irq_mask(lradc), 1164 - LRADC_CTRL1); 1178 + if (iio_buffer_enabled(iio)) { 1179 + if (reg & lradc->buffer_vchans) 1180 + iio_trigger_poll(iio->trig); 1181 + } else if (reg & LRADC_CTRL1_LRADC_IRQ(0)) { 1182 + complete(&lradc->completion); 1183 + } 1184 + 1185 + mxs_lradc_reg_clear(lradc, reg & clr_irq, LRADC_CTRL1); 1165 1186 1166 1187 return IRQ_HANDLED; 1167 1188 } ··· 1278 1289 } 1279 1290 1280 1291 if (lradc->soc == IMX28_LRADC) 1281 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_MX28_LRADC_IRQ_EN_MASK, 1282 - LRADC_CTRL1); 1283 - mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0); 1292 + mxs_lradc_reg_clear(lradc, 1293 + lradc->buffer_vchans << LRADC_CTRL1_LRADC_IRQ_EN_OFFSET, 1294 + LRADC_CTRL1); 1295 + mxs_lradc_reg_clear(lradc, lradc->buffer_vchans, LRADC_CTRL0); 1284 1296 1285 1297 for_each_set_bit(chan, iio->active_scan_mask, LRADC_MAX_TOTAL_CHANS) { 1286 1298 ctrl4_set |= chan << LRADC_CTRL4_LRADCSELECT_OFFSET(ofs); ··· 1314 1324 mxs_lradc_reg_clear(lradc, LRADC_DELAY_TRIGGER_LRADCS_MASK | 1315 1325 LRADC_DELAY_KICK, LRADC_DELAY(0)); 1316 1326 1317 - mxs_lradc_reg_clear(lradc, 0xff, LRADC_CTRL0); 1327 + mxs_lradc_reg_clear(lradc, lradc->buffer_vchans, LRADC_CTRL0); 1318 1328 if (lradc->soc == IMX28_LRADC) 1319 - mxs_lradc_reg_clear(lradc, LRADC_CTRL1_MX28_LRADC_IRQ_EN_MASK, 1320 - LRADC_CTRL1); 1329 + mxs_lradc_reg_clear(lradc, 1330 + lradc->buffer_vchans << LRADC_CTRL1_LRADC_IRQ_EN_OFFSET, 1331 + LRADC_CTRL1); 1321 1332 1322 1333 kfree(lradc->buffer); 1323 1334 mutex_unlock(&lradc->lock); ··· 1344 1353 if (lradc->use_touchbutton) 1345 1354 rsvd_chans++; 1346 1355 if (lradc->use_touchscreen) 1347 - rsvd_chans++; 1356 + rsvd_chans += 2; 1348 1357 1349 1358 /* Test for attempts to map channels with special mode of operation. 
*/ 1350 1359 if (bitmap_intersects(mask, &rsvd_mask, LRADC_MAX_TOTAL_CHANS)) ··· 1403 1412 BIT(IIO_CHAN_INFO_SCALE), 1404 1413 .channel = 8, 1405 1414 .scan_type = {.sign = 'u', .realbits = 18, .storagebits = 32,}, 1415 + }, 1416 + /* Hidden channel to keep indexes */ 1417 + { 1418 + .type = IIO_TEMP, 1419 + .indexed = 1, 1420 + .scan_index = -1, 1421 + .channel = 9, 1406 1422 }, 1407 1423 MXS_ADC_CHAN(10, IIO_VOLTAGE), /* VDDIO */ 1408 1424 MXS_ADC_CHAN(11, IIO_VOLTAGE), /* VTH */ ··· 1580 1582 } 1581 1583 1582 1584 touch_ret = mxs_lradc_probe_touchscreen(lradc, node); 1585 + 1586 + if (touch_ret == 0) 1587 + lradc->buffer_vchans = BUFFER_VCHANS_LIMITED; 1588 + else 1589 + lradc->buffer_vchans = BUFFER_VCHANS_ALL; 1583 1590 1584 1591 /* Grab all IRQ sources */ 1585 1592 for (i = 0; i < of_cfg->irq_count; i++) {
+2 -1
drivers/staging/iio/resolver/ad2s1200.c
··· 18 18 #include <linux/delay.h> 19 19 #include <linux/gpio.h> 20 20 #include <linux/module.h> 21 + #include <linux/bitops.h> 21 22 22 23 #include <linux/iio/iio.h> 23 24 #include <linux/iio/sysfs.h> ··· 69 68 break; 70 69 case IIO_ANGL_VEL: 71 70 vel = (((s16)(st->rx[0])) << 4) | ((st->rx[1] & 0xF0) >> 4); 72 - vel = (vel << 4) >> 4; 71 + vel = sign_extend32(vel, 11); 73 72 *val = vel; 74 73 break; 75 74 default:
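The ad2s1200 change replaces a shift-up/shift-down trick on the 12-bit velocity sample with sign_extend32(vel, 11), which is both clearer and independent of the variable's width. A reimplementation of that helper (like the kernel's, it relies on arithmetic right shift of negative values, which holds on all platforms the kernel targets):

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the value whose sign bit sits at position `sign_bit`,
 * like the kernel's sign_extend32(): shift the sign bit up to bit 31,
 * then arithmetic-shift back down to replicate it. */
static int32_t sign_extend_32(uint32_t value, unsigned int sign_bit)
{
	unsigned int shift = 31 - sign_bit;

	return (int32_t)(value << shift) >> shift;
}
```

For the driver's 12-bit velocity field the call is `sign_extend_32(vel, 11)`, turning e.g. the raw pattern 0xFFF into -1.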
+6 -4
drivers/thermal/int340x_thermal/int340x_thermal_zone.c
··· 208 208 trip_cnt, GFP_KERNEL); 209 209 if (!int34x_thermal_zone->aux_trips) { 210 210 ret = -ENOMEM; 211 - goto free_mem; 211 + goto err_trip_alloc; 212 212 } 213 213 trip_mask = BIT(trip_cnt) - 1; 214 214 int34x_thermal_zone->aux_trip_nr = trip_cnt; ··· 248 248 0, 0); 249 249 if (IS_ERR(int34x_thermal_zone->zone)) { 250 250 ret = PTR_ERR(int34x_thermal_zone->zone); 251 - goto free_lpat; 251 + goto err_thermal_zone; 252 252 } 253 253 254 254 return int34x_thermal_zone; 255 255 256 - free_lpat: 256 + err_thermal_zone: 257 257 acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table); 258 - free_mem: 258 + kfree(int34x_thermal_zone->aux_trips); 259 + err_trip_alloc: 259 260 kfree(int34x_thermal_zone); 260 261 return ERR_PTR(ret); 261 262 } ··· 267 266 { 268 267 thermal_zone_device_unregister(int34x_thermal_zone->zone); 269 268 acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table); 269 + kfree(int34x_thermal_zone->aux_trips); 270 270 kfree(int34x_thermal_zone); 271 271 } 272 272 EXPORT_SYMBOL_GPL(int340x_thermal_zone_remove);
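The int340x error path is restructured so each label unwinds exactly what was set up before the failure point, and the previously leaked aux_trips allocation is now freed both on error and in remove. A sketch of that goto-unwind shape, using counters in place of kfree()/table-free calls (function and flag names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Counters standing in for kfree()/free-table calls, so the unwind
 * order can be checked. */
static int trips_freed, lpat_freed, zone_freed;

/* Sketch of the patched error-unwind pattern: each label undoes exactly
 * the allocations that succeeded before the failure jumped there. */
static int setup(bool trip_alloc_ok, bool zone_reg_ok)
{
	/* zone struct allocated here ... */
	if (!trip_alloc_ok)
		goto err_trip_alloc;	/* only the zone struct to undo */
	/* aux trips + lpat table allocated here ... */
	if (!zone_reg_ok)
		goto err_thermal_zone;	/* lpat + trips + zone struct */
	return 0;

err_thermal_zone:
	lpat_freed++;
	trips_freed++;		/* the fix: trips were leaked before */
err_trip_alloc:
	zone_freed++;
	return -1;
}
```

The labels fall through deliberately: the later failure point frees more, then drops into the earlier label's cleanup, so each resource is released exactly once.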
+2 -1
drivers/thermal/samsung/exynos_tmu.c
··· 682 682 683 683 if (on) { 684 684 con |= (1 << EXYNOS_TMU_CORE_EN_SHIFT); 685 + con |= (1 << EXYNOS7_PD_DET_EN_SHIFT); 685 686 interrupt_en = 686 687 (of_thermal_is_trip_valid(tz, 7) 687 688 << EXYNOS7_TMU_INTEN_RISE7_SHIFT) | ··· 705 704 interrupt_en << EXYNOS_TMU_INTEN_FALL0_SHIFT; 706 705 } else { 707 706 con &= ~(1 << EXYNOS_TMU_CORE_EN_SHIFT); 707 + con &= ~(1 << EXYNOS7_PD_DET_EN_SHIFT); 708 708 interrupt_en = 0; /* Disable all interrupts */ 709 709 } 710 - con |= 1 << EXYNOS7_PD_DET_EN_SHIFT; 711 710 712 711 writel(interrupt_en, data->base + EXYNOS7_TMU_REG_INTEN); 713 712 writel(con, data->base + EXYNOS_TMU_REG_CONTROL);
+17 -20
drivers/thermal/thermal_core.c
··· 899 899 return sprintf(buf, "%d\n", instance->trip); 900 900 } 901 901 902 + static struct attribute *cooling_device_attrs[] = { 903 + &dev_attr_cdev_type.attr, 904 + &dev_attr_max_state.attr, 905 + &dev_attr_cur_state.attr, 906 + NULL, 907 + }; 908 + 909 + static const struct attribute_group cooling_device_attr_group = { 910 + .attrs = cooling_device_attrs, 911 + }; 912 + 913 + static const struct attribute_group *cooling_device_attr_groups[] = { 914 + &cooling_device_attr_group, 915 + NULL, 916 + }; 917 + 902 918 /* Device management */ 903 919 904 920 /** ··· 1146 1130 cdev->ops = ops; 1147 1131 cdev->updated = false; 1148 1132 cdev->device.class = &thermal_class; 1133 + cdev->device.groups = cooling_device_attr_groups; 1149 1134 cdev->devdata = devdata; 1150 1135 dev_set_name(&cdev->device, "cooling_device%d", cdev->id); 1151 1136 result = device_register(&cdev->device); ··· 1155 1138 kfree(cdev); 1156 1139 return ERR_PTR(result); 1157 1140 } 1158 - 1159 - /* sys I/F */ 1160 - if (type) { 1161 - result = device_create_file(&cdev->device, &dev_attr_cdev_type); 1162 - if (result) 1163 - goto unregister; 1164 - } 1165 - 1166 - result = device_create_file(&cdev->device, &dev_attr_max_state); 1167 - if (result) 1168 - goto unregister; 1169 - 1170 - result = device_create_file(&cdev->device, &dev_attr_cur_state); 1171 - if (result) 1172 - goto unregister; 1173 1141 1174 1142 /* Add 'this' new cdev to the global cdev list */ 1175 1143 mutex_lock(&thermal_list_lock); ··· 1165 1163 bind_cdev(cdev); 1166 1164 1167 1165 return cdev; 1168 - 1169 - unregister: 1170 - release_idr(&thermal_cdev_idr, &thermal_idr_lock, cdev->id); 1171 - device_unregister(&cdev->device); 1172 - return ERR_PTR(result); 1173 1166 } 1174 1167 1175 1168 /**
-13
drivers/tty/bfin_jtag_comm.c
··· 210 210 return circ_cnt(&bfin_jc_write_buf); 211 211 } 212 212 213 - static void 214 - bfin_jc_wait_until_sent(struct tty_struct *tty, int timeout) 215 - { 216 - unsigned long expire = jiffies + timeout; 217 - while (!circ_empty(&bfin_jc_write_buf)) { 218 - if (signal_pending(current)) 219 - break; 220 - if (time_after(jiffies, expire)) 221 - break; 222 - } 223 - } 224 - 225 213 static const struct tty_operations bfin_jc_ops = { 226 214 .open = bfin_jc_open, 227 215 .close = bfin_jc_close, ··· 218 230 .flush_chars = bfin_jc_flush_chars, 219 231 .write_room = bfin_jc_write_room, 220 232 .chars_in_buffer = bfin_jc_chars_in_buffer, 221 - .wait_until_sent = bfin_jc_wait_until_sent, 222 233 }; 223 234 224 235 static int __init bfin_jc_init(void)
+5 -6
drivers/tty/serial/8250/8250_core.c
··· 2138 2138 /* 2139 2139 * Clear the interrupt registers. 2140 2140 */ 2141 - if (serial_port_in(port, UART_LSR) & UART_LSR_DR) 2142 - serial_port_in(port, UART_RX); 2141 + serial_port_in(port, UART_LSR); 2142 + serial_port_in(port, UART_RX); 2143 2143 serial_port_in(port, UART_IIR); 2144 2144 serial_port_in(port, UART_MSR); 2145 2145 ··· 2300 2300 * saved flags to avoid getting false values from polling 2301 2301 * routines or the previous session. 2302 2302 */ 2303 - if (serial_port_in(port, UART_LSR) & UART_LSR_DR) 2304 - serial_port_in(port, UART_RX); 2303 + serial_port_in(port, UART_LSR); 2304 + serial_port_in(port, UART_RX); 2305 2305 serial_port_in(port, UART_IIR); 2306 2306 serial_port_in(port, UART_MSR); 2307 2307 up->lsr_saved_flags = 0; ··· 2394 2394 * Read data port to reset things, and then unlink from 2395 2395 * the IRQ chain. 2396 2396 */ 2397 - if (serial_port_in(port, UART_LSR) & UART_LSR_DR) 2398 - serial_port_in(port, UART_RX); 2397 + serial_port_in(port, UART_RX); 2399 2398 serial8250_rpm_put(up); 2400 2399 2401 2400 del_timer_sync(&up->timer);
+32
drivers/tty/serial/8250/8250_dw.c
··· 59 59 u8 usr_reg; 60 60 int last_mcr; 61 61 int line; 62 + int msr_mask_on; 63 + int msr_mask_off; 62 64 struct clk *clk; 63 65 struct clk *pclk; 64 66 struct reset_control *rst; ··· 81 79 if (offset == UART_MSR && d->last_mcr & UART_MCR_AFE) { 82 80 value |= UART_MSR_CTS; 83 81 value &= ~UART_MSR_DCTS; 82 + } 83 + 84 + /* Override any modem control signals if needed */ 85 + if (offset == UART_MSR) { 86 + value |= d->msr_mask_on; 87 + value &= ~d->msr_mask_off; 84 88 } 85 89 86 90 return value; ··· 341 333 id = of_alias_get_id(np, "serial"); 342 334 if (id >= 0) 343 335 p->line = id; 336 + 337 + if (of_property_read_bool(np, "dcd-override")) { 338 + /* Always report DCD as active */ 339 + data->msr_mask_on |= UART_MSR_DCD; 340 + data->msr_mask_off |= UART_MSR_DDCD; 341 + } 342 + 343 + if (of_property_read_bool(np, "dsr-override")) { 344 + /* Always report DSR as active */ 345 + data->msr_mask_on |= UART_MSR_DSR; 346 + data->msr_mask_off |= UART_MSR_DDSR; 347 + } 348 + 349 + if (of_property_read_bool(np, "cts-override")) { 350 + /* Always report CTS as active */ 351 + data->msr_mask_on |= UART_MSR_CTS; 352 + data->msr_mask_off |= UART_MSR_DCTS; 353 + } 354 + 355 + if (of_property_read_bool(np, "ri-override")) { 356 + /* Always report Ring indicator as inactive */ 357 + data->msr_mask_off |= UART_MSR_RI; 358 + data->msr_mask_off |= UART_MSR_TERI; 359 + } 344 360 345 361 /* clock got configured through clk api, all done */ 346 362 if (p->uartclk)
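The `msr_mask_on`/`msr_mask_off` hunk above folds the DT override properties into two masks that are applied on every MSR read. A minimal standalone sketch of that masking (the struct and function names here are stand-ins for illustration, not the driver's actual API):

```c
#include <assert.h>

/* Bit values mirror include/uapi/linux/serial_reg.h */
#define UART_MSR_DCD  0x80	/* Data Carrier Detect */
#define UART_MSR_DDCD 0x08	/* Delta DCD */

struct dw_overrides {
	unsigned int msr_mask_on;	/* status bits forced to 1 */
	unsigned int msr_mask_off;	/* (delta) bits forced to 0 */
};

/* Apply the overrides to a raw MSR value, as the driver's read path does */
static unsigned int apply_msr_overrides(const struct dw_overrides *d,
					unsigned int value)
{
	value |= d->msr_mask_on;
	value &= ~d->msr_mask_off;
	return value;
}
```

With a `dcd-override` property set, a raw MSR of 0 reads back with DCD asserted and the delta-DCD bit suppressed, so the line discipline never sees carrier loss.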
+1 -19
drivers/tty/serial/8250/8250_pci.c
··· 69 69 "Please send the output of lspci -vv, this\n" 70 70 "message (0x%04x,0x%04x,0x%04x,0x%04x), the\n" 71 71 "manufacturer and name of serial board or\n" 72 - "modem board to rmk+serial@arm.linux.org.uk.\n", 72 + "modem board to <linux-serial@vger.kernel.org>.\n", 73 73 pci_name(dev), str, dev->vendor, dev->device, 74 74 dev->subsystem_vendor, dev->subsystem_device); 75 75 } ··· 1989 1989 }, 1990 1990 { 1991 1991 .vendor = PCI_VENDOR_ID_INTEL, 1992 - .device = PCI_DEVICE_ID_INTEL_QRK_UART, 1993 - .subvendor = PCI_ANY_ID, 1994 - .subdevice = PCI_ANY_ID, 1995 - .setup = pci_default_setup, 1996 - }, 1997 - { 1998 - .vendor = PCI_VENDOR_ID_INTEL, 1999 1992 .device = PCI_DEVICE_ID_INTEL_BSW_UART1, 2000 1993 .subvendor = PCI_ANY_ID, 2001 1994 .subdevice = PCI_ANY_ID, ··· 2192 2199 /* 2193 2200 * PLX 2194 2201 */ 2195 - { 2196 - .vendor = PCI_VENDOR_ID_PLX, 2197 - .device = PCI_DEVICE_ID_PLX_9030, 2198 - .subvendor = PCI_SUBVENDOR_ID_PERLE, 2199 - .subdevice = PCI_ANY_ID, 2200 - .setup = pci_default_setup, 2201 - }, 2202 2202 { 2203 2203 .vendor = PCI_VENDOR_ID_PLX, 2204 2204 .device = PCI_DEVICE_ID_PLX_9050, ··· 5398 5412 0, 0, pbn_b0_bt_4_115200 }, 5399 5413 5400 5414 { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH353_2S1PF, 5401 - PCI_ANY_ID, PCI_ANY_ID, 5402 - 0, 0, pbn_b0_bt_2_115200 }, 5403 - 5404 - { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH352_2S, 5405 5415 PCI_ANY_ID, PCI_ANY_ID, 5406 5416 0, 0, pbn_b0_bt_2_115200 }, 5407 5417
+45 -4
drivers/tty/serial/atmel_serial.c
··· 47 47 #include <linux/gpio/consumer.h> 48 48 #include <linux/err.h> 49 49 #include <linux/irq.h> 50 + #include <linux/suspend.h> 50 51 51 52 #include <asm/io.h> 52 53 #include <asm/ioctls.h> ··· 174 173 bool ms_irq_enabled; 175 174 bool is_usart; /* usart or uart */ 176 175 struct timer_list uart_timer; /* uart timer */ 176 + 177 + bool suspended; 178 + unsigned int pending; 179 + unsigned int pending_status; 180 + spinlock_t lock_suspended; 181 + 177 182 int (*prepare_rx)(struct uart_port *port); 178 183 int (*prepare_tx)(struct uart_port *port); 179 184 void (*schedule_rx)(struct uart_port *port); ··· 1186 1179 { 1187 1180 struct uart_port *port = dev_id; 1188 1181 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1189 - unsigned int status, pending, pass_counter = 0; 1182 + unsigned int status, pending, mask, pass_counter = 0; 1190 1183 bool gpio_handled = false; 1184 + 1185 + spin_lock(&atmel_port->lock_suspended); 1191 1186 1192 1187 do { 1193 1188 status = atmel_get_lines_status(port); 1194 - pending = status & UART_GET_IMR(port); 1189 + mask = UART_GET_IMR(port); 1190 + pending = status & mask; 1195 1191 if (!gpio_handled) { 1196 1192 /* 1197 1193 * Dealing with GPIO interrupt ··· 1216 1206 if (!pending) 1217 1207 break; 1218 1208 1209 + if (atmel_port->suspended) { 1210 + atmel_port->pending |= pending; 1211 + atmel_port->pending_status = status; 1212 + UART_PUT_IDR(port, mask); 1213 + pm_system_wakeup(); 1214 + break; 1215 + } 1216 + 1219 1217 atmel_handle_receive(port, pending); 1220 1218 atmel_handle_status(port, pending, status); 1221 1219 atmel_handle_transmit(port, pending); 1222 1220 } while (pass_counter++ < ATMEL_ISR_PASS_LIMIT); 1221 + 1222 + spin_unlock(&atmel_port->lock_suspended); 1223 1223 1224 1224 return pass_counter ? IRQ_HANDLED : IRQ_NONE; 1225 1225 } ··· 1762 1742 /* 1763 1743 * Allocate the IRQ 1764 1744 */ 1765 - retval = request_irq(port->irq, atmel_interrupt, IRQF_SHARED, 1745 + retval = request_irq(port->irq, atmel_interrupt, 1746 + IRQF_SHARED | IRQF_COND_SUSPEND, 1766 1747 tty ? tty->name : "atmel_serial", port); 1767 1748 if (retval) { 1768 1749 dev_err(port->dev, "atmel_startup - Can't get irq\n"); ··· 2534 2513 2535 2514 /* we can not wake up if we're running on slow clock */ 2536 2515 atmel_port->may_wakeup = device_may_wakeup(&pdev->dev); 2537 - if (atmel_serial_clk_will_stop()) 2516 + if (atmel_serial_clk_will_stop()) { 2517 + unsigned long flags; 2518 + 2519 + spin_lock_irqsave(&atmel_port->lock_suspended, flags); 2520 + atmel_port->suspended = true; 2521 + spin_unlock_irqrestore(&atmel_port->lock_suspended, flags); 2538 2522 device_set_wakeup_enable(&pdev->dev, 0); 2523 + } 2539 2524 2540 2525 uart_suspend_port(&atmel_uart, port); 2541 2526 ··· 2552 2525 { 2553 2526 struct uart_port *port = platform_get_drvdata(pdev); 2554 2527 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 2528 + unsigned long flags; 2529 + 2530 + spin_lock_irqsave(&atmel_port->lock_suspended, flags); 2531 + if (atmel_port->pending) { 2532 + atmel_handle_receive(port, atmel_port->pending); 2533 + atmel_handle_status(port, atmel_port->pending, 2534 + atmel_port->pending_status); 2535 + atmel_handle_transmit(port, atmel_port->pending); 2536 + atmel_port->pending = 0; 2537 + } 2538 + atmel_port->suspended = false; 2539 + spin_unlock_irqrestore(&atmel_port->lock_suspended, flags); 2555 2540 2556 2541 uart_resume_port(&atmel_uart, port); 2557 2542 device_set_wakeup_enable(&pdev->dev, atmel_port->may_wakeup); ··· 2631 2592 port = &atmel_ports[ret]; 2632 2593 port->backup_imr = 0; 2633 2594 port->uart.line = ret; 2595 + 2596 + spin_lock_init(&port->lock_suspended); 2634 2597 2635 2598 ret = atmel_init_gpios(port, &pdev->dev); 2636 2599 if (ret < 0)
-4
drivers/tty/serial/of_serial.c
··· 133 133 if (of_find_property(np, "no-loopback-test", NULL)) 134 134 port->flags |= UPF_SKIP_TEST; 135 135 136 - ret = of_alias_get_id(np, "serial"); 137 - if (ret >= 0) 138 - port->line = ret; 139 - 140 136 port->dev = &ofdev->dev; 141 137 142 138 switch (type) {
+3 -1
drivers/tty/serial/sprd_serial.c
··· 293 293 294 294 ims = serial_in(port, SPRD_IMSR); 295 295 296 - if (!ims) 296 + if (!ims) { 297 + spin_unlock(&port->lock); 297 298 return IRQ_NONE; 299 + } 298 300 299 301 serial_out(port, SPRD_ICLR, ~0); 300 302
+2 -2
drivers/tty/tty_io.c
··· 1028 1028 /* We limit tty time update visibility to every 8 seconds or so. */ 1029 1029 static void tty_update_time(struct timespec *time) 1030 1030 { 1031 - unsigned long sec = get_seconds() & ~7; 1032 - if ((long)(sec - time->tv_sec) > 0) 1031 + unsigned long sec = get_seconds(); 1032 + if (abs(sec - time->tv_sec) & ~7) 1033 1033 time->tv_sec = sec; 1034 1034 } 1035 1035
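The tty_update_time() change above replaces a signed, forward-only comparison with a granularity test on the absolute difference, so a clock stepped backwards also refreshes the timestamp. A self-contained sketch of the new check (the helper name is illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Refresh the cached second count only when it is 8 or more seconds
 * away from "now" in either direction: labs(delta) & ~7 is nonzero
 * exactly when |delta| >= 8, mirroring the fixed comparison. */
static void update_time_sketch(long *cached_sec, long now_sec)
{
	if (labs(now_sec - *cached_sec) & ~7L)
		*cached_sec = now_sec;
}
```

Under the old `(long)(sec - time->tv_sec) > 0` test, a system time set into the past would leave the cached timestamp stale indefinitely; the new form bounds the error to 8 seconds either way.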
+11 -5
drivers/tty/tty_ioctl.c
··· 217 217 #endif 218 218 if (!timeout) 219 219 timeout = MAX_SCHEDULE_TIMEOUT; 220 - if (wait_event_interruptible_timeout(tty->write_wait, 221 - !tty_chars_in_buffer(tty), timeout) >= 0) { 222 - if (tty->ops->wait_until_sent) 223 - tty->ops->wait_until_sent(tty, timeout); 224 - } 220 + 221 + timeout = wait_event_interruptible_timeout(tty->write_wait, 222 + !tty_chars_in_buffer(tty), timeout); 223 + if (timeout <= 0) 224 + return; 225 + 226 + if (timeout == MAX_SCHEDULE_TIMEOUT) 227 + timeout = 0; 228 + 229 + if (tty->ops->wait_until_sent) 230 + tty->ops->wait_until_sent(tty, timeout); 225 231 } 226 232 EXPORT_SYMBOL(tty_wait_until_sent); 227 233
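The tty_ioctl.c hunk above makes tty_wait_until_sent() hand the driver's `wait_until_sent` hook the time *remaining* after the write buffer drains, rather than the original timeout, and maps the "no limit" sentinel back to 0 before the call. A standalone sketch of just that mapping (the helper and its -1 convention are hypothetical, for illustration only):

```c
#include <assert.h>
#include <limits.h>

#define MAX_SCHEDULE_TIMEOUT LONG_MAX	/* kernel's "wait forever" value */

/* Given the wait's return value (time left, 0 on timeout, negative on
 * signal), decide what to pass to the driver hook: -1 means "skip the
 * hook entirely", 0 means "no limit" in the hook's convention. */
static long hook_timeout(long wait_ret)
{
	if (wait_ret <= 0)			/* timed out or interrupted */
		return -1;
	if (wait_ret == MAX_SCHEDULE_TIMEOUT)	/* caller passed no limit */
		return 0;
	return wait_ret;			/* time still remaining */
}
```

The old code compared the wait's result with `>= 0`, so a driver hook could be entered with the full original timeout even after most of it had already elapsed.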
+2
drivers/usb/class/cdc-acm.c
··· 1650 1650 1651 1651 static const struct usb_device_id acm_ids[] = { 1652 1652 /* quirky and broken devices */ 1653 + { USB_DEVICE(0x076d, 0x0006), /* Denso Cradle CU-321 */ 1654 + .driver_info = NO_UNION_NORMAL, },/* has no union descriptor */ 1653 1655 { USB_DEVICE(0x17ef, 0x7000), /* Lenovo USB modem */ 1654 1656 .driver_info = NO_UNION_NORMAL, },/* has no union descriptor */ 1655 1657 { USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
+2
drivers/usb/core/devio.c
··· 501 501 as->status = urb->status; 502 502 signr = as->signr; 503 503 if (signr) { 504 + memset(&sinfo, 0, sizeof(sinfo)); 504 505 sinfo.si_signo = as->signr; 505 506 sinfo.si_errno = as->status; 506 507 sinfo.si_code = SI_ASYNCIO; ··· 2383 2382 wake_up_all(&ps->wait); 2384 2383 list_del_init(&ps->list); 2385 2384 if (ps->discsignr) { 2385 + memset(&sinfo, 0, sizeof(sinfo)); 2386 2386 sinfo.si_signo = ps->discsignr; 2387 2387 sinfo.si_errno = EPIPE; 2388 2388 sinfo.si_code = SI_ASYNCIO;
+28 -2
drivers/usb/dwc3/dwc3-omap.c
··· 205 205 omap->irq0_offset, value); 206 206 } 207 207 208 + static void dwc3_omap_write_irqmisc_clr(struct dwc3_omap *omap, u32 value) 209 + { 210 + dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_CLR_MISC + 211 + omap->irqmisc_offset, value); 212 + } 213 + 214 + static void dwc3_omap_write_irq0_clr(struct dwc3_omap *omap, u32 value) 215 + { 216 + dwc3_omap_writel(omap->base, USBOTGSS_IRQENABLE_CLR_0 - 217 + omap->irq0_offset, value); 218 + } 219 + 208 220 static void dwc3_omap_set_mailbox(struct dwc3_omap *omap, 209 221 enum omap_dwc3_vbus_id_status status) 210 222 { ··· 357 345 358 346 static void dwc3_omap_disable_irqs(struct dwc3_omap *omap) 359 347 { 348 + u32 reg; 349 + 360 350 /* disable all IRQs */ 361 - dwc3_omap_write_irqmisc_set(omap, 0x00); 362 - dwc3_omap_write_irq0_set(omap, 0x00); 351 + reg = USBOTGSS_IRQO_COREIRQ_ST; 352 + dwc3_omap_write_irq0_clr(omap, reg); 353 + 354 + reg = (USBOTGSS_IRQMISC_OEVT | 355 + USBOTGSS_IRQMISC_DRVVBUS_RISE | 356 + USBOTGSS_IRQMISC_CHRGVBUS_RISE | 357 + USBOTGSS_IRQMISC_DISCHRGVBUS_RISE | 358 + USBOTGSS_IRQMISC_IDPULLUP_RISE | 359 + USBOTGSS_IRQMISC_DRVVBUS_FALL | 360 + USBOTGSS_IRQMISC_CHRGVBUS_FALL | 361 + USBOTGSS_IRQMISC_DISCHRGVBUS_FALL | 362 + USBOTGSS_IRQMISC_IDPULLUP_FALL); 363 + 364 + dwc3_omap_write_irqmisc_clr(omap, reg); 363 365 } 364 366 365 367 static u64 dwc3_omap_dma_mask = DMA_BIT_MASK(32);
-2
drivers/usb/gadget/configfs.c
··· 1161 1161 if (desc->opts_mutex) 1162 1162 mutex_lock(desc->opts_mutex); 1163 1163 memcpy(desc->ext_compat_id, page, l); 1164 - desc->ext_compat_id[l] = '\0'; 1165 1164 1166 1165 if (desc->opts_mutex) 1167 1166 mutex_unlock(desc->opts_mutex); ··· 1191 1192 if (desc->opts_mutex) 1192 1193 mutex_lock(desc->opts_mutex); 1193 1194 memcpy(desc->ext_compat_id + 8, page, l); 1194 - desc->ext_compat_id[l + 8] = '\0'; 1195 1195 1196 1196 if (desc->opts_mutex) 1197 1197 mutex_unlock(desc->opts_mutex);
+1 -1
drivers/usb/gadget/function/f_hid.c
··· 569 569 return status; 570 570 } 571 571 572 - const struct file_operations f_hidg_fops = { 572 + static const struct file_operations f_hidg_fops = { 573 573 .owner = THIS_MODULE, 574 574 .open = f_hidg_open, 575 575 .release = f_hidg_release,
+4 -1
drivers/usb/gadget/function/f_phonet.c
··· 417 417 return -EINVAL; 418 418 419 419 spin_lock(&port->lock); 420 - __pn_reset(f); 420 + 421 + if (fp->in_ep->driver_data) 422 + __pn_reset(f); 423 + 421 424 if (alt == 1) { 422 425 int i; 423 426
+2 -2
drivers/usb/gadget/function/f_sourcesink.c
··· 344 344 .bInterval = USB_MS_TO_SS_INTERVAL(GZERO_INT_INTERVAL), 345 345 }; 346 346 347 - struct usb_ss_ep_comp_descriptor ss_int_source_comp_desc = { 347 + static struct usb_ss_ep_comp_descriptor ss_int_source_comp_desc = { 348 348 .bLength = USB_DT_SS_EP_COMP_SIZE, 349 349 .bDescriptorType = USB_DT_SS_ENDPOINT_COMP, 350 350 ··· 362 362 .bInterval = USB_MS_TO_SS_INTERVAL(GZERO_INT_INTERVAL), 363 363 }; 364 364 365 - struct usb_ss_ep_comp_descriptor ss_int_sink_comp_desc = { 365 + static struct usb_ss_ep_comp_descriptor ss_int_sink_comp_desc = { 366 366 .bLength = USB_DT_SS_EP_COMP_SIZE, 367 367 .bDescriptorType = USB_DT_SS_ENDPOINT_COMP, 368 368
+17 -17
drivers/usb/gadget/function/f_uac2.c
··· 54 54 #define UNFLW_CTRL 8 55 55 #define OVFLW_CTRL 10 56 56 57 - const char *uac2_name = "snd_uac2"; 57 + static const char *uac2_name = "snd_uac2"; 58 58 59 59 struct uac2_req { 60 60 struct uac2_rtd_params *pp; /* parent param */ ··· 634 634 }; 635 635 636 636 /* Clock source for IN traffic */ 637 - struct uac_clock_source_descriptor in_clk_src_desc = { 637 + static struct uac_clock_source_descriptor in_clk_src_desc = { 638 638 .bLength = sizeof in_clk_src_desc, 639 639 .bDescriptorType = USB_DT_CS_INTERFACE, 640 640 ··· 646 646 }; 647 647 648 648 /* Clock source for OUT traffic */ 649 - struct uac_clock_source_descriptor out_clk_src_desc = { 649 + static struct uac_clock_source_descriptor out_clk_src_desc = { 650 650 .bLength = sizeof out_clk_src_desc, 651 651 .bDescriptorType = USB_DT_CS_INTERFACE, 652 652 ··· 658 658 }; 659 659 660 660 /* Input Terminal for USB_OUT */ 661 - struct uac2_input_terminal_descriptor usb_out_it_desc = { 661 + static struct uac2_input_terminal_descriptor usb_out_it_desc = { 662 662 .bLength = sizeof usb_out_it_desc, 663 663 .bDescriptorType = USB_DT_CS_INTERFACE, 664 664 ··· 672 672 }; 673 673 674 674 /* Input Terminal for I/O-In */ 675 - struct uac2_input_terminal_descriptor io_in_it_desc = { 675 + static struct uac2_input_terminal_descriptor io_in_it_desc = { 676 676 .bLength = sizeof io_in_it_desc, 677 677 .bDescriptorType = USB_DT_CS_INTERFACE, 678 678 ··· 686 686 }; 687 687 688 688 /* Output Terminal for USB_IN */ 689 - struct uac2_output_terminal_descriptor usb_in_ot_desc = { 689 + static struct uac2_output_terminal_descriptor usb_in_ot_desc = { 690 690 .bLength = sizeof usb_in_ot_desc, 691 691 .bDescriptorType = USB_DT_CS_INTERFACE, 692 692 ··· 700 700 }; 701 701 702 702 /* Output Terminal for I/O-Out */ 703 - struct uac2_output_terminal_descriptor io_out_ot_desc = { 703 + static struct uac2_output_terminal_descriptor io_out_ot_desc = { 704 704 .bLength = sizeof io_out_ot_desc, 705 705 .bDescriptorType = USB_DT_CS_INTERFACE, 706 706 ··· 713 713 .bmControls = (CONTROL_RDWR << COPY_CTRL), 714 714 }; 715 715 716 - struct uac2_ac_header_descriptor ac_hdr_desc = { 716 + static struct uac2_ac_header_descriptor ac_hdr_desc = { 717 717 .bLength = sizeof ac_hdr_desc, 718 718 .bDescriptorType = USB_DT_CS_INTERFACE, 719 719 ··· 751 751 }; 752 752 753 753 /* Audio Stream OUT Intface Desc */ 754 - struct uac2_as_header_descriptor as_out_hdr_desc = { 754 + static struct uac2_as_header_descriptor as_out_hdr_desc = { 755 755 .bLength = sizeof as_out_hdr_desc, 756 756 .bDescriptorType = USB_DT_CS_INTERFACE, 757 757 ··· 764 764 }; 765 765 766 766 /* Audio USB_OUT Format */ 767 - struct uac2_format_type_i_descriptor as_out_fmt1_desc = { 767 + static struct uac2_format_type_i_descriptor as_out_fmt1_desc = { 768 768 .bLength = sizeof as_out_fmt1_desc, 769 769 .bDescriptorType = USB_DT_CS_INTERFACE, 770 770 .bDescriptorSubtype = UAC_FORMAT_TYPE, ··· 772 772 }; 773 773 774 774 /* STD AS ISO OUT Endpoint */ 775 - struct usb_endpoint_descriptor fs_epout_desc = { 775 + static struct usb_endpoint_descriptor fs_epout_desc = { 776 776 .bLength = USB_DT_ENDPOINT_SIZE, 777 777 .bDescriptorType = USB_DT_ENDPOINT, 778 778 ··· 782 782 .bInterval = 1, 783 783 }; 784 784 785 - struct usb_endpoint_descriptor hs_epout_desc = { 785 + static struct usb_endpoint_descriptor hs_epout_desc = { 786 786 .bLength = USB_DT_ENDPOINT_SIZE, 787 787 .bDescriptorType = USB_DT_ENDPOINT, 788 788 ··· 828 828 }; 829 829 830 830 /* Audio Stream IN Intface Desc */ 831 - struct uac2_as_header_descriptor as_in_hdr_desc = { 831 + static struct uac2_as_header_descriptor as_in_hdr_desc = { 832 832 .bLength = sizeof as_in_hdr_desc, 833 833 .bDescriptorType = USB_DT_CS_INTERFACE, 834 834 ··· 841 841 }; 842 842 843 843 /* Audio USB_IN Format */ 844 - struct uac2_format_type_i_descriptor as_in_fmt1_desc = { 844 + static struct uac2_format_type_i_descriptor as_in_fmt1_desc = { 845 845 .bLength = sizeof as_in_fmt1_desc, 846 846 .bDescriptorType = USB_DT_CS_INTERFACE, 847 847 .bDescriptorSubtype = UAC_FORMAT_TYPE, ··· 849 849 }; 850 850 851 851 /* STD AS ISO IN Endpoint */ 852 - struct usb_endpoint_descriptor fs_epin_desc = { 852 + static struct usb_endpoint_descriptor fs_epin_desc = { 853 853 .bLength = USB_DT_ENDPOINT_SIZE, 854 854 .bDescriptorType = USB_DT_ENDPOINT, 855 855 ··· 859 859 .bInterval = 1, 860 860 }; 861 861 862 - struct usb_endpoint_descriptor hs_epin_desc = { 862 + static struct usb_endpoint_descriptor hs_epin_desc = { 863 863 .bLength = USB_DT_ENDPOINT_SIZE, 864 864 .bDescriptorType = USB_DT_ENDPOINT, 865 865 ··· 1563 1563 agdev->out_ep->driver_data = NULL; 1564 1564 } 1565 1565 1566 - struct usb_function *afunc_alloc(struct usb_function_instance *fi) 1566 + static struct usb_function *afunc_alloc(struct usb_function_instance *fi) 1567 1567 { 1568 1568 struct audio_dev *agdev; 1569 1569 struct f_uac2_opts *opts;
+1
drivers/usb/gadget/function/uvc_v4l2.c
··· 27 27 #include "uvc.h" 28 28 #include "uvc_queue.h" 29 29 #include "uvc_video.h" 30 + #include "uvc_v4l2.h" 30 31 31 32 /* -------------------------------------------------------------------------- 32 33 * Requests handling
+1
drivers/usb/gadget/function/uvc_video.c
··· 21 21 22 22 #include "uvc.h" 23 23 #include "uvc_queue.h" 24 + #include "uvc_video.h" 24 25 25 26 /* -------------------------------------------------------------------------- 26 27 * Video codecs
+4 -2
drivers/usb/gadget/legacy/g_ffs.c
··· 133 133 struct usb_configuration c; 134 134 int (*eth)(struct usb_configuration *c); 135 135 int num; 136 - } gfs_configurations[] = { 136 + }; 137 + 138 + static struct gfs_configuration gfs_configurations[] = { 137 139 #ifdef CONFIG_USB_FUNCTIONFS_RNDIS 138 140 { 139 141 .eth = bind_rndis_config, ··· 280 278 if (!try_module_get(THIS_MODULE)) 281 279 return ERR_PTR(-ENOENT); 282 280 283 - return 0; 281 + return NULL; 284 282 } 285 283 286 284 static void functionfs_release_dev(struct ffs_dev *dev)
+30
drivers/usb/host/xhci-pci.c
··· 37 37 38 38 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 39 39 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 40 + #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI 0x22b5 41 + #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI 0xa12f 42 + #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI 0x9d2f 40 43 41 44 static const char hcd_name[] = "xhci_hcd"; 42 45 ··· 136 133 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) { 137 134 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 138 135 } 136 + if (pdev->vendor == PCI_VENDOR_ID_INTEL && 137 + (pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI || 138 + pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI || 139 + pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)) { 140 + xhci->quirks |= XHCI_PME_STUCK_QUIRK; 141 + } 139 142 if (pdev->vendor == PCI_VENDOR_ID_ETRON && 140 143 pdev->device == PCI_DEVICE_ID_EJ168) { 141 144 xhci->quirks |= XHCI_RESET_ON_RESUME; ··· 166 157 if (xhci->quirks & XHCI_RESET_ON_RESUME) 167 158 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 168 159 "QUIRK: Resetting on resume"); 160 + } 161 + 162 + /* 163 + * Make sure PME works on some Intel xHCI controllers by writing 1 to clear 164 + * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4 165 + */ 166 + static void xhci_pme_quirk(struct xhci_hcd *xhci) 167 + { 168 + u32 val; 169 + void __iomem *reg; 170 + 171 + reg = (void __iomem *) xhci->cap_regs + 0x80a4; 172 + val = readl(reg); 173 + writel(val | BIT(28), reg); 174 + readl(reg); 169 175 } 170 176 171 177 /* called during probe() after chip reset completes */ ··· 307 283 if (xhci->quirks & XHCI_COMP_MODE_QUIRK) 308 284 pdev->no_d3cold = true; 309 285 286 + if (xhci->quirks & XHCI_PME_STUCK_QUIRK) 287 + xhci_pme_quirk(xhci); 288 + 310 289 return xhci_suspend(xhci, do_wakeup); 311 290 } 312 291 ··· 339 312 340 313 if (pdev->vendor == PCI_VENDOR_ID_INTEL) 341 314 usb_enable_intel_xhci_ports(pdev); 315 + 316 + if (xhci->quirks & XHCI_PME_STUCK_QUIRK) 317 + xhci_pme_quirk(xhci); 342 318 343 319 retval = xhci_resume(xhci, hibernated); 344 320 return retval;
+9 -10
drivers/usb/host/xhci-plat.c
··· 83 83 if (irq < 0) 84 84 return -ENODEV; 85 85 86 - 87 - if (of_device_is_compatible(pdev->dev.of_node, 88 - "marvell,armada-375-xhci") || 89 - of_device_is_compatible(pdev->dev.of_node, 90 - "marvell,armada-380-xhci")) { 91 - ret = xhci_mvebu_mbus_init_quirk(pdev); 92 - if (ret) 93 - return ret; 94 - } 95 - 96 86 /* Initialize dma_mask and coherent_dma_mask to 32-bits */ 97 87 ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 98 88 if (ret) ··· 115 125 ret = clk_prepare_enable(clk); 116 126 if (ret) 117 127 goto put_hcd; 128 + } 129 + 130 + if (of_device_is_compatible(pdev->dev.of_node, 131 + "marvell,armada-375-xhci") || 132 + of_device_is_compatible(pdev->dev.of_node, 133 + "marvell,armada-380-xhci")) { 134 + ret = xhci_mvebu_mbus_init_quirk(pdev); 135 + if (ret) 136 + goto disable_clk; 118 137 } 119 138 120 139 ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+9 -3
drivers/usb/host/xhci-ring.c
··· 1729 1729 if (!command) 1730 1730 return; 1731 1731 1732 - ep->ep_state |= EP_HALTED; 1732 + ep->ep_state |= EP_HALTED | EP_RECENTLY_HALTED; 1733 1733 ep->stopped_stream = stream_id; 1734 1734 1735 1735 xhci_queue_reset_ep(xhci, command, slot_id, ep_index); ··· 1946 1946 if (event_trb != ep_ring->dequeue) { 1947 1947 /* The event was for the status stage */ 1948 1948 if (event_trb == td->last_trb) { 1949 - if (td->urb->actual_length != 0) { 1949 + if (td->urb_length_set) { 1950 1950 /* Don't overwrite a previously set error code 1951 1951 */ 1952 1952 if ((*status == -EINPROGRESS || *status == 0) && ··· 1960 1960 td->urb->transfer_buffer_length; 1961 1961 } 1962 1962 } else { 1963 - /* Maybe the event was for the data stage? */ 1963 + /* 1964 + * Maybe the event was for the data stage? If so, update 1965 + * already the actual_length of the URB and flag it as 1966 + * set, so that it is not overwritten in the event for 1967 + * the last TRB. 1968 + */ 1969 + td->urb_length_set = true; 1964 1970 td->urb->actual_length = 1965 1971 td->urb->transfer_buffer_length - 1966 1972 EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+91 -9
drivers/usb/host/xhci.c
··· 1338 1338 goto exit; 1339 1339 } 1340 1340 1341 + /* Reject urb if endpoint is in soft reset, queue must stay empty */ 1342 + if (xhci->devs[slot_id]->eps[ep_index].ep_state & EP_CONFIG_PENDING) { 1343 + xhci_warn(xhci, "Can't enqueue URB while ep is in soft reset\n"); 1344 + ret = -EINVAL; 1345 + } 1346 + 1341 1347 if (usb_endpoint_xfer_isoc(&urb->ep->desc)) 1342 1348 size = urb->number_of_packets; 1343 1349 else ··· 2954 2948 } 2955 2949 2956 2950 /* Called after clearing a halted device. USB core should have sent the control 2951 * message to clear the device halt condition. The host side of the halt should 2952 * already be cleared with a reset endpoint command issued immediately when the 2953 * STALL tx event was received. 2954 */ 2955 2956 void xhci_endpoint_reset(struct usb_hcd *hcd, 2957 struct usb_host_endpoint *ep) 2958 { 2959 struct xhci_hcd *xhci; 2960 2961 + struct usb_device *udev; 2962 + struct xhci_virt_device *virt_dev; 2963 + struct xhci_virt_ep *virt_ep; 2964 + struct xhci_input_control_ctx *ctrl_ctx; 2965 + struct xhci_command *command; 2966 + unsigned int ep_index, ep_state; 2967 + unsigned long flags; 2968 + u32 ep_flag; 2969 2969 2970 2970 xhci = hcd_to_xhci(hcd); 2971 + udev = (struct usb_device *) ep->hcpriv; 2972 + if (!ep->hcpriv) 2973 + return; 2974 + virt_dev = xhci->devs[udev->slot_id]; 2975 + ep_index = xhci_get_endpoint_index(&ep->desc); 2976 + virt_ep = &virt_dev->eps[ep_index]; 2977 + ep_state = virt_ep->ep_state; 2971 2978 2972 2979 /* 2973 - * We might need to implement the config ep cmd in xhci 4.8.1 note: 2980 + * Implement the config ep command in xhci 4.6.8 additional note: 2974 2981 * The Reset Endpoint Command may only be issued to endpoints in the 2975 2982 * Halted state. If software wishes to reset the Data Toggle or Sequence 2976 2983 * Number of an endpoint that isn't in the Halted state, then software ··· 2991 2972 * for the target endpoint that is in the Stopped state. 2992 2973 */ 2993 2974 2994 - /* For now just print debug to follow the situation */ 2995 - xhci_dbg(xhci, "Endpoint 0x%x ep reset callback called\n", 2996 - ep->desc.bEndpointAddress); 2975 + if (ep_state & SET_DEQ_PENDING || ep_state & EP_RECENTLY_HALTED) { 2976 + virt_ep->ep_state &= ~EP_RECENTLY_HALTED; 2977 + xhci_dbg(xhci, "ep recently halted, no toggle reset needed\n"); 2978 + return; 2979 + } 2980 + 2981 + /* Only interrupt and bulk ep's use Data toggle, USB2 spec 5.5.4-> */ 2982 + if (usb_endpoint_xfer_control(&ep->desc) || 2983 + usb_endpoint_xfer_isoc(&ep->desc)) 2984 + return; 2985 + 2986 + ep_flag = xhci_get_endpoint_flag(&ep->desc); 2987 + 2988 + if (ep_flag == SLOT_FLAG || ep_flag == EP0_FLAG) 2989 + return; 2990 + 2991 + command = xhci_alloc_command(xhci, true, true, GFP_NOWAIT); 2992 + if (!command) { 2993 + xhci_err(xhci, "Could not allocate xHCI command structure.\n"); 2994 + return; 2995 + } 2996 + 2997 + spin_lock_irqsave(&xhci->lock, flags); 2998 + 2999 + /* block ringing ep doorbell */ 3000 + virt_ep->ep_state |= EP_CONFIG_PENDING; 3001 + 3002 + /* 3003 + * Make sure endpoint ring is empty before resetting the toggle/seq. 3004 + * Driver is required to synchronously cancel all transfer requests. 3005 + * 3006 + * xhci 4.6.6 says we can issue a configure endpoint command on a 3007 + * running endpoint ring as long as it's idle (queue empty) 3008 + */ 3009 + 3010 + if (!list_empty(&virt_ep->ring->td_list)) { 3011 + dev_err(&udev->dev, "EP not empty, refuse reset\n"); 3012 + spin_unlock_irqrestore(&xhci->lock, flags); 3013 + goto cleanup; 3014 + } 3015 + 3016 + xhci_dbg(xhci, "Reset toggle/seq for slot %d, ep_index: %d\n", 3017 + udev->slot_id, ep_index); 3018 + 3019 + ctrl_ctx = xhci_get_input_control_ctx(command->in_ctx); 3020 + if (!ctrl_ctx) { 3021 + xhci_err(xhci, "Could not get input context, bad type. virt_dev: %p, in_ctx %p\n", 3022 + virt_dev, virt_dev->in_ctx); 3023 + spin_unlock_irqrestore(&xhci->lock, flags); 3024 + goto cleanup; 3025 + } 3026 + xhci_setup_input_ctx_for_config_ep(xhci, command->in_ctx, 3027 + virt_dev->out_ctx, ctrl_ctx, 3028 + ep_flag, ep_flag); 3029 + xhci_endpoint_copy(xhci, command->in_ctx, virt_dev->out_ctx, ep_index); 3030 + 3031 + xhci_queue_configure_endpoint(xhci, command, command->in_ctx->dma, 3032 + udev->slot_id, false); 3033 + xhci_ring_cmd_db(xhci); 3034 + spin_unlock_irqrestore(&xhci->lock, flags); 3035 + 3036 + wait_for_completion(command->completion); 3037 + 3038 + cleanup: 3039 + virt_ep->ep_state &= ~EP_CONFIG_PENDING; 3040 + xhci_free_command(xhci, command); 2997 3041 } 2998 3042 2999 3043 static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+9 -2
drivers/usb/host/xhci.h
··· 1 + 1 2 /* 2 3 * xHCI host controller driver 3 4 * ··· 89 88 #define HCS_IST(p) (((p) >> 0) & 0xf) 90 89 /* bits 4:7, max number of Event Ring segments */ 91 90 #define HCS_ERST_MAX(p) (((p) >> 4) & 0xf) 91 + /* bits 21:25 Hi 5 bits of Scratchpad buffers SW must allocate for the HW */ 92 92 /* bit 26 Scratchpad restore - for save/restore HW state - not used yet */ 93 - /* bits 27:31 number of Scratchpad buffers SW must allocate for the HW */ 94 - #define HCS_MAX_SCRATCHPAD(p) (((p) >> 27) & 0x1f) 93 + /* bits 27:31 Lo 5 bits of Scratchpad buffers SW must allocate for the HW */ 94 + #define HCS_MAX_SCRATCHPAD(p) ((((p) >> 16) & 0x3e0) | (((p) >> 27) & 0x1f)) 95 95 96 96 /* HCSPARAMS3 - hcs_params3 - bitmasks */ 97 97 /* bits 0:7, Max U1 to U0 latency for the roothub ports */ ··· 865 863 #define EP_HAS_STREAMS (1 << 4) 866 864 /* Transitioning the endpoint to not using streams, don't enqueue URBs */ 867 865 #define EP_GETTING_NO_STREAMS (1 << 5) 866 + #define EP_RECENTLY_HALTED (1 << 6) 867 + #define EP_CONFIG_PENDING (1 << 7) 868 868 /* ---- Related to URB cancellation ---- */ 869 869 struct list_head cancelled_td_list; 870 870 struct xhci_td *stopped_td; ··· 1292 1288 struct xhci_segment *start_seg; 1293 1289 union xhci_trb *first_trb; 1294 1290 union xhci_trb *last_trb; 1291 + /* actual_length of the URB has already been set */ 1292 + bool urb_length_set; 1295 1293 }; 1296 1294 1297 1295 /* xHCI command default timeout value */ ··· 1566 1560 #define XHCI_SPURIOUS_WAKEUP (1 << 18) 1567 1561 /* For controllers with a broken beyond repair streams implementation */ 1568 1562 #define XHCI_BROKEN_STREAMS (1 << 19) 1563 + #define XHCI_PME_STUCK_QUIRK (1 << 20) 1569 1564 unsigned int num_active_eps; 1570 1565 unsigned int limit_active_eps; 1571 1566 /* There are two roothubs to keep track of bus suspend info for */
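The HCS_MAX_SCRATCHPAD fix above stitches the high five bits (21:25) and low five bits (27:31) of HCSPARAMS2 into a 10-bit scratchpad buffer count; the old macro silently dropped the high field on controllers that need more than 31 buffers. A standalone sketch of the bit surgery:

```c
#include <assert.h>
#include <stdint.h>

/* bits 21:25 hold the high 5 bits of the count, bits 27:31 the low 5 */
static uint32_t max_scratchpad(uint32_t hcs_params2)
{
	return ((hcs_params2 >> 16) & 0x3e0) | ((hcs_params2 >> 27) & 0x1f);
}
```

`>> 16` lands bit 21 at bit position 5, so the high field slots in directly above the low field (mask 0x3e0 covers bits 5:9).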
+3 -3
drivers/usb/isp1760/isp1760-hcd.c
··· 1274 1274 for (slot = 0; slot < 32; slot++) 1275 1275 if (priv->atl_slots[slot].qh && time_after(jiffies, 1276 1276 priv->atl_slots[slot].timestamp + 1277 - SLOT_TIMEOUT * HZ / 1000)) { 1277 + msecs_to_jiffies(SLOT_TIMEOUT))) { 1278 1278 ptd_read(hcd->regs, ATL_PTD_OFFSET, slot, &ptd); 1279 1279 if (!FROM_DW0_VALID(ptd.dw0) && 1280 1280 !FROM_DW3_ACTIVE(ptd.dw3)) ··· 1286 1286 1287 1287 spin_unlock_irqrestore(&priv->lock, spinflags); 1288 1288 1289 - errata2_timer.expires = jiffies + SLOT_CHECK_PERIOD * HZ / 1000; 1289 + errata2_timer.expires = jiffies + msecs_to_jiffies(SLOT_CHECK_PERIOD); 1290 1290 add_timer(&errata2_timer); 1291 1291 } 1292 1292 ··· 1336 1336 return retval; 1337 1337 1338 1338 setup_timer(&errata2_timer, errata2_function, (unsigned long)hcd); 1339 - errata2_timer.expires = jiffies + SLOT_CHECK_PERIOD * HZ / 1000; 1339 + errata2_timer.expires = jiffies + msecs_to_jiffies(SLOT_CHECK_PERIOD); 1340 1340 add_timer(&errata2_timer); 1341 1341 1342 1342 chipid = reg_read32(hcd->regs, HC_CHIP_ID_REG);
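The isp1760 hunk above swaps the open-coded `ms * HZ / 1000` for `msecs_to_jiffies()`. The open-coded form truncates toward zero, so a short timeout can collapse to zero jiffies on low-HZ configurations, while the helper rounds up. A sketch of the difference, assuming HZ=100 (`to_jiffies_sketch` approximates the helper's rounding behaviour, not its exact implementation):

```c
#include <assert.h>

#define HZ 100	/* assumed tick rate for the example */

static unsigned long naive_jiffies(unsigned int ms)
{
	return ms * HZ / 1000;		/* truncates: 5 ms -> 0 jiffies */
}

static unsigned long to_jiffies_sketch(unsigned int ms)
{
	return (ms * HZ + 999) / 1000;	/* rounds up: 5 ms -> 1 jiffy */
}
```

For the 300 ms SLOT_TIMEOUT both forms agree at HZ=100; the change matters for timeouts shorter than one tick, and it documents intent either way.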
+6 -4
drivers/usb/musb/musb_core.c
··· 1969 1969 goto fail0; 1970 1970 } 1971 1971 1972 - pm_runtime_use_autosuspend(musb->controller); 1973 - pm_runtime_set_autosuspend_delay(musb->controller, 200); 1974 - pm_runtime_enable(musb->controller); 1975 - 1976 1972 spin_lock_init(&musb->lock); 1977 1973 musb->board_set_power = plat->set_power; 1978 1974 musb->min_power = plat->min_power; ··· 1986 1990 musb_writew = musb_default_writew; 1987 1991 musb_readl = musb_default_readl; 1988 1992 musb_writel = musb_default_writel; 1993 + 1994 + /* We need musb_read/write functions initialized for PM */ 1995 + pm_runtime_use_autosuspend(musb->controller); 1996 + pm_runtime_set_autosuspend_delay(musb->controller, 200); 1997 + pm_runtime_irq_safe(musb->controller); 1998 + pm_runtime_enable(musb->controller); 1989 1999 1990 2000 /* The musb_platform_init() call: 1991 2001 * - adjusts musb->mregs
+29 -3
drivers/usb/musb/musb_dsps.c
··· 457 457 if (IS_ERR(musb->xceiv)) 458 458 return PTR_ERR(musb->xceiv); 459 459 460 + musb->phy = devm_phy_get(dev->parent, "usb2-phy"); 461 + 460 462 /* Returns zero if e.g. not clocked */ 461 463 rev = dsps_readl(reg_base, wrp->revision); 462 464 if (!rev) 463 465 return -ENODEV; 464 466 465 467 usb_phy_init(musb->xceiv); 468 + if (IS_ERR(musb->phy)) { 469 + musb->phy = NULL; 470 + } else { 471 + ret = phy_init(musb->phy); 472 + if (ret < 0) 473 + return ret; 474 + ret = phy_power_on(musb->phy); 475 + if (ret) { 476 + phy_exit(musb->phy); 477 + return ret; 478 + } 479 + } 480 + 466 481 setup_timer(&glue->timer, otg_timer, (unsigned long) musb); 467 482 468 483 /* Reset the musb */ ··· 517 502 518 503 del_timer_sync(&glue->timer); 519 504 usb_phy_shutdown(musb->xceiv); 505 + phy_power_off(musb->phy); 506 + phy_exit(musb->phy); 520 507 debugfs_remove_recursive(glue->dbgfs_root); 521 508 522 509 return 0; ··· 627 610 struct device *dev = musb->controller; 628 611 struct dsps_glue *glue = dev_get_drvdata(dev->parent); 629 612 const struct dsps_musb_wrapper *wrp = glue->wrp; 630 - int session_restart = 0; 613 + int session_restart = 0, error; 631 614 632 615 if (glue->sw_babble_enabled) 633 616 session_restart = sw_babble_control(musb); ··· 641 624 dsps_writel(musb->ctrl_base, wrp->control, (1 << wrp->reset)); 642 625 usleep_range(100, 200); 643 626 usb_phy_shutdown(musb->xceiv); 627 + error = phy_power_off(musb->phy); 628 + if (error) 629 + dev_err(dev, "phy shutdown failed: %i\n", error); 644 630 usleep_range(100, 200); 645 631 usb_phy_init(musb->xceiv); 632 + error = phy_power_on(musb->phy); 633 + if (error) 634 + dev_err(dev, "phy powerup failed: %i\n", error); 646 635 session_restart = 1; 647 636 } 648 637 ··· 710 687 struct musb_hdrc_config *config; 711 688 struct platform_device *musb; 712 689 struct device_node *dn = parent->dev.of_node; 713 - int ret; 690 + int ret, val; 714 691 715 692 memset(resources, 0, sizeof(resources)); 716 693 res = 
platform_get_resource_byname(parent, IORESOURCE_MEM, "mc"); ··· 762 739 pdata.mode = get_musb_port_mode(dev); 763 740 /* DT keeps this entry in mA, musb expects it as per USB spec */ 764 741 pdata.power = get_int_prop(dn, "mentor,power") / 2; 765 - config->multipoint = of_property_read_bool(dn, "mentor,multipoint"); 742 + 743 + ret = of_property_read_u32(dn, "mentor,multipoint", &val); 744 + if (!ret && val) 745 + config->multipoint = true; 766 746 767 747 ret = platform_device_add_data(musb, &pdata, sizeof(pdata)); 768 748 if (ret) {
+1 -1
drivers/usb/musb/musb_host.c
··· 2613 2613 .description = "musb-hcd", 2614 2614 .product_desc = "MUSB HDRC host driver", 2615 2615 .hcd_priv_size = sizeof(struct musb *), 2616 - .flags = HCD_USB2 | HCD_MEMORY, 2616 + .flags = HCD_USB2 | HCD_MEMORY | HCD_BH, 2617 2617 2618 2618 /* not using irq handler or reset hooks from usbcore, since 2619 2619 * those must be shared with peripheral code for OTG configs
+5 -2
drivers/usb/musb/omap2430.c
··· 516 516 struct omap2430_glue *glue; 517 517 struct device_node *np = pdev->dev.of_node; 518 518 struct musb_hdrc_config *config; 519 - int ret = -ENOMEM; 519 + int ret = -ENOMEM, val; 520 520 521 521 glue = devm_kzalloc(&pdev->dev, sizeof(*glue), GFP_KERNEL); 522 522 if (!glue) ··· 559 559 of_property_read_u32(np, "num-eps", (u32 *)&config->num_eps); 560 560 of_property_read_u32(np, "ram-bits", (u32 *)&config->ram_bits); 561 561 of_property_read_u32(np, "power", (u32 *)&pdata->power); 562 - config->multipoint = of_property_read_bool(np, "multipoint"); 562 + 563 + ret = of_property_read_u32(np, "multipoint", &val); 564 + if (!ret && val) 565 + config->multipoint = true; 563 566 564 567 pdata->board_data = data; 565 568 pdata->config = config;
+1
drivers/usb/renesas_usbhs/Kconfig
··· 6 6 tristate 'Renesas USBHS controller' 7 7 depends on USB_GADGET 8 8 depends on ARCH_SHMOBILE || SUPERH || COMPILE_TEST 9 + depends on EXTCON || !EXTCON # if EXTCON=m, USBHS cannot be built-in 9 10 default n 10 11 help 11 12 Renesas USBHS is a discrete USB host and peripheral controller chip
+20 -27
drivers/usb/serial/bus.c
··· 38 38 return 0; 39 39 } 40 40 41 - static ssize_t port_number_show(struct device *dev, 42 - struct device_attribute *attr, char *buf) 43 - { 44 - struct usb_serial_port *port = to_usb_serial_port(dev); 45 - 46 - return sprintf(buf, "%d\n", port->port_number); 47 - } 48 - static DEVICE_ATTR_RO(port_number); 49 - 50 41 static int usb_serial_device_probe(struct device *dev) 51 42 { 52 43 struct usb_serial_driver *driver; 53 44 struct usb_serial_port *port; 45 + struct device *tty_dev; 54 46 int retval = 0; 55 47 int minor; 56 48 57 49 port = to_usb_serial_port(dev); 58 - if (!port) { 59 - retval = -ENODEV; 60 - goto exit; 61 - } 50 + if (!port) 51 + return -ENODEV; 62 52 63 53 /* make sure suspend/resume doesn't race against port_probe */ 64 54 retval = usb_autopm_get_interface(port->serial->interface); 65 55 if (retval) 66 - goto exit; 56 + return retval; 67 57 68 58 driver = port->serial->type; 69 59 if (driver->port_probe) { 70 60 retval = driver->port_probe(port); 71 61 if (retval) 72 - goto exit_with_autopm; 73 - } 74 - 75 - retval = device_create_file(dev, &dev_attr_port_number); 76 - if (retval) { 77 - if (driver->port_remove) 78 - retval = driver->port_remove(port); 79 - goto exit_with_autopm; 62 + goto err_autopm_put; 80 63 } 81 64 82 65 minor = port->minor; 83 - tty_register_device(usb_serial_tty_driver, minor, dev); 66 + tty_dev = tty_register_device(usb_serial_tty_driver, minor, dev); 67 + if (IS_ERR(tty_dev)) { 68 + retval = PTR_ERR(tty_dev); 69 + goto err_port_remove; 70 + } 71 + 72 + usb_autopm_put_interface(port->serial->interface); 73 + 84 74 dev_info(&port->serial->dev->dev, 85 75 "%s converter now attached to ttyUSB%d\n", 86 76 driver->description, minor); 87 77 88 - exit_with_autopm: 78 + return 0; 79 + 80 + err_port_remove: 81 + if (driver->port_remove) 82 + driver->port_remove(port); 83 + err_autopm_put: 89 84 usb_autopm_put_interface(port->serial->interface); 90 - exit: 85 + 91 86 return retval; 92 87 } 93 88 ··· 108 113 109 114 minor = 
port->minor; 110 115 tty_unregister_device(usb_serial_tty_driver, minor); 111 - 112 - device_remove_file(&port->dev, &dev_attr_port_number); 113 116 114 117 driver = port->serial->type; 115 118 if (driver->port_remove)
+6 -9
drivers/usb/serial/ch341.c
··· 84 84 u8 line_status; /* active status of modem control inputs */ 85 85 }; 86 86 87 + static void ch341_set_termios(struct tty_struct *tty, 88 + struct usb_serial_port *port, 89 + struct ktermios *old_termios); 90 + 87 91 static int ch341_control_out(struct usb_device *dev, u8 request, 88 92 u16 value, u16 index) 89 93 { ··· 313 309 struct ch341_private *priv = usb_get_serial_port_data(port); 314 310 int r; 315 311 316 - priv->baud_rate = DEFAULT_BAUD_RATE; 317 - 318 312 r = ch341_configure(serial->dev, priv); 319 313 if (r) 320 314 goto out; 321 315 322 - r = ch341_set_handshake(serial->dev, priv->line_control); 323 - if (r) 324 - goto out; 325 - 326 - r = ch341_set_baudrate(serial->dev, priv); 327 - if (r) 328 - goto out; 316 + if (tty) 317 + ch341_set_termios(tty, port, NULL); 329 318 330 319 dev_dbg(&port->dev, "%s - submitting interrupt urb\n", __func__); 331 320 r = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL);
+2
drivers/usb/serial/console.c
··· 14 14 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 15 16 16 #include <linux/kernel.h> 17 + #include <linux/module.h> 17 18 #include <linux/slab.h> 18 19 #include <linux/tty.h> 19 20 #include <linux/console.h> ··· 145 144 init_ldsem(&tty->ldisc_sem); 146 145 INIT_LIST_HEAD(&tty->tty_files); 147 146 kref_get(&tty->driver->kref); 147 + __module_get(tty->driver->owner); 148 148 tty->ops = &usb_console_fake_tty_ops; 149 149 if (tty_init_termios(tty)) { 150 150 retval = -ENOMEM;
+2
drivers/usb/serial/cp210x.c
··· 147 147 { USB_DEVICE(0x166A, 0x0305) }, /* Clipsal C-5000CT2 C-Bus Spectrum Colour Touchscreen */ 148 148 { USB_DEVICE(0x166A, 0x0401) }, /* Clipsal L51xx C-Bus Architectural Dimmer */ 149 149 { USB_DEVICE(0x166A, 0x0101) }, /* Clipsal 5560884 C-Bus Multi-room Audio Matrix Switcher */ 150 + { USB_DEVICE(0x16C0, 0x09B0) }, /* Lunatico Seletek */ 151 + { USB_DEVICE(0x16C0, 0x09B1) }, /* Lunatico Seletek */ 150 152 { USB_DEVICE(0x16D6, 0x0001) }, /* Jablotron serial interface */ 151 153 { USB_DEVICE(0x16DC, 0x0010) }, /* W-IE-NE-R Plein & Baus GmbH PL512 Power Supply */ 152 154 { USB_DEVICE(0x16DC, 0x0011) }, /* W-IE-NE-R Plein & Baus GmbH RCM Remote Control for MARATON Power Supply */
+19
drivers/usb/serial/ftdi_sio.c
··· 799 799 { USB_DEVICE(FTDI_VID, FTDI_ELSTER_UNICOM_PID) }, 800 800 { USB_DEVICE(FTDI_VID, FTDI_PROPOX_JTAGCABLEII_PID) }, 801 801 { USB_DEVICE(FTDI_VID, FTDI_PROPOX_ISPCABLEIII_PID) }, 802 + { USB_DEVICE(FTDI_VID, CYBER_CORTEX_AV_PID), 803 + .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 802 804 { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_PID), 803 805 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 804 806 { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_H_PID), ··· 980 978 { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, 981 979 /* GE Healthcare devices */ 982 980 { USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) }, 981 + /* Active Research (Actisense) devices */ 982 + { USB_DEVICE(FTDI_VID, ACTISENSE_NDC_PID) }, 983 + { USB_DEVICE(FTDI_VID, ACTISENSE_USG_PID) }, 984 + { USB_DEVICE(FTDI_VID, ACTISENSE_NGT_PID) }, 985 + { USB_DEVICE(FTDI_VID, ACTISENSE_NGW_PID) }, 986 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AC_PID) }, 987 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AD_PID) }, 988 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AE_PID) }, 989 + { USB_DEVICE(FTDI_VID, ACTISENSE_D9AF_PID) }, 990 + { USB_DEVICE(FTDI_VID, CHETCO_SEAGAUGE_PID) }, 991 + { USB_DEVICE(FTDI_VID, CHETCO_SEASWITCH_PID) }, 992 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_NMEA2000_PID) }, 993 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ETHERNET_PID) }, 994 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_WIFI_PID) }, 995 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_DISPLAY_PID) }, 996 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_LITE_PID) }, 997 + { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ANALOG_PID) }, 983 998 { } /* Terminating entry */ 984 999 }; 985 1000
+23
drivers/usb/serial/ftdi_sio_ids.h
··· 38 38 39 39 #define FTDI_LUMEL_PD12_PID 0x6002 40 40 41 + /* Cyber Cortex AV by Fabulous Silicon (http://fabuloussilicon.com) */ 42 + #define CYBER_CORTEX_AV_PID 0x8698 43 + 41 44 /* 42 45 * Marvell OpenRD Base, Client 43 46 * http://www.open-rd.org ··· 1441 1438 */ 1442 1439 #define GE_HEALTHCARE_VID 0x1901 1443 1440 #define GE_HEALTHCARE_NEMO_TRACKER_PID 0x0015 1441 + 1442 + /* 1443 + * Active Research (Actisense) devices 1444 + */ 1445 + #define ACTISENSE_NDC_PID 0xD9A8 /* NDC USB Serial Adapter */ 1446 + #define ACTISENSE_USG_PID 0xD9A9 /* USG USB Serial Adapter */ 1447 + #define ACTISENSE_NGT_PID 0xD9AA /* NGT NMEA2000 Interface */ 1448 + #define ACTISENSE_NGW_PID 0xD9AB /* NGW NMEA2000 Gateway */ 1449 + #define ACTISENSE_D9AC_PID 0xD9AC /* Actisense Reserved */ 1450 + #define ACTISENSE_D9AD_PID 0xD9AD /* Actisense Reserved */ 1451 + #define ACTISENSE_D9AE_PID 0xD9AE /* Actisense Reserved */ 1452 + #define ACTISENSE_D9AF_PID 0xD9AF /* Actisense Reserved */ 1453 + #define CHETCO_SEAGAUGE_PID 0xA548 /* SeaGauge USB Adapter */ 1454 + #define CHETCO_SEASWITCH_PID 0xA549 /* SeaSwitch USB Adapter */ 1455 + #define CHETCO_SEASMART_NMEA2000_PID 0xA54A /* SeaSmart NMEA2000 Gateway */ 1456 + #define CHETCO_SEASMART_ETHERNET_PID 0xA54B /* SeaSmart Ethernet Gateway */ 1457 + #define CHETCO_SEASMART_WIFI_PID 0xA5AC /* SeaSmart Wifi Gateway */ 1458 + #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */ 1459 + #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */ 1460 + #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */
+3 -2
drivers/usb/serial/generic.c
··· 258 258 * character or at least one jiffy. 259 259 */ 260 260 period = max_t(unsigned long, (10 * HZ / bps), 1); 261 - period = min_t(unsigned long, period, timeout); 261 + if (timeout) 262 + period = min_t(unsigned long, period, timeout); 262 263 263 264 dev_dbg(&port->dev, "%s - timeout = %u ms, period = %u ms\n", 264 265 __func__, jiffies_to_msecs(timeout), ··· 269 268 schedule_timeout_interruptible(period); 270 269 if (signal_pending(current)) 271 270 break; 272 - if (time_after(jiffies, expire)) 271 + if (timeout && time_after(jiffies, expire)) 273 272 break; 274 273 } 275 274 }
+2 -1
drivers/usb/serial/mxuport.c
··· 1284 1284 } 1285 1285 1286 1286 /* Initial port termios */ 1287 - mxuport_set_termios(tty, port, NULL); 1287 + if (tty) 1288 + mxuport_set_termios(tty, port, NULL); 1288 1289 1289 1290 /* 1290 1291 * TODO: use RQ_VENDOR_GET_MSR, once we know what it
+13 -5
drivers/usb/serial/pl2303.c
··· 132 132 #define UART_OVERRUN_ERROR 0x40 133 133 #define UART_CTS 0x80 134 134 135 + static void pl2303_set_break(struct usb_serial_port *port, bool enable); 135 136 136 137 enum pl2303_type { 137 138 TYPE_01, /* Type 0 and 1 (difference unknown) */ ··· 616 615 { 617 616 usb_serial_generic_close(port); 618 617 usb_kill_urb(port->interrupt_in_urb); 618 + pl2303_set_break(port, false); 619 619 } 620 620 621 621 static int pl2303_open(struct tty_struct *tty, struct usb_serial_port *port) ··· 743 741 return -ENOIOCTLCMD; 744 742 } 745 743 746 - static void pl2303_break_ctl(struct tty_struct *tty, int break_state) 744 + static void pl2303_set_break(struct usb_serial_port *port, bool enable) 747 745 { 748 - struct usb_serial_port *port = tty->driver_data; 749 746 struct usb_serial *serial = port->serial; 750 747 u16 state; 751 748 int result; 752 749 753 - if (break_state == 0) 754 - state = BREAK_OFF; 755 - else 750 + if (enable) 756 751 state = BREAK_ON; 752 + else 753 + state = BREAK_OFF; 757 754 758 755 dev_dbg(&port->dev, "%s - turning break %s\n", __func__, 759 756 state == BREAK_OFF ? "off" : "on"); ··· 762 761 0, NULL, 0, 100); 763 762 if (result) 764 763 dev_err(&port->dev, "error sending break = %d\n", result); 764 + } 765 + 766 + static void pl2303_break_ctl(struct tty_struct *tty, int state) 767 + { 768 + struct usb_serial_port *port = tty->driver_data; 769 + 770 + pl2303_set_break(port, state); 765 771 } 766 772 767 773 static void pl2303_update_line_status(struct usb_serial_port *port,
+19 -2
drivers/usb/serial/usb-serial.c
··· 687 687 drv->dtr_rts(p, on); 688 688 } 689 689 690 + static ssize_t port_number_show(struct device *dev, 691 + struct device_attribute *attr, char *buf) 692 + { 693 + struct usb_serial_port *port = to_usb_serial_port(dev); 694 + 695 + return sprintf(buf, "%u\n", port->port_number); 696 + } 697 + static DEVICE_ATTR_RO(port_number); 698 + 699 + static struct attribute *usb_serial_port_attrs[] = { 700 + &dev_attr_port_number.attr, 701 + NULL 702 + }; 703 + ATTRIBUTE_GROUPS(usb_serial_port); 704 + 690 705 static const struct tty_port_operations serial_port_ops = { 691 706 .carrier_raised = serial_port_carrier_raised, 692 707 .dtr_rts = serial_port_dtr_rts, ··· 917 902 port->dev.driver = NULL; 918 903 port->dev.bus = &usb_serial_bus_type; 919 904 port->dev.release = &usb_serial_port_release; 905 + port->dev.groups = usb_serial_port_groups; 920 906 device_initialize(&port->dev); 921 907 } 922 908 ··· 956 940 port = serial->port[i]; 957 941 if (kfifo_alloc(&port->write_fifo, PAGE_SIZE, GFP_KERNEL)) 958 942 goto probe_error; 959 - buffer_size = max_t(int, serial->type->bulk_out_size, 960 - usb_endpoint_maxp(endpoint)); 943 + buffer_size = serial->type->bulk_out_size; 944 + if (!buffer_size) 945 + buffer_size = usb_endpoint_maxp(endpoint); 961 946 port->bulk_out_size = buffer_size; 962 947 port->bulk_out_endpointAddress = endpoint->bEndpointAddress; 963 948
+7
drivers/usb/storage/unusual_uas.h
··· 113 113 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 114 114 US_FL_NO_ATA_1X), 115 115 116 + /* Reported-by: Tom Arild Naess <tanaess@gmail.com> */ 117 + UNUSUAL_DEV(0x152d, 0x0539, 0x0000, 0x9999, 118 + "JMicron", 119 + "JMS539", 120 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 121 + US_FL_NO_REPORT_OPCODES), 122 + 116 123 /* Reported-by: Claudio Bizzarri <claudio.bizzarri@gmail.com> */ 117 124 UNUSUAL_DEV(0x152d, 0x0567, 0x0000, 0x9999, 118 125 "JMicron",
+6
drivers/usb/storage/usb.c
··· 889 889 !(us->fflags & US_FL_SCM_MULT_TARG)) { 890 890 mutex_lock(&us->dev_mutex); 891 891 us->max_lun = usb_stor_Bulk_max_lun(us); 892 + /* 893 + * Allow proper scanning of devices that present more than 8 LUNs 894 + * While not affecting other devices that may need the previous behavior 895 + */ 896 + if (us->max_lun >= 8) 897 + us_to_host(us)->max_lun = us->max_lun+1; 892 898 mutex_unlock(&us->dev_mutex); 893 899 } 894 900 scsi_scan_host(us_to_host(us));
+3
drivers/video/fbdev/amba-clcd.c
··· 599 599 600 600 len = clcdfb_snprintf_mode(NULL, 0, mode); 601 601 name = devm_kzalloc(dev, len + 1, GFP_KERNEL); 602 + if (!name) 603 + return -ENOMEM; 604 + 602 605 clcdfb_snprintf_mode(name, len + 1, mode); 603 606 mode->name = name; 604 607
+3 -3
drivers/video/fbdev/core/fbmon.c
··· 624 624 int num = 0, i, first = 1; 625 625 int ver, rev; 626 626 627 - ver = edid[EDID_STRUCT_VERSION]; 628 - rev = edid[EDID_STRUCT_REVISION]; 629 - 630 627 mode = kzalloc(50 * sizeof(struct fb_videomode), GFP_KERNEL); 631 628 if (mode == NULL) 632 629 return NULL; ··· 633 636 kfree(mode); 634 637 return NULL; 635 638 } 639 + 640 + ver = edid[EDID_STRUCT_VERSION]; 641 + rev = edid[EDID_STRUCT_REVISION]; 636 642 637 643 *dbsize = 0; 638 644
+95 -84
drivers/video/fbdev/omap2/dss/display-sysfs.c
··· 28 28 #include <video/omapdss.h> 29 29 #include "dss.h" 30 30 31 - static struct omap_dss_device *to_dss_device_sysfs(struct device *dev) 31 + static ssize_t display_name_show(struct omap_dss_device *dssdev, char *buf) 32 32 { 33 - struct omap_dss_device *dssdev = NULL; 34 - 35 - for_each_dss_dev(dssdev) { 36 - if (dssdev->dev == dev) { 37 - omap_dss_put_device(dssdev); 38 - return dssdev; 39 - } 40 - } 41 - 42 - return NULL; 43 - } 44 - 45 - static ssize_t display_name_show(struct device *dev, 46 - struct device_attribute *attr, char *buf) 47 - { 48 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 49 - 50 33 return snprintf(buf, PAGE_SIZE, "%s\n", 51 34 dssdev->name ? 52 35 dssdev->name : ""); 53 36 } 54 37 55 - static ssize_t display_enabled_show(struct device *dev, 56 - struct device_attribute *attr, char *buf) 38 + static ssize_t display_enabled_show(struct omap_dss_device *dssdev, char *buf) 57 39 { 58 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 59 - 60 40 return snprintf(buf, PAGE_SIZE, "%d\n", 61 41 omapdss_device_is_enabled(dssdev)); 62 42 } 63 43 64 - static ssize_t display_enabled_store(struct device *dev, 65 - struct device_attribute *attr, 44 + static ssize_t display_enabled_store(struct omap_dss_device *dssdev, 66 45 const char *buf, size_t size) 67 46 { 68 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 69 47 int r; 70 48 bool enable; 71 49 ··· 68 90 return size; 69 91 } 70 92 71 - static ssize_t display_tear_show(struct device *dev, 72 - struct device_attribute *attr, char *buf) 93 + static ssize_t display_tear_show(struct omap_dss_device *dssdev, char *buf) 73 94 { 74 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 75 95 return snprintf(buf, PAGE_SIZE, "%d\n", 76 96 dssdev->driver->get_te ? 
77 97 dssdev->driver->get_te(dssdev) : 0); 78 98 } 79 99 80 - static ssize_t display_tear_store(struct device *dev, 81 - struct device_attribute *attr, const char *buf, size_t size) 100 + static ssize_t display_tear_store(struct omap_dss_device *dssdev, 101 + const char *buf, size_t size) 82 102 { 83 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 84 103 int r; 85 104 bool te; 86 105 ··· 95 120 return size; 96 121 } 97 122 98 - static ssize_t display_timings_show(struct device *dev, 99 - struct device_attribute *attr, char *buf) 123 + static ssize_t display_timings_show(struct omap_dss_device *dssdev, char *buf) 100 124 { 101 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 102 125 struct omap_video_timings t; 103 126 104 127 if (!dssdev->driver->get_timings) ··· 110 137 t.y_res, t.vfp, t.vbp, t.vsw); 111 138 } 112 139 113 - static ssize_t display_timings_store(struct device *dev, 114 - struct device_attribute *attr, const char *buf, size_t size) 140 + static ssize_t display_timings_store(struct omap_dss_device *dssdev, 141 + const char *buf, size_t size) 115 142 { 116 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 117 143 struct omap_video_timings t = dssdev->panel.timings; 118 144 int r, found; 119 145 ··· 148 176 return size; 149 177 } 150 178 151 - static ssize_t display_rotate_show(struct device *dev, 152 - struct device_attribute *attr, char *buf) 179 + static ssize_t display_rotate_show(struct omap_dss_device *dssdev, char *buf) 153 180 { 154 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 155 181 int rotate; 156 182 if (!dssdev->driver->get_rotate) 157 183 return -ENOENT; ··· 157 187 return snprintf(buf, PAGE_SIZE, "%u\n", rotate); 158 188 } 159 189 160 - static ssize_t display_rotate_store(struct device *dev, 161 - struct device_attribute *attr, const char *buf, size_t size) 190 + static ssize_t display_rotate_store(struct omap_dss_device *dssdev, 191 + const char *buf, size_t size) 162 192 { 163 - struct 
omap_dss_device *dssdev = to_dss_device_sysfs(dev); 164 193 int rot, r; 165 194 166 195 if (!dssdev->driver->set_rotate || !dssdev->driver->get_rotate) ··· 176 207 return size; 177 208 } 178 209 179 - static ssize_t display_mirror_show(struct device *dev, 180 - struct device_attribute *attr, char *buf) 210 + static ssize_t display_mirror_show(struct omap_dss_device *dssdev, char *buf) 181 211 { 182 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 183 212 int mirror; 184 213 if (!dssdev->driver->get_mirror) 185 214 return -ENOENT; ··· 185 218 return snprintf(buf, PAGE_SIZE, "%u\n", mirror); 186 219 } 187 220 188 - static ssize_t display_mirror_store(struct device *dev, 189 - struct device_attribute *attr, const char *buf, size_t size) 221 + static ssize_t display_mirror_store(struct omap_dss_device *dssdev, 222 + const char *buf, size_t size) 190 223 { 191 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 192 224 int r; 193 225 bool mirror; 194 226 ··· 205 239 return size; 206 240 } 207 241 208 - static ssize_t display_wss_show(struct device *dev, 209 - struct device_attribute *attr, char *buf) 242 + static ssize_t display_wss_show(struct omap_dss_device *dssdev, char *buf) 210 243 { 211 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 212 244 unsigned int wss; 213 245 214 246 if (!dssdev->driver->get_wss) ··· 217 253 return snprintf(buf, PAGE_SIZE, "0x%05x\n", wss); 218 254 } 219 255 220 - static ssize_t display_wss_store(struct device *dev, 221 - struct device_attribute *attr, const char *buf, size_t size) 256 + static ssize_t display_wss_store(struct omap_dss_device *dssdev, 257 + const char *buf, size_t size) 222 258 { 223 - struct omap_dss_device *dssdev = to_dss_device_sysfs(dev); 224 259 u32 wss; 225 260 int r; 226 261 ··· 240 277 return size; 241 278 } 242 279 243 - static DEVICE_ATTR(display_name, S_IRUGO, display_name_show, NULL); 244 - static DEVICE_ATTR(enabled, S_IRUGO|S_IWUSR, 280 + struct display_attribute { 281 + 
struct attribute attr; 282 + ssize_t (*show)(struct omap_dss_device *, char *); 283 + ssize_t (*store)(struct omap_dss_device *, const char *, size_t); 284 + }; 285 + 286 + #define DISPLAY_ATTR(_name, _mode, _show, _store) \ 287 + struct display_attribute display_attr_##_name = \ 288 + __ATTR(_name, _mode, _show, _store) 289 + 290 + static DISPLAY_ATTR(name, S_IRUGO, display_name_show, NULL); 291 + static DISPLAY_ATTR(display_name, S_IRUGO, display_name_show, NULL); 292 + static DISPLAY_ATTR(enabled, S_IRUGO|S_IWUSR, 245 293 display_enabled_show, display_enabled_store); 246 - static DEVICE_ATTR(tear_elim, S_IRUGO|S_IWUSR, 294 + static DISPLAY_ATTR(tear_elim, S_IRUGO|S_IWUSR, 247 295 display_tear_show, display_tear_store); 248 - static DEVICE_ATTR(timings, S_IRUGO|S_IWUSR, 296 + static DISPLAY_ATTR(timings, S_IRUGO|S_IWUSR, 249 297 display_timings_show, display_timings_store); 250 - static DEVICE_ATTR(rotate, S_IRUGO|S_IWUSR, 298 + static DISPLAY_ATTR(rotate, S_IRUGO|S_IWUSR, 251 299 display_rotate_show, display_rotate_store); 252 - static DEVICE_ATTR(mirror, S_IRUGO|S_IWUSR, 300 + static DISPLAY_ATTR(mirror, S_IRUGO|S_IWUSR, 253 301 display_mirror_show, display_mirror_store); 254 - static DEVICE_ATTR(wss, S_IRUGO|S_IWUSR, 302 + static DISPLAY_ATTR(wss, S_IRUGO|S_IWUSR, 255 303 display_wss_show, display_wss_store); 256 304 257 - static const struct attribute *display_sysfs_attrs[] = { 258 - &dev_attr_display_name.attr, 259 - &dev_attr_enabled.attr, 260 - &dev_attr_tear_elim.attr, 261 - &dev_attr_timings.attr, 262 - &dev_attr_rotate.attr, 263 - &dev_attr_mirror.attr, 264 - &dev_attr_wss.attr, 305 + static struct attribute *display_sysfs_attrs[] = { 306 + &display_attr_name.attr, 307 + &display_attr_display_name.attr, 308 + &display_attr_enabled.attr, 309 + &display_attr_tear_elim.attr, 310 + &display_attr_timings.attr, 311 + &display_attr_rotate.attr, 312 + &display_attr_mirror.attr, 313 + &display_attr_wss.attr, 265 314 NULL 315 + }; 316 + 317 + static ssize_t 
display_attr_show(struct kobject *kobj, struct attribute *attr, 318 + char *buf) 319 + { 320 + struct omap_dss_device *dssdev; 321 + struct display_attribute *display_attr; 322 + 323 + dssdev = container_of(kobj, struct omap_dss_device, kobj); 324 + display_attr = container_of(attr, struct display_attribute, attr); 325 + 326 + if (!display_attr->show) 327 + return -ENOENT; 328 + 329 + return display_attr->show(dssdev, buf); 330 + } 331 + 332 + static ssize_t display_attr_store(struct kobject *kobj, struct attribute *attr, 333 + const char *buf, size_t size) 334 + { 335 + struct omap_dss_device *dssdev; 336 + struct display_attribute *display_attr; 337 + 338 + dssdev = container_of(kobj, struct omap_dss_device, kobj); 339 + display_attr = container_of(attr, struct display_attribute, attr); 340 + 341 + if (!display_attr->store) 342 + return -ENOENT; 343 + 344 + return display_attr->store(dssdev, buf, size); 345 + } 346 + 347 + static const struct sysfs_ops display_sysfs_ops = { 348 + .show = display_attr_show, 349 + .store = display_attr_store, 350 + }; 351 + 352 + static struct kobj_type display_ktype = { 353 + .sysfs_ops = &display_sysfs_ops, 354 + .default_attrs = display_sysfs_attrs, 266 355 }; 267 356 268 357 int display_init_sysfs(struct platform_device *pdev) ··· 323 308 int r; 324 309 325 310 for_each_dss_dev(dssdev) { 326 - struct kobject *kobj = &dssdev->dev->kobj; 327 - 328 - r = sysfs_create_files(kobj, display_sysfs_attrs); 311 + r = kobject_init_and_add(&dssdev->kobj, &display_ktype, 312 + &pdev->dev.kobj, dssdev->alias); 329 313 if (r) { 330 314 DSSERR("failed to create sysfs files\n"); 331 - goto err; 332 - } 333 - 334 - r = sysfs_create_link(&pdev->dev.kobj, kobj, dssdev->alias); 335 - if (r) { 336 - sysfs_remove_files(kobj, display_sysfs_attrs); 337 - 338 - DSSERR("failed to create sysfs display link\n"); 315 + omap_dss_put_device(dssdev); 339 316 goto err; 340 317 } 341 318 } ··· 345 338 struct omap_dss_device *dssdev = NULL; 346 339 347 340 
for_each_dss_dev(dssdev) { 348 - sysfs_remove_link(&pdev->dev.kobj, dssdev->alias); 349 - sysfs_remove_files(&dssdev->dev->kobj, 350 - display_sysfs_attrs); 341 + if (kobject_name(&dssdev->kobj) == NULL) 342 + continue; 343 + 344 + kobject_del(&dssdev->kobj); 345 + kobject_put(&dssdev->kobj); 346 + 347 + memset(&dssdev->kobj, 0, sizeof(dssdev->kobj)); 351 348 } 352 349 }
+2 -1
drivers/watchdog/at91sam9_wdt.c
··· 208 208 209 209 if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) { 210 210 err = request_irq(wdt->irq, wdt_interrupt, 211 - IRQF_SHARED | IRQF_IRQPOLL, 211 + IRQF_SHARED | IRQF_IRQPOLL | 212 + IRQF_NO_SUSPEND, 212 213 pdev->name, wdt); 213 214 if (err) 214 215 return err;
+4 -4
fs/btrfs/ctree.c
··· 1645 1645 1646 1646 parent_nritems = btrfs_header_nritems(parent); 1647 1647 blocksize = root->nodesize; 1648 - end_slot = parent_nritems; 1648 + end_slot = parent_nritems - 1; 1649 1649 1650 - if (parent_nritems == 1) 1650 + if (parent_nritems <= 1) 1651 1651 return 0; 1652 1652 1653 1653 btrfs_set_lock_blocking(parent); 1654 1654 1655 - for (i = start_slot; i < end_slot; i++) { 1655 + for (i = start_slot; i <= end_slot; i++) { 1656 1656 int close = 1; 1657 1657 1658 1658 btrfs_node_key(parent, &disk_key, i); ··· 1669 1669 other = btrfs_node_blockptr(parent, i - 1); 1670 1670 close = close_blocks(blocknr, other, blocksize); 1671 1671 } 1672 - if (!close && i < end_slot - 2) { 1672 + if (!close && i < end_slot) { 1673 1673 other = btrfs_node_blockptr(parent, i + 1); 1674 1674 close = close_blocks(blocknr, other, blocksize); 1675 1675 }
+16
fs/btrfs/extent-tree.c
··· 3208 3208 return 0; 3209 3209 } 3210 3210 3211 + if (trans->aborted) 3212 + return 0; 3211 3213 again: 3212 3214 inode = lookup_free_space_inode(root, block_group, path); 3213 3215 if (IS_ERR(inode) && PTR_ERR(inode) != -ENOENT) { ··· 3245 3243 */ 3246 3244 BTRFS_I(inode)->generation = 0; 3247 3245 ret = btrfs_update_inode(trans, root, inode); 3246 + if (ret) { 3247 + /* 3248 + * So theoretically we could recover from this, simply set the 3249 + * super cache generation to 0 so we know to invalidate the 3250 + * cache, but then we'd have to keep track of the block groups 3251 + * that fail this way so we know we _have_ to reset this cache 3252 + * before the next commit or risk reading stale cache. So to 3253 + * limit our exposure to horrible edge cases lets just abort the 3254 + * transaction, this only happens in really bad situations 3255 + * anyway. 3256 + */ 3257 + btrfs_abort_transaction(trans, root, ret); 3258 + goto out_put; 3259 + } 3248 3260 WARN_ON(ret); 3249 3261 3250 3262 if (i_size_read(inode) > 0) {
+56 -31
fs/btrfs/file.c
···
 	mutex_unlock(&inode->i_mutex);

 	/*
-	 * we want to make sure fsync finds this change
-	 * but we haven't joined a transaction running right now.
-	 *
-	 * Later on, someone is sure to update the inode and get the
-	 * real transid recorded.
-	 *
-	 * We set last_trans now to the fs_info generation + 1,
-	 * this will either be one more than the running transaction
-	 * or the generation used for the next transaction if there isn't
-	 * one running right now.
-	 *
 	 * We also have to set last_sub_trans to the current log transid,
 	 * otherwise subsequent syncs to a file that's been synced in this
 	 * transaction will appear to have already occurred.
 	 */
-	BTRFS_I(inode)->last_trans = root->fs_info->generation + 1;
 	BTRFS_I(inode)->last_sub_trans = root->log_transid;
 	if (num_written > 0) {
 		err = generic_write_sync(file, pos, num_written);
···
 	atomic_inc(&root->log_batch);

 	/*
-	 * check the transaction that last modified this inode
-	 * and see if its already been committed
-	 */
-	if (!BTRFS_I(inode)->last_trans) {
-		mutex_unlock(&inode->i_mutex);
-		goto out;
-	}
-
-	/*
-	 * if the last transaction that changed this file was before
-	 * the current transaction, we can bail out now without any
-	 * syncing
+	 * If the last transaction that changed this file was before the current
+	 * transaction and we have the full sync flag set in our inode, we can
+	 * bail out now without any syncing.
+	 *
+	 * Note that we can't bail out if the full sync flag isn't set. This is
+	 * because when the full sync flag is set we start all ordered extents
+	 * and wait for them to fully complete - when they complete they update
+	 * the inode's last_trans field through:
+	 *
+	 *     btrfs_finish_ordered_io() ->
+	 *         btrfs_update_inode_fallback() ->
+	 *             btrfs_update_inode() ->
+	 *                 btrfs_set_inode_last_trans()
+	 *
+	 * So we are sure that last_trans is up to date and can do this check to
+	 * bail out safely. For the fast path, when the full sync flag is not
+	 * set in our inode, we can not do it because we start only our ordered
+	 * extents and don't wait for them to complete (that is when
+	 * btrfs_finish_ordered_io runs), so here at this point their last_trans
+	 * value might be less than or equal to fs_info->last_trans_committed,
+	 * and setting a speculative last_trans for an inode when a buffered
+	 * write is made (such as fs_info->generation + 1 for example) would not
+	 * be reliable since after setting the value and before fsync is called
+	 * any number of transactions can start and commit (transaction kthread
+	 * commits the current transaction periodically), and a transaction
+	 * commit does not start nor wait for ordered extents to complete.
 	 */
 	smp_mb();
 	if (btrfs_inode_in_log(inode, root->fs_info->generation) ||
-	    BTRFS_I(inode)->last_trans <=
-	    root->fs_info->last_trans_committed) {
-		BTRFS_I(inode)->last_trans = 0;
-
+	    (full_sync && BTRFS_I(inode)->last_trans <=
+	     root->fs_info->last_trans_committed)) {
 		/*
 		 * We've had everything committed since the last time we were
 		 * modified so clear this flag in case it was set for whatever
···
 	bool same_page;
 	bool no_holes = btrfs_fs_incompat(root->fs_info, NO_HOLES);
 	u64 ino_size;
+	bool truncated_page = false;
+	bool updated_inode = false;

 	ret = btrfs_wait_ordered_range(inode, offset, len);
 	if (ret)
···
 	 * entire page.
 	 */
 	if (same_page && len < PAGE_CACHE_SIZE) {
-		if (offset < ino_size)
+		if (offset < ino_size) {
+			truncated_page = true;
 			ret = btrfs_truncate_page(inode, offset, len, 0);
+		} else {
+			ret = 0;
+		}
 		goto out_only_mutex;
 	}

 	/* zero back part of the first page */
 	if (offset < ino_size) {
+		truncated_page = true;
 		ret = btrfs_truncate_page(inode, offset, 0, 0);
 		if (ret) {
 			mutex_unlock(&inode->i_mutex);
···
 	if (!ret) {
 		/* zero the front end of the last page */
 		if (tail_start + tail_len < ino_size) {
+			truncated_page = true;
 			ret = btrfs_truncate_page(inode,
 					tail_start + tail_len, 0, 1);
 			if (ret)
···
 	}

 	if (lockend < lockstart) {
-		mutex_unlock(&inode->i_mutex);
-		return 0;
+		ret = 0;
+		goto out_only_mutex;
 	}

 	while (1) {
···

 	trans->block_rsv = &root->fs_info->trans_block_rsv;
 	ret = btrfs_update_inode(trans, root, inode);
+	updated_inode = true;
 	btrfs_end_transaction(trans, root);
 	btrfs_btree_balance_dirty(root);
 out_free:
···
 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
 			     &cached_state, GFP_NOFS);
 out_only_mutex:
+	if (!updated_inode && truncated_page && !ret && !err) {
+		/*
+		 * If we only end up zeroing part of a page, we still need to
+		 * update the inode item, so that all the time fields are
+		 * updated as well as the necessary btrfs inode in memory fields
+		 * for detecting, at fsync time, if the inode isn't yet in the
+		 * log tree or it's there but not up to date.
+		 */
+		trans = btrfs_start_transaction(root, 1);
+		if (IS_ERR(trans)) {
+			err = PTR_ERR(trans);
+		} else {
+			err = btrfs_update_inode(trans, root, inode);
+			ret = btrfs_end_transaction(trans, root);
+		}
+	}
 	mutex_unlock(&inode->i_mutex);
 	if (ret && !err)
 		err = ret;
-1
fs/btrfs/inode.c
··· 7285 7285 ((BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW) && 7286 7286 em->block_start != EXTENT_MAP_HOLE)) { 7287 7287 int type; 7288 - int ret; 7289 7288 u64 block_start, orig_start, orig_block_len, ram_bytes; 7290 7289 7291 7290 if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+2 -5
fs/btrfs/ordered-data.c
··· 452 452 continue; 453 453 if (entry_end(ordered) <= start) 454 454 break; 455 - if (!list_empty(&ordered->log_list)) 456 - continue; 457 - if (test_bit(BTRFS_ORDERED_LOGGED, &ordered->flags)) 455 + if (test_and_set_bit(BTRFS_ORDERED_LOGGED, &ordered->flags)) 458 456 continue; 459 457 list_add(&ordered->log_list, logged_list); 460 458 atomic_inc(&ordered->refs); ··· 509 511 wait_event(ordered->wait, test_bit(BTRFS_ORDERED_IO_DONE, 510 512 &ordered->flags)); 511 513 512 - if (!test_and_set_bit(BTRFS_ORDERED_LOGGED, &ordered->flags)) 513 - list_add_tail(&ordered->trans_list, &trans->ordered); 514 + list_add_tail(&ordered->trans_list, &trans->ordered); 514 515 spin_lock_irq(&log->log_extents_lock[index]); 515 516 } 516 517 spin_unlock_irq(&log->log_extents_lock[index]);
+156 -15
fs/btrfs/send.c
···
 	u64 parent_ino;
 	u64 ino;
 	u64 gen;
+	bool is_orphan;
 	struct list_head update_refs;
 };
···
 			  u64 ino_gen,
 			  u64 parent_ino,
 			  struct list_head *new_refs,
-			  struct list_head *deleted_refs)
+			  struct list_head *deleted_refs,
+			  const bool is_orphan)
 {
 	struct rb_node **p = &sctx->pending_dir_moves.rb_node;
 	struct rb_node *parent = NULL;
···
 	pm->parent_ino = parent_ino;
 	pm->ino = ino;
 	pm->gen = ino_gen;
+	pm->is_orphan = is_orphan;
 	INIT_LIST_HEAD(&pm->list);
 	INIT_LIST_HEAD(&pm->update_refs);
 	RB_CLEAR_NODE(&pm->node);
···
 	rmdir_ino = dm->rmdir_ino;
 	free_waiting_dir_move(sctx, dm);

-	ret = get_first_ref(sctx->parent_root, pm->ino,
-			    &parent_ino, &parent_gen, name);
-	if (ret < 0)
-		goto out;
-
-	ret = get_cur_path(sctx, parent_ino, parent_gen,
-			   from_path);
-	if (ret < 0)
-		goto out;
-	ret = fs_path_add_path(from_path, name);
+	if (pm->is_orphan) {
+		ret = gen_unique_name(sctx, pm->ino,
+				      pm->gen, from_path);
+	} else {
+		ret = get_first_ref(sctx->parent_root, pm->ino,
+				    &parent_ino, &parent_gen, name);
+		if (ret < 0)
+			goto out;
+		ret = get_cur_path(sctx, parent_ino, parent_gen,
+				   from_path);
+		if (ret < 0)
+			goto out;
+		ret = fs_path_add_path(from_path, name);
+	}
 	if (ret < 0)
 		goto out;
···
 		LIST_HEAD(deleted_refs);
 		ASSERT(ancestor > BTRFS_FIRST_FREE_OBJECTID);
 		ret = add_pending_dir_move(sctx, pm->ino, pm->gen, ancestor,
-					   &pm->update_refs, &deleted_refs);
+					   &pm->update_refs, &deleted_refs,
+					   pm->is_orphan);
 		if (ret < 0)
 			goto out;
 		if (rmdir_ino) {
···
 	return ret;
 }

+/*
+ * We might need to delay a directory rename even when no ancestor directory
+ * (in the send root) with a higher inode number than ours (sctx->cur_ino) was
+ * renamed. This happens when we rename a directory to the old name (the name
+ * in the parent root) of some other unrelated directory that got its rename
+ * delayed due to some ancestor with higher number that got renamed.
+ *
+ * Example:
+ *
+ * Parent snapshot:
+ * .                          (ino 256)
+ * |---- a/                   (ino 257)
+ * |     |---- file           (ino 260)
+ * |
+ * |---- b/                   (ino 258)
+ * |---- c/                   (ino 259)
+ *
+ * Send snapshot:
+ * .                          (ino 256)
+ * |---- a/                   (ino 258)
+ * |---- x/                   (ino 259)
+ *       |---- y/             (ino 257)
+ *             |----- file    (ino 260)
+ *
+ * Here we can not rename 258 from 'b' to 'a' without the rename of inode 257
+ * from 'a' to 'x/y' happening first, which in turn depends on the rename of
+ * inode 259 from 'c' to 'x'. So the order of rename commands the send stream
+ * must issue is:
+ *
+ * 1 - rename 259 from 'c' to 'x'
+ * 2 - rename 257 from 'a' to 'x/y'
+ * 3 - rename 258 from 'b' to 'a'
+ *
+ * Returns 1 if the rename of sctx->cur_ino needs to be delayed, 0 if it can
+ * be done right away and < 0 on error.
+ */
+static int wait_for_dest_dir_move(struct send_ctx *sctx,
+				  struct recorded_ref *parent_ref,
+				  const bool is_orphan)
+{
+	struct btrfs_path *path;
+	struct btrfs_key key;
+	struct btrfs_key di_key;
+	struct btrfs_dir_item *di;
+	u64 left_gen;
+	u64 right_gen;
+	int ret = 0;
+
+	if (RB_EMPTY_ROOT(&sctx->waiting_dir_moves))
+		return 0;
+
+	path = alloc_path_for_send();
+	if (!path)
+		return -ENOMEM;
+
+	key.objectid = parent_ref->dir;
+	key.type = BTRFS_DIR_ITEM_KEY;
+	key.offset = btrfs_name_hash(parent_ref->name, parent_ref->name_len);
+
+	ret = btrfs_search_slot(NULL, sctx->parent_root, &key, path, 0, 0);
+	if (ret < 0) {
+		goto out;
+	} else if (ret > 0) {
+		ret = 0;
+		goto out;
+	}
+
+	di = btrfs_match_dir_item_name(sctx->parent_root, path,
+				       parent_ref->name, parent_ref->name_len);
+	if (!di) {
+		ret = 0;
+		goto out;
+	}
+	/*
+	 * di_key.objectid has the number of the inode that has a dentry in the
+	 * parent directory with the same name that sctx->cur_ino is being
+	 * renamed to. We need to check if that inode is in the send root as
+	 * well and if it is currently marked as an inode with a pending rename,
+	 * if it is, we need to delay the rename of sctx->cur_ino as well, so
+	 * that it happens after that other inode is renamed.
+	 */
+	btrfs_dir_item_key_to_cpu(path->nodes[0], di, &di_key);
+	if (di_key.type != BTRFS_INODE_ITEM_KEY) {
+		ret = 0;
+		goto out;
+	}
+
+	ret = get_inode_info(sctx->parent_root, di_key.objectid, NULL,
+			     &left_gen, NULL, NULL, NULL, NULL);
+	if (ret < 0)
+		goto out;
+	ret = get_inode_info(sctx->send_root, di_key.objectid, NULL,
+			     &right_gen, NULL, NULL, NULL, NULL);
+	if (ret < 0) {
+		if (ret == -ENOENT)
+			ret = 0;
+		goto out;
+	}
+
+	/* Different inode, no need to delay the rename of sctx->cur_ino */
+	if (right_gen != left_gen) {
+		ret = 0;
+		goto out;
+	}
+
+	if (is_waiting_for_move(sctx, di_key.objectid)) {
+		ret = add_pending_dir_move(sctx,
+					   sctx->cur_ino,
+					   sctx->cur_inode_gen,
+					   di_key.objectid,
+					   &sctx->new_refs,
+					   &sctx->deleted_refs,
+					   is_orphan);
+		if (!ret)
+			ret = 1;
+	}
+out:
+	btrfs_free_path(path);
+	return ret;
+}
+
 static int wait_for_parent_move(struct send_ctx *sctx,
 				struct recorded_ref *parent_ref)
 {
···
 					   sctx->cur_inode_gen,
 					   ino,
 					   &sctx->new_refs,
-					   &sctx->deleted_refs);
+					   &sctx->deleted_refs,
+					   false);
 		if (!ret)
 			ret = 1;
 	}
···
 	int did_overwrite = 0;
 	int is_orphan = 0;
 	u64 last_dir_ino_rm = 0;
+	bool can_rename = true;

 	verbose_printk("btrfs: process_recorded_refs %llu\n", sctx->cur_ino);
···
 			}
 		}

+		if (S_ISDIR(sctx->cur_inode_mode) && sctx->parent_root) {
+			ret = wait_for_dest_dir_move(sctx, cur, is_orphan);
+			if (ret < 0)
+				goto out;
+			if (ret == 1) {
+				can_rename = false;
+				*pending_move = 1;
+			}
+		}
+
 		/*
 		 * link/move the ref to the new place. If we have an orphan
 		 * inode, move it and update valid_path. If not, link or move
 		 * it depending on the inode mode.
 		 */
-		if (is_orphan) {
+		if (is_orphan && can_rename) {
 			ret = send_rename(sctx, valid_path, cur->full_path);
 			if (ret < 0)
 				goto out;
···
 			ret = fs_path_copy(valid_path, cur->full_path);
 			if (ret < 0)
 				goto out;
-		} else {
+		} else if (can_rename) {
 			if (S_ISDIR(sctx->cur_inode_mode)) {
 				/*
 				 * Dirs can't be linked, so move it. For moved
-3
fs/btrfs/transaction.c
··· 1052 1052 ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1053 1053 if (ret) 1054 1054 return ret; 1055 - ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1); 1056 - if (ret) 1057 - return ret; 1058 1055 } 1059 1056 1060 1057 return 0;
+1 -1
fs/btrfs/tree-log.c
··· 1012 1012 base = btrfs_item_ptr_offset(leaf, path->slots[0]); 1013 1013 1014 1014 while (cur_offset < item_size) { 1015 - extref = (struct btrfs_inode_extref *)base + cur_offset; 1015 + extref = (struct btrfs_inode_extref *)(base + cur_offset); 1016 1016 1017 1017 victim_name_len = btrfs_inode_extref_name_len(leaf, extref); 1018 1018
+6 -2
fs/btrfs/xattr.c
··· 111 111 name, name_len, -1); 112 112 if (!di && (flags & XATTR_REPLACE)) 113 113 ret = -ENODATA; 114 + else if (IS_ERR(di)) 115 + ret = PTR_ERR(di); 114 116 else if (di) 115 117 ret = btrfs_delete_one_dir_name(trans, root, path, di); 116 118 goto out; ··· 129 127 ASSERT(mutex_is_locked(&inode->i_mutex)); 130 128 di = btrfs_lookup_xattr(NULL, root, path, btrfs_ino(inode), 131 129 name, name_len, 0); 132 - if (!di) { 130 + if (!di) 133 131 ret = -ENODATA; 132 + else if (IS_ERR(di)) 133 + ret = PTR_ERR(di); 134 + if (ret) 134 135 goto out; 135 - } 136 136 btrfs_release_path(path); 137 137 di = NULL; 138 138 }
+2 -2
fs/ecryptfs/ecryptfs_kernel.h
··· 124 124 } 125 125 126 126 #define ECRYPTFS_MAX_KEYSET_SIZE 1024 127 - #define ECRYPTFS_MAX_CIPHER_NAME_SIZE 32 127 + #define ECRYPTFS_MAX_CIPHER_NAME_SIZE 31 128 128 #define ECRYPTFS_MAX_NUM_ENC_KEYS 64 129 129 #define ECRYPTFS_MAX_IV_BYTES 16 /* 128 bits */ 130 130 #define ECRYPTFS_SALT_BYTES 2 ··· 237 237 struct crypto_ablkcipher *tfm; 238 238 struct crypto_hash *hash_tfm; /* Crypto context for generating 239 239 * the initialization vectors */ 240 - unsigned char cipher[ECRYPTFS_MAX_CIPHER_NAME_SIZE]; 240 + unsigned char cipher[ECRYPTFS_MAX_CIPHER_NAME_SIZE + 1]; 241 241 unsigned char key[ECRYPTFS_MAX_KEY_BYTES]; 242 242 unsigned char root_iv[ECRYPTFS_MAX_IV_BYTES]; 243 243 struct list_head keysig_list;
+30 -4
fs/ecryptfs/file.c
··· 303 303 struct file *lower_file = ecryptfs_file_to_lower(file); 304 304 long rc = -ENOTTY; 305 305 306 - if (lower_file->f_op->unlocked_ioctl) 306 + if (!lower_file->f_op->unlocked_ioctl) 307 + return rc; 308 + 309 + switch (cmd) { 310 + case FITRIM: 311 + case FS_IOC_GETFLAGS: 312 + case FS_IOC_SETFLAGS: 313 + case FS_IOC_GETVERSION: 314 + case FS_IOC_SETVERSION: 307 315 rc = lower_file->f_op->unlocked_ioctl(lower_file, cmd, arg); 308 - return rc; 316 + fsstack_copy_attr_all(file_inode(file), file_inode(lower_file)); 317 + 318 + return rc; 319 + default: 320 + return rc; 321 + } 309 322 } 310 323 311 324 #ifdef CONFIG_COMPAT ··· 328 315 struct file *lower_file = ecryptfs_file_to_lower(file); 329 316 long rc = -ENOIOCTLCMD; 330 317 331 - if (lower_file->f_op->compat_ioctl) 318 + if (!lower_file->f_op->compat_ioctl) 319 + return rc; 320 + 321 + switch (cmd) { 322 + case FITRIM: 323 + case FS_IOC32_GETFLAGS: 324 + case FS_IOC32_SETFLAGS: 325 + case FS_IOC32_GETVERSION: 326 + case FS_IOC32_SETVERSION: 332 327 rc = lower_file->f_op->compat_ioctl(lower_file, cmd, arg); 333 - return rc; 328 + fsstack_copy_attr_all(file_inode(file), file_inode(lower_file)); 329 + 330 + return rc; 331 + default: 332 + return rc; 333 + } 334 334 } 335 335 #endif 336 336
+1 -1
fs/ecryptfs/keystore.c
··· 891 891 struct blkcipher_desc desc; 892 892 char fnek_sig_hex[ECRYPTFS_SIG_SIZE_HEX + 1]; 893 893 char iv[ECRYPTFS_MAX_IV_BYTES]; 894 - char cipher_string[ECRYPTFS_MAX_CIPHER_NAME_SIZE]; 894 + char cipher_string[ECRYPTFS_MAX_CIPHER_NAME_SIZE + 1]; 895 895 }; 896 896 897 897 /**
+1 -1
fs/ecryptfs/main.c
··· 407 407 if (!cipher_name_set) { 408 408 int cipher_name_len = strlen(ECRYPTFS_DEFAULT_CIPHER); 409 409 410 - BUG_ON(cipher_name_len >= ECRYPTFS_MAX_CIPHER_NAME_SIZE); 410 + BUG_ON(cipher_name_len > ECRYPTFS_MAX_CIPHER_NAME_SIZE); 411 411 strcpy(mount_crypt_stat->global_default_cipher_name, 412 412 ECRYPTFS_DEFAULT_CIPHER); 413 413 }
+2 -1
fs/locks.c
··· 1665 1665 } 1666 1666 1667 1667 if (my_fl != NULL) { 1668 - error = lease->fl_lmops->lm_change(my_fl, arg, &dispose); 1668 + lease = my_fl; 1669 + error = lease->fl_lmops->lm_change(lease, arg, &dispose); 1669 1670 if (error) 1670 1671 goto out; 1671 1672 goto out_setup;
+1 -1
fs/nfs/client.c
··· 433 433 434 434 static bool nfs_client_init_is_complete(const struct nfs_client *clp) 435 435 { 436 - return clp->cl_cons_state != NFS_CS_INITING; 436 + return clp->cl_cons_state <= NFS_CS_READY; 437 437 } 438 438 439 439 int nfs_wait_client_init_complete(const struct nfs_client *clp)
+34 -11
fs/nfs/delegation.c
···
 			clear_bit(NFS_DELEGATION_NEED_RECLAIM,
 				  &delegation->flags);
 			spin_unlock(&delegation->lock);
-			put_rpccred(oldcred);
 			rcu_read_unlock();
+			put_rpccred(oldcred);
 			trace_nfs4_reclaim_delegation(inode, res->delegation_type);
 		} else {
 			/* We appear to have raced with a delegation return. */
···
 			delegation = NULL;
 			goto out;
 		}
-		freeme = nfs_detach_delegation_locked(nfsi,
+		if (test_and_set_bit(NFS_DELEGATION_RETURNING,
+					&old_delegation->flags))
+			goto out;
+		freeme = nfs_detach_delegation_locked(nfsi,
 				old_delegation, clp);
 		if (freeme == NULL)
 			goto out;
···
 {
 	bool ret = false;

+	if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags))
+		goto out;
 	if (test_and_clear_bit(NFS_DELEGATION_RETURN, &delegation->flags))
 		ret = true;
 	if (test_and_clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags) && !ret) {
···
 		ret = true;
 		spin_unlock(&delegation->lock);
 	}
+out:
 	return ret;
 }
···
 				super_list) {
 			if (!nfs_delegation_need_return(delegation))
 				continue;
-			inode = nfs_delegation_grab_inode(delegation);
-			if (inode == NULL)
+			if (!nfs_sb_active(server->super))
 				continue;
+			inode = nfs_delegation_grab_inode(delegation);
+			if (inode == NULL) {
+				rcu_read_unlock();
+				nfs_sb_deactive(server->super);
+				goto restart;
+			}
 			delegation = nfs_start_delegation_return_locked(NFS_I(inode));
 			rcu_read_unlock();

 			err = nfs_end_delegation_return(inode, delegation, 0);
 			iput(inode);
+			nfs_sb_deactive(server->super);
 			if (!err)
 				goto restart;
 			set_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state);
···
 	list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
 		list_for_each_entry_rcu(delegation, &server->delegations,
 					super_list) {
+			if (test_bit(NFS_DELEGATION_RETURNING,
+						&delegation->flags))
+				continue;
 			if (test_bit(NFS_DELEGATION_NEED_RECLAIM,
 						&delegation->flags) == 0)
 				continue;
-			inode = nfs_delegation_grab_inode(delegation);
-			if (inode == NULL)
+			if (!nfs_sb_active(server->super))
 				continue;
-			delegation = nfs_detach_delegation(NFS_I(inode),
-					delegation, server);
+			inode = nfs_delegation_grab_inode(delegation);
+			if (inode == NULL) {
+				rcu_read_unlock();
+				nfs_sb_deactive(server->super);
+				goto restart;
+			}
+			delegation = nfs_start_delegation_return_locked(NFS_I(inode));
 			rcu_read_unlock();
-
-			if (delegation != NULL)
-				nfs_free_delegation(delegation);
+			if (delegation != NULL) {
+				delegation = nfs_detach_delegation(NFS_I(inode),
+					delegation, server);
+				if (delegation != NULL)
+					nfs_free_delegation(delegation);
+			}
 			iput(inode);
+			nfs_sb_deactive(server->super);
 			goto restart;
 		}
 	}
+19 -3
fs/nfs/dir.c
··· 408 408 return 0; 409 409 } 410 410 411 + /* Match file and dirent using either filehandle or fileid 412 + * Note: caller is responsible for checking the fsid 413 + */ 411 414 static 412 415 int nfs_same_file(struct dentry *dentry, struct nfs_entry *entry) 413 416 { 417 + struct nfs_inode *nfsi; 418 + 414 419 if (dentry->d_inode == NULL) 415 420 goto different; 416 - if (nfs_compare_fh(entry->fh, NFS_FH(dentry->d_inode)) != 0) 417 - goto different; 418 - return 1; 421 + 422 + nfsi = NFS_I(dentry->d_inode); 423 + if (entry->fattr->fileid == nfsi->fileid) 424 + return 1; 425 + if (nfs_compare_fh(entry->fh, &nfsi->fh) == 0) 426 + return 1; 419 427 different: 420 428 return 0; 421 429 } ··· 477 469 struct inode *inode; 478 470 int status; 479 471 472 + if (!(entry->fattr->valid & NFS_ATTR_FATTR_FILEID)) 473 + return; 474 + if (!(entry->fattr->valid & NFS_ATTR_FATTR_FSID)) 475 + return; 480 476 if (filename.name[0] == '.') { 481 477 if (filename.len == 1) 482 478 return; ··· 491 479 492 480 dentry = d_lookup(parent, &filename); 493 481 if (dentry != NULL) { 482 + /* Is there a mountpoint here? If so, just exit */ 483 + if (!nfs_fsid_equal(&NFS_SB(dentry->d_sb)->fsid, 484 + &entry->fattr->fsid)) 485 + goto out; 494 486 if (nfs_same_file(dentry, entry)) { 495 487 nfs_set_verifier(dentry, nfs_save_change_attribute(dir)); 496 488 status = nfs_refresh_inode(dentry->d_inode, entry->fattr);
+9 -2
fs/nfs/file.c
··· 178 178 iocb->ki_filp, 179 179 iov_iter_count(to), (unsigned long) iocb->ki_pos); 180 180 181 - result = nfs_revalidate_mapping(inode, iocb->ki_filp->f_mapping); 181 + result = nfs_revalidate_mapping_protected(inode, iocb->ki_filp->f_mapping); 182 182 if (!result) { 183 183 result = generic_file_read_iter(iocb, to); 184 184 if (result > 0) ··· 199 199 dprintk("NFS: splice_read(%pD2, %lu@%Lu)\n", 200 200 filp, (unsigned long) count, (unsigned long long) *ppos); 201 201 202 - res = nfs_revalidate_mapping(inode, filp->f_mapping); 202 + res = nfs_revalidate_mapping_protected(inode, filp->f_mapping); 203 203 if (!res) { 204 204 res = generic_file_splice_read(filp, ppos, pipe, count, flags); 205 205 if (res > 0) ··· 372 372 nfs_wait_bit_killable, TASK_KILLABLE); 373 373 if (ret) 374 374 return ret; 375 + /* 376 + * Wait for O_DIRECT to complete 377 + */ 378 + nfs_inode_dio_wait(mapping->host); 375 379 376 380 page = grab_cache_page_write_begin(mapping, index, flags); 377 381 if (!page) ··· 622 618 623 619 /* make sure the cache has finished storing the page */ 624 620 nfs_fscache_wait_on_page_write(NFS_I(inode), page); 621 + 622 + wait_on_bit_action(&NFS_I(inode)->flags, NFS_INO_INVALIDATING, 623 + nfs_wait_bit_killable, TASK_KILLABLE); 625 624 626 625 lock_page(page); 627 626 mapping = page_file_mapping(page);
+92 -19
fs/nfs/inode.c
···
  * This is a copy of the common vmtruncate, but with the locking
  * corrected to take into account the fact that NFS requires
  * inode->i_size to be updated under the inode->i_lock.
+ * Note: must be called with inode->i_lock held!
  */
 static int nfs_vmtruncate(struct inode * inode, loff_t offset)
 {
···
 	if (err)
 		goto out;

-	spin_lock(&inode->i_lock);
 	i_size_write(inode, offset);
 	/* Optimisation */
 	if (offset == 0)
 		NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_DATA;
-	spin_unlock(&inode->i_lock);

+	spin_unlock(&inode->i_lock);
 	truncate_pagecache(inode, offset);
+	spin_lock(&inode->i_lock);
 out:
 	return err;
 }
···
  * Note: we do this in the *proc.c in order to ensure that
  *       it works for things like exclusive creates too.
  */
-void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr)
+void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr,
+		struct nfs_fattr *fattr)
 {
+	/* Barrier: bump the attribute generation count. */
+	nfs_fattr_set_barrier(fattr);
+
+	spin_lock(&inode->i_lock);
+	NFS_I(inode)->attr_gencount = fattr->gencount;
 	if ((attr->ia_valid & (ATTR_MODE|ATTR_UID|ATTR_GID)) != 0) {
-		spin_lock(&inode->i_lock);
 		if ((attr->ia_valid & ATTR_MODE) != 0) {
 			int mode = attr->ia_mode & S_IALLUGO;
 			mode |= inode->i_mode & ~S_IALLUGO;
···
 			inode->i_gid = attr->ia_gid;
 		nfs_set_cache_invalid(inode, NFS_INO_INVALID_ACCESS
 				| NFS_INO_INVALID_ACL);
-		spin_unlock(&inode->i_lock);
 	}
 	if ((attr->ia_valid & ATTR_SIZE) != 0) {
 		nfs_inc_stats(inode, NFSIOS_SETATTRTRUNC);
 		nfs_vmtruncate(inode, attr->ia_size);
 	}
+	nfs_update_inode(inode, fattr);
+	spin_unlock(&inode->i_lock);
 }
 EXPORT_SYMBOL_GPL(nfs_setattr_update_inode);
···

 	if (mapping->nrpages != 0) {
 		if (S_ISREG(inode->i_mode)) {
+			unmap_mapping_range(mapping, 0, 0, 0);
 			ret = nfs_sync_mapping(mapping);
 			if (ret < 0)
 				return ret;
···
 }

 /**
- * nfs_revalidate_mapping - Revalidate the pagecache
+ * __nfs_revalidate_mapping - Revalidate the pagecache
  * @inode - pointer to host inode
  * @mapping - pointer to mapping
+ * @may_lock - take inode->i_mutex?
  */
-int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
+static int __nfs_revalidate_mapping(struct inode *inode,
+		struct address_space *mapping,
+		bool may_lock)
 {
 	struct nfs_inode *nfsi = NFS_I(inode);
 	unsigned long *bitlock = &nfsi->flags;
···
 		nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
 		spin_unlock(&inode->i_lock);
 		trace_nfs_invalidate_mapping_enter(inode);
-		ret = nfs_invalidate_mapping(inode, mapping);
+		if (may_lock) {
+			mutex_lock(&inode->i_mutex);
+			ret = nfs_invalidate_mapping(inode, mapping);
+			mutex_unlock(&inode->i_mutex);
+		} else
+			ret = nfs_invalidate_mapping(inode, mapping);
 		trace_nfs_invalidate_mapping_exit(inode, ret);

 		clear_bit_unlock(NFS_INO_INVALIDATING, bitlock);
···
 	wake_up_bit(bitlock, NFS_INO_INVALIDATING);
 out:
 	return ret;
+}
+
+/**
+ * nfs_revalidate_mapping - Revalidate the pagecache
+ * @inode - pointer to host inode
+ * @mapping - pointer to mapping
+ */
+int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
+{
+	return __nfs_revalidate_mapping(inode, mapping, false);
+}
+
+/**
+ * nfs_revalidate_mapping_protected - Revalidate the pagecache
+ * @inode - pointer to host inode
+ * @mapping - pointer to mapping
+ *
+ * Differs from nfs_revalidate_mapping() in that it grabs the inode->i_mutex
+ * while invalidating the mapping.
+ */
+int nfs_revalidate_mapping_protected(struct inode *inode, struct address_space *mapping)
+{
+	return __nfs_revalidate_mapping(inode, mapping, true);
 }

 static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr)
···
 	return timespec_compare(&fattr->ctime, &inode->i_ctime) > 0;
 }

-static int nfs_size_need_update(const struct inode *inode, const struct nfs_fattr *fattr)
-{
-	if (!(fattr->valid & NFS_ATTR_FATTR_SIZE))
-		return 0;
-	return nfs_size_to_loff_t(fattr->size) > i_size_read(inode);
-}
-
 static atomic_long_t nfs_attr_generation_counter;

 static unsigned long nfs_read_attr_generation_counter(void)
···
 {
 	return atomic_long_inc_return(&nfs_attr_generation_counter);
 }
+EXPORT_SYMBOL_GPL(nfs_inc_attr_generation_counter);

 void nfs_fattr_init(struct nfs_fattr *fattr)
 {
···
 	fattr->group_name = NULL;
 }
 EXPORT_SYMBOL_GPL(nfs_fattr_init);
+
+/**
+ * nfs_fattr_set_barrier
+ * @fattr: attributes
+ *
+ * Used to set a barrier after an attribute was updated. This
+ * barrier ensures that older attributes from RPC calls that may
+ * have raced with our update cannot clobber these new values.
+ * Note that you are still responsible for ensuring that other
+ * operations which change the attribute on the server do not
+ * collide.
+ */
+void nfs_fattr_set_barrier(struct nfs_fattr *fattr)
+{
+	fattr->gencount = nfs_inc_attr_generation_counter();
+}

 struct nfs_fattr *nfs_alloc_fattr(void)
 {
···

 	return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 ||
 		nfs_ctime_need_update(inode, fattr) ||
-		nfs_size_need_update(inode, fattr) ||
 		((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0);
 }
···
 	int status;

 	spin_lock(&inode->i_lock);
+	nfs_fattr_set_barrier(fattr);
 	status = nfs_post_op_update_inode_locked(inode, fattr);
 	spin_unlock(&inode->i_lock);
···
 EXPORT_SYMBOL_GPL(nfs_post_op_update_inode);

 /**
- * nfs_post_op_update_inode_force_wcc - try to update the inode attribute cache
+ * nfs_post_op_update_inode_force_wcc_locked - update the inode attribute cache
  * @inode - pointer to inode
  * @fattr - updated attributes
  *
···
  *
  * This function is mainly designed to be used by the ->write_done() functions.
  */
-int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr)
+int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fattr *fattr)
 {
 	int status;

-	spin_lock(&inode->i_lock);
 	/* Don't do a WCC update if these attributes are already stale */
 	if ((fattr->valid & NFS_ATTR_FATTR) == 0 ||
 			!nfs_inode_attrs_need_update(inode, fattr)) {
···
 	}
 out_noforce:
 	status = nfs_post_op_update_inode_locked(inode, fattr);
+	return status;
+}
+
+/**
+ * nfs_post_op_update_inode_force_wcc - try to update the inode attribute cache
+ * @inode - pointer to inode
+ * @fattr - updated attributes
+ *
+ * After an operation that has changed the inode metadata, mark the
+ * attribute cache as being invalid, then try to update it. Fake up
+ * weak cache consistency data, if none exist.
+ *
+ * This function is mainly designed to be used by the ->write_done() functions.
+ */
+int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr)
+{
+	int status;
+
+	spin_lock(&inode->i_lock);
+	nfs_fattr_set_barrier(fattr);
+	status = nfs_post_op_update_inode_force_wcc_locked(inode, fattr);
 	spin_unlock(&inode->i_lock);
 	return status;
 }
···
 		nfs_inc_stats(inode, NFSIOS_ATTRINVALIDATE);
 		nfsi->attrtimeo = NFS_MINATTRTIMEO(inode);
 		nfsi->attrtimeo_timestamp = now;
+		/* Set barrier to be more recent than all outstanding updates */
 		nfsi->attr_gencount = nfs_inc_attr_generation_counter();
 	} else {
 		if (!time_in_range_open(now, nfsi->attrtimeo_timestamp, nfsi->attrtimeo_timestamp + nfsi->attrtimeo)) {
···
 			nfsi->attrtimeo = NFS_MAXATTRTIMEO(inode);
 			nfsi->attrtimeo_timestamp = now;
 		}
+		/* Set the barrier to be more recent than this fattr */
+		if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0)
+			nfsi->attr_gencount = fattr->gencount;
 	}
 	invalid &= ~NFS_INO_INVALID_ATTR;
 	/* Don't invalidate the data if we were to blame */
+1
fs/nfs/internal.h
··· 459 459 struct nfs_commit_info *cinfo, 460 460 u32 ds_commit_idx); 461 461 int nfs_write_need_commit(struct nfs_pgio_header *); 462 + void nfs_writeback_update_inode(struct nfs_pgio_header *hdr); 462 463 int nfs_generic_commit_list(struct inode *inode, struct list_head *head, 463 464 int how, struct nfs_commit_info *cinfo); 464 465 void nfs_retry_commit(struct list_head *page_list,
+2 -2
fs/nfs/nfs3proc.c
··· 138 138 nfs_fattr_init(fattr); 139 139 status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); 140 140 if (status == 0) 141 - nfs_setattr_update_inode(inode, sattr); 141 + nfs_setattr_update_inode(inode, sattr, fattr); 142 142 dprintk("NFS reply setattr: %d\n", status); 143 143 return status; 144 144 } ··· 834 834 if (nfs3_async_handle_jukebox(task, inode)) 835 835 return -EAGAIN; 836 836 if (task->tk_status >= 0) 837 - nfs_post_op_update_inode_force_wcc(inode, hdr->res.fattr); 837 + nfs_writeback_update_inode(hdr); 838 838 return 0; 839 839 } 840 840
+5
fs/nfs/nfs3xdr.c
··· 1987 1987 if (entry->fattr->valid & NFS_ATTR_FATTR_V3) 1988 1988 entry->d_type = nfs_umode_to_dtype(entry->fattr->mode); 1989 1989 1990 + if (entry->fattr->fileid != entry->ino) { 1991 + entry->fattr->mounted_on_fileid = entry->ino; 1992 + entry->fattr->valid |= NFS_ATTR_FATTR_MOUNTED_ON_FILEID; 1993 + } 1994 + 1990 1995 /* In fact, a post_op_fh3: */ 1991 1996 p = xdr_inline_decode(xdr, 4); 1992 1997 if (unlikely(p == NULL))
+4 -5
fs/nfs/nfs4client.c
··· 621 621 spin_lock(&nn->nfs_client_lock); 622 622 list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 623 623 624 + if (pos == new) 625 + goto found; 626 + 624 627 if (pos->rpc_ops != new->rpc_ops) 625 628 continue; 626 629 ··· 642 639 prev = pos; 643 640 644 641 status = nfs_wait_client_init_complete(pos); 645 - if (pos->cl_cons_state == NFS_CS_SESSION_INITING) { 646 - nfs4_schedule_lease_recovery(pos); 647 - status = nfs4_wait_clnt_recover(pos); 648 - } 649 642 spin_lock(&nn->nfs_client_lock); 650 643 if (status < 0) 651 644 break; ··· 667 668 */ 668 669 if (!nfs4_match_client_owner_id(pos, new)) 669 670 continue; 670 - 671 + found: 671 672 atomic_inc(&pos->cl_count); 672 673 *result = pos; 673 674 status = 0;
+21 -10
fs/nfs/nfs4proc.c
··· 901 901 if (!cinfo->atomic || cinfo->before != dir->i_version) 902 902 nfs_force_lookup_revalidate(dir); 903 903 dir->i_version = cinfo->after; 904 + nfsi->attr_gencount = nfs_inc_attr_generation_counter(); 904 905 nfs_fscache_invalidate(dir); 905 906 spin_unlock(&dir->i_lock); 906 907 } ··· 1553 1552 1554 1553 opendata->o_arg.open_flags = 0; 1555 1554 opendata->o_arg.fmode = fmode; 1555 + opendata->o_arg.share_access = nfs4_map_atomic_open_share( 1556 + NFS_SB(opendata->dentry->d_sb), 1557 + fmode, 0); 1556 1558 memset(&opendata->o_res, 0, sizeof(opendata->o_res)); 1557 1559 memset(&opendata->c_res, 0, sizeof(opendata->c_res)); 1558 1560 nfs4_init_opendata_res(opendata); ··· 2417 2413 opendata->o_res.f_attr, sattr, 2418 2414 state, label, olabel); 2419 2415 if (status == 0) { 2420 - nfs_setattr_update_inode(state->inode, sattr); 2421 - nfs_post_op_update_inode(state->inode, opendata->o_res.f_attr); 2416 + nfs_setattr_update_inode(state->inode, sattr, 2417 + opendata->o_res.f_attr); 2422 2418 nfs_setsecurity(state->inode, opendata->o_res.f_attr, olabel); 2423 2419 } 2424 2420 } ··· 2655 2651 case -NFS4ERR_BAD_STATEID: 2656 2652 case -NFS4ERR_EXPIRED: 2657 2653 if (!nfs4_stateid_match(&calldata->arg.stateid, 2658 - &state->stateid)) { 2654 + &state->open_stateid)) { 2659 2655 rpc_restart_call_prepare(task); 2660 2656 goto out_release; 2661 2657 } ··· 2691 2687 is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags); 2692 2688 is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags); 2693 2689 is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags); 2694 - nfs4_stateid_copy(&calldata->arg.stateid, &state->stateid); 2690 + nfs4_stateid_copy(&calldata->arg.stateid, &state->open_stateid); 2695 2691 /* Calculate the change in open mode */ 2696 2692 calldata->arg.fmode = 0; 2697 2693 if (state->n_rdwr == 0) { ··· 3292 3288 3293 3289 status = nfs4_do_setattr(inode, cred, fattr, sattr, state, NULL, label); 3294 3290 if (status == 0) { 3295 - nfs_setattr_update_inode(inode, 
sattr); 3291 + nfs_setattr_update_inode(inode, sattr, fattr); 3296 3292 nfs_setsecurity(inode, fattr, label); 3297 3293 } 3298 3294 nfs4_label_free(label); ··· 4238 4234 } 4239 4235 if (task->tk_status >= 0) { 4240 4236 renew_lease(NFS_SERVER(inode), hdr->timestamp); 4241 - nfs_post_op_update_inode_force_wcc(inode, &hdr->fattr); 4237 + nfs_writeback_update_inode(hdr); 4242 4238 } 4243 4239 return 0; 4244 4240 } ··· 6897 6893 6898 6894 if (status == 0) { 6899 6895 clp->cl_clientid = res.clientid; 6900 - clp->cl_exchange_flags = (res.flags & ~EXCHGID4_FLAG_CONFIRMED_R); 6901 - if (!(res.flags & EXCHGID4_FLAG_CONFIRMED_R)) 6896 + clp->cl_exchange_flags = res.flags; 6897 + /* Client ID is not confirmed */ 6898 + if (!(res.flags & EXCHGID4_FLAG_CONFIRMED_R)) { 6899 + clear_bit(NFS4_SESSION_ESTABLISHED, 6900 + &clp->cl_session->session_state); 6902 6901 clp->cl_seqid = res.seqid; 6902 + } 6903 6903 6904 6904 kfree(clp->cl_serverowner); 6905 6905 clp->cl_serverowner = res.server_owner; ··· 7235 7227 struct nfs41_create_session_res *res) 7236 7228 { 7237 7229 nfs4_copy_sessionid(&session->sess_id, &res->sessionid); 7230 + /* Mark client id and session as being confirmed */ 7231 + session->clp->cl_exchange_flags |= EXCHGID4_FLAG_CONFIRMED_R; 7232 + set_bit(NFS4_SESSION_ESTABLISHED, &session->session_state); 7238 7233 session->flags = res->flags; 7239 7234 memcpy(&session->fc_attrs, &res->fc_attrs, sizeof(session->fc_attrs)); 7240 7235 if (res->flags & SESSION4_BACK_CHAN) ··· 7333 7322 dprintk("--> nfs4_proc_destroy_session\n"); 7334 7323 7335 7324 /* session is still being setup */ 7336 - if (session->clp->cl_cons_state != NFS_CS_READY) 7337 - return status; 7325 + if (!test_and_clear_bit(NFS4_SESSION_ESTABLISHED, &session->session_state)) 7326 + return 0; 7338 7327 7339 7328 status = rpc_call_sync(session->clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT); 7340 7329 trace_nfs4_destroy_session(session->clp, status);
+1
fs/nfs/nfs4session.h
··· 70 70 71 71 enum nfs4_session_state { 72 72 NFS4_SESSION_INITING, 73 + NFS4_SESSION_ESTABLISHED, 73 74 }; 74 75 75 76 extern int nfs4_setup_slot_table(struct nfs4_slot_table *tbl,
+16 -2
fs/nfs/nfs4state.c
··· 346 346 status = nfs4_proc_exchange_id(clp, cred); 347 347 if (status != NFS4_OK) 348 348 return status; 349 - set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); 350 349 351 - return nfs41_walk_client_list(clp, result, cred); 350 + status = nfs41_walk_client_list(clp, result, cred); 351 + if (status < 0) 352 + return status; 353 + if (clp != *result) 354 + return 0; 355 + 356 + /* Purge state if the client id was established in a prior instance */ 357 + if (clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R) 358 + set_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state); 359 + else 360 + set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state); 361 + nfs4_schedule_state_manager(clp); 362 + status = nfs_wait_client_init_complete(clp); 363 + if (status < 0) 364 + nfs_put_client(clp); 365 + return status; 352 366 } 353 367 354 368 #endif /* CONFIG_NFS_V4_1 */
+2 -4
fs/nfs/proc.c
··· 139 139 nfs_fattr_init(fattr); 140 140 status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); 141 141 if (status == 0) 142 - nfs_setattr_update_inode(inode, sattr); 142 + nfs_setattr_update_inode(inode, sattr, fattr); 143 143 dprintk("NFS reply setattr: %d\n", status); 144 144 return status; 145 145 } ··· 609 609 610 610 static int nfs_write_done(struct rpc_task *task, struct nfs_pgio_header *hdr) 611 611 { 612 - struct inode *inode = hdr->inode; 613 - 614 612 if (task->tk_status >= 0) 615 - nfs_post_op_update_inode_force_wcc(inode, hdr->res.fattr); 613 + nfs_writeback_update_inode(hdr); 616 614 return 0; 617 615 } 618 616
+30
fs/nfs/write.c
··· 1377 1377 return 0; 1378 1378 } 1379 1379 1380 + static void nfs_writeback_check_extend(struct nfs_pgio_header *hdr, 1381 + struct nfs_fattr *fattr) 1382 + { 1383 + struct nfs_pgio_args *argp = &hdr->args; 1384 + struct nfs_pgio_res *resp = &hdr->res; 1385 + 1386 + if (!(fattr->valid & NFS_ATTR_FATTR_SIZE)) 1387 + return; 1388 + if (argp->offset + resp->count != fattr->size) 1389 + return; 1390 + if (nfs_size_to_loff_t(fattr->size) < i_size_read(hdr->inode)) 1391 + return; 1392 + /* Set attribute barrier */ 1393 + nfs_fattr_set_barrier(fattr); 1394 + } 1395 + 1396 + void nfs_writeback_update_inode(struct nfs_pgio_header *hdr) 1397 + { 1398 + struct nfs_fattr *fattr = hdr->res.fattr; 1399 + struct inode *inode = hdr->inode; 1400 + 1401 + if (fattr == NULL) 1402 + return; 1403 + spin_lock(&inode->i_lock); 1404 + nfs_writeback_check_extend(hdr, fattr); 1405 + nfs_post_op_update_inode_force_wcc_locked(inode, fattr); 1406 + spin_unlock(&inode->i_lock); 1407 + } 1408 + EXPORT_SYMBOL_GPL(nfs_writeback_update_inode); 1409 + 1380 1410 /* 1381 1411 * This function is called when the WRITE call is complete. 1382 1412 */
+26 -26
include/drm/drm_mm.h
··· 68 68 unsigned scanned_preceeds_hole : 1; 69 69 unsigned allocated : 1; 70 70 unsigned long color; 71 - unsigned long start; 72 - unsigned long size; 71 + u64 start; 72 + u64 size; 73 73 struct drm_mm *mm; 74 74 }; 75 75 ··· 82 82 unsigned int scan_check_range : 1; 83 83 unsigned scan_alignment; 84 84 unsigned long scan_color; 85 - unsigned long scan_size; 86 - unsigned long scan_hit_start; 87 - unsigned long scan_hit_end; 85 + u64 scan_size; 86 + u64 scan_hit_start; 87 + u64 scan_hit_end; 88 88 unsigned scanned_blocks; 89 - unsigned long scan_start; 90 - unsigned long scan_end; 89 + u64 scan_start; 90 + u64 scan_end; 91 91 struct drm_mm_node *prev_scanned_node; 92 92 93 93 void (*color_adjust)(struct drm_mm_node *node, unsigned long color, 94 - unsigned long *start, unsigned long *end); 94 + u64 *start, u64 *end); 95 95 }; 96 96 97 97 /** ··· 124 124 return mm->hole_stack.next; 125 125 } 126 126 127 - static inline unsigned long __drm_mm_hole_node_start(struct drm_mm_node *hole_node) 127 + static inline u64 __drm_mm_hole_node_start(struct drm_mm_node *hole_node) 128 128 { 129 129 return hole_node->start + hole_node->size; 130 130 } ··· 140 140 * Returns: 141 141 * Start of the subsequent hole. 142 142 */ 143 - static inline unsigned long drm_mm_hole_node_start(struct drm_mm_node *hole_node) 143 + static inline u64 drm_mm_hole_node_start(struct drm_mm_node *hole_node) 144 144 { 145 145 BUG_ON(!hole_node->hole_follows); 146 146 return __drm_mm_hole_node_start(hole_node); 147 147 } 148 148 149 - static inline unsigned long __drm_mm_hole_node_end(struct drm_mm_node *hole_node) 149 + static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node) 150 150 { 151 151 return list_entry(hole_node->node_list.next, 152 152 struct drm_mm_node, node_list)->start; ··· 163 163 * Returns: 164 164 * End of the subsequent hole. 
165 165 */ 166 - static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node) 166 + static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node) 167 167 { 168 168 return __drm_mm_hole_node_end(hole_node); 169 169 } ··· 222 222 223 223 int drm_mm_insert_node_generic(struct drm_mm *mm, 224 224 struct drm_mm_node *node, 225 - unsigned long size, 225 + u64 size, 226 226 unsigned alignment, 227 227 unsigned long color, 228 228 enum drm_mm_search_flags sflags, ··· 245 245 */ 246 246 static inline int drm_mm_insert_node(struct drm_mm *mm, 247 247 struct drm_mm_node *node, 248 - unsigned long size, 248 + u64 size, 249 249 unsigned alignment, 250 250 enum drm_mm_search_flags flags) 251 251 { ··· 255 255 256 256 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, 257 257 struct drm_mm_node *node, 258 - unsigned long size, 258 + u64 size, 259 259 unsigned alignment, 260 260 unsigned long color, 261 - unsigned long start, 262 - unsigned long end, 261 + u64 start, 262 + u64 end, 263 263 enum drm_mm_search_flags sflags, 264 264 enum drm_mm_allocator_flags aflags); 265 265 /** ··· 282 282 */ 283 283 static inline int drm_mm_insert_node_in_range(struct drm_mm *mm, 284 284 struct drm_mm_node *node, 285 - unsigned long size, 285 + u64 size, 286 286 unsigned alignment, 287 - unsigned long start, 288 - unsigned long end, 287 + u64 start, 288 + u64 end, 289 289 enum drm_mm_search_flags flags) 290 290 { 291 291 return drm_mm_insert_node_in_range_generic(mm, node, size, alignment, ··· 296 296 void drm_mm_remove_node(struct drm_mm_node *node); 297 297 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new); 298 298 void drm_mm_init(struct drm_mm *mm, 299 - unsigned long start, 300 - unsigned long size); 299 + u64 start, 300 + u64 size); 301 301 void drm_mm_takedown(struct drm_mm *mm); 302 302 bool drm_mm_clean(struct drm_mm *mm); 303 303 304 304 void drm_mm_init_scan(struct drm_mm *mm, 305 - unsigned long size, 305 + u64 size, 306 306 
unsigned alignment, 307 307 unsigned long color); 308 308 void drm_mm_init_scan_with_range(struct drm_mm *mm, 309 - unsigned long size, 309 + u64 size, 310 310 unsigned alignment, 311 311 unsigned long color, 312 - unsigned long start, 313 - unsigned long end); 312 + u64 start, 313 + u64 end); 314 314 bool drm_mm_scan_add_block(struct drm_mm_node *node); 315 315 bool drm_mm_scan_remove_block(struct drm_mm_node *node); 316 316
+1 -1
include/drm/ttm/ttm_bo_api.h
··· 249 249 * either of these locks held. 250 250 */ 251 251 252 - unsigned long offset; 252 + uint64_t offset; /* GPU address space is independent of CPU word size */ 253 253 uint32_t cur_placement; 254 254 255 255 struct sg_table *sg;
+1 -1
include/drm/ttm/ttm_bo_driver.h
··· 277 277 bool has_type; 278 278 bool use_type; 279 279 uint32_t flags; 280 - unsigned long gpu_offset; 280 + uint64_t gpu_offset; /* GPU address space is independent of CPU word size */ 281 281 uint64_t size; 282 282 uint32_t available_caching; 283 283 uint32_t default_caching;
+15 -2
include/linux/cpuidle.h
··· 126 126 127 127 #ifdef CONFIG_CPU_IDLE 128 128 extern void disable_cpuidle(void); 129 + extern bool cpuidle_not_available(struct cpuidle_driver *drv, 130 + struct cpuidle_device *dev); 129 131 130 132 extern int cpuidle_select(struct cpuidle_driver *drv, 131 133 struct cpuidle_device *dev); ··· 152 150 extern int cpuidle_enable_device(struct cpuidle_device *dev); 153 151 extern void cpuidle_disable_device(struct cpuidle_device *dev); 154 152 extern int cpuidle_play_dead(void); 155 - extern void cpuidle_enter_freeze(void); 153 + extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 154 + struct cpuidle_device *dev); 155 + extern int cpuidle_enter_freeze(struct cpuidle_driver *drv, 156 + struct cpuidle_device *dev); 156 157 157 158 extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev); 158 159 #else 159 160 static inline void disable_cpuidle(void) { } 161 + static inline bool cpuidle_not_available(struct cpuidle_driver *drv, 162 + struct cpuidle_device *dev) 163 + {return true; } 160 164 static inline int cpuidle_select(struct cpuidle_driver *drv, 161 165 struct cpuidle_device *dev) 162 166 {return -ENODEV; } ··· 191 183 {return -ENODEV; } 192 184 static inline void cpuidle_disable_device(struct cpuidle_device *dev) { } 193 185 static inline int cpuidle_play_dead(void) {return -ENODEV; } 194 - static inline void cpuidle_enter_freeze(void) { } 186 + static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv, 187 + struct cpuidle_device *dev) 188 + {return -ENODEV; } 189 + static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv, 190 + struct cpuidle_device *dev) 191 + {return -ENODEV; } 195 192 static inline struct cpuidle_driver *cpuidle_get_cpu_driver( 196 193 struct cpuidle_device *dev) {return NULL; } 197 194 #endif
+8 -1
include/linux/interrupt.h
··· 52 52 * IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished. 53 53 * Used by threaded interrupts which need to keep the 54 54 * irq line disabled until the threaded handler has been run. 55 - * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend 55 + * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend. Does not guarantee 56 + * that this interrupt will wake the system from a suspended 57 + * state. See Documentation/power/suspend-and-interrupts.txt 56 58 * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set 57 59 * IRQF_NO_THREAD - Interrupt cannot be threaded 58 60 * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device 59 61 * resume time. 62 + * IRQF_COND_SUSPEND - If the IRQ is shared with a NO_SUSPEND user, execute this 63 + * interrupt handler after suspending interrupts. For system 64 + * wakeup devices users need to implement wakeup detection in 65 + * their interrupt handlers. 60 66 */ 61 67 #define IRQF_DISABLED 0x00000020 62 68 #define IRQF_SHARED 0x00000080 ··· 76 70 #define IRQF_FORCE_RESUME 0x00008000 77 71 #define IRQF_NO_THREAD 0x00010000 78 72 #define IRQF_EARLY_RESUME 0x00020000 73 + #define IRQF_COND_SUSPEND 0x00040000 79 74 80 75 #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD) 81 76
+1
include/linux/irqdesc.h
··· 78 78 #ifdef CONFIG_PM_SLEEP 79 79 unsigned int nr_actions; 80 80 unsigned int no_suspend_depth; 81 + unsigned int cond_suspend_depth; 81 82 unsigned int force_resume_depth; 82 83 #endif 83 84 #ifdef CONFIG_PROC_FS
+4 -1
include/linux/nfs_fs.h
··· 343 343 extern int nfs_refresh_inode(struct inode *, struct nfs_fattr *); 344 344 extern int nfs_post_op_update_inode(struct inode *inode, struct nfs_fattr *fattr); 345 345 extern int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr); 346 + extern int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fattr *fattr); 346 347 extern int nfs_getattr(struct vfsmount *, struct dentry *, struct kstat *); 347 348 extern void nfs_access_add_cache(struct inode *, struct nfs_access_entry *); 348 349 extern void nfs_access_set_mask(struct nfs_access_entry *, u32); ··· 356 355 extern int nfs_revalidate_inode_rcu(struct nfs_server *server, struct inode *inode); 357 356 extern int __nfs_revalidate_inode(struct nfs_server *, struct inode *); 358 357 extern int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping); 358 + extern int nfs_revalidate_mapping_protected(struct inode *inode, struct address_space *mapping); 359 359 extern int nfs_setattr(struct dentry *, struct iattr *); 360 - extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr); 360 + extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr, struct nfs_fattr *); 361 361 extern void nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr, 362 362 struct nfs4_label *label); 363 363 extern struct nfs_open_context *get_nfs_open_context(struct nfs_open_context *ctx); ··· 371 369 extern void nfs_put_lock_context(struct nfs_lock_context *l_ctx); 372 370 extern u64 nfs_compat_user_ino64(u64 fileid); 373 371 extern void nfs_fattr_init(struct nfs_fattr *fattr); 372 + extern void nfs_fattr_set_barrier(struct nfs_fattr *fattr); 374 373 extern unsigned long nfs_inc_attr_generation_counter(void); 375 374 376 375 extern struct nfs_fattr *nfs_alloc_fattr(void);
+7 -7
include/linux/serial_core.h
··· 143 143 unsigned char iotype; /* io access style */ 144 144 unsigned char unused1; 145 145 146 - #define UPIO_PORT (0) /* 8b I/O port access */ 147 - #define UPIO_HUB6 (1) /* Hub6 ISA card */ 148 - #define UPIO_MEM (2) /* 8b MMIO access */ 149 - #define UPIO_MEM32 (3) /* 32b little endian */ 150 - #define UPIO_MEM32BE (4) /* 32b big endian */ 151 - #define UPIO_AU (5) /* Au1x00 and RT288x type IO */ 152 - #define UPIO_TSI (6) /* Tsi108/109 type IO */ 146 + #define UPIO_PORT (SERIAL_IO_PORT) /* 8b I/O port access */ 147 + #define UPIO_HUB6 (SERIAL_IO_HUB6) /* Hub6 ISA card */ 148 + #define UPIO_MEM (SERIAL_IO_MEM) /* 8b MMIO access */ 149 + #define UPIO_MEM32 (SERIAL_IO_MEM32) /* 32b little endian */ 150 + #define UPIO_AU (SERIAL_IO_AU) /* Au1x00 and RT288x type IO */ 151 + #define UPIO_TSI (SERIAL_IO_TSI) /* Tsi108/109 type IO */ 152 + #define UPIO_MEM32BE (SERIAL_IO_MEM32BE) /* 32b big endian */ 153 153 154 154 unsigned int read_status_mask; /* driver specific */ 155 155 unsigned int ignore_status_mask; /* driver specific */
+1 -1
include/linux/spi/spi.h
··· 649 649 * sequence completes. On some systems, many such sequences can execute as 650 650 * as single programmed DMA transfer. On all systems, these messages are 651 651 * queued, and might complete after transactions to other devices. Messages 652 - * sent to a given spi_device are alway executed in FIFO order. 652 + * sent to a given spi_device are always executed in FIFO order. 653 653 * 654 654 * The code that submits an spi_message (and its spi_transfers) 655 655 * to the lower layers is responsible for managing its memory.
+1 -2
include/linux/usb/serial.h
··· 190 190 * @num_ports: the number of different ports this device will have. 191 191 * @bulk_in_size: minimum number of bytes to allocate for bulk-in buffer 192 192 * (0 = end-point size) 193 - * @bulk_out_size: minimum number of bytes to allocate for bulk-out buffer 194 - * (0 = end-point size) 193 + * @bulk_out_size: bytes to allocate for bulk-out buffer (0 = end-point size) 195 194 * @calc_num_ports: pointer to a function to determine how many ports this 196 195 * device has dynamically. It will be called after the probe() 197 196 * callback is called, but before attach()
+2 -1
include/linux/workqueue.h
··· 70 70 /* data contains off-queue information when !WORK_STRUCT_PWQ */ 71 71 WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT, 72 72 73 - WORK_OFFQ_CANCELING = (1 << WORK_OFFQ_FLAG_BASE), 73 + __WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE, 74 + WORK_OFFQ_CANCELING = (1 << __WORK_OFFQ_CANCELING), 74 75 75 76 /* 76 77 * When a work item is off queue, its high bits point to the last
+19 -3
include/net/netfilter/nf_tables.h
··· 119 119 const struct nft_data *data, 120 120 enum nft_data_types type); 121 121 122 + 123 + /** 124 + * struct nft_userdata - user defined data associated with an object 125 + * 126 + * @len: length of the data 127 + * @data: content 128 + * 129 + * The presence of user data is indicated in an object specific fashion, 130 + * so a length of zero can't occur and the value "len" indicates data 131 + * of length len + 1. 132 + */ 133 + struct nft_userdata { 134 + u8 len; 135 + unsigned char data[0]; 136 + }; 137 + 122 138 /** 123 139 * struct nft_set_elem - generic representation of set elements 124 140 * ··· 396 380 * @handle: rule handle 397 381 * @genmask: generation mask 398 382 * @dlen: length of expression data 399 - * @ulen: length of user data (used for comments) 383 + * @udata: user data is appended to the rule 400 384 * @data: expression data 401 385 */ 402 386 struct nft_rule { ··· 404 388 u64 handle:42, 405 389 genmask:2, 406 390 dlen:12, 407 - ulen:8; 391 + udata:1; 408 392 unsigned char data[] 409 393 __attribute__((aligned(__alignof__(struct nft_expr)))); 410 394 }; ··· 424 408 return (struct nft_expr *)&rule->data[rule->dlen]; 425 409 } 426 410 427 - static inline void *nft_userdata(const struct nft_rule *rule) 411 + static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule) 428 412 { 429 413 return (void *)&rule->data[rule->dlen]; 430 414 }
+4
include/uapi/linux/serial.h
··· 65 65 #define SERIAL_IO_PORT 0 66 66 #define SERIAL_IO_HUB6 1 67 67 #define SERIAL_IO_MEM 2 68 + #define SERIAL_IO_MEM32 3 69 + #define SERIAL_IO_AU 4 70 + #define SERIAL_IO_TSI 5 71 + #define SERIAL_IO_MEM32BE 6 68 72 69 73 #define UART_CLEAR_FIFO 0x01 70 74 #define UART_USE_FIFO 0x02
+1
include/video/omapdss.h
··· 689 689 }; 690 690 691 691 struct omap_dss_device { 692 + struct kobject kobj; 692 693 struct device *dev; 693 694 694 695 struct module *owner;
+4 -5
kernel/cpuset.c
··· 548 548 549 549 rcu_read_lock(); 550 550 cpuset_for_each_descendant_pre(cp, pos_css, root_cs) { 551 - if (cp == root_cs) 552 - continue; 553 - 554 551 /* skip the whole subtree if @cp doesn't have any CPU */ 555 552 if (cpumask_empty(cp->cpus_allowed)) { 556 553 pos_css = css_rightmost_descendant(pos_css); ··· 870 873 * If it becomes empty, inherit the effective mask of the 871 874 * parent, which is guaranteed to have some CPUs. 872 875 */ 873 - if (cpumask_empty(new_cpus)) 876 + if (cgroup_on_dfl(cp->css.cgroup) && cpumask_empty(new_cpus)) 874 877 cpumask_copy(new_cpus, parent->effective_cpus); 875 878 876 879 /* Skip the whole subtree if the cpumask remains the same. */ ··· 1126 1129 * If it becomes empty, inherit the effective mask of the 1127 1130 * parent, which is guaranteed to have some MEMs. 1128 1131 */ 1129 - if (nodes_empty(*new_mems)) 1132 + if (cgroup_on_dfl(cp->css.cgroup) && nodes_empty(*new_mems)) 1130 1133 *new_mems = parent->effective_mems; 1131 1134 1132 1135 /* Skip the whole subtree if the nodemask remains the same. */ ··· 1976 1979 1977 1980 spin_lock_irq(&callback_lock); 1978 1981 cs->mems_allowed = parent->mems_allowed; 1982 + cs->effective_mems = parent->mems_allowed; 1979 1983 cpumask_copy(cs->cpus_allowed, parent->cpus_allowed); 1984 + cpumask_copy(cs->effective_cpus, parent->cpus_allowed); 1980 1985 spin_unlock_irq(&callback_lock); 1981 1986 out_unlock: 1982 1987 mutex_unlock(&cpuset_mutex);
+6 -1
kernel/irq/manage.c
··· 1474 1474 * otherwise we'll have trouble later trying to figure out 1475 1475 * which interrupt is which (messes up the interrupt freeing 1476 1476 * logic etc). 1477 + * 1478 + * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and 1479 + * it cannot be set along with IRQF_NO_SUSPEND. 1477 1480 */ 1478 - if ((irqflags & IRQF_SHARED) && !dev_id) 1481 + if (((irqflags & IRQF_SHARED) && !dev_id) || 1482 + (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) || 1483 + ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND))) 1479 1484 return -EINVAL; 1480 1485 1481 1486 desc = irq_to_desc(irq);
+6 -1
kernel/irq/pm.c
··· 43 43 44 44 if (action->flags & IRQF_NO_SUSPEND) 45 45 desc->no_suspend_depth++; 46 + else if (action->flags & IRQF_COND_SUSPEND) 47 + desc->cond_suspend_depth++; 46 48 47 49 WARN_ON_ONCE(desc->no_suspend_depth && 48 - desc->no_suspend_depth != desc->nr_actions); 50 + (desc->no_suspend_depth + 51 + desc->cond_suspend_depth) != desc->nr_actions); 49 52 } 50 53 51 54 /* ··· 64 61 65 62 if (action->flags & IRQF_NO_SUSPEND) 66 63 desc->no_suspend_depth--; 64 + else if (action->flags & IRQF_COND_SUSPEND) 65 + desc->cond_suspend_depth--; 67 66 } 68 67 69 68 static bool suspend_device_irq(struct irq_desc *desc, int irq)
+2 -1
kernel/livepatch/core.c
··· 248 248 /* first, check if it's an exported symbol */ 249 249 preempt_disable(); 250 250 sym = find_symbol(name, NULL, NULL, true, true); 251 - preempt_enable(); 252 251 if (sym) { 253 252 *addr = sym->value; 253 + preempt_enable(); 254 254 return 0; 255 255 } 256 + preempt_enable(); 256 257 257 258 /* otherwise check if it's in another .o within the patch module */ 258 259 return klp_find_object_symbol(pmod->name, name, addr);
+2
kernel/module.c
··· 2313 2313 info->symoffs = ALIGN(mod->core_size, symsect->sh_addralign ?: 1); 2314 2314 info->stroffs = mod->core_size = info->symoffs + ndst * sizeof(Elf_Sym); 2315 2315 mod->core_size += strtab_size; 2316 + mod->core_size = debug_align(mod->core_size); 2316 2317 2317 2318 /* Put string table section at end of init part of module. */ 2318 2319 strsect->sh_flags |= SHF_ALLOC; 2319 2320 strsect->sh_entsize = get_offset(mod, &mod->init_size, strsect, 2320 2321 info->index.str) | INIT_OFFSET_MASK; 2322 + mod->init_size = debug_align(mod->init_size); 2321 2323 pr_debug("\t%s\n", info->secstrings + strsect->sh_name); 2322 2324 } 2323 2325
+1 -1
kernel/printk/console_cmdline.h
··· 3 3 4 4 struct console_cmdline 5 5 { 6 - char name[8]; /* Name of the driver */ 6 + char name[16]; /* Name of the driver */ 7 7 int index; /* Minor dev. to use */ 8 8 char *options; /* Options for the driver */ 9 9 #ifdef CONFIG_A11Y_BRAILLE_CONSOLE
+1
kernel/printk/printk.c
··· 2464 2464 for (i = 0, c = console_cmdline; 2465 2465 i < MAX_CMDLINECONSOLES && c->name[0]; 2466 2466 i++, c++) { 2467 + BUILD_BUG_ON(sizeof(c->name) != sizeof(newcon->name)); 2467 2468 if (strcmp(c->name, newcon->name) != 0) 2468 2469 continue; 2469 2470 if (newcon->index >= 0 &&
+34 -22
kernel/sched/idle.c
··· 82 82 struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev); 83 83 int next_state, entered_state; 84 84 unsigned int broadcast; 85 + bool reflect; 85 86 86 87 /* 87 88 * Check if the idle task must be rescheduled. If it is the ··· 106 105 */ 107 106 rcu_idle_enter(); 108 107 108 + if (cpuidle_not_available(drv, dev)) 109 + goto use_default; 110 + 109 111 /* 110 112 * Suspend-to-idle ("freeze") is a system state in which all user space 111 113 * has been frozen, all I/O devices have been suspended and the only ··· 119 115 * until a proper wakeup interrupt happens. 120 116 */ 121 117 if (idle_should_freeze()) { 122 - cpuidle_enter_freeze(); 123 - local_irq_enable(); 124 - goto exit_idle; 125 - } 126 - 127 - /* 128 - * Ask the cpuidle framework to choose a convenient idle state. 129 - * Fall back to the default arch idle method on errors. 130 - */ 131 - next_state = cpuidle_select(drv, dev); 132 - if (next_state < 0) { 133 - use_default: 134 - /* 135 - * We can't use the cpuidle framework, let's use the default 136 - * idle routine. 137 - */ 138 - if (current_clr_polling_and_test()) 118 + entered_state = cpuidle_enter_freeze(drv, dev); 119 + if (entered_state >= 0) { 139 120 local_irq_enable(); 140 - else 141 - arch_cpu_idle(); 121 + goto exit_idle; 122 + } 142 123 143 - goto exit_idle; 124 + reflect = false; 125 + next_state = cpuidle_find_deepest_state(drv, dev); 126 + } else { 127 + reflect = true; 128 + /* 129 + * Ask the cpuidle framework to choose a convenient idle state. 130 + */ 131 + next_state = cpuidle_select(drv, dev); 144 132 } 145 - 133 + /* Fall back to the default arch idle method on errors. 
*/ 134 + if (next_state < 0) 135 + goto use_default; 146 136 147 137 /* 148 138 * The idle task must be scheduled, it is pointless to ··· 181 183 /* 182 184 * Give the governor an opportunity to reflect on the outcome 183 185 */ 184 - cpuidle_reflect(dev, entered_state); 186 + if (reflect) 187 + cpuidle_reflect(dev, entered_state); 185 188 186 189 exit_idle: 187 190 __current_set_polling(); ··· 195 196 196 197 rcu_idle_exit(); 197 198 start_critical_timings(); 199 + return; 200 + 201 + use_default: 202 + /* 203 + * We can't use the cpuidle framework, let's use the default 204 + * idle routine. 205 + */ 206 + if (current_clr_polling_and_test()) 207 + local_irq_enable(); 208 + else 209 + arch_cpu_idle(); 210 + 211 + goto exit_idle; 198 212 } 199 213 200 214 /*
+30 -10
kernel/trace/ftrace.c
··· 1059 1059 1060 1060 static struct pid * const ftrace_swapper_pid = &init_struct_pid; 1061 1061 1062 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1063 + static int ftrace_graph_active; 1064 + #else 1065 + # define ftrace_graph_active 0 1066 + #endif 1067 + 1062 1068 #ifdef CONFIG_DYNAMIC_FTRACE 1063 1069 1064 1070 static struct ftrace_ops *removed_ops; ··· 2047 2041 if (!ftrace_rec_count(rec)) 2048 2042 rec->flags = 0; 2049 2043 else 2050 - /* Just disable the record (keep REGS state) */ 2051 - rec->flags &= ~FTRACE_FL_ENABLED; 2044 + /* 2045 + * Just disable the record, but keep the ops TRAMP 2046 + * and REGS states. The _EN flags must be disabled though. 2047 + */ 2048 + rec->flags &= ~(FTRACE_FL_ENABLED | FTRACE_FL_TRAMP_EN | 2049 + FTRACE_FL_REGS_EN); 2052 2050 } 2053 2051 2054 2052 return FTRACE_UPDATE_MAKE_NOP; ··· 2698 2688 2699 2689 static void ftrace_startup_sysctl(void) 2700 2690 { 2691 + int command; 2692 + 2701 2693 if (unlikely(ftrace_disabled)) 2702 2694 return; 2703 2695 2704 2696 /* Force update next time */ 2705 2697 saved_ftrace_func = NULL; 2706 2698 /* ftrace_start_up is true if we want ftrace running */ 2707 - if (ftrace_start_up) 2708 - ftrace_run_update_code(FTRACE_UPDATE_CALLS); 2699 + if (ftrace_start_up) { 2700 + command = FTRACE_UPDATE_CALLS; 2701 + if (ftrace_graph_active) 2702 + command |= FTRACE_START_FUNC_RET; 2703 + ftrace_startup_enable(command); 2704 + } 2709 2705 } 2710 2706 2711 2707 static void ftrace_shutdown_sysctl(void) 2712 2708 { 2709 + int command; 2710 + 2713 2711 if (unlikely(ftrace_disabled)) 2714 2712 return; 2715 2713 2716 2714 /* ftrace_start_up is true if ftrace is running */ 2717 - if (ftrace_start_up) 2718 - ftrace_run_update_code(FTRACE_DISABLE_CALLS); 2715 + if (ftrace_start_up) { 2716 + command = FTRACE_DISABLE_CALLS; 2717 + if (ftrace_graph_active) 2718 + command |= FTRACE_STOP_FUNC_RET; 2719 + ftrace_run_update_code(command); 2720 + } 2719 2721 } 2720 2722 2721 2723 static cycle_t ftrace_update_time; ··· 5580 
5558 5581 5559 if (ftrace_enabled) { 5582 5560 5583 - ftrace_startup_sysctl(); 5584 - 5585 5561 /* we are starting ftrace again */ 5586 5562 if (ftrace_ops_list != &ftrace_list_end) 5587 5563 update_ftrace_function(); 5564 + 5565 + ftrace_startup_sysctl(); 5588 5566 5589 5567 } else { 5590 5568 /* stopping ftrace calls (just send to ftrace_stub) */ ··· 5611 5589 #endif 5612 5590 ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash) 5613 5591 }; 5614 - 5615 - static int ftrace_graph_active; 5616 5592 5617 5593 int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) 5618 5594 {
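The ftrace hunk above moves `ftrace_graph_active` up and gives it a `#define ... 0` fallback when `CONFIG_FUNCTION_GRAPH_TRACER` is off, so `ftrace_startup_sysctl()` can test it without `#ifdef`s at the call site. A minimal sketch of that idiom, with made-up config symbol and flag values:

```c
#include <assert.h>

/* Sketch of the kernel idiom used in the ftrace hunk: when the feature
 * is configured out, the state variable becomes a compile-time 0, so
 * "if (feature_active)" branches are removed by the compiler.
 * CONFIG_EXAMPLE_FEATURE and the CMD_* values are invented for this demo.
 */
#ifdef CONFIG_EXAMPLE_FEATURE
static int feature_active;          /* real counter when built in */
#else
# define feature_active 0           /* constant-folds the branches away */
#endif

#define CMD_UPDATE_CALLS   0x1
#define CMD_START_FUNC_RET 0x2

/* Mirrors ftrace_startup_sysctl(): OR in the extra command flag only
 * when the optional feature is active. */
static int build_startup_command(void)
{
	int command = CMD_UPDATE_CALLS;

	if (feature_active)             /* dead code when compiled out */
		command |= CMD_START_FUNC_RET;
	return command;
}
```

With the config symbol undefined, the branch is `if (0)` and costs nothing in builds that omit the feature, while builds that define it get the real counter.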
+52 -4
kernel/workqueue.c
··· 2728 2728 } 2729 2729 EXPORT_SYMBOL_GPL(flush_work); 2730 2730 2731 + struct cwt_wait { 2732 + wait_queue_t wait; 2733 + struct work_struct *work; 2734 + }; 2735 + 2736 + static int cwt_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key) 2737 + { 2738 + struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait); 2739 + 2740 + if (cwait->work != key) 2741 + return 0; 2742 + return autoremove_wake_function(wait, mode, sync, key); 2743 + } 2744 + 2731 2745 static bool __cancel_work_timer(struct work_struct *work, bool is_dwork) 2732 2746 { 2747 + static DECLARE_WAIT_QUEUE_HEAD(cancel_waitq); 2733 2748 unsigned long flags; 2734 2749 int ret; 2735 2750 2736 2751 do { 2737 2752 ret = try_to_grab_pending(work, is_dwork, &flags); 2738 2753 /* 2739 - * If someone else is canceling, wait for the same event it 2740 - * would be waiting for before retrying. 2754 + * If someone else is already canceling, wait for it to 2755 + * finish. flush_work() doesn't work for PREEMPT_NONE 2756 + * because we may get scheduled between @work's completion 2757 + * and the other canceling task resuming and clearing 2758 + * CANCELING - flush_work() will return false immediately 2759 + * as @work is no longer busy, try_to_grab_pending() will 2760 + * return -ENOENT as @work is still being canceled and the 2761 + * other canceling task won't be able to clear CANCELING as 2762 + * we're hogging the CPU. 2763 + * 2764 + * Let's wait for completion using a waitqueue. As this 2765 + * may lead to the thundering herd problem, use a custom 2766 + * wake function which matches @work along with exclusive 2767 + * wait and wakeup. 
2741 2768 */ 2742 - if (unlikely(ret == -ENOENT)) 2743 - flush_work(work); 2769 + if (unlikely(ret == -ENOENT)) { 2770 + struct cwt_wait cwait; 2771 + 2772 + init_wait(&cwait.wait); 2773 + cwait.wait.func = cwt_wakefn; 2774 + cwait.work = work; 2775 + 2776 + prepare_to_wait_exclusive(&cancel_waitq, &cwait.wait, 2777 + TASK_UNINTERRUPTIBLE); 2778 + if (work_is_canceling(work)) 2779 + schedule(); 2780 + finish_wait(&cancel_waitq, &cwait.wait); 2781 + } 2744 2782 } while (unlikely(ret < 0)); 2745 2783 2746 2784 /* tell other tasks trying to grab @work to back off */ ··· 2787 2749 2788 2750 flush_work(work); 2789 2751 clear_work_data(work); 2752 + 2753 + /* 2754 + * Paired with prepare_to_wait() above so that either 2755 + * waitqueue_active() is visible here or !work_is_canceling() is 2756 + * visible there. 2757 + */ 2758 + smp_mb(); 2759 + if (waitqueue_active(&cancel_waitq)) 2760 + __wake_up(&cancel_waitq, TASK_NORMAL, 1, work); 2761 + 2790 2762 return ret; 2791 2763 } 2792 2764
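The workqueue hunk replaces a plain `flush_work()` wait with an exclusive wait plus a wake function that only fires for the matching work item, avoiding a thundering herd when many tasks cancel different works. A rough user-space model of that matching logic — the waitqueue is just an array here, and all names are invented for the sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Each waiter records which work item it is cancelling; the wake
 * function only wakes a waiter whose recorded item matches the key,
 * mirroring cwt_wakefn() in the hunk above. */
struct cwt_wait_demo {
	int woken;                /* stands in for the sleeping task */
	const void *work;         /* the work item this waiter cancels */
};

/* Returns nonzero when the waiter matched the key and was woken. */
static int cwt_wakefn_demo(struct cwt_wait_demo *wait, const void *key)
{
	if (wait->work != key)
		return 0;         /* not our work item: keep sleeping */
	wait->woken = 1;
	return 1;
}

/* Exclusive wakeup: stop after the first match, which is what keeps
 * unrelated cancellers asleep. */
static int wake_one_demo(struct cwt_wait_demo *q, size_t n, const void *key)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (cwt_wakefn_demo(&q[i], key))
			return 1;
	return 0;
}

/* Two waiters on different works; waking &b must leave &a's waiter asleep. */
static int cwt_demo_run(void)
{
	int a = 1, b = 2;
	struct cwt_wait_demo q[2] = { { 0, &a }, { 0, &b } };

	wake_one_demo(q, 2, &b);
	return q[0].woken == 0 && q[1].woken == 1;
}
```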
+2 -2
lib/seq_buf.c
··· 61 61 62 62 if (s->len < s->size) { 63 63 len = vsnprintf(s->buffer + s->len, s->size - s->len, fmt, args); 64 - if (seq_buf_can_fit(s, len)) { 64 + if (s->len + len < s->size) { 65 65 s->len += len; 66 66 return 0; 67 67 } ··· 118 118 119 119 if (s->len < s->size) { 120 120 ret = bstr_printf(s->buffer + s->len, len, fmt, binary); 121 - if (seq_buf_can_fit(s, ret)) { 121 + if (s->len + ret < s->size) { 122 122 s->len += ret; 123 123 return 0; 124 124 }
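The seq_buf change replaces the `seq_buf_can_fit()` helper with a direct `s->len + len < s->size` test: the length `vsnprintf()` reports it wanted to write must fit strictly below the buffer size, leaving room for the terminating NUL. A tiny model of that check — struct and helper names are stand-ins, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of the seq_buf accounting in the hunk above. */
struct seq_buf_demo {
	size_t len;    /* bytes used so far */
	size_t size;   /* total buffer capacity */
};

/* Mirrors "if (s->len + len < s->size)": strict < keeps one byte free
 * for the NUL terminator that vsnprintf() does not count. */
static int seq_buf_fits_demo(const struct seq_buf_demo *s, size_t want)
{
	return s->len + want < s->size;
}

/* Convenience wrapper so the check can be exercised with plain values. */
static int seq_buf_fits_vals(size_t len, size_t size, size_t want)
{
	struct seq_buf_demo s = { len, size };

	return seq_buf_fits_demo(&s, want);
}
```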
+3
net/can/af_can.c
··· 259 259 goto inval_skb; 260 260 } 261 261 262 + skb->ip_summed = CHECKSUM_UNNECESSARY; 263 + 264 + skb_reset_mac_header(skb); 262 265 skb_reset_network_header(skb); 263 266 skb_reset_transport_header(skb); 264 267
+7 -4
net/ipv4/ip_fragment.c
··· 659 659 struct sk_buff *ip_check_defrag(struct sk_buff *skb, u32 user) 660 660 { 661 661 struct iphdr iph; 662 + int netoff; 662 663 u32 len; 663 664 664 665 if (skb->protocol != htons(ETH_P_IP)) 665 666 return skb; 666 667 667 - if (skb_copy_bits(skb, 0, &iph, sizeof(iph)) < 0) 668 + netoff = skb_network_offset(skb); 669 + 670 + if (skb_copy_bits(skb, netoff, &iph, sizeof(iph)) < 0) 668 671 return skb; 669 672 670 673 if (iph.ihl < 5 || iph.version != 4) 671 674 return skb; 672 675 673 676 len = ntohs(iph.tot_len); 674 - if (skb->len < len || len < (iph.ihl * 4)) 677 + if (skb->len < netoff + len || len < (iph.ihl * 4)) 675 678 return skb; 676 679 677 680 if (ip_is_fragment(&iph)) { 678 681 skb = skb_share_check(skb, GFP_ATOMIC); 679 682 if (skb) { 680 - if (!pskb_may_pull(skb, iph.ihl*4)) 683 + if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) 681 684 return skb; 682 - if (pskb_trim_rcsum(skb, len)) 685 + if (pskb_trim_rcsum(skb, netoff + len)) 683 686 return skb; 684 687 memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); 685 688 if (ip_defrag(skb, user))
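The ip_fragment fix threads `skb_network_offset()` through every length comparison, since the IP header may no longer start at offset 0 of the buffer. The adjusted bounds checks, reduced to plain integers (in the kernel the inputs come from `skb->len`, `skb_network_offset()`, `ntohs(iph.tot_len)` and `iph.ihl`):

```c
#include <assert.h>

/* Returns 1 when the lengths are consistent enough to attempt defrag,
 * mirroring the three checks in the corrected ip_check_defrag(). */
static int defrag_len_ok_demo(int skb_len, int netoff, int tot_len, int ihl)
{
	if (ihl < 5)                     /* malformed IPv4 header length */
		return 0;
	if (skb_len < netoff + tot_len)  /* datagram truncated in buffer */
		return 0;
	if (tot_len < ihl * 4)           /* total length below header size */
		return 0;
	return 1;
}
```

Before the fix, the middle comparison omitted `netoff`, so a packet whose header began past offset 0 could pass the check while its tail lay outside the buffer.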
+23 -10
net/ipv4/ip_sockglue.c
··· 432 432 kfree_skb(skb); 433 433 } 434 434 435 - static bool ipv4_pktinfo_prepare_errqueue(const struct sock *sk, 436 - const struct sk_buff *skb, 437 - int ee_origin) 435 + /* IPv4 supports cmsg on all icmp errors and some timestamps 436 + * 437 + * Timestamp code paths do not initialize the fields expected by cmsg: 438 + * the PKTINFO fields in skb->cb[]. Fill those in here. 439 + */ 440 + static bool ipv4_datagram_support_cmsg(const struct sock *sk, 441 + struct sk_buff *skb, 442 + int ee_origin) 438 443 { 439 - struct in_pktinfo *info = PKTINFO_SKB_CB(skb); 444 + struct in_pktinfo *info; 440 445 441 - if ((ee_origin != SO_EE_ORIGIN_TIMESTAMPING) || 442 - (!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) || 446 + if (ee_origin == SO_EE_ORIGIN_ICMP) 447 + return true; 448 + 449 + if (ee_origin == SO_EE_ORIGIN_LOCAL) 450 + return false; 451 + 452 + /* Support IP_PKTINFO on tstamp packets if requested, to correlate 453 + * timestamp with egress dev. Not possible for packets without dev 454 + * or without payload (SOF_TIMESTAMPING_OPT_TSONLY). 455 + */ 456 + if ((!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) || 443 457 (!skb->dev)) 444 458 return false; 445 459 460 + info = PKTINFO_SKB_CB(skb); 446 461 info->ipi_spec_dst.s_addr = ip_hdr(skb)->saddr; 447 462 info->ipi_ifindex = skb->dev->ifindex; 448 463 return true; ··· 498 483 499 484 serr = SKB_EXT_ERR(skb); 500 485 501 - if (sin && skb->len) { 486 + if (sin && serr->port) { 502 487 sin->sin_family = AF_INET; 503 488 sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) + 504 489 serr->addr_offset); ··· 511 496 sin = &errhdr.offender; 512 497 memset(sin, 0, sizeof(*sin)); 513 498 514 - if (skb->len && 515 - (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP || 516 - ipv4_pktinfo_prepare_errqueue(sk, skb, serr->ee.ee_origin))) { 499 + if (ipv4_datagram_support_cmsg(sk, skb, serr->ee.ee_origin)) { 517 500 sin->sin_family = AF_INET; 518 501 sin->sin_addr.s_addr = ip_hdr(skb)->saddr; 519 502 if (inet_sk(sk)->cmsg_flags)
+10 -2
net/ipv4/ping.c
··· 259 259 kgid_t low, high; 260 260 int ret = 0; 261 261 262 + if (sk->sk_family == AF_INET6) 263 + sk->sk_ipv6only = 1; 264 + 262 265 inet_get_ping_group_range_net(net, &low, &high); 263 266 if (gid_lte(low, group) && gid_lte(group, high)) 264 267 return 0; ··· 308 305 if (addr_len < sizeof(*addr)) 309 306 return -EINVAL; 310 307 308 + if (addr->sin_family != AF_INET && 309 + !(addr->sin_family == AF_UNSPEC && 310 + addr->sin_addr.s_addr == htonl(INADDR_ANY))) 311 + return -EAFNOSUPPORT; 312 + 311 313 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n", 312 314 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port)); 313 315 ··· 338 330 return -EINVAL; 339 331 340 332 if (addr->sin6_family != AF_INET6) 341 - return -EINVAL; 333 + return -EAFNOSUPPORT; 342 334 343 335 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI6c,port=%d)\n", 344 336 sk, addr->sin6_addr.s6_addr, ntohs(addr->sin6_port)); ··· 723 715 if (msg->msg_namelen < sizeof(*usin)) 724 716 return -EINVAL; 725 717 if (usin->sin_family != AF_INET) 726 - return -EINVAL; 718 + return -EAFNOSUPPORT; 727 719 daddr = usin->sin_addr.s_addr; 728 720 /* no remote port */ 729 721 } else {
+3 -7
net/ipv4/tcp.c
··· 835 835 int large_allowed) 836 836 { 837 837 struct tcp_sock *tp = tcp_sk(sk); 838 - u32 new_size_goal, size_goal, hlen; 838 + u32 new_size_goal, size_goal; 839 839 840 840 if (!large_allowed || !sk_can_gso(sk)) 841 841 return mss_now; 842 842 843 - /* Maybe we should/could use sk->sk_prot->max_header here ? */ 844 - hlen = inet_csk(sk)->icsk_af_ops->net_header_len + 845 - inet_csk(sk)->icsk_ext_hdr_len + 846 - tp->tcp_header_len; 847 - 848 - new_size_goal = sk->sk_gso_max_size - 1 - hlen; 843 + /* Note : tcp_tso_autosize() will eventually split this later */ 844 + new_size_goal = sk->sk_gso_max_size - 1 - MAX_TCP_HEADER; 849 845 new_size_goal = tcp_bound_to_half_wnd(tp, new_size_goal); 850 846 851 847 /* We try hard to avoid divides here */
+28 -11
net/ipv6/datagram.c
··· 325 325 kfree_skb(skb); 326 326 } 327 327 328 - static void ip6_datagram_prepare_pktinfo_errqueue(struct sk_buff *skb) 328 + /* IPv6 supports cmsg on all origins aside from SO_EE_ORIGIN_LOCAL. 329 + * 330 + * At one point, excluding local errors was a quick test to identify icmp/icmp6 331 + * errors. This is no longer true, but the test remained, so the v6 stack, 332 + * unlike v4, also honors cmsg requests on all wifi and timestamp errors. 333 + * 334 + * Timestamp code paths do not initialize the fields expected by cmsg: 335 + * the PKTINFO fields in skb->cb[]. Fill those in here. 336 + */ 337 + static bool ip6_datagram_support_cmsg(struct sk_buff *skb, 338 + struct sock_exterr_skb *serr) 329 339 { 330 - int ifindex = skb->dev ? skb->dev->ifindex : -1; 340 + if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP || 341 + serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6) 342 + return true; 343 + 344 + if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL) 345 + return false; 346 + 347 + if (!skb->dev) 348 + return false; 331 349 332 350 if (skb->protocol == htons(ETH_P_IPV6)) 333 - IP6CB(skb)->iif = ifindex; 351 + IP6CB(skb)->iif = skb->dev->ifindex; 334 352 else 335 - PKTINFO_SKB_CB(skb)->ipi_ifindex = ifindex; 353 + PKTINFO_SKB_CB(skb)->ipi_ifindex = skb->dev->ifindex; 354 + 355 + return true; 336 356 } 337 357 338 358 /* ··· 389 369 390 370 serr = SKB_EXT_ERR(skb); 391 371 392 - if (sin && skb->len) { 372 + if (sin && serr->port) { 393 373 const unsigned char *nh = skb_network_header(skb); 394 374 sin->sin6_family = AF_INET6; 395 375 sin->sin6_flowinfo = 0; ··· 414 394 memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err)); 415 395 sin = &errhdr.offender; 416 396 memset(sin, 0, sizeof(*sin)); 417 - if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL && skb->len) { 397 + 398 + if (ip6_datagram_support_cmsg(skb, serr)) { 418 399 sin->sin6_family = AF_INET6; 419 - if (np->rxopt.all) { 420 - if (serr->ee.ee_origin != SO_EE_ORIGIN_ICMP && 421 - serr->ee.ee_origin != SO_EE_ORIGIN_ICMP6) 
422 - ip6_datagram_prepare_pktinfo_errqueue(skb); 400 + if (np->rxopt.all) 423 401 ip6_datagram_recv_common_ctl(sk, msg, skb); 424 - } 425 402 if (skb->protocol == htons(ETH_P_IPV6)) { 426 403 sin->sin6_addr = ipv6_hdr(skb)->saddr; 427 404 if (np->rxopt.all)
+3 -2
net/ipv6/ping.c
··· 101 101 102 102 if (msg->msg_name) { 103 103 DECLARE_SOCKADDR(struct sockaddr_in6 *, u, msg->msg_name); 104 - if (msg->msg_namelen < sizeof(struct sockaddr_in6) || 105 - u->sin6_family != AF_INET6) { 104 + if (msg->msg_namelen < sizeof(*u)) 106 105 return -EINVAL; 106 + if (u->sin6_family != AF_INET6) { 107 + return -EAFNOSUPPORT; 107 108 } 108 109 if (sk->sk_bound_dev_if && 109 110 sk->sk_bound_dev_if != u->sin6_scope_id) {
+3 -1
net/irda/ircomm/ircomm_tty.c
··· 798 798 orig_jiffies = jiffies; 799 799 800 800 /* Set poll time to 200 ms */ 801 - poll_time = IRDA_MIN(timeout, msecs_to_jiffies(200)); 801 + poll_time = msecs_to_jiffies(200); 802 + if (timeout) 803 + poll_time = min_t(unsigned long, timeout, poll_time); 802 804 803 805 spin_lock_irqsave(&self->spinlock, flags); 804 806 while (self->tx_skb && self->tx_skb->len) {
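The IrDA change drops `IRDA_MIN()` in favor of an explicit rule: a `timeout` of 0 means "no limit", so it must not shrink the 200 ms poll interval to zero (the old min could busy-loop). The same logic without jiffies, as a sketch:

```c
#include <assert.h>

/* Mirrors the corrected poll-time computation: start at 200 ms, and
 * only clamp it down when a nonzero timeout was actually requested. */
static unsigned long poll_time_demo(unsigned long timeout_ms)
{
	unsigned long poll_ms = 200;

	if (timeout_ms && timeout_ms < poll_ms)
		poll_ms = timeout_ms;   /* real timeout shorter than poll */
	return poll_ms;
}
```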
+3
net/netfilter/ipvs/ip_vs_sync.c
··· 913 913 IP_VS_DBG(2, "BACKUP, add new conn. failed\n"); 914 914 return; 915 915 } 916 + if (!(flags & IP_VS_CONN_F_TEMPLATE)) 917 + kfree(param->pe_data); 916 918 } 917 919 918 920 if (opt) ··· 1188 1186 (opt_flags & IPVS_OPT_F_SEQ_DATA ? &opt : NULL) 1189 1187 ); 1190 1188 #endif 1189 + ip_vs_pe_put(param.pe); 1191 1190 return 0; 1192 1191 /* Error exit */ 1193 1192 out:
+36 -25
net/netfilter/nf_tables_api.c
··· 227 227 228 228 static inline void nft_rule_clear(struct net *net, struct nft_rule *rule) 229 229 { 230 - rule->genmask = 0; 230 + rule->genmask &= ~(1 << gencursor_next(net)); 231 231 } 232 232 233 233 static int ··· 1712 1712 } 1713 1713 nla_nest_end(skb, list); 1714 1714 1715 - if (rule->ulen && 1716 - nla_put(skb, NFTA_RULE_USERDATA, rule->ulen, nft_userdata(rule))) 1717 - goto nla_put_failure; 1715 + if (rule->udata) { 1716 + struct nft_userdata *udata = nft_userdata(rule); 1717 + if (nla_put(skb, NFTA_RULE_USERDATA, udata->len + 1, 1718 + udata->data) < 0) 1719 + goto nla_put_failure; 1720 + } 1718 1721 1719 1722 nlmsg_end(skb, nlh); 1720 1723 return 0; ··· 1900 1897 struct nft_table *table; 1901 1898 struct nft_chain *chain; 1902 1899 struct nft_rule *rule, *old_rule = NULL; 1900 + struct nft_userdata *udata; 1903 1901 struct nft_trans *trans = NULL; 1904 1902 struct nft_expr *expr; 1905 1903 struct nft_ctx ctx; 1906 1904 struct nlattr *tmp; 1907 - unsigned int size, i, n, ulen = 0; 1905 + unsigned int size, i, n, ulen = 0, usize = 0; 1908 1906 int err, rem; 1909 1907 bool create; 1910 1908 u64 handle, pos_handle; ··· 1973 1969 n++; 1974 1970 } 1975 1971 } 1972 + /* Check for overflow of dlen field */ 1973 + err = -EFBIG; 1974 + if (size >= 1 << 12) 1975 + goto err1; 1976 1976 1977 - if (nla[NFTA_RULE_USERDATA]) 1977 + if (nla[NFTA_RULE_USERDATA]) { 1978 1978 ulen = nla_len(nla[NFTA_RULE_USERDATA]); 1979 + if (ulen > 0) 1980 + usize = sizeof(struct nft_userdata) + ulen; 1981 + } 1979 1982 1980 1983 err = -ENOMEM; 1981 - rule = kzalloc(sizeof(*rule) + size + ulen, GFP_KERNEL); 1984 + rule = kzalloc(sizeof(*rule) + size + usize, GFP_KERNEL); 1982 1985 if (rule == NULL) 1983 1986 goto err1; 1984 1987 ··· 1993 1982 1994 1983 rule->handle = handle; 1995 1984 rule->dlen = size; 1996 - rule->ulen = ulen; 1985 + rule->udata = ulen ? 
1 : 0; 1997 1986 1998 - if (ulen) 1999 - nla_memcpy(nft_userdata(rule), nla[NFTA_RULE_USERDATA], ulen); 1987 + if (ulen) { 1988 + udata = nft_userdata(rule); 1989 + udata->len = ulen - 1; 1990 + nla_memcpy(udata->data, nla[NFTA_RULE_USERDATA], ulen); 1991 + } 2000 1992 2001 1993 expr = nft_expr_first(rule); 2002 1994 for (i = 0; i < n; i++) { ··· 2046 2032 2047 2033 err3: 2048 2034 list_del_rcu(&rule->list); 2049 - if (trans) { 2050 - list_del_rcu(&nft_trans_rule(trans)->list); 2051 - nft_rule_clear(net, nft_trans_rule(trans)); 2052 - nft_trans_destroy(trans); 2053 - chain->use++; 2054 - } 2055 2035 err2: 2056 2036 nf_tables_rule_destroy(&ctx, rule); 2057 2037 err1: ··· 3621 3613 &te->elem, 3622 3614 NFT_MSG_DELSETELEM, 0); 3623 3615 te->set->ops->get(te->set, &te->elem); 3624 - te->set->ops->remove(te->set, &te->elem); 3625 3616 nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3626 - if (te->elem.flags & NFT_SET_MAP) { 3627 - nft_data_uninit(&te->elem.data, 3628 - te->set->dtype); 3629 - } 3617 + if (te->set->flags & NFT_SET_MAP && 3618 + !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END)) 3619 + nft_data_uninit(&te->elem.data, te->set->dtype); 3620 + te->set->ops->remove(te->set, &te->elem); 3630 3621 nft_trans_destroy(trans); 3631 3622 break; 3632 3623 } ··· 3666 3659 { 3667 3660 struct net *net = sock_net(skb->sk); 3668 3661 struct nft_trans *trans, *next; 3669 - struct nft_set *set; 3662 + struct nft_trans_elem *te; 3670 3663 3671 3664 list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) { 3672 3665 switch (trans->msg_type) { ··· 3727 3720 break; 3728 3721 case NFT_MSG_NEWSETELEM: 3729 3722 nft_trans_elem_set(trans)->nelems--; 3730 - set = nft_trans_elem_set(trans); 3731 - set->ops->get(set, &nft_trans_elem(trans)); 3732 - set->ops->remove(set, &nft_trans_elem(trans)); 3723 + te = (struct nft_trans_elem *)trans->data; 3724 + te->set->ops->get(te->set, &te->elem); 3725 + nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3726 + if (te->set->flags & 
NFT_SET_MAP && 3727 + !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END)) 3728 + nft_data_uninit(&te->elem.data, te->set->dtype); 3729 + te->set->ops->remove(te->set, &te->elem); 3733 3730 nft_trans_destroy(trans); 3734 3731 break; 3735 3732 case NFT_MSG_DELSETELEM:
+7 -7
net/netfilter/nft_compat.c
··· 125 125 nft_target_set_tgchk_param(struct xt_tgchk_param *par, 126 126 const struct nft_ctx *ctx, 127 127 struct xt_target *target, void *info, 128 - union nft_entry *entry, u8 proto, bool inv) 128 + union nft_entry *entry, u16 proto, bool inv) 129 129 { 130 130 par->net = ctx->net; 131 131 par->table = ctx->table->name; ··· 139 139 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 140 140 break; 141 141 case NFPROTO_BRIDGE: 142 - entry->ebt.ethproto = proto; 142 + entry->ebt.ethproto = (__force __be16)proto; 143 143 entry->ebt.invflags = inv ? EBT_IPROTO : 0; 144 144 break; 145 145 case NFPROTO_ARP: ··· 175 175 [NFTA_RULE_COMPAT_FLAGS] = { .type = NLA_U32 }, 176 176 }; 177 177 178 - static int nft_parse_compat(const struct nlattr *attr, u8 *proto, bool *inv) 178 + static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv) 179 179 { 180 180 struct nlattr *tb[NFTA_RULE_COMPAT_MAX+1]; 181 181 u32 flags; ··· 207 207 struct xt_target *target = expr->ops->data; 208 208 struct xt_tgchk_param par; 209 209 size_t size = XT_ALIGN(nla_len(tb[NFTA_TARGET_INFO])); 210 - u8 proto = 0; 210 + u16 proto = 0; 211 211 bool inv = false; 212 212 union nft_entry e = {}; 213 213 int ret; ··· 338 338 static void 339 339 nft_match_set_mtchk_param(struct xt_mtchk_param *par, const struct nft_ctx *ctx, 340 340 struct xt_match *match, void *info, 341 - union nft_entry *entry, u8 proto, bool inv) 341 + union nft_entry *entry, u16 proto, bool inv) 342 342 { 343 343 par->net = ctx->net; 344 344 par->table = ctx->table->name; ··· 352 352 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 353 353 break; 354 354 case NFPROTO_BRIDGE: 355 - entry->ebt.ethproto = proto; 355 + entry->ebt.ethproto = (__force __be16)proto; 356 356 entry->ebt.invflags = inv ? 
EBT_IPROTO : 0; 357 357 break; 358 358 case NFPROTO_ARP: ··· 391 391 struct xt_match *match = expr->ops->data; 392 392 struct xt_mtchk_param par; 393 393 size_t size = XT_ALIGN(nla_len(tb[NFTA_MATCH_INFO])); 394 - u8 proto = 0; 394 + u16 proto = 0; 395 395 bool inv = false; 396 396 union nft_entry e = {}; 397 397 int ret;
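The nft_compat hunk widens `proto` from `u8` to `u16` throughout: an EtherType such as `ETH_P_IP` (0x0800) needs 16 bits, and the old `u8` parameter silently truncated it before it reached `entry->ebt.ethproto`. A minimal demonstration of the truncation (function names are invented for this sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Old, buggy parameter width: the cast to uint8_t discards the high
 * byte, turning 0x0800 into 0x00. */
static uint16_t store_proto_u8(uint16_t proto)
{
	uint8_t narrow = (uint8_t)proto;

	return narrow;
}

/* Widened parameter, as in the fix: the value survives intact. */
static uint16_t store_proto_u16(uint16_t proto)
{
	return proto;
}
```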
+14 -8
net/packet/af_packet.c
··· 3139 3139 return 0; 3140 3140 } 3141 3141 3142 - static void packet_dev_mclist(struct net_device *dev, struct packet_mclist *i, int what) 3142 + static void packet_dev_mclist_delete(struct net_device *dev, 3143 + struct packet_mclist **mlp) 3143 3144 { 3144 - for ( ; i; i = i->next) { 3145 - if (i->ifindex == dev->ifindex) 3146 - packet_dev_mc(dev, i, what); 3145 + struct packet_mclist *ml; 3146 + 3147 + while ((ml = *mlp) != NULL) { 3148 + if (ml->ifindex == dev->ifindex) { 3149 + packet_dev_mc(dev, ml, -1); 3150 + *mlp = ml->next; 3151 + kfree(ml); 3152 + } else 3153 + mlp = &ml->next; 3147 3154 } 3148 3155 } 3149 3156 ··· 3227 3220 packet_dev_mc(dev, ml, -1); 3228 3221 kfree(ml); 3229 3222 } 3230 - rtnl_unlock(); 3231 - return 0; 3223 + break; 3232 3224 } 3233 3225 } 3234 3226 rtnl_unlock(); 3235 - return -EADDRNOTAVAIL; 3227 + return 0; 3236 3228 } 3237 3229 3238 3230 static void packet_flush_mclist(struct sock *sk) ··· 3581 3575 switch (msg) { 3582 3576 case NETDEV_UNREGISTER: 3583 3577 if (po->mclist) 3584 - packet_dev_mclist(dev, po->mclist, -1); 3578 + packet_dev_mclist_delete(dev, &po->mclist); 3585 3579 /* fallthrough */ 3586 3580 3587 3581 case NETDEV_DOWN:
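`packet_dev_mclist_delete()` above is a textbook use of the pointer-to-pointer idiom: walking the list through `struct packet_mclist **mlp` lets matching nodes be unlinked and freed in place without tracking a separate `prev` pointer. A self-contained sketch with a simplified node type:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for struct packet_mclist. */
struct mclist_demo {
	int ifindex;
	struct mclist_demo *next;
};

/* Remove and free every node matching ifindex; returns how many were
 * removed. *mlp always points at the link being examined, so unlinking
 * is just "*mlp = ml->next". */
static int mclist_delete_demo(struct mclist_demo **mlp, int ifindex)
{
	struct mclist_demo *ml;
	int removed = 0;

	while ((ml = *mlp) != NULL) {
		if (ml->ifindex == ifindex) {
			*mlp = ml->next;   /* unlink without a prev pointer */
			free(ml);
			removed++;
		} else {
			mlp = &ml->next;   /* advance the link being tested */
		}
	}
	return removed;
}

/* Build the list 1,2,2,3, delete ifindex 2, and encode both outcomes
 * in one value: survivors * 10 + removed. */
static int mclist_demo_run(void)
{
	int ifs[4] = { 1, 2, 2, 3 }, i, survivors = 0, removed;
	struct mclist_demo *head = NULL, **tail = &head, *ml;

	for (i = 0; i < 4; i++) {
		ml = malloc(sizeof(*ml));
		ml->ifindex = ifs[i];
		ml->next = NULL;
		*tail = ml;
		tail = &ml->next;
	}
	removed = mclist_delete_demo(&head, 2);
	for (ml = head; ml; ml = ml->next)
		survivors++;
	while ((ml = head) != NULL) {   /* free the survivors */
		head = ml->next;
		free(ml);
	}
	return survivors * 10 + removed;
}
```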
+2 -2
net/rxrpc/ar-error.c
··· 42 42 _leave("UDP socket errqueue empty"); 43 43 return; 44 44 } 45 - if (!skb->len) { 45 + serr = SKB_EXT_ERR(skb); 46 + if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) { 46 47 _leave("UDP empty message"); 47 48 kfree_skb(skb); 48 49 return; ··· 51 50 52 51 rxrpc_new_skb(skb); 53 52 54 - serr = SKB_EXT_ERR(skb); 55 53 addr = *(__be32 *)(skb_network_header(skb) + serr->addr_offset); 56 54 port = serr->port; 57 55
+1 -1
net/sunrpc/cache.c
··· 921 921 poll_wait(filp, &queue_wait, wait); 922 922 923 923 /* always allow write */ 924 - mask = POLL_OUT | POLLWRNORM; 924 + mask = POLLOUT | POLLWRNORM; 925 925 926 926 if (!rp) 927 927 return mask;
+2 -1
net/sunrpc/xprtrdma/rpc_rdma.c
··· 738 738 struct rpc_xprt *xprt = rep->rr_xprt; 739 739 struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt); 740 740 __be32 *iptr; 741 - int credits, rdmalen, status; 741 + int rdmalen, status; 742 742 unsigned long cwnd; 743 + u32 credits; 743 744 744 745 /* Check status. If bad, signal disconnect and return rep to pool */ 745 746 if (rep->rr_len == ~0U) {
+1 -1
net/sunrpc/xprtrdma/xprt_rdma.h
··· 285 285 */ 286 286 struct rpcrdma_buffer { 287 287 spinlock_t rb_lock; /* protects indexes */ 288 - int rb_max_requests;/* client max requests */ 288 + u32 rb_max_requests;/* client max requests */ 289 289 struct list_head rb_mws; /* optional memory windows/fmrs/frmrs */ 290 290 struct list_head rb_all; 291 291 int rb_send_index;
+4 -3
net/tipc/link.c
··· 464 464 /* Clean up all queues, except inputq: */ 465 465 __skb_queue_purge(&l_ptr->outqueue); 466 466 __skb_queue_purge(&l_ptr->deferred_queue); 467 - skb_queue_splice_init(&l_ptr->wakeupq, &l_ptr->inputq); 468 - if (!skb_queue_empty(&l_ptr->inputq)) 467 + if (!owner->inputq) 468 + owner->inputq = &l_ptr->inputq; 469 + skb_queue_splice_init(&l_ptr->wakeupq, owner->inputq); 470 + if (!skb_queue_empty(owner->inputq)) 469 471 owner->action_flags |= TIPC_MSG_EVT; 470 - owner->inputq = &l_ptr->inputq; 471 472 l_ptr->next_out = NULL; 472 473 l_ptr->unacked_window = 0; 473 474 l_ptr->checkpoint = 1;
+2
sound/drivers/opl3/opl3_midi.c
··· 105 105 int pitchbend = chan->midi_pitchbend; 106 106 int segment; 107 107 108 + if (pitchbend < -0x2000) 109 + pitchbend = -0x2000; 108 110 if (pitchbend > 0x1FFF) 109 111 pitchbend = 0x1FFF; 110 112
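The opl3 hunk adds the missing lower bound: a centered MIDI pitchbend is a signed 14-bit quantity, so it must be clamped to [-0x2000, 0x1FFF] on both sides before use. As a standalone helper:

```c
#include <assert.h>

/* Clamp a centered 14-bit MIDI pitchbend to its valid signed range,
 * matching the pair of checks in the fixed driver code. */
static int clamp_pitchbend_demo(int pitchbend)
{
	if (pitchbend < -0x2000)
		pitchbend = -0x2000;   /* the check the patch adds */
	if (pitchbend > 0x1FFF)
		pitchbend = 0x1FFF;    /* the check that was already there */
	return pitchbend;
}
```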
+9 -9
sound/firewire/dice/dice-interface.h
··· 299 299 #define RX_ISOCHRONOUS 0x008 300 300 301 301 /* 302 - * Index of first quadlet to be interpreted; read/write. If > 0, that many 303 - * quadlets at the beginning of each data block will be ignored, and all the 304 - * audio and MIDI quadlets will follow. 305 - */ 306 - #define RX_SEQ_START 0x00c 307 - 308 - /* 309 302 * The number of audio channels; read-only. There will be one quadlet per 310 303 * channel. 311 304 */ 312 - #define RX_NUMBER_AUDIO 0x010 305 + #define RX_NUMBER_AUDIO 0x00c 313 306 314 307 /* 315 308 * The number of MIDI ports, 0-8; read-only. If > 0, there will be one 316 309 * additional quadlet in each data block, following the audio quadlets. 317 310 */ 318 - #define RX_NUMBER_MIDI 0x014 311 + #define RX_NUMBER_MIDI 0x010 312 + 313 + /* 314 + * Index of first quadlet to be interpreted; read/write. If > 0, that many 315 + * quadlets at the beginning of each data block will be ignored, and all the 316 + * audio and MIDI quadlets will follow. 317 + */ 318 + #define RX_SEQ_START 0x014 319 319 320 320 /* 321 321 * Names of all audio channels; read-only. Quadlets are byte-swapped. Names
+2 -2
sound/firewire/dice/dice-proc.c
··· 99 99 } tx; 100 100 struct { 101 101 u32 iso; 102 - u32 seq_start; 103 102 u32 number_audio; 104 103 u32 number_midi; 104 + u32 seq_start; 105 105 char names[RX_NAMES_SIZE]; 106 106 u32 ac3_caps; 107 107 u32 ac3_enable; ··· 204 204 break; 205 205 snd_iprintf(buffer, "rx %u:\n", stream); 206 206 snd_iprintf(buffer, " iso channel: %d\n", (int)buf.rx.iso); 207 - snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start); 208 207 snd_iprintf(buffer, " audio channels: %u\n", 209 208 buf.rx.number_audio); 210 209 snd_iprintf(buffer, " midi ports: %u\n", buf.rx.number_midi); 210 + snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start); 211 211 if (quadlets >= 68) { 212 212 dice_proc_fixup_string(buf.rx.names, RX_NAMES_SIZE); 213 213 snd_iprintf(buffer, " names: %s\n", buf.rx.names);
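The two DICE hunks keep the register map and the proc-file struct in the same order, because the driver fills the struct with one bulk read of the RX register block. One way to lock that invariant down is `offsetof()` against the register offsets; the offsets below copy the corrected header (`RX_NUMBER_AUDIO` = 0x00c, `RX_NUMBER_MIDI` = 0x010, `RX_SEQ_START` = 0x014), and the struct is a trimmed stand-in for the real driver's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define RX_ISOCHRONOUS_DEMO  0x008
#define RX_NUMBER_AUDIO_DEMO 0x00c
#define RX_NUMBER_MIDI_DEMO  0x010
#define RX_SEQ_START_DEMO    0x014

/* Members must appear in exactly the register order so a bulk read
 * starting at RX_ISOCHRONOUS fills them all correctly. */
struct rx_regs_demo {
	uint32_t iso;
	uint32_t number_audio;
	uint32_t number_midi;
	uint32_t seq_start;
};

/* Each member must sit at (register offset - base of the RX block);
 * a compile-time check catches any reordering mistake. */
#define RX_BASE_DEMO RX_ISOCHRONOUS_DEMO
_Static_assert(offsetof(struct rx_regs_demo, number_audio) ==
	       RX_NUMBER_AUDIO_DEMO - RX_BASE_DEMO, "audio offset");
_Static_assert(offsetof(struct rx_regs_demo, seq_start) ==
	       RX_SEQ_START_DEMO - RX_BASE_DEMO, "seq_start offset");
```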
+3 -2
sound/firewire/oxfw/oxfw-stream.c
··· 171 171 } 172 172 173 173 /* Wait first packet */ 174 - err = amdtp_stream_wait_callback(stream, CALLBACK_TIMEOUT); 175 - if (err < 0) 174 + if (!amdtp_stream_wait_callback(stream, CALLBACK_TIMEOUT)) { 176 175 stop_stream(oxfw, stream); 176 + err = -ETIMEDOUT; 177 + } 177 178 end: 178 179 return err; 179 180 }
+2 -1
sound/isa/msnd/msnd_pinnacle_mixer.c
··· 306 306 spin_lock_init(&chip->mixer_lock); 307 307 strcpy(card->mixername, "MSND Pinnacle Mixer"); 308 308 309 - for (idx = 0; idx < ARRAY_SIZE(snd_msnd_controls); idx++) 309 + for (idx = 0; idx < ARRAY_SIZE(snd_msnd_controls); idx++) { 310 310 err = snd_ctl_add(card, 311 311 snd_ctl_new1(snd_msnd_controls + idx, chip)); 312 312 if (err < 0) 313 313 return err; 314 + } 314 315 315 316 return 0; 316 317 }
+7
sound/pci/hda/patch_realtek.c
··· 5209 5209 {0x17, 0x40000000}, 5210 5210 {0x1d, 0x40700001}, 5211 5211 {0x21, 0x02211040}), 5212 + SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5213 + ALC255_STANDARD_PINS, 5214 + {0x12, 0x90a60170}, 5215 + {0x14, 0x90170140}, 5216 + {0x17, 0x40000000}, 5217 + {0x1d, 0x40700001}, 5218 + {0x21, 0x02211050}), 5212 5219 SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4, 5213 5220 {0x12, 0x90a60130}, 5214 5221 {0x13, 0x40000000},
+29 -35
sound/soc/atmel/sam9g20_wm8731.c
··· 46 46 #include <sound/pcm_params.h> 47 47 #include <sound/soc.h> 48 48 49 - #include <asm/mach-types.h> 50 - 51 49 #include "../codecs/wm8731.h" 52 50 #include "atmel-pcm.h" 53 51 #include "atmel_ssc_dai.h" ··· 169 171 int ret; 170 172 171 173 if (!np) { 172 - if (!(machine_is_at91sam9g20ek() || 173 - machine_is_at91sam9g20ek_2mmc())) 174 - return -ENODEV; 174 + return -ENODEV; 175 175 } 176 176 177 177 ret = atmel_ssc_set_audio(0); ··· 206 210 card->dev = &pdev->dev; 207 211 208 212 /* Parse device node info */ 209 - if (np) { 210 - ret = snd_soc_of_parse_card_name(card, "atmel,model"); 211 - if (ret) 212 - goto err; 213 + ret = snd_soc_of_parse_card_name(card, "atmel,model"); 214 + if (ret) 215 + goto err; 213 216 214 - ret = snd_soc_of_parse_audio_routing(card, 215 - "atmel,audio-routing"); 216 - if (ret) 217 - goto err; 217 + ret = snd_soc_of_parse_audio_routing(card, 218 + "atmel,audio-routing"); 219 + if (ret) 220 + goto err; 218 221 219 - /* Parse codec info */ 220 - at91sam9g20ek_dai.codec_name = NULL; 221 - codec_np = of_parse_phandle(np, "atmel,audio-codec", 0); 222 - if (!codec_np) { 223 - dev_err(&pdev->dev, "codec info missing\n"); 224 - return -EINVAL; 225 - } 226 - at91sam9g20ek_dai.codec_of_node = codec_np; 227 - 228 - /* Parse dai and platform info */ 229 - at91sam9g20ek_dai.cpu_dai_name = NULL; 230 - at91sam9g20ek_dai.platform_name = NULL; 231 - cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0); 232 - if (!cpu_np) { 233 - dev_err(&pdev->dev, "dai and pcm info missing\n"); 234 - return -EINVAL; 235 - } 236 - at91sam9g20ek_dai.cpu_of_node = cpu_np; 237 - at91sam9g20ek_dai.platform_of_node = cpu_np; 238 - 239 - of_node_put(codec_np); 240 - of_node_put(cpu_np); 222 + /* Parse codec info */ 223 + at91sam9g20ek_dai.codec_name = NULL; 224 + codec_np = of_parse_phandle(np, "atmel,audio-codec", 0); 225 + if (!codec_np) { 226 + dev_err(&pdev->dev, "codec info missing\n"); 227 + return -EINVAL; 241 228 } 229 + at91sam9g20ek_dai.codec_of_node = 
codec_np; 230 + 231 + /* Parse dai and platform info */ 232 + at91sam9g20ek_dai.cpu_dai_name = NULL; 233 + at91sam9g20ek_dai.platform_name = NULL; 234 + cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0); 235 + if (!cpu_np) { 236 + dev_err(&pdev->dev, "dai and pcm info missing\n"); 237 + return -EINVAL; 238 + } 239 + at91sam9g20ek_dai.cpu_of_node = cpu_np; 240 + at91sam9g20ek_dai.platform_of_node = cpu_np; 241 + 242 + of_node_put(codec_np); 243 + of_node_put(cpu_np); 242 244 243 245 ret = snd_soc_register_card(card); 244 246 if (ret) {
+1 -1
sound/soc/cirrus/Kconfig
··· 16 16 17 17 config SND_EP93XX_SOC_SNAPPERCL15 18 18 tristate "SoC Audio support for Bluewater Systems Snapper CL15 module" 19 - depends on SND_EP93XX_SOC && MACH_SNAPPER_CL15 19 + depends on SND_EP93XX_SOC && MACH_SNAPPER_CL15 && I2C 20 20 select SND_EP93XX_SOC_I2S 21 21 select SND_SOC_TLV320AIC23_I2C 22 22 help
+1 -1
sound/soc/codecs/Kconfig
··· 69 69 select SND_SOC_MAX98088 if I2C 70 70 select SND_SOC_MAX98090 if I2C 71 71 select SND_SOC_MAX98095 if I2C 72 - select SND_SOC_MAX98357A 72 + select SND_SOC_MAX98357A if GPIOLIB 73 73 select SND_SOC_MAX9850 if I2C 74 74 select SND_SOC_MAX9768 if I2C 75 75 select SND_SOC_MAX9877 if I2C
+11 -1
sound/soc/codecs/max98357a.c
··· 12 12 * max98357a.c -- MAX98357A ALSA SoC Codec driver 13 13 */ 14 14 15 - #include <linux/module.h> 15 + #include <linux/device.h> 16 + #include <linux/err.h> 16 17 #include <linux/gpio.h> 18 + #include <linux/gpio/consumer.h> 19 + #include <linux/kernel.h> 20 + #include <linux/mod_devicetable.h> 21 + #include <linux/module.h> 22 + #include <linux/of.h> 23 + #include <linux/platform_device.h> 24 + #include <sound/pcm.h> 17 25 #include <sound/soc.h> 26 + #include <sound/soc-dai.h> 27 + #include <sound/soc-dapm.h> 18 28 19 29 #define DRV_NAME "max98357a" 20 30
+6 -1
sound/soc/codecs/rt5670.c
··· 225 225 case RT5670_ADC_EQ_CTRL1: 226 226 case RT5670_EQ_CTRL1: 227 227 case RT5670_ALC_CTRL_1: 228 - case RT5670_IRQ_CTRL1: 229 228 case RT5670_IRQ_CTRL2: 230 229 case RT5670_INT_IRQ_ST: 231 230 case RT5670_IL_CMD: ··· 2701 2702 msleep(100); 2702 2703 2703 2704 regmap_write(rt5670->regmap, RT5670_RESET, 0); 2705 + 2706 + regmap_read(rt5670->regmap, RT5670_VENDOR_ID, &val); 2707 + if (val >= 4) 2708 + regmap_write(rt5670->regmap, RT5670_GPIO_CTRL3, 0x0980); 2709 + else 2710 + regmap_write(rt5670->regmap, RT5670_GPIO_CTRL3, 0x0d00); 2704 2711 2705 2712 ret = regmap_register_patch(rt5670->regmap, init_list, 2706 2713 ARRAY_SIZE(init_list));
+16 -16
sound/soc/codecs/rt5677.c
··· 3284 3284 { "IB45 Bypass Mux", "Bypass", "IB45 Mux" }, 3285 3285 { "IB45 Bypass Mux", "Pass SRC", "IB45 Mux" }, 3286 3286 3287 - { "IB6 Mux", "IF1 DAC 6", "IF1 DAC6" }, 3288 - { "IB6 Mux", "IF2 DAC 6", "IF2 DAC6" }, 3287 + { "IB6 Mux", "IF1 DAC 6", "IF1 DAC6 Mux" }, 3288 + { "IB6 Mux", "IF2 DAC 6", "IF2 DAC6 Mux" }, 3289 3289 { "IB6 Mux", "SLB DAC 6", "SLB DAC6" }, 3290 3290 { "IB6 Mux", "STO4 ADC MIX L", "Stereo4 ADC MIXL" }, 3291 3291 { "IB6 Mux", "IF4 DAC L", "IF4 DAC L" }, ··· 3293 3293 { "IB6 Mux", "STO2 ADC MIX L", "Stereo2 ADC MIXL" }, 3294 3294 { "IB6 Mux", "STO3 ADC MIX L", "Stereo3 ADC MIXL" }, 3295 3295 3296 - { "IB7 Mux", "IF1 DAC 7", "IF1 DAC7" }, 3297 - { "IB7 Mux", "IF2 DAC 7", "IF2 DAC7" }, 3296 + { "IB7 Mux", "IF1 DAC 7", "IF1 DAC7 Mux" }, 3297 + { "IB7 Mux", "IF2 DAC 7", "IF2 DAC7 Mux" }, 3298 3298 { "IB7 Mux", "SLB DAC 7", "SLB DAC7" }, 3299 3299 { "IB7 Mux", "STO4 ADC MIX R", "Stereo4 ADC MIXR" }, 3300 3300 { "IB7 Mux", "IF4 DAC R", "IF4 DAC R" }, ··· 3635 3635 { "DAC1 FS", NULL, "DAC1 MIXL" }, 3636 3636 { "DAC1 FS", NULL, "DAC1 MIXR" }, 3637 3637 3638 - { "DAC2 L Mux", "IF1 DAC 2", "IF1 DAC2" }, 3639 - { "DAC2 L Mux", "IF2 DAC 2", "IF2 DAC2" }, 3638 + { "DAC2 L Mux", "IF1 DAC 2", "IF1 DAC2 Mux" }, 3639 + { "DAC2 L Mux", "IF2 DAC 2", "IF2 DAC2 Mux" }, 3640 3640 { "DAC2 L Mux", "IF3 DAC L", "IF3 DAC L" }, 3641 3641 { "DAC2 L Mux", "IF4 DAC L", "IF4 DAC L" }, 3642 3642 { "DAC2 L Mux", "SLB DAC 2", "SLB DAC2" }, 3643 3643 { "DAC2 L Mux", "OB 2", "OutBound2" }, 3644 3644 3645 - { "DAC2 R Mux", "IF1 DAC 3", "IF1 DAC3" }, 3646 - { "DAC2 R Mux", "IF2 DAC 3", "IF2 DAC3" }, 3645 + { "DAC2 R Mux", "IF1 DAC 3", "IF1 DAC3 Mux" }, 3646 + { "DAC2 R Mux", "IF2 DAC 3", "IF2 DAC3 Mux" }, 3647 3647 { "DAC2 R Mux", "IF3 DAC R", "IF3 DAC R" }, 3648 3648 { "DAC2 R Mux", "IF4 DAC R", "IF4 DAC R" }, 3649 3649 { "DAC2 R Mux", "SLB DAC 3", "SLB DAC3" }, ··· 3651 3651 { "DAC2 R Mux", "Haptic Generator", "Haptic Generator" }, 3652 3652 { "DAC2 R Mux", "VAD ADC", "VAD ADC Mux" }, 3653 3653 3654 - { "DAC3 L Mux", "IF1 DAC 4", "IF1 DAC4" }, 3655 - { "DAC3 L Mux", "IF2 DAC 4", "IF2 DAC4" }, 3654 + { "DAC3 L Mux", "IF1 DAC 4", "IF1 DAC4 Mux" }, 3655 + { "DAC3 L Mux", "IF2 DAC 4", "IF2 DAC4 Mux" }, 3656 3656 { "DAC3 L Mux", "IF3 DAC L", "IF3 DAC L" }, 3657 3657 { "DAC3 L Mux", "IF4 DAC L", "IF4 DAC L" }, 3658 3658 { "DAC3 L Mux", "SLB DAC 4", "SLB DAC4" }, 3659 3659 { "DAC3 L Mux", "OB 4", "OutBound4" }, 3660 3660 3661 - { "DAC3 R Mux", "IF1 DAC 5", "IF1 DAC4" }, 3662 - { "DAC3 R Mux", "IF2 DAC 5", "IF2 DAC4" }, 3661 + { "DAC3 R Mux", "IF1 DAC 5", "IF1 DAC5 Mux" }, 3662 + { "DAC3 R Mux", "IF2 DAC 5", "IF2 DAC5 Mux" }, 3663 3663 { "DAC3 R Mux", "IF3 DAC R", "IF3 DAC R" }, 3664 3664 { "DAC3 R Mux", "IF4 DAC R", "IF4 DAC R" }, 3665 3665 { "DAC3 R Mux", "SLB DAC 5", "SLB DAC5" }, 3666 3666 { "DAC3 R Mux", "OB 5", "OutBound5" }, 3667 3667 3668 - { "DAC4 L Mux", "IF1 DAC 6", "IF1 DAC6" }, 3669 - { "DAC4 L Mux", "IF2 DAC 6", "IF2 DAC6" }, 3668 + { "DAC4 L Mux", "IF1 DAC 6", "IF1 DAC6 Mux" }, 3669 + { "DAC4 L Mux", "IF2 DAC 6", "IF2 DAC6 Mux" }, 3670 3670 { "DAC4 L Mux", "IF3 DAC L", "IF3 DAC L" }, 3671 3671 { "DAC4 L Mux", "IF4 DAC L", "IF4 DAC L" }, 3672 3672 { "DAC4 L Mux", "SLB DAC 6", "SLB DAC6" }, 3673 3673 { "DAC4 L Mux", "OB 6", "OutBound6" }, 3674 3674 3675 - { "DAC4 R Mux", "IF1 DAC 7", "IF1 DAC7" }, 3676 - { "DAC4 R Mux", "IF2 DAC 7", "IF2 DAC7" }, 3675 + { "DAC4 R Mux", "IF1 DAC 7", "IF1 DAC7 Mux" }, 3676 + { "DAC4 R Mux", "IF2 DAC 7", "IF2 DAC7 Mux" }, 3677 3677 { "DAC4 R Mux", "IF3 DAC R", "IF3 DAC R" }, 3678 3678 { "DAC4 R Mux", "IF4 DAC R", "IF4 DAC R" }, 3679 3679 { "DAC4 R Mux", "SLB DAC 7", "SLB DAC7" },
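The rt5677 hunk above repoints DAPM route sources at the widgets' actual names (with the " Mux" suffix): DAPM routes are { sink, control, source } string triples, and a route naming a nonexistent widget fails to connect. A minimal sketch of that lookup, with a toy widget list rather than the real rt5677 widget table:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy widget name table illustrating the renamed "... Mux" widgets;
 * not the real rt5677 widget list. */
static const char *widgets[] = { "IF1 DAC6 Mux", "IF2 DAC6 Mux", "SLB DAC6" };

/* A DAPM route is only wired up if its source string matches an
 * existing widget name exactly. */
static bool widget_exists(const char *name)
{
	for (unsigned int i = 0; i < sizeof(widgets) / sizeof(widgets[0]); i++)
		if (strcmp(widgets[i], name) == 0)
			return true;
	return false;
}
```

With the old route source "IF1 DAC6" the lookup misses; the fixed "IF1 DAC6 Mux" matches.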
+2 -4
sound/soc/codecs/sta32x.c
··· 106 106 }; 107 107 108 108 static const struct regmap_range sta32x_write_regs_range[] = { 109 - regmap_reg_range(STA32X_CONFA, STA32X_AUTO2), 110 - regmap_reg_range(STA32X_C1CFG, STA32X_FDRC2), 109 + regmap_reg_range(STA32X_CONFA, STA32X_FDRC2), 111 110 }; 112 111 113 112 static const struct regmap_range sta32x_read_regs_range[] = { 114 - regmap_reg_range(STA32X_CONFA, STA32X_AUTO2), 115 - regmap_reg_range(STA32X_C1CFG, STA32X_FDRC2), 113 + regmap_reg_range(STA32X_CONFA, STA32X_FDRC2), 116 114 }; 117 115 118 116 static const struct regmap_range sta32x_volatile_regs_range[] = {
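The sta32x hunk merges two adjacent register ranges into one contiguous span: with the split table, any register lying between STA32X_AUTO2 and STA32X_C1CFG fell outside both ranges and was rejected by regmap. A minimal sketch of that range check, with illustrative register numbers rather than the real sta32x map:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for regmap's register-range test. */
struct reg_range { unsigned int min, max; };

static bool reg_in_ranges(unsigned int reg,
			  const struct reg_range *r, int n)
{
	for (int i = 0; i < n; i++)
		if (reg >= r[i].min && reg <= r[i].max)
			return true;
	return false;
}

/* Split ranges as in the old table: a register between the two spans
 * (here 0x0b, an illustrative value) is silently inaccessible. */
static const struct reg_range split[] = { { 0x00, 0x0a }, { 0x0c, 0x2d } };
/* Single contiguous range, as in the fix. */
static const struct reg_range merged[] = { { 0x00, 0x2d } };
```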
+7 -4
sound/soc/fsl/fsl_ssi.c
··· 603 603 factor = (div2 + 1) * (7 * psr + 1) * 2; 604 604 605 605 for (i = 0; i < 255; i++) { 606 - /* The bclk rate must be smaller than 1/5 sysclk rate */ 607 - if (factor * (i + 1) < 5) 608 - continue; 609 - 610 606 tmprate = freq * factor * (i + 2); 611 607 612 608 if (baudclk_is_used) 613 609 clkrate = clk_get_rate(ssi_private->baudclk); 614 610 else 615 611 clkrate = clk_round_rate(ssi_private->baudclk, tmprate); 612 + 613 + /* 614 + * Hardware limitation: The bclk rate must 615 + * never be greater than 1/5 IPG clock rate 616 + */ 617 + if (clkrate * 5 > clk_get_rate(ssi_private->clk)) 618 + continue; 616 619 617 620 clkrate /= factor; 618 621 afreq = clkrate / (i + 1);
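The fsl_ssi hunk replaces a bogus test on the divider alone with a check of the rounded candidate clock rate against the IPG clock. The constraint itself can be sketched as a predicate (the rates in the test are illustrative, not taken from any real board):

```c
#include <assert.h>
#include <stdbool.h>

/* Hardware limitation from the comment in the fix: the bit clock must
 * never exceed one fifth of the IPG clock rate. The check now runs on
 * the clock rate the framework actually rounded to, not on the raw
 * divider as the removed "factor * (i + 1) < 5" test did. */
static bool bclk_rate_ok(unsigned long clkrate, unsigned long ipg_rate)
{
	return clkrate * 5 <= ipg_rate;
}
```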
+5
sound/soc/generic/simple-card.c
··· 372 372 strlen(dai_link->cpu_dai_name) + 373 373 strlen(dai_link->codec_dai_name) + 2, 374 374 GFP_KERNEL); 375 + if (!name) { 376 + ret = -ENOMEM; 377 + goto dai_link_of_err; 378 + } 379 + 375 380 sprintf(name, "%s-%s", dai_link->cpu_dai_name, 376 381 dai_link->codec_dai_name); 377 382 dai_link->name = dai_link->stream_name = name;
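The simple-card hunk adds the allocation-failure check that was missing before the sprintf() into the freshly allocated name buffer. The pattern, reduced to plain C with a hypothetical `build_link_name()` standing in for the driver's devm_kzalloc() + sprintf() sequence:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the dai_link name construction: allocate
 * room for "cpu-codec" plus the '-' and the NUL terminator, and check
 * the allocation before writing through it. */
static char *build_link_name(const char *cpu, const char *codec)
{
	char *name = malloc(strlen(cpu) + strlen(codec) + 2);

	if (!name)		/* the missing check: fail cleanly here */
		return NULL;	/* rather than oops inside sprintf() */

	sprintf(name, "%s-%s", cpu, codec);
	return name;
}
```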
+1 -1
sound/soc/intel/sst-atom-controls.h
··· 150 150 151 151 enum sst_task { 152 152 SST_TASK_SBA = 1, 153 - SST_TASK_MMX, 153 + SST_TASK_MMX = 3, 154 154 }; 155 155 156 156 enum sst_type {
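The sst-atom-controls hunk works because C enumerators without an initializer count up from the previous one: the old definition silently made SST_TASK_MMX equal to 2, while the fix pins it to 3 (presumably the task id the firmware interface expects). Side by side:

```c
#include <assert.h>

/* Implicit enumerator values count up from the previous one, so the
 * old definition yielded MMX == 2. */
enum sst_task_old { OLD_SST_TASK_SBA = 1, OLD_SST_TASK_MMX };

/* The fix pins MMX explicitly to 3. */
enum sst_task_new { NEW_SST_TASK_SBA = 1, NEW_SST_TASK_MMX = 3 };
```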
+9 -1
sound/soc/intel/sst/sst.c
··· 350 350 351 351 spin_lock_irqsave(&ctx->ipc_spin_lock, irq_flags); 352 352 353 - shim_regs->imrx = sst_shim_read64(shim, SST_IMRX), 353 + shim_regs->imrx = sst_shim_read64(shim, SST_IMRX); 354 + shim_regs->csr = sst_shim_read64(shim, SST_CSR); 355 + 354 356 355 357 spin_unlock_irqrestore(&ctx->ipc_spin_lock, irq_flags); 356 358 } ··· 369 367 */ 370 368 spin_lock_irqsave(&ctx->ipc_spin_lock, irq_flags); 371 369 sst_shim_write64(shim, SST_IMRX, shim_regs->imrx), 370 + sst_shim_write64(shim, SST_CSR, shim_regs->csr), 372 371 spin_unlock_irqrestore(&ctx->ipc_spin_lock, irq_flags); 373 372 } 374 373 ··· 382 379 * initially active. So change the state to active before 383 380 * enabling the pm 384 381 */ 382 + 383 + if (!acpi_disabled) 384 + pm_runtime_set_active(ctx->dev); 385 + 385 386 pm_runtime_enable(ctx->dev); 386 387 387 388 if (acpi_disabled) ··· 416 409 synchronize_irq(ctx->irq_num); 417 410 flush_workqueue(ctx->post_msg_wq); 418 411 412 + ctx->ops->reset(ctx); 419 413 /* save the shim registers because PMC doesn't save state */ 420 414 sst_save_shim64(ctx, ctx->shim, ctx->shim_regs64); 421 415
+3
sound/soc/omap/omap-hdmi-audio.c
··· 352 352 return ret; 353 353 354 354 card = devm_kzalloc(dev, sizeof(*card), GFP_KERNEL); 355 + if (!card) 356 + return -ENOMEM; 357 + 355 358 card->name = devm_kasprintf(dev, GFP_KERNEL, 356 359 "HDMI %s", dev_name(ad->dssdev)); 357 360 card->owner = THIS_MODULE;
+11
sound/soc/omap/omap-mcbsp.c
··· 530 530 531 531 case OMAP_MCBSP_SYSCLK_CLKX_EXT: 532 532 regs->srgr2 |= CLKSM; 533 + regs->pcr0 |= SCLKME; 534 + /* 535 + * If McBSP is master but yet the CLKX/CLKR pin drives the SRG, 536 + * disable output on those pins. This makes it possible to 537 + * inject the reference clock through CLKX/CLKR. For this to 538 + * work set_dai_sysclk() _needs_ to be called after 539 + * set_dai_fmt(). 540 + */ 541 + regs->pcr0 &= ~CLKXM; 542 + break; 533 543 case OMAP_MCBSP_SYSCLK_CLKR_EXT: 534 544 regs->pcr0 |= SCLKME; 545 + /* Disable output on CLKR pin in master mode */ 546 + regs->pcr0 &= ~CLKRM; 535 546 break; 536 547 default: 537 548 err = -ENODEV;
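The omap-mcbsp hunk also matters for the `break` it adds: the CLKX case used to fall through into the CLKR case, so selecting CLKX as the clock source manipulated CLKR's bits as well. A fall-through sketch with illustrative bit values, not the real McBSP PCR0 layout:

```c
#include <assert.h>

/* Illustrative bit positions, not the real McBSP register layout. */
#define CLKXM  (1u << 0)
#define CLKRM  (1u << 1)
#define SCLKME (1u << 2)

enum clk_src { CLKX_EXT, CLKR_EXT };

static unsigned int pcr0_for(enum clk_src s)
{
	unsigned int pcr0 = CLKXM | CLKRM; /* both pins drive by default */

	switch (s) {
	case CLKX_EXT:
		pcr0 |= SCLKME;
		pcr0 &= ~CLKXM; /* only CLKX output is disabled ...       */
		break;		/* ... because we no longer fall through   */
	case CLKR_EXT:
		pcr0 |= SCLKME;
		pcr0 &= ~CLKRM;
		break;
	}
	return pcr0;
}
```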
+1 -1
sound/soc/omap/omap-pcm.c
··· 201 201 struct snd_pcm *pcm = rtd->pcm; 202 202 int ret; 203 203 204 - ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(64)); 204 + ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32)); 205 205 if (ret) 206 206 return ret; 207 207
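The omap-pcm hunk narrows the DMA mask from 64 to 32 bits. The kernel's DMA_BIT_MASK(n) macro simply builds a mask of the low n bits (with a special case for 64, where a plain shift would be undefined behavior):

```c
#include <assert.h>
#include <stdint.h>

/* The kernel's DMA_BIT_MASK(n): all-ones for n == 64 (shifting a
 * 64-bit value by 64 is undefined), otherwise the low n bits set. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
```

So DMA_BIT_MASK(32) caps addressable DMA memory at 4 GiB, matching what the hardware on this platform can reach.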
+5 -5
sound/soc/samsung/Kconfig
··· 174 174 175 175 config SND_SOC_SPEYSIDE 176 176 tristate "Audio support for Wolfson Speyside" 177 - depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 177 + depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && I2C && SPI_MASTER 178 178 select SND_SAMSUNG_I2S 179 179 select SND_SOC_WM8996 180 180 select SND_SOC_WM9081 ··· 189 189 190 190 config SND_SOC_BELLS 191 191 tristate "Audio support for Wolfson Bells" 192 - depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && MFD_ARIZONA 192 + depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && MFD_ARIZONA && I2C && SPI_MASTER 193 193 select SND_SAMSUNG_I2S 194 194 select SND_SOC_WM5102 195 195 select SND_SOC_WM5110 ··· 206 206 207 207 config SND_SOC_LITTLEMILL 208 208 tristate "Audio support for Wolfson Littlemill" 209 - depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 209 + depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && I2C 210 210 select SND_SAMSUNG_I2S 211 211 select MFD_WM8994 212 212 select SND_SOC_WM8994 ··· 223 223 224 224 config SND_SOC_ODROIDX2 225 225 tristate "Audio support for Odroid-X2 and Odroid-U3" 226 - depends on SND_SOC_SAMSUNG 226 + depends on SND_SOC_SAMSUNG && I2C 227 227 select SND_SOC_MAX98090 228 228 select SND_SAMSUNG_I2S 229 229 help ··· 231 231 232 232 config SND_SOC_ARNDALE_RT5631_ALC5631 233 233 tristate "Audio support for RT5631(ALC5631) on Arndale Board" 234 - depends on SND_SOC_SAMSUNG 234 + depends on SND_SOC_SAMSUNG && I2C 235 235 select SND_SAMSUNG_I2S 236 236 select SND_SOC_RT5631
+2 -2
sound/soc/sh/rcar/core.c
··· 1252 1252 goto exit_snd_probe; 1253 1253 } 1254 1254 1255 + dev_set_drvdata(dev, priv); 1256 + 1255 1257 /* 1256 1258 * asoc register 1257 1259 */ ··· 1269 1267 dev_err(dev, "cannot snd dai register\n"); 1270 1268 goto exit_snd_soc; 1271 1269 } 1272 - 1273 - dev_set_drvdata(dev, priv); 1274 1270 1275 1271 pm_runtime_enable(dev); 1276 1272
+3 -3
sound/usb/line6/playback.c
··· 39 39 for (; p < buf_end; ++p) { 40 40 short pv = le16_to_cpu(*p); 41 41 int val = (pv * volume[chn & 1]) >> 8; 42 - pv = clamp(val, 0x7fff, -0x8000); 42 + pv = clamp(val, -0x8000, 0x7fff); 43 43 *p = cpu_to_le16(pv); 44 44 ++chn; 45 45 } ··· 54 54 55 55 val = p[0] + (p[1] << 8) + ((signed char)p[2] << 16); 56 56 val = (val * volume[chn & 1]) >> 8; 57 - val = clamp(val, 0x7fffff, -0x800000); 57 + val = clamp(val, -0x800000, 0x7fffff); 58 58 p[0] = val; 59 59 p[1] = val >> 8; 60 60 p[2] = val >> 16; ··· 126 126 short pov = le16_to_cpu(*po); 127 127 short piv = le16_to_cpu(*pi); 128 128 int val = pov + ((piv * volume) >> 8); 129 - pov = clamp(val, 0x7fff, -0x8000); 129 + pov = clamp(val, -0x8000, 0x7fff); 130 130 *po = cpu_to_le16(pov); 131 131 } 132 132 }
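The line6 hunks all fix the same mistake: clamp() takes the value first, then the low bound, then the high bound, and with the bounds swapped every sample collapses to one of them. A minimal scalar sketch of the macro (the real kernel clamp() also type-checks its arguments) applied to the 16-bit volume-scaling path:

```c
#include <assert.h>

/* Minimal scalar clamp; the kernel macro additionally enforces that
 * all three arguments have the same type. */
#define clamp(val, lo, hi) \
	((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

/* Sketch of the 16-bit path: scale a sample by a Q8 volume factor and
 * clamp it back into the signed 16-bit range, bounds in the right
 * order this time. */
static int scale_sample(int pv, int volume)
{
	int val = (pv * volume) >> 8;

	return clamp(val, -0x8000, 0x7fff);
}
```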