Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c

+3376 -2221
+9 -8
Documentation/ABI/testing/sysfs-class-mtd
···
 Contact:	linux-mtd@lists.infradead.org
 Description:
 	This allows the user to examine and adjust the criteria by which
-	mtd returns -EUCLEAN from mtd_read(). If the maximum number of
-	bit errors that were corrected on any single region comprising
-	an ecc step (as reported by the driver) equals or exceeds this
-	value, -EUCLEAN is returned. Otherwise, absent an error, 0 is
-	returned. Higher layers (e.g., UBI) use this return code as an
-	indication that an erase block may be degrading and should be
-	scrutinized as a candidate for being marked as bad.
+	mtd returns -EUCLEAN from mtd_read() and mtd_read_oob(). If the
+	maximum number of bit errors that were corrected on any single
+	region comprising an ecc step (as reported by the driver) equals
+	or exceeds this value, -EUCLEAN is returned. Otherwise, absent
+	an error, 0 is returned. Higher layers (e.g., UBI) use this
+	return code as an indication that an erase block may be
+	degrading and should be scrutinized as a candidate for being
+	marked as bad.

 	The initial value may be specified by the flash device driver.
 	If not, then the default value is ecc_strength.
···
 	block degradation, but high enough to avoid the consequences of
 	a persistent return value of -EUCLEAN on devices where sticky
 	bitflips occur. Note that if bitflip_threshold exceeds
-	ecc_strength, -EUCLEAN is never returned by mtd_read().
+	ecc_strength, -EUCLEAN is never returned by the read operations.
 	Conversely, if bitflip_threshold is zero, -EUCLEAN is always
 	returned, absent a hard error.
+1 -1
Documentation/DocBook/media/v4l/controls.xml
···
 	from RGB to Y'CbCr color space.
 	</entry>
 	</row>
-	<row id = "v4l2-jpeg-chroma-subsampling">
+	<row>
 	<entrytbl spanname="descr" cols="2">
 	<tbody valign="top">
 	<row>
-7
Documentation/DocBook/media/v4l/vidioc-g-ext-ctrls.xml
···
 	    processing controls. These controls are described in <xref
 	    linkend="image-process-controls" />.</entry>
 	  </row>
-	  <row>
-	    <entry><constant>V4L2_CTRL_CLASS_JPEG</constant></entry>
-	    <entry>0x9d0000</entry>
-	    <entry>The class containing JPEG compression controls.
-	    These controls are described in <xref
-	    linkend="jpeg-controls" />.</entry>
-	  </row>
 	</tbody>
       </tgroup>
     </table>
+1
Documentation/devicetree/bindings/input/fsl-mma8450.txt
···
 Required properties:
 - compatible : "fsl,mma8450".
+- reg: the I2C address of MMA8450

 Example:
+2 -2
Documentation/devicetree/bindings/mfd/mc13xxx.txt
···
 ecspi@70010000 { /* ECSPI1 */
 	fsl,spi-num-chipselects = <2>;
-	cs-gpios = <&gpio3 24 0>, /* GPIO4_24 */
-		   <&gpio3 25 0>; /* GPIO4_25 */
+	cs-gpios = <&gpio4 24 0>, /* GPIO4_24 */
+		   <&gpio4 25 0>; /* GPIO4_25 */
 	status = "okay";

 	pmic: mc13892@0 {
+2 -2
Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
···
 	compatible = "fsl,imx51-esdhc";
 	reg = <0x70008000 0x4000>;
 	interrupts = <2>;
-	cd-gpios = <&gpio0 6 0>; /* GPIO1_6 */
-	wp-gpios = <&gpio0 5 0>; /* GPIO1_5 */
+	cd-gpios = <&gpio1 6 0>; /* GPIO1_6 */
+	wp-gpios = <&gpio1 5 0>; /* GPIO1_5 */
 };
+1 -1
Documentation/devicetree/bindings/net/fsl-fec.txt
···
 	reg = <0x83fec000 0x4000>;
 	interrupts = <87>;
 	phy-mode = "mii";
-	phy-reset-gpios = <&gpio1 14 0>; /* GPIO2_14 */
+	phy-reset-gpios = <&gpio2 14 0>; /* GPIO2_14 */
 	local-mac-address = [00 04 9F 01 1B B9];
 };
+2
Documentation/devicetree/bindings/pinctrl/fsl,imx6q-pinctrl.txt
···
 	MX6Q_PAD_SD2_DAT3__GPIO_1_12		1588
 	MX6Q_PAD_SD2_DAT3__SJC_DONE		1589
 	MX6Q_PAD_SD2_DAT3__ANATOP_TESTO_3	1590
+	MX6Q_PAD_ENET_RX_ER__ANATOP_USBOTG_ID	1591
+	MX6Q_PAD_GPIO_1__ANATOP_USBOTG_ID	1592
+2 -2
Documentation/devicetree/bindings/spi/fsl-imx-cspi.txt
···
 	reg = <0x70010000 0x4000>;
 	interrupts = <36>;
 	fsl,spi-num-chipselects = <2>;
-	cs-gpios = <&gpio3 24 0>, /* GPIO4_24 */
-		   <&gpio3 25 0>; /* GPIO4_25 */
+	cs-gpios = <&gpio3 24 0>, /* GPIO3_24 */
+		   <&gpio3 25 0>; /* GPIO3_25 */
 };
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 This isn't an exhaustive list, but you should add new prefixes to it before
 using them to avoid name-space collisions.

+ad	Avionic Design GmbH
 adi	Analog Devices, Inc.
 amcc	Applied Micro Circuits Corporation (APM, formally AMCC)
 apm	Applied Micro Circuits Corporation (APM)
+1 -1
Documentation/kdump/kdump.txt
···
 http://www.kernel.org/git/?p=utils/kernel/kexec/kexec-tools.git

 More information about kexec-tools can be found at
-http://www.kernel.org/pub/linux/utils/kernel/kexec/README.html
+http://horms.net/projects/kexec/

 3) Unpack the tarball with the tar command, as follows:
+7
Documentation/prctl/no_new_privs.txt
···
 add to the permitted set, and LSMs will not relax constraints after
 execve.

+To set no_new_privs, use prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0).
+
+Be careful, though: LSMs might also not tighten constraints on exec
+in no_new_privs mode.  (This means that setting up a general-purpose
+service launcher to set no_new_privs before execing daemons may
+interfere with LSM-based sandboxing.)
+
 Note that no_new_privs does not prevent privilege changes that do not
 involve execve.  An appropriately privileged task can still call
 setuid(2) and receive SCM_RIGHTS datagrams.
+17
Documentation/virtual/kvm/api.txt
···
 PTE's RPN field (ie, it needs to be shifted left by 12 to OR it
 into the hash PTE second double word).

+4.75 KVM_IRQFD
+
+Capability: KVM_CAP_IRQFD
+Architectures: x86
+Type: vm ioctl
+Parameters: struct kvm_irqfd (in)
+Returns: 0 on success, -1 on error
+
+Allows setting an eventfd to directly trigger a guest interrupt.
+kvm_irqfd.fd specifies the file descriptor to use as the eventfd and
+kvm_irqfd.gsi specifies the irqchip pin toggled by this event.  When
+an event is triggered on the eventfd, an interrupt is injected into
+the guest using the specified gsi pin.  The irqfd is removed using
+the KVM_IRQFD_FLAG_DEASSIGN flag, specifying both kvm_irqfd.fd
+and kvm_irqfd.gsi.
+
+
 5. The kvm_run structure
 ------------------------
+5 -4
MAINTAINERS
···
 F:	drivers/idle/i7300_idle.c

 IEEE 802.15.4 SUBSYSTEM
+M:	Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
 M:	Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
-M:	Sergey Lapin <slapin@ossfans.org>
 L:	linux-zigbee-devel@lists.sourceforge.net (moderated for non-subscribers)
 W:	http://apps.sourceforge.net/trac/linux-zigbee
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lowpan/lowpan.git
 S:	Maintained
 F:	net/ieee802154/
+F:	net/mac802154/
 F:	drivers/ieee802154/

 IIO SUBSYSTEM AND DRIVERS
···
 L:	linux-omap@vger.kernel.org
 S:	Maintained
 F:	arch/arm/*omap*/*pm*
+F:	drivers/cpufreq/omap-cpufreq.c

 OMAP POWERDOMAIN/CLOCKDOMAIN SOC ADAPTATION LAYER SUPPORT
 M:	Rajendra Nayak <rnayak@ti.com>
···
 F:	drivers/net/ethernet/qlogic/qla3xxx.*

 QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER
-M:	Anirban Chakraborty <anirban.chakraborty@qlogic.com>
+M:	Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
 M:	Sony Chacko <sony.chacko@qlogic.com>
 M:	linux-driver@qlogic.com
 L:	netdev@vger.kernel.org
···
 F:	drivers/net/ethernet/qlogic/qlcnic/

 QLOGIC QLGE 10Gb ETHERNET DRIVER
-M:	Anirban Chakraborty <anirban.chakraborty@qlogic.com>
 M:	Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
 M:	Ron Mercer <ron.mercer@qlogic.com>
 M:	linux-driver@qlogic.com
···
 M:	Peter Zijlstra <peterz@infradead.org>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
 S:	Maintained
-F:	kernel/sched*
+F:	kernel/sched/
 F:	include/linux/sched.h

 SCORE ARCHITECTURE
+1 -1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 5
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc7
 NAME = Saber-toothed Squirrel

 # *DOCUMENTATION*
+6 -5
arch/arm/boot/dts/spear13xx.dtsi
···
 	pmu {
 		compatible = "arm,cortex-a9-pmu";
-		interrupts = <0 8 0x04
-			      0 9 0x04>;
+		interrupts = <0 6 0x04
+			      0 7 0x04>;
 	};

 	L2: l2-cache {
···
 	gmac0: eth@e2000000 {
 		compatible = "st,spear600-gmac";
 		reg = <0xe2000000 0x8000>;
-		interrupts = <0 23 0x4
-			      0 24 0x4>;
+		interrupts = <0 33 0x4
+			      0 34 0x4>;
 		interrupt-names = "macirq", "eth_wake_irq";
 		status = "disabled";
 	};
···
 	kbd@e0300000 {
 		compatible = "st,spear300-kbd";
 		reg = <0xe0300000 0x1000>;
+		interrupts = <0 52 0x4>;
 		status = "disabled";
 	};
···
 	serial@e0000000 {
 		compatible = "arm,pl011", "arm,primecell";
 		reg = <0xe0000000 0x1000>;
-		interrupts = <0 36 0x4>;
+		interrupts = <0 35 0x4>;
 		status = "disabled";
 	};
+3 -3
arch/arm/boot/dts/spear320-evb.dts
···
 /include/ "spear320.dtsi"

 / {
-	model = "ST SPEAr300 Evaluation Board";
-	compatible = "st,spear300-evb", "st,spear300";
+	model = "ST SPEAr320 Evaluation Board";
+	compatible = "st,spear320-evb", "st,spear320";
 	#address-cells = <1>;
 	#size-cells = <1>;
···
 	ahb {
 		pinmux@b3000000 {
-			st,pinmux-mode = <3>;
+			st,pinmux-mode = <4>;
 			pinctrl-names = "default";
 			pinctrl-0 = <&state_default>;
+1
arch/arm/boot/dts/spear600.dtsi
···
 		timer@f0000000 {
 			compatible = "st,spear-timer";
 			reg = <0xf0000000 0x400>;
+			interrupt-parent = <&vic0>;
 			interrupts = <16>;
 		};
 	};
-1
arch/arm/configs/omap2plus_defconfig
···
 CONFIG_USB_DEVICEFS=y
 CONFIG_USB_SUSPEND=y
 CONFIG_USB_MON=y
-CONFIG_USB_EHCI_HCD=y
 CONFIG_USB_WDM=y
 CONFIG_USB_STORAGE=y
 CONFIG_USB_LIBUSUAL=y
+1 -1
arch/arm/include/asm/atomic.h
···
 #define ATOMIC64_INIT(i) { (i) }

-static inline u64 atomic64_read(atomic64_t *v)
+static inline u64 atomic64_read(const atomic64_t *v)
 {
 	u64 result;
+9 -9
arch/arm/include/asm/domain.h
···
 #ifndef __ASSEMBLY__

 #ifdef CONFIG_CPU_USE_DOMAINS
-#define set_domain(x)					\
-	do {						\
-	__asm__ __volatile__(				\
-	"mcr	p15, 0, %0, c3, c0	@ set domain"	\
-	  : : "r" (x));					\
-	isb();						\
-	} while (0)
+static inline void set_domain(unsigned val)
+{
+	asm volatile(
+	"mcr	p15, 0, %0, c3, c0	@ set domain"
+	  : : "r" (val));
+	isb();
+}

 #define modify_domain(dom,type)				\
 	do {						\
···
 	} while (0)

 #else
-#define set_domain(x)		do { } while (0)
-#define modify_domain(dom,type)	do { } while (0)
+static inline void set_domain(unsigned val)	{ }
+static inline void modify_domain(unsigned dom, unsigned type)	{ }
 #endif

 /*
+1 -4
arch/arm/include/asm/thread_info.h
···
 #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
 #define TIF_SYSCALL_TRACE	8
 #define TIF_SYSCALL_AUDIT	9
-#define TIF_SYSCALL_RESTARTSYS	10
 #define TIF_POLLING_NRFLAG	16
 #define TIF_USING_IWMMXT	17
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
···
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
 #define _TIF_USING_IWMMXT	(1 << TIF_USING_IWMMXT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
-#define _TIF_SYSCALL_RESTARTSYS	(1 << TIF_SYSCALL_RESTARTSYS)

 /* Checks for any syscall work in entry-common.S */
-#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
-			   _TIF_SYSCALL_RESTARTSYS)
+#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT)

 /*
  * Change these and you break ASM code in entry-common.S
+2 -2
arch/arm/kernel/kprobes-test-arm.c
···
 	TEST_BF_R ("mov	pc, r",0,2f,"")
 	TEST_BF_RR("mov	pc, r",0,2f,", asl r",1,0,"")
 	TEST_BB(   "sub	pc, pc, #1b-2b+8")
-#if __LINUX_ARM_ARCH__ >= 6
-	TEST_BB(   "sub	pc, pc, #1b-2b+8-2")	/* UNPREDICTABLE before ARMv6 */
+#if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7)
+	TEST_BB(   "sub	pc, pc, #1b-2b+8-2")	/* UNPREDICTABLE before and after ARMv6 */
 #endif
 	TEST_BB_R( "sub	pc, pc, r",14, 1f-2f+8,"")
 	TEST_BB_R( "rsb	pc, r",14,1f-2f+8,", pc")
+1 -1
arch/arm/kernel/perf_event.c
···
 	    event_requires_mode_exclusion(&event->attr)) {
 		pr_debug("ARM performance counters do not support "
 			 "mode exclusion\n");
-		return -EPERM;
+		return -EOPNOTSUPP;
 	}

 	/*
-3
arch/arm/kernel/ptrace.c
···
 #include <linux/regset.h>
 #include <linux/audit.h>
 #include <linux/tracehook.h>
-#include <linux/unistd.h>

 #include <asm/pgtable.h>
 #include <asm/traps.h>
···
 		audit_syscall_entry(AUDIT_ARCH_ARM, scno, regs->ARM_r0,
 				    regs->ARM_r1, regs->ARM_r2, regs->ARM_r3);

-	if (why == 0 && test_and_clear_thread_flag(TIF_SYSCALL_RESTARTSYS))
-		scno = __NR_restart_syscall - __NR_SYSCALL_BASE;
 	if (!test_thread_flag(TIF_SYSCALL_TRACE))
 		return scno;
+40 -6
arch/arm/kernel/signal.c
···
  */
 #define SWI_SYS_SIGRETURN	(0xef000000|(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE))
 #define SWI_SYS_RT_SIGRETURN	(0xef000000|(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE))
+#define SWI_SYS_RESTART		(0xef000000|__NR_restart_syscall|__NR_OABI_SYSCALL_BASE)

 /*
  * With EABI, the syscall number has to be loaded into r7.
···
 const unsigned long sigreturn_codes[7] = {
 	MOV_R7_NR_SIGRETURN,    SWI_SYS_SIGRETURN,    SWI_THUMB_SIGRETURN,
 	MOV_R7_NR_RT_SIGRETURN, SWI_SYS_RT_SIGRETURN, SWI_THUMB_RT_SIGRETURN,
+};
+
+/*
+ * Either we support OABI only, or we have EABI with the OABI
+ * compat layer enabled.  In the later case we don't know if
+ * user space is EABI or not, and if not we must not clobber r7.
+ * Always using the OABI syscall solves that issue and works for
+ * all those cases.
+ */
+const unsigned long syscall_restart_code[2] = {
+	SWI_SYS_RESTART,	/* swi	__NR_restart_syscall */
+	0xe49df004,		/* ldr	pc, [sp], #4 */
 };

 /*
···
 		case -ERESTARTNOHAND:
 		case -ERESTARTSYS:
 		case -ERESTARTNOINTR:
-		case -ERESTART_RESTARTBLOCK:
 			regs->ARM_r0 = regs->ARM_ORIG_r0;
 			regs->ARM_pc = restart_addr;
+			break;
+		case -ERESTART_RESTARTBLOCK:
+			regs->ARM_r0 = -EINTR;
 			break;
 		}
 	}
···
 		 * debugger has chosen to restart at a different PC.
 		 */
 		if (regs->ARM_pc == restart_addr) {
-			if (retval == -ERESTARTNOHAND ||
-			    retval == -ERESTART_RESTARTBLOCK
+			if (retval == -ERESTARTNOHAND
 			    || (retval == -ERESTARTSYS
 				&& !(ka.sa.sa_flags & SA_RESTART))) {
 				regs->ARM_r0 = -EINTR;
 				regs->ARM_pc = continue_addr;
 			}
-			clear_thread_flag(TIF_SYSCALL_RESTARTSYS);
 		}

 		handle_signal(signr, &ka, &info, regs);
···
 		 * ignore the restart.
 		 */
 		if (retval == -ERESTART_RESTARTBLOCK
-		    && regs->ARM_pc == restart_addr)
-			set_thread_flag(TIF_SYSCALL_RESTARTSYS);
+		    && regs->ARM_pc == continue_addr) {
+			if (thumb_mode(regs)) {
+				regs->ARM_r7 = __NR_restart_syscall - __NR_SYSCALL_BASE;
+				regs->ARM_pc -= 2;
+			} else {
+#if defined(CONFIG_AEABI) && !defined(CONFIG_OABI_COMPAT)
+				regs->ARM_r7 = __NR_restart_syscall;
+				regs->ARM_pc -= 4;
+#else
+				u32 __user *usp;
+
+				regs->ARM_sp -= 4;
+				usp = (u32 __user *)regs->ARM_sp;
+
+				if (put_user(regs->ARM_pc, usp) == 0) {
+					regs->ARM_pc = KERN_RESTART_CODE;
+				} else {
+					regs->ARM_sp += 4;
+					force_sigsegv(0, current);
+				}
+#endif
+			}
+		}
 	}

 	restore_saved_sigmask();
+2
arch/arm/kernel/signal.h
···
  * published by the Free Software Foundation.
  */
 #define KERN_SIGRETURN_CODE	(CONFIG_VECTORS_BASE + 0x00000500)
+#define KERN_RESTART_CODE	(KERN_SIGRETURN_CODE + sizeof(sigreturn_codes))

 extern const unsigned long sigreturn_codes[7];
+extern const unsigned long syscall_restart_code[2];
+2
arch/arm/kernel/traps.c
···
 	 */
 	memcpy((void *)(vectors + KERN_SIGRETURN_CODE - CONFIG_VECTORS_BASE),
 	       sigreturn_codes, sizeof(sigreturn_codes));
+	memcpy((void *)(vectors + KERN_RESTART_CODE - CONFIG_VECTORS_BASE),
+	       syscall_restart_code, sizeof(syscall_restart_code));

 	flush_icache_range(vectors, vectors + PAGE_SIZE);
 	modify_domain(DOMAIN_USER, DOMAIN_CLIENT);
+1
arch/arm/mach-dove/include/mach/bridge-regs.h
···
 #define POWER_MANAGEMENT	(BRIDGE_VIRT_BASE | 0x011c)

 #define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
+#define TIMER_PHYS_BASE		(BRIDGE_PHYS_BASE | 0x0300)

 #endif
+1
arch/arm/mach-dove/include/mach/dove.h
···
 /* North-South Bridge */
 #define BRIDGE_VIRT_BASE	(DOVE_SB_REGS_VIRT_BASE | 0x20000)
+#define BRIDGE_PHYS_BASE	(DOVE_SB_REGS_PHYS_BASE | 0x20000)

 /* Cryptographic Engine */
 #define DOVE_CRYPT_PHYS_BASE	(DOVE_SB_REGS_PHYS_BASE | 0x30000)
+9 -4
arch/arm/mach-exynos/pm_domains.c
···
 			struct exynos_pm_domain *pd)
 {
 	if (pdev->dev.bus) {
-		if (pm_genpd_add_device(&pd->pd, &pdev->dev))
+		if (!pm_genpd_add_device(&pd->pd, &pdev->dev))
+			pm_genpd_dev_need_restore(&pdev->dev, true);
+		else
 			pr_info("%s: error in adding %s device to %s power"
 				"domain\n", __func__, dev_name(&pdev->dev),
 				pd->name);
···
 	if (of_have_populated_dt())
 		return exynos_pm_dt_parse_domains();

-	for (idx = 0; idx < ARRAY_SIZE(exynos4_pm_domains); idx++)
-		pm_genpd_init(&exynos4_pm_domains[idx]->pd, NULL,
-				exynos4_pm_domains[idx]->is_off);
+	for (idx = 0; idx < ARRAY_SIZE(exynos4_pm_domains); idx++) {
+		struct exynos_pm_domain *pd = exynos4_pm_domains[idx];
+		int on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
+
+		pm_genpd_init(&pd->pd, NULL, !on);
+	}

 #ifdef CONFIG_S5P_DEV_FIMD0
 	exynos_pm_add_dev_to_genpd(&s5p_device_fimd0, &exynos4_pd_lcd0);
+8 -1
arch/arm/mach-imx/clk-imx35.c
···
 			pr_err("i.MX35 clk %d: register failed with %ld\n",
 				i, PTR_ERR(clk[i]));

-
 	clk_register_clkdev(clk[pata_gate], NULL, "pata_imx");
 	clk_register_clkdev(clk[can1_gate], NULL, "flexcan.0");
 	clk_register_clkdev(clk[can2_gate], NULL, "flexcan.1");
···
 	clk_prepare_enable(clk[gpio3_gate]);
 	clk_prepare_enable(clk[iim_gate]);
 	clk_prepare_enable(clk[emi_gate]);
+
+	/*
+	 * SCC is needed to boot via mmc after a watchdog reset. The clock code
+	 * before conversion to common clk also enabled UART1 (which isn't
+	 * handled here and not needed for mmc) and IIM (which is enabled
+	 * unconditionally above).
+	 */
+	clk_prepare_enable(clk[scc_gate]);

 	imx_print_silicon_rev("i.MX35", mx35_revision());
+1 -1
arch/arm/mach-imx/mach-imx27_visstrim_m10.c
···
 #include <asm/mach-types.h>
 #include <asm/mach/arch.h>
 #include <asm/mach/time.h>
-#include <asm/system.h>
+#include <asm/system_info.h>
 #include <mach/common.h>
 #include <mach/iomux-mx27.h>
-29
arch/arm/mach-mmp/include/mach/gpio-pxa.h
-#ifndef __ASM_MACH_GPIO_PXA_H
-#define __ASM_MACH_GPIO_PXA_H
-
-#include <mach/addr-map.h>
-#include <mach/cputype.h>
-#include <mach/irqs.h>
-
-#define GPIO_REGS_VIRT	(APB_VIRT_BASE + 0x19000)
-
-#define BANK_OFF(n)	(((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2))
-#define GPIO_REG(x)	(*(volatile u32 *)(GPIO_REGS_VIRT + (x)))
-
-#define gpio_to_bank(gpio)	((gpio) >> 5)
-
-/* NOTE: these macros are defined here to make optimization of
- * gpio_{get,set}_value() to work when 'gpio' is a constant.
- * Usage of these macros otherwise is no longer recommended,
- * use generic GPIO API whenever possible.
- */
-#define GPIO_bit(gpio)	(1 << ((gpio) & 0x1f))
-
-#define GPLR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x00)
-#define GPDR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x0c)
-#define GPSR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x18)
-#define GPCR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x24)
-
-#include <plat/gpio-pxa.h>
-
-#endif /* __ASM_MACH_GPIO_PXA_H */
+1
arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
···
 #define IRQ_MASK_HIGH_OFF	0x0014

 #define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
+#define TIMER_PHYS_BASE		(BRIDGE_PHYS_BASE | 0x0300)

 #endif
+2
arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
···
 #define MV78XX0_CORE0_REGS_PHYS_BASE	0xf1020000
 #define MV78XX0_CORE1_REGS_PHYS_BASE	0xf1024000
 #define MV78XX0_CORE_REGS_VIRT_BASE	0xfe400000
+#define MV78XX0_CORE_REGS_PHYS_BASE	0xfe400000
 #define MV78XX0_CORE_REGS_SIZE		SZ_16K

 #define MV78XX0_PCIE_IO_PHYS_BASE(i)	(0xf0800000 + ((i) << 20))
···
  * Core-specific peripheral registers.
  */
 #define BRIDGE_VIRT_BASE	(MV78XX0_CORE_REGS_VIRT_BASE)
+#define BRIDGE_PHYS_BASE	(MV78XX0_CORE_REGS_PHYS_BASE)

 /*
  * Register Map
+11
arch/arm/mach-mxs/mach-apx4devkit.c
···
 	return 0;
 }

+static void __init apx4devkit_fec_phy_clk_enable(void)
+{
+	struct clk *clk;
+
+	/* Enable fec phy clock */
+	clk = clk_get_sys("enet_out", NULL);
+	if (!IS_ERR(clk))
+		clk_prepare_enable(clk);
+}
+
 static void __init apx4devkit_init(void)
 {
 	mx28_soc_init();
···
 	phy_register_fixup_for_uid(PHY_ID_KS8051, MICREL_PHY_ID_MASK,
 			apx4devkit_phy_fixup);

+	apx4devkit_fec_phy_clk_enable();
 	mx28_add_fec(0, &mx28_fec_pdata);

 	mx28_add_mxs_mmc(0, &apx4devkit_mmc_pdata);
+1 -1
arch/arm/mach-omap2/board-overo.c
···
 	regulator_register_fixed(0, dummy_supplies, ARRAY_SIZE(dummy_supplies));
 	omap3_mux_init(board_mux, OMAP_PACKAGE_CBB);
-	omap_hsmmc_init(mmc);
 	overo_i2c_init();
+	omap_hsmmc_init(mmc);
 	omap_display_init(&overo_dss_data);
 	omap_serial_init();
 	omap_sdrc_init(mt46h32m32lf6_sdrc_params,
+4
arch/arm/mach-omap2/clockdomain.h
···
  *
  * CLKDM_NO_AUTODEPS: Prevent "autodeps" from being added/removed from this
  *     clockdomain. (Currently, this applies to OMAP3 clockdomains only.)
+ * CLKDM_ACTIVE_WITH_MPU: The PRCM guarantees that this clockdomain is
+ *     active whenever the MPU is active.  True for interconnects and
+ *     the WKUP clockdomains.
  */
 #define CLKDM_CAN_FORCE_SLEEP			(1 << 0)
 #define CLKDM_CAN_FORCE_WAKEUP			(1 << 1)
 #define CLKDM_CAN_ENABLE_AUTO			(1 << 2)
 #define CLKDM_CAN_DISABLE_AUTO			(1 << 3)
 #define CLKDM_NO_AUTODEPS			(1 << 4)
+#define CLKDM_ACTIVE_WITH_MPU			(1 << 5)

 #define CLKDM_CAN_HWSUP	(CLKDM_CAN_ENABLE_AUTO | CLKDM_CAN_DISABLE_AUTO)
 #define CLKDM_CAN_SWSUP	(CLKDM_CAN_FORCE_SLEEP | CLKDM_CAN_FORCE_WAKEUP)
+1
arch/arm/mach-omap2/clockdomains2xxx_3xxx_data.c
···
 	.name		= "wkup_clkdm",
 	.pwrdm		= { .name = "wkup_pwrdm" },
 	.dep_bit	= OMAP_EN_WKUP_SHIFT,
+	.flags		= CLKDM_ACTIVE_WITH_MPU,
 };
+1 -1
arch/arm/mach-omap2/clockdomains44xx_data.c
···
 	.cm_inst	= OMAP4430_PRM_WKUP_CM_INST,
 	.clkdm_offs	= OMAP4430_PRM_WKUP_CM_WKUP_CDOFFS,
 	.dep_bit	= OMAP4430_L4WKUP_STATDEP_SHIFT,
-	.flags		= CLKDM_CAN_HWSUP,
+	.flags		= CLKDM_CAN_HWSUP | CLKDM_ACTIVE_WITH_MPU,
 };

 static struct clockdomain emu_sys_44xx_clkdm = {
+24 -8
arch/arm/mach-omap2/omap_hwmod.c
···
  * _enable_sysc - try to bring a module out of idle via OCP_SYSCONFIG
  * @oh: struct omap_hwmod *
  *
- * If module is marked as SWSUP_SIDLE, force the module out of slave
- * idle; otherwise, configure it for smart-idle.  If module is marked
- * as SWSUP_MSUSPEND, force the module out of master standby;
- * otherwise, configure it for smart-standby.  No return value.
+ * Ensure that the OCP_SYSCONFIG register for the IP block represented
+ * by @oh is set to indicate to the PRCM that the IP block is active.
+ * Usually this means placing the module into smart-idle mode and
+ * smart-standby, but if there is a bug in the automatic idle handling
+ * for the IP block, it may need to be placed into the force-idle or
+ * no-idle variants of these modes.  No return value.
  */
 static void _enable_sysc(struct omap_hwmod *oh)
 {
 	u8 idlemode, sf;
 	u32 v;
+	bool clkdm_act;

 	if (!oh->class->sysc)
 		return;
···
 	sf = oh->class->sysc->sysc_flags;

 	if (sf & SYSC_HAS_SIDLEMODE) {
-		idlemode = (oh->flags & HWMOD_SWSUP_SIDLE) ?
-			HWMOD_IDLEMODE_NO : HWMOD_IDLEMODE_SMART;
+		clkdm_act = ((oh->clkdm &&
+			      oh->clkdm->flags & CLKDM_ACTIVE_WITH_MPU) ||
+			     (oh->_clk && oh->_clk->clkdm &&
+			      oh->_clk->clkdm->flags & CLKDM_ACTIVE_WITH_MPU));
+		if (clkdm_act && !(oh->class->sysc->idlemodes &
+				   (SIDLE_SMART | SIDLE_SMART_WKUP)))
+			idlemode = HWMOD_IDLEMODE_FORCE;
+		else
+			idlemode = (oh->flags & HWMOD_SWSUP_SIDLE) ?
+				HWMOD_IDLEMODE_NO : HWMOD_IDLEMODE_SMART;
 		_set_slave_idlemode(oh, idlemode, &v);
 	}
···
 	sf = oh->class->sysc->sysc_flags;

 	if (sf & SYSC_HAS_SIDLEMODE) {
-		idlemode = (oh->flags & HWMOD_SWSUP_SIDLE) ?
-			HWMOD_IDLEMODE_FORCE : HWMOD_IDLEMODE_SMART;
+		/* XXX What about HWMOD_IDLEMODE_SMART_WKUP? */
+		if (oh->flags & HWMOD_SWSUP_SIDLE ||
+		    !(oh->class->sysc->idlemodes &
+		      (SIDLE_SMART | SIDLE_SMART_WKUP)))
+			idlemode = HWMOD_IDLEMODE_FORCE;
+		else
+			idlemode = HWMOD_IDLEMODE_SMART;
 		_set_slave_idlemode(oh, idlemode, &v);
 	}
+14 -14
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
···
 static struct omap_hwmod_opt_clk mcbsp1_opt_clks[] = {
 	{ .role = "pad_fck", .clk = "pad_clks_ck" },
-	{ .role = "prcm_clk", .clk = "mcbsp1_sync_mux_ck" },
+	{ .role = "prcm_fck", .clk = "mcbsp1_sync_mux_ck" },
 };

 static struct omap_hwmod omap44xx_mcbsp1_hwmod = {
···
 static struct omap_hwmod_opt_clk mcbsp2_opt_clks[] = {
 	{ .role = "pad_fck", .clk = "pad_clks_ck" },
-	{ .role = "prcm_clk", .clk = "mcbsp2_sync_mux_ck" },
+	{ .role = "prcm_fck", .clk = "mcbsp2_sync_mux_ck" },
 };

 static struct omap_hwmod omap44xx_mcbsp2_hwmod = {
···
 static struct omap_hwmod_opt_clk mcbsp3_opt_clks[] = {
 	{ .role = "pad_fck", .clk = "pad_clks_ck" },
-	{ .role = "prcm_clk", .clk = "mcbsp3_sync_mux_ck" },
+	{ .role = "prcm_fck", .clk = "mcbsp3_sync_mux_ck" },
 };

 static struct omap_hwmod omap44xx_mcbsp3_hwmod = {
···
 static struct omap_hwmod_opt_clk mcbsp4_opt_clks[] = {
 	{ .role = "pad_fck", .clk = "pad_clks_ck" },
-	{ .role = "prcm_clk", .clk = "mcbsp4_sync_mux_ck" },
+	{ .role = "prcm_fck", .clk = "mcbsp4_sync_mux_ck" },
 };

 static struct omap_hwmod omap44xx_mcbsp4_hwmod = {
···
 };

 /* usb_host_fs -> l3_main_2 */
-static struct omap_hwmod_ocp_if omap44xx_usb_host_fs__l3_main_2 = {
+static struct omap_hwmod_ocp_if __maybe_unused omap44xx_usb_host_fs__l3_main_2 = {
 	.master		= &omap44xx_usb_host_fs_hwmod,
 	.slave		= &omap44xx_l3_main_2_hwmod,
 	.clk		= "l3_div_ck",
···
 };

 /* aess -> l4_abe */
-static struct omap_hwmod_ocp_if omap44xx_aess__l4_abe = {
+static struct omap_hwmod_ocp_if __maybe_unused omap44xx_aess__l4_abe = {
 	.master		= &omap44xx_aess_hwmod,
 	.slave		= &omap44xx_l4_abe_hwmod,
 	.clk		= "ocp_abe_iclk",
···
 };

 /* l4_abe -> aess */
-static struct omap_hwmod_ocp_if omap44xx_l4_abe__aess = {
+static struct omap_hwmod_ocp_if __maybe_unused omap44xx_l4_abe__aess = {
 	.master		= &omap44xx_l4_abe_hwmod,
 	.slave		= &omap44xx_aess_hwmod,
 	.clk		= "ocp_abe_iclk",
···
 };

 /* l4_abe -> aess (dma) */
-static struct omap_hwmod_ocp_if omap44xx_l4_abe__aess_dma = {
+static struct omap_hwmod_ocp_if __maybe_unused omap44xx_l4_abe__aess_dma = {
 	.master		= &omap44xx_l4_abe_hwmod,
 	.slave		= &omap44xx_aess_hwmod,
 	.clk		= "ocp_abe_iclk",
···
 };

 /* l4_cfg -> usb_host_fs */
-static struct omap_hwmod_ocp_if omap44xx_l4_cfg__usb_host_fs = {
+static struct omap_hwmod_ocp_if __maybe_unused omap44xx_l4_cfg__usb_host_fs = {
 	.master		= &omap44xx_l4_cfg_hwmod,
 	.slave		= &omap44xx_usb_host_fs_hwmod,
 	.clk		= "l4_div_ck",
···
 	&omap44xx_iva__l3_main_2,
 	&omap44xx_l3_main_1__l3_main_2,
 	&omap44xx_l4_cfg__l3_main_2,
-	&omap44xx_usb_host_fs__l3_main_2,
+	/* &omap44xx_usb_host_fs__l3_main_2, */
 	&omap44xx_usb_host_hs__l3_main_2,
 	&omap44xx_usb_otg_hs__l3_main_2,
 	&omap44xx_l3_main_1__l3_main_3,
 	&omap44xx_l3_main_2__l3_main_3,
 	&omap44xx_l4_cfg__l3_main_3,
-	&omap44xx_aess__l4_abe,
+	/* &omap44xx_aess__l4_abe, */
 	&omap44xx_dsp__l4_abe,
 	&omap44xx_l3_main_1__l4_abe,
 	&omap44xx_mpu__l4_abe,
···
 	&omap44xx_l4_cfg__l4_wkup,
 	&omap44xx_mpu__mpu_private,
 	&omap44xx_l4_cfg__ocp_wp_noc,
-	&omap44xx_l4_abe__aess,
-	&omap44xx_l4_abe__aess_dma,
+	/* &omap44xx_l4_abe__aess, */
+	/* &omap44xx_l4_abe__aess_dma, */
 	&omap44xx_l3_main_2__c2c,
 	&omap44xx_l4_wkup__counter_32k,
 	&omap44xx_l4_cfg__ctrl_module_core,
···
 	&omap44xx_l4_per__uart2,
 	&omap44xx_l4_per__uart3,
 	&omap44xx_l4_per__uart4,
-	&omap44xx_l4_cfg__usb_host_fs,
+	/* &omap44xx_l4_cfg__usb_host_fs, */
 	&omap44xx_l4_cfg__usb_host_hs,
 	&omap44xx_l4_cfg__usb_otg_hs,
 	&omap44xx_l4_cfg__usb_tll_hs,
+2
arch/arm/mach-omap2/twl-common.c
···
 #include "twl-common.h"
 #include "pm.h"
 #include "voltage.h"
+#include "mux.h"

 static struct i2c_board_info __initdata pmic_i2c_board_info = {
 	.addr		= 0x48,
···
 		struct twl6040_platform_data *twl6040_data, int twl6040_irq)
 {
 	/* PMIC part*/
+	omap_mux_init_signal("sys_nirq1", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE);
 	strncpy(omap4_i2c1_board_info[0].type, pmic_type,
 		sizeof(omap4_i2c1_board_info[0].type));
 	omap4_i2c1_board_info[0].irq = OMAP44XX_IRQ_SYS_1N;
+14 -1
arch/arm/mach-pxa/hx4700.c
··· 127 127 GPIO19_SSP2_SCLK, 128 128 GPIO86_SSP2_RXD, 129 129 GPIO87_SSP2_TXD, 130 - GPIO88_GPIO, 130 + GPIO88_GPIO | MFP_LPM_DRIVE_HIGH, /* TSC2046_CS */ 131 + 132 + /* BQ24022 Regulator */ 133 + GPIO72_GPIO | MFP_LPM_KEEP_OUTPUT, /* BQ24022_nCHARGE_EN */ 134 + GPIO96_GPIO | MFP_LPM_KEEP_OUTPUT, /* BQ24022_ISET2 */ 131 135 132 136 /* HX4700 specific input GPIOs */ 133 137 GPIO12_GPIO | WAKEUP_ON_EDGE_RISE, /* ASIC3_IRQ */ ··· 139 135 GPIO14_GPIO, /* nWLAN_IRQ */ 140 136 141 137 /* HX4700 specific output GPIOs */ 138 + GPIO61_GPIO | MFP_LPM_DRIVE_HIGH, /* W3220_nRESET */ 139 + GPIO71_GPIO | MFP_LPM_DRIVE_HIGH, /* ASIC3_nRESET */ 140 + GPIO81_GPIO | MFP_LPM_DRIVE_HIGH, /* CPU_GP_nRESET */ 141 + GPIO116_GPIO | MFP_LPM_DRIVE_HIGH, /* CPU_HW_nRESET */ 142 142 GPIO102_GPIO | MFP_LPM_DRIVE_LOW, /* SYNAPTICS_POWER_ON */ 143 143 144 144 GPIO10_GPIO, /* GSM_IRQ */ ··· 880 872 { GPIO110_HX4700_LCD_LVDD_3V3_ON, GPIOF_OUT_INIT_HIGH, "LCD_LVDD" }, 881 873 { GPIO111_HX4700_LCD_AVDD_3V3_ON, GPIOF_OUT_INIT_HIGH, "LCD_AVDD" }, 882 874 { GPIO32_HX4700_RS232_ON, GPIOF_OUT_INIT_HIGH, "RS232_ON" }, 875 + { GPIO61_HX4700_W3220_nRESET, GPIOF_OUT_INIT_HIGH, "W3220_nRESET" }, 883 876 { GPIO71_HX4700_ASIC3_nRESET, GPIOF_OUT_INIT_HIGH, "ASIC3_nRESET" }, 877 + { GPIO81_HX4700_CPU_GP_nRESET, GPIOF_OUT_INIT_HIGH, "CPU_GP_nRESET" }, 884 878 { GPIO82_HX4700_EUART_RESET, GPIOF_OUT_INIT_HIGH, "EUART_RESET" }, 879 + { GPIO116_HX4700_CPU_HW_nRESET, GPIOF_OUT_INIT_HIGH, "CPU_HW_nRESET" }, 885 880 }; 886 881 887 882 static void __init hx4700_init(void) 888 883 { 889 884 int ret; 885 + 886 + PCFR = PCFR_GPR_EN | PCFR_OPDE; 890 887 891 888 pxa2xx_mfp_config(ARRAY_AND_SIZE(hx4700_pin_config)); 892 889 gpio_set_wake(GPIO12_HX4700_ASIC3_IRQ, 1);
+1 -1
arch/arm/mach-s3c24xx/clock-s3c2440.c
··· 106 106 static struct clk s3c2440_clk_ac97 = { 107 107 .name = "ac97", 108 108 .enable = s3c2410_clkcon_enable, 109 - .ctrlbit = S3C2440_CLKCON_CAMERA, 109 + .ctrlbit = S3C2440_CLKCON_AC97, 110 110 }; 111 111 112 112 static unsigned long s3c2440_fclk_n_getrate(struct clk *clk)
+5
arch/arm/mach-shmobile/platsmp.c
··· 22 22 #include <mach/common.h> 23 23 #include <mach/emev2.h> 24 24 25 + #ifdef CONFIG_ARCH_SH73A0 25 26 #define is_sh73a0() (machine_is_ag5evm() || machine_is_kota2() || \ 26 27 of_machine_is_compatible("renesas,sh73a0")) 28 + #else 29 + #define is_sh73a0() (0) 30 + #endif 31 + 27 32 #define is_r8a7779() machine_is_marzen() 28 33 29 34 #ifdef CONFIG_ARCH_EMEV2
+1 -1
arch/arm/mach-spear3xx/spear3xx.c
··· 87 87 88 88 static void __init spear3xx_timer_init(void) 89 89 { 90 - char pclk_name[] = "pll3_48m_clk"; 90 + char pclk_name[] = "pll3_clk"; 91 91 struct clk *gpt_clk, *pclk; 92 92 93 93 spear3xx_clk_init();
+1 -1
arch/arm/mach-spear6xx/spear6xx.c
··· 423 423 424 424 static void __init spear6xx_timer_init(void) 425 425 { 426 - char pclk_name[] = "pll3_48m_clk"; 426 + char pclk_name[] = "pll3_clk"; 427 427 struct clk *gpt_clk, *pclk; 428 428 429 429 spear6xx_clk_init();
+7 -5
arch/arm/mach-ux500/board-mop500.c
··· 625 625 &ab8500_device, 626 626 }; 627 627 628 - static struct platform_device *snowball_of_platform_devs[] __initdata = { 629 - &snowball_led_dev, 630 - &snowball_key_dev, 631 - }; 632 - 633 628 static void __init mop500_init_machine(void) 634 629 { 635 630 struct device *parent = NULL; ··· 764 769 765 770 #ifdef CONFIG_MACH_UX500_DT 766 771 772 + static struct platform_device *snowball_of_platform_devs[] __initdata = { 773 + &snowball_led_dev, 774 + &snowball_key_dev, 775 + }; 776 + 767 777 struct of_dev_auxdata u8500_auxdata_lookup[] __initdata = { 768 778 /* Requires DMA and call-back bindings. */ 769 779 OF_DEV_AUXDATA("arm,pl011", 0x80120000, "uart0", &uart0_plat), ··· 786 786 OF_DEV_AUXDATA("st,nomadik-gpio", 0x8011e000, "gpio.6", NULL), 787 787 OF_DEV_AUXDATA("st,nomadik-gpio", 0x8011e080, "gpio.7", NULL), 788 788 OF_DEV_AUXDATA("st,nomadik-gpio", 0xa03fe000, "gpio.8", NULL), 789 + /* Requires device name bindings. */ 790 + OF_DEV_AUXDATA("stericsson,nmk_pinctrl", 0, "pinctrl-db8500", NULL), 789 791 {}, 790 792 }; 791 793
+2
arch/arm/mach-ux500/timer.c
··· 63 63 64 64 /* TODO: Once MTU has been DT:ed place code above into else. */ 65 65 if (of_have_populated_dt()) { 66 + #ifdef CONFIG_OF 66 67 np = of_find_matching_node(NULL, prcmu_timer_of_match); 67 68 if (!np) 69 + #endif 68 70 goto dt_fail; 69 71 70 72 tmp_base = of_iomap(np, 0);
-1
arch/arm/mach-versatile/pci.c
··· 339 339 static int __init versatile_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 340 340 { 341 341 int irq; 342 - int devslot = PCI_SLOT(dev->devfn); 343 342 344 343 /* slot, pin, irq 345 344 * 24 1 27
+2 -2
arch/arm/mm/dma-mapping.c
··· 1091 1091 while (--i) 1092 1092 if (pages[i]) 1093 1093 __free_pages(pages[i], 0); 1094 - if (array_size < PAGE_SIZE) 1094 + if (array_size <= PAGE_SIZE) 1095 1095 kfree(pages); 1096 1096 else 1097 1097 vfree(pages); ··· 1106 1106 for (i = 0; i < count; i++) 1107 1107 if (pages[i]) 1108 1108 __free_pages(pages[i], 0); 1109 - if (array_size < PAGE_SIZE) 1109 + if (array_size <= PAGE_SIZE) 1110 1110 kfree(pages); 1111 1111 else 1112 1112 vfree(pages);
+1 -1
arch/arm/mm/mm.h
··· 64 64 #ifdef CONFIG_ZONE_DMA 65 65 extern phys_addr_t arm_dma_limit; 66 66 #else 67 - #define arm_dma_limit ((u32)~0) 67 + #define arm_dma_limit ((phys_addr_t)~0) 68 68 #endif 69 69 70 70 extern phys_addr_t arm_lowmem_limit;
+5 -3
arch/arm/plat-samsung/adc.c
··· 157 157 return -EINVAL; 158 158 } 159 159 160 - if (client->is_ts && adc->ts_pend) 161 - return -EAGAIN; 162 - 163 160 spin_lock_irqsave(&adc->lock, flags); 161 + 162 + if (client->is_ts && adc->ts_pend) { 163 + spin_unlock_irqrestore(&adc->lock, flags); 164 + return -EAGAIN; 165 + } 164 166 165 167 client->channel = channel; 166 168 client->nr_samples = nr_samples;
+2 -1
arch/arm/plat-samsung/devs.c
··· 126 126 #ifdef CONFIG_CPU_S3C2440 127 127 static struct resource s3c_camif_resource[] = { 128 128 [0] = DEFINE_RES_MEM(S3C2440_PA_CAMIF, S3C2440_SZ_CAMIF), 129 - [1] = DEFINE_RES_IRQ(IRQ_CAM), 129 + [1] = DEFINE_RES_IRQ(IRQ_S3C2440_CAM_C), 130 + [2] = DEFINE_RES_IRQ(IRQ_S3C2440_CAM_P), 130 131 }; 131 132 132 133 struct platform_device s3c_device_camif = {
+1
arch/arm/plat-samsung/s5p-clock.c
··· 37 37 struct clk clk_xusbxti = { 38 38 .name = "xusbxti", 39 39 .id = -1, 40 + .rate = 24000000, 40 41 }; 41 42 42 43 struct clk s5p_clk_27m = {
+3
arch/h8300/include/asm/pgtable.h
··· 70 70 #define VMALLOC_END 0xffffffff 71 71 72 72 #define arch_enter_lazy_cpu_mode() do {} while (0) 73 + 74 + #include <asm-generic/pgtable.h> 75 + 73 76 #endif /* _H8300_PGTABLE_H */
+2 -1
arch/h8300/include/asm/uaccess.h
··· 100 100 break; \ 101 101 default: \ 102 102 __gu_err = __get_user_bad(); \ 103 - __gu_val = 0; \ 104 103 break; \ 105 104 } \ 106 105 (x) = __gu_val; \ ··· 157 158 memset(to, 0, n); 158 159 return 0; 159 160 } 161 + 162 + #define __clear_user clear_user 160 163 161 164 #endif /* _H8300_UACCESS_H */
+1 -1
arch/h8300/kernel/signal.c
··· 447 447 * want to handle. Thus you cannot kill init even with a SIGKILL even by 448 448 * mistake. 449 449 */ 450 - statis void do_signal(struct pt_regs *regs) 450 + static void do_signal(struct pt_regs *regs) 451 451 { 452 452 siginfo_t info; 453 453 int signr;
+1
arch/h8300/kernel/time.c
··· 27 27 #include <linux/profile.h> 28 28 29 29 #include <asm/io.h> 30 + #include <asm/irq_regs.h> 30 31 #include <asm/timer.h> 31 32 32 33 #define TICK_SIZE (tick_nsec / 1000)
+3 -3
arch/m32r/boot/compressed/Makefile
··· 43 43 44 44 OBJCOPYFLAGS += -R .empty_zero_page 45 45 46 - suffix_$(CONFIG_KERNEL_GZIP) = gz 47 - suffix_$(CONFIG_KERNEL_BZIP2) = bz2 48 - suffix_$(CONFIG_KERNEL_LZMA) = lzma 46 + suffix-$(CONFIG_KERNEL_GZIP) = gz 47 + suffix-$(CONFIG_KERNEL_BZIP2) = bz2 48 + suffix-$(CONFIG_KERNEL_LZMA) = lzma 49 49 50 50 $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix-y) FORCE 51 51 $(call if_changed,ld)
+11 -1
arch/m32r/boot/compressed/misc.c
··· 28 28 static unsigned long free_mem_end_ptr; 29 29 30 30 #ifdef CONFIG_KERNEL_BZIP2 31 - static void *memset(void *s, int c, size_t n) 31 + void *memset(void *s, int c, size_t n) 32 32 { 33 33 char *ss = s; 34 34 ··· 39 39 #endif 40 40 41 41 #ifdef CONFIG_KERNEL_GZIP 42 + void *memcpy(void *dest, const void *src, size_t n) 43 + { 44 + char *d = dest; 45 + const char *s = src; 46 + while (n--) 47 + *d++ = *s++; 48 + 49 + return dest; 50 + } 51 + 42 52 #define BOOT_HEAP_SIZE 0x10000 43 53 #include "../../../../lib/decompress_inflate.c" 44 54 #endif
-3
arch/m32r/include/asm/ptrace.h
··· 113 113 114 114 #define PTRACE_OLDSETOPTIONS 21 115 115 116 - /* options set using PTRACE_SETOPTIONS */ 117 - #define PTRACE_O_TRACESYSGOOD 0x00000001 118 - 119 116 #ifdef __KERNEL__ 120 117 121 118 #include <asm/m32r.h> /* M32R_PSW_BSM, M32R_PSW_BPM */
+3 -4
arch/m32r/kernel/ptrace.c
··· 591 591 592 592 if (access_process_vm(child, pc&~3, &insn, sizeof(insn), 0) 593 593 != sizeof(insn)) 594 - return -EIO; 594 + return; 595 595 596 596 compute_next_pc(insn, pc, &next_pc, child); 597 597 if (next_pc & 0x80000000) 598 - return -EIO; 598 + return; 599 599 600 600 if (embed_debug_trap(child, next_pc)) 601 - return -EIO; 601 + return; 602 602 603 603 invalidate_cache(); 604 - return 0; 605 604 } 606 605 607 606 void user_disable_single_step(struct task_struct *child)
+1 -1
arch/m32r/kernel/signal.c
··· 286 286 case -ERESTARTNOINTR: 287 287 regs->r0 = regs->orig_r0; 288 288 if (prev_insn(regs) < 0) 289 - return -EFAULT; 289 + return; 290 290 } 291 291 } 292 292
-1
arch/mips/include/asm/bitops.h
··· 17 17 #include <linux/irqflags.h> 18 18 #include <linux/types.h> 19 19 #include <asm/barrier.h> 20 - #include <asm/bug.h> 21 20 #include <asm/byteorder.h> /* sigh ... */ 22 21 #include <asm/cpu-features.h> 23 22 #include <asm/sgidefs.h>
+1
arch/mips/include/asm/io.h
··· 17 17 #include <linux/types.h> 18 18 19 19 #include <asm/addrspace.h> 20 + #include <asm/bug.h> 20 21 #include <asm/byteorder.h> 21 22 #include <asm/cpu.h> 22 23 #include <asm/cpu-features.h>
+2 -2
arch/mips/pci/pci-lantiq.c
··· 129 129 130 130 /* setup reset gpio used by pci */ 131 131 reset_gpio = of_get_named_gpio(node, "gpio-reset", 0); 132 - if (reset_gpio > 0) 132 + if (gpio_is_valid(reset_gpio)) 133 133 devm_gpio_request(&pdev->dev, reset_gpio, "pci-reset"); 134 134 135 135 /* enable auto-switching between PCI and EBU */ ··· 192 192 ltq_ebu_w32(ltq_ebu_r32(LTQ_EBU_PCC_IEN) | 0x10, LTQ_EBU_PCC_IEN); 193 193 194 194 /* toggle reset pin */ 195 - if (reset_gpio > 0) { 195 + if (gpio_is_valid(reset_gpio)) { 196 196 __gpio_set_value(reset_gpio, 0); 197 197 wmb(); 198 198 mdelay(1);
-3
arch/mn10300/include/asm/ptrace.h
··· 81 81 #define PTRACE_GETFPREGS 14 82 82 #define PTRACE_SETFPREGS 15 83 83 84 - /* options set using PTRACE_SETOPTIONS */ 85 - #define PTRACE_O_TRACESYSGOOD 0x00000001 86 - 87 84 #ifdef __KERNEL__ 88 85 89 86 #define user_mode(regs) (((regs)->epsw & EPSW_nSL) == EPSW_nSL)
+1 -1
arch/mn10300/include/asm/thread_info.h
··· 123 123 } 124 124 125 125 #ifndef CONFIG_KGDB 126 - void arch_release_thread_info(struct thread_info *ti) 126 + void arch_release_thread_info(struct thread_info *ti); 127 127 #endif 128 128 #define get_thread_info(ti) get_task_struct((ti)->task) 129 129 #define put_thread_info(ti) put_task_struct((ti)->task)
-11
arch/mn10300/include/asm/timex.h
··· 11 11 #ifndef _ASM_TIMEX_H 12 12 #define _ASM_TIMEX_H 13 13 14 - #include <asm/hardirq.h> 15 14 #include <unit/timex.h> 16 15 17 16 #define TICK_SIZE (tick_nsec / 1000) ··· 28 29 29 30 extern int init_clockevents(void); 30 31 extern int init_clocksource(void); 31 - 32 - static inline void setup_jiffies_interrupt(int irq, 33 - struct irqaction *action) 34 - { 35 - u16 tmp; 36 - setup_irq(irq, action); 37 - set_intr_level(irq, NUM2GxICR_LEVEL(CONFIG_TIMER_IRQ_LEVEL)); 38 - GxICR(irq) |= GxICR_ENABLE | GxICR_DETECT | GxICR_REQUEST; 39 - tmp = GxICR(irq); 40 - } 41 32 42 33 #endif /* __KERNEL__ */ 43 34
+10
arch/mn10300/kernel/cevt-mn10300.c
··· 70 70 { 71 71 } 72 72 73 + static inline void setup_jiffies_interrupt(int irq, 74 + struct irqaction *action) 75 + { 76 + u16 tmp; 77 + setup_irq(irq, action); 78 + set_intr_level(irq, NUM2GxICR_LEVEL(CONFIG_TIMER_IRQ_LEVEL)); 79 + GxICR(irq) |= GxICR_ENABLE | GxICR_DETECT | GxICR_REQUEST; 80 + tmp = GxICR(irq); 81 + } 82 + 73 83 int __init init_clockevents(void) 74 84 { 75 85 struct clock_event_device *cd;
+2
arch/mn10300/kernel/internal.h
··· 9 9 * 2 of the Licence, or (at your option) any later version. 10 10 */ 11 11 12 + #include <linux/irqreturn.h> 13 + 12 14 struct clocksource; 13 15 struct clock_event_device; 14 16
+2 -2
arch/mn10300/kernel/irq.c
··· 170 170 case SC1TXIRQ: 171 171 #ifdef CONFIG_MN10300_TTYSM1_TIMER12 172 172 case TM12IRQ: 173 - #elif CONFIG_MN10300_TTYSM1_TIMER9 173 + #elif defined(CONFIG_MN10300_TTYSM1_TIMER9) 174 174 case TM9IRQ: 175 - #elif CONFIG_MN10300_TTYSM1_TIMER3 175 + #elif defined(CONFIG_MN10300_TTYSM1_TIMER3) 176 176 case TM3IRQ: 177 177 #endif /* CONFIG_MN10300_TTYSM1_TIMER12 */ 178 178 #endif /* CONFIG_MN10300_TTYSM1 */
+3 -2
arch/mn10300/kernel/signal.c
··· 459 459 else 460 460 ret = setup_frame(sig, ka, oldset, regs); 461 461 if (ret) 462 - return; 462 + return ret; 463 463 464 464 signal_delivered(sig, info, ka, regs, 465 - test_thread_flag(TIF_SINGLESTEP)); 465 + test_thread_flag(TIF_SINGLESTEP)); 466 + return 0; 466 467 } 467 468 468 469 /*
+1
arch/mn10300/kernel/traps.c
··· 26 26 #include <linux/kdebug.h> 27 27 #include <linux/bug.h> 28 28 #include <linux/irq.h> 29 + #include <linux/export.h> 29 30 #include <asm/processor.h> 30 31 #include <linux/uaccess.h> 31 32 #include <asm/io.h>
+1
arch/mn10300/mm/dma-alloc.c
··· 15 15 #include <linux/string.h> 16 16 #include <linux/pci.h> 17 17 #include <linux/gfp.h> 18 + #include <linux/export.h> 18 19 #include <asm/io.h> 19 20 20 21 static unsigned long pci_sram_allocated = 0xbc000000;
-4
arch/mn10300/unit-asb2303/include/unit/timex.h
··· 11 11 #ifndef _ASM_UNIT_TIMEX_H 12 12 #define _ASM_UNIT_TIMEX_H 13 13 14 - #ifndef __ASSEMBLY__ 15 - #include <linux/irq.h> 16 - #endif /* __ASSEMBLY__ */ 17 - 18 14 #include <asm/timer-regs.h> 19 15 #include <unit/clock.h> 20 16 #include <asm/param.h>
+1
arch/mn10300/unit-asb2303/smc91111.c
··· 15 15 #include <linux/platform_device.h> 16 16 17 17 #include <asm/io.h> 18 + #include <asm/irq.h> 18 19 #include <asm/timex.h> 19 20 #include <asm/processor.h> 20 21 #include <asm/intctl-regs.h>
-4
arch/mn10300/unit-asb2305/include/unit/timex.h
··· 11 11 #ifndef _ASM_UNIT_TIMEX_H 12 12 #define _ASM_UNIT_TIMEX_H 13 13 14 - #ifndef __ASSEMBLY__ 15 - #include <linux/irq.h> 16 - #endif /* __ASSEMBLY__ */ 17 - 18 14 #include <asm/timer-regs.h> 19 15 #include <unit/clock.h> 20 16 #include <asm/param.h>
+1
arch/mn10300/unit-asb2305/unit-init.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/pci.h> 15 15 #include <asm/io.h> 16 + #include <asm/irq.h> 16 17 #include <asm/setup.h> 17 18 #include <asm/processor.h> 18 19 #include <asm/intctl-regs.h>
-4
arch/mn10300/unit-asb2364/include/unit/timex.h
··· 11 11 #ifndef _ASM_UNIT_TIMEX_H 12 12 #define _ASM_UNIT_TIMEX_H 13 13 14 - #ifndef __ASSEMBLY__ 15 - #include <linux/irq.h> 16 - #endif /* __ASSEMBLY__ */ 17 - 18 14 #include <asm/timer-regs.h> 19 15 #include <unit/clock.h> 20 16 #include <asm/param.h>
+4 -2
arch/powerpc/include/asm/hw_irq.h
··· 86 86 } 87 87 88 88 #ifdef CONFIG_PPC_BOOK3E 89 - #define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory"); 90 - #define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory"); 89 + #define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory") 90 + #define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory") 91 91 #else 92 92 #define __hard_irq_enable() __mtmsrd(local_paca->kernel_msr | MSR_EE, 1) 93 93 #define __hard_irq_disable() __mtmsrd(local_paca->kernel_msr, 1) ··· 124 124 { 125 125 return !regs->softe; 126 126 } 127 + 128 + extern bool prep_irq_for_idle(void); 127 129 128 130 #else /* CONFIG_PPC64 */ 129 131
+47 -1
arch/powerpc/kernel/irq.c
··· 229 229 */ 230 230 if (unlikely(irq_happened != PACA_IRQ_HARD_DIS)) 231 231 __hard_irq_disable(); 232 - #ifdef CONFIG_TRACE_IRQFLAG 232 + #ifdef CONFIG_TRACE_IRQFLAGS 233 233 else { 234 234 /* 235 235 * We should already be hard disabled here. We had bugs ··· 284 284 local_irq_enable(); 285 285 } else 286 286 __hard_irq_enable(); 287 + } 288 + 289 + /* 290 + * This is a helper to use when about to go into idle low-power 291 + * when the latter has the side effect of re-enabling interrupts 292 + * (such as calling H_CEDE under pHyp). 293 + * 294 + * You call this function with interrupts soft-disabled (this is 295 + * already the case when ppc_md.power_save is called). The function 296 + * will return whether to enter power save or just return. 297 + * 298 + * In the former case, it will have notified lockdep of interrupts 299 + * being re-enabled and generally sanitized the lazy irq state, 300 + * and in the latter case it will leave with interrupts hard 301 + * disabled and marked as such, so the local_irq_enable() call 302 + * in cpu_idle() will properly re-enable everything. 303 + */ 304 + bool prep_irq_for_idle(void) 305 + { 306 + /* 307 + * First we need to hard disable to ensure no interrupt 308 + * occurs before we effectively enter the low power state 309 + */ 310 + hard_irq_disable(); 311 + 312 + /* 313 + * If anything happened while we were soft-disabled, 314 + * we return now and do not enter the low power state. 315 + */ 316 + if (lazy_irq_pending()) 317 + return false; 318 + 319 + /* Tell lockdep we are about to re-enable */ 320 + trace_hardirqs_on(); 321 + 322 + /* 323 + * Mark interrupts as soft-enabled and clear the 324 + * PACA_IRQ_HARD_DIS from the pending mask since we 325 + * are about to hard enable as well as a side effect 326 + * of entering the low power state. 
326 + * of entering the low power state. 327 + */ 328 + local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS; 329 + local_paca->soft_enabled = 1; 330 + 331 + /* Tell the caller to enter the low power state */ 332 + return true; 287 333 } 288 334 289 335 #endif /* CONFIG_PPC64 */
+1
arch/powerpc/kvm/book3s_pr_papr.c
··· 241 241 case H_PUT_TCE: 242 242 return kvmppc_h_pr_put_tce(vcpu); 243 243 case H_CEDE: 244 + vcpu->arch.shared->msr |= MSR_EE; 244 245 kvm_vcpu_block(vcpu); 245 246 clear_bit(KVM_REQ_UNHALT, &vcpu->requests); 246 247 vcpu->stat.halt_wakeup++;
+1 -1
arch/powerpc/mm/numa.c
··· 639 639 unsigned int n, rc, ranges, is_kexec_kdump = 0; 640 640 unsigned long lmb_size, base, size, sz; 641 641 int nid; 642 - struct assoc_arrays aa; 642 + struct assoc_arrays aa = { .arrays = NULL }; 643 643 644 644 n = of_get_drconf_memory(memory, &dm); 645 645 if (!n)
+6 -5
arch/powerpc/platforms/cell/pervasive.c
··· 42 42 { 43 43 unsigned long ctrl, thread_switch_control; 44 44 45 - /* 46 - * We need to hard disable interrupts, the local_irq_enable() done by 47 - * our caller upon return will hard re-enable. 48 - */ 49 - hard_irq_disable(); 45 + /* Ensure our interrupt state is properly tracked */ 46 + if (!prep_irq_for_idle()) 47 + return; 50 48 51 49 ctrl = mfspr(SPRN_CTRLF); 52 50 ··· 79 81 */ 80 82 ctrl &= ~(CTRL_RUNLATCH | CTRL_TE); 81 83 mtspr(SPRN_CTRLT, ctrl); 84 + 85 + /* Re-enable interrupts in MSR */ 86 + __hard_irq_enable(); 82 87 } 83 88 84 89 static int cbe_system_reset_exception(struct pt_regs *regs)
+10 -7
arch/powerpc/platforms/pseries/processor_idle.c
··· 99 99 static void check_and_cede_processor(void) 100 100 { 101 101 /* 102 - * Interrupts are soft-disabled at this point, 103 - * but not hard disabled. So an interrupt might have 104 - * occurred before entering NAP, and would be potentially 105 - * lost (edge events, decrementer events, etc...) unless 106 - * we first hard disable then check. 102 + * Ensure our interrupt state is properly tracked, 103 + * also checks if no interrupt has occurred while we 104 + * were soft-disabled 107 105 */ 108 - hard_irq_disable(); 109 - if (!lazy_irq_pending()) 106 + if (prep_irq_for_idle()) { 110 107 cede_processor(); 108 + #ifdef CONFIG_TRACE_IRQFLAGS 109 + /* Ensure that H_CEDE returns with IRQs on */ 110 + if (WARN_ON(!(mfmsr() & MSR_EE))) 111 + __hard_irq_enable(); 112 + #endif 113 + } 111 114 } 112 115 113 116 static int dedicated_cede_loop(struct cpuidle_device *dev,
+14 -3
arch/sh/include/asm/io_noioport.h
··· 19 19 return -1; 20 20 } 21 21 22 - #define outb(x, y) BUG() 23 - #define outw(x, y) BUG() 24 - #define outl(x, y) BUG() 22 + static inline void outb(unsigned char x, unsigned long port) 23 + { 24 + BUG(); 25 + } 26 + 27 + static inline void outw(unsigned short x, unsigned long port) 28 + { 29 + BUG(); 30 + } 31 + 32 + static inline void outl(unsigned int x, unsigned long port) 33 + { 34 + BUG(); 35 + } 25 36 26 37 #define inb_p(addr) inb(addr) 27 38 #define inw_p(addr) inw(addr)
+1 -1
arch/sh/kernel/cpu/sh3/serial-sh7720.c
··· 2 2 #include <linux/serial_core.h> 3 3 #include <linux/io.h> 4 4 #include <cpu/serial.h> 5 - #include <asm/gpio.h> 5 + #include <cpu/gpio.h> 6 6 7 7 static void sh7720_sci_init_pins(struct uart_port *port, unsigned int cflag) 8 8 {
+7 -2
arch/tile/kernel/backtrace.c
··· 14 14 15 15 #include <linux/kernel.h> 16 16 #include <linux/string.h> 17 + #include <asm/byteorder.h> 17 18 #include <asm/backtrace.h> 18 19 #include <asm/tile-desc.h> 19 20 #include <arch/abi.h> ··· 337 336 bytes_to_prefetch / sizeof(tile_bundle_bits); 338 337 } 339 338 340 - /* Decode the next bundle. */ 341 - bundle.bits = prefetched_bundles[next_bundle++]; 339 + /* 340 + * Decode the next bundle. 341 + * TILE always stores instruction bundles in little-endian 342 + * mode, even when the chip is running in big-endian mode. 343 + */ 344 + bundle.bits = le64_to_cpu(prefetched_bundles[next_bundle++]); 342 345 bundle.num_insns = 343 346 parse_insn_tile(bundle.bits, pc, bundle.insns); 344 347 num_info_ops = bt_get_info_ops(&bundle, info_operands);
-1
arch/um/drivers/mconsole_kern.c
··· 705 705 struct task_struct *from = current, *to = arg; 706 706 707 707 to->thread.saved_task = from; 708 - rcu_switch_from(from); 709 708 switch_to(from, to, from); 710 709 } 711 710
+35 -4
arch/x86/kernel/vsyscall_64.c
··· 139 139 return nr; 140 140 } 141 141 142 + #ifdef CONFIG_SECCOMP 143 + static int vsyscall_seccomp(struct task_struct *tsk, int syscall_nr) 144 + { 145 + if (!seccomp_mode(&tsk->seccomp)) 146 + return 0; 147 + task_pt_regs(tsk)->orig_ax = syscall_nr; 148 + task_pt_regs(tsk)->ax = syscall_nr; 149 + return __secure_computing(syscall_nr); 150 + } 151 + #else 152 + #define vsyscall_seccomp(_tsk, _nr) 0 153 + #endif 154 + 142 155 static bool write_ok_or_segv(unsigned long ptr, size_t size) 143 156 { 144 157 /* ··· 187 174 int vsyscall_nr; 188 175 int prev_sig_on_uaccess_error; 189 176 long ret; 177 + int skip; 190 178 191 179 /* 192 180 * No point in checking CS -- the only way to get here is a user mode ··· 219 205 } 220 206 221 207 tsk = current; 222 - if (seccomp_mode(&tsk->seccomp)) 223 - do_exit(SIGKILL); 224 - 225 208 /* 226 209 * With a real vsyscall, page faults cause SIGSEGV. We want to 227 210 * preserve that behavior to make writing exploits harder. ··· 233 222 * address 0". 234 223 */ 235 224 ret = -EFAULT; 225 + skip = 0; 236 226 switch (vsyscall_nr) { 237 227 case 0: 228 + skip = vsyscall_seccomp(tsk, __NR_gettimeofday); 229 + if (skip) 230 + break; 231 + 238 232 if (!write_ok_or_segv(regs->di, sizeof(struct timeval)) || 239 233 !write_ok_or_segv(regs->si, sizeof(struct timezone))) 240 234 break; ··· 250 234 break; 251 235 252 236 case 1: 237 + skip = vsyscall_seccomp(tsk, __NR_time); 238 + if (skip) 239 + break; 240 + 253 241 if (!write_ok_or_segv(regs->di, sizeof(time_t))) 254 242 break; ··· 261 241 break; 262 242 263 243 case 2: 244 + skip = vsyscall_seccomp(tsk, __NR_getcpu); 245 + if (skip) 246 + break; 247 + 264 248 if (!write_ok_or_segv(regs->di, sizeof(unsigned)) || 265 249 !write_ok_or_segv(regs->si, sizeof(unsigned))) 266 250 break; ··· 276 252 } 277 253 278 254 current_thread_info()->sig_on_uaccess_error = prev_sig_on_uaccess_error; 255 + 256 + if (skip) { 257 + if ((long)regs->ax <= 0L) /* seccomp errno emulation */ 258 + goto do_ret;
259 + goto done; /* seccomp trace/trap */ 260 + } 261 + 279 262 if (ret == -EFAULT) { 280 263 /* Bad news -- userspace fed a bad pointer to a vsyscall. */ ··· 301 271 302 272 regs->ax = ret; 303 273 274 + do_ret: 304 275 /* Emulate a ret instruction. */ 305 276 regs->ip = caller; 306 277 regs->sp += 8; 307 - 278 + done: 308 279 return true; 309 280 310 281 sigsegv:
+3
arch/x86/kvm/mmu.c
··· 3934 3934 { 3935 3935 struct kvm_mmu_page *page; 3936 3936 3937 + if (list_empty(&kvm->arch.active_mmu_pages)) 3938 + return; 3939 + 3937 3940 page = container_of(kvm->arch.active_mmu_pages.prev, 3938 3941 struct kvm_mmu_page, link); 3939 3942 kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
+1 -1
arch/xtensa/kernel/process.c
··· 277 277 278 278 /* Don't leak any random bits. */ 279 279 280 - memset(elfregs, 0, sizeof (elfregs)); 280 + memset(elfregs, 0, sizeof(*elfregs)); 281 281 282 282 /* Note: PS.EXCM is not set while user task is running; its 283 283 * being set in regs->ps is for exception handling convenience.
-22
drivers/acpi/acpica/hwsleep.c
··· 95 95 return_ACPI_STATUS(status); 96 96 } 97 97 98 - if (sleep_state != ACPI_STATE_S5) { 99 - /* 100 - * Disable BM arbitration. This feature is contained within an 101 - * optional register (PM2 Control), so ignore a BAD_ADDRESS 102 - * exception. 103 - */ 104 - status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1); 105 - if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) { 106 - return_ACPI_STATUS(status); 107 - } 108 - } 109 - 110 98 /* 111 99 * 1) Disable/Clear all GPEs 112 100 * 2) Enable all wakeup GPEs ··· 351 363 acpi_write_bit_register(acpi_gbl_fixed_event_info 352 364 [ACPI_EVENT_POWER_BUTTON]. 353 365 status_register_id, ACPI_CLEAR_STATUS); 354 - 355 - /* 356 - * Enable BM arbitration. This feature is contained within an 357 - * optional register (PM2 Control), so ignore a BAD_ADDRESS 358 - * exception. 359 - */ 360 - status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0); 361 - if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) { 362 - return_ACPI_STATUS(status); 363 - } 364 366 365 367 acpi_hw_execute_sleep_method(METHOD_PATHNAME__SST, ACPI_SST_WORKING); 366 368 return_ACPI_STATUS(status);
+1 -1
drivers/acpi/acpica/nspredef.c
··· 638 638 /* Create the new outer package and populate it */ 639 639 640 640 status = 641 - acpi_ns_wrap_with_package(data, *elements, 641 + acpi_ns_wrap_with_package(data, return_object, 642 642 return_object_ptr); 643 643 if (ACPI_FAILURE(status)) { 644 644 return (status);
+4 -2
drivers/acpi/processor_core.c
··· 189 189 * Processor (CPU3, 0x03, 0x00000410, 0x06) {} 190 190 * } 191 191 * 192 - * Ignores apic_id and always return 0 for CPU0's handle. 192 + * Ignores apic_id and always returns 0 for the processor 193 + * handle with acpi id 0 if nr_cpu_ids is 1. 194 + * This should be the case if SMP tables are not found. 193 195 * Return -1 for other CPU's handle. 194 196 */ 195 - if (acpi_id == 0) 197 + if (nr_cpu_ids <= 1 && acpi_id == 0) 196 198 return acpi_id; 197 199 else 198 200 return apic_id;
+2
drivers/base/dd.c
··· 24 24 #include <linux/wait.h> 25 25 #include <linux/async.h> 26 26 #include <linux/pm_runtime.h> 27 + #include <scsi/scsi_scan.h> 27 28 28 29 #include "base.h" 29 30 #include "power/power.h" ··· 333 332 /* wait for the known devices to complete their probing */ 334 333 wait_event(probe_waitqueue, atomic_read(&probe_count) == 0); 335 334 async_synchronize_full(); 335 + scsi_complete_async_scans(); 336 336 } 337 337 EXPORT_SYMBOL_GPL(wait_for_device_probe); 338 338
+3 -5
drivers/block/loop.c
··· 1597 1597 struct gendisk *disk; 1598 1598 int err; 1599 1599 1600 + err = -ENOMEM; 1600 1601 lo = kzalloc(sizeof(*lo), GFP_KERNEL); 1601 - if (!lo) { 1602 - err = -ENOMEM; 1602 + if (!lo) 1603 1603 goto out; 1604 - } 1605 1604 1606 - err = idr_pre_get(&loop_index_idr, GFP_KERNEL); 1607 - if (err < 0) 1605 + if (!idr_pre_get(&loop_index_idr, GFP_KERNEL)) 1608 1606 goto out_free_dev; 1609 1607 1610 1608 if (i >= 0) {
+155 -157
drivers/clk/spear/spear1310_clock.c
··· 345 345 /* clock parents */ 346 346 static const char *vco_parents[] = { "osc_24m_clk", "osc_25m_clk", }; 347 347 static const char *gpt_parents[] = { "osc_24m_clk", "apb_clk", }; 348 - static const char *uart0_parents[] = { "pll5_clk", "uart_synth_gate_clk", }; 349 - static const char *c3_parents[] = { "pll5_clk", "c3_synth_gate_clk", }; 350 - static const char *gmac_phy_input_parents[] = { "gmii_125m_pad_clk", "pll2_clk", 348 + static const char *uart0_parents[] = { "pll5_clk", "uart_syn_gclk", }; 349 + static const char *c3_parents[] = { "pll5_clk", "c3_syn_gclk", }; 350 + static const char *gmac_phy_input_parents[] = { "gmii_pad_clk", "pll2_clk", 351 351 "osc_25m_clk", }; 352 - static const char *gmac_phy_parents[] = { "gmac_phy_input_mux_clk", 353 - "gmac_phy_synth_gate_clk", }; 352 + static const char *gmac_phy_parents[] = { "phy_input_mclk", "phy_syn_gclk", }; 354 353 static const char *clcd_synth_parents[] = { "vco1div4_clk", "pll2_clk", }; 355 - static const char *clcd_pixel_parents[] = { "pll5_clk", "clcd_synth_clk", }; 354 + static const char *clcd_pixel_parents[] = { "pll5_clk", "clcd_syn_clk", }; 356 355 static const char *i2s_src_parents[] = { "vco1div2_clk", "none", "pll3_clk", 357 356 "i2s_src_pad_clk", }; 358 - static const char *i2s_ref_parents[] = { "i2s_src_mux_clk", "i2s_prs1_clk", }; 357 + static const char *i2s_ref_parents[] = { "i2s_src_mclk", "i2s_prs1_clk", }; 359 358 static const char *gen_synth0_1_parents[] = { "vco1div4_clk", "vco3div2_clk", 360 359 "pll3_clk", }; 361 360 static const char *gen_synth2_3_parents[] = { "vco1div4_clk", "vco3div2_clk", 362 361 "pll2_clk", }; 363 362 static const char *rmii_phy_parents[] = { "ras_tx50_clk", "none", 364 - "ras_pll2_clk", "ras_synth0_clk", }; 363 + "ras_pll2_clk", "ras_syn0_clk", }; 365 364 static const char *smii_rgmii_phy_parents[] = { "none", "ras_tx125_clk", 366 - "ras_pll2_clk", "ras_synth0_clk", }; 367 - static const char *uart_parents[] = { "ras_apb_clk", "gen_synth3_clk", }; 368 - 
static const char *i2c_parents[] = { "ras_apb_clk", "gen_synth1_clk", };
369 - static const char *ssp1_parents[] = { "ras_apb_clk", "gen_synth1_clk",
365 + 	"ras_pll2_clk", "ras_syn0_clk", };
366 + static const char *uart_parents[] = { "ras_apb_clk", "gen_syn3_clk", };
367 + static const char *i2c_parents[] = { "ras_apb_clk", "gen_syn1_clk", };
368 + static const char *ssp1_parents[] = { "ras_apb_clk", "gen_syn1_clk",
370 369 	"ras_plclk0_clk", };
371 - static const char *pci_parents[] = { "ras_pll3_clk", "gen_synth2_clk", };
372 - static const char *tdm_parents[] = { "ras_pll3_clk", "gen_synth1_clk", };
370 + static const char *pci_parents[] = { "ras_pll3_clk", "gen_syn2_clk", };
371 + static const char *tdm_parents[] = { "ras_pll3_clk", "gen_syn1_clk", };
373 372
374 373 void __init spear1310_clk_init(void)
375 374 {
··· 389 390 		25000000);
390 391 	clk_register_clkdev(clk, "osc_25m_clk", NULL);
391 392
392 - 	clk = clk_register_fixed_rate(NULL, "gmii_125m_pad_clk", NULL,
393 - 		CLK_IS_ROOT, 125000000);
394 - 	clk_register_clkdev(clk, "gmii_125m_pad_clk", NULL);
393 + 	clk = clk_register_fixed_rate(NULL, "gmii_pad_clk", NULL, CLK_IS_ROOT,
394 + 		125000000);
395 + 	clk_register_clkdev(clk, "gmii_pad_clk", NULL);
395 396
396 397 	clk = clk_register_fixed_rate(NULL, "i2s_src_pad_clk", NULL,
397 398 		CLK_IS_ROOT, 12288000);
··· 405 406
406 407 	/* clock derived from 24 or 25 MHz osc clk */
407 408 	/* vco-pll */
408 - 	clk = clk_register_mux(NULL, "vco1_mux_clk", vco_parents,
409 + 	clk = clk_register_mux(NULL, "vco1_mclk", vco_parents,
409 410 		ARRAY_SIZE(vco_parents), 0, SPEAR1310_PLL_CFG,
410 411 		SPEAR1310_PLL1_CLK_SHIFT, SPEAR1310_PLL_CLK_MASK, 0,
411 412 		&_lock);
412 - 	clk_register_clkdev(clk, "vco1_mux_clk", NULL);
413 - 	clk = clk_register_vco_pll("vco1_clk", "pll1_clk", NULL, "vco1_mux_clk",
413 + 	clk_register_clkdev(clk, "vco1_mclk", NULL);
414 + 	clk = clk_register_vco_pll("vco1_clk", "pll1_clk", NULL, "vco1_mclk",
414 415 		0, SPEAR1310_PLL1_CTR, SPEAR1310_PLL1_FRQ, pll_rtbl,
415 416 		ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
416 417 	clk_register_clkdev(clk, "vco1_clk", NULL);
417 418 	clk_register_clkdev(clk1, "pll1_clk", NULL);
418 419
419 - 	clk = clk_register_mux(NULL, "vco2_mux_clk", vco_parents,
420 + 	clk = clk_register_mux(NULL, "vco2_mclk", vco_parents,
420 421 		ARRAY_SIZE(vco_parents), 0, SPEAR1310_PLL_CFG,
421 422 		SPEAR1310_PLL2_CLK_SHIFT, SPEAR1310_PLL_CLK_MASK, 0,
422 423 		&_lock);
423 - 	clk_register_clkdev(clk, "vco2_mux_clk", NULL);
424 - 	clk = clk_register_vco_pll("vco2_clk", "pll2_clk", NULL, "vco2_mux_clk",
424 + 	clk_register_clkdev(clk, "vco2_mclk", NULL);
425 + 	clk = clk_register_vco_pll("vco2_clk", "pll2_clk", NULL, "vco2_mclk",
425 426 		0, SPEAR1310_PLL2_CTR, SPEAR1310_PLL2_FRQ, pll_rtbl,
426 427 		ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
427 428 	clk_register_clkdev(clk, "vco2_clk", NULL);
428 429 	clk_register_clkdev(clk1, "pll2_clk", NULL);
429 430
430 - 	clk = clk_register_mux(NULL, "vco3_mux_clk", vco_parents,
431 + 	clk = clk_register_mux(NULL, "vco3_mclk", vco_parents,
431 432 		ARRAY_SIZE(vco_parents), 0, SPEAR1310_PLL_CFG,
432 433 		SPEAR1310_PLL3_CLK_SHIFT, SPEAR1310_PLL_CLK_MASK, 0,
433 434 		&_lock);
434 - 	clk_register_clkdev(clk, "vco3_mux_clk", NULL);
435 - 	clk = clk_register_vco_pll("vco3_clk", "pll3_clk", NULL, "vco3_mux_clk",
435 + 	clk_register_clkdev(clk, "vco3_mclk", NULL);
436 + 	clk = clk_register_vco_pll("vco3_clk", "pll3_clk", NULL, "vco3_mclk",
436 437 		0, SPEAR1310_PLL3_CTR, SPEAR1310_PLL3_FRQ, pll_rtbl,
437 438 		ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
438 439 	clk_register_clkdev(clk, "vco3_clk", NULL);
··· 472 473 	/* peripherals */
473 474 	clk_register_fixed_factor(NULL, "thermal_clk", "osc_24m_clk", 0, 1,
474 475 		128);
475 - 	clk = clk_register_gate(NULL, "thermal_gate_clk", "thermal_clk", 0,
476 + 	clk = clk_register_gate(NULL, "thermal_gclk", "thermal_clk", 0,
476 477 		SPEAR1310_PERIP2_CLK_ENB, SPEAR1310_THSENS_CLK_ENB, 0,
477 478 		&_lock);
478 479 	clk_register_clkdev(clk, NULL, "spear_thermal");
··· 499 500 	clk_register_clkdev(clk, "apb_clk", NULL);
500 501
501 502 	/* gpt clocks */
502 - 	clk = clk_register_mux(NULL, "gpt0_mux_clk", gpt_parents,
503 + 	clk = clk_register_mux(NULL, "gpt0_mclk", gpt_parents,
503 504 		ARRAY_SIZE(gpt_parents), 0, SPEAR1310_PERIP_CLK_CFG,
504 505 		SPEAR1310_GPT0_CLK_SHIFT, SPEAR1310_GPT_CLK_MASK, 0,
505 506 		&_lock);
506 - 	clk_register_clkdev(clk, "gpt0_mux_clk", NULL);
507 - 	clk = clk_register_gate(NULL, "gpt0_clk", "gpt0_mux_clk", 0,
507 + 	clk_register_clkdev(clk, "gpt0_mclk", NULL);
508 + 	clk = clk_register_gate(NULL, "gpt0_clk", "gpt0_mclk", 0,
508 509 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_GPT0_CLK_ENB, 0,
509 510 		&_lock);
510 511 	clk_register_clkdev(clk, NULL, "gpt0");
511 512
512 - 	clk = clk_register_mux(NULL, "gpt1_mux_clk", gpt_parents,
513 + 	clk = clk_register_mux(NULL, "gpt1_mclk", gpt_parents,
513 514 		ARRAY_SIZE(gpt_parents), 0, SPEAR1310_PERIP_CLK_CFG,
514 515 		SPEAR1310_GPT1_CLK_SHIFT, SPEAR1310_GPT_CLK_MASK, 0,
515 516 		&_lock);
516 - 	clk_register_clkdev(clk, "gpt1_mux_clk", NULL);
517 - 	clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mux_clk", 0,
517 + 	clk_register_clkdev(clk, "gpt1_mclk", NULL);
518 + 	clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mclk", 0,
518 519 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_GPT1_CLK_ENB, 0,
519 520 		&_lock);
520 521 	clk_register_clkdev(clk, NULL, "gpt1");
521 522
522 - 	clk = clk_register_mux(NULL, "gpt2_mux_clk", gpt_parents,
523 + 	clk = clk_register_mux(NULL, "gpt2_mclk", gpt_parents,
523 524 		ARRAY_SIZE(gpt_parents), 0, SPEAR1310_PERIP_CLK_CFG,
524 525 		SPEAR1310_GPT2_CLK_SHIFT, SPEAR1310_GPT_CLK_MASK, 0,
525 526 		&_lock);
526 - 	clk_register_clkdev(clk, "gpt2_mux_clk", NULL);
527 - 	clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mux_clk", 0,
527 + 	clk_register_clkdev(clk, "gpt2_mclk", NULL);
528 + 	clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mclk", 0,
528 529 		SPEAR1310_PERIP2_CLK_ENB, SPEAR1310_GPT2_CLK_ENB, 0,
529 530 		&_lock);
530 531 	clk_register_clkdev(clk, NULL, "gpt2");
531 532
532 - 	clk = clk_register_mux(NULL, "gpt3_mux_clk", gpt_parents,
533 + 	clk = clk_register_mux(NULL, "gpt3_mclk", gpt_parents,
533 534 		ARRAY_SIZE(gpt_parents), 0, SPEAR1310_PERIP_CLK_CFG,
534 535 		SPEAR1310_GPT3_CLK_SHIFT, SPEAR1310_GPT_CLK_MASK, 0,
535 536 		&_lock);
536 - 	clk_register_clkdev(clk, "gpt3_mux_clk", NULL);
537 - 	clk = clk_register_gate(NULL, "gpt3_clk", "gpt3_mux_clk", 0,
537 + 	clk_register_clkdev(clk, "gpt3_mclk", NULL);
538 + 	clk = clk_register_gate(NULL, "gpt3_clk", "gpt3_mclk", 0,
538 539 		SPEAR1310_PERIP2_CLK_ENB, SPEAR1310_GPT3_CLK_ENB, 0,
539 540 		&_lock);
540 541 	clk_register_clkdev(clk, NULL, "gpt3");
541 542
542 543 	/* others */
543 - 	clk = clk_register_aux("uart_synth_clk", "uart_synth_gate_clk",
544 - 		"vco1div2_clk", 0, SPEAR1310_UART_CLK_SYNT, NULL,
545 - 		aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
546 - 	clk_register_clkdev(clk, "uart_synth_clk", NULL);
547 - 	clk_register_clkdev(clk1, "uart_synth_gate_clk", NULL);
544 + 	clk = clk_register_aux("uart_syn_clk", "uart_syn_gclk", "vco1div2_clk",
545 + 		0, SPEAR1310_UART_CLK_SYNT, NULL, aux_rtbl,
546 + 		ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
547 + 	clk_register_clkdev(clk, "uart_syn_clk", NULL);
548 + 	clk_register_clkdev(clk1, "uart_syn_gclk", NULL);
548 549
549 - 	clk = clk_register_mux(NULL, "uart0_mux_clk", uart0_parents,
550 + 	clk = clk_register_mux(NULL, "uart0_mclk", uart0_parents,
550 551 		ARRAY_SIZE(uart0_parents), 0, SPEAR1310_PERIP_CLK_CFG,
551 552 		SPEAR1310_UART_CLK_SHIFT, SPEAR1310_UART_CLK_MASK, 0,
552 553 		&_lock);
553 - 	clk_register_clkdev(clk, "uart0_mux_clk", NULL);
554 + 	clk_register_clkdev(clk, "uart0_mclk", NULL);
554 555
555 - 	clk = clk_register_gate(NULL, "uart0_clk", "uart0_mux_clk", 0,
556 + 	clk = clk_register_gate(NULL, "uart0_clk", "uart0_mclk", 0,
556 557 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_UART_CLK_ENB, 0,
557 558 		&_lock);
558 559 	clk_register_clkdev(clk, NULL, "e0000000.serial");
559 560
560 - 	clk = clk_register_aux("sdhci_synth_clk", "sdhci_synth_gate_clk",
561 + 	clk = clk_register_aux("sdhci_syn_clk", "sdhci_syn_gclk",
561 562 		"vco1div2_clk", 0, SPEAR1310_SDHCI_CLK_SYNT, NULL,
562 563 		aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
563 - 	clk_register_clkdev(clk, "sdhci_synth_clk", NULL);
564 - 	clk_register_clkdev(clk1, "sdhci_synth_gate_clk", NULL);
564 + 	clk_register_clkdev(clk, "sdhci_syn_clk", NULL);
565 + 	clk_register_clkdev(clk1, "sdhci_syn_gclk", NULL);
565 566
566 - 	clk = clk_register_gate(NULL, "sdhci_clk", "sdhci_synth_gate_clk", 0,
567 + 	clk = clk_register_gate(NULL, "sdhci_clk", "sdhci_syn_gclk", 0,
567 568 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_SDHCI_CLK_ENB, 0,
568 569 		&_lock);
569 570 	clk_register_clkdev(clk, NULL, "b3000000.sdhci");
570 571
571 - 	clk = clk_register_aux("cfxd_synth_clk", "cfxd_synth_gate_clk",
572 - 		"vco1div2_clk", 0, SPEAR1310_CFXD_CLK_SYNT, NULL,
573 - 		aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
574 - 	clk_register_clkdev(clk, "cfxd_synth_clk", NULL);
575 - 	clk_register_clkdev(clk1, "cfxd_synth_gate_clk", NULL);
572 + 	clk = clk_register_aux("cfxd_syn_clk", "cfxd_syn_gclk", "vco1div2_clk",
573 + 		0, SPEAR1310_CFXD_CLK_SYNT, NULL, aux_rtbl,
574 + 		ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
575 + 	clk_register_clkdev(clk, "cfxd_syn_clk", NULL);
576 + 	clk_register_clkdev(clk1, "cfxd_syn_gclk", NULL);
576 577
577 - 	clk = clk_register_gate(NULL, "cfxd_clk", "cfxd_synth_gate_clk", 0,
578 + 	clk = clk_register_gate(NULL, "cfxd_clk", "cfxd_syn_gclk", 0,
578 579 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_CFXD_CLK_ENB, 0,
579 580 		&_lock);
580 581 	clk_register_clkdev(clk, NULL, "b2800000.cf");
581 582 	clk_register_clkdev(clk, NULL, "arasan_xd");
582 583
583 - 	clk = clk_register_aux("c3_synth_clk", "c3_synth_gate_clk",
584 - 		"vco1div2_clk", 0, SPEAR1310_C3_CLK_SYNT, NULL,
585 - 		aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
586 - 	clk_register_clkdev(clk, "c3_synth_clk", NULL);
587 - 	clk_register_clkdev(clk1, "c3_synth_gate_clk", NULL);
584 + 	clk = clk_register_aux("c3_syn_clk", "c3_syn_gclk", "vco1div2_clk",
585 + 		0, SPEAR1310_C3_CLK_SYNT, NULL, aux_rtbl,
586 + 		ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
587 + 	clk_register_clkdev(clk, "c3_syn_clk", NULL);
588 + 	clk_register_clkdev(clk1, "c3_syn_gclk", NULL);
588 589
589 - 	clk = clk_register_mux(NULL, "c3_mux_clk", c3_parents,
590 + 	clk = clk_register_mux(NULL, "c3_mclk", c3_parents,
590 591 		ARRAY_SIZE(c3_parents), 0, SPEAR1310_PERIP_CLK_CFG,
591 592 		SPEAR1310_C3_CLK_SHIFT, SPEAR1310_C3_CLK_MASK, 0,
592 593 		&_lock);
593 - 	clk_register_clkdev(clk, "c3_mux_clk", NULL);
594 + 	clk_register_clkdev(clk, "c3_mclk", NULL);
594 595
595 - 	clk = clk_register_gate(NULL, "c3_clk", "c3_mux_clk", 0,
596 + 	clk = clk_register_gate(NULL, "c3_clk", "c3_mclk", 0,
596 597 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_C3_CLK_ENB, 0,
597 598 		&_lock);
598 599 	clk_register_clkdev(clk, NULL, "c3");
599 600
600 601 	/* gmac */
601 - 	clk = clk_register_mux(NULL, "gmac_phy_input_mux_clk",
602 - 		gmac_phy_input_parents,
602 + 	clk = clk_register_mux(NULL, "phy_input_mclk", gmac_phy_input_parents,
603 603 		ARRAY_SIZE(gmac_phy_input_parents), 0,
604 604 		SPEAR1310_GMAC_CLK_CFG,
605 605 		SPEAR1310_GMAC_PHY_INPUT_CLK_SHIFT,
606 606 		SPEAR1310_GMAC_PHY_INPUT_CLK_MASK, 0, &_lock);
607 - 	clk_register_clkdev(clk, "gmac_phy_input_mux_clk", NULL);
607 + 	clk_register_clkdev(clk, "phy_input_mclk", NULL);
608 608
609 - 	clk = clk_register_aux("gmac_phy_synth_clk", "gmac_phy_synth_gate_clk",
610 - 		"gmac_phy_input_mux_clk", 0, SPEAR1310_GMAC_CLK_SYNT,
611 - 		NULL, gmac_rtbl, ARRAY_SIZE(gmac_rtbl), &_lock, &clk1);
612 - 	clk_register_clkdev(clk, "gmac_phy_synth_clk", NULL);
613 - 	clk_register_clkdev(clk1, "gmac_phy_synth_gate_clk", NULL);
609 + 	clk = clk_register_aux("phy_syn_clk", "phy_syn_gclk", "phy_input_mclk",
610 + 		0, SPEAR1310_GMAC_CLK_SYNT, NULL, gmac_rtbl,
611 + 		ARRAY_SIZE(gmac_rtbl), &_lock, &clk1);
612 + 	clk_register_clkdev(clk, "phy_syn_clk", NULL);
613 + 	clk_register_clkdev(clk1, "phy_syn_gclk", NULL);
614 614
615 - 	clk = clk_register_mux(NULL, "gmac_phy_mux_clk", gmac_phy_parents,
615 + 	clk = clk_register_mux(NULL, "phy_mclk", gmac_phy_parents,
616 616 		ARRAY_SIZE(gmac_phy_parents), 0,
617 617 		SPEAR1310_PERIP_CLK_CFG, SPEAR1310_GMAC_PHY_CLK_SHIFT,
618 618 		SPEAR1310_GMAC_PHY_CLK_MASK, 0, &_lock);
619 619 	clk_register_clkdev(clk, NULL, "stmmacphy.0");
620 620
621 621 	/* clcd */
622 - 	clk = clk_register_mux(NULL, "clcd_synth_mux_clk", clcd_synth_parents,
622 + 	clk = clk_register_mux(NULL, "clcd_syn_mclk", clcd_synth_parents,
623 623 		ARRAY_SIZE(clcd_synth_parents), 0,
624 624 		SPEAR1310_CLCD_CLK_SYNT, SPEAR1310_CLCD_SYNT_CLK_SHIFT,
625 625 		SPEAR1310_CLCD_SYNT_CLK_MASK, 0, &_lock);
626 - 	clk_register_clkdev(clk, "clcd_synth_mux_clk", NULL);
626 + 	clk_register_clkdev(clk, "clcd_syn_mclk", NULL);
627 627
628 - 	clk = clk_register_frac("clcd_synth_clk", "clcd_synth_mux_clk", 0,
628 + 	clk = clk_register_frac("clcd_syn_clk", "clcd_syn_mclk", 0,
629 629 		SPEAR1310_CLCD_CLK_SYNT, clcd_rtbl,
630 630 		ARRAY_SIZE(clcd_rtbl), &_lock);
631 - 	clk_register_clkdev(clk, "clcd_synth_clk", NULL);
631 + 	clk_register_clkdev(clk, "clcd_syn_clk", NULL);
632 632
633 - 	clk = clk_register_mux(NULL, "clcd_pixel_mux_clk", clcd_pixel_parents,
633 + 	clk = clk_register_mux(NULL, "clcd_pixel_mclk", clcd_pixel_parents,
634 634 		ARRAY_SIZE(clcd_pixel_parents), 0,
635 635 		SPEAR1310_PERIP_CLK_CFG, SPEAR1310_CLCD_CLK_SHIFT,
636 636 		SPEAR1310_CLCD_CLK_MASK, 0, &_lock);
637 637 	clk_register_clkdev(clk, "clcd_pixel_clk", NULL);
638 638
639 - 	clk = clk_register_gate(NULL, "clcd_clk", "clcd_pixel_mux_clk", 0,
639 + 	clk = clk_register_gate(NULL, "clcd_clk", "clcd_pixel_mclk", 0,
640 640 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_CLCD_CLK_ENB, 0,
641 641 		&_lock);
642 642 	clk_register_clkdev(clk, "clcd_clk", NULL);
643 643
644 644 	/* i2s */
645 - 	clk = clk_register_mux(NULL, "i2s_src_mux_clk", i2s_src_parents,
645 + 	clk = clk_register_mux(NULL, "i2s_src_mclk", i2s_src_parents,
646 646 		ARRAY_SIZE(i2s_src_parents), 0, SPEAR1310_I2S_CLK_CFG,
647 647 		SPEAR1310_I2S_SRC_CLK_SHIFT, SPEAR1310_I2S_SRC_CLK_MASK,
648 648 		0, &_lock);
649 649 	clk_register_clkdev(clk, "i2s_src_clk", NULL);
650 650
651 - 	clk = clk_register_aux("i2s_prs1_clk", NULL, "i2s_src_mux_clk", 0,
651 + 	clk = clk_register_aux("i2s_prs1_clk", NULL, "i2s_src_mclk", 0,
652 652 		SPEAR1310_I2S_CLK_CFG, &i2s_prs1_masks, i2s_prs1_rtbl,
653 653 		ARRAY_SIZE(i2s_prs1_rtbl), &_lock, NULL);
654 654 	clk_register_clkdev(clk, "i2s_prs1_clk", NULL);
655 655
656 - 	clk = clk_register_mux(NULL, "i2s_ref_mux_clk", i2s_ref_parents,
656 + 	clk = clk_register_mux(NULL, "i2s_ref_mclk", i2s_ref_parents,
657 657 		ARRAY_SIZE(i2s_ref_parents), 0, SPEAR1310_I2S_CLK_CFG,
658 658 		SPEAR1310_I2S_REF_SHIFT, SPEAR1310_I2S_REF_SEL_MASK, 0,
659 659 		&_lock);
660 660 	clk_register_clkdev(clk, "i2s_ref_clk", NULL);
661 661
662 - 	clk = clk_register_gate(NULL, "i2s_ref_pad_clk", "i2s_ref_mux_clk", 0,
662 + 	clk = clk_register_gate(NULL, "i2s_ref_pad_clk", "i2s_ref_mclk", 0,
663 663 		SPEAR1310_PERIP2_CLK_ENB, SPEAR1310_I2S_REF_PAD_CLK_ENB,
664 664 		0, &_lock);
665 665 	clk_register_clkdev(clk, "i2s_ref_pad_clk", NULL);
666 666
667 - 	clk = clk_register_aux("i2s_sclk_clk", "i2s_sclk_gate_clk",
667 + 	clk = clk_register_aux("i2s_sclk_clk", "i2s_sclk_gclk",
668 668 		"i2s_ref_pad_clk", 0, SPEAR1310_I2S_CLK_CFG,
669 669 		&i2s_sclk_masks, i2s_sclk_rtbl,
670 670 		ARRAY_SIZE(i2s_sclk_rtbl), &_lock, &clk1);
671 671 	clk_register_clkdev(clk, "i2s_sclk_clk", NULL);
672 - 	clk_register_clkdev(clk1, "i2s_sclk_gate_clk", NULL);
672 + 	clk_register_clkdev(clk1, "i2s_sclk_gclk", NULL);
673 673
674 674 	/* clock derived from ahb clk */
675 675 	clk = clk_register_gate(NULL, "i2c0_clk", "ahb_clk", 0,
··· 745 747 		&_lock);
746 748 	clk_register_clkdev(clk, "sysram1_clk", NULL);
747 749
748 - 	clk = clk_register_aux("adc_synth_clk", "adc_synth_gate_clk", "ahb_clk",
750 + 	clk = clk_register_aux("adc_syn_clk", "adc_syn_gclk", "ahb_clk",
749 751 		0, SPEAR1310_ADC_CLK_SYNT, NULL, adc_rtbl,
750 752 		ARRAY_SIZE(adc_rtbl), &_lock, &clk1);
751 - 	clk_register_clkdev(clk, "adc_synth_clk", NULL);
752 - 	clk_register_clkdev(clk1, "adc_synth_gate_clk", NULL);
753 + 	clk_register_clkdev(clk, "adc_syn_clk", NULL);
754 + 	clk_register_clkdev(clk1, "adc_syn_gclk", NULL);
753 755
754 - 	clk = clk_register_gate(NULL, "adc_clk", "adc_synth_gate_clk", 0,
756 + 	clk = clk_register_gate(NULL, "adc_clk", "adc_syn_gclk", 0,
755 757 		SPEAR1310_PERIP1_CLK_ENB, SPEAR1310_ADC_CLK_ENB, 0,
756 758 		&_lock);
757 759 	clk_register_clkdev(clk, NULL, "adc_clk");
··· 788 790 	clk_register_clkdev(clk, NULL, "e0300000.kbd");
789 791
790 792 	/* RAS clks */
791 - 	clk = clk_register_mux(NULL, "gen_synth0_1_mux_clk",
792 - 		gen_synth0_1_parents, ARRAY_SIZE(gen_synth0_1_parents),
793 - 		0, SPEAR1310_PLL_CFG, SPEAR1310_RAS_SYNT0_1_CLK_SHIFT,
793 + 	clk = clk_register_mux(NULL, "gen_syn0_1_mclk", gen_synth0_1_parents,
794 + 		ARRAY_SIZE(gen_synth0_1_parents), 0, SPEAR1310_PLL_CFG,
795 + 		SPEAR1310_RAS_SYNT0_1_CLK_SHIFT,
794 796 		SPEAR1310_RAS_SYNT_CLK_MASK, 0, &_lock);
795 - 	clk_register_clkdev(clk, "gen_synth0_1_clk", NULL);
797 + 	clk_register_clkdev(clk, "gen_syn0_1_clk", NULL);
796 798
797 - 	clk = clk_register_mux(NULL, "gen_synth2_3_mux_clk",
798 - 		gen_synth2_3_parents, ARRAY_SIZE(gen_synth2_3_parents),
799 - 		0, SPEAR1310_PLL_CFG, SPEAR1310_RAS_SYNT2_3_CLK_SHIFT,
799 + 	clk = clk_register_mux(NULL, "gen_syn2_3_mclk", gen_synth2_3_parents,
800 + 		ARRAY_SIZE(gen_synth2_3_parents), 0, SPEAR1310_PLL_CFG,
801 + 		SPEAR1310_RAS_SYNT2_3_CLK_SHIFT,
800 802 		SPEAR1310_RAS_SYNT_CLK_MASK, 0, &_lock);
801 - 	clk_register_clkdev(clk, "gen_synth2_3_clk", NULL);
803 + 	clk_register_clkdev(clk, "gen_syn2_3_clk", NULL);
802 804
803 - 	clk = clk_register_frac("gen_synth0_clk", "gen_synth0_1_clk", 0,
805 + 	clk = clk_register_frac("gen_syn0_clk", "gen_syn0_1_clk", 0,
804 806 		SPEAR1310_RAS_CLK_SYNT0, gen_rtbl, ARRAY_SIZE(gen_rtbl),
805 807 		&_lock);
806 - 	clk_register_clkdev(clk, "gen_synth0_clk", NULL);
808 + 	clk_register_clkdev(clk, "gen_syn0_clk", NULL);
807 809
808 - 	clk = clk_register_frac("gen_synth1_clk", "gen_synth0_1_clk", 0,
810 + 	clk = clk_register_frac("gen_syn1_clk", "gen_syn0_1_clk", 0,
809 811 		SPEAR1310_RAS_CLK_SYNT1, gen_rtbl, ARRAY_SIZE(gen_rtbl),
810 812 		&_lock);
811 - 	clk_register_clkdev(clk, "gen_synth1_clk", NULL);
813 + 	clk_register_clkdev(clk, "gen_syn1_clk", NULL);
812 814
813 - 	clk = clk_register_frac("gen_synth2_clk", "gen_synth2_3_clk", 0,
815 + 	clk = clk_register_frac("gen_syn2_clk", "gen_syn2_3_clk", 0,
814 816 		SPEAR1310_RAS_CLK_SYNT2, gen_rtbl, ARRAY_SIZE(gen_rtbl),
815 817 		&_lock);
816 - 	clk_register_clkdev(clk, "gen_synth2_clk", NULL);
818 + 	clk_register_clkdev(clk, "gen_syn2_clk", NULL);
817 819
818 - 	clk = clk_register_frac("gen_synth3_clk", "gen_synth2_3_clk", 0,
820 + 	clk = clk_register_frac("gen_syn3_clk", "gen_syn2_3_clk", 0,
819 821 		SPEAR1310_RAS_CLK_SYNT3, gen_rtbl, ARRAY_SIZE(gen_rtbl),
820 822 		&_lock);
821 - 	clk_register_clkdev(clk, "gen_synth3_clk", NULL);
823 + 	clk_register_clkdev(clk, "gen_syn3_clk", NULL);
822 824
823 825 	clk = clk_register_gate(NULL, "ras_osc_24m_clk", "osc_24m_clk", 0,
824 826 		SPEAR1310_RAS_CLK_ENB, SPEAR1310_OSC_24M_CLK_ENB, 0,
··· 845 847 		&_lock);
846 848 	clk_register_clkdev(clk, "ras_pll3_clk", NULL);
847 849
848 - 	clk = clk_register_gate(NULL, "ras_tx125_clk", "gmii_125m_pad_clk", 0,
850 + 	clk = clk_register_gate(NULL, "ras_tx125_clk", "gmii_pad_clk", 0,
849 851 		SPEAR1310_RAS_CLK_ENB, SPEAR1310_C125M_PAD_CLK_ENB, 0,
850 852 		&_lock);
851 853 	clk_register_clkdev(clk, "ras_tx125_clk", NULL);
··· 910 912 		&_lock);
911 913 	clk_register_clkdev(clk, NULL, "5c700000.eth");
912 914
913 - 	clk = clk_register_mux(NULL, "smii_rgmii_phy_mux_clk",
915 + 	clk = clk_register_mux(NULL, "smii_rgmii_phy_mclk",
914 916 		smii_rgmii_phy_parents,
915 917 		ARRAY_SIZE(smii_rgmii_phy_parents), 0,
916 918 		SPEAR1310_RAS_CTRL_REG1,
··· 920 922 	clk_register_clkdev(clk, NULL, "stmmacphy.2");
921 923 	clk_register_clkdev(clk, NULL, "stmmacphy.4");
922 924
923 - 	clk = clk_register_mux(NULL, "rmii_phy_mux_clk", rmii_phy_parents,
925 + 	clk = clk_register_mux(NULL, "rmii_phy_mclk", rmii_phy_parents,
924 926 		ARRAY_SIZE(rmii_phy_parents), 0,
925 927 		SPEAR1310_RAS_CTRL_REG1, SPEAR1310_RMII_PHY_CLK_SHIFT,
926 928 		SPEAR1310_PHY_CLK_MASK, 0, &_lock);
927 929 	clk_register_clkdev(clk, NULL, "stmmacphy.3");
928 930
929 - 	clk = clk_register_mux(NULL, "uart1_mux_clk", uart_parents,
931 + 	clk = clk_register_mux(NULL, "uart1_mclk", uart_parents,
930 932 		ARRAY_SIZE(uart_parents), 0, SPEAR1310_RAS_CTRL_REG0,
931 933 		SPEAR1310_UART1_CLK_SHIFT, SPEAR1310_RAS_UART_CLK_MASK,
932 934 		0, &_lock);
933 - 	clk_register_clkdev(clk, "uart1_mux_clk", NULL);
935 + 	clk_register_clkdev(clk, "uart1_mclk", NULL);
934 936
935 - 	clk = clk_register_gate(NULL, "uart1_clk", "uart1_mux_clk", 0,
937 + 	clk = clk_register_gate(NULL, "uart1_clk", "uart1_mclk", 0,
936 938 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_UART1_CLK_ENB, 0,
937 939 		&_lock);
938 940 	clk_register_clkdev(clk, NULL, "5c800000.serial");
939 941
940 - 	clk = clk_register_mux(NULL, "uart2_mux_clk", uart_parents,
942 + 	clk = clk_register_mux(NULL, "uart2_mclk", uart_parents,
941 943 		ARRAY_SIZE(uart_parents), 0, SPEAR1310_RAS_CTRL_REG0,
942 944 		SPEAR1310_UART2_CLK_SHIFT, SPEAR1310_RAS_UART_CLK_MASK,
943 945 		0, &_lock);
944 - 	clk_register_clkdev(clk, "uart2_mux_clk", NULL);
946 + 	clk_register_clkdev(clk, "uart2_mclk", NULL);
945 947
946 - 	clk = clk_register_gate(NULL, "uart2_clk", "uart2_mux_clk", 0,
948 + 	clk = clk_register_gate(NULL, "uart2_clk", "uart2_mclk", 0,
947 949 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_UART2_CLK_ENB, 0,
948 950 		&_lock);
949 951 	clk_register_clkdev(clk, NULL, "5c900000.serial");
950 952
951 - 	clk = clk_register_mux(NULL, "uart3_mux_clk", uart_parents,
953 + 	clk = clk_register_mux(NULL, "uart3_mclk", uart_parents,
952 954 		ARRAY_SIZE(uart_parents), 0, SPEAR1310_RAS_CTRL_REG0,
953 955 		SPEAR1310_UART3_CLK_SHIFT, SPEAR1310_RAS_UART_CLK_MASK,
954 956 		0, &_lock);
955 - 	clk_register_clkdev(clk, "uart3_mux_clk", NULL);
957 + 	clk_register_clkdev(clk, "uart3_mclk", NULL);
956 958
957 - 	clk = clk_register_gate(NULL, "uart3_clk", "uart3_mux_clk", 0,
959 + 	clk = clk_register_gate(NULL, "uart3_clk", "uart3_mclk", 0,
958 960 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_UART3_CLK_ENB, 0,
959 961 		&_lock);
960 962 	clk_register_clkdev(clk, NULL, "5ca00000.serial");
961 963
962 - 	clk = clk_register_mux(NULL, "uart4_mux_clk", uart_parents,
964 + 	clk = clk_register_mux(NULL, "uart4_mclk", uart_parents,
963 965 		ARRAY_SIZE(uart_parents), 0, SPEAR1310_RAS_CTRL_REG0,
964 966 		SPEAR1310_UART4_CLK_SHIFT, SPEAR1310_RAS_UART_CLK_MASK,
965 967 		0, &_lock);
966 - 	clk_register_clkdev(clk, "uart4_mux_clk", NULL);
968 + 	clk_register_clkdev(clk, "uart4_mclk", NULL);
967 969
968 - 	clk = clk_register_gate(NULL, "uart4_clk", "uart4_mux_clk", 0,
970 + 	clk = clk_register_gate(NULL, "uart4_clk", "uart4_mclk", 0,
969 971 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_UART4_CLK_ENB, 0,
970 972 		&_lock);
971 973 	clk_register_clkdev(clk, NULL, "5cb00000.serial");
972 974
973 - 	clk = clk_register_mux(NULL, "uart5_mux_clk", uart_parents,
975 + 	clk = clk_register_mux(NULL, "uart5_mclk", uart_parents,
974 976 		ARRAY_SIZE(uart_parents), 0, SPEAR1310_RAS_CTRL_REG0,
975 977 		SPEAR1310_UART5_CLK_SHIFT, SPEAR1310_RAS_UART_CLK_MASK,
976 978 		0, &_lock);
977 - 	clk_register_clkdev(clk, "uart5_mux_clk", NULL);
979 + 	clk_register_clkdev(clk, "uart5_mclk", NULL);
978 980
979 - 	clk = clk_register_gate(NULL, "uart5_clk", "uart5_mux_clk", 0,
981 + 	clk = clk_register_gate(NULL, "uart5_clk", "uart5_mclk", 0,
980 982 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_UART5_CLK_ENB, 0,
981 983 		&_lock);
982 984 	clk_register_clkdev(clk, NULL, "5cc00000.serial");
983 985
984 - 	clk = clk_register_mux(NULL, "i2c1_mux_clk", i2c_parents,
986 + 	clk = clk_register_mux(NULL, "i2c1_mclk", i2c_parents,
985 987 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
986 988 		SPEAR1310_I2C1_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
987 989 		&_lock);
988 - 	clk_register_clkdev(clk, "i2c1_mux_clk", NULL);
990 + 	clk_register_clkdev(clk, "i2c1_mclk", NULL);
989 991
990 - 	clk = clk_register_gate(NULL, "i2c1_clk", "i2c1_mux_clk", 0,
992 + 	clk = clk_register_gate(NULL, "i2c1_clk", "i2c1_mclk", 0,
991 993 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C1_CLK_ENB, 0,
992 994 		&_lock);
993 995 	clk_register_clkdev(clk, NULL, "5cd00000.i2c");
994 996
995 - 	clk = clk_register_mux(NULL, "i2c2_mux_clk", i2c_parents,
997 + 	clk = clk_register_mux(NULL, "i2c2_mclk", i2c_parents,
996 998 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
997 999 		SPEAR1310_I2C2_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
998 1000 		&_lock);
999 - 	clk_register_clkdev(clk, "i2c2_mux_clk", NULL);
1001 + 	clk_register_clkdev(clk, "i2c2_mclk", NULL);
1000 1002
1001 - 	clk = clk_register_gate(NULL, "i2c2_clk", "i2c2_mux_clk", 0,
1003 + 	clk = clk_register_gate(NULL, "i2c2_clk", "i2c2_mclk", 0,
1002 1004 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C2_CLK_ENB, 0,
1003 1005 		&_lock);
1004 1006 	clk_register_clkdev(clk, NULL, "5ce00000.i2c");
1005 1007
1006 - 	clk = clk_register_mux(NULL, "i2c3_mux_clk", i2c_parents,
1008 + 	clk = clk_register_mux(NULL, "i2c3_mclk", i2c_parents,
1007 1009 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1008 1010 		SPEAR1310_I2C3_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
1009 1011 		&_lock);
1010 - 	clk_register_clkdev(clk, "i2c3_mux_clk", NULL);
1012 + 	clk_register_clkdev(clk, "i2c3_mclk", NULL);
1011 1013
1012 - 	clk = clk_register_gate(NULL, "i2c3_clk", "i2c3_mux_clk", 0,
1014 + 	clk = clk_register_gate(NULL, "i2c3_clk", "i2c3_mclk", 0,
1013 1015 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C3_CLK_ENB, 0,
1014 1016 		&_lock);
1015 1017 	clk_register_clkdev(clk, NULL, "5cf00000.i2c");
1016 1018
1017 - 	clk = clk_register_mux(NULL, "i2c4_mux_clk", i2c_parents,
1019 + 	clk = clk_register_mux(NULL, "i2c4_mclk", i2c_parents,
1018 1020 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1019 1021 		SPEAR1310_I2C4_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
1020 1022 		&_lock);
1021 - 	clk_register_clkdev(clk, "i2c4_mux_clk", NULL);
1023 + 	clk_register_clkdev(clk, "i2c4_mclk", NULL);
1022 1024
1023 - 	clk = clk_register_gate(NULL, "i2c4_clk", "i2c4_mux_clk", 0,
1025 + 	clk = clk_register_gate(NULL, "i2c4_clk", "i2c4_mclk", 0,
1024 1026 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C4_CLK_ENB, 0,
1025 1027 		&_lock);
1026 1028 	clk_register_clkdev(clk, NULL, "5d000000.i2c");
1027 1029
1028 - 	clk = clk_register_mux(NULL, "i2c5_mux_clk", i2c_parents,
1030 + 	clk = clk_register_mux(NULL, "i2c5_mclk", i2c_parents,
1029 1031 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1030 1032 		SPEAR1310_I2C5_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
1031 1033 		&_lock);
1032 - 	clk_register_clkdev(clk, "i2c5_mux_clk", NULL);
1034 + 	clk_register_clkdev(clk, "i2c5_mclk", NULL);
1033 1035
1034 - 	clk = clk_register_gate(NULL, "i2c5_clk", "i2c5_mux_clk", 0,
1036 + 	clk = clk_register_gate(NULL, "i2c5_clk", "i2c5_mclk", 0,
1035 1037 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C5_CLK_ENB, 0,
1036 1038 		&_lock);
1037 1039 	clk_register_clkdev(clk, NULL, "5d100000.i2c");
1038 1040
1039 - 	clk = clk_register_mux(NULL, "i2c6_mux_clk", i2c_parents,
1041 + 	clk = clk_register_mux(NULL, "i2c6_mclk", i2c_parents,
1040 1042 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1041 1043 		SPEAR1310_I2C6_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
1042 1044 		&_lock);
1043 - 	clk_register_clkdev(clk, "i2c6_mux_clk", NULL);
1045 + 	clk_register_clkdev(clk, "i2c6_mclk", NULL);
1044 1046
1045 - 	clk = clk_register_gate(NULL, "i2c6_clk", "i2c6_mux_clk", 0,
1047 + 	clk = clk_register_gate(NULL, "i2c6_clk", "i2c6_mclk", 0,
1046 1048 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C6_CLK_ENB, 0,
1047 1049 		&_lock);
1048 1050 	clk_register_clkdev(clk, NULL, "5d200000.i2c");
1049 1051
1050 - 	clk = clk_register_mux(NULL, "i2c7_mux_clk", i2c_parents,
1052 + 	clk = clk_register_mux(NULL, "i2c7_mclk", i2c_parents,
1051 1053 		ARRAY_SIZE(i2c_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1052 1054 		SPEAR1310_I2C7_CLK_SHIFT, SPEAR1310_I2C_CLK_MASK, 0,
1053 1055 		&_lock);
1054 - 	clk_register_clkdev(clk, "i2c7_mux_clk", NULL);
1056 + 	clk_register_clkdev(clk, "i2c7_mclk", NULL);
1055 1057
1056 - 	clk = clk_register_gate(NULL, "i2c7_clk", "i2c7_mux_clk", 0,
1058 + 	clk = clk_register_gate(NULL, "i2c7_clk", "i2c7_mclk", 0,
1057 1059 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_I2C7_CLK_ENB, 0,
1058 1060 		&_lock);
1059 1061 	clk_register_clkdev(clk, NULL, "5d300000.i2c");
1060 1062
1061 - 	clk = clk_register_mux(NULL, "ssp1_mux_clk", ssp1_parents,
1063 + 	clk = clk_register_mux(NULL, "ssp1_mclk", ssp1_parents,
1062 1064 		ARRAY_SIZE(ssp1_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1063 1065 		SPEAR1310_SSP1_CLK_SHIFT, SPEAR1310_SSP1_CLK_MASK, 0,
1064 1066 		&_lock);
1065 - 	clk_register_clkdev(clk, "ssp1_mux_clk", NULL);
1067 + 	clk_register_clkdev(clk, "ssp1_mclk", NULL);
1066 1068
1067 - 	clk = clk_register_gate(NULL, "ssp1_clk", "ssp1_mux_clk", 0,
1069 + 	clk = clk_register_gate(NULL, "ssp1_clk", "ssp1_mclk", 0,
1068 1070 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_SSP1_CLK_ENB, 0,
1069 1071 		&_lock);
1070 1072 	clk_register_clkdev(clk, NULL, "5d400000.spi");
1071 1073
1072 - 	clk = clk_register_mux(NULL, "pci_mux_clk", pci_parents,
1074 + 	clk = clk_register_mux(NULL, "pci_mclk", pci_parents,
1073 1075 		ARRAY_SIZE(pci_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1074 1076 		SPEAR1310_PCI_CLK_SHIFT, SPEAR1310_PCI_CLK_MASK, 0,
1075 1077 		&_lock);
1076 - 	clk_register_clkdev(clk, "pci_mux_clk", NULL);
1078 + 	clk_register_clkdev(clk, "pci_mclk", NULL);
1077 1079
1078 - 	clk = clk_register_gate(NULL, "pci_clk", "pci_mux_clk", 0,
1080 + 	clk = clk_register_gate(NULL, "pci_clk", "pci_mclk", 0,
1079 1081 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_PCI_CLK_ENB, 0,
1080 1082 		&_lock);
1081 1083 	clk_register_clkdev(clk, NULL, "pci");
1082 1084
1083 - 	clk = clk_register_mux(NULL, "tdm1_mux_clk", tdm_parents,
1085 + 	clk = clk_register_mux(NULL, "tdm1_mclk", tdm_parents,
1084 1086 		ARRAY_SIZE(tdm_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1085 1087 		SPEAR1310_TDM1_CLK_SHIFT, SPEAR1310_TDM_CLK_MASK, 0,
1086 1088 		&_lock);
1087 - 	clk_register_clkdev(clk, "tdm1_mux_clk", NULL);
1089 + 	clk_register_clkdev(clk, "tdm1_mclk", NULL);
1088 1090
1089 - 	clk = clk_register_gate(NULL, "tdm1_clk", "tdm1_mux_clk", 0,
1091 + 	clk = clk_register_gate(NULL, "tdm1_clk", "tdm1_mclk", 0,
1090 1092 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_TDM1_CLK_ENB, 0,
1091 1093 		&_lock);
1092 1094 	clk_register_clkdev(clk, NULL, "tdm_hdlc.0");
1093 1095
1094 - 	clk = clk_register_mux(NULL, "tdm2_mux_clk", tdm_parents,
1096 + 	clk = clk_register_mux(NULL, "tdm2_mclk", tdm_parents,
1095 1097 		ARRAY_SIZE(tdm_parents), 0, SPEAR1310_RAS_CTRL_REG0,
1096 1098 		SPEAR1310_TDM2_CLK_SHIFT, SPEAR1310_TDM_CLK_MASK, 0,
1097 1099 		&_lock);
1098 - 	clk_register_clkdev(clk, "tdm2_mux_clk", NULL);
1100 + 	clk_register_clkdev(clk, "tdm2_mclk", NULL);
1099 1101
1100 - 	clk = clk_register_gate(NULL, "tdm2_clk", "tdm2_mux_clk", 0,
1102 + 	clk = clk_register_gate(NULL, "tdm2_clk", "tdm2_mclk", 0,
1101 1103 		SPEAR1310_RAS_SW_CLK_CTRL, SPEAR1310_TDM2_CLK_ENB, 0,
1102 1104 		&_lock);
1103 1105 	clk_register_clkdev(clk, NULL, "tdm_hdlc.1");
+138 -141
drivers/clk/spear/spear1340_clock.c
··· 369 369
370 370 /* clock parents */
371 371 static const char *vco_parents[] = { "osc_24m_clk", "osc_25m_clk", };
372 - static const char *sys_parents[] = { "none", "pll1_clk", "none", "none",
373 - 	"sys_synth_clk", "none", "pll2_clk", "pll3_clk", };
374 - static const char *ahb_parents[] = { "cpu_div3_clk", "amba_synth_clk", };
372 + static const char *sys_parents[] = { "pll1_clk", "pll1_clk", "pll1_clk",
373 + 	"pll1_clk", "sys_synth_clk", "sys_synth_clk", "pll2_clk", "pll3_clk", };
374 + static const char *ahb_parents[] = { "cpu_div3_clk", "amba_syn_clk", };
375 375 static const char *gpt_parents[] = { "osc_24m_clk", "apb_clk", };
376 376 static const char *uart0_parents[] = { "pll5_clk", "osc_24m_clk",
377 - 	"uart0_synth_gate_clk", };
377 + 	"uart0_syn_gclk", };
378 378 static const char *uart1_parents[] = { "pll5_clk", "osc_24m_clk",
379 - 	"uart1_synth_gate_clk", };
380 - static const char *c3_parents[] = { "pll5_clk", "c3_synth_gate_clk", };
381 - static const char *gmac_phy_input_parents[] = { "gmii_125m_pad_clk", "pll2_clk",
379 + 	"uart1_syn_gclk", };
380 + static const char *c3_parents[] = { "pll5_clk", "c3_syn_gclk", };
381 + static const char *gmac_phy_input_parents[] = { "gmii_pad_clk", "pll2_clk",
382 382 	"osc_25m_clk", };
383 - static const char *gmac_phy_parents[] = { "gmac_phy_input_mux_clk",
384 - 	"gmac_phy_synth_gate_clk", };
383 + static const char *gmac_phy_parents[] = { "phy_input_mclk", "phy_syn_gclk", };
385 384 static const char *clcd_synth_parents[] = { "vco1div4_clk", "pll2_clk", };
386 - static const char *clcd_pixel_parents[] = { "pll5_clk", "clcd_synth_clk", };
385 + static const char *clcd_pixel_parents[] = { "pll5_clk", "clcd_syn_clk", };
387 386 static const char *i2s_src_parents[] = { "vco1div2_clk", "pll2_clk", "pll3_clk",
388 387 	"i2s_src_pad_clk", };
389 - static const char *i2s_ref_parents[] = { "i2s_src_mux_clk", "i2s_prs1_clk", };
390 - static const char *spdif_out_parents[] = { "i2s_src_pad_clk", "gen_synth2_clk",
391 - 	};
392 - static const char *spdif_in_parents[] = { "pll2_clk", "gen_synth3_clk", };
388 + static const char *i2s_ref_parents[] = { "i2s_src_mclk", "i2s_prs1_clk", };
389 + static const char *spdif_out_parents[] = { "i2s_src_pad_clk", "gen_syn2_clk", };
390 + static const char *spdif_in_parents[] = { "pll2_clk", "gen_syn3_clk", };
393 391
394 392 static const char *gen_synth0_1_parents[] = { "vco1div4_clk", "vco3div2_clk",
395 393 	"pll3_clk", };
··· 413 415 		25000000);
414 416 	clk_register_clkdev(clk, "osc_25m_clk", NULL);
415 417
416 - 	clk = clk_register_fixed_rate(NULL, "gmii_125m_pad_clk", NULL,
417 - 		CLK_IS_ROOT, 125000000);
418 - 	clk_register_clkdev(clk, "gmii_125m_pad_clk", NULL);
418 + 	clk = clk_register_fixed_rate(NULL, "gmii_pad_clk", NULL, CLK_IS_ROOT,
419 + 		125000000);
420 + 	clk_register_clkdev(clk, "gmii_pad_clk", NULL);
419 421
420 422 	clk = clk_register_fixed_rate(NULL, "i2s_src_pad_clk", NULL,
421 423 		CLK_IS_ROOT, 12288000);
··· 429 431
430 432 	/* clock derived from 24 or 25 MHz osc clk */
431 433 	/* vco-pll */
432 - 	clk = clk_register_mux(NULL, "vco1_mux_clk", vco_parents,
434 + 	clk = clk_register_mux(NULL, "vco1_mclk", vco_parents,
433 435 		ARRAY_SIZE(vco_parents), 0, SPEAR1340_PLL_CFG,
434 436 		SPEAR1340_PLL1_CLK_SHIFT, SPEAR1340_PLL_CLK_MASK, 0,
435 437 		&_lock);
436 - 	clk_register_clkdev(clk, "vco1_mux_clk", NULL);
437 - 	clk = clk_register_vco_pll("vco1_clk", "pll1_clk", NULL, "vco1_mux_clk",
438 - 		0, SPEAR1340_PLL1_CTR, SPEAR1340_PLL1_FRQ, pll_rtbl,
438 + 	clk_register_clkdev(clk, "vco1_mclk", NULL);
439 + 	clk = clk_register_vco_pll("vco1_clk", "pll1_clk", NULL, "vco1_mclk", 0,
440 + 		SPEAR1340_PLL1_CTR, SPEAR1340_PLL1_FRQ, pll_rtbl,
439 441 		ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
440 442 	clk_register_clkdev(clk, "vco1_clk", NULL);
441 443 	clk_register_clkdev(clk1, "pll1_clk", NULL);
442 444
443 - 	clk = clk_register_mux(NULL, "vco2_mux_clk", vco_parents,
445 + 	clk = clk_register_mux(NULL, "vco2_mclk", vco_parents,
444 446 		ARRAY_SIZE(vco_parents), 0, SPEAR1340_PLL_CFG,
445 447 		SPEAR1340_PLL2_CLK_SHIFT, SPEAR1340_PLL_CLK_MASK, 0,
446 448 		&_lock);
447 - 	clk_register_clkdev(clk, "vco2_mux_clk", NULL);
448 - 	clk = clk_register_vco_pll("vco2_clk", "pll2_clk", NULL, "vco2_mux_clk",
449 - 		0, SPEAR1340_PLL2_CTR, SPEAR1340_PLL2_FRQ, pll_rtbl,
449 + 	clk_register_clkdev(clk, "vco2_mclk", NULL);
450 + 	clk = clk_register_vco_pll("vco2_clk", "pll2_clk", NULL, "vco2_mclk", 0,
451 + 		SPEAR1340_PLL2_CTR, SPEAR1340_PLL2_FRQ, pll_rtbl,
450 452 		ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
451 453 	clk_register_clkdev(clk, "vco2_clk", NULL);
452 454 	clk_register_clkdev(clk1, "pll2_clk", NULL);
453 455
454 - 	clk = clk_register_mux(NULL, "vco3_mux_clk", vco_parents,
456 + 	clk = clk_register_mux(NULL, "vco3_mclk", vco_parents,
455 457 		ARRAY_SIZE(vco_parents), 0, SPEAR1340_PLL_CFG,
456 458 		SPEAR1340_PLL3_CLK_SHIFT, SPEAR1340_PLL_CLK_MASK, 0,
457 459 		&_lock);
458 - 	clk_register_clkdev(clk, "vco3_mux_clk", NULL);
459 - 	clk = clk_register_vco_pll("vco3_clk", "pll3_clk", NULL, "vco3_mux_clk",
460 - 		0, SPEAR1340_PLL3_CTR, SPEAR1340_PLL3_FRQ, pll_rtbl,
460 + 	clk_register_clkdev(clk, "vco3_mclk", NULL);
461 + 	clk = clk_register_vco_pll("vco3_clk", "pll3_clk", NULL, "vco3_mclk", 0,
462 + 		SPEAR1340_PLL3_CTR, SPEAR1340_PLL3_FRQ, pll_rtbl,
461 463 		ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
462 464 	clk_register_clkdev(clk, "vco3_clk", NULL);
463 465 	clk_register_clkdev(clk1, "pll3_clk", NULL);
··· 496 498 	/* peripherals */
497 499 	clk_register_fixed_factor(NULL, "thermal_clk", "osc_24m_clk", 0, 1,
498 500 		128);
499 - 	clk = clk_register_gate(NULL, "thermal_gate_clk", "thermal_clk", 0,
501 + 	clk = clk_register_gate(NULL, "thermal_gclk", "thermal_clk", 0,
500 502 		SPEAR1340_PERIP2_CLK_ENB, SPEAR1340_THSENS_CLK_ENB, 0,
501 503 		&_lock);
502 504 	clk_register_clkdev(clk, NULL, "spear_thermal");
··· 507 509 	clk_register_clkdev(clk, "ddr_clk", NULL);
508 510
509 511 	/* clock derived from pll1 clk */
510 - 	clk = clk_register_frac("sys_synth_clk", "vco1div2_clk", 0,
512 + 	clk = clk_register_frac("sys_syn_clk", "vco1div2_clk", 0,
511 513 		SPEAR1340_SYS_CLK_SYNT, sys_synth_rtbl,
512 514 		ARRAY_SIZE(sys_synth_rtbl), &_lock);
513 - 	clk_register_clkdev(clk, "sys_synth_clk", NULL);
515 + 	clk_register_clkdev(clk, "sys_syn_clk", NULL);
514 516
515 - 	clk = clk_register_frac("amba_synth_clk", "vco1div2_clk", 0,
517 + 	clk = clk_register_frac("amba_syn_clk", "vco1div2_clk", 0,
516 518 		SPEAR1340_AMBA_CLK_SYNT, amba_synth_rtbl,
517 519 		ARRAY_SIZE(amba_synth_rtbl), &_lock);
518 - 	clk_register_clkdev(clk, "amba_synth_clk", NULL);
520 + 	clk_register_clkdev(clk, "amba_syn_clk", NULL);
519 521
520 - 	clk = clk_register_mux(NULL, "sys_mux_clk", sys_parents,
522 + 	clk = clk_register_mux(NULL, "sys_mclk", sys_parents,
521 523 		ARRAY_SIZE(sys_parents), 0, SPEAR1340_SYS_CLK_CTRL,
522 524 		SPEAR1340_SCLK_SRC_SEL_SHIFT,
523 525 		SPEAR1340_SCLK_SRC_SEL_MASK, 0, &_lock);
524 526 	clk_register_clkdev(clk, "sys_clk", NULL);
525 527
526 - 	clk = clk_register_fixed_factor(NULL, "cpu_clk", "sys_mux_clk", 0, 1,
528 + 	clk = clk_register_fixed_factor(NULL, "cpu_clk", "sys_mclk", 0, 1,
527 529 		2);
528 530 	clk_register_clkdev(clk, "cpu_clk", NULL);
··· 546 548 	clk_register_clkdev(clk, "apb_clk", NULL);
547 549
548 550 	/* gpt clocks */
549 - 	clk = clk_register_mux(NULL, "gpt0_mux_clk", gpt_parents,
551 + 	clk = clk_register_mux(NULL, "gpt0_mclk", gpt_parents,
550 552 		ARRAY_SIZE(gpt_parents), 0, SPEAR1340_PERIP_CLK_CFG,
551 553 		SPEAR1340_GPT0_CLK_SHIFT, SPEAR1340_GPT_CLK_MASK, 0,
552 554 		&_lock);
553 - 	clk_register_clkdev(clk, "gpt0_mux_clk", NULL);
554 - 	clk = clk_register_gate(NULL, "gpt0_clk", "gpt0_mux_clk", 0,
555 + 	clk_register_clkdev(clk, "gpt0_mclk", NULL);
556 + 	clk = clk_register_gate(NULL, "gpt0_clk", "gpt0_mclk", 0,
555 557 		SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_GPT0_CLK_ENB, 0,
556 558 		&_lock);
557 559 	clk_register_clkdev(clk, NULL, "gpt0");
558 560
559 - 	clk = clk_register_mux(NULL, "gpt1_mux_clk", gpt_parents,
561 + 	clk =
clk_register_mux(NULL, "gpt1_mclk", gpt_parents, 560 562 ARRAY_SIZE(gpt_parents), 0, SPEAR1340_PERIP_CLK_CFG, 561 563 SPEAR1340_GPT1_CLK_SHIFT, SPEAR1340_GPT_CLK_MASK, 0, 562 564 &_lock); 563 - clk_register_clkdev(clk, "gpt1_mux_clk", NULL); 564 - clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mux_clk", 0, 565 + clk_register_clkdev(clk, "gpt1_mclk", NULL); 566 + clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mclk", 0, 565 567 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_GPT1_CLK_ENB, 0, 566 568 &_lock); 567 569 clk_register_clkdev(clk, NULL, "gpt1"); 568 570 569 - clk = clk_register_mux(NULL, "gpt2_mux_clk", gpt_parents, 571 + clk = clk_register_mux(NULL, "gpt2_mclk", gpt_parents, 570 572 ARRAY_SIZE(gpt_parents), 0, SPEAR1340_PERIP_CLK_CFG, 571 573 SPEAR1340_GPT2_CLK_SHIFT, SPEAR1340_GPT_CLK_MASK, 0, 572 574 &_lock); 573 - clk_register_clkdev(clk, "gpt2_mux_clk", NULL); 574 - clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mux_clk", 0, 575 + clk_register_clkdev(clk, "gpt2_mclk", NULL); 576 + clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mclk", 0, 575 577 SPEAR1340_PERIP2_CLK_ENB, SPEAR1340_GPT2_CLK_ENB, 0, 576 578 &_lock); 577 579 clk_register_clkdev(clk, NULL, "gpt2"); 578 580 579 - clk = clk_register_mux(NULL, "gpt3_mux_clk", gpt_parents, 581 + clk = clk_register_mux(NULL, "gpt3_mclk", gpt_parents, 580 582 ARRAY_SIZE(gpt_parents), 0, SPEAR1340_PERIP_CLK_CFG, 581 583 SPEAR1340_GPT3_CLK_SHIFT, SPEAR1340_GPT_CLK_MASK, 0, 582 584 &_lock); 583 - clk_register_clkdev(clk, "gpt3_mux_clk", NULL); 584 - clk = clk_register_gate(NULL, "gpt3_clk", "gpt3_mux_clk", 0, 585 + clk_register_clkdev(clk, "gpt3_mclk", NULL); 586 + clk = clk_register_gate(NULL, "gpt3_clk", "gpt3_mclk", 0, 585 587 SPEAR1340_PERIP2_CLK_ENB, SPEAR1340_GPT3_CLK_ENB, 0, 586 588 &_lock); 587 589 clk_register_clkdev(clk, NULL, "gpt3"); 588 590 589 591 /* others */ 590 - clk = clk_register_aux("uart0_synth_clk", "uart0_synth_gate_clk", 592 + clk = clk_register_aux("uart0_syn_clk", "uart0_syn_gclk", 591 593 
"vco1div2_clk", 0, SPEAR1340_UART0_CLK_SYNT, NULL, 592 594 aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 593 - clk_register_clkdev(clk, "uart0_synth_clk", NULL); 594 - clk_register_clkdev(clk1, "uart0_synth_gate_clk", NULL); 595 + clk_register_clkdev(clk, "uart0_syn_clk", NULL); 596 + clk_register_clkdev(clk1, "uart0_syn_gclk", NULL); 595 597 596 - clk = clk_register_mux(NULL, "uart0_mux_clk", uart0_parents, 598 + clk = clk_register_mux(NULL, "uart0_mclk", uart0_parents, 597 599 ARRAY_SIZE(uart0_parents), 0, SPEAR1340_PERIP_CLK_CFG, 598 600 SPEAR1340_UART0_CLK_SHIFT, SPEAR1340_UART_CLK_MASK, 0, 599 601 &_lock); 600 - clk_register_clkdev(clk, "uart0_mux_clk", NULL); 602 + clk_register_clkdev(clk, "uart0_mclk", NULL); 601 603 602 - clk = clk_register_gate(NULL, "uart0_clk", "uart0_mux_clk", 0, 604 + clk = clk_register_gate(NULL, "uart0_clk", "uart0_mclk", 0, 603 605 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_UART0_CLK_ENB, 0, 604 606 &_lock); 605 607 clk_register_clkdev(clk, NULL, "e0000000.serial"); 606 608 607 - clk = clk_register_aux("uart1_synth_clk", "uart1_synth_gate_clk", 609 + clk = clk_register_aux("uart1_syn_clk", "uart1_syn_gclk", 608 610 "vco1div2_clk", 0, SPEAR1340_UART1_CLK_SYNT, NULL, 609 611 aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 610 - clk_register_clkdev(clk, "uart1_synth_clk", NULL); 611 - clk_register_clkdev(clk1, "uart1_synth_gate_clk", NULL); 612 + clk_register_clkdev(clk, "uart1_syn_clk", NULL); 613 + clk_register_clkdev(clk1, "uart1_syn_gclk", NULL); 612 614 613 - clk = clk_register_mux(NULL, "uart1_mux_clk", uart1_parents, 615 + clk = clk_register_mux(NULL, "uart1_mclk", uart1_parents, 614 616 ARRAY_SIZE(uart1_parents), 0, SPEAR1340_PERIP_CLK_CFG, 615 617 SPEAR1340_UART1_CLK_SHIFT, SPEAR1340_UART_CLK_MASK, 0, 616 618 &_lock); 617 - clk_register_clkdev(clk, "uart1_mux_clk", NULL); 619 + clk_register_clkdev(clk, "uart1_mclk", NULL); 618 620 619 - clk = clk_register_gate(NULL, "uart1_clk", "uart1_mux_clk", 0, 620 - SPEAR1340_PERIP1_CLK_ENB, 
SPEAR1340_UART1_CLK_ENB, 0, 621 + clk = clk_register_gate(NULL, "uart1_clk", "uart1_mclk", 0, 622 + SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_UART1_CLK_ENB, 0, 621 623 &_lock); 622 624 clk_register_clkdev(clk, NULL, "b4100000.serial"); 623 625 624 - clk = clk_register_aux("sdhci_synth_clk", "sdhci_synth_gate_clk", 626 + clk = clk_register_aux("sdhci_syn_clk", "sdhci_syn_gclk", 625 627 "vco1div2_clk", 0, SPEAR1340_SDHCI_CLK_SYNT, NULL, 626 628 aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 627 - clk_register_clkdev(clk, "sdhci_synth_clk", NULL); 628 - clk_register_clkdev(clk1, "sdhci_synth_gate_clk", NULL); 629 + clk_register_clkdev(clk, "sdhci_syn_clk", NULL); 630 + clk_register_clkdev(clk1, "sdhci_syn_gclk", NULL); 629 631 630 - clk = clk_register_gate(NULL, "sdhci_clk", "sdhci_synth_gate_clk", 0, 632 + clk = clk_register_gate(NULL, "sdhci_clk", "sdhci_syn_gclk", 0, 631 633 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_SDHCI_CLK_ENB, 0, 632 634 &_lock); 633 635 clk_register_clkdev(clk, NULL, "b3000000.sdhci"); 634 636 635 - clk = clk_register_aux("cfxd_synth_clk", "cfxd_synth_gate_clk", 636 - "vco1div2_clk", 0, SPEAR1340_CFXD_CLK_SYNT, NULL, 637 - aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 638 - clk_register_clkdev(clk, "cfxd_synth_clk", NULL); 639 - clk_register_clkdev(clk1, "cfxd_synth_gate_clk", NULL); 637 + clk = clk_register_aux("cfxd_syn_clk", "cfxd_syn_gclk", "vco1div2_clk", 638 + 0, SPEAR1340_CFXD_CLK_SYNT, NULL, aux_rtbl, 639 + ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 640 + clk_register_clkdev(clk, "cfxd_syn_clk", NULL); 641 + clk_register_clkdev(clk1, "cfxd_syn_gclk", NULL); 640 642 641 - clk = clk_register_gate(NULL, "cfxd_clk", "cfxd_synth_gate_clk", 0, 643 + clk = clk_register_gate(NULL, "cfxd_clk", "cfxd_syn_gclk", 0, 642 644 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_CFXD_CLK_ENB, 0, 643 645 &_lock); 644 646 clk_register_clkdev(clk, NULL, "b2800000.cf"); 645 647 clk_register_clkdev(clk, NULL, "arasan_xd"); 646 648 647 - clk = clk_register_aux("c3_synth_clk", 
"c3_synth_gate_clk", 648 - "vco1div2_clk", 0, SPEAR1340_C3_CLK_SYNT, NULL, 649 - aux_rtbl, ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 650 - clk_register_clkdev(clk, "c3_synth_clk", NULL); 651 - clk_register_clkdev(clk1, "c3_synth_gate_clk", NULL); 649 + clk = clk_register_aux("c3_syn_clk", "c3_syn_gclk", "vco1div2_clk", 0, 650 + SPEAR1340_C3_CLK_SYNT, NULL, aux_rtbl, 651 + ARRAY_SIZE(aux_rtbl), &_lock, &clk1); 652 + clk_register_clkdev(clk, "c3_syn_clk", NULL); 653 + clk_register_clkdev(clk1, "c3_syn_gclk", NULL); 652 654 653 - clk = clk_register_mux(NULL, "c3_mux_clk", c3_parents, 655 + clk = clk_register_mux(NULL, "c3_mclk", c3_parents, 654 656 ARRAY_SIZE(c3_parents), 0, SPEAR1340_PERIP_CLK_CFG, 655 657 SPEAR1340_C3_CLK_SHIFT, SPEAR1340_C3_CLK_MASK, 0, 656 658 &_lock); 657 - clk_register_clkdev(clk, "c3_mux_clk", NULL); 659 + clk_register_clkdev(clk, "c3_mclk", NULL); 658 660 659 - clk = clk_register_gate(NULL, "c3_clk", "c3_mux_clk", 0, 661 + clk = clk_register_gate(NULL, "c3_clk", "c3_mclk", 0, 660 662 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_C3_CLK_ENB, 0, 661 663 &_lock); 662 664 clk_register_clkdev(clk, NULL, "c3"); 663 665 664 666 /* gmac */ 665 - clk = clk_register_mux(NULL, "gmac_phy_input_mux_clk", 666 - gmac_phy_input_parents, 667 + clk = clk_register_mux(NULL, "phy_input_mclk", gmac_phy_input_parents, 667 668 ARRAY_SIZE(gmac_phy_input_parents), 0, 668 669 SPEAR1340_GMAC_CLK_CFG, 669 670 SPEAR1340_GMAC_PHY_INPUT_CLK_SHIFT, 670 671 SPEAR1340_GMAC_PHY_INPUT_CLK_MASK, 0, &_lock); 671 - clk_register_clkdev(clk, "gmac_phy_input_mux_clk", NULL); 672 + clk_register_clkdev(clk, "phy_input_mclk", NULL); 672 673 673 - clk = clk_register_aux("gmac_phy_synth_clk", "gmac_phy_synth_gate_clk", 674 - "gmac_phy_input_mux_clk", 0, SPEAR1340_GMAC_CLK_SYNT, 675 - NULL, gmac_rtbl, ARRAY_SIZE(gmac_rtbl), &_lock, &clk1); 676 - clk_register_clkdev(clk, "gmac_phy_synth_clk", NULL); 677 - clk_register_clkdev(clk1, "gmac_phy_synth_gate_clk", NULL); 674 + clk = 
clk_register_aux("phy_syn_clk", "phy_syn_gclk", "phy_input_mclk", 675 + 0, SPEAR1340_GMAC_CLK_SYNT, NULL, gmac_rtbl, 676 + ARRAY_SIZE(gmac_rtbl), &_lock, &clk1); 677 + clk_register_clkdev(clk, "phy_syn_clk", NULL); 678 + clk_register_clkdev(clk1, "phy_syn_gclk", NULL); 678 679 679 - clk = clk_register_mux(NULL, "gmac_phy_mux_clk", gmac_phy_parents, 680 + clk = clk_register_mux(NULL, "phy_mclk", gmac_phy_parents, 680 681 ARRAY_SIZE(gmac_phy_parents), 0, 681 682 SPEAR1340_PERIP_CLK_CFG, SPEAR1340_GMAC_PHY_CLK_SHIFT, 682 683 SPEAR1340_GMAC_PHY_CLK_MASK, 0, &_lock); 683 684 clk_register_clkdev(clk, NULL, "stmmacphy.0"); 684 685 685 686 /* clcd */ 686 - clk = clk_register_mux(NULL, "clcd_synth_mux_clk", clcd_synth_parents, 687 + clk = clk_register_mux(NULL, "clcd_syn_mclk", clcd_synth_parents, 687 688 ARRAY_SIZE(clcd_synth_parents), 0, 688 689 SPEAR1340_CLCD_CLK_SYNT, SPEAR1340_CLCD_SYNT_CLK_SHIFT, 689 690 SPEAR1340_CLCD_SYNT_CLK_MASK, 0, &_lock); 690 - clk_register_clkdev(clk, "clcd_synth_mux_clk", NULL); 691 + clk_register_clkdev(clk, "clcd_syn_mclk", NULL); 691 692 692 - clk = clk_register_frac("clcd_synth_clk", "clcd_synth_mux_clk", 0, 693 + clk = clk_register_frac("clcd_syn_clk", "clcd_syn_mclk", 0, 693 694 SPEAR1340_CLCD_CLK_SYNT, clcd_rtbl, 694 695 ARRAY_SIZE(clcd_rtbl), &_lock); 695 - clk_register_clkdev(clk, "clcd_synth_clk", NULL); 696 + clk_register_clkdev(clk, "clcd_syn_clk", NULL); 696 697 697 - clk = clk_register_mux(NULL, "clcd_pixel_mux_clk", clcd_pixel_parents, 698 + clk = clk_register_mux(NULL, "clcd_pixel_mclk", clcd_pixel_parents, 698 699 ARRAY_SIZE(clcd_pixel_parents), 0, 699 700 SPEAR1340_PERIP_CLK_CFG, SPEAR1340_CLCD_CLK_SHIFT, 700 701 SPEAR1340_CLCD_CLK_MASK, 0, &_lock); 701 702 clk_register_clkdev(clk, "clcd_pixel_clk", NULL); 702 703 703 - clk = clk_register_gate(NULL, "clcd_clk", "clcd_pixel_mux_clk", 0, 704 + clk = clk_register_gate(NULL, "clcd_clk", "clcd_pixel_mclk", 0, 704 705 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_CLCD_CLK_ENB, 0, 705 706 
&_lock); 706 707 clk_register_clkdev(clk, "clcd_clk", NULL); 707 708 708 709 /* i2s */ 709 - clk = clk_register_mux(NULL, "i2s_src_mux_clk", i2s_src_parents, 710 + clk = clk_register_mux(NULL, "i2s_src_mclk", i2s_src_parents, 710 711 ARRAY_SIZE(i2s_src_parents), 0, SPEAR1340_I2S_CLK_CFG, 711 712 SPEAR1340_I2S_SRC_CLK_SHIFT, SPEAR1340_I2S_SRC_CLK_MASK, 712 713 0, &_lock); 713 714 clk_register_clkdev(clk, "i2s_src_clk", NULL); 714 715 715 - clk = clk_register_aux("i2s_prs1_clk", NULL, "i2s_src_mux_clk", 0, 716 + clk = clk_register_aux("i2s_prs1_clk", NULL, "i2s_src_mclk", 0, 716 717 SPEAR1340_I2S_CLK_CFG, &i2s_prs1_masks, i2s_prs1_rtbl, 717 718 ARRAY_SIZE(i2s_prs1_rtbl), &_lock, NULL); 718 719 clk_register_clkdev(clk, "i2s_prs1_clk", NULL); 719 720 720 - clk = clk_register_mux(NULL, "i2s_ref_mux_clk", i2s_ref_parents, 721 + clk = clk_register_mux(NULL, "i2s_ref_mclk", i2s_ref_parents, 721 722 ARRAY_SIZE(i2s_ref_parents), 0, SPEAR1340_I2S_CLK_CFG, 722 723 SPEAR1340_I2S_REF_SHIFT, SPEAR1340_I2S_REF_SEL_MASK, 0, 723 724 &_lock); 724 725 clk_register_clkdev(clk, "i2s_ref_clk", NULL); 725 726 726 - clk = clk_register_gate(NULL, "i2s_ref_pad_clk", "i2s_ref_mux_clk", 0, 727 + clk = clk_register_gate(NULL, "i2s_ref_pad_clk", "i2s_ref_mclk", 0, 727 728 SPEAR1340_PERIP2_CLK_ENB, SPEAR1340_I2S_REF_PAD_CLK_ENB, 728 729 0, &_lock); 729 730 clk_register_clkdev(clk, "i2s_ref_pad_clk", NULL); 730 731 731 - clk = clk_register_aux("i2s_sclk_clk", "i2s_sclk_gate_clk", 732 - "i2s_ref_mux_clk", 0, SPEAR1340_I2S_CLK_CFG, 733 - &i2s_sclk_masks, i2s_sclk_rtbl, 734 - ARRAY_SIZE(i2s_sclk_rtbl), &_lock, &clk1); 732 + clk = clk_register_aux("i2s_sclk_clk", "i2s_sclk_gclk", "i2s_ref_mclk", 733 + 0, SPEAR1340_I2S_CLK_CFG, &i2s_sclk_masks, 734 + i2s_sclk_rtbl, ARRAY_SIZE(i2s_sclk_rtbl), &_lock, 735 + &clk1); 735 736 clk_register_clkdev(clk, "i2s_sclk_clk", NULL); 736 - clk_register_clkdev(clk1, "i2s_sclk_gate_clk", NULL); 737 + clk_register_clkdev(clk1, "i2s_sclk_gclk", NULL); 737 738 738 739 /* 
clock derived from ahb clk */ 739 740 clk = clk_register_gate(NULL, "i2c0_clk", "ahb_clk", 0, ··· 741 744 clk_register_clkdev(clk, NULL, "e0280000.i2c"); 742 745 743 746 clk = clk_register_gate(NULL, "i2c1_clk", "ahb_clk", 0, 744 - SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_I2C1_CLK_ENB, 0, 747 + SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_I2C1_CLK_ENB, 0, 745 748 &_lock); 746 749 clk_register_clkdev(clk, NULL, "b4000000.i2c"); 747 750 ··· 797 800 &_lock); 798 801 clk_register_clkdev(clk, "sysram1_clk", NULL); 799 802 800 - clk = clk_register_aux("adc_synth_clk", "adc_synth_gate_clk", "ahb_clk", 803 + clk = clk_register_aux("adc_syn_clk", "adc_syn_gclk", "ahb_clk", 801 804 0, SPEAR1340_ADC_CLK_SYNT, NULL, adc_rtbl, 802 805 ARRAY_SIZE(adc_rtbl), &_lock, &clk1); 803 - clk_register_clkdev(clk, "adc_synth_clk", NULL); 804 - clk_register_clkdev(clk1, "adc_synth_gate_clk", NULL); 806 + clk_register_clkdev(clk, "adc_syn_clk", NULL); 807 + clk_register_clkdev(clk1, "adc_syn_gclk", NULL); 805 808 806 - clk = clk_register_gate(NULL, "adc_clk", "adc_synth_gate_clk", 0, 809 + clk = clk_register_gate(NULL, "adc_clk", "adc_syn_gclk", 0, 807 810 SPEAR1340_PERIP1_CLK_ENB, SPEAR1340_ADC_CLK_ENB, 0, 808 811 &_lock); 809 812 clk_register_clkdev(clk, NULL, "adc_clk"); ··· 840 843 clk_register_clkdev(clk, NULL, "e0300000.kbd"); 841 844 842 845 /* RAS clks */ 843 - clk = clk_register_mux(NULL, "gen_synth0_1_mux_clk", 844 - gen_synth0_1_parents, ARRAY_SIZE(gen_synth0_1_parents), 845 - 0, SPEAR1340_PLL_CFG, SPEAR1340_GEN_SYNT0_1_CLK_SHIFT, 846 + clk = clk_register_mux(NULL, "gen_syn0_1_mclk", gen_synth0_1_parents, 847 + ARRAY_SIZE(gen_synth0_1_parents), 0, SPEAR1340_PLL_CFG, 848 + SPEAR1340_GEN_SYNT0_1_CLK_SHIFT, 846 849 SPEAR1340_GEN_SYNT_CLK_MASK, 0, &_lock); 847 - clk_register_clkdev(clk, "gen_synth0_1_clk", NULL); 850 + clk_register_clkdev(clk, "gen_syn0_1_clk", NULL); 848 851 849 - clk = clk_register_mux(NULL, "gen_synth2_3_mux_clk", 850 - gen_synth2_3_parents, ARRAY_SIZE(gen_synth2_3_parents), 851 
- 0, SPEAR1340_PLL_CFG, SPEAR1340_GEN_SYNT2_3_CLK_SHIFT, 852 + clk = clk_register_mux(NULL, "gen_syn2_3_mclk", gen_synth2_3_parents, 853 + ARRAY_SIZE(gen_synth2_3_parents), 0, SPEAR1340_PLL_CFG, 854 + SPEAR1340_GEN_SYNT2_3_CLK_SHIFT, 852 855 SPEAR1340_GEN_SYNT_CLK_MASK, 0, &_lock); 853 - clk_register_clkdev(clk, "gen_synth2_3_clk", NULL); 856 + clk_register_clkdev(clk, "gen_syn2_3_clk", NULL); 854 857 855 - clk = clk_register_frac("gen_synth0_clk", "gen_synth0_1_clk", 0, 858 + clk = clk_register_frac("gen_syn0_clk", "gen_syn0_1_clk", 0, 856 859 SPEAR1340_GEN_CLK_SYNT0, gen_rtbl, ARRAY_SIZE(gen_rtbl), 857 860 &_lock); 858 - clk_register_clkdev(clk, "gen_synth0_clk", NULL); 861 + clk_register_clkdev(clk, "gen_syn0_clk", NULL); 859 862 860 - clk = clk_register_frac("gen_synth1_clk", "gen_synth0_1_clk", 0, 863 + clk = clk_register_frac("gen_syn1_clk", "gen_syn0_1_clk", 0, 861 864 SPEAR1340_GEN_CLK_SYNT1, gen_rtbl, ARRAY_SIZE(gen_rtbl), 862 865 &_lock); 863 - clk_register_clkdev(clk, "gen_synth1_clk", NULL); 866 + clk_register_clkdev(clk, "gen_syn1_clk", NULL); 864 867 865 - clk = clk_register_frac("gen_synth2_clk", "gen_synth2_3_clk", 0, 868 + clk = clk_register_frac("gen_syn2_clk", "gen_syn2_3_clk", 0, 866 869 SPEAR1340_GEN_CLK_SYNT2, gen_rtbl, ARRAY_SIZE(gen_rtbl), 867 870 &_lock); 868 - clk_register_clkdev(clk, "gen_synth2_clk", NULL); 871 + clk_register_clkdev(clk, "gen_syn2_clk", NULL); 869 872 870 - clk = clk_register_frac("gen_synth3_clk", "gen_synth2_3_clk", 0, 873 + clk = clk_register_frac("gen_syn3_clk", "gen_syn2_3_clk", 0, 871 874 SPEAR1340_GEN_CLK_SYNT3, gen_rtbl, ARRAY_SIZE(gen_rtbl), 872 875 &_lock); 873 - clk_register_clkdev(clk, "gen_synth3_clk", NULL); 876 + clk_register_clkdev(clk, "gen_syn3_clk", NULL); 874 877 875 - clk = clk_register_gate(NULL, "mali_clk", "gen_synth3_clk", 0, 878 + clk = clk_register_gate(NULL, "mali_clk", "gen_syn3_clk", 0, 876 879 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_MALI_CLK_ENB, 0, 877 880 &_lock); 878 881 
clk_register_clkdev(clk, NULL, "mali"); ··· 887 890 &_lock); 888 891 clk_register_clkdev(clk, NULL, "spear_cec.1"); 889 892 890 - clk = clk_register_mux(NULL, "spdif_out_mux_clk", spdif_out_parents, 893 + clk = clk_register_mux(NULL, "spdif_out_mclk", spdif_out_parents, 891 894 ARRAY_SIZE(spdif_out_parents), 0, 892 895 SPEAR1340_PERIP_CLK_CFG, SPEAR1340_SPDIF_OUT_CLK_SHIFT, 893 896 SPEAR1340_SPDIF_CLK_MASK, 0, &_lock); 894 - clk_register_clkdev(clk, "spdif_out_mux_clk", NULL); 897 + clk_register_clkdev(clk, "spdif_out_mclk", NULL); 895 898 896 - clk = clk_register_gate(NULL, "spdif_out_clk", "spdif_out_mux_clk", 0, 899 + clk = clk_register_gate(NULL, "spdif_out_clk", "spdif_out_mclk", 0, 897 900 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_SPDIF_OUT_CLK_ENB, 898 901 0, &_lock); 899 902 clk_register_clkdev(clk, NULL, "spdif-out"); 900 903 901 - clk = clk_register_mux(NULL, "spdif_in_mux_clk", spdif_in_parents, 904 + clk = clk_register_mux(NULL, "spdif_in_mclk", spdif_in_parents, 902 905 ARRAY_SIZE(spdif_in_parents), 0, 903 906 SPEAR1340_PERIP_CLK_CFG, SPEAR1340_SPDIF_IN_CLK_SHIFT, 904 907 SPEAR1340_SPDIF_CLK_MASK, 0, &_lock); 905 - clk_register_clkdev(clk, "spdif_in_mux_clk", NULL); 908 + clk_register_clkdev(clk, "spdif_in_mclk", NULL); 906 909 907 - clk = clk_register_gate(NULL, "spdif_in_clk", "spdif_in_mux_clk", 0, 910 + clk = clk_register_gate(NULL, "spdif_in_clk", "spdif_in_mclk", 0, 908 911 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_SPDIF_IN_CLK_ENB, 0, 909 912 &_lock); 910 913 clk_register_clkdev(clk, NULL, "spdif-in"); 911 914 912 - clk = clk_register_gate(NULL, "acp_clk", "acp_mux_clk", 0, 915 + clk = clk_register_gate(NULL, "acp_clk", "acp_mclk", 0, 913 916 SPEAR1340_PERIP2_CLK_ENB, SPEAR1340_ACP_CLK_ENB, 0, 914 917 &_lock); 915 918 clk_register_clkdev(clk, NULL, "acp_clk"); 916 919 917 - clk = clk_register_gate(NULL, "plgpio_clk", "plgpio_mux_clk", 0, 920 + clk = clk_register_gate(NULL, "plgpio_clk", "plgpio_mclk", 0, 918 921 SPEAR1340_PERIP3_CLK_ENB, 
SPEAR1340_PLGPIO_CLK_ENB, 0, 919 922 &_lock); 920 923 clk_register_clkdev(clk, NULL, "plgpio"); 921 924 922 - clk = clk_register_gate(NULL, "video_dec_clk", "video_dec_mux_clk", 0, 925 + clk = clk_register_gate(NULL, "video_dec_clk", "video_dec_mclk", 0, 923 926 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_VIDEO_DEC_CLK_ENB, 924 927 0, &_lock); 925 928 clk_register_clkdev(clk, NULL, "video_dec"); 926 929 927 - clk = clk_register_gate(NULL, "video_enc_clk", "video_enc_mux_clk", 0, 930 + clk = clk_register_gate(NULL, "video_enc_clk", "video_enc_mclk", 0, 928 931 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_VIDEO_ENC_CLK_ENB, 929 932 0, &_lock); 930 933 clk_register_clkdev(clk, NULL, "video_enc"); 931 934 932 - clk = clk_register_gate(NULL, "video_in_clk", "video_in_mux_clk", 0, 935 + clk = clk_register_gate(NULL, "video_in_clk", "video_in_mclk", 0, 933 936 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_VIDEO_IN_CLK_ENB, 0, 934 937 &_lock); 935 938 clk_register_clkdev(clk, NULL, "spear_vip"); 936 939 937 - clk = clk_register_gate(NULL, "cam0_clk", "cam0_mux_clk", 0, 940 + clk = clk_register_gate(NULL, "cam0_clk", "cam0_mclk", 0, 938 941 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_CAM0_CLK_ENB, 0, 939 942 &_lock); 940 943 clk_register_clkdev(clk, NULL, "spear_camif.0"); 941 944 942 - clk = clk_register_gate(NULL, "cam1_clk", "cam1_mux_clk", 0, 945 + clk = clk_register_gate(NULL, "cam1_clk", "cam1_mclk", 0, 943 946 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_CAM1_CLK_ENB, 0, 944 947 &_lock); 945 948 clk_register_clkdev(clk, NULL, "spear_camif.1"); 946 949 947 - clk = clk_register_gate(NULL, "cam2_clk", "cam2_mux_clk", 0, 950 + clk = clk_register_gate(NULL, "cam2_clk", "cam2_mclk", 0, 948 951 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_CAM2_CLK_ENB, 0, 949 952 &_lock); 950 953 clk_register_clkdev(clk, NULL, "spear_camif.2"); 951 954 952 - clk = clk_register_gate(NULL, "cam3_clk", "cam3_mux_clk", 0, 955 + clk = clk_register_gate(NULL, "cam3_clk", "cam3_mclk", 0, 953 956 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_CAM3_CLK_ENB, 0, 
954 957 &_lock); 955 958 clk_register_clkdev(clk, NULL, "spear_camif.3"); 956 959 957 - clk = clk_register_gate(NULL, "pwm_clk", "pwm_mux_clk", 0, 960 + clk = clk_register_gate(NULL, "pwm_clk", "pwm_mclk", 0, 958 961 SPEAR1340_PERIP3_CLK_ENB, SPEAR1340_PWM_CLK_ENB, 0, 959 962 &_lock); 960 963 clk_register_clkdev(clk, NULL, "pwm");
+80 -88
drivers/clk/spear/spear3xx_clock.c
···
 };
 
 /* clock parents */
-static const char *uart0_parents[] = { "pll3_48m_clk", "uart_synth_gate_clk", };
-static const char *firda_parents[] = { "pll3_48m_clk", "firda_synth_gate_clk",
+static const char *uart0_parents[] = { "pll3_clk", "uart_syn_gclk", };
+static const char *firda_parents[] = { "pll3_clk", "firda_syn_gclk",
 };
-static const char *gpt0_parents[] = { "pll3_48m_clk", "gpt0_synth_clk", };
-static const char *gpt1_parents[] = { "pll3_48m_clk", "gpt1_synth_clk", };
-static const char *gpt2_parents[] = { "pll3_48m_clk", "gpt2_synth_clk", };
+static const char *gpt0_parents[] = { "pll3_clk", "gpt0_syn_clk", };
+static const char *gpt1_parents[] = { "pll3_clk", "gpt1_syn_clk", };
+static const char *gpt2_parents[] = { "pll3_clk", "gpt2_syn_clk", };
 static const char *gen2_3_parents[] = { "pll1_clk", "pll2_clk", };
 static const char *ddr_parents[] = { "ahb_clk", "ahbmult2_clk", "none",
 	"pll2_clk", };
···
 {
 	struct clk *clk;
 
-	clk = clk_register_fixed_factor(NULL, "clcd_clk", "ras_pll3_48m_clk", 0,
+	clk = clk_register_fixed_factor(NULL, "clcd_clk", "ras_pll3_clk", 0,
 			1, 1);
 	clk_register_clkdev(clk, NULL, "60000000.clcd");
 
···
 #define SPEAR320_UARTX_PCLK_VAL_SYNTH1 0x0
 #define SPEAR320_UARTX_PCLK_VAL_APB 0x1
 
-static const char *i2s_ref_parents[] = { "ras_pll2_clk",
-	"ras_gen2_synth_gate_clk", };
-static const char *sdhci_parents[] = { "ras_pll3_48m_clk",
-	"ras_gen3_synth_gate_clk",
-};
+static const char *i2s_ref_parents[] = { "ras_pll2_clk", "ras_syn2_gclk", };
+static const char *sdhci_parents[] = { "ras_pll3_clk", "ras_syn3_gclk", };
 static const char *smii0_parents[] = { "smii_125m_pad", "ras_pll2_clk",
-	"ras_gen0_synth_gate_clk", };
-static const char *uartx_parents[] = { "ras_gen1_synth_gate_clk", "ras_apb_clk",
-};
+	"ras_syn0_gclk", };
+static const char *uartx_parents[] = { "ras_syn1_gclk", "ras_apb_clk", };
 
 static void __init spear320_clk_init(void)
 {
···
 			CLK_IS_ROOT, 125000000);
 	clk_register_clkdev(clk, "smii_125m_pad", NULL);
 
-	clk = clk_register_fixed_factor(NULL, "clcd_clk", "ras_pll3_48m_clk", 0,
+	clk = clk_register_fixed_factor(NULL, "clcd_clk", "ras_pll3_clk", 0,
 			1, 1);
 	clk_register_clkdev(clk, NULL, "90000000.clcd");
 
···
 	clk_register_clkdev(clk, NULL, "fc900000.rtc");
 
 	/* clock derived from 24 MHz osc clk */
-	clk = clk_register_fixed_rate(NULL, "pll3_48m_clk", "osc_24m_clk", 0,
+	clk = clk_register_fixed_rate(NULL, "pll3_clk", "osc_24m_clk", 0,
 			48000000);
-	clk_register_clkdev(clk, "pll3_48m_clk", NULL);
+	clk_register_clkdev(clk, "pll3_clk", NULL);
 
 	clk = clk_register_fixed_factor(NULL, "wdt_clk", "osc_24m_clk", 0, 1,
 			1);
···
 			HCLK_RATIO_MASK, 0, &_lock);
 	clk_register_clkdev(clk, "ahb_clk", NULL);
 
-	clk = clk_register_aux("uart_synth_clk", "uart_synth_gate_clk",
-			"pll1_clk", 0, UART_CLK_SYNT, NULL, aux_rtbl,
-			ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
-	clk_register_clkdev(clk, "uart_synth_clk", NULL);
-	clk_register_clkdev(clk1, "uart_synth_gate_clk", NULL);
+	clk = clk_register_aux("uart_syn_clk", "uart_syn_gclk", "pll1_clk", 0,
+			UART_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
+			&_lock, &clk1);
+	clk_register_clkdev(clk, "uart_syn_clk", NULL);
+	clk_register_clkdev(clk1, "uart_syn_gclk", NULL);
 
-	clk = clk_register_mux(NULL, "uart0_mux_clk", uart0_parents,
+	clk = clk_register_mux(NULL, "uart0_mclk", uart0_parents,
 			ARRAY_SIZE(uart0_parents), 0, PERIP_CLK_CFG,
 			UART_CLK_SHIFT, UART_CLK_MASK, 0, &_lock);
-	clk_register_clkdev(clk, "uart0_mux_clk", NULL);
+	clk_register_clkdev(clk, "uart0_mclk", NULL);
 
-	clk = clk_register_gate(NULL, "uart0", "uart0_mux_clk", 0,
-			PERIP1_CLK_ENB, UART_CLK_ENB, 0, &_lock);
+	clk = clk_register_gate(NULL, "uart0", "uart0_mclk", 0, PERIP1_CLK_ENB,
+			UART_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, NULL, "d0000000.serial");
 
-	clk = clk_register_aux("firda_synth_clk", "firda_synth_gate_clk",
-			"pll1_clk", 0, FIRDA_CLK_SYNT, NULL, aux_rtbl,
-			ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
-	clk_register_clkdev(clk, "firda_synth_clk", NULL);
-	clk_register_clkdev(clk1, "firda_synth_gate_clk", NULL);
+	clk = clk_register_aux("firda_syn_clk", "firda_syn_gclk", "pll1_clk", 0,
+			FIRDA_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
+			&_lock, &clk1);
+	clk_register_clkdev(clk, "firda_syn_clk", NULL);
+	clk_register_clkdev(clk1, "firda_syn_gclk", NULL);
 
-	clk = clk_register_mux(NULL, "firda_mux_clk", firda_parents,
+	clk = clk_register_mux(NULL, "firda_mclk", firda_parents,
 			ARRAY_SIZE(firda_parents), 0, PERIP_CLK_CFG,
 			FIRDA_CLK_SHIFT, FIRDA_CLK_MASK, 0, &_lock);
-	clk_register_clkdev(clk, "firda_mux_clk", NULL);
+	clk_register_clkdev(clk, "firda_mclk", NULL);
 
-	clk = clk_register_gate(NULL, "firda_clk", "firda_mux_clk", 0,
+	clk = clk_register_gate(NULL, "firda_clk", "firda_mclk", 0,
 			PERIP1_CLK_ENB, FIRDA_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, NULL, "firda");
 
 	/* gpt clocks */
-	clk_register_gpt("gpt0_synth_clk", "pll1_clk", 0, PRSC0_CLK_CFG,
-			gpt_rtbl, ARRAY_SIZE(gpt_rtbl), &_lock);
+	clk_register_gpt("gpt0_syn_clk", "pll1_clk", 0, PRSC0_CLK_CFG, gpt_rtbl,
+			ARRAY_SIZE(gpt_rtbl), &_lock);
 	clk = clk_register_mux(NULL, "gpt0_clk", gpt0_parents,
 			ARRAY_SIZE(gpt0_parents), 0, PERIP_CLK_CFG,
 			GPT0_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
 	clk_register_clkdev(clk, NULL, "gpt0");
 
-	clk_register_gpt("gpt1_synth_clk", "pll1_clk", 0, PRSC1_CLK_CFG,
-			gpt_rtbl, ARRAY_SIZE(gpt_rtbl), &_lock);
-	clk = clk_register_mux(NULL, "gpt1_mux_clk", gpt1_parents,
+	clk_register_gpt("gpt1_syn_clk", "pll1_clk", 0, PRSC1_CLK_CFG, gpt_rtbl,
+			ARRAY_SIZE(gpt_rtbl), &_lock);
+	clk = clk_register_mux(NULL, "gpt1_mclk", gpt1_parents,
 			ARRAY_SIZE(gpt1_parents), 0, PERIP_CLK_CFG,
 			GPT1_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
-	clk_register_clkdev(clk, "gpt1_mux_clk", NULL);
-	clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mux_clk", 0,
+	clk_register_clkdev(clk, "gpt1_mclk", NULL);
+	clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mclk", 0,
 			PERIP1_CLK_ENB, GPT1_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, NULL, "gpt1");
 
-	clk_register_gpt("gpt2_synth_clk", "pll1_clk", 0, PRSC2_CLK_CFG,
-			gpt_rtbl, ARRAY_SIZE(gpt_rtbl), &_lock);
-	clk = clk_register_mux(NULL, "gpt2_mux_clk", gpt2_parents,
+	clk_register_gpt("gpt2_syn_clk", "pll1_clk", 0, PRSC2_CLK_CFG, gpt_rtbl,
+			ARRAY_SIZE(gpt_rtbl), &_lock);
+	clk = clk_register_mux(NULL, "gpt2_mclk", gpt2_parents,
 			ARRAY_SIZE(gpt2_parents), 0, PERIP_CLK_CFG,
 			GPT2_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
-	clk_register_clkdev(clk, "gpt2_mux_clk", NULL);
-	clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mux_clk", 0,
+	clk_register_clkdev(clk, "gpt2_mclk", NULL);
+	clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mclk", 0,
 			PERIP1_CLK_ENB, GPT2_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, NULL, "gpt2");
 
 	/* general synths clocks */
-	clk = clk_register_aux("gen0_synth_clk", "gen0_synth_gate_clk",
-			"pll1_clk", 0, GEN0_CLK_SYNT, NULL, aux_rtbl,
-			ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
-	clk_register_clkdev(clk, "gen0_synth_clk", NULL);
-	clk_register_clkdev(clk1, "gen0_synth_gate_clk", NULL);
+	clk = clk_register_aux("gen0_syn_clk", "gen0_syn_gclk", "pll1_clk",
+			0, GEN0_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
+			&_lock, &clk1);
+	clk_register_clkdev(clk, "gen0_syn_clk", NULL);
+	clk_register_clkdev(clk1, "gen0_syn_gclk", NULL);
 
-	clk = clk_register_aux("gen1_synth_clk", "gen1_synth_gate_clk",
-			"pll1_clk", 0, GEN1_CLK_SYNT, NULL, aux_rtbl,
-			ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
-	clk_register_clkdev(clk, "gen1_synth_clk", NULL);
-	clk_register_clkdev(clk1, "gen1_synth_gate_clk", NULL);
+	clk = clk_register_aux("gen1_syn_clk", "gen1_syn_gclk", "pll1_clk",
+			0, GEN1_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
+			&_lock, &clk1);
+	clk_register_clkdev(clk, "gen1_syn_clk", NULL);
+	clk_register_clkdev(clk1, "gen1_syn_gclk", NULL);
 
-	clk = clk_register_mux(NULL, "gen2_3_parent_clk", gen2_3_parents,
+	clk = clk_register_mux(NULL, "gen2_3_par_clk", gen2_3_parents,
 			ARRAY_SIZE(gen2_3_parents), 0, CORE_CLK_CFG,
 			GEN_SYNTH2_3_CLK_SHIFT, GEN_SYNTH2_3_CLK_MASK, 0,
 			&_lock);
-	clk_register_clkdev(clk, "gen2_3_parent_clk", NULL);
+	clk_register_clkdev(clk, "gen2_3_par_clk", NULL);
 
-	clk = clk_register_aux("gen2_synth_clk", "gen2_synth_gate_clk",
-			"gen2_3_parent_clk", 0, GEN2_CLK_SYNT, NULL, aux_rtbl,
+	clk = clk_register_aux("gen2_syn_clk", "gen2_syn_gclk",
+			"gen2_3_par_clk", 0, GEN2_CLK_SYNT, NULL, aux_rtbl,
 			ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
-	clk_register_clkdev(clk, "gen2_synth_clk", NULL);
-	clk_register_clkdev(clk1, "gen2_synth_gate_clk", NULL);
+	clk_register_clkdev(clk, "gen2_syn_clk", NULL);
+	clk_register_clkdev(clk1, "gen2_syn_gclk", NULL);
 
-	clk = clk_register_aux("gen3_synth_clk", "gen3_synth_gate_clk",
-			"gen2_3_parent_clk", 0, GEN3_CLK_SYNT, NULL, aux_rtbl,
+	clk = clk_register_aux("gen3_syn_clk", "gen3_syn_gclk",
+			"gen2_3_par_clk", 0, GEN3_CLK_SYNT, NULL, aux_rtbl,
 			ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
-	clk_register_clkdev(clk, "gen3_synth_clk", NULL);
-	clk_register_clkdev(clk1, "gen3_synth_gate_clk", NULL);
+	clk_register_clkdev(clk, "gen3_syn_clk", NULL);
+	clk_register_clkdev(clk1, "gen3_syn_gclk", NULL);
 
 	/* clock derived from pll3 clk */
-	clk = clk_register_gate(NULL, "usbh_clk", "pll3_48m_clk", 0,
-			PERIP1_CLK_ENB, USBH_CLK_ENB, 0, &_lock);
+	clk = clk_register_gate(NULL, "usbh_clk", "pll3_clk", 0, PERIP1_CLK_ENB,
+			USBH_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, "usbh_clk", NULL);
 
 	clk = clk_register_fixed_factor(NULL, "usbh.0_clk", "usbh_clk", 0, 1,
···
 			1);
 	clk_register_clkdev(clk, "usbh.1_clk", NULL);
 
-	clk = clk_register_gate(NULL, "usbd_clk", "pll3_48m_clk", 0,
-			PERIP1_CLK_ENB, USBD_CLK_ENB, 0, &_lock);
+	clk = clk_register_gate(NULL, "usbd_clk", "pll3_clk", 0, PERIP1_CLK_ENB,
+			USBD_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, NULL, "designware_udc");
 
 	/* clock derived from ahb clk */
···
 			RAS_CLK_ENB, RAS_PLL2_CLK_ENB, 0, &_lock);
 	clk_register_clkdev(clk, "ras_pll2_clk", NULL);
 
-	clk = clk_register_gate(NULL, "ras_pll3_48m_clk", "pll3_48m_clk", 0,
+	clk = clk_register_gate(NULL, "ras_pll3_clk", "pll3_clk", 0,
 			RAS_CLK_ENB, RAS_48M_CLK_ENB, 0, &_lock);
-	clk_register_clkdev(clk, "ras_pll3_48m_clk", NULL);
+	clk_register_clkdev(clk, "ras_pll3_clk", NULL);
 
-	clk = clk_register_gate(NULL, "ras_gen0_synth_gate_clk",
-			"gen0_synth_gate_clk", 0, RAS_CLK_ENB,
-			RAS_SYNT0_CLK_ENB, 0, &_lock);
-	clk_register_clkdev(clk, "ras_gen0_synth_gate_clk", NULL);
+	clk = clk_register_gate(NULL, "ras_syn0_gclk", "gen0_syn_gclk", 0,
+			RAS_CLK_ENB, RAS_SYNT0_CLK_ENB, 0, &_lock);
+	clk_register_clkdev(clk, "ras_syn0_gclk", NULL);
 
-	clk = clk_register_gate(NULL, "ras_gen1_synth_gate_clk",
-			"gen1_synth_gate_clk", 0, RAS_CLK_ENB,
-			RAS_SYNT1_CLK_ENB, 0, &_lock);
-
clk_register_clkdev(clk, "ras_gen1_synth_gate_clk", NULL); 590 + clk = clk_register_gate(NULL, "ras_syn1_gclk", "gen1_syn_gclk", 0, 591 + RAS_CLK_ENB, RAS_SYNT1_CLK_ENB, 0, &_lock); 592 + clk_register_clkdev(clk, "ras_syn1_gclk", NULL); 591 593 592 - clk = clk_register_gate(NULL, "ras_gen2_synth_gate_clk", 593 - "gen2_synth_gate_clk", 0, RAS_CLK_ENB, 594 - RAS_SYNT2_CLK_ENB, 0, &_lock); 595 - clk_register_clkdev(clk, "ras_gen2_synth_gate_clk", NULL); 594 + clk = clk_register_gate(NULL, "ras_syn2_gclk", "gen2_syn_gclk", 0, 595 + RAS_CLK_ENB, RAS_SYNT2_CLK_ENB, 0, &_lock); 596 + clk_register_clkdev(clk, "ras_syn2_gclk", NULL); 596 597 597 - clk = clk_register_gate(NULL, "ras_gen3_synth_gate_clk", 598 - "gen3_synth_gate_clk", 0, RAS_CLK_ENB, 599 - RAS_SYNT3_CLK_ENB, 0, &_lock); 600 - clk_register_clkdev(clk, "ras_gen3_synth_gate_clk", NULL); 598 + clk = clk_register_gate(NULL, "ras_syn3_gclk", "gen3_syn_gclk", 0, 599 + RAS_CLK_ENB, RAS_SYNT3_CLK_ENB, 0, &_lock); 600 + clk_register_clkdev(clk, "ras_syn3_gclk", NULL); 601 601 602 602 if (of_machine_is_compatible("st,spear300")) 603 603 spear300_clk_init();
+60 -62
drivers/clk/spear/spear6xx_clock.c
··· 97 97 {.xscale = 1, .yscale = 2, .eq = 1}, /* 166 MHz */
98 98 };
99 99
100 - static const char *clcd_parents[] = { "pll3_48m_clk", "clcd_synth_gate_clk", };
101 - static const char *firda_parents[] = { "pll3_48m_clk", "firda_synth_gate_clk",
102 - };
103 - static const char *uart_parents[] = { "pll3_48m_clk", "uart_synth_gate_clk", };
104 - static const char *gpt0_1_parents[] = { "pll3_48m_clk", "gpt0_1_synth_clk", };
105 - static const char *gpt2_parents[] = { "pll3_48m_clk", "gpt2_synth_clk", };
106 - static const char *gpt3_parents[] = { "pll3_48m_clk", "gpt3_synth_clk", };
100 + static const char *clcd_parents[] = { "pll3_clk", "clcd_syn_gclk", };
101 + static const char *firda_parents[] = { "pll3_clk", "firda_syn_gclk", };
102 + static const char *uart_parents[] = { "pll3_clk", "uart_syn_gclk", };
103 + static const char *gpt0_1_parents[] = { "pll3_clk", "gpt0_1_syn_clk", };
104 + static const char *gpt2_parents[] = { "pll3_clk", "gpt2_syn_clk", };
105 + static const char *gpt3_parents[] = { "pll3_clk", "gpt3_syn_clk", };
107 106 static const char *ddr_parents[] = { "ahb_clk", "ahbmult2_clk", "none",
108 107 "pll2_clk", };
109 108
···
135 136 clk_register_clkdev(clk, NULL, "rtc-spear");
136 137
137 138 /* clock derived from 30 MHz osc clk */
138 - clk = clk_register_fixed_rate(NULL, "pll3_48m_clk", "osc_24m_clk", 0,
139 + clk = clk_register_fixed_rate(NULL, "pll3_clk", "osc_24m_clk", 0,
139 140 48000000);
140 - clk_register_clkdev(clk, "pll3_48m_clk", NULL);
141 + clk_register_clkdev(clk, "pll3_clk", NULL);
141 142
142 143 clk = clk_register_vco_pll("vco1_clk", "pll1_clk", NULL, "osc_30m_clk",
143 144 0, PLL1_CTR, PLL1_FRQ, pll_rtbl, ARRAY_SIZE(pll_rtbl),
···
145 146 clk_register_clkdev(clk, "vco1_clk", NULL);
146 147 clk_register_clkdev(clk1, "pll1_clk", NULL);
147 148
148 - clk = clk_register_vco_pll("vco2_clk", "pll2_clk", NULL,
149 - "osc_30m_clk", 0, PLL2_CTR, PLL2_FRQ, pll_rtbl,
150 - ARRAY_SIZE(pll_rtbl), &_lock, &clk1, NULL);
149 + clk = clk_register_vco_pll("vco2_clk", "pll2_clk", NULL, "osc_30m_clk",
150 + 0, PLL2_CTR, PLL2_FRQ, pll_rtbl, ARRAY_SIZE(pll_rtbl),
151 + &_lock, &clk1, NULL);
151 152 clk_register_clkdev(clk, "vco2_clk", NULL);
152 153 clk_register_clkdev(clk1, "pll2_clk", NULL);
153 154
···
164 165 HCLK_RATIO_MASK, 0, &_lock);
165 166 clk_register_clkdev(clk, "ahb_clk", NULL);
166 167
167 - clk = clk_register_aux("uart_synth_clk", "uart_synth_gate_clk",
168 - "pll1_clk", 0, UART_CLK_SYNT, NULL, aux_rtbl,
169 - ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
170 - clk_register_clkdev(clk, "uart_synth_clk", NULL);
171 - clk_register_clkdev(clk1, "uart_synth_gate_clk", NULL);
168 + clk = clk_register_aux("uart_syn_clk", "uart_syn_gclk", "pll1_clk", 0,
169 + UART_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
170 + &_lock, &clk1);
171 + clk_register_clkdev(clk, "uart_syn_clk", NULL);
172 + clk_register_clkdev(clk1, "uart_syn_gclk", NULL);
172 173
173 - clk = clk_register_mux(NULL, "uart_mux_clk", uart_parents,
174 + clk = clk_register_mux(NULL, "uart_mclk", uart_parents,
174 175 ARRAY_SIZE(uart_parents), 0, PERIP_CLK_CFG,
175 176 UART_CLK_SHIFT, UART_CLK_MASK, 0, &_lock);
176 - clk_register_clkdev(clk, "uart_mux_clk", NULL);
177 + clk_register_clkdev(clk, "uart_mclk", NULL);
177 178
178 - clk = clk_register_gate(NULL, "uart0", "uart_mux_clk", 0,
179 - PERIP1_CLK_ENB, UART0_CLK_ENB, 0, &_lock);
179 + clk = clk_register_gate(NULL, "uart0", "uart_mclk", 0, PERIP1_CLK_ENB,
180 + UART0_CLK_ENB, 0, &_lock);
180 181 clk_register_clkdev(clk, NULL, "d0000000.serial");
181 182
182 - clk = clk_register_gate(NULL, "uart1", "uart_mux_clk", 0,
183 - PERIP1_CLK_ENB, UART1_CLK_ENB, 0, &_lock);
183 + clk = clk_register_gate(NULL, "uart1", "uart_mclk", 0, PERIP1_CLK_ENB,
184 + UART1_CLK_ENB, 0, &_lock);
184 185 clk_register_clkdev(clk, NULL, "d0080000.serial");
185 186
186 - clk = clk_register_aux("firda_synth_clk", "firda_synth_gate_clk",
187 - "pll1_clk", 0, FIRDA_CLK_SYNT, NULL, aux_rtbl,
188 - ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
189 - clk_register_clkdev(clk, "firda_synth_clk", NULL);
190 - clk_register_clkdev(clk1, "firda_synth_gate_clk", NULL);
187 + clk = clk_register_aux("firda_syn_clk", "firda_syn_gclk", "pll1_clk",
188 + 0, FIRDA_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
189 + &_lock, &clk1);
190 + clk_register_clkdev(clk, "firda_syn_clk", NULL);
191 + clk_register_clkdev(clk1, "firda_syn_gclk", NULL);
191 192
192 - clk = clk_register_mux(NULL, "firda_mux_clk", firda_parents,
193 + clk = clk_register_mux(NULL, "firda_mclk", firda_parents,
193 194 ARRAY_SIZE(firda_parents), 0, PERIP_CLK_CFG,
194 195 FIRDA_CLK_SHIFT, FIRDA_CLK_MASK, 0, &_lock);
195 - clk_register_clkdev(clk, "firda_mux_clk", NULL);
196 + clk_register_clkdev(clk, "firda_mclk", NULL);
196 197
197 - clk = clk_register_gate(NULL, "firda_clk", "firda_mux_clk", 0,
198 + clk = clk_register_gate(NULL, "firda_clk", "firda_mclk", 0,
198 199 PERIP1_CLK_ENB, FIRDA_CLK_ENB, 0, &_lock);
199 200 clk_register_clkdev(clk, NULL, "firda");
200 201
201 - clk = clk_register_aux("clcd_synth_clk", "clcd_synth_gate_clk",
202 - "pll1_clk", 0, CLCD_CLK_SYNT, NULL, aux_rtbl,
203 - ARRAY_SIZE(aux_rtbl), &_lock, &clk1);
204 - clk_register_clkdev(clk, "clcd_synth_clk", NULL);
205 - clk_register_clkdev(clk1, "clcd_synth_gate_clk", NULL);
202 + clk = clk_register_aux("clcd_syn_clk", "clcd_syn_gclk", "pll1_clk",
203 + 0, CLCD_CLK_SYNT, NULL, aux_rtbl, ARRAY_SIZE(aux_rtbl),
204 + &_lock, &clk1);
205 + clk_register_clkdev(clk, "clcd_syn_clk", NULL);
206 + clk_register_clkdev(clk1, "clcd_syn_gclk", NULL);
206 207
207 - clk = clk_register_mux(NULL, "clcd_mux_clk", clcd_parents,
208 + clk = clk_register_mux(NULL, "clcd_mclk", clcd_parents,
208 209 ARRAY_SIZE(clcd_parents), 0, PERIP_CLK_CFG,
209 210 CLCD_CLK_SHIFT, CLCD_CLK_MASK, 0, &_lock);
210 - clk_register_clkdev(clk, "clcd_mux_clk", NULL);
211 + clk_register_clkdev(clk, "clcd_mclk", NULL);
211 212
212 - clk = clk_register_gate(NULL, "clcd_clk", "clcd_mux_clk", 0,
213 + clk = clk_register_gate(NULL, "clcd_clk", "clcd_mclk", 0,
213 214 PERIP1_CLK_ENB, CLCD_CLK_ENB, 0, &_lock);
214 215 clk_register_clkdev(clk, NULL, "clcd");
215 216
216 217 /* gpt clocks */
217 - clk = clk_register_gpt("gpt0_1_synth_clk", "pll1_clk", 0, PRSC0_CLK_CFG,
218 + clk = clk_register_gpt("gpt0_1_syn_clk", "pll1_clk", 0, PRSC0_CLK_CFG,
218 219 gpt_rtbl, ARRAY_SIZE(gpt_rtbl), &_lock);
219 - clk_register_clkdev(clk, "gpt0_1_synth_clk", NULL);
220 + clk_register_clkdev(clk, "gpt0_1_syn_clk", NULL);
220 221
221 - clk = clk_register_mux(NULL, "gpt0_mux_clk", gpt0_1_parents,
222 + clk = clk_register_mux(NULL, "gpt0_mclk", gpt0_1_parents,
222 223 ARRAY_SIZE(gpt0_1_parents), 0, PERIP_CLK_CFG,
223 224 GPT0_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
224 225 clk_register_clkdev(clk, NULL, "gpt0");
225 226
226 - clk = clk_register_mux(NULL, "gpt1_mux_clk", gpt0_1_parents,
227 + clk = clk_register_mux(NULL, "gpt1_mclk", gpt0_1_parents,
227 228 ARRAY_SIZE(gpt0_1_parents), 0, PERIP_CLK_CFG,
228 229 GPT1_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
229 - clk_register_clkdev(clk, "gpt1_mux_clk", NULL);
230 + clk_register_clkdev(clk, "gpt1_mclk", NULL);
230 231
231 - clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mux_clk", 0,
232 + clk = clk_register_gate(NULL, "gpt1_clk", "gpt1_mclk", 0,
232 233 PERIP1_CLK_ENB, GPT1_CLK_ENB, 0, &_lock);
233 234 clk_register_clkdev(clk, NULL, "gpt1");
234 235
235 - clk = clk_register_gpt("gpt2_synth_clk", "pll1_clk", 0, PRSC1_CLK_CFG,
236 + clk = clk_register_gpt("gpt2_syn_clk", "pll1_clk", 0, PRSC1_CLK_CFG,
236 237 gpt_rtbl, ARRAY_SIZE(gpt_rtbl), &_lock);
237 - clk_register_clkdev(clk, "gpt2_synth_clk", NULL);
238 + clk_register_clkdev(clk, "gpt2_syn_clk", NULL);
238 239
239 - clk = clk_register_mux(NULL, "gpt2_mux_clk", gpt2_parents,
240 + clk = clk_register_mux(NULL, "gpt2_mclk", gpt2_parents,
240 241 ARRAY_SIZE(gpt2_parents), 0, PERIP_CLK_CFG,
241 242 GPT2_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
242 - clk_register_clkdev(clk, "gpt2_mux_clk", NULL);
243 + clk_register_clkdev(clk, "gpt2_mclk", NULL);
243 244
244 - clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mux_clk", 0,
245 + clk = clk_register_gate(NULL, "gpt2_clk", "gpt2_mclk", 0,
245 246 PERIP1_CLK_ENB, GPT2_CLK_ENB, 0, &_lock);
246 247 clk_register_clkdev(clk, NULL, "gpt2");
247 248
248 - clk = clk_register_gpt("gpt3_synth_clk", "pll1_clk", 0, PRSC2_CLK_CFG,
249 + clk = clk_register_gpt("gpt3_syn_clk", "pll1_clk", 0, PRSC2_CLK_CFG,
249 250 gpt_rtbl, ARRAY_SIZE(gpt_rtbl), &_lock);
250 - clk_register_clkdev(clk, "gpt3_synth_clk", NULL);
251 + clk_register_clkdev(clk, "gpt3_syn_clk", NULL);
251 252
252 - clk = clk_register_mux(NULL, "gpt3_mux_clk", gpt3_parents,
253 + clk = clk_register_mux(NULL, "gpt3_mclk", gpt3_parents,
253 254 ARRAY_SIZE(gpt3_parents), 0, PERIP_CLK_CFG,
254 255 GPT3_CLK_SHIFT, GPT_CLK_MASK, 0, &_lock);
255 - clk_register_clkdev(clk, "gpt3_mux_clk", NULL);
256 + clk_register_clkdev(clk, "gpt3_mclk", NULL);
256 257
257 - clk = clk_register_gate(NULL, "gpt3_clk", "gpt3_mux_clk", 0,
258 + clk = clk_register_gate(NULL, "gpt3_clk", "gpt3_mclk", 0,
258 259 PERIP1_CLK_ENB, GPT3_CLK_ENB, 0, &_lock);
259 260 clk_register_clkdev(clk, NULL, "gpt3");
260 261
261 262 /* clock derived from pll3 clk */
262 - clk = clk_register_gate(NULL, "usbh0_clk", "pll3_48m_clk", 0,
263 + clk = clk_register_gate(NULL, "usbh0_clk", "pll3_clk", 0,
263 264 PERIP1_CLK_ENB, USBH0_CLK_ENB, 0, &_lock);
264 265 clk_register_clkdev(clk, NULL, "usbh.0_clk");
265 266
266 - clk = clk_register_gate(NULL, "usbh1_clk", "pll3_48m_clk", 0,
267 + clk = clk_register_gate(NULL, "usbh1_clk", "pll3_clk", 0,
267 268 PERIP1_CLK_ENB, USBH1_CLK_ENB, 0, &_lock);
268 269 clk_register_clkdev(clk, NULL, "usbh.1_clk");
269 270
270 - clk = clk_register_gate(NULL, "usbd_clk", "pll3_48m_clk", 0,
271 - PERIP1_CLK_ENB, USBD_CLK_ENB, 0, &_lock);
271 + clk = clk_register_gate(NULL, "usbd_clk", "pll3_clk", 0, PERIP1_CLK_ENB,
272 + USBD_CLK_ENB, 0, &_lock);
272 273 clk_register_clkdev(clk, NULL, "designware_udc");
273 274
274 275 /* clock derived from ahb clk */
···
277 278 clk_register_clkdev(clk, "ahbmult2_clk", NULL);
278 279
279 280 clk = clk_register_mux(NULL, "ddr_clk", ddr_parents,
280 - ARRAY_SIZE(ddr_parents),
281 - 0, PLL_CLK_CFG, MCTR_CLK_SHIFT, MCTR_CLK_MASK, 0,
282 - &_lock);
281 + ARRAY_SIZE(ddr_parents), 0, PLL_CLK_CFG, MCTR_CLK_SHIFT,
282 + MCTR_CLK_MASK, 0, &_lock);
283 283 clk_register_clkdev(clk, "ddr_clk", NULL);
284 284
285 285 clk = clk_register_divider(NULL, "apb_clk", "ahb_clk",
+1 -1
drivers/gpio/Kconfig
··· 136 136
137 137 config GPIO_MSM_V1
138 138 tristate "Qualcomm MSM GPIO v1"
139 - depends on GPIOLIB && ARCH_MSM
139 + depends on GPIOLIB && ARCH_MSM && (ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50)
140 140 help
141 141 Say yes here to support the GPIO interface on ARM v6 based
142 142 Qualcomm MSM chips. Most of the pins on the MSM can be
+1
drivers/gpio/devres.c
··· 98 98
99 99 return 0;
100 100 }
101 + EXPORT_SYMBOL(devm_gpio_request_one);
101 102
102 103 /**
103 104 * devm_gpio_free - free an interrupt
+6 -4
drivers/gpio/gpio-mxc.c
··· 398 398 writel(~0, port->base + GPIO_ISR);
399 399
400 400 if (mxc_gpio_hwtype == IMX21_GPIO) {
401 - /* setup one handler for all GPIO interrupts */
402 - if (pdev->id == 0)
403 - irq_set_chained_handler(port->irq,
404 - mx2_gpio_irq_handler);
401 + /*
402 + * Setup one handler for all GPIO interrupts. Actually setting
403 + * the handler is needed only once, but doing it for every port
404 + * is more robust and easier.
405 + */
406 + irq_set_chained_handler(port->irq, mx2_gpio_irq_handler);
405 407 } else {
406 408 /* setup one handler for each entry */
407 409 irq_set_chained_handler(port->irq, mx3_gpio_irq_handler);
+13 -1
drivers/gpio/gpio-omap.c
··· 174 174 if (bank->dbck_enable_mask && !bank->dbck_enabled) {
175 175 clk_enable(bank->dbck);
176 176 bank->dbck_enabled = true;
177 +
178 + __raw_writel(bank->dbck_enable_mask,
179 + bank->base + bank->regs->debounce_en);
177 180 }
178 181 }
179 182
180 183 static inline void _gpio_dbck_disable(struct gpio_bank *bank)
181 184 {
182 185 if (bank->dbck_enable_mask && bank->dbck_enabled) {
186 + /*
187 + * Disable debounce before cutting it's clock. If debounce is
188 + * enabled but the clock is not, GPIO module seems to be unable
189 + * to detect events and generate interrupts at least on OMAP3.
190 + */
191 + __raw_writel(0, bank->base + bank->regs->debounce_en);
192 +
183 193 clk_disable(bank->dbck);
184 194 bank->dbck_enabled = false;
185 195 }
···
1091 1081 bank->is_mpuio = pdata->is_mpuio;
1092 1082 bank->non_wakeup_gpios = pdata->non_wakeup_gpios;
1093 1083 bank->loses_context = pdata->loses_context;
1094 - bank->get_context_loss_count = pdata->get_context_loss_count;
1095 1084 bank->regs = pdata->regs;
1096 1085 #ifdef CONFIG_OF_GPIO
1097 1086 bank->chip.of_node = of_node_get(node);
···
1143 1134 omap_gpio_mod_init(bank);
1144 1135 omap_gpio_chip_init(bank);
1145 1136 omap_gpio_show_rev(bank);
1137 +
1138 + if (bank->loses_context)
1139 + bank->get_context_loss_count = pdata->get_context_loss_count;
1146 1140
1147 1141 pm_runtime_put(bank->dev);
1148 1142
+3 -2
drivers/gpio/gpio-sta2x11.c
··· 383 383 }
384 384 spin_lock_init(&chip->lock);
385 385 gsta_gpio_setup(chip);
386 - for (i = 0; i < GSTA_NR_GPIO; i++)
387 - gsta_set_config(chip, i, gpio_pdata->pinconfig[i]);
386 + if (gpio_pdata)
387 + for (i = 0; i < GSTA_NR_GPIO; i++)
388 + gsta_set_config(chip, i, gpio_pdata->pinconfig[i]);
388 389
389 390 /* 384 was used in previous code: be compatible for other drivers */
390 391 err = irq_alloc_descs(-1, 384, GSTA_NR_GPIO, NUMA_NO_NODE);
+3
drivers/gpio/gpio-tps65910.c
··· 149 149 tps65910_gpio->gpio_chip.set = tps65910_gpio_set;
150 150 tps65910_gpio->gpio_chip.get = tps65910_gpio_get;
151 151 tps65910_gpio->gpio_chip.dev = &pdev->dev;
152 + #ifdef CONFIG_OF_GPIO
153 + tps65910_gpio->gpio_chip.of_node = tps65910->dev->of_node;
154 + #endif
152 155 if (pdata && pdata->gpio_base)
153 156 tps65910_gpio->gpio_chip.base = pdata->gpio_base;
154 157 else
+4 -1
drivers/gpio/gpio-wm8994.c
··· 89 89 struct wm8994_gpio *wm8994_gpio = to_wm8994_gpio(chip);
90 90 struct wm8994 *wm8994 = wm8994_gpio->wm8994;
91 91
92 + if (value)
93 + value = WM8994_GPN_LVL;
94 +
92 95 return wm8994_set_bits(wm8994, WM8994_GPIO_1 + offset,
93 - WM8994_GPN_DIR, 0);
96 + WM8994_GPN_DIR | WM8994_GPN_LVL, value);
94 97 }
95 98
96 99 static void wm8994_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+19 -16
drivers/gpu/drm/gma500/cdv_device.c
··· 78 78 return REG_READ(BLC_PWM_CTL2) & PWM_LEGACY_MODE;
79 79 }
80 80
81 - static int cdv_get_brightness(struct backlight_device *bd)
82 - {
83 - struct drm_device *dev = bl_get_data(bd);
84 - u32 val = REG_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
85 -
86 - if (cdv_backlight_combination_mode(dev)) {
87 - u8 lbpc;
88 -
89 - val &= ~1;
90 - pci_read_config_byte(dev->pdev, 0xF4, &lbpc);
91 - val *= lbpc;
92 - }
93 - return val;
94 - }
95 -
96 81 static u32 cdv_get_max_backlight(struct drm_device *dev)
97 82 {
98 83 u32 max = REG_READ(BLC_PWM_CTL);
···
95 110 return max;
96 111 }
97 112
113 + static int cdv_get_brightness(struct backlight_device *bd)
114 + {
115 + struct drm_device *dev = bl_get_data(bd);
116 + u32 val = REG_READ(BLC_PWM_CTL) & BACKLIGHT_DUTY_CYCLE_MASK;
117 +
118 + if (cdv_backlight_combination_mode(dev)) {
119 + u8 lbpc;
120 +
121 + val &= ~1;
122 + pci_read_config_byte(dev->pdev, 0xF4, &lbpc);
123 + val *= lbpc;
124 + }
125 + return (val * 100)/cdv_get_max_backlight(dev);
126 +
127 + }
128 +
98 129 static int cdv_set_brightness(struct backlight_device *bd)
99 130 {
100 131 struct drm_device *dev = bl_get_data(bd);
···
120 119 /* Percentage 1-100% being valid */
121 120 if (level < 1)
122 121 level = 1;
122 +
123 + level *= cdv_get_max_backlight(dev);
124 + level /= 100;
123 125
124 126 if (cdv_backlight_combination_mode(dev)) {
125 127 u32 max = cdv_get_max_backlight(dev);
···
161 157
162 158 cdv_backlight_device->props.brightness =
163 159 cdv_get_brightness(cdv_backlight_device);
164 - cdv_backlight_device->props.max_brightness = cdv_get_max_backlight(dev);
165 160 backlight_update_status(cdv_backlight_device);
166 161 dev_priv->backlight_device = cdv_backlight_device;
167 162 return 0;
+3 -5
drivers/gpu/drm/gma500/opregion.c
··· 144 144
145 145 #define ASLE_CBLV_VALID (1<<31)
146 146
147 + static struct psb_intel_opregion *system_opregion;
148 +
147 149 static u32 asle_set_backlight(struct drm_device *dev, u32 bclp)
148 150 {
149 151 struct drm_psb_private *dev_priv = dev->dev_private;
···
207 205 struct drm_psb_private *dev_priv = dev->dev_private;
208 206 struct opregion_asle *asle = dev_priv->opregion.asle;
209 207
210 - if (asle) {
208 + if (asle && system_opregion ) {
211 209 /* Don't do this on Medfield or other non PC like devices, they
212 210 use the bit for something different altogether */
213 211 psb_enable_pipestat(dev_priv, 0, PIPE_LEGACY_BLC_EVENT_ENABLE);
···
223 221 #define ACPI_EV_LID (1<<1)
224 222 #define ACPI_EV_DOCK (1<<2)
225 223
226 - static struct psb_intel_opregion *system_opregion;
227 224
228 225 static int psb_intel_opregion_video_event(struct notifier_block *nb,
229 226 unsigned long val, void *data)
···
267 266 system_opregion = opregion;
268 267 register_acpi_notifier(&psb_intel_opregion_notifier);
269 268 }
270 -
271 - if (opregion->asle)
272 - psb_intel_opregion_enable_asle(dev);
273 269 }
274 270
275 271 void psb_intel_opregion_fini(struct drm_device *dev)
+5
drivers/gpu/drm/gma500/opregion.h
··· 27 27 extern void psb_intel_opregion_init(struct drm_device *dev);
28 28 extern void psb_intel_opregion_fini(struct drm_device *dev);
29 29 extern int psb_intel_opregion_setup(struct drm_device *dev);
30 + extern void psb_intel_opregion_enable_asle(struct drm_device *dev);
30 31
31 32 #else
32 33
···
46 45 extern inline int psb_intel_opregion_setup(struct drm_device *dev)
47 46 {
48 47 return 0;
48 + }
49 +
50 + extern inline void psb_intel_opregion_enable_asle(struct drm_device *dev)
51 + {
49 52 }
50 53 #endif
+4 -8
drivers/gpu/drm/gma500/psb_device.c
··· 144 144 psb_backlight_device->props.max_brightness = 100;
145 145 backlight_update_status(psb_backlight_device);
146 146 dev_priv->backlight_device = psb_backlight_device;
147 +
148 + /* This must occur after the backlight is properly initialised */
149 + psb_lid_timer_init(dev_priv);
150 +
147 151 return 0;
148 152 }
149 153
···
358 354 return 0;
359 355 }
360 356
361 - /* Not exactly an erratum more an irritation */
362 - static void psb_chip_errata(struct drm_device *dev)
363 - {
364 - struct drm_psb_private *dev_priv = dev->dev_private;
365 - psb_lid_timer_init(dev_priv);
366 - }
367 -
368 357 static void psb_chip_teardown(struct drm_device *dev)
369 358 {
370 359 struct drm_psb_private *dev_priv = dev->dev_private;
···
376 379 .sgx_offset = PSB_SGX_OFFSET,
377 380 .chip_setup = psb_chip_setup,
378 381 .chip_teardown = psb_chip_teardown,
379 - .errata = psb_chip_errata,
380 382
381 383 .crtc_helper = &psb_intel_helper_funcs,
382 384 .crtc_funcs = &psb_intel_crtc_funcs,
+1
drivers/gpu/drm/gma500/psb_drv.c
··· 374 374
375 375 if (ret)
376 376 return ret;
377 + psb_intel_opregion_enable_asle(dev);
377 378 #if 0
378 379 /*enable runtime pm at last*/
379 380 pm_runtime_enable(&dev->pdev->dev);
+1
drivers/hid/Kconfig
··· 386 386 - Unitec Panels
387 387 - XAT optical touch panels
388 388 - Xiroku optical touch panels
389 + - Zytronic touch panels
389 390
390 391 If unsure, say N.
391 392
+6
drivers/hid/hid-apple.c
··· 517 517 .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD },
518 518 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING5A_JIS),
519 519 .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
520 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI),
521 + .driver_data = APPLE_HAS_FN },
522 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ISO),
523 + .driver_data = APPLE_HAS_FN | APPLE_ISO_KEYBOARD },
524 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_JIS),
525 + .driver_data = APPLE_HAS_FN | APPLE_RDESC_JIS },
520 526 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI),
521 527 .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
522 528 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ISO),
+7
drivers/hid/hid-core.c
··· 1503 1503 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_ANSI) },
1504 1504 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_ISO) },
1505 1505 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_JIS) },
1506 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI) },
1507 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ISO) },
1508 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_JIS) },
1506 1509 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI) },
1507 1510 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ISO) },
1508 1511 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_JIS) },
···
1998 1995 { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MCT) },
1999 1996 { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_HYBRID) },
2000 1997 { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_HEATCONTROL) },
1998 + { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_BEATPAD) },
2001 1999 { HID_USB_DEVICE(USB_VENDOR_ID_MCC, USB_DEVICE_ID_MCC_PMD1024LS) },
2002 2000 { HID_USB_DEVICE(USB_VENDOR_ID_MCC, USB_DEVICE_ID_MCC_PMD1208LS) },
2003 2001 { HID_USB_DEVICE(USB_VENDOR_ID_MICROCHIP, USB_DEVICE_ID_PICKIT1) },
···
2093 2089 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_ANSI) },
2094 2090 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_ISO) },
2095 2091 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING6A_JIS) },
2092 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI) },
2093 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_ISO) },
2094 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING7_JIS) },
2096 2095 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY) },
2097 2096 { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
2098 2097 { }
+12
drivers/hid/hid-ids.h
··· 125 125 #define USB_DEVICE_ID_APPLE_WELLSPRING6_ANSI 0x024c
126 126 #define USB_DEVICE_ID_APPLE_WELLSPRING6_ISO 0x024d
127 127 #define USB_DEVICE_ID_APPLE_WELLSPRING6_JIS 0x024e
128 + #define USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI 0x0262
129 + #define USB_DEVICE_ID_APPLE_WELLSPRING7_ISO 0x0263
130 + #define USB_DEVICE_ID_APPLE_WELLSPRING7_JIS 0x0264
128 131 #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ANSI 0x0239
129 132 #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_ISO 0x023a
130 133 #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2009_JIS 0x023b
···
521 518 #define USB_DEVICE_ID_CRYSTALTOUCH 0x0006
522 519 #define USB_DEVICE_ID_CRYSTALTOUCH_DUAL 0x0007
523 520
521 + #define USB_VENDOR_ID_MADCATZ 0x0738
522 + #define USB_DEVICE_ID_MADCATZ_BEATPAD 0x4540
523 +
524 524 #define USB_VENDOR_ID_MCC 0x09db
525 525 #define USB_DEVICE_ID_MCC_PMD1024LS 0x0076
526 526 #define USB_DEVICE_ID_MCC_PMD1208LS 0x007a
···
658 652 #define USB_VENDOR_ID_SAMSUNG 0x0419
659 653 #define USB_DEVICE_ID_SAMSUNG_IR_REMOTE 0x0001
660 654 #define USB_DEVICE_ID_SAMSUNG_WIRELESS_KBD_MOUSE 0x0600
655 +
656 + #define USB_VENDOR_ID_SENNHEISER 0x1395
657 + #define USB_DEVICE_ID_SENNHEISER_BTD500USB 0x002c
661 658
662 659 #define USB_VENDOR_ID_SIGMA_MICRO 0x1c4f
663 660 #define USB_DEVICE_ID_SIGMA_MICRO_KEYBOARD 0x0002
···
810 801
811 802 #define USB_VENDOR_ID_ZYDACRON 0x13EC
812 803 #define USB_DEVICE_ID_ZYDACRON_REMOTE_CONTROL 0x0006
804 +
805 + #define USB_VENDOR_ID_ZYTRONIC 0x14c8
806 + #define USB_DEVICE_ID_ZYTRONIC_ZXY100 0x0005
813 807
814 808 #define USB_VENDOR_ID_PRIMAX 0x0461
815 809 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
+3
drivers/hid/hid-input.c
··· 301 301 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
302 302 USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ANSI),
303 303 HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
304 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
305 + USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
306 + HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
304 307 {}
305 308 };
306 309
+5
drivers/hid/hid-multitouch.c
··· 1048 1048 MT_USB_DEVICE(USB_VENDOR_ID_XIROKU,
1049 1049 USB_DEVICE_ID_XIROKU_CSR2) },
1050 1050
1051 + /* Zytronic panels */
1052 + { .driver_data = MT_CLS_SERIAL,
1053 + MT_USB_DEVICE(USB_VENDOR_ID_ZYTRONIC,
1054 + USB_DEVICE_ID_ZYTRONIC_ZXY100) },
1055 +
1051 1056 /* Generic MT device */
1052 1057 { HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH, HID_ANY_ID, HID_ANY_ID) },
1053 1058 { }
+1
drivers/hid/usbhid/hid-quirks.c
··· 76 76 { USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS, HID_QUIRK_NOGET },
77 77 { USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_PIXART_IMAGING_INC_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NOGET },
78 78 { USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008, HID_QUIRK_NOGET },
79 + { USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB, HID_QUIRK_NOGET },
79 80 { USB_VENDOR_ID_SUN, USB_DEVICE_ID_RARITAN_KVM_DONGLE, HID_QUIRK_NOGET },
80 81 { USB_VENDOR_ID_SYMBOL, USB_DEVICE_ID_SYMBOL_SCANNER_1, HID_QUIRK_NOGET },
81 82 { USB_VENDOR_ID_SYMBOL, USB_DEVICE_ID_SYMBOL_SCANNER_2, HID_QUIRK_NOGET },
+1 -1
drivers/hwmon/it87.c
··· 2341 2341
2342 2342 /* Start monitoring */
2343 2343 it87_write_value(data, IT87_REG_CONFIG,
2344 - (it87_read_value(data, IT87_REG_CONFIG) & 0x36)
2344 + (it87_read_value(data, IT87_REG_CONFIG) & 0x3e)
2345 2345 | (update_vbat ? 0x41 : 0x01));
2346 2346 }
2347 2347
+2 -2
drivers/hwspinlock/hwspinlock_core.c
··· 345 345 spin_lock_init(&hwlock->lock);
346 346 hwlock->bank = bank;
347 347
348 - ret = hwspin_lock_register_single(hwlock, i);
348 + ret = hwspin_lock_register_single(hwlock, base_id + i);
349 349 if (ret)
350 350 goto reg_failed;
351 351 }
···
354 354
355 355 reg_failed:
356 356 while (--i >= 0)
357 - hwspin_lock_unregister_single(i);
357 + hwspin_lock_unregister_single(base_id + i);
358 358 return ret;
359 359 }
360 360 EXPORT_SYMBOL_GPL(hwspin_lock_register);
+3 -2
drivers/input/joystick/as5011.c
··· 282 282
283 283 error = request_threaded_irq(as5011->button_irq,
284 284 NULL, as5011_button_interrupt,
285 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
285 + IRQF_TRIGGER_RISING |
286 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
286 287 "as5011_button", as5011);
287 288 if (error < 0) {
288 289 dev_err(&client->dev,
···
297 296
298 297 error = request_threaded_irq(as5011->axis_irq, NULL,
299 298 as5011_axis_interrupt,
300 - plat_data->axis_irqflags,
299 + plat_data->axis_irqflags | IRQF_ONESHOT,
301 300 "as5011_joystick", as5011);
302 301 if (error) {
303 302 dev_err(&client->dev,
+5 -1
drivers/input/joystick/xpad.c
··· 142 142 { 0x0c12, 0x880a, "Pelican Eclipse PL-2023", 0, XTYPE_XBOX }, 143 143 { 0x0c12, 0x8810, "Zeroplus Xbox Controller", 0, XTYPE_XBOX }, 144 144 { 0x0c12, 0x9902, "HAMA VibraX - *FAULTY HARDWARE*", 0, XTYPE_XBOX }, 145 + { 0x0d2f, 0x0002, "Andamiro Pump It Up pad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX }, 145 146 { 0x0e4c, 0x1097, "Radica Gamester Controller", 0, XTYPE_XBOX }, 146 147 { 0x0e4c, 0x2390, "Radica Games Jtech Controller", 0, XTYPE_XBOX }, 147 148 { 0x0e6f, 0x0003, "Logic3 Freebird wireless Controller", 0, XTYPE_XBOX }, ··· 165 164 { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, 166 165 { 0x0f0d, 0x0016, "Hori Real Arcade Pro.EX", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 167 166 { 0x0f0d, 0x000d, "Hori Fighting Stick EX2", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 167 + { 0x1689, 0xfd00, "Razer Onza Tournament Edition", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, 168 168 { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX }, 169 169 { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN } 170 170 }; ··· 240 238 XPAD_XBOX360_VENDOR(0x045e), /* Microsoft X-Box 360 controllers */ 241 239 XPAD_XBOX360_VENDOR(0x046d), /* Logitech X-Box 360 style controllers */ 242 240 XPAD_XBOX360_VENDOR(0x0738), /* Mad Catz X-Box 360 controllers */ 241 + { USB_DEVICE(0x0738, 0x4540) }, /* Mad Catz Beat Pad */ 243 242 XPAD_XBOX360_VENDOR(0x0e6f), /* 0x0e6f X-Box 360 controllers */ 244 243 XPAD_XBOX360_VENDOR(0x12ab), /* X-Box 360 dance pads */ 245 244 XPAD_XBOX360_VENDOR(0x1430), /* RedOctane X-Box 360 controllers */ 246 245 XPAD_XBOX360_VENDOR(0x146b), /* BigBen Interactive Controllers */ 247 246 XPAD_XBOX360_VENDOR(0x1bad), /* Harminix Rock Band Guitar and Drums */ 248 - XPAD_XBOX360_VENDOR(0x0f0d), /* Hori Controllers */ 247 + XPAD_XBOX360_VENDOR(0x0f0d), /* Hori Controllers */ 248 + XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */ 249 249 { } 250 250 }; 251 251
+2 -1
drivers/input/keyboard/mcs_touchkey.c
··· 178 178 } 179 179 180 180 error = request_threaded_irq(client->irq, NULL, mcs_touchkey_interrupt, 181 - IRQF_TRIGGER_FALLING, client->dev.driver->name, data); 181 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 182 + client->dev.driver->name, data); 182 183 if (error) { 183 184 dev_err(&client->dev, "Failed to register interrupt\n"); 184 185 goto err_free_mem;
+1 -1
drivers/input/keyboard/mpr121_touchkey.c
··· 248 248 249 249 error = request_threaded_irq(client->irq, NULL, 250 250 mpr_touchkey_interrupt, 251 - IRQF_TRIGGER_FALLING, 251 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 252 252 client->dev.driver->name, mpr121); 253 253 if (error) { 254 254 dev_err(&client->dev, "Failed to register interrupt\n");
+2 -1
drivers/input/keyboard/qt1070.c
··· 201 201 msleep(QT1070_RESET_TIME); 202 202 203 203 err = request_threaded_irq(client->irq, NULL, qt1070_interrupt, 204 - IRQF_TRIGGER_NONE, client->dev.driver->name, data); 204 + IRQF_TRIGGER_NONE | IRQF_ONESHOT, 205 + client->dev.driver->name, data); 205 206 if (err) { 206 207 dev_err(&client->dev, "fail to request irq\n"); 207 208 goto err_free_mem;
+2 -1
drivers/input/keyboard/tca6416-keypad.c
··· 278 278 279 279 error = request_threaded_irq(chip->irqnum, NULL, 280 280 tca6416_keys_isr, 281 - IRQF_TRIGGER_FALLING, 281 + IRQF_TRIGGER_FALLING | 282 + IRQF_ONESHOT, 282 283 "tca6416-keypad", chip); 283 284 if (error) { 284 285 dev_dbg(&client->dev,
+1 -1
drivers/input/keyboard/tca8418_keypad.c
··· 360 360 client->irq = gpio_to_irq(client->irq); 361 361 362 362 error = request_threaded_irq(client->irq, NULL, tca8418_irq_handler, 363 - IRQF_TRIGGER_FALLING, 363 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 364 364 client->name, keypad_data); 365 365 if (error) { 366 366 dev_dbg(&client->dev,
+4 -4
drivers/input/keyboard/tnetv107x-keypad.c
··· 227 227 goto error_clk; 228 228 } 229 229 230 - error = request_threaded_irq(kp->irq_press, NULL, keypad_irq, 0, 231 - dev_name(dev), kp); 230 + error = request_threaded_irq(kp->irq_press, NULL, keypad_irq, 231 + IRQF_ONESHOT, dev_name(dev), kp); 232 232 if (error < 0) { 233 233 dev_err(kp->dev, "Could not allocate keypad press key irq\n"); 234 234 goto error_irq_press; 235 235 } 236 236 237 - error = request_threaded_irq(kp->irq_release, NULL, keypad_irq, 0, 238 - dev_name(dev), kp); 237 + error = request_threaded_irq(kp->irq_release, NULL, keypad_irq, 238 + IRQF_ONESHOT, dev_name(dev), kp); 239 239 if (error < 0) { 240 240 dev_err(kp->dev, "Could not allocate keypad release key irq\n"); 241 241 goto error_irq_release;
+5 -3
drivers/input/misc/ad714x.c
··· 972 972 struct ad714x_platform_data *plat_data = dev->platform_data; 973 973 struct ad714x_chip *ad714x; 974 974 void *drv_mem; 975 + unsigned long irqflags; 975 976 976 977 struct ad714x_button_drv *bt_drv; 977 978 struct ad714x_slider_drv *sd_drv; ··· 1163 1162 alloc_idx++; 1164 1163 } 1165 1164 1165 + irqflags = plat_data->irqflags ?: IRQF_TRIGGER_FALLING; 1166 + irqflags |= IRQF_ONESHOT; 1167 + 1166 1168 error = request_threaded_irq(ad714x->irq, NULL, ad714x_interrupt_thread, 1167 - plat_data->irqflags ? 1168 - plat_data->irqflags : IRQF_TRIGGER_FALLING, 1169 - "ad714x_captouch", ad714x); 1169 + irqflags, "ad714x_captouch", ad714x); 1170 1170 if (error) { 1171 1171 dev_err(dev, "can't allocate irq %d\n", ad714x->irq); 1172 1172 goto err_unreg_dev;
+2 -1
drivers/input/misc/dm355evm_keys.c
··· 213 213 /* REVISIT: flush the event queue? */ 214 214 215 215 status = request_threaded_irq(keys->irq, NULL, dm355evm_keys_irq, 216 - IRQF_TRIGGER_FALLING, dev_name(&pdev->dev), keys); 216 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 217 + dev_name(&pdev->dev), keys); 217 218 if (status < 0) 218 219 goto fail2; 219 220
+20
drivers/input/mouse/bcm5974.c
··· 79 79 #define USB_DEVICE_ID_APPLE_WELLSPRING5A_ANSI 0x0252 80 80 #define USB_DEVICE_ID_APPLE_WELLSPRING5A_ISO 0x0253 81 81 #define USB_DEVICE_ID_APPLE_WELLSPRING5A_JIS 0x0254 82 + /* MacbookPro10,1 (unibody, June 2012) */ 83 + #define USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI 0x0262 84 + #define USB_DEVICE_ID_APPLE_WELLSPRING7_ISO 0x0263 85 + #define USB_DEVICE_ID_APPLE_WELLSPRING7_JIS 0x0264 82 86 83 87 #define BCM5974_DEVICE(prod) { \ 84 88 .match_flags = (USB_DEVICE_ID_MATCH_DEVICE | \ ··· 132 128 BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING5A_ANSI), 133 129 BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING5A_ISO), 134 130 BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING5A_JIS), 131 + /* MacbookPro10,1 */ 132 + BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI), 133 + BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING7_ISO), 134 + BCM5974_DEVICE(USB_DEVICE_ID_APPLE_WELLSPRING7_JIS), 135 135 /* Terminating entry */ 136 136 {} 137 137 }; ··· 361 353 { DIM_WIDTH, DIM_WIDTH / SN_WIDTH, 0, 2048 }, 362 354 { DIM_X, DIM_X / SN_COORD, -4620, 5140 }, 363 355 { DIM_Y, DIM_Y / SN_COORD, -150, 6600 } 356 + }, 357 + { 358 + USB_DEVICE_ID_APPLE_WELLSPRING7_ANSI, 359 + USB_DEVICE_ID_APPLE_WELLSPRING7_ISO, 360 + USB_DEVICE_ID_APPLE_WELLSPRING7_JIS, 361 + HAS_INTEGRATED_BUTTON, 362 + 0x84, sizeof(struct bt_data), 363 + 0x81, TYPE2, FINGER_TYPE2, FINGER_TYPE2 + SIZEOF_ALL_FINGERS, 364 + { DIM_PRESSURE, DIM_PRESSURE / SN_PRESSURE, 0, 300 }, 365 + { DIM_WIDTH, DIM_WIDTH / SN_WIDTH, 0, 2048 }, 366 + { DIM_X, DIM_X / SN_COORD, -4750, 5280 }, 367 + { DIM_Y, DIM_Y / SN_COORD, -150, 6730 } 364 368 }, 365 369 {} 366 370 };
+4 -2
drivers/input/tablet/wacom_sys.c
··· 216 216 217 217 rep_data[0] = 12; 218 218 result = wacom_get_report(intf, WAC_HID_FEATURE_REPORT, 219 - rep_data[0], &rep_data, 2, 219 + rep_data[0], rep_data, 2, 220 220 WAC_MSG_RETRIES); 221 221 222 222 if (result >= 0 && rep_data[1] > 2) ··· 401 401 break; 402 402 403 403 case HID_USAGE_CONTACTMAX: 404 - wacom_retrieve_report_data(intf, features); 404 + /* leave touch_max as is if predefined */ 405 + if (!features->touch_max) 406 + wacom_retrieve_report_data(intf, features); 405 407 i++; 406 408 break; 407 409 }
+1 -1
drivers/input/touchscreen/ad7879.c
··· 597 597 AD7879_TMR(ts->pen_down_acc_interval); 598 598 599 599 err = request_threaded_irq(ts->irq, NULL, ad7879_irq, 600 - IRQF_TRIGGER_FALLING, 600 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 601 601 dev_name(dev), ts); 602 602 if (err) { 603 603 dev_err(dev, "irq %d busy?\n", ts->irq);
+2 -1
drivers/input/touchscreen/atmel_mxt_ts.c
··· 1149 1149 goto err_free_object; 1150 1150 1151 1151 error = request_threaded_irq(client->irq, NULL, mxt_interrupt, 1152 - pdata->irqflags, client->dev.driver->name, data); 1152 + pdata->irqflags | IRQF_ONESHOT, 1153 + client->dev.driver->name, data); 1153 1154 if (error) { 1154 1155 dev_err(&client->dev, "Failed to register interrupt\n"); 1155 1156 goto err_free_object;
+2 -1
drivers/input/touchscreen/bu21013_ts.c
··· 509 509 input_set_drvdata(in_dev, bu21013_data); 510 510 511 511 error = request_threaded_irq(pdata->irq, NULL, bu21013_gpio_irq, 512 - IRQF_TRIGGER_FALLING | IRQF_SHARED, 512 + IRQF_TRIGGER_FALLING | IRQF_SHARED | 513 + IRQF_ONESHOT, 513 514 DRIVER_TP, bu21013_data); 514 515 if (error) { 515 516 dev_err(&client->dev, "request irq %d failed\n", pdata->irq);
+2 -1
drivers/input/touchscreen/cy8ctmg110_ts.c
··· 251 251 } 252 252 253 253 err = request_threaded_irq(client->irq, NULL, cy8ctmg110_irq_thread, 254 - IRQF_TRIGGER_RISING, "touch_reset_key", ts); 254 + IRQF_TRIGGER_RISING | IRQF_ONESHOT, 255 + "touch_reset_key", ts); 255 256 if (err < 0) { 256 257 dev_err(&client->dev, 257 258 "irq %d busy? error %d\n", client->irq, err);
+1 -1
drivers/input/touchscreen/intel-mid-touch.c
··· 620 620 MRST_PRESSURE_MIN, MRST_PRESSURE_MAX, 0, 0); 621 621 622 622 err = request_threaded_irq(tsdev->irq, NULL, mrstouch_pendet_irq, 623 - 0, "mrstouch", tsdev); 623 + IRQF_ONESHOT, "mrstouch", tsdev); 624 624 if (err) { 625 625 dev_err(tsdev->dev, "unable to allocate irq\n"); 626 626 goto err_free_mem;
+1 -1
drivers/input/touchscreen/pixcir_i2c_ts.c
··· 165 165 input_set_drvdata(input, tsdata); 166 166 167 167 error = request_threaded_irq(client->irq, NULL, pixcir_ts_isr, 168 - IRQF_TRIGGER_FALLING, 168 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 169 169 client->name, tsdata); 170 170 if (error) { 171 171 dev_err(&client->dev, "Unable to request touchscreen IRQ.\n");
+1 -1
drivers/input/touchscreen/tnetv107x-ts.c
··· 297 297 goto error_clk; 298 298 } 299 299 300 - error = request_threaded_irq(ts->tsc_irq, NULL, tsc_irq, 0, 300 + error = request_threaded_irq(ts->tsc_irq, NULL, tsc_irq, IRQF_ONESHOT, 301 301 dev_name(dev), ts); 302 302 if (error < 0) { 303 303 dev_err(ts->dev, "Could not allocate ts irq\n");
+2 -1
drivers/input/touchscreen/tsc2005.c
··· 650 650 tsc2005_stop_scan(ts); 651 651 652 652 error = request_threaded_irq(spi->irq, NULL, tsc2005_irq_thread, 653 - IRQF_TRIGGER_RISING, "tsc2005", ts); 653 + IRQF_TRIGGER_RISING | IRQF_ONESHOT, 654 + "tsc2005", ts); 654 655 if (error) { 655 656 dev_err(&spi->dev, "Failed to request irq, err: %d\n", error); 656 657 goto err_free_mem;
+10 -1
drivers/iommu/amd_iommu.c
··· 83 83 static ATOMIC_NOTIFIER_HEAD(ppr_notifier); 84 84 int amd_iommu_max_glx_val = -1; 85 85 86 + static struct dma_map_ops amd_iommu_dma_ops; 87 + 86 88 /* 87 89 * general struct to manage commands send to an IOMMU 88 90 */ ··· 404 402 return; 405 403 406 404 de_fflush = debugfs_create_bool("fullflush", 0444, stats_dir, 407 - (u32 *)&amd_iommu_unmap_flush); 405 + &amd_iommu_unmap_flush); 408 406 409 407 amd_iommu_stats_add(&compl_wait); 410 408 amd_iommu_stats_add(&cnt_map_single); ··· 2268 2266 spin_lock_irqsave(&iommu_pd_list_lock, flags); 2269 2267 list_add_tail(&dma_domain->list, &iommu_pd_list); 2270 2268 spin_unlock_irqrestore(&iommu_pd_list_lock, flags); 2269 + 2270 + dev_data = get_dev_data(dev); 2271 + 2272 + if (!dev_data->passthrough) 2273 + dev->archdata.dma_ops = &amd_iommu_dma_ops; 2274 + else 2275 + dev->archdata.dma_ops = &nommu_dma_ops; 2271 2276 2272 2277 break; 2273 2278 case BUS_NOTIFY_DEL_DEVICE:
+3 -3
drivers/iommu/amd_iommu_init.c
··· 129 129 to handle */ 130 130 LIST_HEAD(amd_iommu_unity_map); /* a list of required unity mappings 131 131 we find in ACPI */ 132 - bool amd_iommu_unmap_flush; /* if true, flush on every unmap */ 132 + u32 amd_iommu_unmap_flush; /* if true, flush on every unmap */ 133 133 134 134 LIST_HEAD(amd_iommu_list); /* list of all AMD IOMMUs in the 135 135 system */ ··· 1641 1641 1642 1642 amd_iommu_init_api(); 1643 1643 1644 + x86_platform.iommu_shutdown = disable_iommus; 1645 + 1644 1646 if (iommu_pass_through) 1645 1647 goto out; 1646 1648 ··· 1650 1648 printk(KERN_INFO "AMD-Vi: IO/TLB flush on unmap enabled\n"); 1651 1649 else 1652 1650 printk(KERN_INFO "AMD-Vi: Lazy IO/TLB flushing enabled\n"); 1653 - 1654 - x86_platform.iommu_shutdown = disable_iommus; 1655 1651 1656 1652 out: 1657 1653 return ret;
+1 -1
drivers/iommu/amd_iommu_types.h
··· 652 652 * If true, the addresses will be flushed on unmap time, not when 653 653 * they are reused 654 654 */ 655 - extern bool amd_iommu_unmap_flush; 655 + extern u32 amd_iommu_unmap_flush; 656 656 657 657 /* Smallest number of PASIDs supported by any IOMMU in the system */ 658 658 extern u32 amd_iommu_max_pasids;
+2 -2
drivers/iommu/tegra-smmu.c
··· 550 550 return 0; 551 551 552 552 as->pte_count = devm_kzalloc(smmu->dev, 553 - sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_KERNEL); 553 + sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_ATOMIC); 554 554 if (!as->pte_count) { 555 555 dev_err(smmu->dev, 556 556 "failed to allocate smmu_device PTE cunters\n"); 557 557 return -ENOMEM; 558 558 } 559 - as->pdir_page = alloc_page(GFP_KERNEL | __GFP_DMA); 559 + as->pdir_page = alloc_page(GFP_ATOMIC | __GFP_DMA); 560 560 if (!as->pdir_page) { 561 561 dev_err(smmu->dev, 562 562 "failed to allocate smmu_device page directory\n");
+15 -1
drivers/leds/ledtrig-heartbeat.c
··· 21 21 #include <linux/reboot.h> 22 22 #include "leds.h" 23 23 24 + static int panic_heartbeats; 25 + 24 26 struct heartbeat_trig_data { 25 27 unsigned int phase; 26 28 unsigned int period; ··· 35 33 struct heartbeat_trig_data *heartbeat_data = led_cdev->trigger_data; 36 34 unsigned long brightness = LED_OFF; 37 35 unsigned long delay = 0; 36 + 37 + if (unlikely(panic_heartbeats)) { 38 + led_set_brightness(led_cdev, LED_OFF); 39 + return; 40 + } 38 41 39 42 /* acts like an actual heart beat -- ie thump-thump-pause... */ 40 43 switch (heartbeat_data->phase) { ··· 118 111 return NOTIFY_DONE; 119 112 } 120 113 114 + static int heartbeat_panic_notifier(struct notifier_block *nb, 115 + unsigned long code, void *unused) 116 + { 117 + panic_heartbeats = 1; 118 + return NOTIFY_DONE; 119 + } 120 + 121 121 static struct notifier_block heartbeat_reboot_nb = { 122 122 .notifier_call = heartbeat_reboot_notifier, 123 123 }; 124 124 125 125 static struct notifier_block heartbeat_panic_nb = { 126 - .notifier_call = heartbeat_reboot_notifier, 126 + .notifier_call = heartbeat_panic_notifier, 127 127 }; 128 128 129 129 static int __init heartbeat_trig_init(void)
+24 -13
drivers/md/md.c
··· 2931 2931 * can be sane */ 2932 2932 return -EBUSY; 2933 2933 rdev->data_offset = offset; 2934 + rdev->new_data_offset = offset; 2934 2935 return len; 2935 2936 } 2936 2937 ··· 3927 3926 return sprintf(page, "%s\n", array_states[st]); 3928 3927 } 3929 3928 3930 - static int do_md_stop(struct mddev * mddev, int ro, int is_open); 3931 - static int md_set_readonly(struct mddev * mddev, int is_open); 3929 + static int do_md_stop(struct mddev * mddev, int ro, struct block_device *bdev); 3930 + static int md_set_readonly(struct mddev * mddev, struct block_device *bdev); 3932 3931 static int do_md_run(struct mddev * mddev); 3933 3932 static int restart_array(struct mddev *mddev); 3934 3933 ··· 3944 3943 /* stopping an active array */ 3945 3944 if (atomic_read(&mddev->openers) > 0) 3946 3945 return -EBUSY; 3947 - err = do_md_stop(mddev, 0, 0); 3946 + err = do_md_stop(mddev, 0, NULL); 3948 3947 break; 3949 3948 case inactive: 3950 3949 /* stopping an active array */ 3951 3950 if (mddev->pers) { 3952 3951 if (atomic_read(&mddev->openers) > 0) 3953 3952 return -EBUSY; 3954 - err = do_md_stop(mddev, 2, 0); 3953 + err = do_md_stop(mddev, 2, NULL); 3955 3954 } else 3956 3955 err = 0; /* already inactive */ 3957 3956 break; ··· 3959 3958 break; /* not supported yet */ 3960 3959 case readonly: 3961 3960 if (mddev->pers) 3962 - err = md_set_readonly(mddev, 0); 3961 + err = md_set_readonly(mddev, NULL); 3963 3962 else { 3964 3963 mddev->ro = 1; 3965 3964 set_disk_ro(mddev->gendisk, 1); ··· 3969 3968 case read_auto: 3970 3969 if (mddev->pers) { 3971 3970 if (mddev->ro == 0) 3972 - err = md_set_readonly(mddev, 0); 3971 + err = md_set_readonly(mddev, NULL); 3973 3972 else if (mddev->ro == 1) 3974 3973 err = restart_array(mddev); 3975 3974 if (err == 0) { ··· 5352 5351 } 5353 5352 EXPORT_SYMBOL_GPL(md_stop); 5354 5353 5355 - static int md_set_readonly(struct mddev *mddev, int is_open) 5354 + static int md_set_readonly(struct mddev *mddev, struct block_device *bdev) 5356 5355 { 5357 5356 int err = 0; 5358 5357 mutex_lock(&mddev->open_mutex); 5359 - if (atomic_read(&mddev->openers) > is_open) { 5358 + if (atomic_read(&mddev->openers) > !!bdev) { 5360 5359 printk("md: %s still in use.\n",mdname(mddev)); 5361 5360 err = -EBUSY; 5362 5361 goto out; 5363 5362 } 5363 + if (bdev) 5364 + sync_blockdev(bdev); 5364 5365 if (mddev->pers) { 5365 5366 __md_stop_writes(mddev); 5366 5367 ··· 5384 5381 * 0 - completely stop and dis-assemble array 5385 5382 * 2 - stop but do not disassemble array 5386 5383 */ 5387 - static int do_md_stop(struct mddev * mddev, int mode, int is_open) 5384 + static int do_md_stop(struct mddev * mddev, int mode, 5385 + struct block_device *bdev) 5388 5386 { 5389 5387 struct gendisk *disk = mddev->gendisk; 5390 5388 struct md_rdev *rdev; 5391 5389 5392 5390 mutex_lock(&mddev->open_mutex); 5393 - if (atomic_read(&mddev->openers) > is_open || 5391 + if (atomic_read(&mddev->openers) > !!bdev || 5394 5392 mddev->sysfs_active) { 5395 5393 printk("md: %s still in use.\n",mdname(mddev)); 5396 5394 mutex_unlock(&mddev->open_mutex); 5397 5395 return -EBUSY; 5398 5396 } 5397 + if (bdev) 5398 + /* It is possible IO was issued on some other 5399 + * open file which was closed before we took ->open_mutex. 5400 + * As that was not the last close __blkdev_put will not 5401 + * have called sync_blockdev, so we must. 5402 + */ 5403 + sync_blockdev(bdev); 5399 5404 5400 5405 if (mddev->pers) { 5401 5406 if (mddev->ro) ··· 5477 5466 err = do_md_run(mddev); 5478 5467 if (err) { 5479 5468 printk(KERN_WARNING "md: do_md_run() returned %d\n", err); 5480 - do_md_stop(mddev, 0, 0); 5469 + do_md_stop(mddev, 0, NULL); 5481 5470 } 5482 5471 } 5483 5472 ··· 6492 6481 goto done_unlock; 6493 6482 6494 6483 case STOP_ARRAY: 6495 - err = do_md_stop(mddev, 0, 1); 6484 + err = do_md_stop(mddev, 0, bdev); 6496 6485 goto done_unlock; 6497 6486 6498 6487 case STOP_ARRAY_RO: 6499 - err = md_set_readonly(mddev, 1); 6488 + err = md_set_readonly(mddev, bdev); 6500 6489 goto done_unlock; 6501 6490 6502 6491 case BLKROSET:
+10 -3
drivers/md/raid1.c
··· 1818 1818 1819 1819 if (atomic_dec_and_test(&r1_bio->remaining)) { 1820 1820 /* if we're here, all write(s) have completed, so clean up */ 1821 - md_done_sync(mddev, r1_bio->sectors, 1); 1822 - put_buf(r1_bio); 1821 + int s = r1_bio->sectors; 1822 + if (test_bit(R1BIO_MadeGood, &r1_bio->state) || 1823 + test_bit(R1BIO_WriteError, &r1_bio->state)) 1824 + reschedule_retry(r1_bio); 1825 + else { 1826 + put_buf(r1_bio); 1827 + md_done_sync(mddev, s, 1); 1828 + } 1823 1829 } 1824 1830 } 1825 1831 ··· 2491 2485 */ 2492 2486 if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) { 2493 2487 atomic_set(&r1_bio->remaining, read_targets); 2494 - for (i = 0; i < conf->raid_disks * 2; i++) { 2488 + for (i = 0; i < conf->raid_disks * 2 && read_targets; i++) { 2495 2489 bio = r1_bio->bios[i]; 2496 2490 if (bio->bi_end_io == end_sync_read) { 2491 + read_targets--; 2497 2492 md_sync_acct(bio->bi_bdev, nr_sectors); 2498 2493 generic_make_request(bio); 2499 2494 }
+1
drivers/media/dvb/dvb-core/dvbdev.c
··· 243 243 if (minor == MAX_DVB_MINORS) { 244 244 kfree(dvbdevfops); 245 245 kfree(dvbdev); 246 + up_write(&minor_rwsem); 246 247 mutex_unlock(&dvbdev_register_lock); 247 248 return -EINVAL; 248 249 }
+3 -1
drivers/media/rc/winbond-cir.c
··· 232 232 233 233 static bool txandrx; /* default = 0 */ 234 234 module_param(txandrx, bool, 0444); 235 - MODULE_PARM_DESC(invert, "Allow simultaneous TX and RX"); 235 + MODULE_PARM_DESC(txandrx, "Allow simultaneous TX and RX"); 236 236 237 237 static unsigned int wake_sc = 0x800F040C; 238 238 module_param(wake_sc, uint, 0644); ··· 1032 1032 data->dev->tx_ir = wbcir_tx; 1033 1033 data->dev->priv = data; 1034 1034 data->dev->dev.parent = &device->dev; 1035 + data->dev->timeout = MS_TO_NS(100); 1036 + data->dev->allowed_protos = RC_TYPE_ALL; 1035 1037 1036 1038 if (!request_region(data->wbase, WAKEUP_IOMEM_LEN, DRVNAME)) { 1037 1039 dev_err(dev, "Region 0x%lx-0x%lx already in use!\n",
+2 -2
drivers/media/video/cx231xx/cx231xx-audio.c
··· 307 307 urb->context = dev; 308 308 urb->pipe = usb_rcvisocpipe(dev->udev, 309 309 dev->adev.end_point_addr); 310 - urb->transfer_flags = URB_ISO_ASAP | URB_NO_TRANSFER_DMA_MAP; 310 + urb->transfer_flags = URB_ISO_ASAP; 311 311 urb->transfer_buffer = dev->adev.transfer_buffer[i]; 312 312 urb->interval = 1; 313 313 urb->complete = cx231xx_audio_isocirq; ··· 368 368 urb->context = dev; 369 369 urb->pipe = usb_rcvbulkpipe(dev->udev, 370 370 dev->adev.end_point_addr); 371 - urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP; 371 + urb->transfer_flags = 0; 372 372 urb->transfer_buffer = dev->adev.transfer_buffer[i]; 373 373 urb->complete = cx231xx_audio_bulkirq; 374 374 urb->transfer_buffer_length = sb_size;
+1 -1
drivers/media/video/cx231xx/cx231xx-vbi.c
··· 448 448 return -ENOMEM; 449 449 } 450 450 dev->vbi_mode.bulk_ctl.urb[i] = urb; 451 - urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP; 451 + urb->transfer_flags = 0; 452 452 453 453 dev->vbi_mode.bulk_ctl.transfer_buffer[i] = 454 454 kzalloc(sb_size, GFP_KERNEL);
+79 -10
drivers/media/video/cx23885/cx23885-cards.c
··· 127 127 }, 128 128 [CX23885_BOARD_HAUPPAUGE_HVR1250] = { 129 129 .name = "Hauppauge WinTV-HVR1250", 130 + .porta = CX23885_ANALOG_VIDEO, 130 131 .portc = CX23885_MPEG_DVB, 132 + #ifdef MT2131_NO_ANALOG_SUPPORT_YET 133 + .tuner_type = TUNER_PHILIPS_TDA8290, 134 + .tuner_addr = 0x42, /* 0x84 >> 1 */ 135 + .tuner_bus = 1, 136 + #endif 137 + .force_bff = 1, 131 138 .input = {{ 139 + #ifdef MT2131_NO_ANALOG_SUPPORT_YET 132 140 .type = CX23885_VMUX_TELEVISION, 133 - .vmux = 0, 141 + .vmux = CX25840_VIN7_CH3 | 142 + CX25840_VIN5_CH2 | 143 + CX25840_VIN2_CH1, 144 + .amux = CX25840_AUDIO8, 134 145 .gpio0 = 0xff00, 135 146 }, { 136 - .type = CX23885_VMUX_DEBUG, 137 - .vmux = 0, 138 - .gpio0 = 0xff01, 139 - }, { 147 + #endif 140 148 .type = CX23885_VMUX_COMPOSITE1, 141 - .vmux = 1, 149 + .vmux = CX25840_VIN7_CH3 | 150 + CX25840_VIN4_CH2 | 151 + CX25840_VIN6_CH1, 152 + .amux = CX25840_AUDIO7, 142 153 .gpio0 = 0xff02, 143 154 }, { 144 155 .type = CX23885_VMUX_SVIDEO, 145 - .vmux = 2, 156 + .vmux = CX25840_VIN7_CH3 | 157 + CX25840_VIN4_CH2 | 158 + CX25840_VIN8_CH1 | 159 + CX25840_SVIDEO_ON, 160 + .amux = CX25840_AUDIO7, 146 161 .gpio0 = 0xff02, 147 162 } }, 148 163 }, ··· 282 267 }, 283 268 [CX23885_BOARD_HAUPPAUGE_HVR1255] = { 284 269 .name = "Hauppauge WinTV-HVR1255", 270 + .porta = CX23885_ANALOG_VIDEO, 285 271 .portc = CX23885_MPEG_DVB, 272 + .tuner_type = TUNER_ABSENT, 273 + .tuner_addr = 0x42, /* 0x84 >> 1 */ 274 + .force_bff = 1, 275 + .input = {{ 276 + .type = CX23885_VMUX_TELEVISION, 277 + .vmux = CX25840_VIN7_CH3 | 278 + CX25840_VIN5_CH2 | 279 + CX25840_VIN2_CH1 | 280 + CX25840_DIF_ON, 281 + .amux = CX25840_AUDIO8, 282 + }, { 283 + .type = CX23885_VMUX_COMPOSITE1, 284 + .vmux = CX25840_VIN7_CH3 | 285 + CX25840_VIN4_CH2 | 286 + CX25840_VIN6_CH1, 287 + .amux = CX25840_AUDIO7, 288 + }, { 289 + .type = CX23885_VMUX_SVIDEO, 290 + .vmux = CX25840_VIN7_CH3 | 291 + CX25840_VIN4_CH2 | 292 + CX25840_VIN8_CH1 | 293 + CX25840_SVIDEO_ON, 294 + .amux = CX25840_AUDIO7, 295 + } }, 296 + }, 297 + [CX23885_BOARD_HAUPPAUGE_HVR1255_22111] = { 298 + .name = "Hauppauge WinTV-HVR1255", 299 + .porta = CX23885_ANALOG_VIDEO, 300 + .portc = CX23885_MPEG_DVB, 301 + .tuner_type = TUNER_ABSENT, 302 + .tuner_addr = 0x42, /* 0x84 >> 1 */ 303 + .force_bff = 1, 304 + .input = {{ 305 + .type = CX23885_VMUX_TELEVISION, 306 + .vmux = CX25840_VIN7_CH3 | 307 + CX25840_VIN5_CH2 | 308 + CX25840_VIN2_CH1 | 309 + CX25840_DIF_ON, 310 + .amux = CX25840_AUDIO8, 311 + }, { 312 + .type = CX23885_VMUX_SVIDEO, 313 + .vmux = CX25840_VIN7_CH3 | 314 + CX25840_VIN4_CH2 | 315 + CX25840_VIN8_CH1 | 316 + CX25840_SVIDEO_ON, 317 + .amux = CX25840_AUDIO7, 318 + } }, 286 319 }, 287 320 [CX23885_BOARD_HAUPPAUGE_HVR1210] = { 288 321 .name = "Hauppauge WinTV-HVR1210", ··· 687 624 }, { 688 625 .subvendor = 0x0070, 689 626 .subdevice = 0x2259, 690 - .card = CX23885_BOARD_HAUPPAUGE_HVR1255, 627 + .card = CX23885_BOARD_HAUPPAUGE_HVR1255_22111, 691 628 }, { 692 629 .subvendor = 0x0070, 693 630 .subdevice = 0x2291, ··· 963 900 struct cx23885_dev *dev = port->dev; 964 901 u32 bitmask = 0; 965 902 966 - if (command == XC2028_RESET_CLK) 903 + if ((command == XC2028_RESET_CLK) || (command == XC2028_I2C_FLUSH)) 967 904 return 0; 968 905 969 906 if (command != 0) { ··· 1193 1130 case CX23885_BOARD_HAUPPAUGE_HVR1270: 1194 1131 case CX23885_BOARD_HAUPPAUGE_HVR1275: 1195 1132 case CX23885_BOARD_HAUPPAUGE_HVR1255: 1133 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 1196 1134 case CX23885_BOARD_HAUPPAUGE_HVR1210: 1197 1135 /* GPIO-5 RF Control: 0 = RF1 Terrestrial, 1 = RF2 Cable */ 1198 1136 /* GPIO-6 I2C Gate which can isolate the demod from the bus */ ··· 1331 1267 case CX23885_BOARD_HAUPPAUGE_HVR1400: 1332 1268 case CX23885_BOARD_HAUPPAUGE_HVR1275: 1333 1269 case CX23885_BOARD_HAUPPAUGE_HVR1255: 1270 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 1334 1271 case CX23885_BOARD_HAUPPAUGE_HVR1210: 1335 1272 /* FIXME: Implement me */ 1336 1273 break; ··· 1489 1424 case CX23885_BOARD_HAUPPAUGE_HVR1270: 1490 1425 case CX23885_BOARD_HAUPPAUGE_HVR1275: 1491 1426 case CX23885_BOARD_HAUPPAUGE_HVR1255: 1427 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 1492 1428 case CX23885_BOARD_HAUPPAUGE_HVR1210: 1493 1429 case CX23885_BOARD_HAUPPAUGE_HVR1850: 1494 1430 case CX23885_BOARD_HAUPPAUGE_HVR1290: ··· 1577 1511 case CX23885_BOARD_HAUPPAUGE_HVR1270: 1578 1512 case CX23885_BOARD_HAUPPAUGE_HVR1275: 1579 1513 case CX23885_BOARD_HAUPPAUGE_HVR1255: 1514 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 1580 1515 case CX23885_BOARD_HAUPPAUGE_HVR1210: 1581 1516 case CX23885_BOARD_COMPRO_VIDEOMATE_E800: 1582 1517 case CX23885_BOARD_HAUPPAUGE_HVR1290: ··· 1593 1526 */ 1594 1527 switch (dev->board) { 1595 1528 case CX23885_BOARD_TEVII_S470: 1596 - case CX23885_BOARD_HAUPPAUGE_HVR1250: 1597 1529 /* Currently only enabled for the integrated IR controller */ 1598 1530 if (!enable_885_ir) 1599 1531 break; 1532 + case CX23885_BOARD_HAUPPAUGE_HVR1250: 1600 1533 case CX23885_BOARD_HAUPPAUGE_HVR1800: 1601 1534 case CX23885_BOARD_HAUPPAUGE_HVR1800lp: 1602 1535 case CX23885_BOARD_HAUPPAUGE_HVR1700: ··· 1606 1539 case CX23885_BOARD_NETUP_DUAL_DVBS2_CI: 1607 1540 case CX23885_BOARD_NETUP_DUAL_DVB_T_C_CI_RF: 1608 1541 case CX23885_BOARD_COMPRO_VIDEOMATE_E800: 1542 + case CX23885_BOARD_HAUPPAUGE_HVR1255: 1543 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 1609 1544 case CX23885_BOARD_HAUPPAUGE_HVR1270: 1610 1545 case CX23885_BOARD_HAUPPAUGE_HVR1850: 1611 1546 case CX23885_BOARD_MYGICA_X8506:
+6
drivers/media/video/cx23885/cx23885-dvb.c
··· 712 712 } 713 713 break; 714 714 case CX23885_BOARD_HAUPPAUGE_HVR1255: 715 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 715 716 i2c_bus = &dev->i2c_bus[0]; 716 717 fe0->dvb.frontend = dvb_attach(s5h1411_attach, 717 718 &hcw_s5h1411_config, ··· 722 721 0x60, &dev->i2c_bus[1].i2c_adap, 723 722 &hauppauge_tda18271_config); 724 723 } 724 + 725 + tda18271_attach(&dev->ts1.analog_fe, 726 + 0x60, &dev->i2c_bus[1].i2c_adap, 727 + &hauppauge_tda18271_config); 728 + 725 729 break; 726 730 case CX23885_BOARD_HAUPPAUGE_HVR1800: 727 731 i2c_bus = &dev->i2c_bus[0];
+8 -1
drivers/media/video/cx23885/cx23885-video.c
··· 505 505 506 506 if ((dev->board == CX23885_BOARD_HAUPPAUGE_HVR1800) || 507 507 (dev->board == CX23885_BOARD_MPX885) || 508 + (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1250) || 509 + (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1255) || 510 + (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1255_22111) || 508 511 (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1850)) { 509 512 /* Configure audio routing */ 510 513 v4l2_subdev_call(dev->sd_cx25840, audio, s_routing, ··· 1581 1578 1582 1579 fe = vfe->dvb.frontend; 1583 1580 1584 - if (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1850) 1581 + if ((dev->board == CX23885_BOARD_HAUPPAUGE_HVR1850) || 1582 + (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1255) || 1583 + (dev->board == CX23885_BOARD_HAUPPAUGE_HVR1255_22111)) 1585 1584 fe = &dev->ts1.analog_fe; 1586 1585 1587 1586 if (fe && fe->ops.tuner_ops.set_analog_params) { ··· 1613 1608 int ret; 1614 1609 1615 1610 switch (dev->board) { 1611 + case CX23885_BOARD_HAUPPAUGE_HVR1255: 1612 + case CX23885_BOARD_HAUPPAUGE_HVR1255_22111: 1616 1613 case CX23885_BOARD_HAUPPAUGE_HVR1850: 1617 1614 ret = cx23885_set_freq_via_ops(dev, f); 1618 1615 break;
+1
drivers/media/video/cx23885/cx23885.h
··· 90 90 #define CX23885_BOARD_MYGICA_X8507 33 91 91 #define CX23885_BOARD_TERRATEC_CINERGY_T_PCIE_DUAL 34 92 92 #define CX23885_BOARD_TEVII_S471 35 93 + #define CX23885_BOARD_HAUPPAUGE_HVR1255_22111 36 93 94 94 95 #define GPIO_0 0x00000001 95 96 #define GPIO_1 0x00000002
-3
drivers/media/video/cx25821/cx25821-core.c
··· 904 904 list_add_tail(&dev->devlist, &cx25821_devlist); 905 905 mutex_unlock(&cx25821_devlist_mutex); 906 906 907 - strcpy(cx25821_boards[UNKNOWN_BOARD].name, "unknown"); 908 - strcpy(cx25821_boards[CX25821_BOARD].name, "cx25821"); 909 - 910 907 if (dev->pci->device != 0x8210) { 911 908 pr_info("%s(): Exiting. Incorrect Hardware device = 0x%02x\n", 912 909 __func__, dev->pci->device);
+1 -1
drivers/media/video/cx25821/cx25821.h
··· 187 187 }; 188 188 189 189 struct cx25821_board { 190 - char *name; 190 + const char *name; 191 191 enum port porta; 192 192 enum port portb; 193 193 enum port portc;
+56 -20
drivers/media/video/cx25840/cx25840-core.c
··· 84 84 85 85 86 86 /* ----------------------------------------------------------------------- */ 87 - static void cx23885_std_setup(struct i2c_client *client); 87 + static void cx23888_std_setup(struct i2c_client *client); 88 88 89 89 int cx25840_write(struct i2c_client *client, u16 addr, u8 value) 90 90 { ··· 638 638 finish_wait(&state->fw_wait, &wait); 639 639 destroy_workqueue(q); 640 640 641 - /* Call the cx23885 specific std setup func, we no longer rely on 641 + /* Call the cx23888 specific std setup func, we no longer rely on 642 642 * the generic cx24840 func. 643 643 */ 644 - cx23885_std_setup(client); 644 + if (is_cx23888(state)) 645 + cx23888_std_setup(client); 646 + else 647 + cx25840_std_setup(client); 645 648 646 649 /* (re)set input */ 647 650 set_input(client, state->vid_input, state->aud_input); ··· 1106 1103 1107 1104 cx25840_write4(client, 0x410, 0xffff0dbf); 1108 1105 cx25840_write4(client, 0x414, 0x00137d03); 1109 - cx25840_write4(client, 0x418, 0x01008080); 1106 + 1107 + /* on the 887, 0x418 is HSCALE_CTRL, on the 888 it is 1108 + CHROMA_CTRL */ 1109 + if (is_cx23888(state)) 1110 + cx25840_write4(client, 0x418, 0x01008080); 1111 + else 1112 + cx25840_write4(client, 0x418, 0x01000000); 1113 + 1110 1114 cx25840_write4(client, 0x41c, 0x00000000); 1115 + 1116 + /* on the 887, 0x420 is CHROMA_CTRL, on the 888 it is 1117 + CRUSH_CTRL */ 1118 + if (is_cx23888(state)) 1119 + cx25840_write4(client, 0x420, 0x001c3e0f); 1120 + else 1121 + cx25840_write4(client, 0x420, 0x001c8282); 1122 + 1112 1123 cx25840_write4(client, 0x42c, 0x42600000); 1113 1124 cx25840_write4(client, 0x430, 0x0000039b); 1114 1125 cx25840_write4(client, 0x438, 0x00000000); ··· 1250 1233 cx25840_write4(client, 0x8d0, 0x1f063870); 1251 1234 } 1252 1235 1253 - if (is_cx2388x(state)) { 1236 + if (is_cx23888(state)) { 1254 1237 /* HVR1850 */ 1255 1238 /* AUD_IO_CTRL - I2S Input, Parallel1*/ 1256 1239 /* - Channel 1 src - Parallel1 (Merlin out) */ ··· 1315 1298 } 1316 1299 cx25840_and_or(client, 0x400, ~0xf, fmt); 1317 1300 cx25840_and_or(client, 0x403, ~0x3, pal_m); 1318 - if (is_cx2388x(state)) 1319 - cx23885_std_setup(client); 1301 + if (is_cx23888(state)) 1302 + cx23888_std_setup(client); 1320 1303 else 1321 1304 cx25840_std_setup(client); 1322 1305 if (!is_cx2583x(state)) ··· 1329 1312 static int cx25840_s_ctrl(struct v4l2_ctrl *ctrl) 1330 1313 { 1331 1314 struct v4l2_subdev *sd = to_sd(ctrl); 1315 + struct cx25840_state *state = to_state(sd); 1332 1316 struct i2c_client *client = v4l2_get_subdevdata(sd); 1333 1317 1334 1318 switch (ctrl->id) { ··· 1342 1324 break; 1343 1325 1344 1326 case V4L2_CID_SATURATION: 1345 - cx25840_write(client, 0x420, ctrl->val << 1); 1346 - cx25840_write(client, 0x421, ctrl->val << 1); 1327 + if (is_cx23888(state)) { 1328 + cx25840_write(client, 0x418, ctrl->val << 1); 1329 + cx25840_write(client, 0x419, ctrl->val << 1); 1330 + } else { 1331 + cx25840_write(client, 0x420, ctrl->val << 1); 1332 + cx25840_write(client, 0x421, ctrl->val << 1); 1333 + } 1347 1334 break; 1348 1335 1349 1336 case V4L2_CID_HUE: 1350 - cx25840_write(client, 0x422, ctrl->val); 1337 + if (is_cx23888(state)) 1338 + cx25840_write(client, 0x41a, ctrl->val); 1339 + else 1340 + cx25840_write(client, 0x422, ctrl->val); 1351 1341 break; 1352 1342 1353 1343 default: ··· 1380 1354 fmt->field = V4L2_FIELD_INTERLACED; 1381 1355 fmt->colorspace = V4L2_COLORSPACE_SMPTE170M; 1382 1356 1383 - Vsrc = (cx25840_read(client, 0x476) & 0x3f) << 4; 1384 - Vsrc |= (cx25840_read(client, 0x475) & 0xf0) >> 4; 1357 + if (is_cx23888(state)) { 1358 + Vsrc = (cx25840_read(client, 0x42a) & 0x3f) << 4; 1359 + Vsrc |= (cx25840_read(client, 0x429) & 0xf0) >> 4; 1360 + } else { 1361 + Vsrc = (cx25840_read(client, 0x476) & 0x3f) << 4; 1362 + Vsrc |= (cx25840_read(client, 0x475) & 0xf0) >> 4; 1363 + } 1385 1364 1386 - Hsrc = (cx25840_read(client, 0x472) & 0x3f) << 4; 1387 - Hsrc |= (cx25840_read(client, 0x471) & 0xf0) >>
4; 1365 + if (is_cx23888(state)) { 1366 + Hsrc = (cx25840_read(client, 0x426) & 0x3f) << 4; 1367 + Hsrc |= (cx25840_read(client, 0x425) & 0xf0) >> 4; 1368 + } else { 1369 + Hsrc = (cx25840_read(client, 0x472) & 0x3f) << 4; 1370 + Hsrc |= (cx25840_read(client, 0x471) & 0xf0) >> 4; 1371 + } 1388 1372 1389 1373 Vlines = fmt->height + (is_50Hz ? 4 : 7); 1390 1374 ··· 1818 1782 struct cx25840_state *state = to_state(sd); 1819 1783 struct i2c_client *client = v4l2_get_subdevdata(sd); 1820 1784 1821 - if (is_cx2388x(state)) 1822 - cx23885_std_setup(client); 1785 + if (is_cx23888(state)) 1786 + cx23888_std_setup(client); 1823 1787 1824 1788 return set_input(client, input, state->aud_input); 1825 1789 } ··· 1830 1794 struct cx25840_state *state = to_state(sd); 1831 1795 struct i2c_client *client = v4l2_get_subdevdata(sd); 1832 1796 1833 - if (is_cx2388x(state)) 1834 - cx23885_std_setup(client); 1797 + if (is_cx23888(state)) 1798 + cx23888_std_setup(client); 1835 1799 return set_input(client, state->vid_input, input); 1836 1800 } 1837 1801 ··· 4975 4939 } 4976 4940 } 4977 4941 4978 - static void cx23885_std_setup(struct i2c_client *client) 4942 + static void cx23888_std_setup(struct i2c_client *client) 4979 4943 { 4980 4944 struct cx25840_state *state = to_state(i2c_get_clientdata(client)); 4981 4945 v4l2_std_id std = state->std;
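The cx25840 hunks above route several register accesses through `is_cx23888()` because the same offsets mean different things on different variants (on the 887, 0x418 is HSCALE_CTRL; on the 888 it is CHROMA_CTRL, so saturation moves from 0x420/0x421 to 0x418/0x419). A minimal sketch of that variant dispatch, with a hypothetical `enum chip` standing in for the driver's state flags:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical variant ids; the driver distinguishes chips with
 * helpers like is_cx23888(state). */
enum chip { CHIP_CX23887, CHIP_CX23888 };

/* Saturation lives at 0x420/0x421 on the 887-style parts, but the 888
 * reuses those offsets, so saturation moves to 0x418/0x419 there
 * (mirroring the V4L2_CID_SATURATION hunk above). */
static uint16_t saturation_reg(enum chip c)
{
	return c == CHIP_CX23888 ? 0x418 : 0x420;
}
```

Centralizing the choice in one helper keeps every control path (s_ctrl, std setup, format readback) consistent for both chips.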
+1 -1
drivers/media/video/em28xx/em28xx-cards.c
··· 2893 2893 2894 2894 if (dev->board.has_dvb) 2895 2895 request_module("em28xx-dvb"); 2896 - if (dev->board.has_ir_i2c && !disable_ir) 2896 + if (dev->board.ir_codes && !disable_ir) 2897 2897 request_module("em28xx-rc"); 2898 2898 } 2899 2899
+8 -5
drivers/media/video/gspca/sn9c20x.c
··· 2070 2070 set_gamma(gspca_dev, v4l2_ctrl_g_ctrl(sd->gamma)); 2071 2071 set_redblue(gspca_dev, v4l2_ctrl_g_ctrl(sd->blue), 2072 2072 v4l2_ctrl_g_ctrl(sd->red)); 2073 - set_gain(gspca_dev, v4l2_ctrl_g_ctrl(sd->gain)); 2074 - set_exposure(gspca_dev, v4l2_ctrl_g_ctrl(sd->exposure)); 2075 - set_hvflip(gspca_dev, v4l2_ctrl_g_ctrl(sd->hflip), 2076 - v4l2_ctrl_g_ctrl(sd->vflip)); 2073 + if (sd->gain) 2074 + set_gain(gspca_dev, v4l2_ctrl_g_ctrl(sd->gain)); 2075 + if (sd->exposure) 2076 + set_exposure(gspca_dev, v4l2_ctrl_g_ctrl(sd->exposure)); 2077 + if (sd->hflip) 2078 + set_hvflip(gspca_dev, v4l2_ctrl_g_ctrl(sd->hflip), 2079 + v4l2_ctrl_g_ctrl(sd->vflip)); 2077 2080 2078 2081 reg_w1(gspca_dev, 0x1007, 0x20); 2079 2082 reg_w1(gspca_dev, 0x1061, 0x03); ··· 2179 2176 struct sd *sd = (struct sd *) gspca_dev; 2180 2177 int avg_lum; 2181 2178 2182 - if (!v4l2_ctrl_g_ctrl(sd->autogain)) 2179 + if (sd->autogain == NULL || !v4l2_ctrl_g_ctrl(sd->autogain)) 2183 2180 return; 2184 2181 2185 2182 avg_lum = atomic_read(&sd->avg_lum);
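The sn9c20x hunks guard each control pointer before calling `v4l2_ctrl_g_ctrl()`, since some sensors never create those controls and the pointers stay NULL. A toy model of the pattern, with illustrative names rather than the gspca API:

```c
#include <assert.h>
#include <stddef.h>

/* Miniature of the fix above: a control the sensor lacks is a NULL
 * pointer and must be skipped, not dereferenced. */
struct ctrl { int val; };

static int applied;

static void set_gain(int v) { applied = v; }

static void restore_ctrls(const struct ctrl *gain)
{
	if (gain)		/* skip controls the sensor did not create */
		set_gain(gain->val);
}
```

The same guard appears in the autogain path (`sd->autogain == NULL` short-circuits the whole function).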
+22 -5
drivers/media/video/mx2_camera.c
··· 83 83 #define CSICR1_INV_DATA (1 << 3) 84 84 #define CSICR1_INV_PCLK (1 << 2) 85 85 #define CSICR1_REDGE (1 << 1) 86 + #define CSICR1_FMT_MASK (CSICR1_PACK_DIR | CSICR1_SWAP16_EN) 86 87 87 88 #define SHIFT_STATFF_LEVEL 22 88 89 #define SHIFT_RXFF_LEVEL 19 ··· 231 230 u32 src_pixel; 232 231 u32 ch1_pixel; 233 232 u32 irq_flags; 233 + u32 csicr1; 234 234 }; 235 235 236 236 /* prp resizing parameters */ ··· 332 330 .ch1_pixel = 0x2ca00565, /* RGB565 */ 333 331 .irq_flags = PRP_INTR_RDERR | PRP_INTR_CH1WERR | 334 332 PRP_INTR_CH1FC | PRP_INTR_LBOVF, 333 + .csicr1 = 0, 335 334 } 336 335 }, 337 336 { ··· 346 343 .irq_flags = PRP_INTR_RDERR | PRP_INTR_CH2WERR | 347 344 PRP_INTR_CH2FC | PRP_INTR_LBOVF | 348 345 PRP_INTR_CH2OVF, 346 + .csicr1 = CSICR1_PACK_DIR, 347 + } 348 + }, 349 + { 350 + .in_fmt = V4L2_MBUS_FMT_UYVY8_2X8, 351 + .out_fmt = V4L2_PIX_FMT_YUV420, 352 + .cfg = { 353 + .channel = 2, 354 + .in_fmt = PRP_CNTL_DATA_IN_YUV422, 355 + .out_fmt = PRP_CNTL_CH2_OUT_YUV420, 356 + .src_pixel = 0x22000888, /* YUV422 (YUYV) */ 357 + .irq_flags = PRP_INTR_RDERR | PRP_INTR_CH2WERR | 358 + PRP_INTR_CH2FC | PRP_INTR_LBOVF | 359 + PRP_INTR_CH2OVF, 360 + .csicr1 = CSICR1_SWAP16_EN, 349 361 } 350 362 }, 351 363 }; ··· 1033 1015 return ret; 1034 1016 } 1035 1017 1018 + csicr1 = (csicr1 & ~CSICR1_FMT_MASK) | pcdev->emma_prp->cfg.csicr1; 1019 + 1036 1020 if (common_flags & V4L2_MBUS_PCLK_SAMPLE_RISING) 1037 1021 csicr1 |= CSICR1_REDGE; 1038 1022 if (common_flags & V4L2_MBUS_VSYNC_ACTIVE_HIGH) 1039 1023 csicr1 |= CSICR1_SOF_POL; 1040 1024 if (common_flags & V4L2_MBUS_HSYNC_ACTIVE_HIGH) 1041 1025 csicr1 |= CSICR1_HSYNC_POL; 1042 - if (pcdev->platform_flags & MX2_CAMERA_SWAP16) 1043 - csicr1 |= CSICR1_SWAP16_EN; 1044 1026 if (pcdev->platform_flags & MX2_CAMERA_EXT_VSYNC) 1045 1027 csicr1 |= CSICR1_EXT_VSYNC; 1046 1028 if (pcdev->platform_flags & MX2_CAMERA_CCIR) ··· 1051 1033 csicr1 |= CSICR1_GCLK_MODE; 1052 1034 if (pcdev->platform_flags & MX2_CAMERA_INV_DATA) 1053 1035 csicr1 |= 
CSICR1_INV_DATA; 1054 - if (pcdev->platform_flags & MX2_CAMERA_PACK_DIR_MSB) 1055 - csicr1 |= CSICR1_PACK_DIR; 1056 1036 1057 1037 pcdev->csicr1 = csicr1; 1058 1038 ··· 1125 1109 return 0; 1126 1110 } 1127 1111 1128 - if (code == V4L2_MBUS_FMT_YUYV8_2X8) { 1112 + if (code == V4L2_MBUS_FMT_YUYV8_2X8 || 1113 + code == V4L2_MBUS_FMT_UYVY8_2X8) { 1129 1114 formats++; 1130 1115 if (xlate) { 1131 1116 /*
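The mx2_camera change moves the SWAP16/PACK_DIR decision out of platform flags and into the per-format table: each `emma_prp` entry now carries the CSICR1 bits it needs, and the setup code masks out `CSICR1_FMT_MASK` before ORing them in. A sketch of that merge step (bit positions here are illustrative, not the real i.MX register layout):

```c
#include <assert.h>
#include <stdint.h>

#define CSICR1_SWAP16_EN (1u << 14)	/* illustrative bit positions */
#define CSICR1_PACK_DIR  (1u << 15)
#define CSICR1_FMT_MASK  (CSICR1_PACK_DIR | CSICR1_SWAP16_EN)

/* Per-format entry carrying the CSICR1 bits it needs, as in the
 * mx27_emma_prp table above. */
struct fmt_cfg { uint32_t csicr1; };

/* Clear only the format-controlled bits, then apply the entry's. */
static uint32_t apply_fmt(uint32_t csicr1, const struct fmt_cfg *cfg)
{
	return (csicr1 & ~CSICR1_FMT_MASK) | cfg->csicr1;
}
```

This is what lets the driver support both YUYV and UYVY input without board-level configuration.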
+3 -3
drivers/media/video/omap3isp/isppreview.c
··· 888 888 preview_config_contrast, 889 889 NULL, 890 890 offsetof(struct prev_params, contrast), 891 - 0, true, 891 + 0, 0, true, 892 892 }, /* OMAP3ISP_PREV_BRIGHTNESS */ { 893 893 preview_config_brightness, 894 894 NULL, 895 895 offsetof(struct prev_params, brightness), 896 - 0, true, 896 + 0, 0, true, 897 897 }, 898 898 }; 899 899 ··· 1102 1102 unsigned int elv = prev->crop.top + prev->crop.height - 1; 1103 1103 u32 features; 1104 1104 1105 - if (format->code == V4L2_MBUS_FMT_Y10_1X10) { 1105 + if (format->code != V4L2_MBUS_FMT_Y10_1X10) { 1106 1106 sph -= 2; 1107 1107 eph += 2; 1108 1108 slv -= 2;
+1
drivers/media/video/pms.c
··· 26 26 #include <linux/fs.h> 27 27 #include <linux/kernel.h> 28 28 #include <linux/mm.h> 29 + #include <linux/slab.h> 29 30 #include <linux/ioport.h> 30 31 #include <linux/init.h> 31 32 #include <linux/mutex.h>
+34 -35
drivers/media/video/s5p-fimc/fimc-capture.c
··· 350 350 if (pixm) 351 351 sizes[i] = max(size, pixm->plane_fmt[i].sizeimage); 352 352 else 353 - sizes[i] = size; 353 + sizes[i] = max_t(u32, size, frame->payload[i]); 354 + 354 355 allocators[i] = ctx->fimc_dev->alloc_ctx; 355 356 } 356 357 ··· 480 479 static int fimc_capture_open(struct file *file) 481 480 { 482 481 struct fimc_dev *fimc = video_drvdata(file); 483 - int ret = v4l2_fh_open(file); 484 - 485 - if (ret) 486 - return ret; 482 + int ret; 487 483 488 484 dbg("pid: %d, state: 0x%lx", task_pid_nr(current), fimc->state); 489 485 490 - /* Return if the corresponding video mem2mem node is already opened. */ 491 486 if (fimc_m2m_active(fimc)) 492 487 return -EBUSY; 493 488 494 489 set_bit(ST_CAPT_BUSY, &fimc->state); 495 - pm_runtime_get_sync(&fimc->pdev->dev); 490 + ret = pm_runtime_get_sync(&fimc->pdev->dev); 491 + if (ret < 0) 492 + return ret; 496 493 497 - if (++fimc->vid_cap.refcnt == 1) { 498 - ret = fimc_pipeline_initialize(&fimc->pipeline, 499 - &fimc->vid_cap.vfd->entity, true); 500 - if (ret < 0) { 501 - dev_err(&fimc->pdev->dev, 502 - "Video pipeline initialization failed\n"); 503 - pm_runtime_put_sync(&fimc->pdev->dev); 504 - fimc->vid_cap.refcnt--; 505 - v4l2_fh_release(file); 506 - clear_bit(ST_CAPT_BUSY, &fimc->state); 507 - return ret; 508 - } 509 - ret = fimc_capture_ctrls_create(fimc); 494 + ret = v4l2_fh_open(file); 495 + if (ret) 496 + return ret; 510 497 511 - if (!ret && !fimc->vid_cap.user_subdev_api) 512 - ret = fimc_capture_set_default_format(fimc); 498 + if (++fimc->vid_cap.refcnt != 1) 499 + return 0; 500 + 501 + ret = fimc_pipeline_initialize(&fimc->pipeline, 502 + &fimc->vid_cap.vfd->entity, true); 503 + if (ret < 0) { 504 + clear_bit(ST_CAPT_BUSY, &fimc->state); 505 + pm_runtime_put_sync(&fimc->pdev->dev); 506 + fimc->vid_cap.refcnt--; 507 + v4l2_fh_release(file); 508 + return ret; 513 509 } 510 + ret = fimc_capture_ctrls_create(fimc); 511 + 512 + if (!ret && !fimc->vid_cap.user_subdev_api) 513 + ret = 
fimc_capture_set_default_format(fimc); 514 + 514 515 return ret; 515 516 } 516 517 ··· 821 818 struct fimc_dev *fimc = video_drvdata(file); 822 819 struct fimc_ctx *ctx = fimc->vid_cap.ctx; 823 820 824 - if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) 825 - return -EINVAL; 826 - 827 821 return fimc_fill_format(&ctx->d_frame, f); 828 822 } 829 823 ··· 832 832 struct fimc_ctx *ctx = fimc->vid_cap.ctx; 833 833 struct v4l2_mbus_framefmt mf; 834 834 struct fimc_fmt *ffmt = NULL; 835 - 836 - if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) 837 - return -EINVAL; 838 835 839 836 if (pix->pixelformat == V4L2_PIX_FMT_JPEG) { 840 837 fimc_capture_try_format(ctx, &pix->width, &pix->height, ··· 884 887 struct fimc_fmt *s_fmt = NULL; 885 888 int ret, i; 886 889 887 - if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) 888 - return -EINVAL; 889 890 if (vb2_is_busy(&fimc->vid_cap.vbq)) 890 891 return -EBUSY; 891 892 ··· 919 924 pix->width = mf->width; 920 925 pix->height = mf->height; 921 926 } 927 + 922 928 fimc_adjust_mplane_format(ff->fmt, pix->width, pix->height, pix); 923 929 for (i = 0; i < ff->fmt->colplanes; i++) 924 - ff->payload[i] = 925 - (pix->width * pix->height * ff->fmt->depth[i]) / 8; 930 + ff->payload[i] = pix->plane_fmt[i].sizeimage; 926 931 927 932 set_frame_bounds(ff, pix->width, pix->height); 928 933 /* Reset the composition rectangle if not yet configured */ ··· 1040 1045 { 1041 1046 struct fimc_dev *fimc = video_drvdata(file); 1042 1047 struct fimc_pipeline *p = &fimc->pipeline; 1048 + struct v4l2_subdev *sd = p->subdevs[IDX_SENSOR]; 1043 1049 int ret; 1044 1050 1045 1051 if (fimc_capture_active(fimc)) 1046 1052 return -EBUSY; 1047 1053 1048 - media_entity_pipeline_start(&p->subdevs[IDX_SENSOR]->entity, 1049 - p->m_pipeline); 1054 + ret = media_entity_pipeline_start(&sd->entity, p->m_pipeline); 1055 + if (ret < 0) 1056 + return ret; 1050 1057 1051 1058 if (fimc->vid_cap.user_subdev_api) { 1052 1059 ret = fimc_pipeline_validate(fimc); 1053 - if (ret) 1060 
+ if (ret < 0) { 1061 + media_entity_pipeline_stop(&sd->entity); 1054 1062 return ret; 1063 + } 1055 1064 } 1056 1065 return vb2_streamon(&fimc->vid_cap.vbq, type); 1057 1066 }
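The reworked `fimc_capture_open()` above acquires resources in a fixed order (runtime PM, then the file handle, then the pipeline) and unwinds them in reverse when a later step fails. A toy model of that error-unwind discipline, with counters standing in for the real kernel calls:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the open() unwind above: each acquired
 * resource is released in reverse order when a later step fails.
 * Names are stand-ins, not the driver's API. */
static int pm_count, fh_count, refcnt;

static int pipeline_init(bool fail) { return fail ? -1 : 0; }

static int capture_open(bool pipeline_fails)
{
	int ret;

	pm_count++;			/* pm_runtime_get_sync() */
	fh_count++;			/* v4l2_fh_open() */
	if (++refcnt != 1)
		return 0;

	ret = pipeline_init(pipeline_fails);
	if (ret < 0) {			/* unwind in reverse order */
		pm_count--;
		refcnt--;
		fh_count--;
		return ret;
	}
	return 0;
}
```

Flattening the `refcnt == 1` case with an early `return 0` (as the patch does) keeps the failure path the only indented block, which makes the unwind order easy to audit.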
+10 -9
drivers/media/video/s5p-fimc/fimc-core.c
··· 153 153 .colplanes = 2, 154 154 .flags = FMT_FLAGS_M2M, 155 155 }, { 156 - .name = "YUV 4:2:0 non-contiguous 2-planar, Y/CbCr", 156 + .name = "YUV 4:2:0 non-contig. 2p, Y/CbCr", 157 157 .fourcc = V4L2_PIX_FMT_NV12M, 158 158 .color = FIMC_FMT_YCBCR420, 159 159 .depth = { 8, 4 }, ··· 161 161 .colplanes = 2, 162 162 .flags = FMT_FLAGS_M2M, 163 163 }, { 164 - .name = "YUV 4:2:0 non-contiguous 3-planar, Y/Cb/Cr", 164 + .name = "YUV 4:2:0 non-contig. 3p, Y/Cb/Cr", 165 165 .fourcc = V4L2_PIX_FMT_YUV420M, 166 166 .color = FIMC_FMT_YCBCR420, 167 167 .depth = { 8, 2, 2 }, ··· 169 169 .colplanes = 3, 170 170 .flags = FMT_FLAGS_M2M, 171 171 }, { 172 - .name = "YUV 4:2:0 non-contiguous 2-planar, Y/CbCr, tiled", 172 + .name = "YUV 4:2:0 non-contig. 2p, tiled", 173 173 .fourcc = V4L2_PIX_FMT_NV12MT, 174 174 .color = FIMC_FMT_YCBCR420, 175 175 .depth = { 8, 4 }, ··· 641 641 if (!ctrls->ready) 642 642 return; 643 643 644 - mutex_lock(&ctrls->handler.lock); 644 + mutex_lock(ctrls->handler.lock); 645 645 v4l2_ctrl_activate(ctrls->rotate, active); 646 646 v4l2_ctrl_activate(ctrls->hflip, active); 647 647 v4l2_ctrl_activate(ctrls->vflip, active); ··· 660 660 ctx->hflip = 0; 661 661 ctx->vflip = 0; 662 662 } 663 - mutex_unlock(&ctrls->handler.lock); 663 + mutex_unlock(ctrls->handler.lock); 664 664 } 665 665 666 666 /* Update maximum value of the alpha color control */ ··· 741 741 pix->width = width; 742 742 743 743 for (i = 0; i < pix->num_planes; ++i) { 744 - u32 bpl = pix->plane_fmt[i].bytesperline; 745 - u32 *sizeimage = &pix->plane_fmt[i].sizeimage; 744 + struct v4l2_plane_pix_format *plane_fmt = &pix->plane_fmt[i]; 745 + u32 bpl = plane_fmt->bytesperline; 746 746 747 747 if (fmt->colplanes > 1 && (bpl == 0 || bpl < pix->width)) 748 748 bpl = pix->width; /* Planar */ ··· 754 754 if (i == 0) /* Same bytesperline for each plane. 
*/ 755 755 bytesperline = bpl; 756 756 757 - pix->plane_fmt[i].bytesperline = bytesperline; 758 - *sizeimage = (pix->width * pix->height * fmt->depth[i]) / 8; 757 + plane_fmt->bytesperline = bytesperline; 758 + plane_fmt->sizeimage = max((pix->width * pix->height * 759 + fmt->depth[i]) / 8, plane_fmt->sizeimage); 759 760 } 760 761 } 761 762
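The fimc-core hunk stops clobbering a caller-supplied `sizeimage` and instead takes the maximum of the computed minimum and the requested value. The arithmetic, as a hedged sketch:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }

/* As in the hunk above: honour a larger user-supplied sizeimage
 * rather than always overwriting it with the computed minimum
 * (width * height * bits-per-pixel / 8). */
static uint32_t plane_sizeimage(uint32_t width, uint32_t height,
				uint32_t depth_bits, uint32_t user_size)
{
	return max_u32((width * height * depth_bits) / 8, user_size);
}
```

Together with the `frame->payload[i]` maximum in the capture queue setup, this lets userspace over-allocate buffers (e.g. for JPEG) without the driver shrinking them.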
+52 -21
drivers/media/video/s5p-fimc/fimc-lite.c
··· 451 451 static int fimc_lite_open(struct file *file) 452 452 { 453 453 struct fimc_lite *fimc = video_drvdata(file); 454 - int ret = v4l2_fh_open(file); 454 + int ret; 455 455 456 - if (ret) 457 - return ret; 456 + if (mutex_lock_interruptible(&fimc->lock)) 457 + return -ERESTARTSYS; 458 458 459 459 set_bit(ST_FLITE_IN_USE, &fimc->state); 460 - pm_runtime_get_sync(&fimc->pdev->dev); 460 + ret = pm_runtime_get_sync(&fimc->pdev->dev); 461 + if (ret < 0) 462 + goto done; 461 463 462 - if (++fimc->ref_count != 1 || fimc->out_path != FIMC_IO_DMA) 463 - return ret; 464 + ret = v4l2_fh_open(file); 465 + if (ret < 0) 466 + goto done; 464 467 465 - ret = fimc_pipeline_initialize(&fimc->pipeline, &fimc->vfd->entity, 466 - true); 467 - if (ret < 0) { 468 - v4l2_err(fimc->vfd, "Video pipeline initialization failed\n"); 469 - pm_runtime_put_sync(&fimc->pdev->dev); 470 - fimc->ref_count--; 471 - v4l2_fh_release(file); 472 - clear_bit(ST_FLITE_IN_USE, &fimc->state); 468 + if (++fimc->ref_count == 1 && fimc->out_path == FIMC_IO_DMA) { 469 + ret = fimc_pipeline_initialize(&fimc->pipeline, 470 + &fimc->vfd->entity, true); 471 + if (ret < 0) { 472 + pm_runtime_put_sync(&fimc->pdev->dev); 473 + fimc->ref_count--; 474 + v4l2_fh_release(file); 475 + clear_bit(ST_FLITE_IN_USE, &fimc->state); 476 + } 477 + 478 + fimc_lite_clear_event_counters(fimc); 473 479 } 474 - 475 - fimc_lite_clear_event_counters(fimc); 480 + done: 481 + mutex_unlock(&fimc->lock); 476 482 return ret; 477 483 } 478 484 479 485 static int fimc_lite_close(struct file *file) 480 486 { 481 487 struct fimc_lite *fimc = video_drvdata(file); 488 + int ret; 489 + 490 + if (mutex_lock_interruptible(&fimc->lock)) 491 + return -ERESTARTSYS; 482 492 483 493 if (--fimc->ref_count == 0 && fimc->out_path == FIMC_IO_DMA) { 484 494 clear_bit(ST_FLITE_IN_USE, &fimc->state); ··· 502 492 if (fimc->ref_count == 0) 503 493 vb2_queue_release(&fimc->vb_queue); 504 494 505 - return v4l2_fh_release(file); 495 + ret = v4l2_fh_release(file); 
496 + 497 + mutex_unlock(&fimc->lock); 498 + return ret; 506 499 } 507 500 508 501 static unsigned int fimc_lite_poll(struct file *file, 509 502 struct poll_table_struct *wait) 510 503 { 511 504 struct fimc_lite *fimc = video_drvdata(file); 512 - return vb2_poll(&fimc->vb_queue, file, wait); 505 + int ret; 506 + 507 + if (mutex_lock_interruptible(&fimc->lock)) 508 + return POLL_ERR; 509 + 510 + ret = vb2_poll(&fimc->vb_queue, file, wait); 511 + mutex_unlock(&fimc->lock); 512 + 513 + return ret; 513 514 } 514 515 515 516 static int fimc_lite_mmap(struct file *file, struct vm_area_struct *vma) 516 517 { 517 518 struct fimc_lite *fimc = video_drvdata(file); 518 - return vb2_mmap(&fimc->vb_queue, vma); 519 + int ret; 520 + 521 + if (mutex_lock_interruptible(&fimc->lock)) 522 + return -ERESTARTSYS; 523 + 524 + ret = vb2_mmap(&fimc->vb_queue, vma); 525 + mutex_unlock(&fimc->lock); 526 + 527 + return ret; 519 528 } 520 529 521 530 static const struct v4l2_file_operations fimc_lite_fops = { ··· 791 762 if (fimc_lite_active(fimc)) 792 763 return -EBUSY; 793 764 794 - media_entity_pipeline_start(&sensor->entity, p->m_pipeline); 765 + ret = media_entity_pipeline_start(&sensor->entity, p->m_pipeline); 766 + if (ret < 0) 767 + return ret; 795 768 796 769 ret = fimc_pipeline_validate(fimc); 797 770 if (ret) { ··· 1539 1508 return 0; 1540 1509 1541 1510 ret = fimc_lite_stop_capture(fimc, suspend); 1542 - if (ret) 1511 + if (ret < 0 || !fimc_lite_active(fimc)) 1543 1512 return ret; 1544 1513 1545 1514 return fimc_pipeline_shutdown(&fimc->pipeline);
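The fimc-lite hunks wrap every file operation body (open, close, poll, mmap) in the per-device mutex so they cannot race with one another. A userspace sketch of the same serialization, using pthreads as a stand-in for the kernel's `mutex_lock_interruptible()`:

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of the locking pattern added above: each file operation
 * holds the per-device lock for its whole body. pthread_mutex_*
 * stands in for the kernel mutex API (which can additionally fail
 * with -ERESTARTSYS when interrupted). */
static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
static int ref_count;

static int lite_open(void)
{
	pthread_mutex_lock(&dev_lock);	/* mutex_lock_interruptible() */
	ref_count++;
	pthread_mutex_unlock(&dev_lock);
	return 0;
}

static int lite_close(void)
{
	pthread_mutex_lock(&dev_lock);
	ref_count--;
	pthread_mutex_unlock(&dev_lock);
	return 0;
}
```

The interruptible variant is why every operation in the patch gains an error path returning -ERESTARTSYS before it touches device state.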
+24 -24
drivers/media/video/s5p-fimc/fimc-mdevice.c
··· 193 193 194 194 int fimc_pipeline_shutdown(struct fimc_pipeline *p) 195 195 { 196 - struct media_entity *me = &p->subdevs[IDX_SENSOR]->entity; 196 + struct media_entity *me; 197 197 int ret; 198 198 199 + if (!p || !p->subdevs[IDX_SENSOR]) 200 + return -EINVAL; 201 + 202 + me = &p->subdevs[IDX_SENSOR]->entity; 199 203 mutex_lock(&me->parent->graph_mutex); 200 204 ret = __fimc_pipeline_shutdown(p); 201 205 mutex_unlock(&me->parent->graph_mutex); ··· 502 498 * @source: the source entity to create links to all fimc entities from 503 499 * @sensor: sensor subdev linked to FIMC[fimc_id] entity, may be null 504 500 * @pad: the source entity pad index 505 - * @fimc_id: index of the fimc device for which link should be enabled 501 + * @link_mask: bitmask of the fimc devices for which link should be enabled 506 502 */ 507 503 static int __fimc_md_create_fimc_sink_links(struct fimc_md *fmd, 508 504 struct media_entity *source, 509 505 struct v4l2_subdev *sensor, 510 - int pad, int fimc_id) 506 + int pad, int link_mask) 511 507 { 512 508 struct fimc_sensor_info *s_info; 513 509 struct media_entity *sink; ··· 524 520 if (!fmd->fimc[i]->variant->has_cam_if) 525 521 continue; 526 522 527 - flags = (i == fimc_id) ? MEDIA_LNK_FL_ENABLED : 0; 523 + flags = ((1 << i) & link_mask) ? MEDIA_LNK_FL_ENABLED : 0; 528 524 529 525 sink = &fmd->fimc[i]->vid_cap.subdev.entity; 530 526 ret = media_entity_create_link(source, pad, sink, ··· 556 552 if (!fmd->fimc_lite[i]) 557 553 continue; 558 554 559 - flags = (i == fimc_id) ? 
MEDIA_LNK_FL_ENABLED : 0; 555 + if (link_mask & (1 << (i + FIMC_MAX_DEVS))) 556 + flags = MEDIA_LNK_FL_ENABLED; 557 + else 558 + flags = 0; 560 559 561 560 sink = &fmd->fimc_lite[i]->subdev.entity; 562 561 ret = media_entity_create_link(source, pad, sink, ··· 621 614 struct s5p_fimc_isp_info *pdata; 622 615 struct fimc_sensor_info *s_info; 623 616 struct media_entity *source, *sink; 624 - int i, pad, fimc_id = 0; 625 - int ret = 0; 626 - u32 flags; 617 + int i, pad, fimc_id = 0, ret = 0; 618 + u32 flags, link_mask = 0; 627 619 628 620 for (i = 0; i < fmd->num_sensors; i++) { 629 621 if (fmd->sensor[i].subdev == NULL) ··· 674 668 if (source == NULL) 675 669 continue; 676 670 671 + link_mask = 1 << fimc_id++; 677 672 ret = __fimc_md_create_fimc_sink_links(fmd, source, sensor, 678 - pad, fimc_id++); 673 + pad, link_mask); 679 674 } 680 675 681 - fimc_id = 0; 682 676 for (i = 0; i < ARRAY_SIZE(fmd->csis); i++) { 683 677 if (fmd->csis[i].sd == NULL) 684 678 continue; 685 679 source = &fmd->csis[i].sd->entity; 686 680 pad = CSIS_PAD_SOURCE; 687 681 682 + link_mask = 1 << fimc_id++; 688 683 ret = __fimc_md_create_fimc_sink_links(fmd, source, NULL, 689 - pad, fimc_id++); 684 + pad, link_mask); 690 685 } 691 686 692 687 /* Create immutable links between each FIMC's subdev and video node */ ··· 741 734 } 742 735 743 736 static int __fimc_md_set_camclk(struct fimc_md *fmd, 744 - struct fimc_sensor_info *s_info, 745 - bool on) 737 + struct fimc_sensor_info *s_info, 738 + bool on) 746 739 { 747 740 struct s5p_fimc_isp_info *pdata = s_info->pdata; 748 741 struct fimc_camclk_info *camclk; ··· 751 744 if (WARN_ON(pdata->clk_id >= FIMC_MAX_CAMCLKS) || fmd == NULL) 752 745 return -EINVAL; 753 746 754 - if (s_info->clk_on == on) 755 - return 0; 756 747 camclk = &fmd->camclk[pdata->clk_id]; 757 748 758 - dbg("camclk %d, f: %lu, clk: %p, on: %d", 759 - pdata->clk_id, pdata->clk_frequency, camclk, on); 749 + dbg("camclk %d, f: %lu, use_count: %d, on: %d", 750 + pdata->clk_id, 
pdata->clk_frequency, camclk->use_count, on); 760 751 761 752 if (on) { 762 753 if (camclk->use_count > 0 && ··· 765 760 clk_set_rate(camclk->clock, pdata->clk_frequency); 766 761 camclk->frequency = pdata->clk_frequency; 767 762 ret = clk_enable(camclk->clock); 763 + dbg("Enabled camclk %d: f: %lu", pdata->clk_id, 764 + clk_get_rate(camclk->clock)); 768 765 } 769 - s_info->clk_on = 1; 770 - dbg("Enabled camclk %d: f: %lu", pdata->clk_id, 771 - clk_get_rate(camclk->clock)); 772 - 773 766 return ret; 774 767 } 775 768 ··· 776 773 777 774 if (--camclk->use_count == 0) { 778 775 clk_disable(camclk->clock); 779 - s_info->clk_on = 0; 780 776 dbg("Disabled camclk %d", pdata->clk_id); 781 777 } 782 778 return ret; ··· 791 789 * devices to which sensors can be attached, either directly or through 792 790 * the MIPI CSI receiver. The clock is allowed here to be used by 793 791 * multiple sensors concurrently if they use same frequency. 794 - * The per sensor subdev clk_on attribute helps to synchronize accesses 795 - * to the sclk_cam clocks from the video and media device nodes. 796 792 * This function should only be called when the graph mutex is held. 797 793 */ 798 794 int fimc_md_set_camclk(struct v4l2_subdev *sd, bool on)
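The fimc-mdevice change replaces a single `fimc_id` with a `link_mask` bitmask so several sink entities can get their default link enabled in one pass, with the FIMC-LITE entities occupying the bits above `FIMC_MAX_DEVS`. The bit tests, as a sketch (the constant value is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define FIMC_MAX_DEVS 4	/* illustrative; matches the driver's split */

/* Bit i of link_mask enables the default link to FIMC[i] ... */
static bool fimc_link_enabled(unsigned int link_mask, int i)
{
	return (1u << i) & link_mask;
}

/* ... while FIMC-LITE[i] sits in the bits above FIMC_MAX_DEVS,
 * exactly as in the hunk above. */
static bool fimc_lite_link_enabled(unsigned int link_mask, int i)
{
	return link_mask & (1u << (i + FIMC_MAX_DEVS));
}
```

A mask generalizes the old "exactly one enabled sink" assumption without changing the link-creation loop's structure.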
-2
drivers/media/video/s5p-fimc/fimc-mdevice.h
··· 47 47 * @pdata: sensor's atrributes passed as media device's platform data 48 48 * @subdev: image sensor v4l2 subdev 49 49 * @host: fimc device the sensor is currently linked to 50 - * @clk_on: sclk_cam clock's state associated with this subdev 51 50 * 52 51 * This data structure applies to image sensor and the writeback subdevs. 53 52 */ ··· 54 55 struct s5p_fimc_isp_info *pdata; 55 56 struct v4l2_subdev *subdev; 56 57 struct fimc_dev *host; 57 - bool clk_on; 58 58 }; 59 59 60 60 /**
+1
drivers/media/video/s5p-mfc/s5p_mfc_dec.c
··· 996 996 997 997 for (i = 0; i < NUM_CTRLS; i++) { 998 998 if (IS_MFC51_PRIV(controls[i].id)) { 999 + memset(&cfg, 0, sizeof(struct v4l2_ctrl_config)); 999 1000 cfg.ops = &s5p_mfc_dec_ctrl_ops; 1000 1001 cfg.id = controls[i].id; 1001 1002 cfg.min = controls[i].minimum;
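Both MFC hunks (decoder above, encoder below) add a `memset()` of the on-stack `v4l2_ctrl_config` at the top of the loop: the struct is reused across iterations, so fields set for one private control would otherwise leak into the next. A minimal model of the bug class being fixed, with illustrative field names:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the fix above: a config struct reused in a loop must
 * be zeroed each iteration, or a field set for one control ("step"
 * here) persists into the next control's config. */
struct ctrl_cfg { int id; int step; };

static void fill_cfg(struct ctrl_cfg *cfg, int id, int has_step)
{
	memset(cfg, 0, sizeof(*cfg));	/* the added line */
	cfg->id = id;
	if (has_step)
		cfg->step = 4;
}
```

Without the memset, the second control in the test below would inherit `step = 4` from the first.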
+1
drivers/media/video/s5p-mfc/s5p_mfc_enc.c
··· 1773 1773 } 1774 1774 for (i = 0; i < NUM_CTRLS; i++) { 1775 1775 if (IS_MFC51_PRIV(controls[i].id)) { 1776 + memset(&cfg, 0, sizeof(struct v4l2_ctrl_config)); 1776 1777 cfg.ops = &s5p_mfc_enc_ctrl_ops; 1777 1778 cfg.id = controls[i].id; 1778 1779 cfg.min = controls[i].minimum;
+1
drivers/media/video/smiapp/smiapp-core.c
··· 31 31 #include <linux/device.h> 32 32 #include <linux/gpio.h> 33 33 #include <linux/module.h> 34 + #include <linux/slab.h> 34 35 #include <linux/regulator/consumer.h> 35 36 #include <linux/slab.h> 36 37 #include <linux/v4l2-mediabus.h>
+1
drivers/media/video/v4l2-dev.c
··· 681 681 SET_VALID_IOCTL(ops, VIDIOC_G_DV_TIMINGS, vidioc_g_dv_timings); 682 682 SET_VALID_IOCTL(ops, VIDIOC_ENUM_DV_TIMINGS, vidioc_enum_dv_timings); 683 683 SET_VALID_IOCTL(ops, VIDIOC_QUERY_DV_TIMINGS, vidioc_query_dv_timings); 684 + SET_VALID_IOCTL(ops, VIDIOC_DV_TIMINGS_CAP, vidioc_dv_timings_cap); 684 685 /* yes, really vidioc_subscribe_event */ 685 686 SET_VALID_IOCTL(ops, VIDIOC_DQEVENT, vidioc_subscribe_event); 686 687 SET_VALID_IOCTL(ops, VIDIOC_SUBSCRIBE_EVENT, vidioc_subscribe_event);
+1
drivers/mfd/Kconfig
··· 286 286 depends on I2C=y && GENERIC_HARDIRQS 287 287 select MFD_CORE 288 288 select REGMAP_I2C 289 + select IRQ_DOMAIN 289 290 default n 290 291 help 291 292 Say yes here if you want support for Texas Instruments TWL6040 audio
-87
drivers/mfd/ab5500-core.h
··· 1 - /* 2 - * Copyright (C) 2011 ST-Ericsson 3 - * License terms: GNU General Public License (GPL) version 2 4 - * Shared definitions and data structures for the AB5500 MFD driver 5 - */ 6 - 7 - /* Read/write operation values. */ 8 - #define AB5500_PERM_RD (0x01) 9 - #define AB5500_PERM_WR (0x02) 10 - 11 - /* Read/write permissions. */ 12 - #define AB5500_PERM_RO (AB5500_PERM_RD) 13 - #define AB5500_PERM_RW (AB5500_PERM_RD | AB5500_PERM_WR) 14 - 15 - #define AB5500_MASK_BASE (0x60) 16 - #define AB5500_MASK_END (0x79) 17 - #define AB5500_CHIP_ID (0x20) 18 - 19 - /** 20 - * struct ab5500_reg_range 21 - * @first: the first address of the range 22 - * @last: the last address of the range 23 - * @perm: access permissions for the range 24 - */ 25 - struct ab5500_reg_range { 26 - u8 first; 27 - u8 last; 28 - u8 perm; 29 - }; 30 - 31 - /** 32 - * struct ab5500_i2c_ranges 33 - * @count: the number of ranges in the list 34 - * @range: the list of register ranges 35 - */ 36 - struct ab5500_i2c_ranges { 37 - u8 nranges; 38 - u8 bankid; 39 - const struct ab5500_reg_range *range; 40 - }; 41 - 42 - /** 43 - * struct ab5500_i2c_banks 44 - * @count: the number of ranges in the list 45 - * @range: the list of register ranges 46 - */ 47 - struct ab5500_i2c_banks { 48 - u8 nbanks; 49 - const struct ab5500_i2c_ranges *bank; 50 - }; 51 - 52 - /** 53 - * struct ab5500_bank 54 - * @slave_addr: I2C slave_addr found in AB5500 specification 55 - * @name: Documentation name of the bank. 
For reference 56 - */ 57 - struct ab5500_bank { 58 - u8 slave_addr; 59 - const char *name; 60 - }; 61 - 62 - static const struct ab5500_bank bankinfo[AB5500_NUM_BANKS] = { 63 - [AB5500_BANK_VIT_IO_I2C_CLK_TST_OTP] = { 64 - AB5500_ADDR_VIT_IO_I2C_CLK_TST_OTP, "VIT_IO_I2C_CLK_TST_OTP"}, 65 - [AB5500_BANK_VDDDIG_IO_I2C_CLK_TST] = { 66 - AB5500_ADDR_VDDDIG_IO_I2C_CLK_TST, "VDDDIG_IO_I2C_CLK_TST"}, 67 - [AB5500_BANK_VDENC] = {AB5500_ADDR_VDENC, "VDENC"}, 68 - [AB5500_BANK_SIM_USBSIM] = {AB5500_ADDR_SIM_USBSIM, "SIM_USBSIM"}, 69 - [AB5500_BANK_LED] = {AB5500_ADDR_LED, "LED"}, 70 - [AB5500_BANK_ADC] = {AB5500_ADDR_ADC, "ADC"}, 71 - [AB5500_BANK_RTC] = {AB5500_ADDR_RTC, "RTC"}, 72 - [AB5500_BANK_STARTUP] = {AB5500_ADDR_STARTUP, "STARTUP"}, 73 - [AB5500_BANK_DBI_ECI] = {AB5500_ADDR_DBI_ECI, "DBI-ECI"}, 74 - [AB5500_BANK_CHG] = {AB5500_ADDR_CHG, "CHG"}, 75 - [AB5500_BANK_FG_BATTCOM_ACC] = { 76 - AB5500_ADDR_FG_BATTCOM_ACC, "FG_BATCOM_ACC"}, 77 - [AB5500_BANK_USB] = {AB5500_ADDR_USB, "USB"}, 78 - [AB5500_BANK_IT] = {AB5500_ADDR_IT, "IT"}, 79 - [AB5500_BANK_VIBRA] = {AB5500_ADDR_VIBRA, "VIBRA"}, 80 - [AB5500_BANK_AUDIO_HEADSETUSB] = { 81 - AB5500_ADDR_AUDIO_HEADSETUSB, "AUDIO_HEADSETUSB"}, 82 - }; 83 - 84 - int ab5500_get_register_interruptible_raw(struct ab5500 *ab, u8 bank, u8 reg, 85 - u8 *value); 86 - int ab5500_mask_and_set_register_interruptible_raw(struct ab5500 *ab, u8 bank, 87 - u8 reg, u8 bitmask, u8 bitvalues);
+65 -2
drivers/mfd/mc13xxx-spi.c
··· 49 49 .reg_bits = 7, 50 50 .pad_bits = 1, 51 51 .val_bits = 24, 52 + .write_flag_mask = 0x80, 52 53 53 54 .max_register = MC13XXX_NUMREGS, 54 55 55 56 .cache_type = REGCACHE_NONE, 57 + .use_single_rw = 1, 58 + }; 59 + 60 + static int mc13xxx_spi_read(void *context, const void *reg, size_t reg_size, 61 + void *val, size_t val_size) 62 + { 63 + unsigned char w[4] = { *((unsigned char *) reg), 0, 0, 0}; 64 + unsigned char r[4]; 65 + unsigned char *p = val; 66 + struct device *dev = context; 67 + struct spi_device *spi = to_spi_device(dev); 68 + struct spi_transfer t = { 69 + .tx_buf = w, 70 + .rx_buf = r, 71 + .len = 4, 72 + }; 73 + 74 + struct spi_message m; 75 + int ret; 76 + 77 + if (val_size != 3 || reg_size != 1) 78 + return -ENOTSUPP; 79 + 80 + spi_message_init(&m); 81 + spi_message_add_tail(&t, &m); 82 + ret = spi_sync(spi, &m); 83 + 84 + memcpy(p, &r[1], 3); 85 + 86 + return ret; 87 + } 88 + 89 + static int mc13xxx_spi_write(void *context, const void *data, size_t count) 90 + { 91 + struct device *dev = context; 92 + struct spi_device *spi = to_spi_device(dev); 93 + 94 + if (count != 4) 95 + return -ENOTSUPP; 96 + 97 + return spi_write(spi, data, count); 98 + } 99 + 100 + /* 101 + * We cannot use regmap-spi generic bus implementation here. 102 + * The MC13783 chip will get corrupted if CS signal is deasserted 103 + * and on i.Mx31 SoC (the target SoC for MC13783 PMIC) the SPI controller 104 + * has the following errata (DSPhl22960): 105 + * "The CSPI negates SS when the FIFO becomes empty with 106 + * SSCTL= 0. Software cannot guarantee that the FIFO will not 107 + * drain because of higher priority interrupts and the 108 + * non-realtime characteristics of the operating system. As a 109 + * result, the SS will negate before all of the data has been 110 + * transferred to/from the peripheral." 111 + * We workaround this by accessing the SPI controller with a 112 + * single transfert. 
113 + */ 114 + 115 + static struct regmap_bus regmap_mc13xxx_bus = { 116 + .write = mc13xxx_spi_write, 117 + .read = mc13xxx_spi_read, 56 118 }; 57 119 58 120 static int mc13xxx_spi_probe(struct spi_device *spi) ··· 135 73 136 74 dev_set_drvdata(&spi->dev, mc13xxx); 137 75 spi->mode = SPI_MODE_0 | SPI_CS_HIGH; 138 - spi->bits_per_word = 32; 139 76 140 77 mc13xxx->dev = &spi->dev; 141 78 mutex_init(&mc13xxx->lock); 142 79 143 - mc13xxx->regmap = regmap_init_spi(spi, &mc13xxx_regmap_spi_config); 80 + mc13xxx->regmap = regmap_init(&spi->dev, &regmap_mc13xxx_bus, &spi->dev, 81 + &mc13xxx_regmap_spi_config); 82 + 144 83 if (IS_ERR(mc13xxx->regmap)) { 145 84 ret = PTR_ERR(mc13xxx->regmap); 146 85 dev_err(mc13xxx->dev, "Failed to initialize register map: %d\n",
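The mc13xxx change bypasses the generic regmap-SPI bus because the chip is corrupted if chip-select deasserts mid-access (the i.MX31 DSPhl22960 erratum quoted above), so each register read becomes one 4-byte full-duplex transfer: the 7-bit address with the 0x80 write flag clear in the first tx byte, and the 24-bit value in rx bytes 1..3. A sketch of that frame layout; the MSB-first byte order in the extractor is an assumption for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Frame layout used by mc13xxx_spi_read() above: one 4-byte transfer,
 * register address in tx byte 0 (bit 7 set would mean "write"),
 * value returned in rx bytes 1..3. */
static void build_read_frame(uint8_t reg, uint8_t w[4])
{
	w[0] = reg & 0x7f;	/* write_flag_mask 0x80 stays clear */
	w[1] = w[2] = w[3] = 0;
}

/* Assumed MSB-first: the driver itself just memcpy()s r[1..3] into
 * the regmap buffer and lets regmap interpret the 24-bit value. */
static uint32_t extract_value(const uint8_t r[4])
{
	return ((uint32_t)r[1] << 16) | ((uint32_t)r[2] << 8) | r[3];
}
```

Keeping both tx and rx in a single `spi_transfer` is what guarantees the FIFO never drains with SS asserted.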
+47 -1
drivers/mfd/omap-usb-host.c
··· 25 25 #include <linux/clk.h> 26 26 #include <linux/dma-mapping.h> 27 27 #include <linux/spinlock.h> 28 + #include <linux/gpio.h> 28 29 #include <plat/cpu.h> 29 30 #include <plat/usb.h> 30 31 #include <linux/pm_runtime.h>
··· 501 500 dev_dbg(dev, "starting TI HSUSB Controller\n"); 502 501 503 502 pm_runtime_get_sync(dev); 504 - spin_lock_irqsave(&omap->lock, flags); 505 503 504 + if (pdata->ehci_data->phy_reset) { 505 + if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[0])) 506 + gpio_request_one(pdata->ehci_data->reset_gpio_port[0], 507 + GPIOF_OUT_INIT_LOW, "USB1 PHY reset"); 508 + 509 + if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[1])) 510 + gpio_request_one(pdata->ehci_data->reset_gpio_port[1], 511 + GPIOF_OUT_INIT_LOW, "USB2 PHY reset"); 512 + 513 + /* Hold the PHY in RESET for enough time till DIR is high */ 514 + udelay(10); 515 + } 516 + 517 + spin_lock_irqsave(&omap->lock, flags); 506 518 omap->usbhs_rev = usbhs_read(omap->uhh_base, OMAP_UHH_REVISION); 507 519 dev_dbg(dev, "OMAP UHH_REVISION 0x%x\n", omap->usbhs_rev); 508 520
··· 595 581 } 596 582 597 583 spin_unlock_irqrestore(&omap->lock, flags); 584 + 585 + if (pdata->ehci_data->phy_reset) { 586 + /* Hold the PHY in RESET for enough time till 587 + * PHY is settled and ready 588 + */ 589 + udelay(10); 590 + 591 + if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[0])) 592 + gpio_set_value_cansleep 593 + (pdata->ehci_data->reset_gpio_port[0], 1); 594 + 595 + if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[1])) 596 + gpio_set_value_cansleep 597 + (pdata->ehci_data->reset_gpio_port[1], 1); 598 + } 599 + 598 600 pm_runtime_put_sync(dev); 601 + } 602 + 603 + static void omap_usbhs_deinit(struct device *dev) 604 + { 605 + struct usbhs_hcd_omap *omap = dev_get_drvdata(dev); 606 + struct usbhs_omap_platform_data *pdata = &omap->platdata; 607 + 608 + if (pdata->ehci_data->phy_reset) { 609 + if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[0])) 610 + gpio_free(pdata->ehci_data->reset_gpio_port[0]); 611 + 612 + if (gpio_is_valid(pdata->ehci_data->reset_gpio_port[1])) 613 + gpio_free(pdata->ehci_data->reset_gpio_port[1]); 614 + } 599 615 } 600 616 601 617
··· 811 767 goto end_probe; 812 768 813 769 err_alloc: 770 + omap_usbhs_deinit(&pdev->dev); 814 771 iounmap(omap->tll_base); 815 772 816 773 err_tll:
··· 863 818 { 864 819 struct usbhs_hcd_omap *omap = platform_get_drvdata(pdev); 865 820 821 + omap_usbhs_deinit(&pdev->dev); 866 822 iounmap(omap->tll_base); 867 823 iounmap(omap->uhh_base); 868 824 clk_put(omap->init_60m_fclk);
+12 -1
drivers/mfd/palmas.c
··· 356 356 } 357 357 } 358 358 359 - ret = regmap_add_irq_chip(palmas->regmap[1], palmas->irq, 359 + /* Change IRQ into clear on read mode for efficiency */ 360 + slave = PALMAS_BASE_TO_SLAVE(PALMAS_INTERRUPT_BASE); 361 + addr = PALMAS_BASE_TO_REG(PALMAS_INTERRUPT_BASE, PALMAS_INT_CTRL); 362 + reg = PALMAS_INT_CTRL_INT_CLEAR; 363 + 364 + regmap_write(palmas->regmap[slave], addr, reg); 365 + 366 + ret = regmap_add_irq_chip(palmas->regmap[slave], palmas->irq, 360 367 IRQF_ONESHOT | IRQF_TRIGGER_LOW, -1, &palmas_irq_chip, 361 368 &palmas->irq_data); 362 369 if (ret < 0) ··· 448 441 goto err; 449 442 } 450 443 444 + children[PALMAS_PMIC_ID].platform_data = pdata->pmic_pdata; 445 + children[PALMAS_PMIC_ID].pdata_size = sizeof(*pdata->pmic_pdata); 446 + 451 447 ret = mfd_add_devices(palmas->dev, -1, 452 448 children, ARRAY_SIZE(palmas_children), 453 449 NULL, regmap_irq_chip_get_base(palmas->irq_data)); ··· 482 472 { "twl6035", }, 483 473 { "twl6037", }, 484 474 { "tps65913", }, 475 + { /* end */ } 485 476 }; 486 477 MODULE_DEVICE_TABLE(i2c, palmas_i2c_id); 487 478
+1 -1
drivers/misc/mei/main.c
··· 1147 1147 err = request_threaded_irq(pdev->irq, 1148 1148 NULL, 1149 1149 mei_interrupt_thread_handler, 1150 - 0, mei_driver_name, dev); 1150 + IRQF_ONESHOT, mei_driver_name, dev); 1151 1151 else 1152 1152 err = request_threaded_irq(pdev->irq, 1153 1153 mei_interrupt_quick_handler,
+2 -2
drivers/misc/sgi-xp/xpc_uv.c
··· 452 452 453 453 if (msg->activate_gru_mq_desc_gpa != 454 454 part_uv->activate_gru_mq_desc_gpa) { 455 - spin_lock_irqsave(&part_uv->flags_lock, irq_flags); 455 + spin_lock(&part_uv->flags_lock); 456 456 part_uv->flags &= ~XPC_P_CACHED_ACTIVATE_GRU_MQ_DESC_UV; 457 - spin_unlock_irqrestore(&part_uv->flags_lock, irq_flags); 457 + spin_unlock(&part_uv->flags_lock); 458 458 part_uv->activate_gru_mq_desc_gpa = 459 459 msg->activate_gru_mq_desc_gpa; 460 460 }
+2 -2
drivers/mmc/core/cd-gpio.c
··· 50 50 goto egpioreq; 51 51 52 52 ret = request_threaded_irq(irq, NULL, mmc_cd_gpio_irqt, 53 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 54 - cd->label, host); 53 + IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | 54 + IRQF_ONESHOT, cd->label, host); 55 55 if (ret < 0) 56 56 goto eirqreq; 57 57
+11 -7
drivers/mmc/core/mmc.c
··· 717 717 card->ext_csd.generic_cmd6_time); 718 718 } 719 719 720 - if (err) 721 - pr_err("%s: power class selection for ext_csd_bus_width %d" 722 - " failed\n", mmc_hostname(card->host), bus_width); 723 - 724 720 return err; 725 721 } 726 722 ··· 1100 1104 EXT_CSD_BUS_WIDTH_8 : EXT_CSD_BUS_WIDTH_4; 1101 1105 err = mmc_select_powerclass(card, ext_csd_bits, ext_csd); 1102 1106 if (err) 1103 - goto err; 1107 + pr_warning("%s: power class selection to bus width %d" 1108 + " failed\n", mmc_hostname(card->host), 1109 + 1 << bus_width); 1104 1110 } 1105 1111 1106 1112 /* ··· 1134 1136 err = mmc_select_powerclass(card, ext_csd_bits[idx][0], 1135 1137 ext_csd); 1136 1138 if (err) 1137 - goto err; 1139 + pr_warning("%s: power class selection to " 1140 + "bus width %d failed\n", 1141 + mmc_hostname(card->host), 1142 + 1 << bus_width); 1138 1143 1139 1144 err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1140 1145 EXT_CSD_BUS_WIDTH, ··· 1165 1164 err = mmc_select_powerclass(card, ext_csd_bits[idx][1], 1166 1165 ext_csd); 1167 1166 if (err) 1168 - goto err; 1167 + pr_warning("%s: power class selection to " 1168 + "bus width %d ddr %d failed\n", 1169 + mmc_hostname(card->host), 1170 + 1 << bus_width, ddr); 1169 1171 1170 1172 err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1171 1173 EXT_CSD_BUS_WIDTH,
+1 -1
drivers/mtd/nand/cafe_nand.c
··· 102 102 static int cafe_device_ready(struct mtd_info *mtd) 103 103 { 104 104 struct cafe_priv *cafe = mtd->priv; 105 - int result = !!(cafe_readl(cafe, NAND_STATUS) | 0x40000000); 105 + int result = !!(cafe_readl(cafe, NAND_STATUS) & 0x40000000); 106 106 uint32_t irqs = cafe_readl(cafe, NAND_IRQ); 107 107 108 108 cafe_writel(cafe, irqs, NAND_IRQ);
+5 -5
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 920 920 */ 921 921 memset(chip->oob_poi, ~0, mtd->oobsize); 922 922 chip->oob_poi[0] = ((uint8_t *) auxiliary_virt)[0]; 923 - 924 - read_page_swap_end(this, buf, mtd->writesize, 925 - this->payload_virt, this->payload_phys, 926 - nfc_geo->payload_size, 927 - payload_virt, payload_phys); 928 923 } 924 + 925 + read_page_swap_end(this, buf, mtd->writesize, 926 + this->payload_virt, this->payload_phys, 927 + nfc_geo->payload_size, 928 + payload_virt, payload_phys); 929 929 exit_nfc: 930 930 return ret; 931 931 }
+29 -8
drivers/mtd/nand/mxc_nand.c
··· 273 273 274 274 static const char *part_probes[] = { "RedBoot", "cmdlinepart", "ofpart", NULL }; 275 275 276 + static void memcpy32_fromio(void *trg, const void __iomem *src, size_t size) 277 + { 278 + int i; 279 + u32 *t = trg; 280 + const __iomem u32 *s = src; 281 + 282 + for (i = 0; i < (size >> 2); i++) 283 + *t++ = __raw_readl(s++); 284 + } 285 + 286 + static void memcpy32_toio(void __iomem *trg, const void *src, int size) 287 + { 288 + int i; 289 + u32 __iomem *t = trg; 290 + const u32 *s = src; 291 + 292 + for (i = 0; i < (size >> 2); i++) 293 + __raw_writel(*s++, t++); 294 + } 295 + 276 296 static int check_int_v3(struct mxc_nand_host *host) 277 297 { 278 298 uint32_t tmp;
··· 539 519 540 520 wait_op_done(host, true); 541 521 542 - memcpy_fromio(host->data_buf, host->main_area0, 16); 522 + memcpy32_fromio(host->data_buf, host->main_area0, 16); 543 523 } 544 524 545 525 /* Request the NANDFC to perform a read of the NAND device ID. */
··· 555 535 /* Wait for operation to complete */ 556 536 wait_op_done(host, true); 557 537 558 - memcpy_fromio(host->data_buf, host->main_area0, 16); 538 + memcpy32_fromio(host->data_buf, host->main_area0, 16); 559 539 560 540 if (this->options & NAND_BUSWIDTH_16) { 561 541 /* compress the ID info */
··· 817 797 818 798 if (bfrom) { 819 799 for (i = 0; i < n - 1; i++) 820 - memcpy_fromio(d + i * j, s + i * t, j); 800 + memcpy32_fromio(d + i * j, s + i * t, j); 821 801 822 802 /* the last section */ 823 - memcpy_fromio(d + i * j, s + i * t, mtd->oobsize - i * j); 803 + memcpy32_fromio(d + i * j, s + i * t, mtd->oobsize - i * j); 824 804 } else { 825 805 for (i = 0; i < n - 1; i++) 826 - memcpy_toio(&s[i * t], &d[i * j], j); 806 + memcpy32_toio(&s[i * t], &d[i * j], j); 827 807 828 808 /* the last section */ 829 - memcpy_toio(&s[i * t], &d[i * j], mtd->oobsize - i * j); 809 + memcpy32_toio(&s[i * t], &d[i * j], mtd->oobsize - i * j); 830 810 } 831 811 } 832 812
··· 1090 1070 1091 1071 host->devtype_data->send_page(mtd, NFC_OUTPUT); 1092 1072 1093 - memcpy_fromio(host->data_buf, host->main_area0, mtd->writesize); 1073 + memcpy32_fromio(host->data_buf, host->main_area0, 1074 + mtd->writesize); 1094 1075 copy_spare(mtd, true); 1095 1076 break;
··· 1107 1086 break; 1108 1087 1109 1088 case NAND_CMD_PAGEPROG: 1110 - memcpy_toio(host->main_area0, host->data_buf, mtd->writesize); 1089 + memcpy32_toio(host->main_area0, host->data_buf, mtd->writesize); 1111 1090 copy_spare(mtd, false); 1112 1091 host->devtype_data->send_page(mtd, NFC_INPUT); 1113 1092 host->devtype_data->send_cmd(host, command, true);
+7
drivers/mtd/nand/nand_base.c
··· 3501 3501 /* propagate ecc info to mtd_info */ 3502 3502 mtd->ecclayout = chip->ecc.layout; 3503 3503 mtd->ecc_strength = chip->ecc.strength; 3504 + /* 3505 + * Initialize bitflip_threshold to its default prior scan_bbt() call. 3506 + * scan_bbt() might invoke mtd_read(), thus bitflip_threshold must be 3507 + * properly set. 3508 + */ 3509 + if (!mtd->bitflip_threshold) 3510 + mtd->bitflip_threshold = mtd->ecc_strength; 3504 3511 3505 3512 /* Check, if we should skip the bad block table scan */ 3506 3513 if (chip->options & NAND_SKIP_BBTSCAN)
+3 -9
drivers/mtd/nand/nandsim.c
··· 28 28 #include <linux/module.h> 29 29 #include <linux/moduleparam.h> 30 30 #include <linux/vmalloc.h> 31 - #include <asm/div64.h> 31 + #include <linux/math64.h> 32 32 #include <linux/slab.h> 33 33 #include <linux/errno.h> 34 34 #include <linux/string.h> ··· 546 546 return kstrdup(buf, GFP_KERNEL); 547 547 } 548 548 549 - static uint64_t divide(uint64_t n, uint32_t d) 550 - { 551 - do_div(n, d); 552 - return n; 553 - } 554 - 555 549 /* 556 550 * Initialize the nandsim structure. 557 551 * ··· 574 580 ns->geom.oobsz = mtd->oobsize; 575 581 ns->geom.secsz = mtd->erasesize; 576 582 ns->geom.pgszoob = ns->geom.pgsz + ns->geom.oobsz; 577 - ns->geom.pgnum = divide(ns->geom.totsz, ns->geom.pgsz); 583 + ns->geom.pgnum = div_u64(ns->geom.totsz, ns->geom.pgsz); 578 584 ns->geom.totszoob = ns->geom.totsz + (uint64_t)ns->geom.pgnum * ns->geom.oobsz; 579 585 ns->geom.secshift = ffs(ns->geom.secsz) - 1; 580 586 ns->geom.pgshift = chip->page_shift; ··· 915 921 916 922 if (!rptwear) 917 923 return 0; 918 - wear_eb_count = divide(mtd->size, mtd->erasesize); 924 + wear_eb_count = div_u64(mtd->size, mtd->erasesize); 919 925 mem = wear_eb_count * sizeof(unsigned long); 920 926 if (mem / sizeof(unsigned long) != wear_eb_count) { 921 927 NS_ERR("Too many erase blocks for wear reporting\n");
+3
drivers/net/ethernet/intel/e1000e/82571.c
··· 1572 1572 ctrl = er32(CTRL); 1573 1573 status = er32(STATUS); 1574 1574 rxcw = er32(RXCW); 1575 + /* SYNCH bit and IV bit are sticky */ 1576 + udelay(10); 1577 + rxcw = er32(RXCW); 1575 1578 1576 1579 if ((rxcw & E1000_RXCW_SYNCH) && !(rxcw & E1000_RXCW_IV)) { 1577 1580
+32 -10
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 325 325 **/ 326 326 static bool e1000_phy_is_accessible_pchlan(struct e1000_hw *hw) 327 327 { 328 - u16 phy_reg; 329 - u32 phy_id; 328 + u16 phy_reg = 0; 329 + u32 phy_id = 0; 330 + s32 ret_val; 331 + u16 retry_count; 330 332 331 - e1e_rphy_locked(hw, PHY_ID1, &phy_reg); 332 - phy_id = (u32)(phy_reg << 16); 333 - e1e_rphy_locked(hw, PHY_ID2, &phy_reg); 334 - phy_id |= (u32)(phy_reg & PHY_REVISION_MASK); 333 + for (retry_count = 0; retry_count < 2; retry_count++) { 334 + ret_val = e1e_rphy_locked(hw, PHY_ID1, &phy_reg); 335 + if (ret_val || (phy_reg == 0xFFFF)) 336 + continue; 337 + phy_id = (u32)(phy_reg << 16); 338 + 339 + ret_val = e1e_rphy_locked(hw, PHY_ID2, &phy_reg); 340 + if (ret_val || (phy_reg == 0xFFFF)) { 341 + phy_id = 0; 342 + continue; 343 + } 344 + phy_id |= (u32)(phy_reg & PHY_REVISION_MASK); 345 + break; 346 + } 335 347 336 348 if (hw->phy.id) { 337 349 if (hw->phy.id == phy_id) 338 350 return true; 339 - } else { 340 - if ((phy_id != 0) && (phy_id != PHY_REVISION_MASK)) 341 - hw->phy.id = phy_id; 351 + } else if (phy_id) { 352 + hw->phy.id = phy_id; 353 + hw->phy.revision = (u32)(phy_reg & ~PHY_REVISION_MASK); 342 354 return true; 343 355 } 344 356 345 - return false; 357 + /* 358 + * In case the PHY needs to be in mdio slow mode, 359 + * set slow mode and try to get the PHY id again. 360 + */ 361 + hw->phy.ops.release(hw); 362 + ret_val = e1000_set_mdio_slow_mode_hv(hw); 363 + if (!ret_val) 364 + ret_val = e1000e_get_phy_id(hw); 365 + hw->phy.ops.acquire(hw); 366 + 367 + return !ret_val; 346 368 } 347 369 348 370 /**
+3
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 193 193 unsigned int i, eop, count = 0; 194 194 unsigned int total_bytes = 0, total_packets = 0; 195 195 196 + if (test_bit(__IXGBEVF_DOWN, &adapter->state)) 197 + return true; 198 + 196 199 i = tx_ring->next_to_clean; 197 200 eop = tx_ring->tx_buffer_info[i].next_to_watch; 198 201 eop_desc = IXGBEVF_TX_DESC(tx_ring, eop);
+4 -4
drivers/of/platform.c
··· 317 317 for(; lookup->compatible != NULL; lookup++) { 318 318 if (!of_device_is_compatible(np, lookup->compatible)) 319 319 continue; 320 - if (of_address_to_resource(np, 0, &res)) 321 - continue; 322 - if (res.start != lookup->phys_addr) 323 - continue; 320 + if (!of_address_to_resource(np, 0, &res)) 321 + if (res.start != lookup->phys_addr) 322 + continue; 324 323 pr_debug("%s: devname=%s\n", np->full_name, lookup->name); 325 324 return lookup; 326 325 } ··· 461 462 of_node_put(root); 462 463 return rc; 463 464 } 465 + EXPORT_SYMBOL_GPL(of_platform_populate); 464 466 #endif /* CONFIG_OF_ADDRESS */
+12
drivers/pci/pci-driver.c
··· 748 748 749 749 pci_pm_set_unknown_state(pci_dev); 750 750 751 + /* 752 + * Some BIOSes from ASUS have a bug: If a USB EHCI host controller's 753 + * PCI COMMAND register isn't 0, the BIOS assumes that the controller 754 + * hasn't been quiesced and tries to turn it off. If the controller 755 + * is already in D3, this can hang or cause memory corruption. 756 + * 757 + * Since the value of the COMMAND register doesn't matter once the 758 + * device has been suspended, we can safely set it to 0 here. 759 + */ 760 + if (pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI) 761 + pci_write_config_word(pci_dev, PCI_COMMAND, 0); 762 + 751 763 return 0; 752 764 } 753 765
-5
drivers/pci/pci.c
··· 1744 1744 if (target_state == PCI_POWER_ERROR) 1745 1745 return -EIO; 1746 1746 1747 - /* Some devices mustn't be in D3 during system sleep */ 1748 - if (target_state == PCI_D3hot && 1749 - (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP)) 1750 - return 0; 1751 - 1752 1747 pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev)); 1753 1748 1754 1749 error = pci_set_power_state(dev, target_state);
-26
drivers/pci/quirks.c
··· 2929 2929 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq); 2930 2930 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq); 2931 2931 2932 - /* 2933 - * The Intel 6 Series/C200 Series chipset's EHCI controllers on many 2934 - * ASUS motherboards will cause memory corruption or a system crash 2935 - * if they are in D3 while the system is put into S3 sleep. 2936 - */ 2937 - static void __devinit asus_ehci_no_d3(struct pci_dev *dev) 2938 - { 2939 - const char *sys_info; 2940 - static const char good_Asus_board[] = "P8Z68-V"; 2941 - 2942 - if (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP) 2943 - return; 2944 - if (dev->subsystem_vendor != PCI_VENDOR_ID_ASUSTEK) 2945 - return; 2946 - sys_info = dmi_get_system_info(DMI_BOARD_NAME); 2947 - if (sys_info && memcmp(sys_info, good_Asus_board, 2948 - sizeof(good_Asus_board) - 1) == 0) 2949 - return; 2950 - 2951 - dev_info(&dev->dev, "broken D3 during system sleep on ASUS\n"); 2952 - dev->dev_flags |= PCI_DEV_FLAGS_NO_D3_DURING_SLEEP; 2953 - device_set_wakeup_capable(&dev->dev, false); 2954 - } 2955 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c26, asus_ehci_no_d3); 2956 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c2d, asus_ehci_no_d3); 2957 - 2958 2932 static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f, 2959 2933 struct pci_fixup *end) 2960 2934 {
+2
drivers/pinctrl/pinctrl-imx.c
··· 474 474 grp->configs[j] = config & ~IMX_PAD_SION; 475 475 } 476 476 477 + #ifdef DEBUG 477 478 IMX_PMX_DUMP(info, grp->pins, grp->mux_mode, grp->configs, grp->npins); 479 + #endif 478 480 479 481 return 0; 480 482 }
+2
drivers/pinctrl/pinctrl-imx6q.c
··· 1950 1950 IMX_PIN_REG(MX6Q_PAD_SD2_DAT3, 0x0744, 0x035C, 5, 0x0000, 0), /* MX6Q_PAD_SD2_DAT3__GPIO_1_12 */ 1951 1951 IMX_PIN_REG(MX6Q_PAD_SD2_DAT3, 0x0744, 0x035C, 6, 0x0000, 0), /* MX6Q_PAD_SD2_DAT3__SJC_DONE */ 1952 1952 IMX_PIN_REG(MX6Q_PAD_SD2_DAT3, 0x0744, 0x035C, 7, 0x0000, 0), /* MX6Q_PAD_SD2_DAT3__ANATOP_TESTO_3 */ 1953 + IMX_PIN_REG(MX6Q_PAD_ENET_RX_ER, 0x04EC, 0x01D8, 0, 0x0000, 0), /* MX6Q_PAD_ENET_RX_ER__ANATOP_USBOTG_ID */ 1954 + IMX_PIN_REG(MX6Q_PAD_GPIO_1, 0x05F4, 0x0224, 3, 0x0000, 0), /* MX6Q_PAD_GPIO_1__ANATOP_USBOTG_ID */ 1953 1955 }; 1954 1956 1955 1957 /* Pad names for the pinmux subsystem */
+3 -3
drivers/platform/x86/ideapad-laptop.c
··· 694 694 static int __devinit ideapad_acpi_add(struct acpi_device *adevice) 695 695 { 696 696 int ret, i; 697 - unsigned long cfg; 697 + int cfg; 698 698 struct ideapad_private *priv; 699 699 700 - if (read_method_int(adevice->handle, "_CFG", (int *)&cfg)) 700 + if (read_method_int(adevice->handle, "_CFG", &cfg)) 701 701 return -ENODEV; 702 702 703 703 priv = kzalloc(sizeof(*priv), GFP_KERNEL); ··· 721 721 goto input_failed; 722 722 723 723 for (i = 0; i < IDEAPAD_RFKILL_DEV_NUM; i++) { 724 - if (test_bit(ideapad_rfk_data[i].cfgbit, &cfg)) 724 + if (test_bit(ideapad_rfk_data[i].cfgbit, &priv->cfg)) 725 725 ideapad_register_rfkill(adevice, i); 726 726 else 727 727 priv->rfk[i] = NULL;
+22
drivers/platform/x86/intel_ips.c
··· 72 72 #include <linux/string.h> 73 73 #include <linux/tick.h> 74 74 #include <linux/timer.h> 75 + #include <linux/dmi.h> 75 76 #include <drm/i915_drm.h> 76 77 #include <asm/msr.h> 77 78 #include <asm/processor.h> ··· 1486 1485 1487 1486 MODULE_DEVICE_TABLE(pci, ips_id_table); 1488 1487 1488 + static int ips_blacklist_callback(const struct dmi_system_id *id) 1489 + { 1490 + pr_info("Blacklisted intel_ips for %s\n", id->ident); 1491 + return 1; 1492 + } 1493 + 1494 + static const struct dmi_system_id ips_blacklist[] = { 1495 + { 1496 + .callback = ips_blacklist_callback, 1497 + .ident = "HP ProBook", 1498 + .matches = { 1499 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 1500 + DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook"), 1501 + }, 1502 + }, 1503 + { } /* terminating entry */ 1504 + }; 1505 + 1489 1506 static int ips_probe(struct pci_dev *dev, const struct pci_device_id *id) 1490 1507 { 1491 1508 u64 platform_info; ··· 1512 1493 int ret = 0; 1513 1494 u16 htshi, trc, trc_required_mask; 1514 1495 u8 tse; 1496 + 1497 + if (dmi_check_system(ips_blacklist)) 1498 + return -ENODEV; 1515 1499 1516 1500 ips = kzalloc(sizeof(struct ips_driver), GFP_KERNEL); 1517 1501 if (!ips)
+89 -47
drivers/platform/x86/sony-laptop.c
··· 973 973 struct device_attribute *attr, 974 974 const char *buffer, size_t count) 975 975 { 976 - unsigned long value = 0; 976 + int value; 977 977 int ret = 0; 978 978 struct sony_nc_value *item = 979 979 container_of(attr, struct sony_nc_value, devattr);
··· 984 984 if (count > 31) 985 985 return -EINVAL; 986 986 987 - if (kstrtoul(buffer, 10, &value)) 987 + if (kstrtoint(buffer, 10, &value)) 988 988 return -EINVAL; 989 989 990 990 if (item->validate)
··· 994 994 return value; 995 995 996 996 ret = sony_nc_int_call(sony_nc_acpi_handle, *item->acpiset, 997 - (int *)&value, NULL); 997 + &value, NULL); 998 998 if (ret < 0) 999 999 return -EIO; 1000 1000
··· 1010 1010 struct sony_backlight_props { 1011 1011 struct backlight_device *dev; 1012 1012 int handle; 1013 + int cmd_base; 1013 1014 u8 offset; 1014 1015 u8 maxlvl; 1015 1016 };
··· 1038 1037 struct sony_backlight_props *sdev = 1039 1038 (struct sony_backlight_props *)bl_get_data(bd); 1040 1039 1041 - sony_call_snc_handle(sdev->handle, 0x0200, &result); 1040 + sony_call_snc_handle(sdev->handle, sdev->cmd_base + 0x100, &result); 1042 1041 1043 1042 return (result & 0xff) - sdev->offset; 1044 1043 }
··· 1050 1049 (struct sony_backlight_props *)bl_get_data(bd); 1051 1050 1052 1051 value = bd->props.brightness + sdev->offset; 1053 - if (sony_call_snc_handle(sdev->handle, 0x0100 | (value << 16), &result)) 1052 + if (sony_call_snc_handle(sdev->handle, sdev->cmd_base | (value << 0x10), 1053 + &result)) 1054 1054 return -EIO; 1055 1055 1056 1056 return value;
··· 1174 1172 /* 1175 1173 * ACPI callbacks 1176 1174 */ 1175 + enum event_types { 1176 + HOTKEY = 1, 1177 + KILLSWITCH, 1178 + GFX_SWITCH 1179 + }; 1177 1180 static void sony_nc_notify(struct acpi_device *device, u32 event) 1178 1181 { 1179 1182 u32 real_ev = event;
··· 1203 1196 /* hotkey event */ 1204 1197 case 0x0100: 1205 1198 case 0x0127: 1206 - ev_type = 1; 1199 + ev_type = HOTKEY; 1207 1200 real_ev = sony_nc_hotkeys_decode(event, handle); 1208 1201 1209 1202 if (real_ev > 0)
··· 1223 1216 * update the rfkill device status when the 1224 1217 * switch is moved. 1225 1218 */ 1226 - ev_type = 2; 1219 + ev_type = KILLSWITCH; 1227 1220 sony_call_snc_handle(handle, 0x0100, &result); 1228 1221 real_ev = result & 0x03; 1229 1222
··· 1231 1224 if (real_ev == 1) 1232 1225 sony_nc_rfkill_update(); 1233 1226 1227 + break; 1228 + 1229 + case 0x0128: 1230 + case 0x0146: 1231 + /* Hybrid GFX switching */ 1232 + sony_call_snc_handle(handle, 0x0000, &result); 1233 + dprintk("GFX switch event received (reason: %s)\n", 1234 + (result & 0x01) ? 1235 + "switch change" : "unknown"); 1236 + 1237 + /* verify the switch state 1238 + * 1: discrete GFX 1239 + * 0: integrated GFX 1240 + */ 1241 + sony_call_snc_handle(handle, 0x0100, &result); 1242 + 1243 + ev_type = GFX_SWITCH; 1244 + real_ev = result & 0xff; 1234 1245 break; 1235 1246 1236 1247 default:
··· 1263 1238 1264 1239 } else { 1265 1240 /* old style event */ 1266 - ev_type = 1; 1241 + ev_type = HOTKEY; 1267 1242 sony_laptop_report_input_event(real_ev); 1268 1243 } 1269 1244
··· 1918 1893 * bits 4,5: store the limit into the EC 1919 1894 * bits 6,7: store the limit into the battery 1920 1895 */ 1896 + cmd = 0; 1921 1897 1922 - /* 1923 - * handle 0x0115 should allow storing on battery too; 1924 - * handle 0x0136 same as 0x0115 + health status; 1925 - * handle 0x013f, same as 0x0136 but no storing on the battery 1926 - * 1927 - * Store only inside the EC for now, regardless the handle number 1928 - */ 1929 - if (value == 0) 1930 - /* disable limits */ 1931 - cmd = 0x0; 1898 + if (value > 0) { 1899 + if (value <= 50) 1900 + cmd = 0x20; 1932 1901 1933 - else if (value <= 50) 1934 - cmd = 0x21; 1902 + else if (value <= 80) 1903 + cmd = 0x10; 1935 1904 1936 - else if (value <= 80) 1937 - cmd = 0x11; 1905 + else if (value <= 100) 1906 + cmd = 0x30; 1938 1907 1939 - else if (value <= 100) 1940 - cmd = 0x31; 1908 + else 1909 + return -EINVAL; 1941 1910 1942 - else 1943 - return -EINVAL; 1911 + /* 1912 + * handle 0x0115 should allow storing on battery too; 1913 + * handle 0x0136 same as 0x0115 + health status; 1914 + * handle 0x013f, same as 0x0136 but no storing on the battery 1915 + */ 1916 + if (bcare_ctl->handle != 0x013f) 1917 + cmd = cmd | (cmd << 2); 1944 1918 1945 - if (sony_call_snc_handle(bcare_ctl->handle, (cmd << 0x10) | 0x0100, 1946 - &result)) 1919 + cmd = (cmd | 0x1) << 0x10; 1920 + } 1921 + 1922 + if (sony_call_snc_handle(bcare_ctl->handle, cmd | 0x0100, &result)) 1947 1923 return -EIO; 1948 1924 1949 1925 return count;
··· 2139 2113 struct device_attribute *attr, char *buffer) 2140 2114 { 2141 2115 ssize_t count = 0; 2142 - unsigned int mode = sony_nc_thermal_mode_get(); 2116 + int mode = sony_nc_thermal_mode_get(); 2143 2117 2144 2118 if (mode < 0) 2145 2119 return mode;
··· 2498 2472 { 2499 2473 u64 offset; 2500 2474 int i; 2475 + int lvl_table_len = 0; 2501 2476 u8 min = 0xff, max = 0x00; 2502 2477 unsigned char buffer[32] = { 0 }; 2503 2478
··· 2507 2480 props->maxlvl = 0xff; 2508 2481 2509 2482 offset = sony_find_snc_handle(handle); 2510 - if (offset < 0) 2511 - return; 2512 2483 2513 2484 /* try to read the boundaries from ACPI tables, if we fail the above 2514 2485 * defaults should be reasonable
··· 2516 2491 if (i < 0) 2517 2492 return; 2518 2493 2494 + switch (handle) { 2495 + case 0x012f: 2496 + case 0x0137: 2497 + lvl_table_len = 9; 2498 + break; 2499 + case 0x143: 2500 + lvl_table_len = 16; 2501 + break; 2502 + } 2503 + 2519 2504 /* the buffer lists brightness levels available, brightness levels are 2520 2505 * from position 0 to 8 in the array, other values are used by ALS 2521 2506 * control. 2522 2507 */ 2523 - for (i = 0; i < 9 && i < ARRAY_SIZE(buffer); i++) { 2508 + for (i = 0; i < lvl_table_len && i < ARRAY_SIZE(buffer); i++) { 2524 2509 2525 2510 dprintk("Brightness level: %d\n", buffer[i]); 2526 2511
··· 2555 2520 const struct backlight_ops *ops = NULL; 2556 2521 struct backlight_properties props; 2557 2522 2558 - if (sony_find_snc_handle(0x12f) != -1) { 2523 + if (sony_find_snc_handle(0x12f) >= 0) { 2559 2524 ops = &sony_backlight_ng_ops; 2525 + sony_bl_props.cmd_base = 0x0100; 2560 2526 sony_nc_backlight_ng_read_limits(0x12f, &sony_bl_props); 2561 2527 max_brightness = sony_bl_props.maxlvl - sony_bl_props.offset; 2562 2528 2563 - } else if (sony_find_snc_handle(0x137) != -1) { 2529 + } else if (sony_find_snc_handle(0x137) >= 0) { 2564 2530 ops = &sony_backlight_ng_ops; 2531 + sony_bl_props.cmd_base = 0x0100; 2565 2532 sony_nc_backlight_ng_read_limits(0x137, &sony_bl_props); 2533 + max_brightness = sony_bl_props.maxlvl - sony_bl_props.offset; 2534 + 2535 + } else if (sony_find_snc_handle(0x143) >= 0) { 2536 + ops = &sony_backlight_ng_ops; 2537 + sony_bl_props.cmd_base = 0x3000; 2538 + sony_nc_backlight_ng_read_limits(0x143, &sony_bl_props); 2566 2539 max_brightness = sony_bl_props.maxlvl - sony_bl_props.offset; 2567 2540 2568 2541 } else if (ACPI_SUCCESS(acpi_get_handle(sony_nc_acpi_handle, "GBRT",
··· 2640 2597 } 2641 2598 } 2642 2599 2600 + result = sony_laptop_setup_input(device); 2601 + if (result) { 2602 + pr_err("Unable to create input devices\n"); 2603 + goto outplatform; 2604 + } 2605 + 2643 2606 if (ACPI_SUCCESS(acpi_get_handle(sony_nc_acpi_handle, "ECON", 2644 2607 &handle))) { 2645 2608 int arg = 1;
··· 2663 2614 } 2664 2615 2665 2616 /* setup input devices and helper fifo */ 2666 - result = sony_laptop_setup_input(device); 2667 - if (result) { 2668 - pr_err("Unable to create input devices\n"); 2669 - goto outsnc; 2670 - } 2671 - 2672 2617 if (acpi_video_backlight_support()) { 2673 2618 pr_info("brightness ignored, must be controlled by ACPI video driver\n"); 2674 2619 } else {
··· 2710 2667 2711 2668 return 0; 2712 2669 2713 - out_sysfs: 2670 + out_sysfs: 2714 2671 for (item = sony_nc_values; item->name; ++item) { 2715 2672 device_remove_file(&sony_pf_device->dev, &item->devattr); 2716 2673 } 2717 2674 sony_nc_backlight_cleanup(); 2718 - 2719 - sony_laptop_remove_input(); 2720 - 2721 - outsnc: 2722 2675 sony_nc_function_cleanup(sony_pf_device); 2723 2676 sony_nc_handles_cleanup(sony_pf_device); 2724 2677 2725 - outpresent: 2678 + outplatform: 2679 + sony_laptop_remove_input(); 2680 + 2681 + outpresent: 2726 2682 sony_pf_remove(); 2727 2683 2728 - outwalk: 2684 + outwalk: 2729 2685 sony_nc_rfkill_cleanup(); 2730 2686 return result; 2731 2687 }
+5 -5
drivers/regulator/core.c
··· 2519 2519 { 2520 2520 struct regulator_dev *rdev = regulator->rdev; 2521 2521 struct regulator *consumer; 2522 - int ret, output_uV, input_uV, total_uA_load = 0; 2522 + int ret, output_uV, input_uV = 0, total_uA_load = 0; 2523 2523 unsigned int mode; 2524 + 2525 + if (rdev->supply) 2526 + input_uV = regulator_get_voltage(rdev->supply); 2524 2527 2525 2528 mutex_lock(&rdev->mutex); 2526 2529 ··· 2557 2554 goto out; 2558 2555 } 2559 2556 2560 - /* get input voltage */ 2561 - input_uV = 0; 2562 - if (rdev->supply) 2563 - input_uV = regulator_get_voltage(rdev->supply); 2557 + /* No supply? Use constraint voltage */ 2564 2558 if (input_uV <= 0) 2565 2559 input_uV = rdev->constraints->input_uV; 2566 2560 if (input_uV <= 0) {
+2
drivers/remoteproc/Kconfig
··· 4 4 config REMOTEPROC 5 5 tristate 6 6 depends on EXPERIMENTAL 7 + select FW_CONFIG 7 8 8 9 config OMAP_REMOTEPROC 9 10 tristate "OMAP remoteproc support" 11 + depends on EXPERIMENTAL 10 12 depends on ARCH_OMAP4 11 13 depends on OMAP_IOMMU 12 14 select REMOTEPROC
+51 -6
drivers/rpmsg/virtio_rpmsg_bus.c
··· 188 188 rpdev->id.name); 189 189 } 190 190 191 + /** 192 + * __ept_release() - deallocate an rpmsg endpoint 193 + * @kref: the ept's reference count 194 + * 195 + * This function deallocates an ept, and is invoked when its @kref refcount 196 + * drops to zero. 197 + * 198 + * Never invoke this function directly! 199 + */ 200 + static void __ept_release(struct kref *kref) 201 + { 202 + struct rpmsg_endpoint *ept = container_of(kref, struct rpmsg_endpoint, 203 + refcount); 204 + /* 205 + * At this point no one holds a reference to ept anymore, 206 + * so we can directly free it 207 + */ 208 + kfree(ept); 209 + } 210 + 191 211 /* for more info, see below documentation of rpmsg_create_ept() */ 192 212 static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp, 193 213 struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,
··· 225 205 dev_err(dev, "failed to kzalloc a new ept\n"); 226 206 return NULL; 227 207 } 208 + 209 + kref_init(&ept->refcount); 210 + mutex_init(&ept->cb_lock); 228 211 229 212 ept->rpdev = rpdev; 230 213 ept->cb = cb;
··· 261 238 idr_remove(&vrp->endpoints, request); 262 239 free_ept: 263 240 mutex_unlock(&vrp->endpoints_lock); 264 - kfree(ept); 241 + kref_put(&ept->refcount, __ept_release); 265 242 return NULL; 266 243 }
··· 325 302 static void 326 303 __rpmsg_destroy_ept(struct virtproc_info *vrp, struct rpmsg_endpoint *ept) 327 304 { 305 + /* make sure new inbound messages can't find this ept anymore */ 328 306 mutex_lock(&vrp->endpoints_lock); 329 307 idr_remove(&vrp->endpoints, ept->addr); 330 308 mutex_unlock(&vrp->endpoints_lock); 331 309 332 - kfree(ept); 310 + /* make sure in-flight inbound messages won't invoke cb anymore */ 311 + mutex_lock(&ept->cb_lock); 312 + ept->cb = NULL; 313 + mutex_unlock(&ept->cb_lock); 314 + 315 + kref_put(&ept->refcount, __ept_release); 333 316 } 334 317 335 318 /**
··· 819 790 820 791 /* use the dst addr to fetch the callback of the appropriate user */ 821 mutex_lock(&vrp->endpoints_lock); 793 + 822 794 ept = idr_find(&vrp->endpoints, msg->dst); 795 + 796 + /* let's make sure no one deallocates ept while we use it */ 797 + if (ept) 798 + kref_get(&ept->refcount); 799 + 823 800 mutex_unlock(&vrp->endpoints_lock); 824 801 825 - if (ept && ept->cb) 826 - ept->cb(ept->rpdev, msg->data, msg->len, ept->priv, msg->src); 827 - else 802 + if (ept) { 803 + /* make sure ept->cb doesn't go away while we use it */ 804 + mutex_lock(&ept->cb_lock); 805 + 806 + if (ept->cb) 807 + ept->cb(ept->rpdev, msg->data, msg->len, ept->priv, 808 + msg->src); 809 + 810 + mutex_unlock(&ept->cb_lock); 811 + 812 + /* farewell, ept, we don't need you anymore */ 813 + kref_put(&ept->refcount, __ept_release); 814 + } else 828 815 dev_warn(dev, "msg received with no recepient\n"); 829 816 830 817 /* publish the real size of the buffer */
··· 1085 1040 1086 1041 return ret; 1087 1042 1088 - module_init(rpmsg_init); 1043 + subsys_initcall(rpmsg_init); 1089 1044 1090 1045 static void __exit rpmsg_fini(void) 1091 1046 {
+8 -2
drivers/rtc/rtc-ab8500.c
··· 17 17 #include <linux/mfd/abx500.h> 18 18 #include <linux/mfd/abx500/ab8500.h> 19 19 #include <linux/delay.h> 20 + #include <linux/of.h> 20 21 21 22 #define AB8500_RTC_SOFF_STAT_REG 0x00 22 23 #define AB8500_RTC_CC_CONF_REG 0x01 ··· 423 422 } 424 423 425 424 err = request_threaded_irq(irq, NULL, rtc_alarm_handler, 426 - IRQF_NO_SUSPEND, "ab8500-rtc", rtc); 425 + IRQF_NO_SUSPEND | IRQF_ONESHOT, "ab8500-rtc", rtc); 427 426 if (err < 0) { 428 427 rtc_device_unregister(rtc); 429 428 return err; 430 429 } 431 430 432 431 platform_set_drvdata(pdev, rtc); 433 - 434 432 435 433 err = ab8500_sysfs_rtc_register(&pdev->dev); 436 434 if (err) { ··· 454 454 return 0; 455 455 } 456 456 457 + static const struct of_device_id ab8500_rtc_match[] = { 458 + { .compatible = "stericsson,ab8500-rtc", }, 459 + {} 460 + }; 461 + 457 462 static struct platform_driver ab8500_rtc_driver = { 458 463 .driver = { 459 464 .name = "ab8500-rtc", 460 465 .owner = THIS_MODULE, 466 + .of_match_table = ab8500_rtc_match, 461 467 }, 462 468 .probe = ab8500_rtc_probe, 463 469 .remove = __devexit_p(ab8500_rtc_remove),
+3 -2
drivers/rtc/rtc-mxc.c
··· 202 202 struct platform_device *pdev = dev_id; 203 203 struct rtc_plat_data *pdata = platform_get_drvdata(pdev); 204 204 void __iomem *ioaddr = pdata->ioaddr; 205 + unsigned long flags; 205 206 u32 status; 206 207 u32 events = 0; 207 208 208 - spin_lock_irq(&pdata->rtc->irq_lock); 209 + spin_lock_irqsave(&pdata->rtc->irq_lock, flags); 209 210 status = readw(ioaddr + RTC_RTCISR) & readw(ioaddr + RTC_RTCIENR); 210 211 /* clear interrupt sources */ 211 212 writew(status, ioaddr + RTC_RTCISR); ··· 225 224 events |= (RTC_PF | RTC_IRQF); 226 225 227 226 rtc_update_irq(pdata->rtc, 1, events); 228 - spin_unlock_irq(&pdata->rtc->irq_lock); 227 + spin_unlock_irqrestore(&pdata->rtc->irq_lock, flags); 229 228 230 229 return IRQ_HANDLED; 231 230 }
+1 -1
drivers/rtc/rtc-spear.c
··· 458 458 clk_disable(config->clk); 459 459 clk_put(config->clk); 460 460 iounmap(config->ioaddr); 461 - kfree(config); 462 461 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 463 462 if (res) 464 463 release_mem_region(res->start, resource_size(res)); 465 464 platform_set_drvdata(pdev, NULL); 466 465 rtc_device_unregister(config->rtc); 466 + kfree(config); 467 467 468 468 return 0; 469 469 }
+1 -1
drivers/rtc/rtc-twl.c
··· 510 510 } 511 511 512 512 ret = request_threaded_irq(irq, NULL, twl_rtc_interrupt, 513 - IRQF_TRIGGER_RISING, 513 + IRQF_TRIGGER_RISING | IRQF_ONESHOT, 514 514 dev_name(&rtc->dev), rtc); 515 515 if (ret < 0) { 516 516 dev_err(&pdev->dev, "IRQ is not free.\n");
+1 -1
drivers/scsi/aic94xx/aic94xx_task.c
··· 201 201 202 202 if (SAS_STATUS_BUF_SIZE >= sizeof(*resp)) { 203 203 resp->frame_len = le16_to_cpu(*(__le16 *)(r+6)); 204 - memcpy(&resp->ending_fis[0], r+16, 24); 204 + memcpy(&resp->ending_fis[0], r+16, ATA_RESP_FIS_SIZE); 205 205 ts->buf_valid_size = sizeof(*resp); 206 206 } 207 207 }
+1
drivers/scsi/bnx2i/bnx2i.h
··· 400 400 struct pci_dev *pcidev; 401 401 struct net_device *netdev; 402 402 void __iomem *regview; 403 + resource_size_t reg_base; 403 404 404 405 u32 age; 405 406 unsigned long cnic_dev_type;
+1 -2
drivers/scsi/bnx2i/bnx2i_hwi.c
··· 2741 2741 goto arm_cq; 2742 2742 } 2743 2743 2744 - reg_base = ep->hba->netdev->base_addr; 2745 2744 if ((test_bit(BNX2I_NX2_DEV_5709, &ep->hba->cnic_dev_type)) && 2746 2745 (ep->hba->mail_queue_access == BNX2I_MQ_BIN_MODE)) { 2747 2746 config2 = REG_RD(ep->hba, BNX2_MQ_CONFIG2); ··· 2756 2757 /* 5709 device in normal node and 5706/5708 devices */ 2757 2758 reg_off = CTX_OFFSET + (MB_KERNEL_CTX_SIZE * cid_num); 2758 2759 2759 - ep->qp.ctx_base = ioremap_nocache(reg_base + reg_off, 2760 + ep->qp.ctx_base = ioremap_nocache(ep->hba->reg_base + reg_off, 2760 2761 MB_KERNEL_CTX_SIZE); 2761 2762 if (!ep->qp.ctx_base) 2762 2763 return -ENOMEM;
+5 -5
drivers/scsi/bnx2i/bnx2i_iscsi.c
··· 811 811 bnx2i_identify_device(hba); 812 812 bnx2i_setup_host_queue_size(hba, shost); 813 813 814 + hba->reg_base = pci_resource_start(hba->pcidev, 0); 814 815 if (test_bit(BNX2I_NX2_DEV_5709, &hba->cnic_dev_type)) { 815 - hba->regview = ioremap_nocache(hba->netdev->base_addr, 816 - BNX2_MQ_CONFIG2); 816 + hba->regview = pci_iomap(hba->pcidev, 0, BNX2_MQ_CONFIG2); 817 817 if (!hba->regview) 818 818 goto ioreg_map_err; 819 819 } else if (test_bit(BNX2I_NX2_DEV_57710, &hba->cnic_dev_type)) { 820 - hba->regview = ioremap_nocache(hba->netdev->base_addr, 4096); 820 + hba->regview = pci_iomap(hba->pcidev, 0, 4096); 821 821 if (!hba->regview) 822 822 goto ioreg_map_err; 823 823 } ··· 889 889 bnx2i_free_mp_bdt(hba); 890 890 mp_bdt_mem_err: 891 891 if (hba->regview) { 892 - iounmap(hba->regview); 892 + pci_iounmap(hba->pcidev, hba->regview); 893 893 hba->regview = NULL; 894 894 } 895 895 ioreg_map_err: ··· 915 915 pci_dev_put(hba->pcidev); 916 916 917 917 if (hba->regview) { 918 - iounmap(hba->regview); 918 + pci_iounmap(hba->pcidev, hba->regview); 919 919 hba->regview = NULL; 920 920 } 921 921 bnx2i_free_mp_bdt(hba);
+6 -6
drivers/scsi/libsas/sas_ata.c
··· 139 139 if (stat->stat == SAS_PROTO_RESPONSE || stat->stat == SAM_STAT_GOOD || 140 140 ((stat->stat == SAM_STAT_CHECK_CONDITION && 141 141 dev->sata_dev.command_set == ATAPI_COMMAND_SET))) { 142 - ata_tf_from_fis(resp->ending_fis, &dev->sata_dev.tf); 142 + memcpy(dev->sata_dev.fis, resp->ending_fis, ATA_RESP_FIS_SIZE); 143 143 144 144 if (!link->sactive) { 145 - qc->err_mask |= ac_err_mask(dev->sata_dev.tf.command); 145 + qc->err_mask |= ac_err_mask(dev->sata_dev.fis[2]); 146 146 } else { 147 - link->eh_info.err_mask |= ac_err_mask(dev->sata_dev.tf.command); 147 + link->eh_info.err_mask |= ac_err_mask(dev->sata_dev.fis[2]); 148 148 if (unlikely(link->eh_info.err_mask)) 149 149 qc->flags |= ATA_QCFLAG_FAILED; 150 150 } ··· 161 161 qc->flags |= ATA_QCFLAG_FAILED; 162 162 } 163 163 164 - dev->sata_dev.tf.feature = 0x04; /* status err */ 165 - dev->sata_dev.tf.command = ATA_ERR; 164 + dev->sata_dev.fis[3] = 0x04; /* status err */ 165 + dev->sata_dev.fis[2] = ATA_ERR; 166 166 } 167 167 } 168 168 ··· 269 269 { 270 270 struct domain_device *dev = qc->ap->private_data; 271 271 272 - memcpy(&qc->result_tf, &dev->sata_dev.tf, sizeof(qc->result_tf)); 272 + ata_tf_from_fis(dev->sata_dev.fis, &qc->result_tf); 273 273 return true; 274 274 } 275 275
+18 -17
drivers/scsi/qla2xxx/qla_target.c
··· 3960 3960 { 3961 3961 struct qla_hw_data *ha = vha->hw; 3962 3962 struct qla_tgt *tgt = ha->tgt.qla_tgt; 3963 - int reason_code; 3963 + int login_code; 3964 3964 3965 3965 ql_dbg(ql_dbg_tgt, vha, 0xe039, 3966 3966 "scsi(%ld): ha state %d init_done %d oper_mode %d topo %d\n", ··· 4003 4003 { 4004 4004 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03b, 4005 4005 "qla_target(%d): Async LOOP_UP occured " 4006 - "(m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx, 4007 - le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4008 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4006 + "(m[0]=%x, m[1]=%x, m[2]=%x, m[3]=%x)", vha->vp_idx, 4007 + le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4008 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4009 4009 if (tgt->link_reinit_iocb_pending) { 4010 4010 qlt_send_notify_ack(vha, (void *)&tgt->link_reinit_iocb, 4011 4011 0, 0, 0, 0, 0, 0); ··· 4020 4020 case MBA_RSCN_UPDATE: 4021 4021 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03c, 4022 4022 "qla_target(%d): Async event %#x occured " 4023 - "(m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx, code, 4024 - le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4025 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4023 + "(m[0]=%x, m[1]=%x, m[2]=%x, m[3]=%x)", vha->vp_idx, code, 4024 + le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4025 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4026 4026 break; 4027 4027 4028 4028 case MBA_PORT_UPDATE: 4029 4029 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03d, 4030 4030 "qla_target(%d): Port update async event %#x " 4031 - "occured: updating the ports database (m[1]=%x, m[2]=%x, " 4032 - "m[3]=%x, m[4]=%x)", vha->vp_idx, code, 4033 - le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4034 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4035 - reason_code = le16_to_cpu(mailbox[2]); 4036 - if (reason_code == 0x4) 4031 + "occured: updating the ports database (m[0]=%x, m[1]=%x, " 4032 + "m[2]=%x, m[3]=%x)", vha->vp_idx, code, 4033 + le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4034 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4035 + 4036 + login_code = le16_to_cpu(mailbox[2]); 4037 + if (login_code == 0x4) 4037 4038 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03e, 4038 4039 "Async MB 2: Got PLOGI Complete\n"); 4039 - else if (reason_code == 0x7) 4040 + else if (login_code == 0x7) 4040 4041 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03f, 4041 4042 "Async MB 2: Port Logged Out\n"); 4042 4043 break; ··· 4045 4044 default: 4046 4045 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf040, 4047 4046 "qla_target(%d): Async event %#x occured: " 4048 4047 "ignore (m[0]=%x, m[1]=%x, m[2]=%x, m[3]=%x)", vha->vp_idx, 4049 4048 code, le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4050 4049 le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4051 4050 break; 4052 4051 } 4053 4052
-5
drivers/scsi/scsi_wait_scan.c
··· 22 22 * and might not yet have reached the scsi async scanning 23 23 */ 24 24 wait_for_device_probe(); 25 - /* 26 - * and then we wait for the actual asynchronous scsi scan 27 - * to finish. 28 - */ 29 - scsi_complete_async_scans(); 30 25 return 0; 31 26 } 32 27
+1 -1
drivers/target/target_core_cdb.c
··· 1095 1095 if (num_blocks != 0) 1096 1096 range = num_blocks; 1097 1097 else 1098 - range = (dev->transport->get_blocks(dev) - lba); 1098 + range = (dev->transport->get_blocks(dev) - lba) + 1; 1099 1099 1100 1100 pr_debug("WRITE_SAME UNMAP: LBA: %llu Range: %llu\n", 1101 1101 (unsigned long long)lba, (unsigned long long)range);
+4 -3
drivers/target/target_core_pr.c
··· 2031 2031 if (IS_ERR(file) || !file || !file->f_dentry) { 2032 2032 pr_err("filp_open(%s) for APTPL metadata" 2033 2033 " failed\n", path); 2034 - return (PTR_ERR(file) < 0 ? PTR_ERR(file) : -ENOENT); 2034 + return IS_ERR(file) ? PTR_ERR(file) : -ENOENT; 2035 2035 } 2036 2036 2037 2037 iov[0].iov_base = &buf[0]; ··· 3818 3818 " SPC-2 reservation is held, returning" 3819 3819 " RESERVATION_CONFLICT\n"); 3820 3820 cmd->scsi_sense_reason = TCM_RESERVATION_CONFLICT; 3821 - ret = EINVAL; 3821 + ret = -EINVAL; 3822 3822 goto out; 3823 3823 } 3824 3824 ··· 3828 3828 */ 3829 3829 if (!cmd->se_sess) { 3830 3830 cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3831 - return -EINVAL; 3831 + ret = -EINVAL; 3832 + goto out; 3832 3833 } 3833 3834 3834 3835 if (cmd->data_length < 24) {
+2
drivers/target/tcm_fc/tfc_cmd.c
··· 230 230 { 231 231 struct ft_cmd *cmd = container_of(se_cmd, struct ft_cmd, se_cmd); 232 232 233 + if (cmd->aborted) 234 + return ~0; 233 235 return fc_seq_exch(cmd->seq)->rxid; 234 236 } 235 237
+2 -1
drivers/target/tcm_fc/tfc_sess.c
··· 58 58 struct ft_tport *tport; 59 59 int i; 60 60 61 - tport = rcu_dereference(lport->prov[FC_TYPE_FCP]); 61 + tport = rcu_dereference_protected(lport->prov[FC_TYPE_FCP], 62 + lockdep_is_held(&ft_lport_lock)); 62 63 if (tport && tport->tpg) 63 64 return tport; 64 65
+1 -1
drivers/tty/hvc/hvc_opal.c
··· 401 401 } 402 402 403 403 #ifdef CONFIG_PPC_EARLY_DEBUG_OPAL_RAW 404 - void __init udbg_init_debug_opal(void) 404 + void __init udbg_init_debug_opal_raw(void) 405 405 { 406 406 u32 index = CONFIG_PPC_EARLY_DEBUG_OPAL_VTERMNO; 407 407 hvc_opal_privs[index] = &hvc_opal_boot_priv;
+2
drivers/usb/class/cdc-wdm.c
··· 500 500 goto retry; 501 501 } 502 502 if (!desc->reslength) { /* zero length read */ 503 + dev_dbg(&desc->intf->dev, "%s: zero length - clearing WDM_READ\n", __func__); 504 + clear_bit(WDM_READ, &desc->flags); 503 505 spin_unlock_irq(&desc->iuspin); 504 506 goto retry; 505 507 }
+10 -8
drivers/usb/core/hub.c
··· 2324 2324 static int hub_port_reset(struct usb_hub *hub, int port1, 2325 2325 struct usb_device *udev, unsigned int delay, bool warm); 2326 2326 2327 - /* Is a USB 3.0 port in the Inactive state? */ 2328 - static bool hub_port_inactive(struct usb_hub *hub, u16 portstatus) 2327 + /* Is a USB 3.0 port in the Inactive or Compliance Mode state? 2328 + * Port warm reset is required to recover 2329 + */ 2330 + static bool hub_port_warm_reset_required(struct usb_hub *hub, u16 portstatus) 2329 2331 { 2330 2332 return hub_is_superspeed(hub->hdev) && 2331 - (portstatus & USB_PORT_STAT_LINK_STATE) == 2332 - USB_SS_PORT_LS_SS_INACTIVE; 2333 + (((portstatus & USB_PORT_STAT_LINK_STATE) == 2334 + USB_SS_PORT_LS_SS_INACTIVE) || 2335 + ((portstatus & USB_PORT_STAT_LINK_STATE) == 2336 + USB_SS_PORT_LS_COMP_MOD)); 2333 2337 } 2334 2338 2335 2339 static int hub_port_wait_reset(struct usb_hub *hub, int port1, ··· 2369 2365 * 2370 2366 * See https://bugzilla.kernel.org/show_bug.cgi?id=41752 2371 2367 */ 2372 - if (hub_port_inactive(hub, portstatus)) { 2368 + if (hub_port_warm_reset_required(hub, portstatus)) { 2373 2369 int ret; 2374 2370 2375 2371 if ((portchange & USB_PORT_STAT_C_CONNECTION)) ··· 4412 4408 /* Warm reset a USB3 protocol port if it's in 4413 4409 * SS.Inactive state. 4414 4410 */ 4415 - if (hub_is_superspeed(hub->hdev) && 4416 - (portstatus & USB_PORT_STAT_LINK_STATE) 4417 - == USB_SS_PORT_LS_SS_INACTIVE) { 4411 + if (hub_port_warm_reset_required(hub, portstatus)) { 4418 4412 dev_dbg(hub_dev, "warm reset port %d\n", i); 4419 4413 hub_port_reset(hub, i, NULL, 4420 4414 HUB_BH_RESET_TIME, true);
+8 -10
drivers/usb/host/ehci-omap.c
··· 281 281 } 282 282 } 283 283 284 + /* Hold PHYs in reset while initializing EHCI controller */ 284 285 if (pdata->phy_reset) { 285 286 if (gpio_is_valid(pdata->reset_gpio_port[0])) 286 - gpio_request_one(pdata->reset_gpio_port[0], 287 - GPIOF_OUT_INIT_LOW, "USB1 PHY reset"); 287 + gpio_set_value_cansleep(pdata->reset_gpio_port[0], 0); 288 288 289 289 if (gpio_is_valid(pdata->reset_gpio_port[1])) 290 - gpio_request_one(pdata->reset_gpio_port[1], 291 - GPIOF_OUT_INIT_LOW, "USB2 PHY reset"); 290 + gpio_set_value_cansleep(pdata->reset_gpio_port[1], 0); 292 291 293 292 /* Hold the PHY in RESET for enough time till DIR is high */ 294 293 udelay(10); ··· 329 330 omap_ehci->hcs_params = readl(&omap_ehci->caps->hcs_params); 330 331 331 332 ehci_reset(omap_ehci); 333 + ret = usb_add_hcd(hcd, irq, IRQF_SHARED); 334 + if (ret) { 335 + dev_err(dev, "failed to add hcd with err %d\n", ret); 336 + goto err_add_hcd; 337 + } 332 338 333 339 if (pdata->phy_reset) { 334 340 /* Hold the PHY in RESET for enough time till ··· 346 342 347 343 if (gpio_is_valid(pdata->reset_gpio_port[1])) 348 344 gpio_set_value_cansleep(pdata->reset_gpio_port[1], 1); 349 - } 350 - 351 - ret = usb_add_hcd(hcd, irq, IRQF_SHARED); 352 - if (ret) { 353 - dev_err(dev, "failed to add hcd with err %d\n", ret); 354 - goto err_add_hcd; 355 345 } 356 346 357 347 /* root ports should always stay powered */
+38 -6
drivers/usb/host/xhci-hub.c
··· 462 462 } 463 463 } 464 464 465 + /* Updates Link Status for super Speed port */ 466 + static void xhci_hub_report_link_state(u32 *status, u32 status_reg) 467 + { 468 + u32 pls = status_reg & PORT_PLS_MASK; 469 + 470 + /* resume state is a xHCI internal state. 471 + * Do not report it to usb core. 472 + */ 473 + if (pls == XDEV_RESUME) 474 + return; 475 + 476 + /* When the CAS bit is set then warm reset 477 + * should be performed on port 478 + */ 479 + if (status_reg & PORT_CAS) { 480 + /* The CAS bit can be set while the port is 481 + * in any link state. 482 + * Only roothubs have CAS bit, so we 483 + * pretend to be in compliance mode 484 + * unless we're already in compliance 485 + * or the inactive state. 486 + */ 487 + if (pls != USB_SS_PORT_LS_COMP_MOD && 488 + pls != USB_SS_PORT_LS_SS_INACTIVE) { 489 + pls = USB_SS_PORT_LS_COMP_MOD; 490 + } 491 + /* Return also connection bit - 492 + * hub state machine resets port 493 + * when this bit is set. 494 + */ 495 + pls |= USB_PORT_STAT_CONNECTION; 496 + } 497 + /* update status field */ 498 + *status |= pls; 499 + } 500 + 465 501 int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, 466 502 u16 wIndex, char *buf, u16 wLength) 467 503 { ··· 642 606 else 643 607 status |= USB_PORT_STAT_POWER; 644 608 } 645 - /* Port Link State */ 609 + /* Update Port Link State for super speed ports*/ 646 610 if (hcd->speed == HCD_USB3) { 647 - /* resume state is a xHCI internal state. 648 - * Do not report it to usb core. 649 - */ 650 - if ((temp & PORT_PLS_MASK) != XDEV_RESUME) 651 - status |= (temp & PORT_PLS_MASK); 611 + xhci_hub_report_link_state(&status, temp); 652 612 } 653 613 if (bus_state->port_c_suspend & (1 << wIndex)) 654 614 status |= 1 << USB_PORT_FEAT_C_SUSPEND;
+11
drivers/usb/host/xhci-ring.c
··· 885 885 num_trbs_free_temp = ep_ring->num_trbs_free; 886 886 dequeue_temp = ep_ring->dequeue; 887 887 888 + /* If we get two back-to-back stalls, and the first stalled transfer 889 + * ends just before a link TRB, the dequeue pointer will be left on 890 + * the link TRB by the code in the while loop. So we have to update 891 + * the dequeue pointer one segment further, or we'll jump off 892 + * the segment into la-la-land. 893 + */ 894 + if (last_trb(xhci, ep_ring, ep_ring->deq_seg, ep_ring->dequeue)) { 895 + ep_ring->deq_seg = ep_ring->deq_seg->next; 896 + ep_ring->dequeue = ep_ring->deq_seg->trbs; 897 + } 898 + 888 899 while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) { 889 900 /* We have more usable TRBs */ 890 901 ep_ring->num_trbs_free++;
+5 -1
drivers/usb/host/xhci.h
··· 341 341 #define PORT_PLC (1 << 22) 342 342 /* port configure error change - port failed to configure its link partner */ 343 343 #define PORT_CEC (1 << 23) 344 - /* bit 24 reserved */ 344 + /* Cold Attach Status - xHC can set this bit to report device attached during 345 + * Sx state. Warm port reset should be performed to clear this bit and move port 346 + * to connected state. 347 + */ 348 + #define PORT_CAS (1 << 24) 345 349 /* wake on connect (enable) */ 346 350 #define PORT_WKCONN_E (1 << 25) 347 351 /* wake on disconnect (enable) */
-8
drivers/usb/serial/metro-usb.c
··· 222 222 metro_priv->throttled = 0; 223 223 spin_unlock_irqrestore(&metro_priv->lock, flags); 224 224 225 - /* 226 - * Force low_latency on so that our tty_push actually forces the data 227 - * through, otherwise it is scheduled, and with high data rates (like 228 - * with OHCI) data can get lost. 229 - */ 230 - if (tty) 231 - tty->low_latency = 1; 232 - 233 225 /* Clear the urb pipe. */ 234 226 usb_clear_halt(serial->dev, port->interrupt_in_urb->pipe); 235 227
+26
drivers/usb/serial/option.c
··· 497 497 498 498 /* MediaTek products */ 499 499 #define MEDIATEK_VENDOR_ID 0x0e8d 500 + #define MEDIATEK_PRODUCT_DC_1COM 0x00a0 501 + #define MEDIATEK_PRODUCT_DC_4COM 0x00a5 502 + #define MEDIATEK_PRODUCT_DC_5COM 0x00a4 503 + #define MEDIATEK_PRODUCT_7208_1COM 0x7101 504 + #define MEDIATEK_PRODUCT_7208_2COM 0x7102 505 + #define MEDIATEK_PRODUCT_FP_1COM 0x0003 506 + #define MEDIATEK_PRODUCT_FP_2COM 0x0023 507 + #define MEDIATEK_PRODUCT_FPDC_1COM 0x0043 508 + #define MEDIATEK_PRODUCT_FPDC_2COM 0x0033 500 509 501 510 /* Cellient products */ 502 511 #define CELLIENT_VENDOR_ID 0x2692 ··· 561 552 562 553 static const struct option_blacklist_info net_intf1_blacklist = { 563 554 .reserved = BIT(1), 555 + }; 556 + 557 + static const struct option_blacklist_info net_intf2_blacklist = { 558 + .reserved = BIT(2), 564 559 }; 565 560 566 561 static const struct option_blacklist_info net_intf3_blacklist = { ··· 1112 1099 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1298, 0xff, 0xff, 0xff) }, 1113 1100 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1299, 0xff, 0xff, 0xff) }, 1114 1101 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1300, 0xff, 0xff, 0xff) }, 1102 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1402, 0xff, 0xff, 0xff), 1103 + .driver_info = (kernel_ulong_t)&net_intf2_blacklist }, 1115 1104 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff, 1116 1105 0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist }, 1117 1106 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) }, ··· 1255 1240 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a1, 0xff, 0x02, 0x01) }, 1256 1241 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x00, 0x00) }, 1257 1242 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x02, 0x01) }, /* MediaTek MT6276M modem & app port */ 1243 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_1COM, 0x0a, 0x00, 0x00) }, 1244 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_5COM, 0xff, 0x02, 0x01) }, 1245 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_5COM, 0xff, 0x00, 0x00) }, 1246 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM, 0xff, 0x02, 0x01) }, 1247 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM, 0xff, 0x00, 0x00) }, 1248 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7208_1COM, 0x02, 0x00, 0x00) }, 1249 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7208_2COM, 0x02, 0x02, 0x01) }, 1250 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FP_1COM, 0x0a, 0x00, 0x00) }, 1251 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FP_2COM, 0x0a, 0x00, 0x00) }, 1252 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FPDC_1COM, 0x0a, 0x00, 0x00) }, 1253 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FPDC_2COM, 0x0a, 0x00, 0x00) }, 1258 1254 { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) }, 1259 1255 { } /* Terminating entry */ 1260 1256 };
+27 -16
drivers/video/omap2/dss/core.c
··· 32 32 #include <linux/io.h> 33 33 #include <linux/device.h> 34 34 #include <linux/regulator/consumer.h> 35 + #include <linux/suspend.h> 35 36 36 37 #include <video/omapdss.h> 37 38 ··· 202 201 #endif /* CONFIG_DEBUG_FS && CONFIG_OMAP2_DSS_DEBUG_SUPPORT */ 203 202 204 203 /* PLATFORM DEVICE */ 204 + static int omap_dss_pm_notif(struct notifier_block *b, unsigned long v, void *d) 205 + { 206 + DSSDBG("pm notif %lu\n", v); 207 + 208 + switch (v) { 209 + case PM_SUSPEND_PREPARE: 210 + DSSDBG("suspending displays\n"); 211 + return dss_suspend_all_devices(); 212 + 213 + case PM_POST_SUSPEND: 214 + DSSDBG("resuming displays\n"); 215 + return dss_resume_all_devices(); 216 + 217 + default: 218 + return 0; 219 + } 220 + } 221 + 222 + static struct notifier_block omap_dss_pm_notif_block = { 223 + .notifier_call = omap_dss_pm_notif, 224 + }; 225 + 205 226 static int __init omap_dss_probe(struct platform_device *pdev) 206 227 { 207 228 struct omap_dss_board_info *pdata = pdev->dev.platform_data; ··· 247 224 else if (pdata->default_device) 248 225 core.default_display_name = pdata->default_device->name; 249 226 227 + register_pm_notifier(&omap_dss_pm_notif_block); 228 + 250 229 return 0; 251 230 252 231 err_debugfs: ··· 258 233 259 234 static int omap_dss_remove(struct platform_device *pdev) 260 235 { 236 + unregister_pm_notifier(&omap_dss_pm_notif_block); 237 + 261 238 dss_uninitialize_debugfs(); 262 239 263 240 dss_uninit_overlays(pdev); ··· 274 247 dss_disable_all_devices(); 275 248 } 276 249 277 - static int omap_dss_suspend(struct platform_device *pdev, pm_message_t state) 278 - { 279 - DSSDBG("suspend %d\n", state.event); 280 - 281 - return dss_suspend_all_devices(); 282 - } 283 - 284 - static int omap_dss_resume(struct platform_device *pdev) 285 - { 286 - DSSDBG("resume\n"); 287 - 288 - return dss_resume_all_devices(); 289 - } 290 - 291 250 static struct platform_driver omap_dss_driver = { 292 251 .remove = omap_dss_remove, 293 252 .shutdown = omap_dss_shutdown, 294 - .suspend = omap_dss_suspend, 295 - .resume = omap_dss_resume, 296 253 .driver = { 297 254 .name = "omapdss", 298 255 .owner = THIS_MODULE,
+1 -1
drivers/video/omap2/dss/dispc.c
··· 384 384 DSSDBG("dispc_runtime_put\n"); 385 385 386 386 r = pm_runtime_put_sync(&dispc.pdev->dev); 387 - WARN_ON(r < 0); 387 + WARN_ON(r < 0 && r != -ENOSYS); 388 388 } 389 389 390 390 static inline bool dispc_mgr_is_lcd(enum omap_channel channel)
+1 -1
drivers/video/omap2/dss/dsi.c
··· 1075 1075 DSSDBG("dsi_runtime_put\n"); 1076 1076 1077 1077 r = pm_runtime_put_sync(&dsi->pdev->dev); 1078 - WARN_ON(r < 0); 1078 + WARN_ON(r < 0 && r != -ENOSYS); 1079 1079 } 1080 1080 1081 1081 /* source clock for DSI PLL. this could also be PCLKFREE */
+1 -1
drivers/video/omap2/dss/dss.c
··· 731 731 DSSDBG("dss_runtime_put\n"); 732 732 733 733 r = pm_runtime_put_sync(&dss.pdev->dev); 734 - WARN_ON(r < 0 && r != -EBUSY); 734 + WARN_ON(r < 0 && r != -ENOSYS && r != -EBUSY); 735 735 } 736 736 737 737 /* DEBUGFS */
+1 -1
drivers/video/omap2/dss/hdmi.c
··· 138 138 DSSDBG("hdmi_runtime_put\n"); 139 139 140 140 r = pm_runtime_put_sync(&hdmi.pdev->dev); 141 - WARN_ON(r < 0); 141 + WARN_ON(r < 0 && r != -ENOSYS); 142 142 } 143 143 144 144 static int __init hdmi_init_display(struct omap_dss_device *dssdev)
+1 -1
drivers/video/omap2/dss/rfbi.c
··· 141 141 DSSDBG("rfbi_runtime_put\n"); 142 142 143 143 r = pm_runtime_put_sync(&rfbi.pdev->dev); 144 - WARN_ON(r < 0); 144 + WARN_ON(r < 0 && r != -ENOSYS); 145 145 } 146 146 147 147 void rfbi_bus_lock(void)
+1 -1
drivers/video/omap2/dss/venc.c
··· 402 402 DSSDBG("venc_runtime_put\n"); 403 403 404 404 r = pm_runtime_put_sync(&venc.pdev->dev); 405 - WARN_ON(r < 0); 405 + WARN_ON(r < 0 && r != -ENOSYS); 406 406 } 407 407 408 408 static const struct venc_config *venc_timings_to_config(
+10 -14
drivers/virtio/virtio_balloon.c
··· 47 47 struct task_struct *thread; 48 48 49 49 /* Waiting for host to ack the pages we released. */ 50 - struct completion acked; 50 + wait_queue_head_t acked; 51 51 52 52 /* Number of balloon pages we've told the Host we're not using. */ 53 53 unsigned int num_pages; ··· 89 89 90 90 static void balloon_ack(struct virtqueue *vq) 91 91 { 92 - struct virtio_balloon *vb; 93 - unsigned int len; 92 + struct virtio_balloon *vb = vq->vdev->priv; 94 93 95 - vb = virtqueue_get_buf(vq, &len); 96 - if (vb) 97 - complete(&vb->acked); 94 + wake_up(&vb->acked); 98 95 } 99 96 100 97 static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) 101 98 { 102 99 struct scatterlist sg; 100 + unsigned int len; 103 101 104 102 sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); 105 - 106 - init_completion(&vb->acked); 107 103 108 104 /* We should always be able to add one buffer to an empty queue. */ 109 105 if (virtqueue_add_buf(vq, &sg, 1, 0, vb, GFP_KERNEL) < 0) ··· 107 111 virtqueue_kick(vq); 108 112 109 113 /* When host has read buffer, this completes via balloon_ack */ 110 - wait_for_completion(&vb->acked); 114 + wait_event(vb->acked, virtqueue_get_buf(vq, &len)); 111 115 } 112 116 113 117 static void set_page_pfns(u32 pfns[], struct page *page) ··· 227 231 */ 228 232 static void stats_request(struct virtqueue *vq) 229 233 { 230 - struct virtio_balloon *vb; 231 - unsigned int len; 234 + struct virtio_balloon *vb = vq->vdev->priv; 232 235 233 - vb = virtqueue_get_buf(vq, &len); 234 - if (!vb) 235 - return; 236 236 vb->need_stats_update = 1; 237 237 wake_up(&vb->config_change); 238 238 } ··· 237 245 { 238 246 struct virtqueue *vq; 239 247 struct scatterlist sg; 248 + unsigned int len; 240 249 241 250 vb->need_stats_update = 0; 242 251 update_balloon_stats(vb); 243 252 244 253 vq = vb->stats_vq; 254 + if (!virtqueue_get_buf(vq, &len)) 255 + return; 245 256 sg_init_one(&sg, vb->stats, sizeof(vb->stats)); 246 257 if (virtqueue_add_buf(vq, &sg, 1, 0, vb, GFP_KERNEL) < 0) 247 258 BUG(); ··· 353 358 INIT_LIST_HEAD(&vb->pages); 354 359 vb->num_pages = 0; 355 360 init_waitqueue_head(&vb->config_change); 361 + init_waitqueue_head(&vb->acked); 356 362 vb->vdev = vdev; 357 363 vb->need_stats_update = 0; 358 364
+9 -6
fs/btrfs/backref.c
··· 301 301 goto out; 302 302 303 303 eb = path->nodes[level]; 304 - if (!eb) { 305 - WARN_ON(1); 306 - ret = 1; 307 - goto out; 304 + while (!eb) { 305 + if (!level) { 306 + WARN_ON(1); 307 + ret = 1; 308 + goto out; 309 + } 310 + level--; 311 + eb = path->nodes[level]; 308 312 } 309 313 310 314 ret = add_all_parents(root, path, parents, level, &ref->key_for_search, ··· 839 835 } 840 836 ret = __add_delayed_refs(head, delayed_ref_seq, 841 837 &prefs_delayed); 838 + mutex_unlock(&head->mutex); 842 839 if (ret) { 843 840 spin_unlock(&delayed_refs->lock); 844 841 goto out; ··· 933 928 } 934 929 935 930 out: 936 - if (head) 937 - mutex_unlock(&head->mutex); 938 931 btrfs_free_path(path); 939 932 while (!list_empty(&prefs)) { 940 933 ref = list_first_entry(&prefs, struct __prelim_ref, list);
+35 -25
fs/btrfs/ctree.c
··· 1024 1024 if (!looped && !tm) 1025 1025 return 0; 1026 1026 /* 1027 - * we must have key remove operations in the log before the 1028 - * replace operation. 1027 + * if there are no tree operation for the oldest root, we simply 1028 + * return it. this should only happen if that (old) root is at 1029 + * level 0. 1029 1030 */ 1030 - BUG_ON(!tm); 1031 + if (!tm) 1032 + break; 1031 1033 1034 + /* 1035 + * if there's an operation that's not a root replacement, we 1036 + * found the oldest version of our root. normally, we'll find a 1037 + * MOD_LOG_KEY_REMOVE_WHILE_FREEING operation here. 1038 + */ 1032 1039 if (tm->op != MOD_LOG_ROOT_REPLACE) 1033 1040 break; 1034 1041 ··· 1094 1087 tm->generation); 1095 1088 break; 1096 1089 case MOD_LOG_KEY_ADD: 1097 - if (tm->slot != n - 1) { 1098 - o_dst = btrfs_node_key_ptr_offset(tm->slot); 1099 - o_src = btrfs_node_key_ptr_offset(tm->slot + 1); 1100 - memmove_extent_buffer(eb, o_dst, o_src, p_size); 1101 - } 1090 + /* if a move operation is needed it's in the log */ 1102 1091 n--; 1103 1092 break; 1104 1093 case MOD_LOG_MOVE_KEYS: ··· 1195 1192 } 1196 1193 1197 1194 tm = tree_mod_log_search(root->fs_info, logical, time_seq); 1198 - /* 1199 - * there was an item in the log when __tree_mod_log_oldest_root 1200 - * returned. this one must not go away, because the time_seq passed to 1201 - * us must be blocking its removal. 1202 - */ 1203 - BUG_ON(!tm); 1204 - 1205 1195 if (old_root) 1206 - eb = alloc_dummy_extent_buffer(tm->index << PAGE_CACHE_SHIFT, 1207 - root->nodesize); 1196 + eb = alloc_dummy_extent_buffer(logical, root->nodesize); 1208 1197 else 1209 1198 eb = btrfs_clone_extent_buffer(root->node); 1210 1199 btrfs_tree_read_unlock(root->node); ··· 1211 1216 btrfs_set_header_level(eb, old_root->level); 1212 1217 btrfs_set_header_generation(eb, old_generation); 1213 1218 } 1214 - __tree_mod_log_rewind(eb, time_seq, tm); 1219 + if (tm) 1220 + __tree_mod_log_rewind(eb, time_seq, tm); 1221 + else 1222 + WARN_ON(btrfs_header_level(eb) != 0); 1215 1223 extent_buffer_get(eb); 1216 1224 1217 1225 return eb; ··· 2993 2995 static void insert_ptr(struct btrfs_trans_handle *trans, 2994 2996 struct btrfs_root *root, struct btrfs_path *path, 2995 2997 struct btrfs_disk_key *key, u64 bytenr, 2996 - int slot, int level, int tree_mod_log) 2998 + int slot, int level) 2997 2999 { 2998 3000 struct extent_buffer *lower; 2999 3001 int nritems; ··· 3006 3008 BUG_ON(slot > nritems); 3007 3009 BUG_ON(nritems == BTRFS_NODEPTRS_PER_BLOCK(root)); 3008 3010 if (slot != nritems) { 3009 - if (tree_mod_log && level) 3011 + if (level) 3010 3012 tree_mod_log_eb_move(root->fs_info, lower, slot + 1, 3011 3013 slot, nritems - slot); 3012 3014 memmove_extent_buffer(lower, ··· 3014 3016 btrfs_node_key_ptr_offset(slot), 3015 3017 (nritems - slot) * sizeof(struct btrfs_key_ptr)); 3016 3018 } 3017 - if (tree_mod_log && level) { 3019 + if (level) { 3018 3020 ret = tree_mod_log_insert_key(root->fs_info, lower, slot, 3019 3021 MOD_LOG_KEY_ADD); 3020 3022 BUG_ON(ret < 0); ··· 3102 3104 btrfs_mark_buffer_dirty(split); 3103 3105 3104 3106 insert_ptr(trans, root, path, &disk_key, split->start, 3105 - path->slots[level + 1] + 1, level + 1, 1); 3107 + path->slots[level + 1] + 1, level + 1); 3106 3108 3107 3109 if (path->slots[level] >= mid) { 3108 3110 path->slots[level] -= mid; ··· 3639 3641 btrfs_set_header_nritems(l, mid); 3640 3642 btrfs_item_key(right, &disk_key, 0); 3641 3643 insert_ptr(trans, root, path, &disk_key, right->start, 3642 - path->slots[1] + 1, 1, 0); 3644 + path->slots[1] + 1, 1); 3643 3645 3644 3646 btrfs_mark_buffer_dirty(right); 3645 3647 btrfs_mark_buffer_dirty(l); ··· 3846 3848 if (mid <= slot) { 3847 3849 btrfs_set_header_nritems(right, 0); 3848 3850 insert_ptr(trans, root, path, &disk_key, right->start, 3849 - path->slots[1] + 1, 1, 0); 3851 + path->slots[1] + 1, 1); 3850 3852 btrfs_tree_unlock(path->nodes[0]); 3851 3853 free_extent_buffer(path->nodes[0]); 3852 3854 path->nodes[0] = right; ··· 3855 3857 } else { 3856 3858 btrfs_set_header_nritems(right, 0); 3857 3859 insert_ptr(trans, root, path, &disk_key, right->start, 3858 - path->slots[1], 1, 0); 3860 + path->slots[1], 1); 3859 3861 btrfs_tree_unlock(path->nodes[0]); 3860 3862 free_extent_buffer(path->nodes[0]); 3861 3863 path->nodes[0] = right; ··· 5119 5121 5120 5122 if (!path->skip_locking) { 5121 5123 ret = btrfs_try_tree_read_lock(next); 5124 + if (!ret && time_seq) { 5125 + /* 5126 + * If we don't get the lock, we may be racing 5127 + * with push_leaf_left, holding that lock while 5128 + * itself waiting for the leaf we've currently 5129 + * locked. To solve this situation, we give up 5130 + * on our lock and cycle. 5131 + */ 5132 + btrfs_release_path(path); 5133 + cond_resched(); 5134 + goto again; 5135 + } 5122 5136 if (!ret) { 5123 5137 btrfs_set_path_blocking(path); 5124 5138 btrfs_tree_read_lock(next);
+21 -13
fs/btrfs/disk-io.c
··· 2354 2354 BTRFS_CSUM_TREE_OBJECTID, csum_root); 2355 2355 if (ret) 2356 2356 goto recovery_tree_root; 2357 - 2358 2357 csum_root->track_dirty = 1; 2359 2358 2360 2359 fs_info->generation = generation; 2361 2360 fs_info->last_trans_committed = generation; 2361 + 2362 + ret = btrfs_recover_balance(fs_info); 2363 + if (ret) { 2364 + printk(KERN_WARNING "btrfs: failed to recover balance\n"); 2365 + goto fail_block_groups; 2366 + } 2362 2367 2363 2368 ret = btrfs_init_dev_stats(fs_info); 2364 2369 if (ret) { ··· 2490 2485 goto fail_trans_kthread; 2491 2486 } 2492 2487 2493 - if (!(sb->s_flags & MS_RDONLY)) { 2494 - down_read(&fs_info->cleanup_work_sem); 2495 - err = btrfs_orphan_cleanup(fs_info->fs_root); 2496 - if (!err) 2497 - err = btrfs_orphan_cleanup(fs_info->tree_root); 2488 + if (sb->s_flags & MS_RDONLY) 2489 + return 0; 2490 + 2491 + down_read(&fs_info->cleanup_work_sem); 2492 + if ((ret = btrfs_orphan_cleanup(fs_info->fs_root)) || 2493 + (ret = btrfs_orphan_cleanup(fs_info->tree_root))) { 2498 2494 up_read(&fs_info->cleanup_work_sem); 2495 + close_ctree(tree_root); 2496 + return ret; 2497 + } 2498 + up_read(&fs_info->cleanup_work_sem); 2499 2499 2500 - if (!err) 2501 - err = btrfs_recover_balance(fs_info->tree_root); 2502 - 2503 - if (err) { 2504 - close_ctree(tree_root); 2505 - return err; 2506 - } 2500 + ret = btrfs_resume_balance_async(fs_info); 2501 + if (ret) { 2502 + printk(KERN_WARNING "btrfs: failed to resume balance\n"); 2503 + close_ctree(tree_root); 2504 + return ret; 2507 2505 } 2508 2506 2509 2507 return 0;
+6 -5
fs/btrfs/extent-tree.c
··· 2347 2347 return count; 2348 2348 } 2349 2349 2350 - 2351 2350 static void wait_for_more_refs(struct btrfs_delayed_ref_root *delayed_refs, 2352 - unsigned long num_refs) 2351 + unsigned long num_refs, 2352 + struct list_head *first_seq) 2353 2353 { 2354 - struct list_head *first_seq = delayed_refs->seq_head.next; 2355 - 2356 2354 spin_unlock(&delayed_refs->lock); 2357 2355 pr_debug("waiting for more refs (num %ld, first %p)\n", 2358 2356 num_refs, first_seq); ··· 2379 2381 struct btrfs_delayed_ref_root *delayed_refs; 2380 2382 struct btrfs_delayed_ref_node *ref; 2381 2383 struct list_head cluster; 2384 + struct list_head *first_seq = NULL; 2382 2385 int ret; 2383 2386 u64 delayed_start; 2384 2387 int run_all = count == (unsigned long)-1; ··· 2435 2436 */ 2436 2437 consider_waiting = 1; 2437 2438 num_refs = delayed_refs->num_entries; 2439 + first_seq = root->fs_info->tree_mod_seq_list.next; 2438 2440 } else { 2439 - wait_for_more_refs(delayed_refs, num_refs); 2441 + wait_for_more_refs(delayed_refs, 2442 + num_refs, first_seq); 2440 2443 /* 2441 2444 * after waiting, things have changed. we 2442 2445 * dropped the lock and someone else might have
+14
fs/btrfs/extent_io.c
··· 3324 3324 writepage_t writepage, void *data, 3325 3325 void (*flush_fn)(void *)) 3326 3326 { 3327 + struct inode *inode = mapping->host; 3327 3328 int ret = 0; 3328 3329 int done = 0; 3329 3330 int nr_to_write_done = 0; ··· 3334 3333 pgoff_t end; /* Inclusive */ 3335 3334 int scanned = 0; 3336 3335 int tag; 3336 + 3337 + /* 3338 + * We have to hold onto the inode so that ordered extents can do their 3339 + * work when the IO finishes. The alternative to this is failing to add 3340 + * an ordered extent if the igrab() fails there and that is a huge pain 3341 + * to deal with, so instead just hold onto the inode throughout the 3342 + * writepages operation. If it fails here we are freeing up the inode 3343 + * anyway and we'd rather not waste our time writing out stuff that is 3344 + * going to be truncated anyway. 3345 + */ 3346 + if (!igrab(inode)) 3347 + return 0; 3337 3348 3338 3349 pagevec_init(&pvec, 0); 3339 3350 if (wbc->range_cyclic) { ··· 3441 3428 index = 0; 3442 3429 goto retry; 3443 3430 } 3431 + btrfs_add_delayed_iput(inode); 3444 3432 return ret; 3445 3433 } 3446 3434
-13
fs/btrfs/file.c
··· 1334 1334 loff_t *ppos, size_t count, size_t ocount) 1335 1335 { 1336 1336 struct file *file = iocb->ki_filp; 1337 - struct inode *inode = fdentry(file)->d_inode; 1338 1337 struct iov_iter i; 1339 1338 ssize_t written; 1340 1339 ssize_t written_buffered; ··· 1342 1343 1343 1344 written = generic_file_direct_write(iocb, iov, &nr_segs, pos, ppos, 1344 1345 count, ocount); 1345 - 1346 - /* 1347 - * the generic O_DIRECT will update in-memory i_size after the 1348 - * DIOs are done. But our endio handlers that update the on 1349 - * disk i_size never update past the in memory i_size. So we 1350 - * need one more update here to catch any additions to the 1351 - * file 1352 - */ 1353 - if (inode->i_size != BTRFS_I(inode)->disk_i_size) { 1354 - btrfs_ordered_update_i_size(inode, inode->i_size, NULL); 1355 - mark_inode_dirty(inode); 1356 - } 1357 1346 1358 1347 if (written < 0 || written == count) 1359 1348 return written;
+51 -92
fs/btrfs/free-space-cache.c
··· 1543 1543 end = bitmap_info->offset + (u64)(BITS_PER_BITMAP * ctl->unit) - 1; 1544 1544 1545 1545 /* 1546 - * XXX - this can go away after a few releases. 1547 - * 1548 - * since the only user of btrfs_remove_free_space is the tree logging 1549 - * stuff, and the only way to test that is under crash conditions, we 1550 - * want to have this debug stuff here just in case somethings not 1551 - * working. Search the bitmap for the space we are trying to use to 1552 - * make sure its actually there. If its not there then we need to stop 1553 - * because something has gone wrong. 1546 + * We need to search for bits in this bitmap. We could only cover some 1547 + * of the extent in this bitmap thanks to how we add space, so we need 1548 + * to search for as much as it as we can and clear that amount, and then 1549 + * go searching for the next bit. 1554 1550 */ 1555 1551 search_start = *offset; 1556 - search_bytes = *bytes; 1552 + search_bytes = ctl->unit; 1557 1553 search_bytes = min(search_bytes, end - search_start + 1); 1558 1554 ret = search_bitmap(ctl, bitmap_info, &search_start, &search_bytes); 1559 1555 BUG_ON(ret < 0 || search_start != *offset); 1560 1556 1561 - if (*offset > bitmap_info->offset && *offset + *bytes > end) { 1562 - bitmap_clear_bits(ctl, bitmap_info, *offset, end - *offset + 1); 1563 - *bytes -= end - *offset + 1; 1564 - *offset = end + 1; 1565 - } else if (*offset >= bitmap_info->offset && *offset + *bytes <= end) { 1566 - bitmap_clear_bits(ctl, bitmap_info, *offset, *bytes); 1567 - *bytes = 0; 1568 - } 1557 + /* We may have found more bits than what we need */ 1558 + search_bytes = min(search_bytes, *bytes); 1559 + 1560 + /* Cannot clear past the end of the bitmap */ 1561 + search_bytes = min(search_bytes, end - search_start + 1); 1562 + 1563 + bitmap_clear_bits(ctl, bitmap_info, search_start, search_bytes); 1564 + *offset += search_bytes; 1565 + *bytes -= search_bytes; 1569 1566 1570 1567 if (*bytes) { 1571 1568 struct rb_node *next = 
rb_next(&bitmap_info->offset_index); ··· 1593 1596 * everything over again. 1594 1597 */ 1595 1598 search_start = *offset; 1596 - search_bytes = *bytes; 1599 + search_bytes = ctl->unit; 1597 1600 ret = search_bitmap(ctl, bitmap_info, &search_start, 1598 1601 &search_bytes); 1599 1602 if (ret < 0 || search_start != *offset) ··· 1876 1879 { 1877 1880 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl; 1878 1881 struct btrfs_free_space *info; 1879 - struct btrfs_free_space *next_info = NULL; 1880 1882 int ret = 0; 1881 1883 1882 1884 spin_lock(&ctl->tree_lock); 1883 1885 1884 1886 again: 1887 + if (!bytes) 1888 + goto out_lock; 1889 + 1885 1890 info = tree_search_offset(ctl, offset, 0, 0); 1886 1891 if (!info) { 1887 1892 /* ··· 1904 1905 } 1905 1906 } 1906 1907 1907 - if (info->bytes < bytes && rb_next(&info->offset_index)) { 1908 - u64 end; 1909 - next_info = rb_entry(rb_next(&info->offset_index), 1910 - struct btrfs_free_space, 1911 - offset_index); 1912 - 1913 - if (next_info->bitmap) 1914 - end = next_info->offset + 1915 - BITS_PER_BITMAP * ctl->unit - 1; 1916 - else 1917 - end = next_info->offset + next_info->bytes; 1918 - 1919 - if (next_info->bytes < bytes || 1920 - next_info->offset > offset || offset > end) { 1921 - printk(KERN_CRIT "Found free space at %llu, size %llu," 1922 - " trying to use %llu\n", 1923 - (unsigned long long)info->offset, 1924 - (unsigned long long)info->bytes, 1925 - (unsigned long long)bytes); 1926 - WARN_ON(1); 1927 - ret = -EINVAL; 1928 - goto out_lock; 1929 - } 1930 - 1931 - info = next_info; 1932 - } 1933 - 1934 - if (info->bytes == bytes) { 1908 + if (!info->bitmap) { 1935 1909 unlink_free_space(ctl, info); 1936 - if (info->bitmap) { 1937 - kfree(info->bitmap); 1938 - ctl->total_bitmaps--; 1939 - } 1940 - kmem_cache_free(btrfs_free_space_cachep, info); 1941 - ret = 0; 1942 - goto out_lock; 1943 - } 1910 + if (offset == info->offset) { 1911 + u64 to_free = min(bytes, info->bytes); 1944 1912 1945 - if (!info->bitmap && 
info->offset == offset) { 1946 - unlink_free_space(ctl, info); 1947 - info->offset += bytes; 1948 - info->bytes -= bytes; 1949 - ret = link_free_space(ctl, info); 1950 - WARN_ON(ret); 1951 - goto out_lock; 1952 - } 1913 + info->bytes -= to_free; 1914 + info->offset += to_free; 1915 + if (info->bytes) { 1916 + ret = link_free_space(ctl, info); 1917 + WARN_ON(ret); 1918 + } else { 1919 + kmem_cache_free(btrfs_free_space_cachep, info); 1920 + } 1953 1921 1954 - if (!info->bitmap && info->offset <= offset && 1955 - info->offset + info->bytes >= offset + bytes) { 1956 - u64 old_start = info->offset; 1957 - /* 1958 - * we're freeing space in the middle of the info, 1959 - * this can happen during tree log replay 1960 - * 1961 - * first unlink the old info and then 1962 - * insert it again after the hole we're creating 1963 - */ 1964 - unlink_free_space(ctl, info); 1965 - if (offset + bytes < info->offset + info->bytes) { 1966 - u64 old_end = info->offset + info->bytes; 1922 + offset += to_free; 1923 + bytes -= to_free; 1924 + goto again; 1925 + } else { 1926 + u64 old_end = info->bytes + info->offset; 1967 1927 1968 - info->offset = offset + bytes; 1969 - info->bytes = old_end - info->offset; 1928 + info->bytes = offset - info->offset; 1970 1929 ret = link_free_space(ctl, info); 1971 1930 WARN_ON(ret); 1972 1931 if (ret) 1973 1932 goto out_lock; 1974 - } else { 1975 - /* the hole we're creating ends at the end 1976 - * of the info struct, just free the info 1977 - */ 1978 - kmem_cache_free(btrfs_free_space_cachep, info); 1979 - } 1980 - spin_unlock(&ctl->tree_lock); 1981 1933 1982 - /* step two, insert a new info struct to cover 1983 - * anything before the hole 1984 - */ 1985 - ret = btrfs_add_free_space(block_group, old_start, 1986 - offset - old_start); 1987 - WARN_ON(ret); /* -ENOMEM */ 1988 - goto out; 1934 + /* Not enough bytes in this entry to satisfy us */ 1935 + if (old_end < offset + bytes) { 1936 + bytes -= old_end - offset; 1937 + offset = old_end; 1938 + 
goto again; 1939 + } else if (old_end == offset + bytes) { 1940 + /* all done */ 1941 + goto out_lock; 1942 + } 1943 + spin_unlock(&ctl->tree_lock); 1944 + 1945 + ret = btrfs_add_free_space(block_group, offset + bytes, 1946 + old_end - (offset + bytes)); 1947 + WARN_ON(ret); 1948 + goto out; 1949 + } 1989 1950 } 1990 1951 1991 1952 ret = remove_from_bitmap(ctl, info, &offset, &bytes);
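The rewritten btrfs_remove_free_space() path above replaces the old overlap special cases with one loop: clear only the part of the request the current entry covers, advance offset/bytes, and start over; when a removal lands mid-entry, the tail is re-added. A standalone sketch of that shape, using a plain array instead of the rbtree/bitmap machinery (all names here are illustrative, not btrfs API):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_EXTENTS 16

struct extent { uint64_t start, len; };
struct space  { struct extent e[MAX_EXTENTS]; int n; };

static void add_extent(struct space *s, uint64_t start, uint64_t len)
{
	s->e[s->n].start = start;
	s->e[s->n].len = len;
	s->n++;
}

/*
 * Remove [offset, offset + bytes). Loop: find the extent containing
 * 'offset', clear the overlapping part, advance the request, repeat.
 * A mid-entry removal keeps the head in place and re-adds the tail,
 * mirroring the btrfs_add_free_space(offset + bytes, ...) call above.
 */
static int remove_range(struct space *s, uint64_t offset, uint64_t bytes)
{
again:
	if (!bytes)
		return 0;
	for (int i = 0; i < s->n; i++) {
		struct extent *ex = &s->e[i];
		uint64_t end = ex->start + ex->len;

		if (offset < ex->start || offset >= end)
			continue;
		if (offset == ex->start) {
			uint64_t to_free = bytes < ex->len ? bytes : ex->len;

			ex->start += to_free;
			ex->len -= to_free;
			if (!ex->len)          /* drop the emptied entry */
				*ex = s->e[--s->n];
			offset += to_free;
			bytes -= to_free;
			goto again;
		}
		/* request starts mid-extent: keep the head */
		ex->len = offset - ex->start;
		if (end < offset + bytes) {    /* not enough in this entry */
			bytes -= end - offset;
			offset = end;
			goto again;
		}
		if (end > offset + bytes)      /* re-add the tail */
			add_extent(s, offset + bytes, end - (offset + bytes));
		return 0;
	}
	return -1; /* range not present */
}
```

The point of the structure is that every overlap case reduces to "trim what this entry covers, then loop", which is what lets the kernel version drop the next_info lookahead and the old two-step hole-punch logic.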
+51 -6
fs/btrfs/inode.c
··· 3754 3754 btrfs_wait_ordered_range(inode, 0, (u64)-1); 3755 3755 3756 3756 if (root->fs_info->log_root_recovering) { 3757 - BUG_ON(!test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM, 3757 + BUG_ON(test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM, 3758 3758 &BTRFS_I(inode)->runtime_flags)); 3759 3759 goto no_delete; 3760 3760 } ··· 5876 5876 bh_result->b_size = len; 5877 5877 bh_result->b_bdev = em->bdev; 5878 5878 set_buffer_mapped(bh_result); 5879 - if (create && !test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) 5880 - set_buffer_new(bh_result); 5879 + if (create) { 5880 + if (!test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) 5881 + set_buffer_new(bh_result); 5882 + 5883 + /* 5884 + * Need to update the i_size under the extent lock so buffered 5885 + * readers will get the updated i_size when we unlock. 5886 + */ 5887 + if (start + len > i_size_read(inode)) 5888 + i_size_write(inode, start + len); 5889 + } 5881 5890 5882 5891 free_extent_map(em); 5883 5892 ··· 6369 6360 */ 6370 6361 ordered = btrfs_lookup_ordered_range(inode, lockstart, 6371 6362 lockend - lockstart + 1); 6372 - if (!ordered) 6363 + 6364 + /* 6365 + * We need to make sure there are no buffered pages in this 6366 + * range either, we could have raced between the invalidate in 6367 + * generic_file_direct_write and locking the extent. The 6368 + * invalidate needs to happen so that reads after a write do not 6369 + * get stale data. 
6370 + */ 6371 + if (!ordered && (!writing || 6372 + !test_range_bit(&BTRFS_I(inode)->io_tree, 6373 + lockstart, lockend, EXTENT_UPTODATE, 0, 6374 + cached_state))) 6373 6375 break; 6376 + 6374 6377 unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6375 6378 &cached_state, GFP_NOFS); 6376 - btrfs_start_ordered_extent(inode, ordered, 1); 6377 - btrfs_put_ordered_extent(ordered); 6379 + 6380 + if (ordered) { 6381 + btrfs_start_ordered_extent(inode, ordered, 1); 6382 + btrfs_put_ordered_extent(ordered); 6383 + } else { 6384 + /* Screw you mmap */ 6385 + ret = filemap_write_and_wait_range(file->f_mapping, 6386 + lockstart, 6387 + lockend); 6388 + if (ret) 6389 + goto out; 6390 + 6391 + /* 6392 + * If we found a page that couldn't be invalidated just 6393 + * fall back to buffered. 6394 + */ 6395 + ret = invalidate_inode_pages2_range(file->f_mapping, 6396 + lockstart >> PAGE_CACHE_SHIFT, 6397 + lockend >> PAGE_CACHE_SHIFT); 6398 + if (ret) { 6399 + if (ret == -EBUSY) 6400 + ret = 0; 6401 + goto out; 6402 + } 6403 + } 6404 + 6378 6405 cond_resched(); 6379 6406 } 6380 6407
+1 -1
fs/btrfs/ioctl.h
··· 339 339 #define BTRFS_IOC_WAIT_SYNC _IOW(BTRFS_IOCTL_MAGIC, 22, __u64)
340 340 #define BTRFS_IOC_SNAP_CREATE_V2 _IOW(BTRFS_IOCTL_MAGIC, 23, \
341 341 struct btrfs_ioctl_vol_args_v2)
342 - #define BTRFS_IOC_SUBVOL_GETFLAGS _IOW(BTRFS_IOCTL_MAGIC, 25, __u64)
342 + #define BTRFS_IOC_SUBVOL_GETFLAGS _IOR(BTRFS_IOCTL_MAGIC, 25, __u64)
343 343 #define BTRFS_IOC_SUBVOL_SETFLAGS _IOW(BTRFS_IOCTL_MAGIC, 26, __u64)
344 344 #define BTRFS_IOC_SCRUB _IOWR(BTRFS_IOCTL_MAGIC, 27, \
345 345 struct btrfs_ioctl_scrub_args)
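The GETFLAGS fix above flips only the two direction bits of the ioctl number: a "get" call transfers data from kernel to user, which is `_IOR`, not `_IOW`. A rough userspace sketch of that encoding; the constants mirror the common asm-generic layout (8 type bits, 8 number bits, 14 size bits, 2 direction bits), and `<asm/ioctl.h>` is authoritative for any given architecture:

```c
#include <assert.h>
#include <stdint.h>

#define IOC_NRBITS   8
#define IOC_TYPEBITS 8
#define IOC_SIZEBITS 14
#define IOC_WRITE    1U          /* userspace writes to the kernel */
#define IOC_READ     2U          /* kernel writes back to userspace */

/* pack direction, type, number and argument size into one number */
#define IOC(dir, type, nr, size) \
	(((dir)  << (IOC_NRBITS + IOC_TYPEBITS + IOC_SIZEBITS)) | \
	 ((type) << IOC_NRBITS) | (nr) | \
	 ((size) << (IOC_NRBITS + IOC_TYPEBITS)))

#define MY_IOR(type, nr, argtype) IOC(IOC_READ,  (type), (nr), sizeof(argtype))
#define MY_IOW(type, nr, argtype) IOC(IOC_WRITE, (type), (nr), sizeof(argtype))
```

Because only the direction field changes, the type, number and size fields of the GETFLAGS ioctl stay bit-for-bit identical across the fix; only bits 30–31 differ.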
+4
fs/btrfs/super.c
··· 1187 1187 if (ret)
1188 1188 goto restore;
1189 1189
1190 + ret = btrfs_resume_balance_async(fs_info);
1191 + if (ret)
1192 + goto restore;
1193 +
1190 1194 sb->s_flags &= ~MS_RDONLY;
1191 1195 }
1192 1196
+6
fs/btrfs/tree-log.c
··· 690 690 kfree(name);
691 691
692 692 iput(inode);
693 +
694 + btrfs_run_delayed_items(trans, root);
693 695 return ret;
694 696 }
695 697
··· 897 895 ret = btrfs_unlink_inode(trans, root, dir,
898 896 inode, victim_name,
899 897 victim_name_len);
898 + btrfs_run_delayed_items(trans, root);
900 899 }
901 900 kfree(victim_name);
902 901 ptr = (unsigned long)(victim_ref + 1) + victim_name_len;
··· 1478 1475 ret = btrfs_unlink_inode(trans, root, dir, inode,
1479 1476 name, name_len);
1480 1477 BUG_ON(ret);
1478 +
1479 + btrfs_run_delayed_items(trans, root);
1480 +
1481 1481 kfree(name);
1482 1482 iput(inode);
1483 1483
+60 -41
fs/btrfs/volumes.c
··· 2845 2845 2846 2846 static int balance_kthread(void *data) 2847 2847 { 2848 - struct btrfs_balance_control *bctl = 2849 - (struct btrfs_balance_control *)data; 2850 - struct btrfs_fs_info *fs_info = bctl->fs_info; 2848 + struct btrfs_fs_info *fs_info = data; 2851 2849 int ret = 0; 2852 2850 2853 2851 mutex_lock(&fs_info->volume_mutex); 2854 2852 mutex_lock(&fs_info->balance_mutex); 2855 2853 2856 - set_balance_control(bctl); 2857 - 2858 - if (btrfs_test_opt(fs_info->tree_root, SKIP_BALANCE)) { 2859 - printk(KERN_INFO "btrfs: force skipping balance\n"); 2860 - } else { 2854 + if (fs_info->balance_ctl) { 2861 2855 printk(KERN_INFO "btrfs: continuing balance\n"); 2862 - ret = btrfs_balance(bctl, NULL); 2856 + ret = btrfs_balance(fs_info->balance_ctl, NULL); 2863 2857 } 2864 2858 2865 2859 mutex_unlock(&fs_info->balance_mutex); 2866 2860 mutex_unlock(&fs_info->volume_mutex); 2861 + 2867 2862 return ret; 2868 2863 } 2869 2864 2870 - int btrfs_recover_balance(struct btrfs_root *tree_root) 2865 + int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info) 2871 2866 { 2872 2867 struct task_struct *tsk; 2868 + 2869 + spin_lock(&fs_info->balance_lock); 2870 + if (!fs_info->balance_ctl) { 2871 + spin_unlock(&fs_info->balance_lock); 2872 + return 0; 2873 + } 2874 + spin_unlock(&fs_info->balance_lock); 2875 + 2876 + if (btrfs_test_opt(fs_info->tree_root, SKIP_BALANCE)) { 2877 + printk(KERN_INFO "btrfs: force skipping balance\n"); 2878 + return 0; 2879 + } 2880 + 2881 + tsk = kthread_run(balance_kthread, fs_info, "btrfs-balance"); 2882 + if (IS_ERR(tsk)) 2883 + return PTR_ERR(tsk); 2884 + 2885 + return 0; 2886 + } 2887 + 2888 + int btrfs_recover_balance(struct btrfs_fs_info *fs_info) 2889 + { 2873 2890 struct btrfs_balance_control *bctl; 2874 2891 struct btrfs_balance_item *item; 2875 2892 struct btrfs_disk_balance_args disk_bargs; ··· 2899 2882 if (!path) 2900 2883 return -ENOMEM; 2901 2884 2885 + key.objectid = BTRFS_BALANCE_OBJECTID; 2886 + key.type = 
BTRFS_BALANCE_ITEM_KEY; 2887 + key.offset = 0; 2888 + 2889 + ret = btrfs_search_slot(NULL, fs_info->tree_root, &key, path, 0, 0); 2890 + if (ret < 0) 2891 + goto out; 2892 + if (ret > 0) { /* ret = -ENOENT; */ 2893 + ret = 0; 2894 + goto out; 2895 + } 2896 + 2902 2897 bctl = kzalloc(sizeof(*bctl), GFP_NOFS); 2903 2898 if (!bctl) { 2904 2899 ret = -ENOMEM; 2905 2900 goto out; 2906 2901 } 2907 2902 2908 - key.objectid = BTRFS_BALANCE_OBJECTID; 2909 - key.type = BTRFS_BALANCE_ITEM_KEY; 2910 - key.offset = 0; 2911 - 2912 - ret = btrfs_search_slot(NULL, tree_root, &key, path, 0, 0); 2913 - if (ret < 0) 2914 - goto out_bctl; 2915 - if (ret > 0) { /* ret = -ENOENT; */ 2916 - ret = 0; 2917 - goto out_bctl; 2918 - } 2919 - 2920 2903 leaf = path->nodes[0]; 2921 2904 item = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_balance_item); 2922 2905 2923 - bctl->fs_info = tree_root->fs_info; 2924 - bctl->flags = btrfs_balance_flags(leaf, item) | BTRFS_BALANCE_RESUME; 2906 + bctl->fs_info = fs_info; 2907 + bctl->flags = btrfs_balance_flags(leaf, item); 2908 + bctl->flags |= BTRFS_BALANCE_RESUME; 2925 2909 2926 2910 btrfs_balance_data(leaf, item, &disk_bargs); 2927 2911 btrfs_disk_balance_args_to_cpu(&bctl->data, &disk_bargs); ··· 2931 2913 btrfs_balance_sys(leaf, item, &disk_bargs); 2932 2914 btrfs_disk_balance_args_to_cpu(&bctl->sys, &disk_bargs); 2933 2915 2934 - tsk = kthread_run(balance_kthread, bctl, "btrfs-balance"); 2935 - if (IS_ERR(tsk)) 2936 - ret = PTR_ERR(tsk); 2937 - else 2938 - goto out; 2916 + mutex_lock(&fs_info->volume_mutex); 2917 + mutex_lock(&fs_info->balance_mutex); 2939 2918 2940 - out_bctl: 2941 - kfree(bctl); 2919 + set_balance_control(bctl); 2920 + 2921 + mutex_unlock(&fs_info->balance_mutex); 2922 + mutex_unlock(&fs_info->volume_mutex); 2942 2923 out: 2943 2924 btrfs_free_path(path); 2944 2925 return ret; ··· 4078 4061 4079 4062 BUG_ON(stripe_index >= bbio->num_stripes); 4080 4063 dev = bbio->stripes[stripe_index].dev; 4081 - if (bio->bi_rw & WRITE) 
4082 - btrfs_dev_stat_inc(dev, 4083 - BTRFS_DEV_STAT_WRITE_ERRS); 4084 - else 4085 - btrfs_dev_stat_inc(dev, 4086 - BTRFS_DEV_STAT_READ_ERRS); 4087 - if ((bio->bi_rw & WRITE_FLUSH) == WRITE_FLUSH) 4088 - btrfs_dev_stat_inc(dev, 4089 - BTRFS_DEV_STAT_FLUSH_ERRS); 4090 - btrfs_dev_stat_print_on_error(dev); 4064 + if (dev->bdev) { 4065 + if (bio->bi_rw & WRITE) 4066 + btrfs_dev_stat_inc(dev, 4067 + BTRFS_DEV_STAT_WRITE_ERRS); 4068 + else 4069 + btrfs_dev_stat_inc(dev, 4070 + BTRFS_DEV_STAT_READ_ERRS); 4071 + if ((bio->bi_rw & WRITE_FLUSH) == WRITE_FLUSH) 4072 + btrfs_dev_stat_inc(dev, 4073 + BTRFS_DEV_STAT_FLUSH_ERRS); 4074 + btrfs_dev_stat_print_on_error(dev); 4075 + } 4091 4076 } 4092 4077 } 4093 4078
+2 -1
fs/btrfs/volumes.h
··· 281 281 int btrfs_init_new_device(struct btrfs_root *root, char *path);
282 282 int btrfs_balance(struct btrfs_balance_control *bctl,
283 283 struct btrfs_ioctl_balance_args *bargs);
284 - int btrfs_recover_balance(struct btrfs_root *tree_root);
284 + int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info);
285 + int btrfs_recover_balance(struct btrfs_fs_info *fs_info);
285 286 int btrfs_pause_balance(struct btrfs_fs_info *fs_info);
286 287 int btrfs_cancel_balance(struct btrfs_fs_info *fs_info);
287 288 int btrfs_chunk_readonly(struct btrfs_root *root, u64 chunk_offset);
+13 -9
fs/buffer.c
··· 1036 1036 static struct buffer_head *
1037 1037 __getblk_slow(struct block_device *bdev, sector_t block, int size)
1038 1038 {
1039 + int ret;
1040 + struct buffer_head *bh;
1041 +
1039 1042 /* Size must be multiple of hard sectorsize */
1040 1043 if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
1041 1044 (size < 512 || size > PAGE_SIZE))) {
··· 1051 1048 return NULL;
1052 1049 }
1053 1050
1054 - for (;;) {
1055 - struct buffer_head * bh;
1056 - int ret;
1051 + retry:
1052 + bh = __find_get_block(bdev, block, size);
1053 + if (bh)
1054 + return bh;
1057 1055
1056 + ret = grow_buffers(bdev, block, size);
1057 + if (ret == 0) {
1058 + free_more_memory();
1059 + goto retry;
1060 + } else if (ret > 0) {
1058 1061 bh = __find_get_block(bdev, block, size);
1059 1062 if (bh)
1060 1063 return bh;
1061 -
1062 - ret = grow_buffers(bdev, block, size);
1063 - if (ret < 0)
1064 - return NULL;
1065 - if (ret == 0)
1066 - free_more_memory();
1067 1064 }
1065 + return NULL;
1068 1066 }
1069 1067
1070 1068 /*
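The __getblk_slow() hunk above restructures an unbounded for(;;) into an explicit retry label: look up first, and only on a miss try to create, with a transient allocation failure triggering reclaim and a full retry (another thread may have created the buffer meanwhile). A userspace model of that control flow, with a single-slot "cache" and counters standing in for the real page cache and free_more_memory() (all names here are invented for the sketch):

```c
#include <assert.h>

static int cache_slot = -1;   /* -1 = empty; models the buffer cache */
static int reclaim_calls;     /* counts the free_more_memory() stand-in */
static int fail_grow_times;   /* grow succeeds once this reaches zero  */

static int find_block(int block) { return cache_slot == block; }

static int grow_blocks(int block)
{
	if (fail_grow_times > 0) {
		fail_grow_times--;
		return 0;         /* transient failure: no memory yet */
	}
	cache_slot = block;       /* populate the cache */
	return 1;
}

static int getblk_slow_model(int block)
{
retry:
	if (find_block(block))
		return block;     /* fast path: already cached */

	if (grow_blocks(block) == 0) {
		reclaim_calls++;  /* reclaim, then try the whole thing again */
		goto retry;
	}
	/* creation succeeded; re-do the lookup as the real code does */
	if (find_block(block))
		return block;
	return -1;
}
```

The retry-label form makes the fast path (lookup hit) fall straight out at the top instead of being buried inside a loop body, which is the readability point of the kernel change.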
+29 -1
fs/cifs/cifssmb.c
··· 86 86 #endif /* CONFIG_CIFS_WEAK_PW_HASH */ 87 87 #endif /* CIFS_POSIX */ 88 88 89 - /* Forward declarations */ 89 + #ifdef CONFIG_HIGHMEM 90 + /* 91 + * On arches that have high memory, kmap address space is limited. By 92 + * serializing the kmap operations on those arches, we ensure that we don't 93 + * end up with a bunch of threads in writeback with partially mapped page 94 + * arrays, stuck waiting for kmap to come back. That situation prevents 95 + * progress and can deadlock. 96 + */ 97 + static DEFINE_MUTEX(cifs_kmap_mutex); 98 + 99 + static inline void 100 + cifs_kmap_lock(void) 101 + { 102 + mutex_lock(&cifs_kmap_mutex); 103 + } 104 + 105 + static inline void 106 + cifs_kmap_unlock(void) 107 + { 108 + mutex_unlock(&cifs_kmap_mutex); 109 + } 110 + #else /* !CONFIG_HIGHMEM */ 111 + #define cifs_kmap_lock() do { ; } while(0) 112 + #define cifs_kmap_unlock() do { ; } while(0) 113 + #endif /* CONFIG_HIGHMEM */ 90 114 91 115 /* Mark as invalid, all open files on tree connections since they 92 116 were closed when session to server was lost */ ··· 1527 1503 } 1528 1504 1529 1505 /* marshal up the page array */ 1506 + cifs_kmap_lock(); 1530 1507 len = rdata->marshal_iov(rdata, data_len); 1508 + cifs_kmap_unlock(); 1531 1509 data_len -= len; 1532 1510 1533 1511 /* issue the read if we have any iovecs left to fill */ ··· 2095 2069 * and set the iov_len properly for each one. It may also set 2096 2070 * wdata->bytes too. 2097 2071 */ 2072 + cifs_kmap_lock(); 2098 2073 wdata->marshal_iov(iov, wdata); 2074 + cifs_kmap_unlock(); 2099 2075 2100 2076 cFYI(1, "async write at %llu %u bytes", wdata->offset, wdata->bytes); 2101 2077
+38 -21
fs/cifs/connect.c
··· 1653 1653 * If yes, we have encountered a double deliminator 1654 1654 * reset the NULL character to the deliminator 1655 1655 */ 1656 - if (tmp_end < end && tmp_end[1] == delim) 1656 + if (tmp_end < end && tmp_end[1] == delim) { 1657 1657 tmp_end[0] = delim; 1658 1658 1659 - /* Keep iterating until we get to a single deliminator 1660 - * OR the end 1661 - */ 1662 - while ((tmp_end = strchr(tmp_end, delim)) != NULL && 1663 - (tmp_end[1] == delim)) { 1664 - tmp_end = (char *) &tmp_end[2]; 1665 - } 1659 + /* Keep iterating until we get to a single 1660 + * deliminator OR the end 1661 + */ 1662 + while ((tmp_end = strchr(tmp_end, delim)) 1663 + != NULL && (tmp_end[1] == delim)) { 1664 + tmp_end = (char *) &tmp_end[2]; 1665 + } 1666 1666 1667 - /* Reset var options to point to next element */ 1668 - if (tmp_end) { 1669 - tmp_end[0] = '\0'; 1670 - options = (char *) &tmp_end[1]; 1671 - } else 1672 - /* Reached the end of the mount option string */ 1673 - options = end; 1667 + /* Reset var options to point to next element */ 1668 + if (tmp_end) { 1669 + tmp_end[0] = '\0'; 1670 + options = (char *) &tmp_end[1]; 1671 + } else 1672 + /* Reached the end of the mount option 1673 + * string */ 1674 + options = end; 1675 + } 1674 1676 1675 1677 /* Now build new password string */ 1676 1678 temp_len = strlen(value); ··· 3445 3443 #define CIFS_DEFAULT_NON_POSIX_RSIZE (60 * 1024) 3446 3444 #define CIFS_DEFAULT_NON_POSIX_WSIZE (65536) 3447 3445 3446 + /* 3447 + * On hosts with high memory, we can't currently support wsize/rsize that are 3448 + * larger than we can kmap at once. Cap the rsize/wsize at 3449 + * LAST_PKMAP * PAGE_SIZE. We'll never be able to fill a read or write request 3450 + * larger than that anyway. 
3451 + */ 3452 + #ifdef CONFIG_HIGHMEM 3453 + #define CIFS_KMAP_SIZE_LIMIT (LAST_PKMAP * PAGE_CACHE_SIZE) 3454 + #else /* CONFIG_HIGHMEM */ 3455 + #define CIFS_KMAP_SIZE_LIMIT (1<<24) 3456 + #endif /* CONFIG_HIGHMEM */ 3457 + 3448 3458 static unsigned int 3449 3459 cifs_negotiate_wsize(struct cifs_tcon *tcon, struct smb_vol *pvolume_info) 3450 3460 { ··· 3487 3473 wsize = min_t(unsigned int, wsize, 3488 3474 server->maxBuf - sizeof(WRITE_REQ) + 4); 3489 3475 3476 + /* limit to the amount that we can kmap at once */ 3477 + wsize = min_t(unsigned int, wsize, CIFS_KMAP_SIZE_LIMIT); 3478 + 3490 3479 /* hard limit of CIFS_MAX_WSIZE */ 3491 3480 wsize = min_t(unsigned int, wsize, CIFS_MAX_WSIZE); 3492 3481 ··· 3510 3493 * MS-CIFS indicates that servers are only limited by the client's 3511 3494 * bufsize for reads, testing against win98se shows that it throws 3512 3495 * INVALID_PARAMETER errors if you try to request too large a read. 3496 + * OS/2 just sends back short reads. 3513 3497 * 3514 - * If the server advertises a MaxBufferSize of less than one page, 3515 - * assume that it also can't satisfy reads larger than that either. 3516 - * 3517 - * FIXME: Is there a better heuristic for this? 3498 + * If the server doesn't advertise CAP_LARGE_READ_X, then assume that 3499 + * it can't handle a read request larger than its MaxBufferSize either. 
3518 3500 */ 3519 3501 if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_READ_CAP)) 3520 3502 defsize = CIFS_DEFAULT_IOSIZE; 3521 3503 else if (server->capabilities & CAP_LARGE_READ_X) 3522 3504 defsize = CIFS_DEFAULT_NON_POSIX_RSIZE; 3523 - else if (server->maxBuf >= PAGE_CACHE_SIZE) 3524 - defsize = CIFSMaxBufSize; 3525 3505 else 3526 3506 defsize = server->maxBuf - sizeof(READ_RSP); 3527 3507 ··· 3530 3516 */ 3531 3517 if (!(server->capabilities & CAP_LARGE_READ_X)) 3532 3518 rsize = min_t(unsigned int, CIFSMaxBufSize, rsize); 3519 + 3520 + /* limit to the amount that we can kmap at once */ 3521 + rsize = min_t(unsigned int, rsize, CIFS_KMAP_SIZE_LIMIT); 3533 3522 3534 3523 /* hard limit of CIFS_MAX_RSIZE */ 3535 3524 rsize = min_t(unsigned int, rsize, CIFS_MAX_RSIZE);
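The connect.c changes above cap the negotiated rsize/wsize by CIFS_KMAP_SIZE_LIMIT on highmem hosts, on top of the existing server and protocol ceilings. The negotiation reduces to a chain of clamps; the helper below models only that shape, with made-up limits (the real values come from mount options, the server's MaxBufferSize, and the kmap limit):

```c
#include <assert.h>

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* clamp a requested I/O size against every applicable ceiling */
static unsigned int negotiate_size(unsigned int requested,
				   unsigned int server_max,
				   unsigned int kmap_limit,
				   unsigned int hard_limit)
{
	unsigned int size = requested;

	size = min_u(size, server_max); /* what the server will accept   */
	size = min_u(size, kmap_limit); /* what we can kmap at once      */
	size = min_u(size, hard_limit); /* protocol hard ceiling         */
	return size;
}
```

Whichever ceiling is smallest wins, so adding the kmap clamp cannot enlarge a previously negotiated size, only shrink it on highmem configurations.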
+5 -2
fs/cifs/readdir.c
··· 86 86
87 87 dentry = d_lookup(parent, name);
88 88 if (dentry) {
89 - /* FIXME: check for inode number changes? */
90 - if (dentry->d_inode != NULL)
89 + inode = dentry->d_inode;
90 + /* update inode in place if i_ino didn't change */
91 + if (inode && CIFS_I(inode)->uniqueid == fattr->cf_uniqueid) {
92 + cifs_fattr_to_inode(inode, fattr);
91 93 return dentry;
94 + }
92 95 d_drop(dentry);
93 96 dput(dentry);
94 97 }
+14 -12
fs/cifs/transport.c
··· 365 365 if (mid == NULL) 366 366 return -ENOMEM; 367 367 368 - /* put it on the pending_mid_q */ 369 - spin_lock(&GlobalMid_Lock); 370 - list_add_tail(&mid->qhead, &server->pending_mid_q); 371 - spin_unlock(&GlobalMid_Lock); 372 - 373 368 rc = cifs_sign_smb2(iov, nvec, server, &mid->sequence_number); 374 - if (rc) 375 - delete_mid(mid); 369 + if (rc) { 370 + DeleteMidQEntry(mid); 371 + return rc; 372 + } 373 + 376 374 *ret_mid = mid; 377 - return rc; 375 + return 0; 378 376 } 379 377 380 378 /* ··· 405 407 mid->callback_data = cbdata; 406 408 mid->mid_state = MID_REQUEST_SUBMITTED; 407 409 410 + /* put it on the pending_mid_q */ 411 + spin_lock(&GlobalMid_Lock); 412 + list_add_tail(&mid->qhead, &server->pending_mid_q); 413 + spin_unlock(&GlobalMid_Lock); 414 + 415 + 408 416 cifs_in_send_inc(server); 409 417 rc = smb_sendv(server, iov, nvec); 410 418 cifs_in_send_dec(server); 411 419 cifs_save_when_sent(mid); 412 420 mutex_unlock(&server->srv_mutex); 413 421 414 - if (rc) 415 - goto out_err; 422 + if (rc == 0) 423 + return 0; 416 424 417 - return rc; 418 - out_err: 419 425 delete_mid(mid); 420 426 add_credits(server, 1); 421 427 wake_up(&server->request_q);
+1 -1
fs/ecryptfs/kthread.c
··· 149 149 (*lower_file) = dentry_open(lower_dentry, lower_mnt, flags, cred);
150 150 if (!IS_ERR(*lower_file))
151 151 goto out;
152 - if (flags & O_RDONLY) {
152 + if ((flags & O_ACCMODE) == O_RDONLY) {
153 153 rc = PTR_ERR((*lower_file));
154 154 goto out;
155 155 }
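The one-line ecryptfs fix above matters because O_RDONLY is defined as 0, so `flags & O_RDONLY` can never be true; the access mode occupies the low bits of the flags word and must be extracted with O_ACCMODE before comparing. A minimal demonstration of the difference:

```c
#include <assert.h>
#include <fcntl.h>

/* correct: mask out the access-mode field, then compare */
static int is_read_only(int flags)
{
	return (flags & O_ACCMODE) == O_RDONLY;
}

/* the pre-fix pattern: always false, since O_RDONLY == 0 */
static int is_read_only_buggy(int flags)
{
	return (flags & O_RDONLY) != 0;
}
```

The same trap applies to any test of O_RDONLY by bitwise AND; O_WRONLY and O_RDWR are nonzero and can be AND-tested, but the portable form is always a comparison against `flags & O_ACCMODE`.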
+29 -19
fs/ecryptfs/miscdev.c
··· 49 49 mutex_lock(&ecryptfs_daemon_hash_mux); 50 50 /* TODO: Just use file->private_data? */ 51 51 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns()); 52 - BUG_ON(rc || !daemon); 52 + if (rc || !daemon) { 53 + mutex_unlock(&ecryptfs_daemon_hash_mux); 54 + return -EINVAL; 55 + } 53 56 mutex_lock(&daemon->mux); 54 57 mutex_unlock(&ecryptfs_daemon_hash_mux); 55 58 if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) { ··· 125 122 goto out_unlock_daemon; 126 123 } 127 124 daemon->flags |= ECRYPTFS_DAEMON_MISCDEV_OPEN; 125 + file->private_data = daemon; 128 126 atomic_inc(&ecryptfs_num_miscdev_opens); 129 127 out_unlock_daemon: 130 128 mutex_unlock(&daemon->mux); ··· 156 152 157 153 mutex_lock(&ecryptfs_daemon_hash_mux); 158 154 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns()); 159 - BUG_ON(rc || !daemon); 155 + if (rc || !daemon) 156 + daemon = file->private_data; 160 157 mutex_lock(&daemon->mux); 161 - BUG_ON(daemon->pid != task_pid(current)); 162 158 BUG_ON(!(daemon->flags & ECRYPTFS_DAEMON_MISCDEV_OPEN)); 163 159 daemon->flags &= ~ECRYPTFS_DAEMON_MISCDEV_OPEN; 164 160 atomic_dec(&ecryptfs_num_miscdev_opens); ··· 195 191 struct ecryptfs_msg_ctx *msg_ctx, u8 msg_type, 196 192 u16 msg_flags, struct ecryptfs_daemon *daemon) 197 193 { 198 - int rc = 0; 194 + struct ecryptfs_message *msg; 199 195 200 - mutex_lock(&msg_ctx->mux); 201 - msg_ctx->msg = kmalloc((sizeof(*msg_ctx->msg) + data_size), 202 - GFP_KERNEL); 203 - if (!msg_ctx->msg) { 204 - rc = -ENOMEM; 196 + msg = kmalloc((sizeof(*msg) + data_size), GFP_KERNEL); 197 + if (!msg) { 205 198 printk(KERN_ERR "%s: Out of memory whilst attempting " 206 199 "to kmalloc(%zd, GFP_KERNEL)\n", __func__, 207 - (sizeof(*msg_ctx->msg) + data_size)); 208 - goto out_unlock; 200 + (sizeof(*msg) + data_size)); 201 + return -ENOMEM; 209 202 } 203 + 204 + mutex_lock(&msg_ctx->mux); 205 + msg_ctx->msg = msg; 210 206 msg_ctx->msg->index = msg_ctx->index; 211 207 msg_ctx->msg->data_len = data_size; 212 208 
msg_ctx->type = msg_type; 213 209 memcpy(msg_ctx->msg->data, data, data_size); 214 210 msg_ctx->msg_size = (sizeof(*msg_ctx->msg) + data_size); 215 - mutex_lock(&daemon->mux); 216 211 list_add_tail(&msg_ctx->daemon_out_list, &daemon->msg_ctx_out_queue); 212 + mutex_unlock(&msg_ctx->mux); 213 + 214 + mutex_lock(&daemon->mux); 217 215 daemon->num_queued_msg_ctx++; 218 216 wake_up_interruptible(&daemon->wait); 219 217 mutex_unlock(&daemon->mux); 220 - out_unlock: 221 - mutex_unlock(&msg_ctx->mux); 222 - return rc; 218 + 219 + return 0; 223 220 } 224 221 225 222 /* ··· 274 269 mutex_lock(&ecryptfs_daemon_hash_mux); 275 270 /* TODO: Just use file->private_data? */ 276 271 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns()); 277 - BUG_ON(rc || !daemon); 272 + if (rc || !daemon) { 273 + mutex_unlock(&ecryptfs_daemon_hash_mux); 274 + return -EINVAL; 275 + } 278 276 mutex_lock(&daemon->mux); 277 + if (task_pid(current) != daemon->pid) { 278 + mutex_unlock(&daemon->mux); 279 + mutex_unlock(&ecryptfs_daemon_hash_mux); 280 + return -EPERM; 281 + } 279 282 if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) { 280 283 rc = 0; 281 284 mutex_unlock(&ecryptfs_daemon_hash_mux); ··· 320 307 * message from the queue; try again */ 321 308 goto check_list; 322 309 } 323 - BUG_ON(euid != daemon->euid); 324 - BUG_ON(current_user_ns() != daemon->user_ns); 325 - BUG_ON(task_pid(current) != daemon->pid); 326 310 msg_ctx = list_first_entry(&daemon->msg_ctx_out_queue, 327 311 struct ecryptfs_msg_ctx, daemon_out_list); 328 312 BUG_ON(!msg_ctx);
+1 -1
fs/eventpoll.c
··· 1710 1710 goto error_tgt_fput;
1711 1711
1712 1712 /* Check if EPOLLWAKEUP is allowed */
1713 - if ((epds.events & EPOLLWAKEUP) && !capable(CAP_EPOLLWAKEUP))
1713 + if ((epds.events & EPOLLWAKEUP) && !capable(CAP_BLOCK_SUSPEND))
1714 1714 epds.events &= ~EPOLLWAKEUP;
1715 1715
1716 1716 /*
-1
fs/ext4/ioctl.c
··· 268 268 err = ext4_move_extents(filp, donor_filp, me.orig_start,
269 269 me.donor_start, me.len, &me.moved_len);
270 270 mnt_drop_write_file(filp);
271 - mnt_drop_write(filp->f_path.mnt);
272 271
273 272 if (copy_to_user((struct move_extent __user *)arg,
274 273 &me, sizeof(me)))
+6 -7
fs/fat/inode.c
··· 738 738 fat_encode_fh(struct inode *inode, __u32 *fh, int *lenp, struct inode *parent)
739 739 {
740 740 int len = *lenp;
741 - u32 ipos_h, ipos_m, ipos_l;
741 + struct msdos_sb_info *sbi = MSDOS_SB(inode->i_sb);
742 + loff_t i_pos;
742 743
743 744 if (len < 5) {
744 745 *lenp = 5;
745 746 return 255; /* no room */
746 747 }
747 748
748 - ipos_h = MSDOS_I(inode)->i_pos >> 8;
749 - ipos_m = (MSDOS_I(inode)->i_pos & 0xf0) << 24;
750 - ipos_l = (MSDOS_I(inode)->i_pos & 0x0f) << 28;
749 + i_pos = fat_i_pos_read(sbi, inode);
751 750 *lenp = 5;
752 751 fh[0] = inode->i_ino;
753 752 fh[1] = inode->i_generation;
754 - fh[2] = ipos_h;
755 - fh[3] = ipos_m | MSDOS_I(inode)->i_logstart;
756 - fh[4] = ipos_l;
753 + fh[2] = i_pos >> 8;
754 + fh[3] = ((i_pos & 0xf0) << 24) | MSDOS_I(inode)->i_logstart;
755 + fh[4] = (i_pos & 0x0f) << 28;
757 756 if (parent)
758 757 fh[4] |= MSDOS_I(parent)->i_logstart;
759 758 return 3;
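The fat_encode_fh() change above reads i_pos once through the locking helper and packs it directly into the handle words: the high bits in fh[2], the 0xf0 nibble in the top byte of fh[3], and the 0x0f nibble in the top of fh[4]. The sketch below reproduces that bit layout in userspace and shows the split is lossless (it ignores the i_logstart bits that the real code ORs into the low parts of fh[3]/fh[4]; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* split i_pos across three 32-bit handle words, fat_encode_fh()-style */
static void pack_ipos(uint64_t i_pos, uint32_t fh[3])
{
	fh[0] = (uint32_t)(i_pos >> 8);           /* bits 8 and up      */
	fh[1] = (uint32_t)((i_pos & 0xf0) << 24); /* bits 4-7, top byte */
	fh[2] = (uint32_t)((i_pos & 0x0f) << 28); /* bits 0-3, top nibble */
}

/* invert the packing: shift each field back into place and OR */
static uint64_t unpack_ipos(const uint32_t fh[3])
{
	return ((uint64_t)fh[0] << 8) |
	       ((fh[1] >> 24) & 0xf0) |
	       ((fh[2] >> 28) & 0x0f);
}
```

Splitting the low byte across two words is what leaves the low 24+ bits of fh[3] and fh[4] free to carry i_logstart values in the real handle.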
+4 -5
fs/fifo.c
··· 14 14 #include <linux/sched.h> 15 15 #include <linux/pipe_fs_i.h> 16 16 17 - static void wait_for_partner(struct inode* inode, unsigned int *cnt) 17 + static int wait_for_partner(struct inode* inode, unsigned int *cnt) 18 18 { 19 19 int cur = *cnt; 20 20 ··· 23 23 if (signal_pending(current)) 24 24 break; 25 25 } 26 + return cur == *cnt ? -ERESTARTSYS : 0; 26 27 } 27 28 28 29 static void wake_up_partner(struct inode* inode) ··· 68 67 * seen a writer */ 69 68 filp->f_version = pipe->w_counter; 70 69 } else { 71 - wait_for_partner(inode, &pipe->w_counter); 72 - if(signal_pending(current)) 70 + if (wait_for_partner(inode, &pipe->w_counter)) 73 71 goto err_rd; 74 72 } 75 73 } ··· 90 90 wake_up_partner(inode); 91 91 92 92 if (!pipe->readers) { 93 - wait_for_partner(inode, &pipe->r_counter); 94 - if (signal_pending(current)) 93 + if (wait_for_partner(inode, &pipe->r_counter)) 95 94 goto err_wr; 96 95 } 97 96 break;
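The fifo.c hunk folds the interrupted-wait check into wait_for_partner() itself, so callers branch on its return value instead of re-testing signal_pending(). The decision reduces to one comparison, modeled below; the -ERESTARTSYS value matches the kernel's internal errno, but treat the helper as illustrative only:

```c
#include <assert.h>

#define ERESTARTSYS 512	/* kernel-internal errno value */

/*
 * Model of the new `return cur == *cnt ? -ERESTARTSYS : 0;`: the wait
 * succeeded iff the partner counter advanced while we slept; otherwise
 * we were woken by a signal and report that to the caller.
 */
static int wait_for_partner_result(unsigned int cnt_at_entry,
				   unsigned int cnt_now)
{
	return cnt_at_entry == cnt_now ? -ERESTARTSYS : 0;
}
```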
+1 -1
fs/locks.c
··· 1465 1465 case F_WRLCK: 1466 1466 return generic_add_lease(filp, arg, flp); 1467 1467 default: 1468 - BUG(); 1468 + return -EINVAL; 1469 1469 } 1470 1470 } 1471 1471 EXPORT_SYMBOL(generic_setlease);
+5 -1
fs/nfs/direct.c
··· 484 484 485 485 list_for_each_entry_safe(req, tmp, &reqs, wb_list) { 486 486 if (!nfs_pageio_add_request(&desc, req)) { 487 + nfs_list_remove_request(req); 487 488 nfs_list_add_request(req, &failed); 488 489 spin_lock(cinfo.lock); 489 490 dreq->flags = 0; ··· 495 494 } 496 495 nfs_pageio_complete(&desc); 497 496 498 - while (!list_empty(&failed)) 497 + while (!list_empty(&failed)) { 498 + req = nfs_list_entry(failed.next); 499 + nfs_list_remove_request(req); 499 500 nfs_unlock_and_release_request(req); 501 + } 500 502 501 503 if (put_dreq(dreq)) 502 504 nfs_direct_write_complete(dreq, dreq->inode);
+2
fs/nfs/super.c
··· 2860 2860 2861 2861 dfprintk(MOUNT, "--> nfs4_try_mount()\n"); 2862 2862 2863 + mount_info->fill_super = nfs4_fill_super; 2864 + 2863 2865 export_path = data->nfs_server.export_path; 2864 2866 data->nfs_server.export_path = "/"; 2865 2867 root_mnt = nfs_do_root_mount(&nfs4_remote_fs_type, flags, mount_info,
+20 -13
fs/ocfs2/dlmglue.c
··· 456 456 stats->ls_gets++; 457 457 stats->ls_total += ktime_to_ns(kt); 458 458 /* overflow */ 459 - if (unlikely(stats->ls_gets) == 0) { 459 + if (unlikely(stats->ls_gets == 0)) { 460 460 stats->ls_gets++; 461 461 stats->ls_total = ktime_to_ns(kt); 462 462 } ··· 3932 3932 static void ocfs2_schedule_blocked_lock(struct ocfs2_super *osb, 3933 3933 struct ocfs2_lock_res *lockres) 3934 3934 { 3935 + unsigned long flags; 3936 + 3935 3937 assert_spin_locked(&lockres->l_lock); 3936 3938 3937 3939 if (lockres->l_flags & OCFS2_LOCK_FREEING) { ··· 3947 3945 3948 3946 lockres_or_flags(lockres, OCFS2_LOCK_QUEUED); 3949 3947 3950 - spin_lock(&osb->dc_task_lock); 3948 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3951 3949 if (list_empty(&lockres->l_blocked_list)) { 3952 3950 list_add_tail(&lockres->l_blocked_list, 3953 3951 &osb->blocked_lock_list); 3954 3952 osb->blocked_lock_count++; 3955 3953 } 3956 - spin_unlock(&osb->dc_task_lock); 3954 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3957 3955 } 3958 3956 3959 3957 static void ocfs2_downconvert_thread_do_work(struct ocfs2_super *osb) 3960 3958 { 3961 3959 unsigned long processed; 3960 + unsigned long flags; 3962 3961 struct ocfs2_lock_res *lockres; 3963 3962 3964 - spin_lock(&osb->dc_task_lock); 3963 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3965 3964 /* grab this early so we know to try again if a state change and 3966 3965 * wake happens part-way through our work */ 3967 3966 osb->dc_work_sequence = osb->dc_wake_sequence; ··· 3975 3972 struct ocfs2_lock_res, l_blocked_list); 3976 3973 list_del_init(&lockres->l_blocked_list); 3977 3974 osb->blocked_lock_count--; 3978 - spin_unlock(&osb->dc_task_lock); 3975 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3979 3976 3980 3977 BUG_ON(!processed); 3981 3978 processed--; 3982 3979 3983 3980 ocfs2_process_blocked_lock(osb, lockres); 3984 3981 3985 - spin_lock(&osb->dc_task_lock); 3982 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3986 3983 } 3987 - 
spin_unlock(&osb->dc_task_lock); 3984 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3988 3985 } 3989 3986 3990 3987 static int ocfs2_downconvert_thread_lists_empty(struct ocfs2_super *osb) 3991 3988 { 3992 3989 int empty = 0; 3990 + unsigned long flags; 3993 3991 3994 - spin_lock(&osb->dc_task_lock); 3992 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3995 3993 if (list_empty(&osb->blocked_lock_list)) 3996 3994 empty = 1; 3997 3995 3998 - spin_unlock(&osb->dc_task_lock); 3996 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3999 3997 return empty; 4000 3998 } 4001 3999 4002 4000 static int ocfs2_downconvert_thread_should_wake(struct ocfs2_super *osb) 4003 4001 { 4004 4002 int should_wake = 0; 4003 + unsigned long flags; 4005 4004 4006 - spin_lock(&osb->dc_task_lock); 4005 + spin_lock_irqsave(&osb->dc_task_lock, flags); 4007 4006 if (osb->dc_work_sequence != osb->dc_wake_sequence) 4008 4007 should_wake = 1; 4009 - spin_unlock(&osb->dc_task_lock); 4008 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 4010 4009 4011 4010 return should_wake; 4012 4011 } ··· 4038 4033 4039 4034 void ocfs2_wake_downconvert_thread(struct ocfs2_super *osb) 4040 4035 { 4041 - spin_lock(&osb->dc_task_lock); 4036 + unsigned long flags; 4037 + 4038 + spin_lock_irqsave(&osb->dc_task_lock, flags); 4042 4039 /* make sure the voting thread gets a swipe at whatever changes 4043 4040 * the caller may have made to the voting state */ 4044 4041 osb->dc_wake_sequence++; 4045 - spin_unlock(&osb->dc_task_lock); 4042 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 4046 4043 wake_up(&osb->dc_event); 4047 4044 }
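The dlmglue hunk converts every dc_task_lock site to spin_lock_irqsave()/spin_unlock_irqrestore(). The point of the save/restore pair, as opposed to an unconditional disable/enable, is that it preserves the caller's interrupt state across nesting. A toy userspace model of just the flag semantics (this is not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for local_irq_save()/local_irq_restore(). */
static bool irqs_enabled = true;

static unsigned long toy_irq_save(void)
{
	unsigned long flags = irqs_enabled;

	irqs_enabled = false;
	return flags;
}

static void toy_irq_restore(unsigned long flags)
{
	irqs_enabled = flags;
}
```

With a plain enable on unlock, the inner critical section would turn interrupts back on inside an outer section that had disabled them; restoring the saved flags avoids that.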
-2
fs/ocfs2/extent_map.c
··· 923 923 924 924 ocfs2_inode_unlock(inode, 0); 925 925 out: 926 - if (ret && ret != -ENXIO) 927 - ret = -ENXIO; 928 926 return ret; 929 927 } 930 928
+4 -2
fs/ocfs2/file.c
··· 1950 1950 if (ret < 0) 1951 1951 mlog_errno(ret); 1952 1952 1953 - if (file->f_flags & O_SYNC) 1953 + if (file && (file->f_flags & O_SYNC)) 1954 1954 handle->h_sync = 1; 1955 1955 1956 1956 ocfs2_commit_trans(osb, handle); ··· 2422 2422 unaligned_dio = 0; 2423 2423 } 2424 2424 2425 - if (unaligned_dio) 2425 + if (unaligned_dio) { 2426 + ocfs2_iocb_clear_unaligned_aio(iocb); 2426 2427 atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio); 2428 + } 2427 2429 2428 2430 out: 2429 2431 if (rw_level != -1)
-2
fs/ocfs2/quota_global.c
··· 399 399 msecs_to_jiffies(oinfo->dqi_syncms)); 400 400 401 401 out_err: 402 - if (status) 403 - mlog_errno(status); 404 402 return status; 405 403 out_unlock: 406 404 ocfs2_unlock_global_qf(oinfo, 0);
+3 -3
fs/open.c
··· 397 397 { 398 398 struct file *file; 399 399 struct inode *inode; 400 - int error; 400 + int error, fput_needed; 401 401 402 402 error = -EBADF; 403 - file = fget(fd); 403 + file = fget_raw_light(fd, &fput_needed); 404 404 if (!file) 405 405 goto out; 406 406 ··· 414 414 if (!error) 415 415 set_fs_pwd(current->fs, &file->f_path); 416 416 out_putf: 417 - fput(file); 417 + fput_light(file, fput_needed); 418 418 out: 419 419 return error; 420 420 }
+1
fs/ramfs/file-nommu.c
··· 110 110 111 111 /* prevent the page from being discarded on memory pressure */ 112 112 SetPageDirty(page); 113 + SetPageUptodate(page); 113 114 114 115 unlock_page(page); 115 116 put_page(page);
+14 -5
fs/xfs/xfs_alloc.c
··· 1074 1074 * If we couldn't get anything, give up. 1075 1075 */ 1076 1076 if (bno_cur_lt == NULL && bno_cur_gt == NULL) { 1077 + xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR); 1078 + 1077 1079 if (!forced++) { 1078 1080 trace_xfs_alloc_near_busy(args); 1079 1081 xfs_log_force(args->mp, XFS_LOG_SYNC); 1080 1082 goto restart; 1081 1083 } 1082 - 1083 - xfs_btree_del_cursor(cnt_cur, XFS_BTREE_NOERROR); 1084 1084 trace_xfs_alloc_size_neither(args); 1085 1085 args->agbno = NULLAGBLOCK; 1086 1086 return 0; ··· 2434 2434 current_restore_flags_nested(&pflags, PF_FSTRANS); 2435 2435 } 2436 2436 2437 - 2438 - int /* error */ 2437 + /* 2438 + * Data allocation requests often come in with little stack to work on. Push 2439 + * them off to a worker thread so there is lots of stack to use. Metadata 2440 + * requests, OTOH, are generally from low stack usage paths, so avoid the 2441 + * context switch overhead here. 2442 + */ 2443 + int 2439 2444 xfs_alloc_vextent( 2440 - xfs_alloc_arg_t *args) /* allocation argument structure */ 2445 + struct xfs_alloc_arg *args) 2441 2446 { 2442 2447 DECLARE_COMPLETION_ONSTACK(done); 2448 + 2449 + if (!args->userdata) 2450 + return __xfs_alloc_vextent(args); 2451 + 2443 2452 2444 2453 args->done = &done; 2445 2454 INIT_WORK_ONSTACK(&args->work, xfs_alloc_vextent_worker);
+23 -30
fs/xfs/xfs_buf.c
··· 989 989 (__uint64_t)XFS_BUF_ADDR(bp), func, bp->b_error, bp->b_length); 990 990 } 991 991 992 - int 993 - xfs_bwrite( 994 - struct xfs_buf *bp) 995 - { 996 - int error; 997 - 998 - ASSERT(xfs_buf_islocked(bp)); 999 - 1000 - bp->b_flags |= XBF_WRITE; 1001 - bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q); 1002 - 1003 - xfs_bdstrat_cb(bp); 1004 - 1005 - error = xfs_buf_iowait(bp); 1006 - if (error) { 1007 - xfs_force_shutdown(bp->b_target->bt_mount, 1008 - SHUTDOWN_META_IO_ERROR); 1009 - } 1010 - return error; 1011 - } 1012 - 1013 992 /* 1014 993 * Called when we want to stop a buffer from getting written or read. 1015 994 * We attach the EIO error, muck with its flags, and call xfs_buf_ioend ··· 1058 1079 return EIO; 1059 1080 } 1060 1081 1061 - 1062 - /* 1063 - * All xfs metadata buffers except log state machine buffers 1064 - * get this attached as their b_bdstrat callback function. 1065 - * This is so that we can catch a buffer 1066 - * after prematurely unpinning it to forcibly shutdown the filesystem. 1067 - */ 1068 - int 1082 + STATIC int 1069 1083 xfs_bdstrat_cb( 1070 1084 struct xfs_buf *bp) 1071 1085 { ··· 1077 1105 1078 1106 xfs_buf_iorequest(bp); 1079 1107 return 0; 1108 + } 1109 + 1110 + int 1111 + xfs_bwrite( 1112 + struct xfs_buf *bp) 1113 + { 1114 + int error; 1115 + 1116 + ASSERT(xfs_buf_islocked(bp)); 1117 + 1118 + bp->b_flags |= XBF_WRITE; 1119 + bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q); 1120 + 1121 + xfs_bdstrat_cb(bp); 1122 + 1123 + error = xfs_buf_iowait(bp); 1124 + if (error) { 1125 + xfs_force_shutdown(bp->b_target->bt_mount, 1126 + SHUTDOWN_META_IO_ERROR); 1127 + } 1128 + return error; 1080 1129 } 1081 1130 1082 1131 /* ··· 1236 1243 */ 1237 1244 atomic_set(&bp->b_io_remaining, 1); 1238 1245 _xfs_buf_ioapply(bp); 1239 - _xfs_buf_ioend(bp, 0); 1246 + _xfs_buf_ioend(bp, 1); 1240 1247 1241 1248 xfs_buf_rele(bp); 1242 1249 }
-1
fs/xfs/xfs_buf.h
··· 180 180 extern int xfs_bwrite(struct xfs_buf *bp); 181 181 182 182 extern void xfsbdstrat(struct xfs_mount *, struct xfs_buf *); 183 - extern int xfs_bdstrat_cb(struct xfs_buf *); 184 183 185 184 extern void xfs_buf_ioend(xfs_buf_t *, int); 186 185 extern void xfs_buf_ioerror(xfs_buf_t *, int);
+1 -1
fs/xfs/xfs_buf_item.c
··· 954 954 955 955 if (!XFS_BUF_ISSTALE(bp)) { 956 956 bp->b_flags |= XBF_WRITE | XBF_ASYNC | XBF_DONE; 957 - xfs_bdstrat_cb(bp); 957 + xfs_buf_iorequest(bp); 958 958 } else { 959 959 xfs_buf_relse(bp); 960 960 }
+1 -1
include/asm-generic/dma-contiguous.h
··· 18 18 { 19 19 if (dev) 20 20 dev->cma_area = cma; 21 - if (!dev || !dma_contiguous_default_area) 21 + if (!dev && !dma_contiguous_default_area) 22 22 dma_contiguous_default_area = cma; 23 23 } 24 24
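The one-character dma-contiguous fix above changes `!dev || !dma_contiguous_default_area` to `!dev && !dma_contiguous_default_area`: the global default CMA area should be set only when no device was passed and no default exists yet; with `||`, a per-device call could clobber the default. A minimal model of the corrected branch (the per-device assignment is omitted):

```c
#include <assert.h>
#include <stddef.h>

static const void *default_area;

/* Model of the fixed condition only: set the global default iff there
 * is no device AND no default has been installed yet. */
static void set_cma_area_model(const void *dev, const void *cma)
{
	if (!dev && !default_area)
		default_area = cma;
}
```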
+1
include/linux/aio.h
··· 140 140 (x)->ki_dtor = NULL; \ 141 141 (x)->ki_obj.tsk = tsk; \ 142 142 (x)->ki_user_data = 0; \ 143 + (x)->private = NULL; \ 143 144 } while (0) 144 145 145 146 #define AIO_RING_MAGIC 0xa10a10a1
+5
include/linux/bootmem.h
··· 91 91 unsigned long size, 92 92 unsigned long align, 93 93 unsigned long goal); 94 + void *___alloc_bootmem_node_nopanic(pg_data_t *pgdat, 95 + unsigned long size, 96 + unsigned long align, 97 + unsigned long goal, 98 + unsigned long limit); 94 99 extern void *__alloc_bootmem_low(unsigned long size, 95 100 unsigned long align, 96 101 unsigned long goal);
+3 -3
include/linux/capability.h
··· 360 360 361 361 #define CAP_WAKE_ALARM 35 362 362 363 - /* Allow preventing system suspends while epoll events are pending */ 363 + /* Allow preventing system suspends */ 364 364 365 - #define CAP_EPOLLWAKEUP 36 365 + #define CAP_BLOCK_SUSPEND 36 366 366 367 - #define CAP_LAST_CAP CAP_EPOLLWAKEUP 367 + #define CAP_LAST_CAP CAP_BLOCK_SUSPEND 368 368 369 369 #define cap_valid(x) ((x) >= 0 && (x) <= CAP_LAST_CAP) 370 370
-2
include/linux/device.h
··· 865 865 extern struct device *get_device(struct device *dev); 866 866 extern void put_device(struct device *dev); 867 867 868 - extern void wait_for_device_probe(void); 869 - 870 868 #ifdef CONFIG_DEVTMPFS 871 869 extern int devtmpfs_create_node(struct device *dev); 872 870 extern int devtmpfs_delete_node(struct device *dev);
+1 -1
include/linux/eventpoll.h
··· 34 34 * re-allowed until epoll_wait is called again after consuming the wakeup 35 35 * event(s). 36 36 * 37 - * Requires CAP_EPOLLWAKEUP 37 + * Requires CAP_BLOCK_SUSPEND 38 38 */ 39 39 #define EPOLLWAKEUP (1 << 29) 40 40
+2 -2
include/linux/gpio.h
··· 22 22 /* Gpio pin is open source */ 23 23 #define GPIOF_OPEN_SOURCE (1 << 3) 24 24 25 - #define GPIOF_EXPORT (1 << 2) 26 - #define GPIOF_EXPORT_CHANGEABLE (1 << 3) 25 + #define GPIOF_EXPORT (1 << 4) 26 + #define GPIOF_EXPORT_CHANGEABLE (1 << 5) 27 27 #define GPIOF_EXPORT_DIR_FIXED (GPIOF_EXPORT) 28 28 #define GPIOF_EXPORT_DIR_CHANGEABLE (GPIOF_EXPORT | GPIOF_EXPORT_CHANGEABLE) 29 29
+9 -1
include/linux/hrtimer.h
··· 165 165 * @lock: lock protecting the base and associated clock bases 166 166 * and timers 167 167 * @active_bases: Bitfield to mark bases with active timers 168 + * @clock_was_set: Indicates that clock was set from irq context. 168 169 * @expires_next: absolute time of the next event which was scheduled 169 170 * via clock_set_next_event() 170 171 * @hres_active: State of high resolution mode ··· 178 177 */ 179 178 struct hrtimer_cpu_base { 180 179 raw_spinlock_t lock; 181 - unsigned long active_bases; 180 + unsigned int active_bases; 181 + unsigned int clock_was_set; 182 182 #ifdef CONFIG_HIGH_RES_TIMERS 183 183 ktime_t expires_next; 184 184 int hres_active; ··· 288 286 # define MONOTONIC_RES_NSEC HIGH_RES_NSEC 289 287 # define KTIME_MONOTONIC_RES KTIME_HIGH_RES 290 288 289 + extern void clock_was_set_delayed(void); 290 + 291 291 #else 292 292 293 293 # define MONOTONIC_RES_NSEC LOW_RES_NSEC ··· 310 306 { 311 307 return 0; 312 308 } 309 + 310 + static inline void clock_was_set_delayed(void) { } 311 + 313 312 #endif 314 313 315 314 extern void clock_was_set(void); ··· 327 320 extern ktime_t ktime_get_real(void); 328 321 extern ktime_t ktime_get_boottime(void); 329 322 extern ktime_t ktime_get_monotonic_offset(void); 323 + extern ktime_t ktime_get_update_offsets(ktime_t *offs_real, ktime_t *offs_boot); 330 324 331 325 DECLARE_PER_CPU(struct tick_device, tick_cpu_device); 332 326
+1
include/linux/input.h
··· 116 116 117 117 /** 118 118 * EVIOCGMTSLOTS(len) - get MT slot values 119 + * @len: size of the data buffer in bytes 119 120 * 120 121 * The ioctl buffer argument should be binary equivalent to 121 122 *
+2 -2
include/linux/kvm_host.h
··· 815 815 #ifdef CONFIG_HAVE_KVM_EVENTFD 816 816 817 817 void kvm_eventfd_init(struct kvm *kvm); 818 - int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags); 818 + int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args); 819 819 void kvm_irqfd_release(struct kvm *kvm); 820 820 void kvm_irq_routing_update(struct kvm *, struct kvm_irq_routing_table *); 821 821 int kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args); ··· 824 824 825 825 static inline void kvm_eventfd_init(struct kvm *kvm) {} 826 826 827 - static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags) 827 + static inline int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args) 828 828 { 829 829 return -EINVAL; 830 830 }
+1 -3
include/linux/memblock.h
··· 50 50 phys_addr_t size, phys_addr_t align, int nid); 51 51 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end, 52 52 phys_addr_t size, phys_addr_t align); 53 - int memblock_free_reserved_regions(void); 54 - int memblock_reserve_reserved_regions(void); 55 - 53 + phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr); 56 54 void memblock_allow_resize(void); 57 55 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid); 58 56 int memblock_add(phys_addr_t base, phys_addr_t size);
+1 -1
include/linux/mmzone.h
··· 694 694 range, including holes */ 695 695 int node_id; 696 696 wait_queue_head_t kswapd_wait; 697 - struct task_struct *kswapd; 697 + struct task_struct *kswapd; /* Protected by lock_memory_hotplug() */ 698 698 int kswapd_max_order; 699 699 enum zone_type classzone_idx; 700 700 } pg_data_t;
-2
include/linux/pci.h
··· 176 176 PCI_DEV_FLAGS_NO_D3 = (__force pci_dev_flags_t) 2, 177 177 /* Provide indication device is assigned by a Virtual Machine Manager */ 178 178 PCI_DEV_FLAGS_ASSIGNED = (__force pci_dev_flags_t) 4, 179 - /* Device causes system crash if in D3 during S3 sleep */ 180 - PCI_DEV_FLAGS_NO_D3_DURING_SLEEP = (__force pci_dev_flags_t) 8, 181 179 }; 182 180 183 181 enum pci_irq_reroute_variant {
+2
include/linux/prctl.h
··· 141 141 * Changing LSM security domain is considered a new privilege. So, for example, 142 142 * asking selinux for a specific new context (e.g. with runcon) will result 143 143 * in execve returning -EPERM. 144 + * 145 + * See Documentation/prctl/no_new_privs.txt for more details. 144 146 */ 145 147 #define PR_SET_NO_NEW_PRIVS 38 146 148 #define PR_GET_NO_NEW_PRIVS 39
-1
include/linux/rcupdate.h
··· 184 184 /* Internal to kernel */ 185 185 extern void rcu_sched_qs(int cpu); 186 186 extern void rcu_bh_qs(int cpu); 187 - extern void rcu_preempt_note_context_switch(void); 188 187 extern void rcu_check_callbacks(int cpu, int user); 189 188 struct notifier_block; 190 189 extern void rcu_idle_enter(void);
+6
include/linux/rcutiny.h
··· 87 87 88 88 #ifdef CONFIG_TINY_RCU 89 89 90 + static inline void rcu_preempt_note_context_switch(void) 91 + { 92 + } 93 + 90 94 static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies) 91 95 { 92 96 *delta_jiffies = ULONG_MAX; ··· 99 95 100 96 #else /* #ifdef CONFIG_TINY_RCU */ 101 97 98 + void rcu_preempt_note_context_switch(void); 102 99 int rcu_preempt_needs_cpu(void); 103 100 104 101 static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies) ··· 113 108 static inline void rcu_note_context_switch(int cpu) 114 109 { 115 110 rcu_sched_qs(cpu); 111 + rcu_preempt_note_context_switch(); 116 112 } 117 113 118 114 /*
+6
include/linux/rpmsg.h
··· 38 38 #include <linux/types.h> 39 39 #include <linux/device.h> 40 40 #include <linux/mod_devicetable.h> 41 + #include <linux/kref.h> 42 + #include <linux/mutex.h> 41 43 42 44 /* The feature bitmap for virtio rpmsg */ 43 45 #define VIRTIO_RPMSG_F_NS 0 /* RP supports name service notifications */ ··· 122 120 /** 123 121 * struct rpmsg_endpoint - binds a local rpmsg address to its user 124 122 * @rpdev: rpmsg channel device 123 + * @refcount: when this drops to zero, the ept is deallocated 125 124 * @cb: rx callback handler 125 + * @cb_lock: must be taken before accessing/changing @cb 126 126 * @addr: local rpmsg address 127 127 * @priv: private data for the driver's use 128 128 * ··· 144 140 */ 145 141 struct rpmsg_endpoint { 146 142 struct rpmsg_channel *rpdev; 143 + struct kref refcount; 147 144 rpmsg_rx_cb_t cb; 145 + struct mutex cb_lock; 148 146 u32 addr; 149 147 void *priv; 150 148 };
+8 -10
include/linux/sched.h
··· 1871 1871 INIT_LIST_HEAD(&p->rcu_node_entry); 1872 1872 } 1873 1873 1874 - static inline void rcu_switch_from(struct task_struct *prev) 1875 - { 1876 - if (prev->rcu_read_lock_nesting != 0) 1877 - rcu_preempt_note_context_switch(); 1878 - } 1879 - 1880 1874 #else 1881 1875 1882 1876 static inline void rcu_copy_process(struct task_struct *p) 1883 - { 1884 - } 1885 - 1886 - static inline void rcu_switch_from(struct task_struct *prev) 1887 1877 { 1888 1878 } 1889 1879 ··· 1898 1908 return 0; 1899 1909 } 1900 1910 #endif 1911 + 1912 + #ifdef CONFIG_NO_HZ 1913 + void calc_load_enter_idle(void); 1914 + void calc_load_exit_idle(void); 1915 + #else 1916 + static inline void calc_load_enter_idle(void) { } 1917 + static inline void calc_load_exit_idle(void) { } 1918 + #endif /* CONFIG_NO_HZ */ 1901 1919 1902 1920 #ifndef CONFIG_CPUMASK_OFFSTACK 1903 1921 static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
+1 -1
include/net/ip_vs.h
··· 1425 1425 struct nf_conn *ct = nf_ct_get(skb, &ctinfo); 1426 1426 1427 1427 if (!ct || !nf_ct_is_untracked(ct)) { 1428 - nf_reset(skb); 1428 + nf_conntrack_put(skb->nfct); 1429 1429 skb->nfct = &nf_ct_untracked_get()->ct_general; 1430 1430 skb->nfctinfo = IP_CT_NEW; 1431 1431 nf_conntrack_get(skb->nfct);
+4 -2
include/scsi/libsas.h
··· 163 163 ATAPI_COMMAND_SET = 1, 164 164 }; 165 165 166 + #define ATA_RESP_FIS_SIZE 24 167 + 166 168 struct sata_device { 167 169 enum ata_command_set command_set; 168 170 struct smp_resp rps_resp; /* report_phy_sata_resp */ ··· 173 171 174 172 struct ata_port *ap; 175 173 struct ata_host ata_host; 176 - struct ata_taskfile tf; 174 + u8 fis[ATA_RESP_FIS_SIZE]; 177 175 }; 178 176 179 177 enum { ··· 539 537 */ 540 538 struct ata_task_resp { 541 539 u16 frame_len; 542 - u8 ending_fis[24]; /* dev to host or data-in */ 540 + u8 ending_fis[ATA_RESP_FIS_SIZE]; /* dev to host or data-in */ 543 541 }; 544 542 545 543 #define SAS_STATUS_BUF_SIZE 96
+7 -1
include/scsi/scsi_cmnd.h
··· 134 134 135 135 static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd) 136 136 { 137 + struct scsi_driver **sdp; 138 + 137 139 if (!cmd->request->rq_disk) 138 140 return NULL; 139 141 140 - return *(struct scsi_driver **)cmd->request->rq_disk->private_data; 142 + sdp = (struct scsi_driver **)cmd->request->rq_disk->private_data; 143 + if (!sdp) 144 + return NULL; 145 + 146 + return *sdp; 141 147 } 142 148 143 149 extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
+8 -15
kernel/cgroup.c
··· 901 901 mutex_unlock(&cgroup_mutex); 902 902 903 903 /* 904 - * We want to drop the active superblock reference from the 905 - * cgroup creation after all the dentry refs are gone - 906 - * kill_sb gets mighty unhappy otherwise. Mark 907 - * dentry->d_fsdata with cgroup_diput() to tell 908 - * cgroup_d_release() to call deactivate_super(). 904 + * Drop the active superblock reference that we took when we 905 + * created the cgroup 909 906 */ 910 - dentry->d_fsdata = cgroup_diput; 907 + deactivate_super(cgrp->root->sb); 911 908 912 909 /* 913 910 * if we're getting rid of the cgroup, refcount should ensure ··· 928 931 static int cgroup_delete(const struct dentry *d) 929 932 { 930 933 return 1; 931 - } 932 - 933 - static void cgroup_d_release(struct dentry *dentry) 934 - { 935 - /* did cgroup_diput() tell me to deactivate super? */ 936 - if (dentry->d_fsdata == cgroup_diput) 937 - deactivate_super(dentry->d_sb); 938 934 } 939 935 940 936 static void remove_dir(struct dentry *d) ··· 1537 1547 static const struct dentry_operations cgroup_dops = { 1538 1548 .d_iput = cgroup_diput, 1539 1549 .d_delete = cgroup_delete, 1540 - .d_release = cgroup_d_release, 1541 1550 }; 1542 1551 1543 1552 struct inode *inode = ··· 3883 3894 { 3884 3895 struct cgroup_subsys_state *css = 3885 3896 container_of(work, struct cgroup_subsys_state, dput_work); 3897 + struct dentry *dentry = css->cgroup->dentry; 3898 + struct super_block *sb = dentry->d_sb; 3886 3899 3887 - dput(css->cgroup->dentry); 3900 + atomic_inc(&sb->s_active); 3901 + dput(dentry); 3902 + deactivate_super(sb); 3888 3903 } 3889 3904 3890 3905 static void init_cgroup_css(struct cgroup_subsys_state *css,
+8 -3
kernel/fork.c
··· 304 304 } 305 305 306 306 err = arch_dup_task_struct(tsk, orig); 307 + 308 + /* 309 + * We defer looking at err, because we will need this setup 310 + * for the clean up path to work correctly. 311 + */ 312 + tsk->stack = ti; 313 + setup_thread_stack(tsk, orig); 314 + 307 315 if (err) 308 316 goto out; 309 317 310 - tsk->stack = ti; 311 - 312 - setup_thread_stack(tsk, orig); 313 318 clear_user_return_notifier(tsk); 314 319 clear_tsk_need_resched(tsk); 315 320 stackend = end_of_stack(tsk);
+37 -16
kernel/hrtimer.c
··· 657 657 return 0; 658 658 } 659 659 660 + static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base) 661 + { 662 + ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset; 663 + ktime_t *offs_boot = &base->clock_base[HRTIMER_BASE_BOOTTIME].offset; 664 + 665 + return ktime_get_update_offsets(offs_real, offs_boot); 666 + } 667 + 660 668 /* 661 669 * Retrigger next event is called after clock was set 662 670 * ··· 673 665 static void retrigger_next_event(void *arg) 674 666 { 675 667 struct hrtimer_cpu_base *base = &__get_cpu_var(hrtimer_bases); 676 - struct timespec realtime_offset, xtim, wtm, sleep; 677 668 678 669 if (!hrtimer_hres_active()) 679 670 return; 680 671 681 - /* Optimized out for !HIGH_RES */ 682 - get_xtime_and_monotonic_and_sleep_offset(&xtim, &wtm, &sleep); 683 - set_normalized_timespec(&realtime_offset, -wtm.tv_sec, -wtm.tv_nsec); 684 - 685 - /* Adjust CLOCK_REALTIME offset */ 686 672 raw_spin_lock(&base->lock); 687 - base->clock_base[HRTIMER_BASE_REALTIME].offset = 688 - timespec_to_ktime(realtime_offset); 689 - base->clock_base[HRTIMER_BASE_BOOTTIME].offset = 690 - timespec_to_ktime(sleep); 691 - 673 + hrtimer_update_base(base); 692 674 hrtimer_force_reprogram(base, 0); 693 675 raw_spin_unlock(&base->lock); 694 676 } ··· 708 710 base->clock_base[i].resolution = KTIME_HIGH_RES; 709 711 710 712 tick_setup_sched_timer(); 711 - 712 713 /* "Retrigger" the interrupt to get things going */ 713 714 retrigger_next_event(NULL); 714 715 local_irq_restore(flags); 715 716 return 1; 717 + } 718 + 719 + /* 720 + * Called from timekeeping code to reprogramm the hrtimer interrupt 721 + * device. If called from the timer interrupt context we defer it to 722 + * softirq context. 
723 + */ 724 + void clock_was_set_delayed(void) 725 + { 726 + struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases); 727 + 728 + cpu_base->clock_was_set = 1; 729 + __raise_softirq_irqoff(HRTIMER_SOFTIRQ); 716 730 } 717 731 718 732 #else ··· 1260 1250 cpu_base->nr_events++; 1261 1251 dev->next_event.tv64 = KTIME_MAX; 1262 1252 1263 - entry_time = now = ktime_get(); 1253 + raw_spin_lock(&cpu_base->lock); 1254 + entry_time = now = hrtimer_update_base(cpu_base); 1264 1255 retry: 1265 1256 expires_next.tv64 = KTIME_MAX; 1266 - 1267 - raw_spin_lock(&cpu_base->lock); 1268 1257 /* 1269 1258 * We set expires_next to KTIME_MAX here with cpu_base->lock 1270 1259 * held to prevent that a timer is enqueued in our queue via ··· 1339 1330 * We need to prevent that we loop forever in the hrtimer 1340 1331 * interrupt routine. We give it 3 attempts to avoid 1341 1332 * overreacting on some spurious event. 1333 + * 1334 + * Acquire base lock for updating the offsets and retrieving 1335 + * the current time. 1342 1336 */ 1343 - now = ktime_get(); 1337 + raw_spin_lock(&cpu_base->lock); 1338 + now = hrtimer_update_base(cpu_base); 1344 1339 cpu_base->nr_retries++; 1345 1340 if (++retries < 3) 1346 1341 goto retry; ··· 1356 1343 */ 1357 1344 cpu_base->nr_hangs++; 1358 1345 cpu_base->hang_detected = 1; 1346 + raw_spin_unlock(&cpu_base->lock); 1359 1347 delta = ktime_sub(now, entry_time); 1360 1348 if (delta.tv64 > cpu_base->max_hang_time.tv64) 1361 1349 cpu_base->max_hang_time = delta; ··· 1409 1395 1410 1396 static void run_hrtimer_softirq(struct softirq_action *h) 1411 1397 { 1398 + struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases); 1399 + 1400 + if (cpu_base->clock_was_set) { 1401 + cpu_base->clock_was_set = 0; 1402 + clock_was_set(); 1403 + } 1404 + 1412 1405 hrtimer_peek_ahead_timers(); 1413 1406 } 1414 1407
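clock_was_set_delayed(), added in the hrtimer hunk above, exists because the full clock_was_set() work cannot run from hard interrupt context: the interrupt path only sets a per-cpu flag and raises HRTIMER_SOFTIRQ, and run_hrtimer_softirq() test-and-clears the flag before doing the real work. A side effect is that back-to-back requests coalesce into one update. A minimal model of the pattern (not kernel code):

```c
#include <assert.h>

static int clock_was_set_pending;
static int clock_was_set_done;

/* Interrupt-context side: record the request and (in the kernel)
 * raise HRTIMER_SOFTIRQ. */
static void model_clock_was_set_delayed(void)
{
	clock_was_set_pending = 1;
}

/* Softirq side: clear the flag and do the deferred work exactly once,
 * however many times it was requested in between. */
static void model_run_hrtimer_softirq(void)
{
	if (clock_was_set_pending) {
		clock_was_set_pending = 0;
		clock_was_set_done++;	/* stands in for clock_was_set() */
	}
}
```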
-8
kernel/power/hibernate.c
··· 27 27 #include <linux/syscore_ops.h> 28 28 #include <linux/ctype.h> 29 29 #include <linux/genhd.h> 30 - #include <scsi/scsi_scan.h> 31 30 32 31 #include "power.h" 33 32 ··· 746 747 msleep(10); 747 748 async_synchronize_full(); 748 749 } 749 - 750 - /* 751 - * We can't depend on SCSI devices being available after loading 752 - * one of their modules until scsi_complete_async_scans() is 753 - * called and the resume device usually is a SCSI one. 754 - */ 755 - scsi_complete_async_scans(); 756 750 757 751 swsusp_resume_device = name_to_dev_t(resume_file); 758 752 if (!swsusp_resume_device) {
-2
kernel/power/user.c
··· 24 24 #include <linux/console.h> 25 25 #include <linux/cpu.h> 26 26 #include <linux/freezer.h> 27 - #include <scsi/scsi_scan.h> 28 27 29 28 #include <asm/uaccess.h> 30 29 ··· 83 84 * appear. 84 85 */ 85 86 wait_for_device_probe(); 86 - scsi_complete_async_scans(); 87 87 88 88 data->swap = -1; 89 89 data->mode = O_WRONLY;
+126 -76
kernel/printk.c
··· 194 194 */ 195 195 196 196 enum log_flags { 197 - LOG_DEFAULT = 0, 198 - LOG_NOCONS = 1, /* already flushed, do not print to console */ 197 + LOG_NOCONS = 1, /* already flushed, do not print to console */ 198 + LOG_NEWLINE = 2, /* text ended with a newline */ 199 + LOG_PREFIX = 4, /* text started with a prefix */ 200 + LOG_CONT = 8, /* text is a fragment of a continuation line */ 199 201 }; 200 202 201 203 struct log { ··· 219 217 /* the next printk record to read by syslog(READ) or /proc/kmsg */ 220 218 static u64 syslog_seq; 221 219 static u32 syslog_idx; 220 + static enum log_flags syslog_prev; 221 + static size_t syslog_partial; 222 222 223 223 /* index and sequence number of the first record stored in the buffer */ 224 224 static u64 log_first_seq; ··· 434 430 ret = mutex_lock_interruptible(&user->lock); 435 431 if (ret) 436 432 return ret; 437 - raw_spin_lock(&logbuf_lock); 433 + raw_spin_lock_irq(&logbuf_lock); 438 434 while (user->seq == log_next_seq) { 439 435 if (file->f_flags & O_NONBLOCK) { 440 436 ret = -EAGAIN; 441 - raw_spin_unlock(&logbuf_lock); 437 + raw_spin_unlock_irq(&logbuf_lock); 442 438 goto out; 443 439 } 444 440 445 - raw_spin_unlock(&logbuf_lock); 441 + raw_spin_unlock_irq(&logbuf_lock); 446 442 ret = wait_event_interruptible(log_wait, 447 443 user->seq != log_next_seq); 448 444 if (ret) 449 445 goto out; 450 - raw_spin_lock(&logbuf_lock); 446 + raw_spin_lock_irq(&logbuf_lock); 451 447 } 452 448 453 449 if (user->seq < log_first_seq) { ··· 455 451 user->idx = log_first_idx; 456 452 user->seq = log_first_seq; 457 453 ret = -EPIPE; 458 - raw_spin_unlock(&logbuf_lock); 454 + raw_spin_unlock_irq(&logbuf_lock); 459 455 goto out; 460 456 } 461 457 ··· 469 465 for (i = 0; i < msg->text_len; i++) { 470 466 unsigned char c = log_text(msg)[i]; 471 467 472 - if (c < ' ' || c >= 128) 468 + if (c < ' ' || c >= 127 || c == '\\') 473 469 len += sprintf(user->buf + len, "\\x%02x", c); 474 470 else 475 471 user->buf[len++] = c; ··· 493 489 continue; 
494 490 } 495 491 496 - if (c < ' ' || c >= 128) { 492 + if (c < ' ' || c >= 127 || c == '\\') { 497 493 len += sprintf(user->buf + len, "\\x%02x", c); 498 494 continue; 499 495 } ··· 505 501 506 502 user->idx = log_next(user->idx); 507 503 user->seq++; 508 - raw_spin_unlock(&logbuf_lock); 504 + raw_spin_unlock_irq(&logbuf_lock); 509 505 510 506 if (len > count) { 511 507 ret = -EINVAL; ··· 532 528 if (offset) 533 529 return -ESPIPE; 534 530 535 - raw_spin_lock(&logbuf_lock); 531 + raw_spin_lock_irq(&logbuf_lock); 536 532 switch (whence) { 537 533 case SEEK_SET: 538 534 /* the first record */ ··· 556 552 default: 557 553 ret = -EINVAL; 558 554 } 559 - raw_spin_unlock(&logbuf_lock); 555 + raw_spin_unlock_irq(&logbuf_lock); 560 556 return ret; 561 557 } 562 558 ··· 570 566 571 567 poll_wait(file, &log_wait, wait); 572 568 573 - raw_spin_lock(&logbuf_lock); 569 + raw_spin_lock_irq(&logbuf_lock); 574 570 if (user->seq < log_next_seq) { 575 571 /* return error when data has vanished underneath us */ 576 572 if (user->seq < log_first_seq) 577 573 ret = POLLIN|POLLRDNORM|POLLERR|POLLPRI; 578 574 ret = POLLIN|POLLRDNORM; 579 575 } 580 - raw_spin_unlock(&logbuf_lock); 576 + raw_spin_unlock_irq(&logbuf_lock); 581 577 582 578 return ret; 583 579 } ··· 601 597 602 598 mutex_init(&user->lock); 603 599 604 - raw_spin_lock(&logbuf_lock); 600 + raw_spin_lock_irq(&logbuf_lock); 605 601 user->idx = log_first_idx; 606 602 user->seq = log_first_seq; 607 - raw_spin_unlock(&logbuf_lock); 603 + raw_spin_unlock_irq(&logbuf_lock); 608 604 609 605 file->private_data = user; 610 606 return 0; ··· 822 818 static size_t print_prefix(const struct log *msg, bool syslog, char *buf) 823 819 { 824 820 size_t len = 0; 821 + unsigned int prefix = (msg->facility << 3) | msg->level; 825 822 826 823 if (syslog) { 827 824 if (buf) { 828 - len += sprintf(buf, "<%u>", msg->level); 825 + len += sprintf(buf, "<%u>", prefix); 829 826 } else { 830 827 len += 3; 831 - if (msg->level > 9) 832 - len++; 833 - if 
(msg->level > 99) 828 + if (prefix > 999) 829 + len += 3; 830 + else if (prefix > 99) 831 + len += 2; 832 + else if (prefix > 9) 834 833 len++; 835 834 } 836 835 } ··· 842 835 return len; 843 836 } 844 837 845 - static size_t msg_print_text(const struct log *msg, bool syslog, 846 - char *buf, size_t size) 838 + static size_t msg_print_text(const struct log *msg, enum log_flags prev, 839 + bool syslog, char *buf, size_t size) 847 840 { 848 841 const char *text = log_text(msg); 849 842 size_t text_size = msg->text_len; 843 + bool prefix = true; 844 + bool newline = true; 850 845 size_t len = 0; 846 + 847 + if ((prev & LOG_CONT) && !(msg->flags & LOG_PREFIX)) 848 + prefix = false; 849 + 850 + if (msg->flags & LOG_CONT) { 851 + if ((prev & LOG_CONT) && !(prev & LOG_NEWLINE)) 852 + prefix = false; 853 + 854 + if (!(msg->flags & LOG_NEWLINE)) 855 + newline = false; 856 + } 851 857 852 858 do { 853 859 const char *next = memchr(text, '\n', text_size); ··· 879 859 text_len + 1>= size - len) 880 860 break; 881 861 882 - len += print_prefix(msg, syslog, buf + len); 862 + if (prefix) 863 + len += print_prefix(msg, syslog, buf + len); 883 864 memcpy(buf + len, text, text_len); 884 865 len += text_len; 885 - buf[len++] = '\n'; 866 + if (next || newline) 867 + buf[len++] = '\n'; 886 868 } else { 887 869 /* SYSLOG_ACTION_* buffer size only calculation */ 888 - len += print_prefix(msg, syslog, NULL); 889 - len += text_len + 1; 870 + if (prefix) 871 + len += print_prefix(msg, syslog, NULL); 872 + len += text_len; 873 + if (next || newline) 874 + len++; 890 875 } 891 876 877 + prefix = true; 892 878 text = next; 893 879 } while (text); 894 880 ··· 913 887 914 888 while (size > 0) { 915 889 size_t n; 890 + size_t skip; 916 891 917 892 raw_spin_lock_irq(&logbuf_lock); 918 893 if (syslog_seq < log_first_seq) { 919 894 /* messages are gone, move to first one */ 920 895 syslog_seq = log_first_seq; 921 896 syslog_idx = log_first_idx; 897 + syslog_prev = 0; 898 + syslog_partial = 0; 922 
899 } 923 900 if (syslog_seq == log_next_seq) { 924 901 raw_spin_unlock_irq(&logbuf_lock); 925 902 break; 926 903 } 904 + 905 + skip = syslog_partial; 927 906 msg = log_from_idx(syslog_idx); 928 - n = msg_print_text(msg, true, text, LOG_LINE_MAX); 929 - if (n <= size) { 907 + n = msg_print_text(msg, syslog_prev, true, text, LOG_LINE_MAX); 908 + if (n - syslog_partial <= size) { 909 + /* message fits into buffer, move forward */ 930 910 syslog_idx = log_next(syslog_idx); 931 911 syslog_seq++; 912 + syslog_prev = msg->flags; 913 + n -= syslog_partial; 914 + syslog_partial = 0; 915 + } else if (!len){ 916 + /* partial read(), remember position */ 917 + n = size; 918 + syslog_partial += n; 932 919 } else 933 920 n = 0; 934 921 raw_spin_unlock_irq(&logbuf_lock); ··· 949 910 if (!n) 950 911 break; 951 912 952 - len += n; 953 - size -= n; 954 - buf += n; 955 - n = copy_to_user(buf - n, text, n); 956 - 957 - if (n) { 958 - len -= n; 913 + if (copy_to_user(buf, text + skip, n)) { 959 914 if (!len) 960 915 len = -EFAULT; 961 916 break; 962 917 } 918 + 919 + len += n; 920 + size -= n; 921 + buf += n; 963 922 } 964 923 965 924 kfree(text); ··· 978 941 u64 next_seq; 979 942 u64 seq; 980 943 u32 idx; 944 + enum log_flags prev; 981 945 982 946 if (clear_seq < log_first_seq) { 983 947 /* messages are gone, move to first available one */ ··· 992 954 */ 993 955 seq = clear_seq; 994 956 idx = clear_idx; 957 + prev = 0; 995 958 while (seq < log_next_seq) { 996 959 struct log *msg = log_from_idx(idx); 997 960 998 - len += msg_print_text(msg, true, NULL, 0); 961 + len += msg_print_text(msg, prev, true, NULL, 0); 999 962 idx = log_next(idx); 1000 963 seq++; 1001 964 } ··· 1004 965 /* move first record forward until length fits into the buffer */ 1005 966 seq = clear_seq; 1006 967 idx = clear_idx; 968 + prev = 0; 1007 969 while (len > size && seq < log_next_seq) { 1008 970 struct log *msg = log_from_idx(idx); 1009 971 1010 - len -= msg_print_text(msg, true, NULL, 0); 972 + len -= 
msg_print_text(msg, prev, true, NULL, 0); 1011 973 idx = log_next(idx); 1012 974 seq++; 1013 975 } ··· 1017 977 next_seq = log_next_seq; 1018 978 1019 979 len = 0; 980 + prev = 0; 1020 981 while (len >= 0 && seq < next_seq) { 1021 982 struct log *msg = log_from_idx(idx); 1022 983 int textlen; 1023 984 1024 - textlen = msg_print_text(msg, true, text, LOG_LINE_MAX); 985 + textlen = msg_print_text(msg, prev, true, text, LOG_LINE_MAX); 1025 986 if (textlen < 0) { 1026 987 len = textlen; 1027 988 break; 1028 989 } 1029 990 idx = log_next(idx); 1030 991 seq++; 992 + prev = msg->flags; 1031 993 1032 994 raw_spin_unlock_irq(&logbuf_lock); 1033 995 if (copy_to_user(buf + len, text, textlen)) ··· 1042 1000 /* messages are gone, move to next one */ 1043 1001 seq = log_first_seq; 1044 1002 idx = log_first_idx; 1003 + prev = 0; 1045 1004 } 1046 1005 } 1047 1006 } ··· 1061 1018 { 1062 1019 bool clear = false; 1063 1020 static int saved_console_loglevel = -1; 1064 - static DEFINE_MUTEX(syslog_mutex); 1065 1021 int error; 1066 1022 1067 1023 error = check_syslog_permissions(type, from_file); ··· 1087 1045 error = -EFAULT; 1088 1046 goto out; 1089 1047 } 1090 - error = mutex_lock_interruptible(&syslog_mutex); 1091 - if (error) 1092 - goto out; 1093 1048 error = wait_event_interruptible(log_wait, 1094 1049 syslog_seq != log_next_seq); 1095 - if (error) { 1096 - mutex_unlock(&syslog_mutex); 1050 + if (error) 1097 1051 goto out; 1098 - } 1099 1052 error = syslog_print(buf, len); 1100 - mutex_unlock(&syslog_mutex); 1101 1053 break; 1102 1054 /* Read/clear last kernel messages */ 1103 1055 case SYSLOG_ACTION_READ_CLEAR: ··· 1147 1111 /* messages are gone, move to first one */ 1148 1112 syslog_seq = log_first_seq; 1149 1113 syslog_idx = log_first_idx; 1114 + syslog_prev = 0; 1115 + syslog_partial = 0; 1150 1116 } 1151 1117 if (from_file) { 1152 1118 /* ··· 1158 1120 */ 1159 1121 error = log_next_idx - syslog_idx; 1160 1122 } else { 1161 - u64 seq; 1162 - u32 idx; 1123 + u64 seq = 
syslog_seq; 1124 + u32 idx = syslog_idx; 1125 + enum log_flags prev = syslog_prev; 1163 1126 1164 1127 error = 0; 1165 - seq = syslog_seq; 1166 - idx = syslog_idx; 1167 1128 while (seq < log_next_seq) { 1168 1129 struct log *msg = log_from_idx(idx); 1169 1130 1170 - error += msg_print_text(msg, true, NULL, 0); 1131 + error += msg_print_text(msg, prev, true, NULL, 0); 1171 1132 idx = log_next(idx); 1172 1133 seq++; 1134 + prev = msg->flags; 1173 1135 } 1136 + error -= syslog_partial; 1174 1137 } 1175 1138 raw_spin_unlock_irq(&logbuf_lock); 1176 1139 break; ··· 1439 1400 static char textbuf[LOG_LINE_MAX]; 1440 1401 char *text = textbuf; 1441 1402 size_t text_len; 1403 + enum log_flags lflags = 0; 1442 1404 unsigned long flags; 1443 1405 int this_cpu; 1444 - bool newline = false; 1445 - bool prefix = false; 1446 1406 int printed_len = 0; 1447 1407 1448 1408 boot_delay_msec(); ··· 1480 1442 recursion_bug = 0; 1481 1443 printed_len += strlen(recursion_msg); 1482 1444 /* emit KERN_CRIT message */ 1483 - log_store(0, 2, LOG_DEFAULT, 0, 1445 + log_store(0, 2, LOG_PREFIX|LOG_NEWLINE, 0, 1484 1446 NULL, 0, recursion_msg, printed_len); 1485 1447 } 1486 1448 ··· 1493 1455 /* mark and strip a trailing newline */ 1494 1456 if (text_len && text[text_len-1] == '\n') { 1495 1457 text_len--; 1496 - newline = true; 1458 + lflags |= LOG_NEWLINE; 1497 1459 } 1498 1460 1499 1461 /* strip syslog prefix and extract log level or control flags */ ··· 1503 1465 if (level == -1) 1504 1466 level = text[1] - '0'; 1505 1467 case 'd': /* KERN_DEFAULT */ 1506 - prefix = true; 1468 + lflags |= LOG_PREFIX; 1507 1469 case 'c': /* KERN_CONT */ 1508 1470 text += 3; 1509 1471 text_len -= 3; ··· 1513 1475 if (level == -1) 1514 1476 level = default_message_loglevel; 1515 1477 1516 - if (dict) { 1517 - prefix = true; 1518 - newline = true; 1519 - } 1478 + if (dict) 1479 + lflags |= LOG_PREFIX|LOG_NEWLINE; 1520 1480 1521 - if (!newline) { 1481 + if (!(lflags & LOG_NEWLINE)) { 1522 1482 /* 1523 1483 * Flush 
the conflicting buffer. An earlier newline was missing, 1524 1484 * or another task also prints continuation lines. 1525 1485 */ 1526 - if (cont.len && (prefix || cont.owner != current)) 1486 + if (cont.len && (lflags & LOG_PREFIX || cont.owner != current)) 1527 1487 cont_flush(); 1528 1488 1529 1489 /* buffer line if possible, otherwise store it right away */ 1530 1490 if (!cont_add(facility, level, text, text_len)) 1531 - log_store(facility, level, LOG_DEFAULT, 0, 1491 + log_store(facility, level, lflags | LOG_CONT, 0, 1532 1492 dict, dictlen, text, text_len); 1533 1493 } else { 1534 1494 bool stored = false; ··· 1538 1502 * flush it out and store this line separately. 1539 1503 */ 1540 1504 if (cont.len && cont.owner == current) { 1541 - if (!prefix) 1505 + if (!(lflags & LOG_PREFIX)) 1542 1506 stored = cont_add(facility, level, text, text_len); 1543 1507 cont_flush(); 1544 1508 } 1545 1509 1546 1510 if (!stored) 1547 - log_store(facility, level, LOG_DEFAULT, 0, 1511 + log_store(facility, level, lflags, 0, 1548 1512 dict, dictlen, text, text_len); 1549 1513 } 1550 1514 printed_len += text_len; ··· 1643 1607 static struct log *log_from_idx(u32 idx) { return NULL; } 1644 1608 static u32 log_next(u32 idx) { return 0; } 1645 1609 static void call_console_drivers(int level, const char *text, size_t len) {} 1646 - static size_t msg_print_text(const struct log *msg, bool syslog, 1647 - char *buf, size_t size) { return 0; } 1610 + static size_t msg_print_text(const struct log *msg, enum log_flags prev, 1611 + bool syslog, char *buf, size_t size) { return 0; } 1648 1612 static size_t cont_print_text(char *text, size_t size) { return 0; } 1649 1613 1650 1614 #endif /* CONFIG_PRINTK */ ··· 1920 1884 /* the next printk record to write to the console */ 1921 1885 static u64 console_seq; 1922 1886 static u32 console_idx; 1887 + static enum log_flags console_prev; 1923 1888 1924 1889 /** 1925 1890 * console_unlock - unlock the console system ··· 1981 1944 /* messages are gone, 
move to first one */ 1982 1945 console_seq = log_first_seq; 1983 1946 console_idx = log_first_idx; 1947 + console_prev = 0; 1984 1948 } 1985 1949 skip: 1986 1950 if (console_seq == log_next_seq) ··· 1995 1957 */ 1996 1958 console_idx = log_next(console_idx); 1997 1959 console_seq++; 1960 + /* 1961 + * We will get here again when we register a new 1962 + * CON_PRINTBUFFER console. Clear the flag so we 1963 + * will properly dump everything later. 1964 + */ 1965 + msg->flags &= ~LOG_NOCONS; 1998 1966 goto skip; 1999 1967 } 2000 1968 2001 1969 level = msg->level; 2002 - len = msg_print_text(msg, false, text, sizeof(text)); 2003 - 1970 + len = msg_print_text(msg, console_prev, false, 1971 + text, sizeof(text)); 2004 1972 console_idx = log_next(console_idx); 2005 1973 console_seq++; 1974 + console_prev = msg->flags; 2006 1975 raw_spin_unlock(&logbuf_lock); 2007 1976 2008 1977 stop_critical_timings(); /* don't trace print latency */ ··· 2272 2227 raw_spin_lock_irqsave(&logbuf_lock, flags); 2273 2228 console_seq = syslog_seq; 2274 2229 console_idx = syslog_idx; 2230 + console_prev = syslog_prev; 2275 2231 raw_spin_unlock_irqrestore(&logbuf_lock, flags); 2276 2232 /* 2277 2233 * We're about to replay the log buffer. 
Only do this to the ··· 2566 2520 } 2567 2521 2568 2522 msg = log_from_idx(dumper->cur_idx); 2569 - l = msg_print_text(msg, syslog, 2570 - line, size); 2523 + l = msg_print_text(msg, 0, syslog, line, size); 2571 2524 2572 2525 dumper->cur_idx = log_next(dumper->cur_idx); 2573 2526 dumper->cur_seq++; ··· 2606 2561 u32 idx; 2607 2562 u64 next_seq; 2608 2563 u32 next_idx; 2564 + enum log_flags prev; 2609 2565 size_t l = 0; 2610 2566 bool ret = false; 2611 2567 ··· 2629 2583 /* calculate length of entire buffer */ 2630 2584 seq = dumper->cur_seq; 2631 2585 idx = dumper->cur_idx; 2586 + prev = 0; 2632 2587 while (seq < dumper->next_seq) { 2633 2588 struct log *msg = log_from_idx(idx); 2634 2589 2635 - l += msg_print_text(msg, true, NULL, 0); 2590 + l += msg_print_text(msg, prev, true, NULL, 0); 2636 2591 idx = log_next(idx); 2637 2592 seq++; 2593 + prev = msg->flags; 2638 2594 } 2639 2595 2640 2596 /* move first record forward until length fits into the buffer */ 2641 2597 seq = dumper->cur_seq; 2642 2598 idx = dumper->cur_idx; 2599 + prev = 0; 2643 2600 while (l > size && seq < dumper->next_seq) { 2644 2601 struct log *msg = log_from_idx(idx); 2645 2602 2646 - l -= msg_print_text(msg, true, NULL, 0); 2603 + l -= msg_print_text(msg, prev, true, NULL, 0); 2647 2604 idx = log_next(idx); 2648 2605 seq++; 2606 + prev = msg->flags; 2649 2607 } 2650 2608 2651 2609 /* last message in next interation */ ··· 2657 2607 next_idx = idx; 2658 2608 2659 2609 l = 0; 2610 + prev = 0; 2660 2611 while (seq < dumper->next_seq) { 2661 2612 struct log *msg = log_from_idx(idx); 2662 2613 2663 - l += msg_print_text(msg, syslog, 2664 - buf + l, size - l); 2665 - 2614 + l += msg_print_text(msg, prev, syslog, buf + l, size - l); 2666 2615 idx = log_next(idx); 2667 2616 seq++; 2617 + prev = msg->flags; 2668 2618 } 2669 2619 2670 2620 dumper->next_seq = next_seq;
+1
kernel/rcutree.c
··· 201 201 { 202 202 trace_rcu_utilization("Start context switch"); 203 203 rcu_sched_qs(cpu); 204 + rcu_preempt_note_context_switch(cpu); 204 205 trace_rcu_utilization("End context switch"); 205 206 } 206 207 EXPORT_SYMBOL_GPL(rcu_note_context_switch);
+1
kernel/rcutree.h
··· 444 444 /* Forward declarations for rcutree_plugin.h */ 445 445 static void rcu_bootup_announce(void); 446 446 long rcu_batches_completed(void); 447 + static void rcu_preempt_note_context_switch(int cpu); 447 448 static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp); 448 449 #ifdef CONFIG_HOTPLUG_CPU 449 450 static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp,
+11 -3
kernel/rcutree_plugin.h
··· 153 153 * 154 154 * Caller must disable preemption. 155 155 */ 156 - void rcu_preempt_note_context_switch(void) 156 + static void rcu_preempt_note_context_switch(int cpu) 157 157 { 158 158 struct task_struct *t = current; 159 159 unsigned long flags; ··· 164 164 (t->rcu_read_unlock_special & RCU_READ_UNLOCK_BLOCKED) == 0) { 165 165 166 166 /* Possibly blocking in an RCU read-side critical section. */ 167 - rdp = __this_cpu_ptr(rcu_preempt_state.rda); 167 + rdp = per_cpu_ptr(rcu_preempt_state.rda, cpu); 168 168 rnp = rdp->mynode; 169 169 raw_spin_lock_irqsave(&rnp->lock, flags); 170 170 t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BLOCKED; ··· 228 228 * means that we continue to block the current grace period. 229 229 */ 230 230 local_irq_save(flags); 231 - rcu_preempt_qs(smp_processor_id()); 231 + rcu_preempt_qs(cpu); 232 232 local_irq_restore(flags); 233 233 } 234 234 ··· 1000 1000 rcu_sched_force_quiescent_state(); 1001 1001 } 1002 1002 EXPORT_SYMBOL_GPL(rcu_force_quiescent_state); 1003 + 1004 + /* 1005 + * Because preemptible RCU does not exist, we never have to check for 1006 + * CPUs being in quiescent states. 1007 + */ 1008 + static void rcu_preempt_note_context_switch(int cpu) 1009 + { 1010 + } 1003 1011 1004 1012 /* 1005 1013 * Because preemptible RCU does not exist, there are never any preempted
+204 -74
kernel/sched/core.c
··· 2081 2081 #endif 2082 2082 2083 2083 /* Here we just switch the register state and the stack. */ 2084 - rcu_switch_from(prev); 2085 2084 switch_to(prev, next, prev); 2086 2085 2087 2086 barrier(); ··· 2160 2161 } 2161 2162 2162 2163 2164 + /* 2165 + * Global load-average calculations 2166 + * 2167 + * We take a distributed and async approach to calculating the global load-avg 2168 + * in order to minimize overhead. 2169 + * 2170 + * The global load average is an exponentially decaying average of nr_running + 2171 + * nr_uninterruptible. 2172 + * 2173 + * Once every LOAD_FREQ: 2174 + * 2175 + * nr_active = 0; 2176 + * for_each_possible_cpu(cpu) 2177 + * nr_active += cpu_of(cpu)->nr_running + cpu_of(cpu)->nr_uninterruptible; 2178 + * 2179 + * avenrun[n] = avenrun[0] * exp_n + nr_active * (1 - exp_n) 2180 + * 2181 + * Due to a number of reasons the above turns into the mess below: 2182 + * 2183 + * - for_each_possible_cpu() is prohibitively expensive on machines with 2184 + * serious number of cpus, therefore we need to take a distributed approach 2185 + * to calculating nr_active. 2186 + * 2187 + * \Sum_i x_i(t) = \Sum_i x_i(t) - x_i(t_0) | x_i(t_0) := 0 2188 + * = \Sum_i { \Sum_j=1 x_i(t_j) - x_i(t_j-1) } 2189 + * 2190 + * So assuming nr_active := 0 when we start out -- true per definition, we 2191 + * can simply take per-cpu deltas and fold those into a global accumulate 2192 + * to obtain the same result. See calc_load_fold_active(). 2193 + * 2194 + * Furthermore, in order to avoid synchronizing all per-cpu delta folding 2195 + * across the machine, we assume 10 ticks is sufficient time for every 2196 + * cpu to have completed this task. 2197 + * 2198 + * This places an upper-bound on the IRQ-off latency of the machine. Then 2199 + * again, being late doesn't lose the delta, just wrecks the sample. 
2200 + * 2201 + * - cpu_rq()->nr_uninterruptible isn't accurately tracked per-cpu because 2202 + * this would add another cross-cpu cacheline miss and atomic operation 2203 + * to the wakeup path. Instead we increment on whatever cpu the task ran 2204 + * when it went into uninterruptible state and decrement on whatever cpu 2205 + * did the wakeup. This means that only the sum of nr_uninterruptible over 2206 + * all cpus yields the correct result. 2207 + * 2208 + * This covers the NO_HZ=n code, for extra head-aches, see the comment below. 2209 + */ 2210 + 2163 2211 /* Variables and functions for calc_load */ 2164 2212 static atomic_long_t calc_load_tasks; 2165 2213 static unsigned long calc_load_update; 2166 2214 unsigned long avenrun[3]; 2167 - EXPORT_SYMBOL(avenrun); 2215 + EXPORT_SYMBOL(avenrun); /* should be removed */ 2216 + 2217 + /** 2218 + * get_avenrun - get the load average array 2219 + * @loads: pointer to dest load array 2220 + * @offset: offset to add 2221 + * @shift: shift count to shift the result left 2222 + * 2223 + * These values are estimates at best, so no need for locking. 2224 + */ 2225 + void get_avenrun(unsigned long *loads, unsigned long offset, int shift) 2226 + { 2227 + loads[0] = (avenrun[0] + offset) << shift; 2228 + loads[1] = (avenrun[1] + offset) << shift; 2229 + loads[2] = (avenrun[2] + offset) << shift; 2230 + } 2168 2231 2169 2232 static long calc_load_fold_active(struct rq *this_rq) 2170 2233 { ··· 2243 2182 return delta; 2244 2183 } 2245 2184 2185 + /* 2186 + * a1 = a0 * e + a * (1 - e) 2187 + */ 2246 2188 static unsigned long 2247 2189 calc_load(unsigned long load, unsigned long exp, unsigned long active) 2248 2190 { ··· 2257 2193 2258 2194 #ifdef CONFIG_NO_HZ 2259 2195 /* 2260 - * For NO_HZ we delay the active fold to the next LOAD_FREQ update. 2196 + * Handle NO_HZ for the global load-average. 
2197 + * 2198 + * Since the above described distributed algorithm to compute the global 2199 + * load-average relies on per-cpu sampling from the tick, it is affected by 2200 + * NO_HZ. 2201 + * 2202 + * The basic idea is to fold the nr_active delta into a global idle-delta upon 2203 + * entering NO_HZ state such that we can include this as an 'extra' cpu delta 2204 + * when we read the global state. 2205 + * 2206 + * Obviously reality has to ruin such a delightfully simple scheme: 2207 + * 2208 + * - When we go NO_HZ idle during the window, we can negate our sample 2209 + * contribution, causing under-accounting. 2210 + * 2211 + * We avoid this by keeping two idle-delta counters and flipping them 2212 + * when the window starts, thus separating old and new NO_HZ load. 2213 + * 2214 + * The only trick is the slight shift in index flip for read vs write. 2215 + * 2216 + * 0s 5s 10s 15s 2217 + * +10 +10 +10 +10 2218 + * |-|-----------|-|-----------|-|-----------|-| 2219 + * r:0 0 1 1 0 0 1 1 0 2220 + * w:0 1 1 0 0 1 1 0 0 2221 + * 2222 + * This ensures we'll fold the old idle contribution in this window while 2223 + * accumulating the new one. 2224 + * 2225 + * - When we wake up from NO_HZ idle during the window, we push up our 2226 + * contribution, since we effectively move our sample point to a known 2227 + * busy state. 2228 + * 2229 + * This is solved by pushing the window forward, and thus skipping the 2230 + * sample, for this cpu (effectively using the idle-delta for this cpu which 2231 + * was in effect at the time the window opened). This also solves the issue 2232 + * of having to deal with a cpu having been in NOHZ idle for multiple 2233 + * LOAD_FREQ intervals. 2261 2234 * 2262 2235 * When making the ILB scale, we should try to pull this in as well. 
2263 2236 */ 2264 - static atomic_long_t calc_load_tasks_idle; 2237 + static atomic_long_t calc_load_idle[2]; 2238 + static int calc_load_idx; 2265 2239 2266 - void calc_load_account_idle(struct rq *this_rq) 2240 + static inline int calc_load_write_idx(void) 2267 2241 { 2242 + int idx = calc_load_idx; 2243 + 2244 + /* 2245 + * See calc_global_nohz(), if we observe the new index, we also 2246 + * need to observe the new update time. 2247 + */ 2248 + smp_rmb(); 2249 + 2250 + /* 2251 + * If the folding window started, make sure we start writing in the 2252 + * next idle-delta. 2253 + */ 2254 + if (!time_before(jiffies, calc_load_update)) 2255 + idx++; 2256 + 2257 + return idx & 1; 2258 + } 2259 + 2260 + static inline int calc_load_read_idx(void) 2261 + { 2262 + return calc_load_idx & 1; 2263 + } 2264 + 2265 + void calc_load_enter_idle(void) 2266 + { 2267 + struct rq *this_rq = this_rq(); 2268 2268 long delta; 2269 2269 2270 + /* 2271 + * We're going into NOHZ mode, if there's any pending delta, fold it 2272 + * into the pending idle delta. 2273 + */ 2270 2274 delta = calc_load_fold_active(this_rq); 2271 - if (delta) 2272 - atomic_long_add(delta, &calc_load_tasks_idle); 2275 + if (delta) { 2276 + int idx = calc_load_write_idx(); 2277 + atomic_long_add(delta, &calc_load_idle[idx]); 2278 + } 2279 + } 2280 + 2281 + void calc_load_exit_idle(void) 2282 + { 2283 + struct rq *this_rq = this_rq(); 2284 + 2285 + /* 2286 + * If we're still before the sample window, we're done. 2287 + */ 2288 + if (time_before(jiffies, this_rq->calc_load_update)) 2289 + return; 2290 + 2291 + /* 2292 + * We woke inside or after the sample window, this means we're already 2293 + * accounted through the nohz accounting, so skip the entire deal and 2294 + * sync up for the next window. 
2295 + */ 2296 + this_rq->calc_load_update = calc_load_update; 2297 + if (time_before(jiffies, this_rq->calc_load_update + 10)) 2298 + this_rq->calc_load_update += LOAD_FREQ; 2273 2299 } 2274 2300 2275 2301 static long calc_load_fold_idle(void) 2276 2302 { 2303 + int idx = calc_load_read_idx(); 2277 2304 long delta = 0; 2278 2305 2279 - /* 2280 - * Its got a race, we don't care... 2281 - */ 2282 - if (atomic_long_read(&calc_load_tasks_idle)) 2283 - delta = atomic_long_xchg(&calc_load_tasks_idle, 0); 2306 + if (atomic_long_read(&calc_load_idle[idx])) 2307 + delta = atomic_long_xchg(&calc_load_idle[idx], 0); 2284 2308 2285 2309 return delta; 2286 2310 } ··· 2454 2302 { 2455 2303 long delta, active, n; 2456 2304 2457 - /* 2458 - * If we crossed a calc_load_update boundary, make sure to fold 2459 - * any pending idle changes, the respective CPUs might have 2460 - * missed the tick driven calc_load_account_active() update 2461 - * due to NO_HZ. 2462 - */ 2463 - delta = calc_load_fold_idle(); 2464 - if (delta) 2465 - atomic_long_add(delta, &calc_load_tasks); 2305 + if (!time_before(jiffies, calc_load_update + 10)) { 2306 + /* 2307 + * Catch-up, fold however many we are behind still 2308 + */ 2309 + delta = jiffies - calc_load_update - 10; 2310 + n = 1 + (delta / LOAD_FREQ); 2311 + 2312 + active = atomic_long_read(&calc_load_tasks); 2313 + active = active > 0 ? active * FIXED_1 : 0; 2314 + 2315 + avenrun[0] = calc_load_n(avenrun[0], EXP_1, active, n); 2316 + avenrun[1] = calc_load_n(avenrun[1], EXP_5, active, n); 2317 + avenrun[2] = calc_load_n(avenrun[2], EXP_15, active, n); 2318 + 2319 + calc_load_update += n * LOAD_FREQ; 2320 + } 2466 2321 2467 2322 /* 2468 - * It could be the one fold was all it took, we done! 2323 + * Flip the idle index... 2324 + * 2325 + * Make sure we first write the new time then flip the index, so that 2326 + * calc_load_write_idx() will see the new time when it reads the new 2327 + * index, this avoids a double flip messing things up. 
2469 2328 */ 2470 - if (time_before(jiffies, calc_load_update + 10)) 2471 - return; 2472 - 2473 - /* 2474 - * Catch-up, fold however many we are behind still 2475 - */ 2476 - delta = jiffies - calc_load_update - 10; 2477 - n = 1 + (delta / LOAD_FREQ); 2478 - 2479 - active = atomic_long_read(&calc_load_tasks); 2480 - active = active > 0 ? active * FIXED_1 : 0; 2481 - 2482 - avenrun[0] = calc_load_n(avenrun[0], EXP_1, active, n); 2483 - avenrun[1] = calc_load_n(avenrun[1], EXP_5, active, n); 2484 - avenrun[2] = calc_load_n(avenrun[2], EXP_15, active, n); 2485 - 2486 - calc_load_update += n * LOAD_FREQ; 2329 + smp_wmb(); 2330 + calc_load_idx++; 2487 2331 } 2488 - #else 2489 - void calc_load_account_idle(struct rq *this_rq) 2490 - { 2491 - } 2332 + #else /* !CONFIG_NO_HZ */ 2492 2333 2493 - static inline long calc_load_fold_idle(void) 2494 - { 2495 - return 0; 2496 - } 2334 + static inline long calc_load_fold_idle(void) { return 0; } 2335 + static inline void calc_global_nohz(void) { } 2497 2336 2498 - static void calc_global_nohz(void) 2499 - { 2500 - } 2501 - #endif 2502 - 2503 - /** 2504 - * get_avenrun - get the load average array 2505 - * @loads: pointer to dest load array 2506 - * @offset: offset to add 2507 - * @shift: shift count to shift the result left 2508 - * 2509 - * These values are estimates at best, so no need for locking. 
2510 - */ 2511 - void get_avenrun(unsigned long *loads, unsigned long offset, int shift) 2512 - { 2513 - loads[0] = (avenrun[0] + offset) << shift; 2514 - loads[1] = (avenrun[1] + offset) << shift; 2515 - loads[2] = (avenrun[2] + offset) << shift; 2516 - } 2337 + #endif /* CONFIG_NO_HZ */ 2517 2338 2518 2339 /* 2519 2340 * calc_load - update the avenrun load estimates 10 ticks after the ··· 2494 2369 */ 2495 2370 void calc_global_load(unsigned long ticks) 2496 2371 { 2497 - long active; 2372 + long active, delta; 2498 2373 2499 2374 if (time_before(jiffies, calc_load_update + 10)) 2500 2375 return; 2376 + 2377 + /* 2378 + * Fold the 'old' idle-delta to include all NO_HZ cpus. 2379 + */ 2380 + delta = calc_load_fold_idle(); 2381 + if (delta) 2382 + atomic_long_add(delta, &calc_load_tasks); 2501 2383 2502 2384 active = atomic_long_read(&calc_load_tasks); 2503 2385 active = active > 0 ? active * FIXED_1 : 0; ··· 2516 2384 calc_load_update += LOAD_FREQ; 2517 2385 2518 2386 /* 2519 - * Account one period with whatever state we found before 2520 - * folding in the nohz state and ageing the entire idle period. 2521 - * 2522 - * This avoids loosing a sample when we go idle between 2523 - * calc_load_account_active() (10 ticks ago) and now and thus 2524 - * under-accounting. 2387 + * In case we idled for multiple LOAD_FREQ intervals, catch up in bulk. 2525 2388 */ 2526 2389 calc_global_nohz(); 2527 2390 } ··· 2533 2406 return; 2534 2407 2535 2408 delta = calc_load_fold_active(this_rq); 2536 - delta += calc_load_fold_idle(); 2537 2409 if (delta) 2538 2410 atomic_long_add(delta, &calc_load_tasks); 2539 2411 2540 2412 this_rq->calc_load_update += LOAD_FREQ; 2541 2413 } 2414 + 2415 + /* 2416 + * End of global load-average stuff 2417 + */ 2542 2418 2543 2419 /* 2544 2420 * The exact cpuload at various idx values, calculated at every tick would be
-1
kernel/sched/idle_task.c
··· 25 25 static struct task_struct *pick_next_task_idle(struct rq *rq) 26 26 { 27 27 schedstat_inc(rq, sched_goidle); 28 - calc_load_account_idle(rq); 29 28 return rq->idle; 30 29 } 31 30
-2
kernel/sched/sched.h
··· 942 942 return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2; 943 943 } 944 944 945 - void calc_load_account_idle(struct rq *this_rq); 946 - 947 945 #ifdef CONFIG_SCHED_HRTICK 948 946 949 947 /*
+10 -6
kernel/sys.c
··· 1788 1788 #ifdef CONFIG_CHECKPOINT_RESTORE 1789 1789 static int prctl_set_mm_exe_file(struct mm_struct *mm, unsigned int fd) 1790 1790 { 1791 - struct vm_area_struct *vma; 1792 1791 struct file *exe_file; 1793 1792 struct dentry *dentry; 1794 1793 int err; ··· 1815 1816 down_write(&mm->mmap_sem); 1816 1817 1817 1818 /* 1818 - * Forbid mm->exe_file change if there are mapped other files. 1819 + * Forbid mm->exe_file change if old file still mapped. 1819 1820 */ 1820 1821 err = -EBUSY; 1821 - for (vma = mm->mmap; vma; vma = vma->vm_next) { 1822 - if (vma->vm_file && !path_equal(&vma->vm_file->f_path, 1823 - &exe_file->f_path)) 1824 - goto exit_unlock; 1822 + if (mm->exe_file) { 1823 + struct vm_area_struct *vma; 1824 + 1825 + for (vma = mm->mmap; vma; vma = vma->vm_next) 1826 + if (vma->vm_file && 1827 + path_equal(&vma->vm_file->f_path, 1828 + &mm->exe_file->f_path)) 1829 + goto exit_unlock; 1825 1830 } 1826 1831 1827 1832 /* ··· 1838 1835 if (test_and_set_bit(MMF_EXE_FILE_CHANGED, &mm->flags)) 1839 1836 goto exit_unlock; 1840 1837 1838 + err = 0; 1841 1839 set_mm_exe_file(mm, exe_file); 1842 1840 exit_unlock: 1843 1841 up_write(&mm->mmap_sem);
+6 -2
kernel/time/ntp.c
··· 409 409 time_state = TIME_DEL; 410 410 break; 411 411 case TIME_INS: 412 - if (secs % 86400 == 0) { 412 + if (!(time_status & STA_INS)) 413 + time_state = TIME_OK; 414 + else if (secs % 86400 == 0) { 413 415 leap = -1; 414 416 time_state = TIME_OOP; 415 417 time_tai++; ··· 420 418 } 421 419 break; 422 420 case TIME_DEL: 423 - if ((secs + 1) % 86400 == 0) { 421 + if (!(time_status & STA_DEL)) 422 + time_state = TIME_OK; 423 + else if ((secs + 1) % 86400 == 0) { 424 424 leap = 1; 425 425 time_tai--; 426 426 time_state = TIME_WAIT;
+2
kernel/time/tick-sched.c
··· 406 406 */ 407 407 if (!ts->tick_stopped) { 408 408 select_nohz_load_balancer(1); 409 + calc_load_enter_idle(); 409 410 410 411 ts->idle_tick = hrtimer_get_expires(&ts->sched_timer); 411 412 ts->tick_stopped = 1; ··· 598 597 account_idle_ticks(ticks); 599 598 #endif 600 599 600 + calc_load_exit_idle(); 601 601 touch_softlockup_watchdog(); 602 602 /* 603 603 * Cancel the scheduled timer and restore the tick
+62 -2
kernel/time/timekeeping.c
··· 70 70 /* The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock. */ 71 71 struct timespec raw_time; 72 72 73 + /* Offset clock monotonic -> clock realtime */ 74 + ktime_t offs_real; 75 + 76 + /* Offset clock monotonic -> clock boottime */ 77 + ktime_t offs_boot; 78 + 73 79 /* Seqlock for all timekeeper values */ 74 80 seqlock_t lock; 75 81 }; ··· 178 172 return clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift); 179 173 } 180 174 175 + static void update_rt_offset(void) 176 + { 177 + struct timespec tmp, *wtm = &timekeeper.wall_to_monotonic; 178 + 179 + set_normalized_timespec(&tmp, -wtm->tv_sec, -wtm->tv_nsec); 180 + timekeeper.offs_real = timespec_to_ktime(tmp); 181 + } 182 + 181 183 /* must hold write on timekeeper.lock */ 182 184 static void timekeeping_update(bool clearntp) 183 185 { ··· 193 179 timekeeper.ntp_error = 0; 194 180 ntp_clear(); 195 181 } 182 + update_rt_offset(); 196 183 update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic, 197 184 timekeeper.clock, timekeeper.mult); 198 185 } ··· 619 604 } 620 605 set_normalized_timespec(&timekeeper.wall_to_monotonic, 621 606 -boot.tv_sec, -boot.tv_nsec); 607 + update_rt_offset(); 622 608 timekeeper.total_sleep_time.tv_sec = 0; 623 609 timekeeper.total_sleep_time.tv_nsec = 0; 624 610 write_sequnlock_irqrestore(&timekeeper.lock, flags); ··· 627 611 628 612 /* time in seconds when suspend began */ 629 613 static struct timespec timekeeping_suspend_time; 614 + 615 + static void update_sleep_time(struct timespec t) 616 + { 617 + timekeeper.total_sleep_time = t; 618 + timekeeper.offs_boot = timespec_to_ktime(t); 619 + } 630 620 631 621 /** 632 622 * __timekeeping_inject_sleeptime - Internal function to add sleep interval ··· 652 630 timekeeper.xtime = timespec_add(timekeeper.xtime, *delta); 653 631 timekeeper.wall_to_monotonic = 654 632 timespec_sub(timekeeper.wall_to_monotonic, *delta); 655 - timekeeper.total_sleep_time = timespec_add( 656 - timekeeper.total_sleep_time, *delta); 633 + 
update_sleep_time(timespec_add(timekeeper.total_sleep_time, *delta)); 657 634 } 658 635 659 636 ··· 717 696 timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock); 718 697 timekeeper.ntp_error = 0; 719 698 timekeeping_suspended = 0; 699 + timekeeping_update(false); 720 700 write_sequnlock_irqrestore(&timekeeper.lock, flags); 721 701 722 702 touch_softlockup_watchdog(); ··· 985 963 leap = second_overflow(timekeeper.xtime.tv_sec); 986 964 timekeeper.xtime.tv_sec += leap; 987 965 timekeeper.wall_to_monotonic.tv_sec -= leap; 966 + if (leap) 967 + clock_was_set_delayed(); 988 968 } 989 969 990 970 /* Accumulate raw time */ ··· 1103 1079 leap = second_overflow(timekeeper.xtime.tv_sec); 1104 1080 timekeeper.xtime.tv_sec += leap; 1105 1081 timekeeper.wall_to_monotonic.tv_sec -= leap; 1082 + if (leap) 1083 + clock_was_set_delayed(); 1106 1084 } 1107 1085 1108 1086 timekeeping_update(false); ··· 1271 1245 *sleep = timekeeper.total_sleep_time; 1272 1246 } while (read_seqretry(&timekeeper.lock, seq)); 1273 1247 } 1248 + 1249 + #ifdef CONFIG_HIGH_RES_TIMERS 1250 + /** 1251 + * ktime_get_update_offsets - hrtimer helper 1252 + * @offs_real: pointer to storage for monotonic -> realtime offset 1253 + * @offs_boot: pointer to storage for monotonic -> boottime offset 1254 + * 1255 + * Returns current monotonic time and updates the offsets 1256 + * Called from hrtimer_interupt() or retrigger_next_event() 1257 + */ 1258 + ktime_t ktime_get_update_offsets(ktime_t *offs_real, ktime_t *offs_boot) 1259 + { 1260 + ktime_t now; 1261 + unsigned int seq; 1262 + u64 secs, nsecs; 1263 + 1264 + do { 1265 + seq = read_seqbegin(&timekeeper.lock); 1266 + 1267 + secs = timekeeper.xtime.tv_sec; 1268 + nsecs = timekeeper.xtime.tv_nsec; 1269 + nsecs += timekeeping_get_ns(); 1270 + /* If arch requires, add in gettimeoffset() */ 1271 + nsecs += arch_gettimeoffset(); 1272 + 1273 + *offs_real = timekeeper.offs_real; 1274 + *offs_boot = timekeeper.offs_boot; 1275 + } while 
(read_seqretry(&timekeeper.lock, seq)); 1276 + 1277 + now = ktime_add_ns(ktime_set(secs, 0), nsecs); 1278 + now = ktime_sub(now, *offs_real); 1279 + return now; 1280 + } 1281 + #endif 1274 1282 1275 1283 /** 1276 1284 * ktime_get_monotonic_offset() - get wall_to_monotonic in ktime_t format
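The ktime_get_update_offsets() hunk relies on the seqlock read-side retry loop: snapshot several timekeeper fields, then redo the whole snapshot if a writer interleaved. A minimal userspace sketch of that pattern (toy names and a C11 atomic counter standing in for the kernel's seqlock_t, not the real API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy sequence counter mimicking the kernel's seqlock read side. */
struct seqcount { atomic_uint sequence; };

static unsigned read_seqbegin_toy(struct seqcount *s)
{
    unsigned seq;
    /* Spin while a writer holds the counter (odd value). */
    while ((seq = atomic_load(&s->sequence)) & 1)
        ;
    return seq;
}

static int read_seqretry_toy(struct seqcount *s, unsigned seq)
{
    /* Retry if a writer ran between begin and retry. */
    return atomic_load(&s->sequence) != seq;
}

static void write_begin_toy(struct seqcount *s) { atomic_fetch_add(&s->sequence, 1); }
static void write_end_toy(struct seqcount *s)   { atomic_fetch_add(&s->sequence, 1); }

/* Reader loop shaped like ktime_get_update_offsets(): snapshot several
 * fields, then retry the whole snapshot if the writer interleaved. */
static long read_pair_consistent(struct seqcount *s, const long *a, const long *b)
{
    unsigned seq;
    long x, y;
    do {
        seq = read_seqbegin_toy(s);
        x = *a;
        y = *b;
    } while (read_seqretry_toy(s, seq));
    return x + y;
}
```

The point of the pattern is that the reader never blocks the writer; it simply repeats a cheap snapshot until it observes an unchanged sequence number on both sides of the reads.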
+3 -3
kernel/trace/ring_buffer.c
···
1075 1075 	rb_init_page(bpage->page);
1076 1076 
1077 1077 	INIT_LIST_HEAD(&cpu_buffer->reader_page->list);
     1078 +	INIT_LIST_HEAD(&cpu_buffer->new_pages);
1078 1079 
1079 1080 	ret = rb_allocate_pages(cpu_buffer, nr_pages);
1080 1081 	if (ret < 0)
···
1347 1346 	 * If something was added to this page, it was full
1348 1347 	 * since it is not the tail page. So we deduct the
1349 1348 	 * bytes consumed in ring buffer from here.
1350      -	 * No need to update overruns, since this page is
1351      -	 * deleted from ring buffer and its entries are
1352      -	 * already accounted for.
     1349 +	 * Increment overrun to account for the lost events.
1353 1350 	 */
     1351 +	local_add(page_entries, &cpu_buffer->overrun);
1354 1352 	local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
1355 1353 	}
1356 1354 
+2 -2
lib/dma-debug.c
···
78 78 static DEFINE_SPINLOCK(free_entries_lock);
79 79 
80 80 /* Global disable flag - will be set in case of an error */
81    - static bool global_disable __read_mostly;
   81 + static u32 global_disable __read_mostly;
82 82 
83 83 /* Global error count */
84 84 static u32 error_count;
···
657 657 
658 658 	global_disable_dent = debugfs_create_bool("disabled", 0444,
659 659 					dma_debug_dent,
660     -				(u32 *)&global_disable);
    660 +				&global_disable);
661 661 	if (!global_disable_dent)
662 662 		goto out_err;
663 663 
+5 -1
mm/bootmem.c
···
698 698 	return ___alloc_bootmem(size, align, goal, limit);
699 699 }
700 700 
701     - static void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,
    701 + void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,
702 702 				unsigned long size, unsigned long align,
703 703 				unsigned long goal, unsigned long limit)
704 704 {
···
709 709 				align, goal, limit);
710 710 	if (ptr)
711 711 		return ptr;
    712 +
    713 +	/* do not panic in alloc_bootmem_bdata() */
    714 +	if (limit && goal + size > limit)
    715 +		limit = 0;
712 716 
713 717 	ptr = alloc_bootmem_bdata(pgdat->bdata, size, align, goal, limit);
714 718 	if (ptr)
+4 -1
mm/compaction.c
···
701 701 		if (err) {
702 702 			putback_lru_pages(&cc->migratepages);
703 703 			cc->nr_migratepages = 0;
    704 +			if (err == -ENOMEM) {
    705 +				ret = COMPACT_PARTIAL;
    706 +				goto out;
    707 +			}
704 708 		}
705     -
706 709 	}
707 710 
708 711 out:
+14 -4
mm/madvise.c
··· 15 15 #include <linux/sched.h> 16 16 #include <linux/ksm.h> 17 17 #include <linux/fs.h> 18 + #include <linux/file.h> 18 19 19 20 /* 20 21 * Any behaviour which results in changes to the vma->vm_flags needs to ··· 205 204 { 206 205 loff_t offset; 207 206 int error; 207 + struct file *f; 208 208 209 209 *prev = NULL; /* tell sys_madvise we drop mmap_sem */ 210 210 211 211 if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB)) 212 212 return -EINVAL; 213 213 214 - if (!vma->vm_file || !vma->vm_file->f_mapping 215 - || !vma->vm_file->f_mapping->host) { 214 + f = vma->vm_file; 215 + 216 + if (!f || !f->f_mapping || !f->f_mapping->host) { 216 217 return -EINVAL; 217 218 } 218 219 ··· 224 221 offset = (loff_t)(start - vma->vm_start) 225 222 + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); 226 223 227 - /* filesystem's fallocate may need to take i_mutex */ 224 + /* 225 + * Filesystem's fallocate may need to take i_mutex. We need to 226 + * explicitly grab a reference because the vma (and hence the 227 + * vma's reference to the file) can go away as soon as we drop 228 + * mmap_sem. 229 + */ 230 + get_file(f); 228 231 up_read(&current->mm->mmap_sem); 229 - error = do_fallocate(vma->vm_file, 232 + error = do_fallocate(f, 230 233 FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 231 234 offset, end - start); 235 + fput(f); 232 236 down_read(&current->mm->mmap_sem); 233 237 return error; 234 238 }
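The madvise hunk pins the file with get_file() before dropping mmap_sem, because the vma's own file reference can vanish once the lock is released. A minimal sketch of that "take your own reference across the unlock" pattern, with a toy refcounted object (all names here are ours, not kernel API):

```c
#include <assert.h>

/* Toy refcounted object standing in for struct file. */
struct toy_file { int refcount; int alive; };

static void toy_get_file(struct toy_file *f) { f->refcount++; }

static void toy_fput(struct toy_file *f)
{
    if (--f->refcount == 0)
        f->alive = 0;  /* last reference dropped: object is torn down */
}

/* Pattern from the madvise fix: pin the file before dropping the lock
 * that keeps the vma (and hence the vma's file reference) stable, use
 * it, then drop our own pin afterwards. */
static int punch_hole_like(struct toy_file *f)
{
    int still_alive;

    toy_get_file(f);   /* our reference survives the unlock window */
    /* ... up_read(mmap_sem); do_fallocate(f, ...); down_read(...); ... */
    still_alive = f->alive;
    toy_fput(f);
    return still_alive;
}
```

Without the extra reference, the object could reach refcount zero inside the unlock window and `f` would be used after free.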
+23 -28
mm/memblock.c
··· 143 143 MAX_NUMNODES); 144 144 } 145 145 146 - /* 147 - * Free memblock.reserved.regions 148 - */ 149 - int __init_memblock memblock_free_reserved_regions(void) 150 - { 151 - if (memblock.reserved.regions == memblock_reserved_init_regions) 152 - return 0; 153 - 154 - return memblock_free(__pa(memblock.reserved.regions), 155 - sizeof(struct memblock_region) * memblock.reserved.max); 156 - } 157 - 158 - /* 159 - * Reserve memblock.reserved.regions 160 - */ 161 - int __init_memblock memblock_reserve_reserved_regions(void) 162 - { 163 - if (memblock.reserved.regions == memblock_reserved_init_regions) 164 - return 0; 165 - 166 - return memblock_reserve(__pa(memblock.reserved.regions), 167 - sizeof(struct memblock_region) * memblock.reserved.max); 168 - } 169 - 170 146 static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r) 171 147 { 172 148 type->total_size -= type->regions[r].size; ··· 158 182 type->regions[0].size = 0; 159 183 memblock_set_region_node(&type->regions[0], MAX_NUMNODES); 160 184 } 185 + } 186 + 187 + phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info( 188 + phys_addr_t *addr) 189 + { 190 + if (memblock.reserved.regions == memblock_reserved_init_regions) 191 + return 0; 192 + 193 + *addr = __pa(memblock.reserved.regions); 194 + 195 + return PAGE_ALIGN(sizeof(struct memblock_region) * 196 + memblock.reserved.max); 161 197 } 162 198 163 199 /** ··· 192 204 phys_addr_t new_area_size) 193 205 { 194 206 struct memblock_region *new_array, *old_array; 207 + phys_addr_t old_alloc_size, new_alloc_size; 195 208 phys_addr_t old_size, new_size, addr; 196 209 int use_slab = slab_is_available(); 197 210 int *in_slab; ··· 206 217 /* Calculate new doubled size */ 207 218 old_size = type->max * sizeof(struct memblock_region); 208 219 new_size = old_size << 1; 220 + /* 221 + * We need to allocated new one align to PAGE_SIZE, 222 + * so we can free them completely later. 
223 + */ 224 + old_alloc_size = PAGE_ALIGN(old_size); 225 + new_alloc_size = PAGE_ALIGN(new_size); 209 226 210 227 /* Retrieve the slab flag */ 211 228 if (type == &memblock.memory) ··· 240 245 241 246 addr = memblock_find_in_range(new_area_start + new_area_size, 242 247 memblock.current_limit, 243 - new_size, sizeof(phys_addr_t)); 248 + new_alloc_size, PAGE_SIZE); 244 249 if (!addr && new_area_size) 245 250 addr = memblock_find_in_range(0, 246 251 min(new_area_start, memblock.current_limit), 247 - new_size, sizeof(phys_addr_t)); 252 + new_alloc_size, PAGE_SIZE); 248 253 249 254 new_array = addr ? __va(addr) : 0; 250 255 } ··· 274 279 kfree(old_array); 275 280 else if (old_array != memblock_memory_init_regions && 276 281 old_array != memblock_reserved_init_regions) 277 - memblock_free(__pa(old_array), old_size); 282 + memblock_free(__pa(old_array), old_alloc_size); 278 283 279 284 /* Reserve the new array if that comes from the memblock. 280 285 * Otherwise, we needn't do it 281 286 */ 282 287 if (!use_slab) 283 - BUG_ON(memblock_reserve(addr, new_size)); 288 + BUG_ON(memblock_reserve(addr, new_alloc_size)); 284 289 285 290 /* Update slab flag */ 286 291 *in_slab = use_slab;
+1 -1
mm/memory_hotplug.c
···
618 618 		pgdat = hotadd_new_pgdat(nid, start);
619 619 		ret = -ENOMEM;
620 620 		if (!pgdat)
621     -			goto out;
    621 +			goto error;
622 622 		new_pgdat = 1;
623 623 	}
624 624 
+23 -15
mm/nobootmem.c
··· 105 105 __free_pages_bootmem(pfn_to_page(i), 0); 106 106 } 107 107 108 + static unsigned long __init __free_memory_core(phys_addr_t start, 109 + phys_addr_t end) 110 + { 111 + unsigned long start_pfn = PFN_UP(start); 112 + unsigned long end_pfn = min_t(unsigned long, 113 + PFN_DOWN(end), max_low_pfn); 114 + 115 + if (start_pfn > end_pfn) 116 + return 0; 117 + 118 + __free_pages_memory(start_pfn, end_pfn); 119 + 120 + return end_pfn - start_pfn; 121 + } 122 + 108 123 unsigned long __init free_low_memory_core_early(int nodeid) 109 124 { 110 125 unsigned long count = 0; 111 - phys_addr_t start, end; 126 + phys_addr_t start, end, size; 112 127 u64 i; 113 128 114 - /* free reserved array temporarily so that it's treated as free area */ 115 - memblock_free_reserved_regions(); 129 + for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) 130 + count += __free_memory_core(start, end); 116 131 117 - for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) { 118 - unsigned long start_pfn = PFN_UP(start); 119 - unsigned long end_pfn = min_t(unsigned long, 120 - PFN_DOWN(end), max_low_pfn); 121 - if (start_pfn < end_pfn) { 122 - __free_pages_memory(start_pfn, end_pfn); 123 - count += end_pfn - start_pfn; 124 - } 125 - } 132 + /* free range that is used for reserved array if we allocate it */ 133 + size = get_allocated_memblock_reserved_regions_info(&start); 134 + if (size) 135 + count += __free_memory_core(start, start + size); 126 136 127 - /* put region array back? */ 128 - memblock_reserve_reserved_regions(); 129 137 return count; 130 138 } 131 139 ··· 282 274 return ___alloc_bootmem(size, align, goal, limit); 283 275 } 284 276 285 - static void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat, 277 + void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat, 286 278 unsigned long size, 287 279 unsigned long align, 288 280 unsigned long goal,
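The new __free_memory_core() helper rounds the start address up and the end address down to whole pages, clamps against max_low_pfn, and frees nothing for a sub-page range. A userspace model of that arithmetic (4 KiB pages assumed; macro names are ours):

```c
#include <assert.h>

/* Userspace model of the kernel's PFN helpers; 4 KiB pages assumed. */
#define TOY_PAGE_SHIFT 12
#define PFN_UP_TOY(x)   (((x) + (1UL << TOY_PAGE_SHIFT) - 1) >> TOY_PAGE_SHIFT)
#define PFN_DOWN_TOY(x) ((x) >> TOY_PAGE_SHIFT)

/* Mirrors __free_memory_core(): round start up and end down to whole
 * pages, clamp to max_low_pfn, and report how many pages would be
 * freed (zero for ranges smaller than a page). */
static unsigned long count_freeable_pages(unsigned long start,
                                          unsigned long end,
                                          unsigned long max_low_pfn)
{
    unsigned long start_pfn = PFN_UP_TOY(start);
    unsigned long end_pfn = PFN_DOWN_TOY(end);

    if (end_pfn > max_low_pfn)
        end_pfn = max_low_pfn;
    if (start_pfn > end_pfn)
        return 0;
    return end_pfn - start_pfn;
}
```

Rounding inward (up at the start, down at the end) guarantees only fully contained pages are handed to the page allocator.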
+6 -1
mm/page_alloc.c
···
5635 5635 __alloc_contig_migrate_alloc(struct page *page, unsigned long private,
5636 5636 			     int **resultp)
5637 5637 {
5638      -	return alloc_page(GFP_HIGHUSER_MOVABLE);
     5638 +	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
     5639 +
     5640 +	if (PageHighMem(page))
     5641 +		gfp_mask |= __GFP_HIGHMEM;
     5642 +
     5643 +	return alloc_page(gfp_mask);
5639 5644 }
5640 5645 
5641 5646 /* [start, end) must belong to a single zone. */
+59 -136
mm/shmem.c
··· 264 264 } 265 265 266 266 /* 267 + * Sometimes, before we decide whether to proceed or to fail, we must check 268 + * that an entry was not already brought back from swap by a racing thread. 269 + * 270 + * Checking page is not enough: by the time a SwapCache page is locked, it 271 + * might be reused, and again be SwapCache, using the same swap as before. 272 + */ 273 + static bool shmem_confirm_swap(struct address_space *mapping, 274 + pgoff_t index, swp_entry_t swap) 275 + { 276 + void *item; 277 + 278 + rcu_read_lock(); 279 + item = radix_tree_lookup(&mapping->page_tree, index); 280 + rcu_read_unlock(); 281 + return item == swp_to_radix_entry(swap); 282 + } 283 + 284 + /* 267 285 * Like add_to_page_cache_locked, but error if expected item has gone. 268 286 */ 269 287 static int shmem_add_to_page_cache(struct page *page, 270 288 struct address_space *mapping, 271 289 pgoff_t index, gfp_t gfp, void *expected) 272 290 { 273 - int error = 0; 291 + int error; 274 292 275 293 VM_BUG_ON(!PageLocked(page)); 276 294 VM_BUG_ON(!PageSwapBacked(page)); 277 295 278 - if (!expected) 279 - error = radix_tree_preload(gfp & GFP_RECLAIM_MASK); 280 - if (!error) { 281 - page_cache_get(page); 282 - page->mapping = mapping; 283 - page->index = index; 296 + page_cache_get(page); 297 + page->mapping = mapping; 298 + page->index = index; 284 299 285 - spin_lock_irq(&mapping->tree_lock); 286 - if (!expected) 287 - error = radix_tree_insert(&mapping->page_tree, 288 - index, page); 289 - else 290 - error = shmem_radix_tree_replace(mapping, index, 291 - expected, page); 292 - if (!error) { 293 - mapping->nrpages++; 294 - __inc_zone_page_state(page, NR_FILE_PAGES); 295 - __inc_zone_page_state(page, NR_SHMEM); 296 - spin_unlock_irq(&mapping->tree_lock); 297 - } else { 298 - page->mapping = NULL; 299 - spin_unlock_irq(&mapping->tree_lock); 300 - page_cache_release(page); 301 - } 302 - if (!expected) 303 - radix_tree_preload_end(); 300 + spin_lock_irq(&mapping->tree_lock); 301 + if 
(!expected) 302 + error = radix_tree_insert(&mapping->page_tree, index, page); 303 + else 304 + error = shmem_radix_tree_replace(mapping, index, expected, 305 + page); 306 + if (!error) { 307 + mapping->nrpages++; 308 + __inc_zone_page_state(page, NR_FILE_PAGES); 309 + __inc_zone_page_state(page, NR_SHMEM); 310 + spin_unlock_irq(&mapping->tree_lock); 311 + } else { 312 + page->mapping = NULL; 313 + spin_unlock_irq(&mapping->tree_lock); 314 + page_cache_release(page); 304 315 } 305 - if (error) 306 - mem_cgroup_uncharge_cache_page(page); 307 316 return error; 308 317 } 309 318 ··· 1133 1124 /* We have to do this with page locked to prevent races */ 1134 1125 lock_page(page); 1135 1126 if (!PageSwapCache(page) || page_private(page) != swap.val || 1136 - page->mapping) { 1127 + !shmem_confirm_swap(mapping, index, swap)) { 1137 1128 error = -EEXIST; /* try again */ 1138 - goto failed; 1129 + goto unlock; 1139 1130 } 1140 1131 if (!PageUptodate(page)) { 1141 1132 error = -EIO; ··· 1151 1142 1152 1143 error = mem_cgroup_cache_charge(page, current->mm, 1153 1144 gfp & GFP_RECLAIM_MASK); 1154 - if (!error) 1145 + if (!error) { 1155 1146 error = shmem_add_to_page_cache(page, mapping, index, 1156 1147 gfp, swp_to_radix_entry(swap)); 1148 + /* We already confirmed swap, and make no allocation */ 1149 + VM_BUG_ON(error); 1150 + } 1157 1151 if (error) 1158 1152 goto failed; 1159 1153 ··· 1193 1181 __set_page_locked(page); 1194 1182 error = mem_cgroup_cache_charge(page, current->mm, 1195 1183 gfp & GFP_RECLAIM_MASK); 1196 - if (!error) 1197 - error = shmem_add_to_page_cache(page, mapping, index, 1198 - gfp, NULL); 1199 1184 if (error) 1200 1185 goto decused; 1186 + error = radix_tree_preload(gfp & GFP_RECLAIM_MASK); 1187 + if (!error) { 1188 + error = shmem_add_to_page_cache(page, mapping, index, 1189 + gfp, NULL); 1190 + radix_tree_preload_end(); 1191 + } 1192 + if (error) { 1193 + mem_cgroup_uncharge_cache_page(page); 1194 + goto decused; 1195 + } 1201 1196 
lru_cache_add_anon(page); 1202 1197 1203 1198 spin_lock(&info->lock); ··· 1264 1245 unacct: 1265 1246 shmem_unacct_blocks(info->flags, 1); 1266 1247 failed: 1267 - if (swap.val && error != -EINVAL) { 1268 - struct page *test = find_get_page(mapping, index); 1269 - if (test && !radix_tree_exceptional_entry(test)) 1270 - page_cache_release(test); 1271 - /* Have another try if the entry has changed */ 1272 - if (test != swp_to_radix_entry(swap)) 1273 - error = -EEXIST; 1274 - } 1248 + if (swap.val && error != -EINVAL && 1249 + !shmem_confirm_swap(mapping, index, swap)) 1250 + error = -EEXIST; 1251 + unlock: 1275 1252 if (page) { 1276 1253 unlock_page(page); 1277 1254 page_cache_release(page); ··· 1279 1264 spin_unlock(&info->lock); 1280 1265 goto repeat; 1281 1266 } 1282 - if (error == -EEXIST) 1267 + if (error == -EEXIST) /* from above or from radix_tree_insert */ 1283 1268 goto repeat; 1284 1269 return error; 1285 1270 } ··· 1705 1690 file_accessed(in); 1706 1691 } 1707 1692 return error; 1708 - } 1709 - 1710 - /* 1711 - * llseek SEEK_DATA or SEEK_HOLE through the radix_tree. 
1712 - */ 1713 - static pgoff_t shmem_seek_hole_data(struct address_space *mapping, 1714 - pgoff_t index, pgoff_t end, int origin) 1715 - { 1716 - struct page *page; 1717 - struct pagevec pvec; 1718 - pgoff_t indices[PAGEVEC_SIZE]; 1719 - bool done = false; 1720 - int i; 1721 - 1722 - pagevec_init(&pvec, 0); 1723 - pvec.nr = 1; /* start small: we may be there already */ 1724 - while (!done) { 1725 - pvec.nr = shmem_find_get_pages_and_swap(mapping, index, 1726 - pvec.nr, pvec.pages, indices); 1727 - if (!pvec.nr) { 1728 - if (origin == SEEK_DATA) 1729 - index = end; 1730 - break; 1731 - } 1732 - for (i = 0; i < pvec.nr; i++, index++) { 1733 - if (index < indices[i]) { 1734 - if (origin == SEEK_HOLE) { 1735 - done = true; 1736 - break; 1737 - } 1738 - index = indices[i]; 1739 - } 1740 - page = pvec.pages[i]; 1741 - if (page && !radix_tree_exceptional_entry(page)) { 1742 - if (!PageUptodate(page)) 1743 - page = NULL; 1744 - } 1745 - if (index >= end || 1746 - (page && origin == SEEK_DATA) || 1747 - (!page && origin == SEEK_HOLE)) { 1748 - done = true; 1749 - break; 1750 - } 1751 - } 1752 - shmem_deswap_pagevec(&pvec); 1753 - pagevec_release(&pvec); 1754 - pvec.nr = PAGEVEC_SIZE; 1755 - cond_resched(); 1756 - } 1757 - return index; 1758 - } 1759 - 1760 - static loff_t shmem_file_llseek(struct file *file, loff_t offset, int origin) 1761 - { 1762 - struct address_space *mapping; 1763 - struct inode *inode; 1764 - pgoff_t start, end; 1765 - loff_t new_offset; 1766 - 1767 - if (origin != SEEK_DATA && origin != SEEK_HOLE) 1768 - return generic_file_llseek_size(file, offset, origin, 1769 - MAX_LFS_FILESIZE); 1770 - mapping = file->f_mapping; 1771 - inode = mapping->host; 1772 - mutex_lock(&inode->i_mutex); 1773 - /* We're holding i_mutex so we can access i_size directly */ 1774 - 1775 - if (offset < 0) 1776 - offset = -EINVAL; 1777 - else if (offset >= inode->i_size) 1778 - offset = -ENXIO; 1779 - else { 1780 - start = offset >> PAGE_CACHE_SHIFT; 1781 - end = (inode->i_size 
+ PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT; 1782 - new_offset = shmem_seek_hole_data(mapping, start, end, origin); 1783 - new_offset <<= PAGE_CACHE_SHIFT; 1784 - if (new_offset > offset) { 1785 - if (new_offset < inode->i_size) 1786 - offset = new_offset; 1787 - else if (origin == SEEK_DATA) 1788 - offset = -ENXIO; 1789 - else 1790 - offset = inode->i_size; 1791 - } 1792 - } 1793 - 1794 - if (offset >= 0 && offset != file->f_pos) { 1795 - file->f_pos = offset; 1796 - file->f_version = 0; 1797 - } 1798 - mutex_unlock(&inode->i_mutex); 1799 - return offset; 1800 1693 } 1801 1694 1802 1695 static long shmem_fallocate(struct file *file, int mode, loff_t offset, ··· 2710 2787 static const struct file_operations shmem_file_operations = { 2711 2788 .mmap = shmem_mmap, 2712 2789 #ifdef CONFIG_TMPFS 2713 - .llseek = shmem_file_llseek, 2790 + .llseek = generic_file_llseek, 2714 2791 .read = do_sync_read, 2715 2792 .write = do_sync_write, 2716 2793 .aio_read = shmem_file_aio_read,
+14 -6
mm/sparse.c
···
275 275 sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,
276 276 					 unsigned long size)
277 277 {
278     -	pg_data_t *host_pgdat;
279     -	unsigned long goal;
    278 +	unsigned long goal, limit;
    279 +	unsigned long *p;
    280 +	int nid;
280 281 	/*
281 282 	 * A page may contain usemaps for other sections preventing the
282 283 	 * page being freed and making a section unremovable while
···
288 287 	 * from the same section as the pgdat where possible to avoid
289 288 	 * this problem.
290 289 	 */
291     -	goal = __pa(pgdat) & PAGE_SECTION_MASK;
292     -	host_pgdat = NODE_DATA(early_pfn_to_nid(goal >> PAGE_SHIFT));
293     -	return __alloc_bootmem_node_nopanic(host_pgdat, size,
294     -					    SMP_CACHE_BYTES, goal);
    290 +	goal = __pa(pgdat) & (PAGE_SECTION_MASK << PAGE_SHIFT);
    291 +	limit = goal + (1UL << PA_SECTION_SHIFT);
    292 +	nid = early_pfn_to_nid(goal >> PAGE_SHIFT);
    293 + again:
    294 +	p = ___alloc_bootmem_node_nopanic(NODE_DATA(nid), size,
    295 +					  SMP_CACHE_BYTES, goal, limit);
    296 +	if (!p && limit) {
    297 +		limit = 0;
    298 +		goto again;
    299 +	}
    300 +	return p;
295 301 }
296 302 
297 303 static void __init check_usemap_section_nr(int nid, unsigned long *usemap)
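The sparse.c hunk first tries a section-local allocation bounded by `limit` and, if that fails, retries unbounded (limit == 0) instead of failing outright. A toy sketch of that fallback loop, with a contrived allocator standing in for the bootmem code (all names here are assumptions, not kernel API):

```c
#include <assert.h>

/* Toy allocator: the lowest free address models a bootmem allocator
 * that cannot satisfy the request below `limit`. */
static unsigned long toy_alloc(unsigned long goal, unsigned long limit,
                               unsigned long lowest_free)
{
    unsigned long addr = goal < lowest_free ? lowest_free : goal;

    if (limit && addr >= limit)
        return 0;          /* nothing available under the limit */
    return addr;
}

/* Pattern from the sparse.c fix: prefer a bounded, locality-preserving
 * allocation, but fall back to an unbounded attempt rather than
 * returning failure to the caller. */
static unsigned long alloc_with_fallback(unsigned long goal, unsigned long limit,
                                         unsigned long lowest_free)
{
    unsigned long p;
again:
    p = toy_alloc(goal, limit, lowest_free);
    if (!p && limit) {
        limit = 0;
        goto again;
    }
    return p;
}
```

The bounded first pass preserves the "same section as the pgdat" preference; the unbounded retry keeps the old never-fail behaviour.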
+9 -3
mm/vmscan.c
···
2688 2688 	 * them before going back to sleep.
2689 2689 	 */
2690 2690 	set_pgdat_percpu_threshold(pgdat, calculate_normal_threshold);
2691      -	schedule();
     2691 +
     2692 +	if (!kthread_should_stop())
     2693 +		schedule();
     2694 +
2692 2695 	set_pgdat_percpu_threshold(pgdat, calculate_pressure_threshold);
2693 2696 } else {
2694 2697 	if (remaining)
···
2958 2955 }
2959 2956 
2960 2957 /*
2961      - * Called by memory hotplug when all memory in a node is offlined.
     2958 + * Called by memory hotplug when all memory in a node is offlined. Caller must
     2959 + * hold lock_memory_hotplug().
2962 2960  */
2963 2961 void kswapd_stop(int nid)
2964 2962 {
2965 2963 	struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
2966 2964 
2967      -	if (kswapd)
     2965 +	if (kswapd) {
2968 2966 		kthread_stop(kswapd);
     2967 +		NODE_DATA(nid)->kswapd = NULL;
     2968 +	}
2969 2969 }
2970 2970 
2971 2971 static int __init kswapd_init(void)
+1
net/ax25/af_ax25.c
···
842 842 	case AX25_P_NETROM:
843 843 		if (ax25_protocol_is_registered(AX25_P_NETROM))
844 844 			return -ESOCKTNOSUPPORT;
    845 +		break;
845 846 #endif
846 847 #ifdef CONFIG_ROSE_MODULE
847 848 	case AX25_P_ROSE:
+1 -1
net/caif/caif_dev.c
···
563 563 
564 564 static void __exit caif_device_exit(void)
565 565 {
566     -	unregister_pernet_subsys(&caif_net_ops);
567 566 	unregister_netdevice_notifier(&caif_device_notifier);
568 567 	dev_remove_pack(&caif_packet_type);
    568 +	unregister_pernet_subsys(&caif_net_ops);
569 569 }
570 570 
571 571 module_init(caif_device_init);
+2 -1
net/core/dev.c
···
6313 6313 /* Initialize per network namespace state */
6314 6314 static int __net_init netdev_init(struct net *net)
6315 6315 {
6316      -	INIT_LIST_HEAD(&net->dev_base_head);
     6316 +	if (net != &init_net)
     6317 +		INIT_LIST_HEAD(&net->dev_base_head);
6317 6318 
6318 6319 	net->dev_name_head = netdev_create_hash();
6319 6320 	if (net->dev_name_head == NULL)
+3 -1
net/core/net_namespace.c
···
27 27 LIST_HEAD(net_namespace_list);
28 28 EXPORT_SYMBOL_GPL(net_namespace_list);
29 29 
30    - struct net init_net;
   30 + struct net init_net = {
   31 +	.dev_base_head = LIST_HEAD_INIT(init_net.dev_base_head),
   32 + };
31 33 EXPORT_SYMBOL(init_net);
32 34 
33 35 #define INITIAL_NET_GEN_PTRS	13 /* +1 for len +2 for rcu_head */
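The net_namespace.c change makes init_net's device list valid before any initcall runs, by using a compile-time initializer that points the list head at itself. A minimal model of that idiom (toy list type; LIST_HEAD_INIT's self-referential shape is the point):

```c
#include <assert.h>

/* Minimal circular doubly linked list, shaped like the kernel's
 * struct list_head. */
struct toy_list_head { struct toy_list_head *next, *prev; };

/* Compile-time initializer, like LIST_HEAD_INIT: the head points at
 * itself, so the list is valid (and empty) before any code runs. */
#define TOY_LIST_HEAD_INIT(name) { &(name), &(name) }

static struct toy_list_head base_head = TOY_LIST_HEAD_INIT(base_head);

static int toy_list_empty(const struct toy_list_head *head)
{
    return head->next == head;
}

static void toy_list_add(struct toy_list_head *new, struct toy_list_head *head)
{
    new->next = head->next;
    new->prev = head;
    head->next->prev = new;
    head->next = new;
}
```

Because the head is self-linked at link time, code that walks or appends to the list cannot trip over an uninitialized head, which is exactly why netdev_init() can now skip the runtime INIT_LIST_HEAD for init_net.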
+55 -18
net/core/netprio_cgroup.c
··· 65 65 spin_unlock_irqrestore(&prioidx_map_lock, flags); 66 66 } 67 67 68 - static void extend_netdev_table(struct net_device *dev, u32 new_len) 68 + static int extend_netdev_table(struct net_device *dev, u32 new_len) 69 69 { 70 70 size_t new_size = sizeof(struct netprio_map) + 71 71 ((sizeof(u32) * new_len)); ··· 77 77 78 78 if (!new_priomap) { 79 79 pr_warn("Unable to alloc new priomap!\n"); 80 - return; 80 + return -ENOMEM; 81 81 } 82 82 83 83 for (i = 0; ··· 90 90 rcu_assign_pointer(dev->priomap, new_priomap); 91 91 if (old_priomap) 92 92 kfree_rcu(old_priomap, rcu); 93 + return 0; 93 94 } 94 95 95 - static void update_netdev_tables(void) 96 + static int write_update_netdev_table(struct net_device *dev) 96 97 { 97 - struct net_device *dev; 98 - u32 max_len = atomic_read(&max_prioidx) + 1; 98 + int ret = 0; 99 + u32 max_len; 99 100 struct netprio_map *map; 100 101 101 102 rtnl_lock(); 103 + max_len = atomic_read(&max_prioidx) + 1; 104 + map = rtnl_dereference(dev->priomap); 105 + if (!map || map->priomap_len < max_len) 106 + ret = extend_netdev_table(dev, max_len); 107 + rtnl_unlock(); 108 + 109 + return ret; 110 + } 111 + 112 + static int update_netdev_tables(void) 113 + { 114 + int ret = 0; 115 + struct net_device *dev; 116 + u32 max_len; 117 + struct netprio_map *map; 118 + 119 + rtnl_lock(); 120 + max_len = atomic_read(&max_prioidx) + 1; 102 121 for_each_netdev(&init_net, dev) { 103 122 map = rtnl_dereference(dev->priomap); 104 - if ((!map) || 105 - (map->priomap_len < max_len)) 106 - extend_netdev_table(dev, max_len); 123 + /* 124 + * don't allocate priomap if we didn't 125 + * change net_prio.ifpriomap (map == NULL), 126 + * this will speed up skb_update_prio. 
127 + */ 128 + if (map && map->priomap_len < max_len) { 129 + ret = extend_netdev_table(dev, max_len); 130 + if (ret < 0) 131 + break; 132 + } 107 133 } 108 134 rtnl_unlock(); 135 + return ret; 109 136 } 110 137 111 138 static struct cgroup_subsys_state *cgrp_create(struct cgroup *cgrp) 112 139 { 113 140 struct cgroup_netprio_state *cs; 114 - int ret; 141 + int ret = -EINVAL; 115 142 116 143 cs = kzalloc(sizeof(*cs), GFP_KERNEL); 117 144 if (!cs) 118 145 return ERR_PTR(-ENOMEM); 119 146 120 - if (cgrp->parent && cgrp_netprio_state(cgrp->parent)->prioidx) { 121 - kfree(cs); 122 - return ERR_PTR(-EINVAL); 123 - } 147 + if (cgrp->parent && cgrp_netprio_state(cgrp->parent)->prioidx) 148 + goto out; 124 149 125 150 ret = get_prioidx(&cs->prioidx); 126 - if (ret != 0) { 151 + if (ret < 0) { 127 152 pr_warn("No space in priority index array\n"); 128 - kfree(cs); 129 - return ERR_PTR(ret); 153 + goto out; 154 + } 155 + 156 + ret = update_netdev_tables(); 157 + if (ret < 0) { 158 + put_prioidx(cs->prioidx); 159 + goto out; 130 160 } 131 161 132 162 return &cs->css; 163 + out: 164 + kfree(cs); 165 + return ERR_PTR(ret); 133 166 } 134 167 135 168 static void cgrp_destroy(struct cgroup *cgrp) ··· 254 221 if (!dev) 255 222 goto out_free_devname; 256 223 257 - update_netdev_tables(); 258 - ret = 0; 224 + ret = write_update_netdev_table(dev); 225 + if (ret < 0) 226 + goto out_put_dev; 227 + 259 228 rcu_read_lock(); 260 229 map = rcu_dereference(dev->priomap); 261 230 if (map) 262 231 map->priomap[prioidx] = priority; 263 232 rcu_read_unlock(); 233 + 234 + out_put_dev: 264 235 dev_put(dev); 265 236 266 237 out_free_devname:
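extend_netdev_table() grows the priomap by building a larger copy, carrying the old entries over, publishing the new table, and now reporting -ENOMEM instead of failing silently. A userspace sketch of that copy-grow-publish pattern (the kernel publishes with rcu_assign_pointer and frees with kfree_rcu; a plain pointer swap and free stand in here, and all names are ours):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy flexible-array map like struct netprio_map. */
struct toy_priomap {
    unsigned int len;
    unsigned int prio[];
};

/* Build a larger copy, carry old entries over, publish, and surface
 * allocation failure to the caller. */
static int extend_table(struct toy_priomap **slot, unsigned int new_len)
{
    struct toy_priomap *old = *slot;
    struct toy_priomap *new_map =
        calloc(1, sizeof(*new_map) + new_len * sizeof(new_map->prio[0]));

    if (!new_map)
        return -1;          /* -ENOMEM in the kernel */
    new_map->len = new_len;
    if (old)
        memcpy(new_map->prio, old->prio, old->len * sizeof(old->prio[0]));
    *slot = new_map;        /* rcu_assign_pointer() in the kernel */
    free(old);              /* kfree_rcu() in the kernel */
    return 0;
}
```

Readers that only ever see either the old or the new table (never a half-copied one) are what makes the RCU variant of this safe without a reader-side lock.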
+1 -1
net/core/skbuff.c
···
365 365 	unsigned int fragsz = SKB_DATA_ALIGN(length + NET_SKB_PAD) +
366 366 			      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
367 367 
368     -	if (fragsz <= PAGE_SIZE && !(gfp_mask & __GFP_WAIT)) {
    368 +	if (fragsz <= PAGE_SIZE && !(gfp_mask & (__GFP_WAIT | GFP_DMA))) {
369 369 		void *data = netdev_alloc_frag(fragsz);
370 370 
371 371 		if (likely(data)) {
+4 -2
net/ipv4/cipso_ipv4.c
···
1725 1725 	case CIPSO_V4_TAG_LOCAL:
1726 1726 		/* This is a non-standard tag that we only allow for
1727 1727 		 * local connections, so if the incoming interface is
1728      -		 * not the loopback device drop the packet. */
1729      -		if (!(skb->dev->flags & IFF_LOOPBACK)) {
     1728 +		 * not the loopback device drop the packet. Further,
     1729 +		 * there is no legitimate reason for setting this from
     1730 +		 * userspace so reject it if skb is NULL. */
     1731 +		if (skb == NULL || !(skb->dev->flags & IFF_LOOPBACK)) {
1730 1732 			err_offset = opt_iter;
1731 1733 			goto validate_return_locked;
1732 1734 		}
+3 -2
net/netfilter/ipvs/ip_vs_ctl.c
···
1521 1521 {
1522 1522 	struct net_device *dev = ptr;
1523 1523 	struct net *net = dev_net(dev);
     1524 +	struct netns_ipvs *ipvs = net_ipvs(net);
1524 1525 	struct ip_vs_service *svc;
1525 1526 	struct ip_vs_dest *dest;
1526 1527 	unsigned int idx;
1527 1528 
1528      -	if (event != NETDEV_UNREGISTER)
     1529 +	if (event != NETDEV_UNREGISTER || !ipvs)
1529 1530 		return NOTIFY_DONE;
1530 1531 	IP_VS_DBG(3, "%s() dev=%s\n", __func__, dev->name);
1531 1532 	EnterFunction(2);
···
1552 1551 		}
1553 1552 	}
1554 1553 
1555      -	list_for_each_entry(dest, &net_ipvs(net)->dest_trash, n_list) {
     1554 +	list_for_each_entry(dest, &ipvs->dest_trash, n_list) {
1556 1555 		__ip_vs_dev_reset(dest, dev);
1557 1556 	}
1558 1557 	mutex_unlock(&__ip_vs_mutex);
+2
net/sched/sch_sfb.c
···
570 570 
571 571 	sch->qstats.backlog = q->qdisc->qstats.backlog;
572 572 	opts = nla_nest_start(skb, TCA_OPTIONS);
    573 +	if (opts == NULL)
    574 +		goto nla_put_failure;
573 575 	if (nla_put(skb, TCA_SFB_PARMS, sizeof(opt), &opt))
574 576 		goto nla_put_failure;
575 577 	return nla_nest_end(skb, opts);
+2 -5
net/sctp/input.c
···
752 752 
753 753 	epb = &ep->base;
754 754 
755     -	if (hlist_unhashed(&epb->node))
756     -		return;
757     -
758 755 	epb->hashent = sctp_ep_hashfn(epb->bind_addr.port);
759 756 
760 757 	head = &sctp_ep_hashtable[epb->hashent];
761 758 
762 759 	sctp_write_lock(&head->lock);
763     -	__hlist_del(&epb->node);
    760 +	hlist_del_init(&epb->node);
764 761 	sctp_write_unlock(&head->lock);
765 762 }
766 763 
···
838 841 	head = &sctp_assoc_hashtable[epb->hashent];
839 842 
840 843 	sctp_write_lock(&head->lock);
841      -	__hlist_del(&epb->node);
     844 +	hlist_del_init(&epb->node);
842 845 	sctp_write_unlock(&head->lock);
843 846 }
844 847 
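The switch from __hlist_del() to hlist_del_init() matters because hlist_del_init() re-marks the node as unhashed, so a second unhash is a harmless no-op; that is the property the socket.c hunk below leans on when it unhashes an association "just in case". A minimal model following the kernel's convention that an unhashed node has pprev == NULL (toy names):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal hlist node; an unhashed node has pprev == NULL, as in the
 * kernel's hlist convention. */
struct toy_hlist_node { struct toy_hlist_node *next, **pprev; };
struct toy_hlist_head { struct toy_hlist_node *first; };

static int toy_hlist_unhashed(const struct toy_hlist_node *n)
{
    return n->pprev == NULL;
}

static void toy_hlist_add_head(struct toy_hlist_node *n, struct toy_hlist_head *h)
{
    n->next = h->first;
    if (h->first)
        h->first->pprev = &n->next;
    h->first = n;
    n->pprev = &h->first;
}

/* hlist_del_init(): unlink and re-mark the node as unhashed, so calling
 * it again on an already-removed node is a safe no-op. */
static void toy_hlist_del_init(struct toy_hlist_node *n)
{
    if (toy_hlist_unhashed(n))
        return;
    *n->pprev = n->next;
    if (n->next)
        n->next->pprev = n->pprev;
    n->next = NULL;
    n->pprev = NULL;
}
```

With plain __hlist_del() the node's stale pointers would corrupt the chain on a double delete; resetting them makes unhashing idempotent and lets the caller drop its own "is it hashed?" bookkeeping.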
+10 -2
net/sctp/socket.c
···
1231 1231 	SCTP_DEBUG_PRINTK("About to exit __sctp_connect() free asoc: %p"
1232 1232 			  " kaddrs: %p err: %d\n",
1233 1233 			  asoc, kaddrs, err);
1234      -	if (asoc)
     1234 +	if (asoc) {
     1235 +		/* sctp_primitive_ASSOCIATE may have added this association
     1236 +		 * to the hash table; try to unhash it, just in case. It's a
     1237 +		 * no-op if it wasn't hashed, so we're safe.
     1238 +		 */
     1239 +		sctp_unhash_established(asoc);
1235 1240 		sctp_association_free(asoc);
     1241 +	}
1236 1242 	return err;
1237 1243 }
1238 1244 
···
1948 1942 		goto out_unlock;
1949 1943 
1950 1944 out_free:
1951      -	if (new_asoc)
     1945 +	if (new_asoc) {
     1946 +		sctp_unhash_established(asoc);
1952 1947 		sctp_association_free(asoc);
     1948 +	}
1953 1949 out_unlock:
1954 1950 	sctp_release_sock(sk);
1955 1951 
+1 -1
security/selinux/hooks.c
···
2717 2717 		    ATTR_ATIME_SET | ATTR_MTIME_SET | ATTR_TIMES_SET))
2718 2718 		return dentry_has_perm(cred, dentry, FILE__SETATTR);
2719 2719 
2720      -	if (ia_valid & ATTR_SIZE)
     2720 +	if (selinux_policycap_openperm && (ia_valid & ATTR_SIZE))
2721 2721 		av |= FILE__OPEN;
2722 2722 
2723 2723 	return dentry_has_perm(cred, dentry, av);
+3 -1
security/selinux/include/classmap.h
···
145 145 	  "node_bind", "name_connect", NULL } },
146 146 	{ "memprotect", { "mmap_zero", NULL } },
147 147 	{ "peer", { "recv", NULL } },
148     -	{ "capability2", { "mac_override", "mac_admin", "syslog", NULL } },
    148 +	{ "capability2",
    149 +	  { "mac_override", "mac_admin", "syslog", "wake_alarm", "block_suspend",
    150 +	    NULL } },
149 151 	{ "kernel_service", { "use_as_override", "create_files_as", NULL } },
150 152 	{ "tun_socket",
151 153 	  { COMMON_SOCK_PERMS, NULL } },
+6 -67
sound/usb/endpoint.c
··· 414 414 {
415 415 	struct list_head *p;
416 416 	struct snd_usb_endpoint *ep;
417 - 	int ret, is_playback = direction == SNDRV_PCM_STREAM_PLAYBACK;
417 + 	int is_playback = direction == SNDRV_PCM_STREAM_PLAYBACK;
418 418 
419 419 	mutex_lock(&chip->mutex);
420 420 
··· 433 433 		    is_playback ? "playback" : "capture",
434 434 		    type == SND_USB_ENDPOINT_TYPE_DATA ? "data" : "sync",
435 435 		    ep_num);
436 - 
437 - 	/* select the alt setting once so the endpoints become valid */
438 - 	ret = usb_set_interface(chip->dev, alts->desc.bInterfaceNumber,
439 - 				alts->desc.bAlternateSetting);
440 - 	if (ret < 0) {
441 - 		snd_printk(KERN_ERR "%s(): usb_set_interface() failed, ret = %d\n",
442 - 			   __func__, ret);
443 - 		ep = NULL;
444 - 		goto __exit_unlock;
445 - 	}
446 436 
447 437 	ep = kzalloc(sizeof(*ep), GFP_KERNEL);
448 438 	if (!ep)
··· 821 831 	if (++ep->use_count != 1)
822 832 		return 0;
823 833 
824 - 	if (snd_BUG_ON(!test_bit(EP_FLAG_ACTIVATED, &ep->flags)))
825 - 		return -EINVAL;
826 - 
827 834 	/* just to be sure */
828 835 	deactivate_urbs(ep, 0, 1);
829 836 	wait_clear_urbs(ep);
··· 898 911 	if (snd_BUG_ON(ep->use_count == 0))
899 912 		return;
900 913 
901 - 	if (snd_BUG_ON(!test_bit(EP_FLAG_ACTIVATED, &ep->flags)))
902 - 		return;
903 - 
904 914 	if (--ep->use_count == 0) {
905 915 		deactivate_urbs(ep, force, can_sleep);
906 916 		ep->data_subs = NULL;
··· 908 924 		if (wait)
909 925 			wait_clear_urbs(ep);
910 926 	}
911 - }
912 - 
913 - /**
914 -  * snd_usb_endpoint_activate: activate an snd_usb_endpoint
915 -  *
916 -  * @ep: the endpoint to activate
917 -  *
918 -  * If the endpoint is not currently in use, this functions will select the
919 -  * correct alternate interface setting for the interface of this endpoint.
920 -  *
921 -  * In case of any active users, this functions does nothing.
922 -  *
923 -  * Returns an error if usb_set_interface() failed, 0 in all other
924 -  * cases.
925 -  */
926 - int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep)
927 - {
928 - 	if (ep->use_count != 0)
929 - 		return 0;
930 - 
931 - 	if (!ep->chip->shutdown &&
932 - 	    !test_and_set_bit(EP_FLAG_ACTIVATED, &ep->flags)) {
933 - 		int ret;
934 - 
935 - 		ret = usb_set_interface(ep->chip->dev, ep->iface, ep->alt_idx);
936 - 		if (ret < 0) {
937 - 			snd_printk(KERN_ERR "%s() usb_set_interface() failed, ret = %d\n",
938 - 				   __func__, ret);
939 - 			clear_bit(EP_FLAG_ACTIVATED, &ep->flags);
940 - 			return ret;
941 - 		}
942 - 
943 - 		return 0;
944 - 	}
945 - 
946 - 	return -EBUSY;
947 927 }
948 928 
949 929 /**
··· 928 980 	if (!ep)
929 981 		return -EINVAL;
930 982 
983 + 	deactivate_urbs(ep, 1, 1);
984 + 	wait_clear_urbs(ep);
985 + 
931 986 	if (ep->use_count != 0)
932 987 		return 0;
933 988 
934 - 	if (!ep->chip->shutdown &&
935 - 	    test_and_clear_bit(EP_FLAG_ACTIVATED, &ep->flags)) {
936 - 		int ret;
989 + 	clear_bit(EP_FLAG_ACTIVATED, &ep->flags);
937 990 
938 - 		ret = usb_set_interface(ep->chip->dev, ep->iface, 0);
939 - 		if (ret < 0) {
940 - 			snd_printk(KERN_ERR "%s(): usb_set_interface() failed, ret = %d\n",
941 - 				   __func__, ret);
942 - 			return ret;
943 - 		}
944 - 
945 - 		return 0;
946 - 	}
947 - 
948 - 	return -EBUSY;
991 + 	return 0;
949 992 }
950 993 
951 994 /**
+37 -24
sound/usb/pcm.c
··· 261 261 				force, can_sleep, wait);
262 262 }
263 263 
264 - static int activate_endpoints(struct snd_usb_substream *subs)
265 - {
266 - 	if (subs->sync_endpoint) {
267 - 		int ret;
268 - 
269 - 		ret = snd_usb_endpoint_activate(subs->sync_endpoint);
270 - 		if (ret < 0)
271 - 			return ret;
272 - 	}
273 - 
274 - 	return snd_usb_endpoint_activate(subs->data_endpoint);
275 - }
276 - 
277 264 static int deactivate_endpoints(struct snd_usb_substream *subs)
278 265 {
279 266 	int reta, retb;
··· 300 313 
301 314 	if (fmt == subs->cur_audiofmt)
302 315 		return 0;
316 + 
317 + 	/* close the old interface */
318 + 	if (subs->interface >= 0 && subs->interface != fmt->iface) {
319 + 		err = usb_set_interface(subs->dev, subs->interface, 0);
320 + 		if (err < 0) {
321 + 			snd_printk(KERN_ERR "%d:%d:%d: return to setting 0 failed (%d)\n",
322 + 				   dev->devnum, fmt->iface, fmt->altsetting, err);
323 + 			return -EIO;
324 + 		}
325 + 		subs->interface = -1;
326 + 		subs->altset_idx = 0;
327 + 	}
328 + 
329 + 	/* set interface */
330 + 	if (subs->interface != fmt->iface ||
331 + 	    subs->altset_idx != fmt->altset_idx) {
332 + 		err = usb_set_interface(dev, fmt->iface, fmt->altsetting);
333 + 		if (err < 0) {
334 + 			snd_printk(KERN_ERR "%d:%d:%d: usb_set_interface failed (%d)\n",
335 + 				   dev->devnum, fmt->iface, fmt->altsetting, err);
336 + 			return -EIO;
337 + 		}
338 + 		snd_printdd(KERN_INFO "setting usb interface %d:%d\n",
339 + 			    fmt->iface, fmt->altsetting);
340 + 		subs->interface = fmt->iface;
341 + 		subs->altset_idx = fmt->altset_idx;
342 + 	}
303 343 
304 344 	subs->data_endpoint = snd_usb_add_endpoint(subs->stream->chip,
305 345 						   alts, fmt->endpoint, subs->direction,
··· 401 387 		subs->data_endpoint->sync_master = subs->sync_endpoint;
402 388 	}
403 389 
404 - 	if ((err = snd_usb_init_pitch(subs->stream->chip, subs->interface, alts, fmt)) < 0)
390 + 	if ((err = snd_usb_init_pitch(subs->stream->chip, fmt->iface, alts, fmt)) < 0)
405 391 		return err;
406 392 
407 393 	subs->cur_audiofmt = fmt;
··· 464 450 		struct usb_interface *iface;
465 451 		iface = usb_ifnum_to_if(subs->dev, fmt->iface);
466 452 		alts = &iface->altsetting[fmt->altset_idx];
467 - 		ret = snd_usb_init_sample_rate(subs->stream->chip, subs->interface, alts, fmt, rate);
453 + 		ret = snd_usb_init_sample_rate(subs->stream->chip, fmt->iface, alts, fmt, rate);
468 454 		if (ret < 0)
469 455 			return ret;
470 456 		subs->cur_rate = rate;
··· 474 460 	mutex_lock(&subs->stream->chip->shutdown_mutex);
475 461 	/* format changed */
476 462 	stop_endpoints(subs, 0, 0, 0);
477 - 	deactivate_endpoints(subs);
478 - 
479 - 	ret = activate_endpoints(subs);
480 - 	if (ret < 0)
481 - 		goto unlock;
482 - 
483 463 	ret = snd_usb_endpoint_set_params(subs->data_endpoint, hw_params, fmt,
484 464 					  subs->sync_endpoint);
485 465 	if (ret < 0)
··· 508 500 	subs->period_bytes = 0;
509 501 	mutex_lock(&subs->stream->chip->shutdown_mutex);
510 502 	stop_endpoints(subs, 0, 1, 1);
503 + 	deactivate_endpoints(subs);
511 504 	mutex_unlock(&subs->stream->chip->shutdown_mutex);
512 505 	return snd_pcm_lib_free_vmalloc_buffer(substream);
513 506 }
··· 947 938 
948 939 static int snd_usb_pcm_close(struct snd_pcm_substream *substream, int direction)
949 940 {
950 - 	int ret;
951 941 	struct snd_usb_stream *as = snd_pcm_substream_chip(substream);
952 942 	struct snd_usb_substream *subs = &as->substream[direction];
953 943 
954 944 	stop_endpoints(subs, 0, 0, 0);
955 - 	ret = deactivate_endpoints(subs);
945 + 
946 + 	if (!as->chip->shutdown && subs->interface >= 0) {
947 + 		usb_set_interface(subs->dev, subs->interface, 0);
948 + 		subs->interface = -1;
949 + 	}
950 + 
956 951 	subs->pcm_substream = NULL;
957 952 	snd_usb_autosuspend(subs->stream->chip);
958 953 
959 - 	return ret;
954 + 	return 0;
960 955 }
961 956 
962 957 /* Since a URB can handle only a single linear buffer, we must use double
+15 -14
tools/perf/util/map.c
··· 669 669 struct machine *machines__findnew(struct rb_root *self, pid_t pid)
670 670 {
671 671 	char path[PATH_MAX];
672 - 	const char *root_dir;
672 + 	const char *root_dir = "";
673 673 	struct machine *machine = machines__find(self, pid);
674 674 
675 - 	if (!machine || machine->pid != pid) {
676 - 		if (pid == HOST_KERNEL_ID || pid == DEFAULT_GUEST_KERNEL_ID)
677 - 			root_dir = "";
678 - 		else {
679 - 			if (!symbol_conf.guestmount)
680 - 				goto out;
681 - 			sprintf(path, "%s/%d", symbol_conf.guestmount, pid);
682 - 			if (access(path, R_OK)) {
683 - 				pr_err("Can't access file %s\n", path);
684 - 				goto out;
685 - 			}
686 - 			root_dir = path;
675 + 	if (machine && (machine->pid == pid))
676 + 		goto out;
677 + 
678 + 	if ((pid != HOST_KERNEL_ID) &&
679 + 	    (pid != DEFAULT_GUEST_KERNEL_ID) &&
680 + 	    (symbol_conf.guestmount)) {
681 + 		sprintf(path, "%s/%d", symbol_conf.guestmount, pid);
682 + 		if (access(path, R_OK)) {
683 - 			pr_err("Can't access file %s\n", path);
684 + 			machine = NULL;
685 + 			goto out;
687 686 		}
688 - 		machine = machines__add(self, pid, root_dir);
687 + 		root_dir = path;
689 688 	}
690 + 
691 + 	machine = machines__add(self, pid, root_dir);
690 691 
691 692 out:
692 693 	return machine;
+1 -1
tools/perf/util/session.c
··· 926 926 		else
927 927 			pid = event->ip.pid;
928 928 
929 - 		return perf_session__find_machine(session, pid);
929 + 		return perf_session__findnew_machine(session, pid);
930 930 	}
931 931 
932 932 	return perf_session__find_host_machine(session);
+1 -2
tools/perf/util/trace-event-parse.c
··· 198 198 	record.data = data;
199 199 
200 200 	trace_seq_init(&s);
201 - 	pevent_print_event(pevent, &s, &record);
201 + 	pevent_event_info(&s, event, &record);
202 202 	trace_seq_do_printf(&s);
203 - 	printf("\n");
204 203 }
205 204 
206 205 void print_event(int cpu, void *data, int size, unsigned long long nsecs,
+13 -2
virt/kvm/assigned-dev.c
··· 334 334 }
335 335 
336 336 #ifdef __KVM_HAVE_MSI
337 + static irqreturn_t kvm_assigned_dev_msi(int irq, void *dev_id)
338 + {
339 + 	return IRQ_WAKE_THREAD;
340 + }
341 + 
337 342 static int assigned_device_enable_host_msi(struct kvm *kvm,
338 343 					   struct kvm_assigned_dev_kernel *dev)
339 344 {
··· 351 346 	}
352 347 
353 348 	dev->host_irq = dev->dev->irq;
354 - 	if (request_threaded_irq(dev->host_irq, NULL,
349 + 	if (request_threaded_irq(dev->host_irq, kvm_assigned_dev_msi,
355 350 				 kvm_assigned_dev_thread_msi, 0,
356 351 				 dev->irq_name, dev)) {
357 352 		pci_disable_msi(dev->dev);
··· 363 358 #endif
364 359 
365 360 #ifdef __KVM_HAVE_MSIX
361 + static irqreturn_t kvm_assigned_dev_msix(int irq, void *dev_id)
362 + {
363 + 	return IRQ_WAKE_THREAD;
364 + }
365 + 
366 366 static int assigned_device_enable_host_msix(struct kvm *kvm,
367 367 					    struct kvm_assigned_dev_kernel *dev)
368 368 {
··· 384 374 
385 375 	for (i = 0; i < dev->entries_nr; i++) {
386 376 		r = request_threaded_irq(dev->host_msix_entries[i].vector,
387 - 					 NULL, kvm_assigned_dev_thread_msix,
377 + 					 kvm_assigned_dev_msix,
378 + 					 kvm_assigned_dev_thread_msix,
388 379 					 0, dev->irq_name, dev);
389 380 		if (r)
390 381 			goto err;
+13 -10
virt/kvm/eventfd.c
··· 198 198 } 199 199 200 200 static int 201 - kvm_irqfd_assign(struct kvm *kvm, int fd, int gsi) 201 + kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args) 202 202 { 203 203 struct kvm_irq_routing_table *irq_rt; 204 204 struct _irqfd *irqfd, *tmp; ··· 212 212 return -ENOMEM; 213 213 214 214 irqfd->kvm = kvm; 215 - irqfd->gsi = gsi; 215 + irqfd->gsi = args->gsi; 216 216 INIT_LIST_HEAD(&irqfd->list); 217 217 INIT_WORK(&irqfd->inject, irqfd_inject); 218 218 INIT_WORK(&irqfd->shutdown, irqfd_shutdown); 219 219 220 - file = eventfd_fget(fd); 220 + file = eventfd_fget(args->fd); 221 221 if (IS_ERR(file)) { 222 222 ret = PTR_ERR(file); 223 223 goto fail; ··· 298 298 * shutdown any irqfd's that match fd+gsi 299 299 */ 300 300 static int 301 - kvm_irqfd_deassign(struct kvm *kvm, int fd, int gsi) 301 + kvm_irqfd_deassign(struct kvm *kvm, struct kvm_irqfd *args) 302 302 { 303 303 struct _irqfd *irqfd, *tmp; 304 304 struct eventfd_ctx *eventfd; 305 305 306 - eventfd = eventfd_ctx_fdget(fd); 306 + eventfd = eventfd_ctx_fdget(args->fd); 307 307 if (IS_ERR(eventfd)) 308 308 return PTR_ERR(eventfd); 309 309 310 310 spin_lock_irq(&kvm->irqfds.lock); 311 311 312 312 list_for_each_entry_safe(irqfd, tmp, &kvm->irqfds.items, list) { 313 - if (irqfd->eventfd == eventfd && irqfd->gsi == gsi) { 313 + if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi) { 314 314 /* 315 315 * This rcu_assign_pointer is needed for when 316 316 * another thread calls kvm_irq_routing_update before ··· 338 338 } 339 339 340 340 int 341 - kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags) 341 + kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args) 342 342 { 343 - if (flags & KVM_IRQFD_FLAG_DEASSIGN) 344 - return kvm_irqfd_deassign(kvm, fd, gsi); 343 + if (args->flags & ~KVM_IRQFD_FLAG_DEASSIGN) 344 + return -EINVAL; 345 345 346 - return kvm_irqfd_assign(kvm, fd, gsi); 346 + if (args->flags & KVM_IRQFD_FLAG_DEASSIGN) 347 + return kvm_irqfd_deassign(kvm, args); 348 + 349 + return kvm_irqfd_assign(kvm, 
args); 347 350 } 348 351 349 352 /*
+2 -1
virt/kvm/kvm_main.c
··· 2047 2047 r = -EFAULT; 2048 2048 if (copy_from_user(&data, argp, sizeof data)) 2049 2049 goto out; 2050 - r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags); 2050 + r = kvm_irqfd(kvm, &data); 2051 2051 break; 2052 2052 } 2053 2053 case KVM_IOEVENTFD: { ··· 2845 2845 kvm_arch_hardware_unsetup(); 2846 2846 kvm_arch_exit(); 2847 2847 free_cpumask_var(cpus_hardware_enabled); 2848 + __free_page(fault_page); 2848 2849 __free_page(hwpoison_page); 2849 2850 __free_page(bad_page); 2850 2851 }