Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Minor line offset auto-merges.

Signed-off-by: David S. Miller <davem@davemloft.net>

+4492 -2372
+7 -7
Documentation/arm64/memory.txt
@@ -27 +27 @@
 -----------------------------------------------------------------------
 0000000000000000	0000007fffffffff	 512GB		user
 
-ffffff8000000000	ffffffbbfffcffff	~240GB		vmalloc
+ffffff8000000000	ffffffbbfffeffff	~240GB		vmalloc
 
-ffffffbbfffd0000	ffffffbcfffdffff	  64KB		[guard page]
-
-ffffffbbfffe0000	ffffffbcfffeffff	  64KB		PCI I/O space
-
-ffffffbbffff0000	ffffffbcffffffff	  64KB		[guard page]
+ffffffbbffff0000	ffffffbbffffffff	  64KB		[guard page]
 
 ffffffbc00000000	ffffffbdffffffff	   8GB		vmemmap
 
-ffffffbe00000000	ffffffbffbffffff	  ~8GB		[guard, future vmmemap]
+ffffffbe00000000	ffffffbffbbfffff	  ~8GB		[guard, future vmmemap]
+
+ffffffbffbe00000	ffffffbffbe0ffff	  64KB		PCI I/O space
+
+ffffffbbffff0000	ffffffbcffffffff	  ~2MB		[guard]
 
 ffffffbffc000000	ffffffbfffffffff	  64MB		modules
+4
Documentation/cgroups/memory.txt
@@ -466 +466 @@
 5.3 swappiness
 
 Similar to /proc/sys/vm/swappiness, but affecting a hierarchy of groups only.
+Please note that unlike the global swappiness, memcg knob set to 0
+really prevents from any swapping even if there is a swap storage
+available. This might lead to memcg OOM killer if there are no file
+pages to reclaim.
 
 Following cgroups' swappiness can't be changed.
 - root cgroup (uses /proc/sys/vm/swappiness).
+12 -4
Documentation/filesystems/proc.txt
@@ -33 +33 @@
  2	Modifying System Parameters
 
  3	Per-Process Parameters
-  3.1	/proc/<pid>/oom_score_adj - Adjust the oom-killer
+  3.1	/proc/<pid>/oom_adj & /proc/<pid>/oom_score_adj - Adjust the oom-killer
 	score
   3.2	/proc/<pid>/oom_score - Display current oom-killer score
   3.3	/proc/<pid>/io - Display the IO accounting fields

@@ -1320 +1320 @@
 CHAPTER 3: PER-PROCESS PARAMETERS
 ------------------------------------------------------------------------------
 
-3.1 /proc/<pid>/oom_score_adj- Adjust the oom-killer score
+3.1 /proc/<pid>/oom_adj & /proc/<pid>/oom_score_adj- Adjust the oom-killer score
 --------------------------------------------------------------------------------
 
-This file can be used to adjust the badness heuristic used to select which
+These file can be used to adjust the badness heuristic used to select which
 process gets killed in out of memory conditions.
 
 The badness heuristic assigns a value to each candidate task ranging from 0

@@ -1361 +1361 @@
 equivalent to discounting 50% of the task's allowed memory from being considered
 as scoring against the task.
 
+For backwards compatibility with previous kernels, /proc/<pid>/oom_adj may also
+be used to tune the badness score. Its acceptable values range from -16
+(OOM_ADJUST_MIN) to +15 (OOM_ADJUST_MAX) and a special value of -17
+(OOM_DISABLE) to disable oom killing entirely for that task. Its value is
+scaled linearly with /proc/<pid>/oom_score_adj.
+
 The value of /proc/<pid>/oom_score_adj may be reduced no lower than the last
 value set by a CAP_SYS_RESOURCE process. To reduce the value any lower
 requires CAP_SYS_RESOURCE.
@@ -1381 +1375 @@
 -------------------------------------------------------------
 
 This file can be used to check the current score used by the oom-killer is for
-any given <pid>.
+any given <pid>. Use it together with /proc/<pid>/oom_score_adj to tune which
+process should be killed in an out-of-memory situation.
+
 
 3.3 /proc/<pid>/io - Display the IO accounting fields
 -------------------------------------------------------
+1 -1
Documentation/networking/netdev-features.txt
@@ -164 +164 @@
 This requests that the NIC receive all possible frames, including errored
 frames (such as bad FCS, etc). This can be helpful when sniffing a link with
 bad packets on it. Some NICs may receive more packets if also put into normal
-PROMISC mdoe.
+PROMISC mode.
+60 -9
MAINTAINERS
@@ -2507 +2507 @@
 M:	Seung-Woo Kim <sw0312.kim@samsung.com>
 M:	Kyungmin Park <kyungmin.park@samsung.com>
 L:	dri-devel@lists.freedesktop.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos.git
 S:	Supported
 F:	drivers/gpu/drm/exynos
 F:	include/drm/exynos*

@@ -3598 +3597 @@
 F:	drivers/net/hyperv/
 F:	drivers/staging/hv/
 
+I2C OVER PARALLEL PORT
+M:	Jean Delvare <khali@linux-fr.org>
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	Documentation/i2c/busses/i2c-parport
+F:	Documentation/i2c/busses/i2c-parport-light
+F:	drivers/i2c/busses/i2c-parport.c
+F:	drivers/i2c/busses/i2c-parport-light.c
+
+I2C/SMBUS CONTROLLER DRIVERS FOR PC
+M:	Jean Delvare <khali@linux-fr.org>
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	Documentation/i2c/busses/i2c-ali1535
+F:	Documentation/i2c/busses/i2c-ali1563
+F:	Documentation/i2c/busses/i2c-ali15x3
+F:	Documentation/i2c/busses/i2c-amd756
+F:	Documentation/i2c/busses/i2c-amd8111
+F:	Documentation/i2c/busses/i2c-i801
+F:	Documentation/i2c/busses/i2c-nforce2
+F:	Documentation/i2c/busses/i2c-piix4
+F:	Documentation/i2c/busses/i2c-sis5595
+F:	Documentation/i2c/busses/i2c-sis630
+F:	Documentation/i2c/busses/i2c-sis96x
+F:	Documentation/i2c/busses/i2c-via
+F:	Documentation/i2c/busses/i2c-viapro
+F:	drivers/i2c/busses/i2c-ali1535.c
+F:	drivers/i2c/busses/i2c-ali1563.c
+F:	drivers/i2c/busses/i2c-ali15x3.c
+F:	drivers/i2c/busses/i2c-amd756.c
+F:	drivers/i2c/busses/i2c-amd756-s4882.c
+F:	drivers/i2c/busses/i2c-amd8111.c
+F:	drivers/i2c/busses/i2c-i801.c
+F:	drivers/i2c/busses/i2c-isch.c
+F:	drivers/i2c/busses/i2c-nforce2.c
+F:	drivers/i2c/busses/i2c-nforce2-s4985.c
+F:	drivers/i2c/busses/i2c-piix4.c
+F:	drivers/i2c/busses/i2c-sis5595.c
+F:	drivers/i2c/busses/i2c-sis630.c
+F:	drivers/i2c/busses/i2c-sis96x.c
+F:	drivers/i2c/busses/i2c-via.c
+F:	drivers/i2c/busses/i2c-viapro.c
+
 I2C/SMBUS STUB DRIVER
 M:	"Mark M. Hoffman" <mhoffman@lightlink.com>
 L:	linux-i2c@vger.kernel.org

@@ -3648 +3604 @@
 F:	drivers/i2c/busses/i2c-stub.c
 
 I2C SUBSYSTEM
-M:	"Jean Delvare (PC drivers, core)" <khali@linux-fr.org>
+M:	Wolfram Sang <w.sang@pengutronix.de>
 M:	"Ben Dooks (embedded platforms)" <ben-linux@fluff.org>
-M:	"Wolfram Sang (embedded platforms)" <w.sang@pengutronix.de>
 L:	linux-i2c@vger.kernel.org
 W:	http://i2c.wiki.kernel.org/
 T:	quilt kernel.org/pub/linux/kernel/people/jdelvare/linux-2.6/jdelvare-i2c/

@@ -3659 +3616 @@
 F:	drivers/i2c/
 F:	include/linux/i2c.h
 F:	include/linux/i2c-*.h
+
+I2C-TAOS-EVM DRIVER
+M:	Jean Delvare <khali@linux-fr.org>
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	Documentation/i2c/busses/i2c-taos-evm
+F:	drivers/i2c/busses/i2c-taos-evm.c
 
 I2C-TINY-USB DRIVER
 M:	Till Harbaum <till@harbaum.org>

@@ -7262 +7212 @@
 S:	Maintained
 F:	arch/xtensa/
 
+THERMAL
+M:	Zhang Rui <rui.zhang@intel.com>
+L:	linux-pm@vger.kernel.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux.git
+S:	Supported
+F:	drivers/thermal/
+F:	include/linux/thermal.h
+
 THINKPAD ACPI EXTRAS DRIVER
 M:	Henrique de Moraes Holschuh <ibm-acpi@hmh.eng.br>
 L:	ibm-acpi-devel@lists.sourceforge.net

@@ -7952 +7894 @@
 M:	Roger Luethi <rl@hellgate.ch>
 S:	Maintained
 F:	drivers/net/ethernet/via/via-rhine.c
-
-VIAPRO SMBUS DRIVER
-M:	Jean Delvare <khali@linux-fr.org>
-L:	linux-i2c@vger.kernel.org
-S:	Maintained
-F:	Documentation/i2c/busses/i2c-viapro
-F:	drivers/i2c/busses/i2c-viapro.c
 
 VIA SD/MMC CARD CONTROLLER DRIVER
 M:	Bruce Chang <brucechang@via.com.tw>
+1 -1
Makefile
@@ -1 +1 @@
 VERSION = 3
 PATCHLEVEL = 7
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc6
 NAME = Terrified Chipmunk
 
 # *DOCUMENTATION*
+5 -5
arch/arm/boot/Makefile
@@ -33 +33 @@
 
 $(obj)/xipImage: vmlinux FORCE
 	$(call if_changed,objcopy)
-	$(kecho) '  Kernel: $@ is ready (physical address: $(CONFIG_XIP_PHYS_ADDR))'
+	@$(kecho) '  Kernel: $@ is ready (physical address: $(CONFIG_XIP_PHYS_ADDR))'
 
 $(obj)/Image $(obj)/zImage: FORCE
 	@echo 'Kernel configured for XIP (CONFIG_XIP_KERNEL=y)'

@@ -48 +48 @@
 
 $(obj)/Image: vmlinux FORCE
 	$(call if_changed,objcopy)
-	$(kecho) '  Kernel: $@ is ready'
+	@$(kecho) '  Kernel: $@ is ready'
 
 $(obj)/compressed/vmlinux: $(obj)/Image FORCE
 	$(Q)$(MAKE) $(build)=$(obj)/compressed $@
 
 $(obj)/zImage: $(obj)/compressed/vmlinux FORCE
 	$(call if_changed,objcopy)
-	$(kecho) '  Kernel: $@ is ready'
+	@$(kecho) '  Kernel: $@ is ready'
 
 endif

@@ -90 +90 @@
 $(obj)/uImage: $(obj)/zImage FORCE
 	@$(check_for_multiple_loadaddr)
 	$(call if_changed,uimage)
-	$(kecho) '  Image $@ is ready'
+	@$(kecho) '  Image $@ is ready'
 
 $(obj)/bootp/bootp: $(obj)/zImage initrd FORCE
 	$(Q)$(MAKE) $(build)=$(obj)/bootp $@

@@ -98 +98 @@
 
 $(obj)/bootpImage: $(obj)/bootp/bootp FORCE
 	$(call if_changed,objcopy)
-	$(kecho) '  Kernel: $@ is ready'
+	@$(kecho) '  Kernel: $@ is ready'
 
 PHONY += initrd FORCE
 initrd:
+2 -2
arch/arm/boot/dts/tegra30.dtsi
@@ -73 +73 @@
 
 	pinmux: pinmux {
 		compatible = "nvidia,tegra30-pinmux";
-		reg = <0x70000868 0xd0   /* Pad control registers */
-		       0x70003000 0x3e0>; /* Mux registers */
+		reg = <0x70000868 0xd4   /* Pad control registers */
+		       0x70003000 0x3e4>; /* Mux registers */
 	};
 
 	serial@70006000 {
+2 -2
arch/arm/include/asm/io.h
@@ -64 +64 @@
 static inline void __raw_writew(u16 val, volatile void __iomem *addr)
 {
 	asm volatile("strh %1, %0"
-		     : "+Qo" (*(volatile u16 __force *)addr)
+		     : "+Q" (*(volatile u16 __force *)addr)
 		     : "r" (val));
 }
 
@@ -72 +72 @@
 {
 	u16 val;
 	asm volatile("ldrh %1, %0"
-		     : "+Qo" (*(volatile u16 __force *)addr),
+		     : "+Q" (*(volatile u16 __force *)addr),
 		       "=r" (val));
 	return val;
 }
-2
arch/arm/include/asm/sched_clock.h
@@ -10 +10 @@
 
 extern void sched_clock_postinit(void);
 extern void setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate);
-extern void setup_sched_clock_needs_suspend(u32 (*read)(void), int bits,
-		unsigned long rate);
 
 #endif
+6 -6
arch/arm/include/asm/vfpmacros.h
@@ -27 +27 @@
 #if __LINUX_ARM_ARCH__ <= 6
 	ldr	\tmp, =elf_hwcap		@ may not have MVFR regs
 	ldr	\tmp, [\tmp, #0]
-	tst	\tmp, #HWCAP_VFPv3D16
-	ldceql	p11, cr0, [\base],#32*4		@ FLDMIAD \base!, {d16-d31}
-	addne	\base, \base, #32*4		@ step over unused register space
+	tst	\tmp, #HWCAP_VFPD32
+	ldcnel	p11, cr0, [\base],#32*4		@ FLDMIAD \base!, {d16-d31}
+	addeq	\base, \base, #32*4		@ step over unused register space
 #else
 	VFPFMRX	\tmp, MVFR0			@ Media and VFP Feature Register 0
 	and	\tmp, \tmp, #MVFR0_A_SIMD_MASK	@ A_SIMD field

@@ -51 +51 @@
 #if __LINUX_ARM_ARCH__ <= 6
 	ldr	\tmp, =elf_hwcap		@ may not have MVFR regs
 	ldr	\tmp, [\tmp, #0]
-	tst	\tmp, #HWCAP_VFPv3D16
-	stceql	p11, cr0, [\base],#32*4		@ FSTMIAD \base!, {d16-d31}
-	addne	\base, \base, #32*4		@ step over unused register space
+	tst	\tmp, #HWCAP_VFPD32
+	stcnel	p11, cr0, [\base],#32*4		@ FSTMIAD \base!, {d16-d31}
+	addeq	\base, \base, #32*4		@ step over unused register space
 #else
 	VFPFMRX	\tmp, MVFR0			@ Media and VFP Feature Register 0
 	and	\tmp, \tmp, #MVFR0_A_SIMD_MASK	@ A_SIMD field
+2 -1
arch/arm/include/uapi/asm/hwcap.h
@@ -18 +18 @@
 #define HWCAP_THUMBEE	(1 << 11)
 #define HWCAP_NEON	(1 << 12)
 #define HWCAP_VFPv3	(1 << 13)
-#define HWCAP_VFPv3D16	(1 << 14)
+#define HWCAP_VFPv3D16	(1 << 14)	/* also set for VFPv4-D16 */
 #define HWCAP_TLS	(1 << 15)
 #define HWCAP_VFPv4	(1 << 16)
 #define HWCAP_IDIVA	(1 << 17)
 #define HWCAP_IDIVT	(1 << 18)
+#define HWCAP_VFPD32	(1 << 19)	/* set if VFP has 32 regs (not 16) */
 #define HWCAP_IDIV	(HWCAP_IDIVA | HWCAP_IDIVT)
+4 -14
arch/arm/kernel/sched_clock.c
@@ -107 +107 @@
 	update_sched_clock();
 }
 
-void __init setup_sched_clock_needs_suspend(u32 (*read)(void), int bits,
-		unsigned long rate)
-{
-	setup_sched_clock(read, bits, rate);
-	cd.needs_suspend = true;
-}
-
 void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
 {
 	unsigned long r, w;

@@ -182 +189 @@
 static int sched_clock_suspend(void)
 {
 	sched_clock_poll(sched_clock_timer.data);
-	if (cd.needs_suspend)
-		cd.suspended = true;
+	cd.suspended = true;
 	return 0;
 }
 
 static void sched_clock_resume(void)
 {
-	if (cd.needs_suspend) {
-		cd.epoch_cyc = read_sched_clock();
-		cd.epoch_cyc_copy = cd.epoch_cyc;
-		cd.suspended = false;
-	}
+	cd.epoch_cyc = read_sched_clock();
+	cd.epoch_cyc_copy = cd.epoch_cyc;
+	cd.suspended = false;
 }
 
 static struct syscore_ops sched_clock_ops = {
+1 -1
arch/arm/mach-at91/at91rm9200_devices.c
@@ -68 +68 @@
 
 	/* Enable overcurrent notification */
 	for (i = 0; i < data->ports; i++) {
-		if (data->overcurrent_pin[i])
+		if (gpio_is_valid(data->overcurrent_pin[i]))
 			at91_set_gpio_input(data->overcurrent_pin[i], 1);
 	}
+1 -1
arch/arm/mach-at91/at91sam9260_devices.c
@@ -72 +72 @@
 
 	/* Enable overcurrent notification */
 	for (i = 0; i < data->ports; i++) {
-		if (data->overcurrent_pin[i])
+		if (gpio_is_valid(data->overcurrent_pin[i]))
 			at91_set_gpio_input(data->overcurrent_pin[i], 1);
 	}
+1 -1
arch/arm/mach-at91/at91sam9261_devices.c
@@ -72 +72 @@
 
 	/* Enable overcurrent notification */
 	for (i = 0; i < data->ports; i++) {
-		if (data->overcurrent_pin[i])
+		if (gpio_is_valid(data->overcurrent_pin[i]))
 			at91_set_gpio_input(data->overcurrent_pin[i], 1);
 	}
+1 -1
arch/arm/mach-at91/at91sam9263_devices.c
@@ -78 +78 @@
 
 	/* Enable overcurrent notification */
 	for (i = 0; i < data->ports; i++) {
-		if (data->overcurrent_pin[i])
+		if (gpio_is_valid(data->overcurrent_pin[i]))
 			at91_set_gpio_input(data->overcurrent_pin[i], 1);
 	}
+6 -6
arch/arm/mach-at91/at91sam9g45_devices.c
@@ -1841 +1841 @@
 		.flags	= IORESOURCE_MEM,
 	},
 	[1] = {
-		.start	= AT91SAM9G45_ID_AESTDESSHA,
-		.end	= AT91SAM9G45_ID_AESTDESSHA,
+		.start	= NR_IRQS_LEGACY + AT91SAM9G45_ID_AESTDESSHA,
+		.end	= NR_IRQS_LEGACY + AT91SAM9G45_ID_AESTDESSHA,
 		.flags	= IORESOURCE_IRQ,
 	},
 };

@@ -1874 +1874 @@
 		.flags	= IORESOURCE_MEM,
 	},
 	[1] = {
-		.start	= AT91SAM9G45_ID_AESTDESSHA,
-		.end	= AT91SAM9G45_ID_AESTDESSHA,
+		.start	= NR_IRQS_LEGACY + AT91SAM9G45_ID_AESTDESSHA,
+		.end	= NR_IRQS_LEGACY + AT91SAM9G45_ID_AESTDESSHA,
 		.flags	= IORESOURCE_IRQ,
 	},
 };

@@ -1910 +1910 @@
 		.flags	= IORESOURCE_MEM,
 	},
 	[1] = {
-		.start	= AT91SAM9G45_ID_AESTDESSHA,
-		.end	= AT91SAM9G45_ID_AESTDESSHA,
+		.start	= NR_IRQS_LEGACY + AT91SAM9G45_ID_AESTDESSHA,
+		.end	= NR_IRQS_LEGACY + AT91SAM9G45_ID_AESTDESSHA,
 		.flags	= IORESOURCE_IRQ,
 	},
 };
+2 -1
arch/arm/mach-highbank/system.c
@@ -28 +28 @@
 	hignbank_set_pwr_soft_reset();
 
 	scu_power_mode(scu_base_addr, SCU_PM_POWEROFF);
-	cpu_do_idle();
+	while (1)
+		cpu_do_idle();
 }
+1 -1
arch/arm/mach-imx/clk-gate2.c
@@ -112 +112 @@
 
 	clk = clk_register(dev, &gate->hw);
 	if (IS_ERR(clk))
-		kfree(clk);
+		kfree(gate);
 
 	return clk;
 }
+1 -1
arch/arm/mach-imx/ehci-imx25.c
@@ -30 +30 @@
 #define MX25_H1_SIC_SHIFT	21
 #define MX25_H1_SIC_MASK	(0x3 << MX25_H1_SIC_SHIFT)
 #define MX25_H1_PP_BIT		(1 << 18)
-#define MX25_H1_PM_BIT		(1 << 8)
+#define MX25_H1_PM_BIT		(1 << 16)
 #define MX25_H1_IPPUE_UP_BIT	(1 << 7)
 #define MX25_H1_IPPUE_DOWN_BIT	(1 << 6)
 #define MX25_H1_TLL_BIT		(1 << 5)
+1 -1
arch/arm/mach-imx/ehci-imx35.c
@@ -30 +30 @@
 #define MX35_H1_SIC_SHIFT	21
 #define MX35_H1_SIC_MASK	(0x3 << MX35_H1_SIC_SHIFT)
 #define MX35_H1_PP_BIT		(1 << 18)
-#define MX35_H1_PM_BIT		(1 << 8)
+#define MX35_H1_PM_BIT		(1 << 16)
 #define MX35_H1_IPPUE_UP_BIT	(1 << 7)
 #define MX35_H1_IPPUE_DOWN_BIT	(1 << 6)
 #define MX35_H1_TLL_BIT		(1 << 5)
+1 -1
arch/arm/mach-omap2/clockdomains44xx_data.c
@@ -359 +359 @@
 	.clkdm_offs	  = OMAP4430_CM2_CAM_CAM_CDOFFS,
 	.wkdep_srcs	  = iss_wkup_sleep_deps,
 	.sleepdep_srcs	  = iss_wkup_sleep_deps,
-	.flags		  = CLKDM_CAN_HWSUP_SWSUP,
+	.flags		  = CLKDM_CAN_SWSUP,
 };
 
 static struct clockdomain l3_dss_44xx_clkdm = {
+79
arch/arm/mach-omap2/devices.c
@@ -19 +19 @@
 #include <linux/of.h>
 #include <linux/pinctrl/machine.h>
 #include <linux/platform_data/omap4-keypad.h>
+#include <linux/platform_data/omap_ocp2scp.h>
 
 #include <asm/mach-types.h>
 #include <asm/mach/map.h>

@@ -614 +613 @@
 static inline void omap_init_vout(void) {}
 #endif
 
+#if defined(CONFIG_OMAP_OCP2SCP) || defined(CONFIG_OMAP_OCP2SCP_MODULE)
+static int count_ocp2scp_devices(struct omap_ocp2scp_dev *ocp2scp_dev)
+{
+	int cnt = 0;
+
+	while (ocp2scp_dev->drv_name != NULL) {
+		cnt++;
+		ocp2scp_dev++;
+	}
+
+	return cnt;
+}
+
+static void omap_init_ocp2scp(void)
+{
+	struct omap_hwmod	*oh;
+	struct platform_device	*pdev;
+	int			bus_id = -1, dev_cnt = 0, i;
+	struct omap_ocp2scp_dev	*ocp2scp_dev;
+	const char		*oh_name, *name;
+	struct omap_ocp2scp_platform_data *pdata;
+
+	if (!cpu_is_omap44xx())
+		return;
+
+	oh_name = "ocp2scp_usb_phy";
+	name	= "omap-ocp2scp";
+
+	oh = omap_hwmod_lookup(oh_name);
+	if (!oh) {
+		pr_err("%s: could not find omap_hwmod for %s\n", __func__,
+								oh_name);
+		return;
+	}
+
+	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+	if (!pdata) {
+		pr_err("%s: No memory for ocp2scp pdata\n", __func__);
+		return;
+	}
+
+	ocp2scp_dev = oh->dev_attr;
+	dev_cnt = count_ocp2scp_devices(ocp2scp_dev);
+
+	if (!dev_cnt) {
+		pr_err("%s: No devices connected to ocp2scp\n", __func__);
+		kfree(pdata);
+		return;
+	}
+
+	pdata->devices = kzalloc(sizeof(struct omap_ocp2scp_dev *)
+					* dev_cnt, GFP_KERNEL);
+	if (!pdata->devices) {
+		pr_err("%s: No memory for ocp2scp pdata devices\n", __func__);
+		kfree(pdata);
+		return;
+	}
+
+	for (i = 0; i < dev_cnt; i++, ocp2scp_dev++)
+		pdata->devices[i] = ocp2scp_dev;
+
+	pdata->dev_cnt = dev_cnt;
+
+	pdev = omap_device_build(name, bus_id, oh, pdata, sizeof(*pdata), NULL,
+								0, false);
+	if (IS_ERR(pdev)) {
+		pr_err("Could not build omap_device for %s %s\n",
+						name, oh_name);
+		kfree(pdata->devices);
+		kfree(pdata);
+		return;
+	}
+}
+#else
+static inline void omap_init_ocp2scp(void) { }
+#endif
+
 /*-------------------------------------------------------------------------*/
 
 static int __init omap2_init_devices(void)

@@ -718 +640 @@
 	omap_init_sham();
 	omap_init_aes();
 	omap_init_vout();
+	omap_init_ocp2scp();
 
 	return 0;
 }
+49 -14
arch/arm/mach-omap2/omap_hwmod.c
@@ -422 +422 @@
 }
 
 /**
+ * _wait_softreset_complete - wait for an OCP softreset to complete
+ * @oh: struct omap_hwmod * to wait on
+ *
+ * Wait until the IP block represented by @oh reports that its OCP
+ * softreset is complete.  This can be triggered by software (see
+ * _ocp_softreset()) or by hardware upon returning from off-mode (one
+ * example is HSMMC).  Waits for up to MAX_MODULE_SOFTRESET_WAIT
+ * microseconds.  Returns the number of microseconds waited.
+ */
+static int _wait_softreset_complete(struct omap_hwmod *oh)
+{
+	struct omap_hwmod_class_sysconfig *sysc;
+	u32 softrst_mask;
+	int c = 0;
+
+	sysc = oh->class->sysc;
+
+	if (sysc->sysc_flags & SYSS_HAS_RESET_STATUS)
+		omap_test_timeout((omap_hwmod_read(oh, sysc->syss_offs)
+				   & SYSS_RESETDONE_MASK),
+				  MAX_MODULE_SOFTRESET_WAIT, c);
+	else if (sysc->sysc_flags & SYSC_HAS_RESET_STATUS) {
+		softrst_mask = (0x1 << sysc->sysc_fields->srst_shift);
+		omap_test_timeout(!(omap_hwmod_read(oh, sysc->sysc_offs)
+				    & softrst_mask),
+				  MAX_MODULE_SOFTRESET_WAIT, c);
+	}
+
+	return c;
+}
+
+/**
  * _set_dmadisable: set OCP_SYSCONFIG.DMADISABLE bit in @v
  * @oh: struct omap_hwmod *
  *

@@ -1314 +1282 @@
 	if (!oh->class->sysc)
 		return;
 
+	/*
+	 * Wait until reset has completed, this is needed as the IP
+	 * block is reset automatically by hardware in some cases
+	 * (off-mode for example), and the drivers require the
+	 * IP to be ready when they access it
+	 */
+	if (oh->flags & HWMOD_CONTROL_OPT_CLKS_IN_RESET)
+		_enable_optional_clocks(oh);
+	_wait_softreset_complete(oh);
+	if (oh->flags & HWMOD_CONTROL_OPT_CLKS_IN_RESET)
+		_disable_optional_clocks(oh);
+
 	v = oh->_sysc_cache;
 	sf = oh->class->sysc->sysc_flags;

@@ -1848 +1804 @@
  */
 static int _ocp_softreset(struct omap_hwmod *oh)
 {
-	u32 v, softrst_mask;
+	u32 v;
 	int c = 0;
 	int ret = 0;

@@ -1878 +1834 @@
 	if (oh->class->sysc->srst_udelay)
 		udelay(oh->class->sysc->srst_udelay);
 
-	if (oh->class->sysc->sysc_flags & SYSS_HAS_RESET_STATUS)
-		omap_test_timeout((omap_hwmod_read(oh,
-						    oh->class->sysc->syss_offs)
-				   & SYSS_RESETDONE_MASK),
-				  MAX_MODULE_SOFTRESET_WAIT, c);
-	else if (oh->class->sysc->sysc_flags & SYSC_HAS_RESET_STATUS) {
-		softrst_mask = (0x1 << oh->class->sysc->sysc_fields->srst_shift);
-		omap_test_timeout(!(omap_hwmod_read(oh,
-						     oh->class->sysc->sysc_offs)
-				    & softrst_mask),
-				  MAX_MODULE_SOFTRESET_WAIT, c);
-	}
-
+	c = _wait_softreset_complete(oh);
 	if (c == MAX_MODULE_SOFTRESET_WAIT)
 		pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n",
 			   oh->name, MAX_MODULE_SOFTRESET_WAIT);

@@ -2383 +2351 @@
 
 	if (oh->_state != _HWMOD_STATE_INITIALIZED)
 		return -EINVAL;
+
+	if (oh->flags & HWMOD_EXT_OPT_MAIN_CLK)
+		return -EPERM;
 
 	if (oh->rst_lines_cnt == 0) {
 		r = _enable(oh);
+36
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
@@ -21 +21 @@
 #include <linux/io.h>
 #include <linux/platform_data/gpio-omap.h>
 #include <linux/power/smartreflex.h>
+#include <linux/platform_data/omap_ocp2scp.h>
 
 #include <plat/omap_hwmod.h>
 #include <plat/i2c.h>

@@ -2126 +2125 @@
 	.name		= "mcpdm",
 	.class		= &omap44xx_mcpdm_hwmod_class,
 	.clkdm_name	= "abe_clkdm",
+	/*
+	 * It's suspected that the McPDM requires an off-chip main
+	 * functional clock, controlled via I2C. This IP block is
+	 * currently reset very early during boot, before I2C is
+	 * available, so it doesn't seem that we have any choice in
+	 * the kernel other than to avoid resetting it.
+	 */
+	.flags		= HWMOD_EXT_OPT_MAIN_CLK,
 	.mpu_irqs	= omap44xx_mcpdm_irqs,
 	.sdma_reqs	= omap44xx_mcpdm_sdma_reqs,
 	.main_clk	= "mcpdm_fck",

@@ -2690 +2681 @@
 	.sysc	= &omap44xx_ocp2scp_sysc,
 };
 
+/* ocp2scp dev_attr */
+static struct resource omap44xx_usb_phy_and_pll_addrs[] = {
+	{
+		.name		= "usb_phy",
+		.start		= 0x4a0ad080,
+		.end		= 0x4a0ae000,
+		.flags		= IORESOURCE_MEM,
+	},
+	{
+		/* XXX: Remove this once control module driver is in place */
+		.name		= "ctrl_dev",
+		.start		= 0x4a002300,
+		.end		= 0x4a002303,
+		.flags		= IORESOURCE_MEM,
+	},
+	{ }
+};
+
+static struct omap_ocp2scp_dev ocp2scp_dev_attr[] = {
+	{
+		.drv_name	= "omap-usb2",
+		.res		= omap44xx_usb_phy_and_pll_addrs,
+	},
+	{ }
+};
+
 /* ocp2scp_usb_phy */
 static struct omap_hwmod omap44xx_ocp2scp_usb_phy_hwmod = {
 	.name		= "ocp2scp_usb_phy",

@@ -2729 +2694 @@
 			.modulemode   = MODULEMODE_HWCTRL,
 		},
 	},
+	.dev_attr	= ocp2scp_dev_attr,
 };
 
 /*
+1 -1
arch/arm/mach-omap2/twl-common.c
@@ -366 +366 @@
 };
 
 static struct regulator_consumer_supply omap4_vdd1_supply[] = {
-	REGULATOR_SUPPLY("vcc", "mpu.0"),
+	REGULATOR_SUPPLY("vcc", "cpu0"),
 };
 
 static struct regulator_consumer_supply omap4_vdd2_supply[] = {
+1 -1
arch/arm/mach-omap2/vc.c
@@ -264 +264 @@
 
 	if (initialized) {
 		if (voltdm->pmic->i2c_high_speed != i2c_high_speed)
-			pr_warn("%s: I2C config for vdd_%s does not match other channels (%u).",
+			pr_warn("%s: I2C config for vdd_%s does not match other channels (%u).\n",
 				__func__, voltdm->name, i2c_high_speed);
 		return;
 	}
+7 -1
arch/arm/mach-pxa/hx4700.c
@@ -28 +28 @@
 #include <linux/mfd/asic3.h>
 #include <linux/mtd/physmap.h>
 #include <linux/pda_power.h>
+#include <linux/pwm.h>
 #include <linux/pwm_backlight.h>
 #include <linux/regulator/driver.h>
 #include <linux/regulator/gpio-regulator.h>

@@ -557 +556 @@
  */
 
 static struct platform_pwm_backlight_data backlight_data = {
-	.pwm_id		= 1,
+	.pwm_id		= -1,	/* Superseded by pwm_lookup */
 	.max_brightness	= 200,
 	.dft_brightness	= 100,
 	.pwm_period_ns	= 30923,

@@ -570 +569 @@
 		.parent	= &pxa27x_device_pwm1.dev,
 		.platform_data = &backlight_data,
 	},
+};
+
+static struct pwm_lookup hx4700_pwm_lookup[] = {
+	PWM_LOOKUP("pxa27x-pwm.1", 0, "pwm-backlight", NULL),
 };
 
 /*

@@ -877 +872 @@
 	pxa_set_stuart_info(NULL);
 
 	platform_add_devices(devices, ARRAY_SIZE(devices));
+	pwm_add_table(hx4700_pwm_lookup, ARRAY_SIZE(hx4700_pwm_lookup));
 
 	pxa_set_ficp_info(&ficp_info);
 	pxa27x_set_i2c_power_info(NULL);
+2 -6
arch/arm/mach-pxa/spitz_pm.c
@@ -86 +86 @@
 	gpio_set_value(SPITZ_GPIO_LED_GREEN, on);
 }
 
-static unsigned long gpio18_config[] = {
-	GPIO18_RDY,
-	GPIO18_GPIO,
-};
+static unsigned long gpio18_config = GPIO18_GPIO;
 
 static void spitz_presuspend(void)
 {

@@ -109 +112 @@
 	PGSR3 &= ~SPITZ_GPIO_G3_STROBE_BIT;
 	PGSR2 |= GPIO_bit(SPITZ_GPIO_KEY_STROBE0);
 
-	pxa2xx_mfp_config(&gpio18_config[0], 1);
+	pxa2xx_mfp_config(&gpio18_config, 1);
 	gpio_request_one(18, GPIOF_OUT_INIT_HIGH, "Unknown");
 	gpio_free(18);

@@ -128 +131 @@
 
 static void spitz_postsuspend(void)
 {
-	pxa2xx_mfp_config(&gpio18_config[1], 1);
 }
 
 static int spitz_should_wakeup(unsigned int resume_on_alarm)
+1 -1
arch/arm/mm/alignment.c
@@ -745 +745 @@
 static int
 do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 {
-	union offset_union offset;
+	union offset_union uninitialized_var(offset);
 	unsigned long instr = 0, instrptr;
 	int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs);
 	unsigned int type;
+6
arch/arm/plat-omap/include/plat/omap_hwmod.h
@@ -443 +443 @@
  *     in order to complete the reset. Optional clocks will be disabled
  *     again after the reset.
  * HWMOD_16BIT_REG: Module has 16bit registers
+ * HWMOD_EXT_OPT_MAIN_CLK: The only main functional clock source for
+ *     this IP block comes from an off-chip source and is not always
+ *     enabled.  This prevents the hwmod code from being able to
+ *     enable and reset the IP block early.  XXX Eventually it should
+ *     be possible to query the clock framework for this information.
  */
 #define HWMOD_SWSUP_SIDLE			(1 << 0)
 #define HWMOD_SWSUP_MSTANDBY			(1 << 1)

@@ -458 +453 @@
 #define HWMOD_NO_IDLEST				(1 << 6)
 #define HWMOD_CONTROL_OPT_CLKS_IN_RESET		(1 << 7)
 #define HWMOD_16BIT_REG				(1 << 8)
+#define HWMOD_EXT_OPT_MAIN_CLK			(1 << 9)
 
 /*
  * omap_hwmod._int_flags definitions
+1 -1
arch/arm/tools/Makefile
@@ -5 +5 @@
 #
 
 include/generated/mach-types.h: $(src)/gen-mach-types $(src)/mach-types
-	$(kecho) '  Generating $@'
+	@$(kecho) '  Generating $@'
 	@mkdir -p $(dir $@)
 	$(Q)$(AWK) -f $^ > $@ || { rm -f $@; /bin/false; }
+6 -3
arch/arm/vfp/vfpmodule.c
@@ -701 +701 @@
 				elf_hwcap |= HWCAP_VFPv3;
 
 				/*
-				 * Check for VFPv3 D16. CPUs in this configuration
-				 * only have 16 x 64bit registers.
+				 * Check for VFPv3 D16 and VFPv4 D16. CPUs in
+				 * this configuration only have 16 x 64bit
+				 * registers.
 				 */
 				if (((fmrx(MVFR0) & MVFR0_A_SIMD_MASK)) == 1)
-					elf_hwcap |= HWCAP_VFPv3D16;
+					elf_hwcap |= HWCAP_VFPv3D16; /* also v4-D16 */
+				else
+					elf_hwcap |= HWCAP_VFPD32;
 			}
 #endif
 			/*
+11
arch/arm/xen/enlighten.c
@@ -166 +166 @@
 	*pages = NULL;
 }
 EXPORT_SYMBOL_GPL(free_xenballooned_pages);
+
+/* In the hypervisor.S file. */
+EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
+EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
+EXPORT_SYMBOL_GPL(HYPERVISOR_xen_version);
+EXPORT_SYMBOL_GPL(HYPERVISOR_console_io);
+EXPORT_SYMBOL_GPL(HYPERVISOR_sched_op);
+EXPORT_SYMBOL_GPL(HYPERVISOR_hvm_op);
+EXPORT_SYMBOL_GPL(HYPERVISOR_memory_op);
+EXPORT_SYMBOL_GPL(HYPERVISOR_physdev_op);
+EXPORT_SYMBOL_GPL(privcmd_call);
+1
arch/arm64/Kconfig
@@ -1 +1 @@
 config ARM64
 	def_bool y
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_HARDIRQS_NO_DEPRECATED
 	select GENERIC_IOMAP
+1 -4
arch/arm64/include/asm/elf.h
@@ -25 +25 @@
 #include <asm/user.h>
 
 typedef unsigned long elf_greg_t;
-typedef unsigned long elf_freg_t[3];
 
 #define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
 typedef elf_greg_t elf_gregset_t[ELF_NGREG];
-
-typedef struct user_fp elf_fpregset_t;
+typedef struct user_fpsimd_state elf_fpregset_t;
 
 #define EM_AARCH64		183

@@ -84 +86 @@
 #define R_AARCH64_MOVW_PREL_G2		291
 #define R_AARCH64_MOVW_PREL_G2_NC	292
 #define R_AARCH64_MOVW_PREL_G3		293
-
 
 /*
  * These are used to set parameters in the core dumps.
+2 -3
arch/arm64/include/asm/fpsimd.h
@@ -25 +25 @@
  *  - FPSR and FPCR
  *  - 32 128-bit data registers
  *
- * Note that user_fp forms a prefix of this structure, which is relied
- * upon in the ptrace FP/SIMD accessors. struct user_fpsimd_state must
- * form a prefix of struct fpsimd_state.
+ * Note that user_fpsimd forms a prefix of this structure, which is
+ * relied upon in the ptrace FP/SIMD accessors.
  */
 struct fpsimd_state {
 	union {
+5 -5
arch/arm64/include/asm/io.h
··· 114 114 * I/O port access primitives. 115 115 */ 116 116 #define IO_SPACE_LIMIT 0xffff 117 - #define PCI_IOBASE ((void __iomem *)0xffffffbbfffe0000UL) 117 + #define PCI_IOBASE ((void __iomem *)(MODULES_VADDR - SZ_2M)) 118 118 119 119 static inline u8 inb(unsigned long addr) 120 120 { ··· 222 222 extern void __iounmap(volatile void __iomem *addr); 223 223 224 224 #define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_DIRTY) 225 - #define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_XN | PTE_ATTRINDX(MT_DEVICE_nGnRE)) 225 + #define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE)) 226 226 #define PROT_NORMAL_NC (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL_NC)) 227 227 228 - #define ioremap(addr, size) __ioremap((addr), (size), PROT_DEVICE_nGnRE) 229 - #define ioremap_nocache(addr, size) __ioremap((addr), (size), PROT_DEVICE_nGnRE) 230 - #define ioremap_wc(addr, size) __ioremap((addr), (size), PROT_NORMAL_NC) 228 + #define ioremap(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE)) 229 + #define ioremap_nocache(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE)) 230 + #define ioremap_wc(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL_NC)) 231 231 #define iounmap __iounmap 232 232 233 233 #define ARCH_HAS_IOREMAP_WC
+4 -2
arch/arm64/include/asm/pgtable-hwdef.h
··· 38 38 #define PMD_SECT_S (_AT(pmdval_t, 3) << 8) 39 39 #define PMD_SECT_AF (_AT(pmdval_t, 1) << 10) 40 40 #define PMD_SECT_NG (_AT(pmdval_t, 1) << 11) 41 - #define PMD_SECT_XN (_AT(pmdval_t, 1) << 54) 41 + #define PMD_SECT_PXN (_AT(pmdval_t, 1) << 53) 42 + #define PMD_SECT_UXN (_AT(pmdval_t, 1) << 54) 42 43 43 44 /* 44 45 * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers). ··· 58 57 #define PTE_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */ 59 58 #define PTE_AF (_AT(pteval_t, 1) << 10) /* Access Flag */ 60 59 #define PTE_NG (_AT(pteval_t, 1) << 11) /* nG */ 61 - #define PTE_XN (_AT(pteval_t, 1) << 54) /* XN */ 60 + #define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */ 61 + #define PTE_UXN (_AT(pteval_t, 1) << 54) /* User XN */ 62 62 63 63 /* 64 64 * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
+19 -19
arch/arm64/include/asm/pgtable.h
··· 62 62 63 63 #define _MOD_PROT(p, b) __pgprot(pgprot_val(p) | (b)) 64 64 65 - #define PAGE_NONE _MOD_PROT(pgprot_default, PTE_NG | PTE_XN | PTE_RDONLY) 66 - #define PAGE_SHARED _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN) 67 - #define PAGE_SHARED_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG) 68 - #define PAGE_COPY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY) 69 - #define PAGE_COPY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_RDONLY) 70 - #define PAGE_READONLY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY) 71 - #define PAGE_READONLY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_RDONLY) 72 - #define PAGE_KERNEL _MOD_PROT(pgprot_default, PTE_XN | PTE_DIRTY) 73 - #define PAGE_KERNEL_EXEC _MOD_PROT(pgprot_default, PTE_DIRTY) 65 + #define PAGE_NONE _MOD_PROT(pgprot_default, PTE_NG | PTE_PXN | PTE_UXN | PTE_RDONLY) 66 + #define PAGE_SHARED _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) 67 + #define PAGE_SHARED_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN) 68 + #define PAGE_COPY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_RDONLY) 69 + #define PAGE_COPY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_RDONLY) 70 + #define PAGE_READONLY _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_RDONLY) 71 + #define PAGE_READONLY_EXEC _MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_PXN | PTE_RDONLY) 72 + #define PAGE_KERNEL _MOD_PROT(pgprot_default, PTE_PXN | PTE_UXN | PTE_DIRTY) 73 + #define PAGE_KERNEL_EXEC _MOD_PROT(pgprot_default, PTE_UXN | PTE_DIRTY) 74 74 75 - #define __PAGE_NONE __pgprot(_PAGE_DEFAULT | PTE_NG | PTE_XN | PTE_RDONLY) 76 - #define __PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN) 77 - #define __PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG) 78 - #define __PAGE_COPY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY) 79 - #define __PAGE_COPY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_RDONLY) 80 - #define __PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY) 81 - #define __PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_RDONLY) 75 + #define __PAGE_NONE __pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN | PTE_UXN | PTE_RDONLY) 76 + #define __PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN) 77 + #define __PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN) 78 + #define __PAGE_COPY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_RDONLY) 79 + #define __PAGE_COPY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_RDONLY) 80 + #define __PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_RDONLY) 81 + #define __PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_RDONLY) 82 82 83 83 #endif /* __ASSEMBLY__ */ 84 84 ··· 130 130 #define pte_young(pte) (pte_val(pte) & PTE_AF) 131 131 #define pte_special(pte) (pte_val(pte) & PTE_SPECIAL) 132 132 #define pte_write(pte) (!(pte_val(pte) & PTE_RDONLY)) 133 - #define pte_exec(pte) (!(pte_val(pte) & PTE_XN)) 133 + #define pte_exec(pte) (!(pte_val(pte) & PTE_UXN)) 134 134 135 135 #define pte_present_exec_user(pte) \ 136 - ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_XN)) == \ 136 + ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == \ 137 137 (PTE_VALID | PTE_USER)) 138 138 139 139 #define PTE_BIT_FUNC(fn,op) \ ··· 262 262 263 263 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) 264 264 { 265 - const pteval_t mask = PTE_USER | PTE_XN | PTE_RDONLY; 265 + const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY; 266 266 pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask); 267 267 return pte; 268 268 }
+2
arch/arm64/include/asm/processor.h
··· 43 43 #else 44 44 #define STACK_TOP STACK_TOP_MAX 45 45 #endif /* CONFIG_COMPAT */ 46 + 47 + #define ARCH_LOW_ADDRESS_LIMIT PHYS_MASK 46 48 #endif /* __KERNEL__ */ 47 49 48 50 struct debug_info {
-1
arch/arm64/include/asm/unistd.h
··· 14 14 * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 15 */ 16 16 #ifdef CONFIG_COMPAT 17 - #define __ARCH_WANT_COMPAT_IPC_PARSE_VERSION 18 17 #define __ARCH_WANT_COMPAT_STAT64 19 18 #define __ARCH_WANT_SYS_GETHOSTNAME 20 19 #define __ARCH_WANT_SYS_PAUSE
+2 -8
arch/arm64/kernel/perf_event.c
··· 613 613 ARMV8_PMUV3_PERFCTR_BUS_ACCESS = 0x19, 614 614 ARMV8_PMUV3_PERFCTR_MEM_ERROR = 0x1A, 615 615 ARMV8_PMUV3_PERFCTR_BUS_CYCLES = 0x1D, 616 - 617 - /* 618 - * This isn't an architected event. 619 - * We detect this event number and use the cycle counter instead. 620 - */ 621 - ARMV8_PMUV3_PERFCTR_CPU_CYCLES = 0xFF, 622 616 }; 623 617 624 618 /* PMUv3 HW events mapping. */ 625 619 static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = { 626 - [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES, 620 + [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES, 627 621 [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED, 628 622 [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS, 629 623 [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL, ··· 1100 1106 unsigned long evtype = event->config_base & ARMV8_EVTYPE_EVENT; 1101 1107 1102 1108 /* Always place a cycle counter into the cycle counter. */ 1103 - if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) { 1109 + if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) { 1104 1110 if (test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask)) 1105 1111 return -EAGAIN; 1106 1112
-18
arch/arm64/kernel/process.c
··· 310 310 } 311 311 312 312 /* 313 - * Fill in the task's elfregs structure for a core dump. 314 - */ 315 - int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs) 316 - { 317 - elf_core_copy_regs(elfregs, task_pt_regs(t)); 318 - return 1; 319 - } 320 - 321 - /* 322 - * fill in the fpe structure for a core dump... 323 - */ 324 - int dump_fpu (struct pt_regs *regs, struct user_fp *fp) 325 - { 326 - return 0; 327 - } 328 - EXPORT_SYMBOL(dump_fpu); 329 - 330 - /* 331 313 * Shuffle the argument into the correct register before calling the 332 314 * thread function. x1 is the thread argument, x2 is the pointer to 333 315 * the thread function, and x3 points to the exit function.
+1 -2
arch/arm64/kernel/smp.c
··· 211 211 * before we continue. 212 212 */ 213 213 set_cpu_online(cpu, true); 214 - while (!cpu_active(cpu)) 215 - cpu_relax(); 214 + complete(&cpu_running); 216 215 217 216 /* 218 217 * OK, it's off to the idle thread for us
+1 -1
arch/arm64/mm/init.c
··· 80 80 #ifdef CONFIG_ZONE_DMA32 81 81 /* 4GB maximum for 32-bit only capable devices */ 82 82 max_dma32 = min(max, MAX_DMA32_PFN); 83 - zone_size[ZONE_DMA32] = max_dma32 - min; 83 + zone_size[ZONE_DMA32] = max(min, max_dma32) - min; 84 84 #endif 85 85 zone_size[ZONE_NORMAL] = max - max_dma32; 86 86
+2 -1
arch/h8300/include/asm/cache.h
··· 2 2 #define __ARCH_H8300_CACHE_H 3 3 4 4 /* bytes per L1 cache line */ 5 - #define L1_CACHE_BYTES 4 5 + #define L1_CACHE_SHIFT 2 6 + #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT) 6 7 7 8 /* m68k-elf-gcc 2.95.2 doesn't like these */ 8 9
-1
arch/ia64/mm/init.c
··· 637 637 638 638 high_memory = __va(max_low_pfn * PAGE_SIZE); 639 639 640 - reset_zone_present_pages(); 641 640 for_each_online_pgdat(pgdat) 642 641 if (pgdat->bdata->node_bootmem_map) 643 642 totalram_pages += free_all_bootmem_node(pgdat);
+1
arch/mips/cavium-octeon/executive/cvmx-l2c.c
··· 30 30 * measurement, and debugging facilities. 31 31 */ 32 32 33 + #include <linux/irqflags.h> 33 34 #include <asm/octeon/cvmx.h> 34 35 #include <asm/octeon/cvmx-l2c.h> 35 36 #include <asm/octeon/cvmx-spinlock.h>
+1
arch/mips/fw/arc/misc.c
··· 11 11 */ 12 12 #include <linux/init.h> 13 13 #include <linux/kernel.h> 14 + #include <linux/irqflags.h> 14 15 15 16 #include <asm/bcache.h> 16 17
+39 -89
arch/mips/include/asm/bitops.h
··· 14 14 #endif 15 15 16 16 #include <linux/compiler.h> 17 - #include <linux/irqflags.h> 18 17 #include <linux/types.h> 19 18 #include <asm/barrier.h> 20 19 #include <asm/byteorder.h> /* sigh ... */ ··· 43 44 #define smp_mb__before_clear_bit() smp_mb__before_llsc() 44 45 #define smp_mb__after_clear_bit() smp_llsc_mb() 45 46 47 + 48 + /* 49 + * These are the "slower" versions of the functions and are in bitops.c. 50 + * These functions call raw_local_irq_{save,restore}(). 51 + */ 52 + void __mips_set_bit(unsigned long nr, volatile unsigned long *addr); 53 + void __mips_clear_bit(unsigned long nr, volatile unsigned long *addr); 54 + void __mips_change_bit(unsigned long nr, volatile unsigned long *addr); 55 + int __mips_test_and_set_bit(unsigned long nr, 56 + volatile unsigned long *addr); 57 + int __mips_test_and_set_bit_lock(unsigned long nr, 58 + volatile unsigned long *addr); 59 + int __mips_test_and_clear_bit(unsigned long nr, 60 + volatile unsigned long *addr); 61 + int __mips_test_and_change_bit(unsigned long nr, 62 + volatile unsigned long *addr); 63 + 64 + 46 65 /* 47 66 * set_bit - Atomically set a bit in memory 48 67 * @nr: the bit to set ··· 74 57 static inline void set_bit(unsigned long nr, volatile unsigned long *addr) 75 58 { 76 59 unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG); 77 - unsigned short bit = nr & SZLONG_MASK; 60 + int bit = nr & SZLONG_MASK; 78 61 unsigned long temp; 79 62 80 63 if (kernel_uses_llsc && R10000_LLSC_WAR) { ··· 109 92 : "=&r" (temp), "+m" (*m) 110 93 : "ir" (1UL << bit)); 111 94 } while (unlikely(!temp)); 112 - } else { 113 - volatile unsigned long *a = addr; 114 - unsigned long mask; 115 - unsigned long flags; 116 - 117 - a += nr >> SZLONG_LOG; 118 - mask = 1UL << bit; 119 - raw_local_irq_save(flags); 120 - *a |= mask; 121 - raw_local_irq_restore(flags); 122 - } 95 + } else 96 + __mips_set_bit(nr, addr); 123 97 } 124 98 125 99 /* ··· 126 118 static inline void clear_bit(unsigned long nr, volatile unsigned long *addr) 127 119 { 128 120 unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG); 129 - unsigned short bit = nr & SZLONG_MASK; 121 + int bit = nr & SZLONG_MASK; 130 122 unsigned long temp; 131 123 132 124 if (kernel_uses_llsc && R10000_LLSC_WAR) { ··· 161 153 : "=&r" (temp), "+m" (*m) 162 154 : "ir" (~(1UL << bit))); 163 155 } while (unlikely(!temp)); 164 - } else { 165 - volatile unsigned long *a = addr; 166 - unsigned long mask; 167 - unsigned long flags; 168 - 169 - a += nr >> SZLONG_LOG; 170 - mask = 1UL << bit; 171 - raw_local_irq_save(flags); 172 - *a &= ~mask; 173 - raw_local_irq_restore(flags); 174 - } 156 + } else 157 + __mips_clear_bit(nr, addr); 175 158 } 176 159 177 160 /* ··· 190 191 */ 191 192 static inline void change_bit(unsigned long nr, volatile unsigned long *addr) 192 193 { 193 - unsigned short bit = nr & SZLONG_MASK; 194 + int bit = nr & SZLONG_MASK; 194 195 195 196 if (kernel_uses_llsc && R10000_LLSC_WAR) { 196 197 unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG); ··· 219 220 : "=&r" (temp), "+m" (*m) 220 221 : "ir" (1UL << bit)); 221 222 } while (unlikely(!temp)); 222 - } else { 223 - volatile unsigned long *a = addr; 224 - unsigned long mask; 225 - unsigned long flags; 226 - 227 - a += nr >> SZLONG_LOG; 228 - mask = 1UL << bit; 229 - raw_local_irq_save(flags); 230 - *a ^= mask; 231 - raw_local_irq_restore(flags); 232 - } 223 + } else 224 + __mips_change_bit(nr, addr); 233 225 } 234 226 235 227 /* ··· 234 244 static inline int test_and_set_bit(unsigned long nr, 235 245 volatile unsigned long *addr) 236 246 { 237 - unsigned short bit = nr & SZLONG_MASK; 247 + int bit = nr & SZLONG_MASK; 238 248 unsigned long res; 239 249 240 250 smp_mb__before_llsc(); ··· 271 281 } while (unlikely(!res)); 272 282 273 283 res = temp & (1UL << bit); 274 - } else { 275 - volatile unsigned long *a = addr; 276 - unsigned long mask; 277 - unsigned long flags; 278 - 279 - a += nr >> SZLONG_LOG; 280 - mask = 1UL << bit; 281 - raw_local_irq_save(flags); 282 - res = (mask & *a); 283 - *a |= mask; 284 - raw_local_irq_restore(flags); 285 - } 284 + } else 285 + res = __mips_test_and_set_bit(nr, addr); 286 286 287 287 smp_llsc_mb(); ··· 290 310 static inline int test_and_set_bit_lock(unsigned long nr, 291 311 volatile unsigned long *addr) 292 312 { 293 - unsigned short bit = nr & SZLONG_MASK; 313 + int bit = nr & SZLONG_MASK; 294 314 unsigned long res; 295 315 296 316 if (kernel_uses_llsc && R10000_LLSC_WAR) { ··· 325 345 } while (unlikely(!res)); 326 346 327 347 res = temp & (1UL << bit); 328 - } else { 329 - volatile unsigned long *a = addr; 330 - unsigned long mask; 331 - unsigned long flags; 332 - 333 - a += nr >> SZLONG_LOG; 334 - mask = 1UL << bit; 335 - raw_local_irq_save(flags); 336 - res = (mask & *a); 337 - *a |= mask; 338 - raw_local_irq_restore(flags); 339 - } 348 + } else 349 + res = __mips_test_and_set_bit_lock(nr, addr); 340 350 341 351 smp_llsc_mb(); ··· 343 373 static inline int test_and_clear_bit(unsigned long nr, 344 374 volatile unsigned long *addr) 345 375 { 346 - unsigned short bit = nr & SZLONG_MASK; 376 + int bit = nr & SZLONG_MASK; 347 377 unsigned long res; 348 378 349 379 smp_mb__before_llsc(); ··· 398 428 } while (unlikely(!res)); 399 429 400 430 res = temp & (1UL << bit); 401 - } else { 402 - volatile unsigned long *a = addr; 403 - unsigned long mask; 404 - unsigned long flags; 405 - 406 - a += nr >> SZLONG_LOG; 407 - mask = 1UL << bit; 408 - raw_local_irq_save(flags); 409 - res = (mask & *a); 410 - *a &= ~mask; 411 - raw_local_irq_restore(flags); 412 - } 431 + } else 432 + res = __mips_test_and_clear_bit(nr, addr); 413 433 414 434 smp_llsc_mb(); ··· 417 457 static inline int test_and_change_bit(unsigned long nr, 418 458 volatile unsigned long *addr) 419 459 { 420 - unsigned short bit = nr & SZLONG_MASK; 460 + int bit = nr & SZLONG_MASK; 421 461 unsigned long res; 422 462 423 463 smp_mb__before_llsc(); ··· 454 494 } while (unlikely(!res)); 455 495 456 496 res = temp & (1UL << bit); 457 - } else { 458 - volatile unsigned long *a = addr; 459 - unsigned long mask; 460 - unsigned long flags; 461 - 462 - a += nr >> SZLONG_LOG; 463 - mask = 1UL << bit; 464 - raw_local_irq_save(flags); 465 - res = (mask & *a); 466 - *a ^= mask; 467 - raw_local_irq_restore(flags); 468 - } 497 + } else 498 + res = __mips_test_and_change_bit(nr, addr); 469 499 470 500 smp_llsc_mb(); 471 501
+1 -1
arch/mips/include/asm/compat.h
··· 290 290 291 291 static inline int is_compat_task(void) 292 292 { 293 - return test_thread_flag(TIF_32BIT); 293 + return test_thread_flag(TIF_32BIT_ADDR); 294 294 } 295 295 296 296 #endif /* _ASM_COMPAT_H */
+1
arch/mips/include/asm/io.h
··· 15 15 #include <linux/compiler.h> 16 16 #include <linux/kernel.h> 17 17 #include <linux/types.h> 18 + #include <linux/irqflags.h> 18 19 19 20 #include <asm/addrspace.h> 20 21 #include <asm/bug.h>
+100 -157
arch/mips/include/asm/irqflags.h
··· 16 16 #include <linux/compiler.h> 17 17 #include <asm/hazards.h> 18 18 19 + #if defined(CONFIG_CPU_MIPSR2) && !defined(CONFIG_MIPS_MT_SMTC) 20 + 21 + __asm__( 22 + " .macro arch_local_irq_disable\n" 23 + " .set push \n" 24 + " .set noat \n" 25 + " di \n" 26 + " irq_disable_hazard \n" 27 + " .set pop \n" 28 + " .endm \n"); 29 + 30 + static inline void arch_local_irq_disable(void) 31 + { 32 + __asm__ __volatile__( 33 + "arch_local_irq_disable" 34 + : /* no outputs */ 35 + : /* no inputs */ 36 + : "memory"); 37 + } 38 + 39 + 40 + __asm__( 41 + " .macro arch_local_irq_save result \n" 42 + " .set push \n" 43 + " .set reorder \n" 44 + " .set noat \n" 45 + " di \\result \n" 46 + " andi \\result, 1 \n" 47 + " irq_disable_hazard \n" 48 + " .set pop \n" 49 + " .endm \n"); 50 + 51 + static inline unsigned long arch_local_irq_save(void) 52 + { 53 + unsigned long flags; 54 + asm volatile("arch_local_irq_save\t%0" 55 + : "=r" (flags) 56 + : /* no inputs */ 57 + : "memory"); 58 + return flags; 59 + } 60 + 61 + 62 + __asm__( 63 + " .macro arch_local_irq_restore flags \n" 64 + " .set push \n" 65 + " .set noreorder \n" 66 + " .set noat \n" 67 + #if defined(CONFIG_IRQ_CPU) 68 + /* 69 + * Slow, but doesn't suffer from a relatively unlikely race 70 + * condition we're having since days 1. 71 + */ 72 + " beqz \\flags, 1f \n" 73 + " di \n" 74 + " ei \n" 75 + "1: \n" 76 + #else 77 + /* 78 + * Fast, dangerous. Life is fun, life is good. 79 + */ 80 + " mfc0 $1, $12 \n" 81 + " ins $1, \\flags, 0, 1 \n" 82 + " mtc0 $1, $12 \n" 83 + #endif 84 + " irq_disable_hazard \n" 85 + " .set pop \n" 86 + " .endm \n"); 87 + 88 + static inline void arch_local_irq_restore(unsigned long flags) 89 + { 90 + unsigned long __tmp1; 91 + 92 + __asm__ __volatile__( 93 + "arch_local_irq_restore\t%0" 94 + : "=r" (__tmp1) 95 + : "0" (flags) 96 + : "memory"); 97 + } 98 + 99 + static inline void __arch_local_irq_restore(unsigned long flags) 100 + { 101 + unsigned long __tmp1; 102 + 103 + __asm__ __volatile__( 104 + "arch_local_irq_restore\t%0" 105 + : "=r" (__tmp1) 106 + : "0" (flags) 107 + : "memory"); 108 + } 109 + #else 110 + /* Functions that require preempt_{dis,en}able() are in mips-atomic.c */ 111 + void arch_local_irq_disable(void); 112 + unsigned long arch_local_irq_save(void); 113 + void arch_local_irq_restore(unsigned long flags); 114 + void __arch_local_irq_restore(unsigned long flags); 115 + #endif /* if defined(CONFIG_CPU_MIPSR2) && !defined(CONFIG_MIPS_MT_SMTC) */ 116 + 117 + 19 118 __asm__( 20 119 " .macro arch_local_irq_enable \n" 21 120 " .set push \n" ··· 156 57 } 157 58 158 59 159 - /* 160 - * For cli() we have to insert nops to make sure that the new value 161 - * has actually arrived in the status register before the end of this 162 - * macro. 163 - * R4000/R4400 need three nops, the R4600 two nops and the R10000 needs 164 - * no nops at all. 165 - */ 166 - /* 167 - * For TX49, operating only IE bit is not enough. 168 - * 169 - * If mfc0 $12 follows store and the mfc0 is last instruction of a 170 - * page and fetching the next instruction causes TLB miss, the result 171 - * of the mfc0 might wrongly contain EXL bit. 172 - * 173 - * ERT-TX49H2-027, ERT-TX49H3-012, ERT-TX49HL3-006, ERT-TX49H4-008 174 - * 175 - * Workaround: mask EXL bit of the result or place a nop before mfc0. 176 - */ 177 - __asm__( 178 - " .macro arch_local_irq_disable\n" 179 - " .set push \n" 180 - " .set noat \n" 181 - #ifdef CONFIG_MIPS_MT_SMTC 182 - " mfc0 $1, $2, 1 \n" 183 - " ori $1, 0x400 \n" 184 - " .set noreorder \n" 185 - " mtc0 $1, $2, 1 \n" 186 - #elif defined(CONFIG_CPU_MIPSR2) 187 - " di \n" 188 - #else 189 - " mfc0 $1,$12 \n" 190 - " ori $1,0x1f \n" 191 - " xori $1,0x1f \n" 192 - " .set noreorder \n" 193 - " mtc0 $1,$12 \n" 194 - #endif 195 - " irq_disable_hazard \n" 196 - " .set pop \n" 197 - " .endm \n"); 198 - 199 - static inline void arch_local_irq_disable(void) 200 - { 201 - __asm__ __volatile__( 202 - "arch_local_irq_disable" 203 - : /* no outputs */ 204 - : /* no inputs */ 205 - : "memory"); 206 - } 207 - 208 60 __asm__( 209 61 " .macro arch_local_save_flags flags \n" 210 62 " .set push \n" ··· 175 125 return flags; 176 126 } 177 127 178 - __asm__( 179 - " .macro arch_local_irq_save result \n" 180 - " .set push \n" 181 - " .set reorder \n" 182 - " .set noat \n" 183 - #ifdef CONFIG_MIPS_MT_SMTC 184 - " mfc0 \\result, $2, 1 \n" 185 - " ori $1, \\result, 0x400 \n" 186 - " .set noreorder \n" 187 - " mtc0 $1, $2, 1 \n" 188 - " andi \\result, \\result, 0x400 \n" 189 - #elif defined(CONFIG_CPU_MIPSR2) 190 - " di \\result \n" 191 - " andi \\result, 1 \n" 192 - #else 193 - " mfc0 \\result, $12 \n" 194 - " ori $1, \\result, 0x1f \n" 195 - " xori $1, 0x1f \n" 196 - " .set noreorder \n" 197 - " mtc0 $1, $12 \n" 198 - #endif 199 - " irq_disable_hazard \n" 200 - " .set pop \n" 201 - " .endm \n"); 202 - 203 - static inline unsigned long arch_local_irq_save(void) 204 - { 205 - unsigned long flags; 206 - asm volatile("arch_local_irq_save\t%0" 207 - : "=r" (flags) 208 - : /* no inputs */ 209 - : "memory"); 210 - return flags; 211 - } 212 - 213 - __asm__( 214 - " .macro arch_local_irq_restore flags \n" 215 - " .set push \n" 216 - " .set noreorder \n" 217 - " .set noat \n" 218 - #ifdef CONFIG_MIPS_MT_SMTC 219 - "mfc0 $1, $2, 1 \n" 220 - "andi \\flags, 0x400 \n" 221 - "ori $1, 0x400 \n" 222 - "xori $1, 0x400 \n" 223 - "or \\flags, $1 \n" 224 - "mtc0 \\flags, $2, 1 \n" 225 - #elif defined(CONFIG_CPU_MIPSR2) && defined(CONFIG_IRQ_CPU) 226 - /* 227 - * Slow, but doesn't suffer from a relatively unlikely race 228 - * condition we're having since days 1. 229 - */ 230 - " beqz \\flags, 1f \n" 231 - " di \n" 232 - " ei \n" 233 - "1: \n" 234 - #elif defined(CONFIG_CPU_MIPSR2) 235 - /* 236 - * Fast, dangerous. Life is fun, life is good. 237 - */ 238 - " mfc0 $1, $12 \n" 239 - " ins $1, \\flags, 0, 1 \n" 240 - " mtc0 $1, $12 \n" 241 - #else 242 - " mfc0 $1, $12 \n" 243 - " andi \\flags, 1 \n" 244 - " ori $1, 0x1f \n" 245 - " xori $1, 0x1f \n" 246 - " or \\flags, $1 \n" 247 - " mtc0 \\flags, $12 \n" 248 - #endif 249 - " irq_disable_hazard \n" 250 - " .set pop \n" 251 - " .endm \n"); 252 - 253 - 254 - static inline void arch_local_irq_restore(unsigned long flags) 255 - { 256 - unsigned long __tmp1; 257 - 258 - #ifdef CONFIG_MIPS_MT_SMTC 259 - /* 260 - * SMTC kernel needs to do a software replay of queued 261 - * IPIs, at the cost of branch and call overhead on each 262 - * local_irq_restore() 263 - */ 264 - if (unlikely(!(flags & 0x0400))) 265 - smtc_ipi_replay(); 266 - #endif 267 - 268 - __asm__ __volatile__( 269 - "arch_local_irq_restore\t%0" 270 - : "=r" (__tmp1) 271 - : "0" (flags) 272 - : "memory"); 273 - } 274 - 275 - static inline void __arch_local_irq_restore(unsigned long flags) 276 - { 277 - unsigned long __tmp1; 278 - 279 - __asm__ __volatile__( 280 - "arch_local_irq_restore\t%0" 281 - : "=r" (__tmp1) 282 - : "0" (flags) 283 - : "memory"); 284 - } 285 128 286 129 static inline int arch_irqs_disabled_flags(unsigned long flags) 287 130 { ··· 188 245 #endif 189 246 } 190 247 191 - #endif 248 + #endif /* #ifndef __ASSEMBLY__ */ 192 249 193 250 /* 194 251 * Do the CPU's IRQ-state tracing from assembly code.
-6
arch/mips/include/asm/thread_info.h
··· 112 112 #define TIF_LOAD_WATCH 25 /* If set, load watch registers */ 113 113 #define TIF_SYSCALL_TRACE 31 /* syscall trace active */ 114 114 115 - #ifdef CONFIG_MIPS32_O32 116 - #define TIF_32BIT TIF_32BIT_REGS 117 - #elif defined(CONFIG_MIPS32_N32) 118 - #define TIF_32BIT _TIF_32BIT_ADDR 119 - #endif /* CONFIG_MIPS32_O32 */ 120 - 121 115 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) 122 116 #define _TIF_SIGPENDING (1<<TIF_SIGPENDING) 123 117 #define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
+3 -2
arch/mips/lib/Makefile
··· 2 2 # Makefile for MIPS-specific library files.. 3 3 # 4 4 5 - lib-y += csum_partial.o delay.o memcpy.o memset.o \ 6 - strlen_user.o strncpy_user.o strnlen_user.o uncached.o 5 + lib-y += bitops.o csum_partial.o delay.o memcpy.o memset.o \ 6 + mips-atomic.o strlen_user.o strncpy_user.o \ 7 + strnlen_user.o uncached.o 7 8 8 9 obj-y += iomap.o 9 10 obj-$(CONFIG_PCI) += iomap-pci.o
+179
arch/mips/lib/bitops.c
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (c) 1994-1997, 99, 2000, 06, 07 Ralf Baechle (ralf@linux-mips.org) 7 + * Copyright (c) 1999, 2000 Silicon Graphics, Inc. 8 + */ 9 + #include <linux/bitops.h> 10 + #include <linux/irqflags.h> 11 + #include <linux/export.h> 12 + 13 + 14 + /** 15 + * __mips_set_bit - Atomically set a bit in memory. This is called by 16 + * set_bit() if it cannot find a faster solution. 17 + * @nr: the bit to set 18 + * @addr: the address to start counting from 19 + */ 20 + void __mips_set_bit(unsigned long nr, volatile unsigned long *addr) 21 + { 22 + volatile unsigned long *a = addr; 23 + unsigned bit = nr & SZLONG_MASK; 24 + unsigned long mask; 25 + unsigned long flags; 26 + 27 + a += nr >> SZLONG_LOG; 28 + mask = 1UL << bit; 29 + raw_local_irq_save(flags); 30 + *a |= mask; 31 + raw_local_irq_restore(flags); 32 + } 33 + EXPORT_SYMBOL(__mips_set_bit); 34 + 35 + 36 + /** 37 + * __mips_clear_bit - Clears a bit in memory. This is called by clear_bit() if 38 + * it cannot find a faster solution. 39 + * @nr: Bit to clear 40 + * @addr: Address to start counting from 41 + */ 42 + void __mips_clear_bit(unsigned long nr, volatile unsigned long *addr) 43 + { 44 + volatile unsigned long *a = addr; 45 + unsigned bit = nr & SZLONG_MASK; 46 + unsigned long mask; 47 + unsigned long flags; 48 + 49 + a += nr >> SZLONG_LOG; 50 + mask = 1UL << bit; 51 + raw_local_irq_save(flags); 52 + *a &= ~mask; 53 + raw_local_irq_restore(flags); 54 + } 55 + EXPORT_SYMBOL(__mips_clear_bit); 56 + 57 + 58 + /** 59 + * __mips_change_bit - Toggle a bit in memory. This is called by change_bit() 60 + * if it cannot find a faster solution. 61 + * @nr: Bit to change 62 + * @addr: Address to start counting from 63 + */ 64 + void __mips_change_bit(unsigned long nr, volatile unsigned long *addr) 65 + { 66 + volatile unsigned long *a = addr; 67 + unsigned bit = nr & SZLONG_MASK; 68 + unsigned long mask; 69 + unsigned long flags; 70 + 71 + a += nr >> SZLONG_LOG; 72 + mask = 1UL << bit; 73 + raw_local_irq_save(flags); 74 + *a ^= mask; 75 + raw_local_irq_restore(flags); 76 + } 77 + EXPORT_SYMBOL(__mips_change_bit); 78 + 79 + 80 + /** 81 + * __mips_test_and_set_bit - Set a bit and return its old value. This is 82 + * called by test_and_set_bit() if it cannot find a faster solution. 83 + * @nr: Bit to set 84 + * @addr: Address to count from 85 + */ 86 + int __mips_test_and_set_bit(unsigned long nr, 87 + volatile unsigned long *addr) 88 + { 89 + volatile unsigned long *a = addr; 90 + unsigned bit = nr & SZLONG_MASK; 91 + unsigned long mask; 92 + unsigned long flags; 93 + unsigned long res; 94 + 95 + a += nr >> SZLONG_LOG; 96 + mask = 1UL << bit; 97 + raw_local_irq_save(flags); 98 + res = (mask & *a); 99 + *a |= mask; 100 + raw_local_irq_restore(flags); 101 + return res; 102 + } 103 + EXPORT_SYMBOL(__mips_test_and_set_bit); 104 + 105 + 106 + /** 107 + * __mips_test_and_set_bit_lock - Set a bit and return its old value. This is 108 + * called by test_and_set_bit_lock() if it cannot find a faster solution. 109 + * @nr: Bit to set 110 + * @addr: Address to count from 111 + */ 112 + int __mips_test_and_set_bit_lock(unsigned long nr, 113 + volatile unsigned long *addr) 114 + { 115 + volatile unsigned long *a = addr; 116 + unsigned bit = nr & SZLONG_MASK; 117 + unsigned long mask; 118 + unsigned long flags; 119 + unsigned long res; 120 + 121 + a += nr >> SZLONG_LOG; 122 + mask = 1UL << bit; 123 + raw_local_irq_save(flags); 124 + res = (mask & *a); 125 + *a |= mask; 126 + raw_local_irq_restore(flags); 127 + return res; 128 + } 129 + EXPORT_SYMBOL(__mips_test_and_set_bit_lock); 130 + 131 + 132 + /** 133 + * __mips_test_and_clear_bit - Clear a bit and return its old value. This is 134 + * called by test_and_clear_bit() if it cannot find a faster solution. 135 + * @nr: Bit to clear 136 + * @addr: Address to count from 137 + */ 138 + int __mips_test_and_clear_bit(unsigned long nr, volatile unsigned long *addr) 139 + { 140 + volatile unsigned long *a = addr; 141 + unsigned bit = nr & SZLONG_MASK; 142 + unsigned long mask; 143 + unsigned long flags; 144 + unsigned long res; 145 + 146 + a += nr >> SZLONG_LOG; 147 + mask = 1UL << bit; 148 + raw_local_irq_save(flags); 149 + res = (mask & *a); 150 + *a &= ~mask; 151 + raw_local_irq_restore(flags); 152 + return res; 153 + } 154 + EXPORT_SYMBOL(__mips_test_and_clear_bit); 155 + 156 + 157 + /** 158 + * __mips_test_and_change_bit - Change a bit and return its old value. This is 159 + * called by test_and_change_bit() if it cannot find a faster solution. 160 + * @nr: Bit to change 161 + * @addr: Address to count from 162 + */ 163 + int __mips_test_and_change_bit(unsigned long nr, volatile unsigned long *addr) 164 + { 165 + volatile unsigned long *a = addr; 166 + unsigned bit = nr & SZLONG_MASK; 167 + unsigned long mask; 168 + unsigned long flags; 169 + unsigned long res; 170 + 171 + a += nr >> SZLONG_LOG; 172 + mask = 1UL << bit; 173 + raw_local_irq_save(flags); 174 + res = (mask & *a); 175 + *a ^= mask; 176 + raw_local_irq_restore(flags); 177 + return res; 178 + } 179 + EXPORT_SYMBOL(__mips_test_and_change_bit);
+176
arch/mips/lib/mips-atomic.c
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (C) 1994, 95, 96, 97, 98, 99, 2003 by Ralf Baechle 7 + * Copyright (C) 1996 by Paul M. Antoine 8 + * Copyright (C) 1999 Silicon Graphics 9 + * Copyright (C) 2000 MIPS Technologies, Inc. 10 + */ 11 + #include <asm/irqflags.h> 12 + #include <asm/hazards.h> 13 + #include <linux/compiler.h> 14 + #include <linux/preempt.h> 15 + #include <linux/export.h> 16 + 17 + #if !defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_MIPS_MT_SMTC) 18 + 19 + /* 20 + * For cli() we have to insert nops to make sure that the new value 21 + * has actually arrived in the status register before the end of this 22 + * macro. 23 + * R4000/R4400 need three nops, the R4600 two nops and the R10000 needs 24 + * no nops at all. 25 + */ 26 + /* 27 + * For TX49, operating only IE bit is not enough. 28 + * 29 + * If mfc0 $12 follows store and the mfc0 is last instruction of a 30 + * page and fetching the next instruction causes TLB miss, the result 31 + * of the mfc0 might wrongly contain EXL bit. 32 + * 33 + * ERT-TX49H2-027, ERT-TX49H3-012, ERT-TX49HL3-006, ERT-TX49H4-008 34 + * 35 + * Workaround: mask EXL bit of the result or place a nop before mfc0. 36 + */ 37 + __asm__( 38 + " .macro arch_local_irq_disable\n" 39 + " .set push \n" 40 + " .set noat \n" 41 + #ifdef CONFIG_MIPS_MT_SMTC 42 + " mfc0 $1, $2, 1 \n" 43 + " ori $1, 0x400 \n" 44 + " .set noreorder \n" 45 + " mtc0 $1, $2, 1 \n" 46 + #elif defined(CONFIG_CPU_MIPSR2) 47 + /* see irqflags.h for inline function */ 48 + #else 49 + " mfc0 $1,$12 \n" 50 + " ori $1,0x1f \n" 51 + " xori $1,0x1f \n" 52 + " .set noreorder \n" 53 + " mtc0 $1,$12 \n" 54 + #endif 55 + " irq_disable_hazard \n" 56 + " .set pop \n" 57 + " .endm \n"); 58 + 59 + void arch_local_irq_disable(void) 60 + { 61 + preempt_disable(); 62 + __asm__ __volatile__( 63 + "arch_local_irq_disable" 64 + : /* no outputs */ 65 + : /* no inputs */ 66 + : "memory"); 67 + preempt_enable(); 68 + } 69 + EXPORT_SYMBOL(arch_local_irq_disable); 70 + 71 + 72 + __asm__( 73 + " .macro arch_local_irq_save result \n" 74 + " .set push \n" 75 + " .set reorder \n" 76 + " .set noat \n" 77 + #ifdef CONFIG_MIPS_MT_SMTC 78 + " mfc0 \\result, $2, 1 \n" 79 + " ori $1, \\result, 0x400 \n" 80 + " .set noreorder \n" 81 + " mtc0 $1, $2, 1 \n" 82 + " andi \\result, \\result, 0x400 \n" 83 + #elif defined(CONFIG_CPU_MIPSR2) 84 + /* see irqflags.h for inline function */ 85 + #else 86 + " mfc0 \\result, $12 \n" 87 + " ori $1, \\result, 0x1f \n" 88 + " xori $1, 0x1f \n" 89 + " .set noreorder \n" 90 + " mtc0 $1, $12 \n" 91 + #endif 92 + " irq_disable_hazard \n" 93 + " .set pop \n" 94 + " .endm \n"); 95 + 96 + unsigned long arch_local_irq_save(void) 97 + { 98 + unsigned long flags; 99 + preempt_disable(); 100 + asm volatile("arch_local_irq_save\t%0" 101 + : "=r" (flags) 102 + : /* no inputs */ 103 + : "memory"); 104 + preempt_enable(); 105 + return flags; 106 + } 107 + EXPORT_SYMBOL(arch_local_irq_save); 108 + 109 + 110 + __asm__( 111 + " .macro arch_local_irq_restore flags \n" 112 + " .set push \n" 113 + " .set noreorder \n" 114 + " .set noat \n" 115 + #ifdef CONFIG_MIPS_MT_SMTC 116 + "mfc0 $1, $2, 1 \n" 117 + "andi \\flags, 0x400 \n" 118 +
"ori $1, 0x400 \n" 119 + "xori $1, 0x400 \n" 120 + "or \\flags, $1 \n" 121 + "mtc0 \\flags, $2, 1 \n" 122 + #elif defined(CONFIG_CPU_MIPSR2) && defined(CONFIG_IRQ_CPU) 123 + /* see irqflags.h for inline function */ 124 + #elif defined(CONFIG_CPU_MIPSR2) 125 + /* see irqflags.h for inline function */ 126 + #else 127 + " mfc0 $1, $12 \n" 128 + " andi \\flags, 1 \n" 129 + " ori $1, 0x1f \n" 130 + " xori $1, 0x1f \n" 131 + " or \\flags, $1 \n" 132 + " mtc0 \\flags, $12 \n" 133 + #endif 134 + " irq_disable_hazard \n" 135 + " .set pop \n" 136 + " .endm \n"); 137 + 138 + void arch_local_irq_restore(unsigned long flags) 139 + { 140 + unsigned long __tmp1; 141 + 142 + #ifdef CONFIG_MIPS_MT_SMTC 143 + /* 144 + * SMTC kernel needs to do a software replay of queued 145 + * IPIs, at the cost of branch and call overhead on each 146 + * local_irq_restore() 147 + */ 148 + if (unlikely(!(flags & 0x0400))) 149 + smtc_ipi_replay(); 150 + #endif 151 + preempt_disable(); 152 + __asm__ __volatile__( 153 + "arch_local_irq_restore\t%0" 154 + : "=r" (__tmp1) 155 + : "0" (flags) 156 + : "memory"); 157 + preempt_enable(); 158 + } 159 + EXPORT_SYMBOL(arch_local_irq_restore); 160 + 161 + 162 + void __arch_local_irq_restore(unsigned long flags) 163 + { 164 + unsigned long __tmp1; 165 + 166 + preempt_disable(); 167 + __asm__ __volatile__( 168 + "arch_local_irq_restore\t%0" 169 + : "=r" (__tmp1) 170 + : "0" (flags) 171 + : "memory"); 172 + preempt_enable(); 173 + } 174 + EXPORT_SYMBOL(__arch_local_irq_restore); 175 + 176 + #endif /* !defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_MIPS_MT_SMTC) */
+2 -1
arch/mips/mti-malta/malta-platform.c
··· 29 29 #include <linux/mtd/partitions.h> 30 30 #include <linux/mtd/physmap.h> 31 31 #include <linux/platform_device.h> 32 + #include <asm/mips-boards/maltaint.h> 32 33 #include <mtd/mtd-abi.h> 33 34 34 35 #define SMC_PORT(base, int) \ ··· 49 48 SMC_PORT(0x2F8, 3), 50 49 { 51 50 .mapbase = 0x1f000900, /* The CBUS UART */ 52 - .irq = MIPS_CPU_IRQ_BASE + 2, 51 + .irq = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2, 53 52 .uartclk = 3686400, /* Twice the usual clk! */ 54 53 .iotype = UPIO_MEM32, 55 54 .flags = CBUS_UART_FLAGS,
+1
arch/s390/Kconfig
··· 96 96 select HAVE_MEMBLOCK_NODE_MAP 97 97 select HAVE_CMPXCHG_LOCAL 98 98 select HAVE_CMPXCHG_DOUBLE 99 + select HAVE_ALIGNED_STRUCT_PAGE if SLUB 99 100 select HAVE_VIRT_CPU_ACCOUNTING 100 101 select VIRT_CPU_ACCOUNTING 101 102 select ARCH_DISCARD_MEMBLOCK
+2
arch/s390/include/asm/cio.h
··· 9 9 10 10 #define LPM_ANYPATH 0xff 11 11 #define __MAX_CSSID 0 12 + #define __MAX_SUBCHANNEL 65535 13 + #define __MAX_SSID 3 12 14 13 15 #include <asm/scsw.h> 14 16
+1 -1
arch/s390/include/asm/compat.h
··· 20 20 #define PSW32_MASK_CC 0x00003000UL 21 21 #define PSW32_MASK_PM 0x00000f00UL 22 22 23 - #define PSW32_MASK_USER 0x00003F00UL 23 + #define PSW32_MASK_USER 0x0000FF00UL 24 24 25 25 #define PSW32_ADDR_AMODE 0x80000000UL 26 26 #define PSW32_ADDR_INSN 0x7FFFFFFFUL
+22 -13
arch/s390/include/asm/pgtable.h
··· 506 506 507 507 static inline int pmd_present(pmd_t pmd) 508 508 { 509 - return (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN) != 0UL; 509 + unsigned long mask = _SEGMENT_ENTRY_INV | _SEGMENT_ENTRY_RO; 510 + return (pmd_val(pmd) & mask) == _HPAGE_TYPE_NONE || 511 + !(pmd_val(pmd) & _SEGMENT_ENTRY_INV); 510 512 } 511 513 512 514 static inline int pmd_none(pmd_t pmd) 513 515 { 514 - return (pmd_val(pmd) & _SEGMENT_ENTRY_INV) != 0UL; 516 + return (pmd_val(pmd) & _SEGMENT_ENTRY_INV) && 517 + !(pmd_val(pmd) & _SEGMENT_ENTRY_RO); 515 518 } 516 519 517 520 static inline int pmd_large(pmd_t pmd) ··· 1226 1223 } 1227 1224 1228 1225 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1226 + 1227 + #define SEGMENT_NONE __pgprot(_HPAGE_TYPE_NONE) 1228 + #define SEGMENT_RO __pgprot(_HPAGE_TYPE_RO) 1229 + #define SEGMENT_RW __pgprot(_HPAGE_TYPE_RW) 1230 + 1229 1231 #define __HAVE_ARCH_PGTABLE_DEPOSIT 1230 1232 extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable); 1231 1233 ··· 1250 1242 1251 1243 static inline unsigned long massage_pgprot_pmd(pgprot_t pgprot) 1252 1244 { 1253 - unsigned long pgprot_pmd = 0; 1254 - 1255 - if (pgprot_val(pgprot) & _PAGE_INVALID) { 1256 - if (pgprot_val(pgprot) & _PAGE_SWT) 1257 - pgprot_pmd |= _HPAGE_TYPE_NONE; 1258 - pgprot_pmd |= _SEGMENT_ENTRY_INV; 1259 - } 1260 - if (pgprot_val(pgprot) & _PAGE_RO) 1261 - pgprot_pmd |= _SEGMENT_ENTRY_RO; 1262 - return pgprot_pmd; 1245 + /* 1246 + * pgprot is PAGE_NONE, PAGE_RO, or PAGE_RW (see __Pxxx / __Sxxx) 1247 + * Convert to segment table entry format. 
1248 + */ 1249 + if (pgprot_val(pgprot) == pgprot_val(PAGE_NONE)) 1250 + return pgprot_val(SEGMENT_NONE); 1251 + if (pgprot_val(pgprot) == pgprot_val(PAGE_RO)) 1252 + return pgprot_val(SEGMENT_RO); 1253 + return pgprot_val(SEGMENT_RW); 1263 1254 } 1264 1255 1265 1256 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) ··· 1276 1269 1277 1270 static inline pmd_t pmd_mkwrite(pmd_t pmd) 1278 1271 { 1279 - pmd_val(pmd) &= ~_SEGMENT_ENTRY_RO; 1272 + /* Do not clobber _HPAGE_TYPE_NONE pages! */ 1273 + if (!(pmd_val(pmd) & _SEGMENT_ENTRY_INV)) 1274 + pmd_val(pmd) &= ~_SEGMENT_ENTRY_RO; 1280 1275 return pmd; 1281 1276 } 1282 1277
+3
arch/s390/include/asm/topology.h
··· 8 8 9 9 #ifdef CONFIG_SCHED_BOOK 10 10 11 + extern unsigned char cpu_socket_id[NR_CPUS]; 12 + #define topology_physical_package_id(cpu) (cpu_socket_id[cpu]) 13 + 11 14 extern unsigned char cpu_core_id[NR_CPUS]; 12 15 extern cpumask_t cpu_core_map[NR_CPUS]; 13 16
+2 -2
arch/s390/include/uapi/asm/ptrace.h
··· 239 239 #define PSW_MASK_EA 0x00000000UL 240 240 #define PSW_MASK_BA 0x00000000UL 241 241 242 - #define PSW_MASK_USER 0x00003F00UL 242 + #define PSW_MASK_USER 0x0000FF00UL 243 243 244 244 #define PSW_ADDR_AMODE 0x80000000UL 245 245 #define PSW_ADDR_INSN 0x7FFFFFFFUL ··· 269 269 #define PSW_MASK_EA 0x0000000100000000UL 270 270 #define PSW_MASK_BA 0x0000000080000000UL 271 271 272 - #define PSW_MASK_USER 0x00003F8180000000UL 272 + #define PSW_MASK_USER 0x0000FF8180000000UL 273 273 274 274 #define PSW_ADDR_AMODE 0x0000000000000000UL 275 275 #define PSW_ADDR_INSN 0xFFFFFFFFFFFFFFFFUL
+12 -2
arch/s390/kernel/compat_signal.c
··· 309 309 regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) | 310 310 (__u64)(regs32.psw.mask & PSW32_MASK_USER) << 32 | 311 311 (__u64)(regs32.psw.addr & PSW32_ADDR_AMODE); 312 + /* Check for invalid user address space control. */ 313 + if ((regs->psw.mask & PSW_MASK_ASC) >= (psw_kernel_bits & PSW_MASK_ASC)) 314 + regs->psw.mask = (psw_user_bits & PSW_MASK_ASC) | 315 + (regs->psw.mask & ~PSW_MASK_ASC); 312 316 regs->psw.addr = (__u64)(regs32.psw.addr & PSW32_ADDR_INSN); 313 317 for (i = 0; i < NUM_GPRS; i++) 314 318 regs->gprs[i] = (__u64) regs32.gprs[i]; ··· 485 481 486 482 /* Set up registers for signal handler */ 487 483 regs->gprs[15] = (__force __u64) frame; 488 - regs->psw.mask |= PSW_MASK_BA; /* force amode 31 */ 484 + /* Force 31 bit amode and default user address space control. */ 485 + regs->psw.mask = PSW_MASK_BA | 486 + (psw_user_bits & PSW_MASK_ASC) | 487 + (regs->psw.mask & ~PSW_MASK_ASC); 489 488 regs->psw.addr = (__force __u64) ka->sa.sa_handler; 490 489 491 490 regs->gprs[2] = map_signal(sig); ··· 556 549 557 550 /* Set up registers for signal handler */ 558 551 regs->gprs[15] = (__force __u64) frame; 559 - regs->psw.mask |= PSW_MASK_BA; /* force amode 31 */ 552 + /* Force 31 bit amode and default user address space control. */ 553 + regs->psw.mask = PSW_MASK_BA | 554 + (psw_user_bits & PSW_MASK_ASC) | 555 + (regs->psw.mask & ~PSW_MASK_ASC); 560 556 regs->psw.addr = (__u64) ka->sa.sa_handler; 561 557 562 558 regs->gprs[2] = map_signal(sig);
+7 -1
arch/s390/kernel/sclp.S
··· 44 44 #endif 45 45 mvc .LoldpswS1-.LbaseS1(16,%r13),0(%r8) 46 46 mvc 0(16,%r8),0(%r9) 47 + #ifdef CONFIG_64BIT 48 + epsw %r6,%r7 # set current addressing mode 49 + nill %r6,0x1 # in new psw (31 or 64 bit mode) 50 + nilh %r7,0x8000 51 + stm %r6,%r7,0(%r8) 52 + #endif 47 53 lhi %r6,0x0200 # cr mask for ext int (cr0.54) 48 54 ltr %r2,%r2 49 55 jz .LsetctS1 ··· 93 87 .long 0x00080000, 0x80000000+.LwaitS1 # PSW to handle ext int 94 88 #ifdef CONFIG_64BIT 95 89 .LextpswS1_64: 96 - .quad 0x0000000180000000, .LwaitS1 # PSW to handle ext int, 64 bit 90 + .quad 0, .LwaitS1 # PSW to handle ext int, 64 bit 97 91 #endif 98 92 .LwaitpswS1: 99 93 .long 0x010a0000, 0x00000000+.LloopS1 # PSW to wait for ext int
+12 -2
arch/s390/kernel/signal.c
··· 136 136 /* Use regs->psw.mask instead of psw_user_bits to preserve PER bit. */ 137 137 regs->psw.mask = (regs->psw.mask & ~PSW_MASK_USER) | 138 138 (user_sregs.regs.psw.mask & PSW_MASK_USER); 139 + /* Check for invalid user address space control. */ 140 + if ((regs->psw.mask & PSW_MASK_ASC) >= (psw_kernel_bits & PSW_MASK_ASC)) 141 + regs->psw.mask = (psw_user_bits & PSW_MASK_ASC) | 142 + (regs->psw.mask & ~PSW_MASK_ASC); 139 143 /* Check for invalid amode */ 140 144 if (regs->psw.mask & PSW_MASK_EA) 141 145 regs->psw.mask |= PSW_MASK_BA; ··· 277 273 278 274 /* Set up registers for signal handler */ 279 275 regs->gprs[15] = (unsigned long) frame; 280 - regs->psw.mask |= PSW_MASK_EA | PSW_MASK_BA; /* 64 bit amode */ 276 + /* Force default amode and default user address space control. */ 277 + regs->psw.mask = PSW_MASK_EA | PSW_MASK_BA | 278 + (psw_user_bits & PSW_MASK_ASC) | 279 + (regs->psw.mask & ~PSW_MASK_ASC); 281 280 regs->psw.addr = (unsigned long) ka->sa.sa_handler | PSW_ADDR_AMODE; 282 281 283 282 regs->gprs[2] = map_signal(sig); ··· 353 346 354 347 /* Set up registers for signal handler */ 355 348 regs->gprs[15] = (unsigned long) frame; 356 - regs->psw.mask |= PSW_MASK_EA | PSW_MASK_BA; /* 64 bit amode */ 349 + /* Force default amode and default user address space control. */ 350 + regs->psw.mask = PSW_MASK_EA | PSW_MASK_BA | 351 + (psw_user_bits & PSW_MASK_ASC) | 352 + (regs->psw.mask & ~PSW_MASK_ASC); 357 353 regs->psw.addr = (unsigned long) ka->sa.sa_handler | PSW_ADDR_AMODE; 358 354 359 355 regs->gprs[2] = map_signal(sig);
+4 -2
arch/s390/kernel/topology.c
··· 40 40 static struct mask_info core_info; 41 41 cpumask_t cpu_core_map[NR_CPUS]; 42 42 unsigned char cpu_core_id[NR_CPUS]; 43 + unsigned char cpu_socket_id[NR_CPUS]; 43 44 44 45 static struct mask_info book_info; 45 46 cpumask_t cpu_book_map[NR_CPUS]; ··· 84 83 cpumask_set_cpu(lcpu, &book->mask); 85 84 cpu_book_id[lcpu] = book->id; 86 85 cpumask_set_cpu(lcpu, &core->mask); 86 + cpu_core_id[lcpu] = rcpu; 87 87 if (one_core_per_cpu) { 88 - cpu_core_id[lcpu] = rcpu; 88 + cpu_socket_id[lcpu] = rcpu; 89 89 core = core->next; 90 90 } else { 91 - cpu_core_id[lcpu] = core->id; 91 + cpu_socket_id[lcpu] = core->id; 92 92 } 93 93 smp_cpu_set_polarization(lcpu, tl_cpu->pp); 94 94 }
+1 -1
arch/s390/lib/uaccess_pt.c
··· 39 39 pmd = pmd_offset(pud, addr); 40 40 if (pmd_none(*pmd)) 41 41 return -0x10UL; 42 - if (pmd_huge(*pmd)) { 42 + if (pmd_large(*pmd)) { 43 43 if (write && (pmd_val(*pmd) & _SEGMENT_ENTRY_RO)) 44 44 return -0x04UL; 45 45 return (pmd_val(*pmd) & HPAGE_MASK) + (addr & ~HPAGE_MASK);
+3 -4
arch/s390/mm/gup.c
··· 126 126 */ 127 127 if (pmd_none(pmd) || pmd_trans_splitting(pmd)) 128 128 return 0; 129 - if (unlikely(pmd_huge(pmd))) { 129 + if (unlikely(pmd_large(pmd))) { 130 130 if (!gup_huge_pmd(pmdp, pmd, addr, next, 131 131 write, pages, nr)) 132 132 return 0; ··· 180 180 addr = start; 181 181 len = (unsigned long) nr_pages << PAGE_SHIFT; 182 182 end = start + len; 183 - if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ, 184 - (void __user *)start, len))) 183 + if ((end < start) || (end > TASK_SIZE)) 185 184 return 0; 186 185 187 186 local_irq_save(flags); ··· 228 229 addr = start; 229 230 len = (unsigned long) nr_pages << PAGE_SHIFT; 230 231 end = start + len; 231 - if (end < start) 232 + if ((end < start) || (end > TASK_SIZE)) 232 233 goto slow_irqon; 233 234 234 235 /*
+1
arch/sparc/Kconfig
··· 20 20 select HAVE_ARCH_TRACEHOOK 21 21 select SYSCTL_EXCEPTION_TRACE 22 22 select ARCH_WANT_OPTIONAL_GPIOLIB 23 + select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE 23 24 select RTC_CLASS 24 25 select RTC_DRV_M48T59 25 26 select HAVE_IRQ_WORK
+8 -8
arch/sparc/crypto/Makefile
··· 13 13 14 14 obj-$(CONFIG_CRYPTO_CRC32C_SPARC64) += crc32c-sparc64.o 15 15 16 - sha1-sparc64-y := sha1_asm.o sha1_glue.o crop_devid.o 17 - sha256-sparc64-y := sha256_asm.o sha256_glue.o crop_devid.o 18 - sha512-sparc64-y := sha512_asm.o sha512_glue.o crop_devid.o 19 - md5-sparc64-y := md5_asm.o md5_glue.o crop_devid.o 16 + sha1-sparc64-y := sha1_asm.o sha1_glue.o 17 + sha256-sparc64-y := sha256_asm.o sha256_glue.o 18 + sha512-sparc64-y := sha512_asm.o sha512_glue.o 19 + md5-sparc64-y := md5_asm.o md5_glue.o 20 20 21 - aes-sparc64-y := aes_asm.o aes_glue.o crop_devid.o 22 - des-sparc64-y := des_asm.o des_glue.o crop_devid.o 23 - camellia-sparc64-y := camellia_asm.o camellia_glue.o crop_devid.o 21 + aes-sparc64-y := aes_asm.o aes_glue.o 22 + des-sparc64-y := des_asm.o des_glue.o 23 + camellia-sparc64-y := camellia_asm.o camellia_glue.o 24 24 25 - crc32c-sparc64-y := crc32c_asm.o crc32c_glue.o crop_devid.o 25 + crc32c-sparc64-y := crc32c_asm.o crc32c_glue.o
+2
arch/sparc/crypto/aes_glue.c
··· 475 475 MODULE_DESCRIPTION("AES Secure Hash Algorithm, sparc64 aes opcode accelerated"); 476 476 477 477 MODULE_ALIAS("aes"); 478 + 479 + #include "crop_devid.c"
+2
arch/sparc/crypto/camellia_glue.c
··· 320 320 MODULE_DESCRIPTION("Camellia Cipher Algorithm, sparc64 camellia opcode accelerated"); 321 321 322 322 MODULE_ALIAS("aes"); 323 + 324 + #include "crop_devid.c"
+2
arch/sparc/crypto/crc32c_glue.c
··· 177 177 MODULE_DESCRIPTION("CRC32c (Castagnoli), sparc64 crc32c opcode accelerated"); 178 178 179 179 MODULE_ALIAS("crc32c"); 180 + 181 + #include "crop_devid.c"
+2
arch/sparc/crypto/des_glue.c
··· 527 527 MODULE_DESCRIPTION("DES & Triple DES EDE Cipher Algorithms, sparc64 des opcode accelerated"); 528 528 529 529 MODULE_ALIAS("des"); 530 + 531 + #include "crop_devid.c"
+2
arch/sparc/crypto/md5_glue.c
··· 186 186 MODULE_DESCRIPTION("MD5 Secure Hash Algorithm, sparc64 md5 opcode accelerated"); 187 187 188 188 MODULE_ALIAS("md5"); 189 + 190 + #include "crop_devid.c"
+2
arch/sparc/crypto/sha1_glue.c
··· 181 181 MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, sparc64 sha1 opcode accelerated"); 182 182 183 183 MODULE_ALIAS("sha1"); 184 + 185 + #include "crop_devid.c"
+2
arch/sparc/crypto/sha256_glue.c
··· 239 239 240 240 MODULE_ALIAS("sha224"); 241 241 MODULE_ALIAS("sha256"); 242 + 243 + #include "crop_devid.c"
+2
arch/sparc/crypto/sha512_glue.c
··· 224 224 225 225 MODULE_ALIAS("sha384"); 226 226 MODULE_ALIAS("sha512"); 227 + 228 + #include "crop_devid.c"
+3 -1
arch/sparc/include/asm/atomic_64.h
··· 1 1 /* atomic.h: Thankfully the V9 is at least reasonable for this 2 2 * stuff. 3 3 * 4 - * Copyright (C) 1996, 1997, 2000 David S. Miller (davem@redhat.com) 4 + * Copyright (C) 1996, 1997, 2000, 2012 David S. Miller (davem@redhat.com) 5 5 */ 6 6 7 7 #ifndef __ARCH_SPARC64_ATOMIC__ ··· 105 105 } 106 106 107 107 #define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) 108 + 109 + extern long atomic64_dec_if_positive(atomic64_t *v); 108 110 109 111 /* Atomic operations are already serializing */ 110 112 #define smp_mb__before_atomic_dec() barrier()
+59 -10
arch/sparc/include/asm/backoff.h
··· 1 1 #ifndef _SPARC64_BACKOFF_H 2 2 #define _SPARC64_BACKOFF_H 3 3 4 + /* The macros in this file implement an exponential backoff facility 5 + * for atomic operations. 6 + * 7 + * When multiple threads compete on an atomic operation, it is 8 + * possible for one thread to be continually denied a successful 9 + * completion of the compare-and-swap instruction. Heavily 10 + * threaded cpu implementations like Niagara can compound this 11 + * problem even further. 12 + * 13 + * When an atomic operation fails and needs to be retried, we spin a 14 + * certain number of times. At each subsequent failure of the same 15 + * operation we double the spin count, realizing an exponential 16 + * backoff. 17 + * 18 + * When we spin, we try to use an operation that will cause the 19 + * current cpu strand to block, and therefore make the core fully 20 + * available to any other other runnable strands. There are two 21 + * options, based upon cpu capabilities. 22 + * 23 + * On all cpus prior to SPARC-T4 we do three dummy reads of the 24 + * condition code register. Each read blocks the strand for something 25 + * between 40 and 50 cpu cycles. 26 + * 27 + * For SPARC-T4 and later we have a special "pause" instruction 28 + * available. This is implemented using writes to register %asr27. 29 + * The cpu will block the number of cycles written into the register, 30 + * unless a disrupting trap happens first. SPARC-T4 specifically 31 + * implements pause with a granularity of 8 cycles. Each strand has 32 + * an internal pause counter which decrements every 8 cycles. So the 33 + * chip shifts the %asr27 value down by 3 bits, and writes the result 34 + * into the pause counter. If a value smaller than 8 is written, the 35 + * chip blocks for 1 cycle. 36 + * 37 + * To achieve the same amount of backoff as the three %ccr reads give 38 + * on earlier chips, we shift the backoff value up by 7 bits. 
(Three 39 + * %ccr reads block for about 128 cycles, 1 << 7 == 128) We write the 40 + * whole amount we want to block into the pause register, rather than 41 + * loop writing 128 each time. 42 + */ 43 + 4 44 #define BACKOFF_LIMIT (4 * 1024) 5 45 6 46 #ifdef CONFIG_SMP ··· 51 11 #define BACKOFF_LABEL(spin_label, continue_label) \ 52 12 spin_label 53 13 54 - #define BACKOFF_SPIN(reg, tmp, label) \ 55 - mov reg, tmp; \ 56 - 88: brnz,pt tmp, 88b; \ 57 - sub tmp, 1, tmp; \ 58 - set BACKOFF_LIMIT, tmp; \ 59 - cmp reg, tmp; \ 60 - bg,pn %xcc, label; \ 61 - nop; \ 62 - ba,pt %xcc, label; \ 63 - sllx reg, 1, reg; 14 + #define BACKOFF_SPIN(reg, tmp, label) \ 15 + mov reg, tmp; \ 16 + 88: rd %ccr, %g0; \ 17 + rd %ccr, %g0; \ 18 + rd %ccr, %g0; \ 19 + .section .pause_3insn_patch,"ax";\ 20 + .word 88b; \ 21 + sllx tmp, 7, tmp; \ 22 + wr tmp, 0, %asr27; \ 23 + clr tmp; \ 24 + .previous; \ 25 + brnz,pt tmp, 88b; \ 26 + sub tmp, 1, tmp; \ 27 + set BACKOFF_LIMIT, tmp; \ 28 + cmp reg, tmp; \ 29 + bg,pn %xcc, label; \ 30 + nop; \ 31 + ba,pt %xcc, label; \ 32 + sllx reg, 1, reg; 64 33 65 34 #else 66 35
+3 -2
arch/sparc/include/asm/compat.h
··· 232 232 struct pt_regs *regs = current_thread_info()->kregs; 233 233 unsigned long usp = regs->u_regs[UREG_I6]; 234 234 235 - if (!(test_thread_flag(TIF_32BIT))) 235 + if (test_thread_64bit_stack(usp)) 236 236 usp += STACK_BIAS; 237 - else 237 + 238 + if (test_thread_flag(TIF_32BIT)) 238 239 usp &= 0xffffffffUL; 239 240 240 241 usp -= len;
+16 -1
arch/sparc/include/asm/processor_64.h
··· 196 196 #define KSTK_EIP(tsk) (task_pt_regs(tsk)->tpc) 197 197 #define KSTK_ESP(tsk) (task_pt_regs(tsk)->u_regs[UREG_FP]) 198 198 199 - #define cpu_relax() barrier() 199 + /* Please see the commentary in asm/backoff.h for a description of 200 + * what these instructions are doing and how they have been chosen. 201 + * To make a long story short, we are trying to yield the current cpu 202 + * strand during busy loops. 203 + */ 204 + #define cpu_relax() asm volatile("\n99:\n\t" \ 205 + "rd %%ccr, %%g0\n\t" \ 206 + "rd %%ccr, %%g0\n\t" \ 207 + "rd %%ccr, %%g0\n\t" \ 208 + ".section .pause_3insn_patch,\"ax\"\n\t"\ 209 + ".word 99b\n\t" \ 210 + "wr %%g0, 128, %%asr27\n\t" \ 211 + "nop\n\t" \ 212 + "nop\n\t" \ 213 + ".previous" \ 214 + ::: "memory") 200 215 201 216 /* Prefetch support. This is tuned for UltraSPARC-III and later. 202 217 * UltraSPARC-I will treat these as nops, and UltraSPARC-II has
+5
arch/sparc/include/asm/prom.h
··· 63 63 extern void irq_trans_init(struct device_node *dp); 64 64 extern char *build_path_component(struct device_node *dp); 65 65 66 + /* SPARC has a local implementation */ 67 + extern int of_address_to_resource(struct device_node *dev, int index, 68 + struct resource *r); 69 + #define of_address_to_resource of_address_to_resource 70 + 66 71 #endif /* __KERNEL__ */ 67 72 #endif /* _SPARC_PROM_H */
+5
arch/sparc/include/asm/thread_info_64.h
··· 259 259 260 260 #define tsk_is_polling(t) test_tsk_thread_flag(t, TIF_POLLING_NRFLAG) 261 261 262 + #define thread32_stack_is_64bit(__SP) (((__SP) & 0x1) != 0) 263 + #define test_thread_64bit_stack(__SP) \ 264 + ((test_thread_flag(TIF_32BIT) && !thread32_stack_is_64bit(__SP)) ? \ 265 + false : true) 266 + 262 267 #endif /* !__ASSEMBLY__ */ 263 268 264 269 #endif /* __KERNEL__ */
+16 -8
arch/sparc/include/asm/ttable.h
··· 372 372 373 373 /* Normal 32bit spill */ 374 374 #define SPILL_2_GENERIC(ASI) \ 375 - srl %sp, 0, %sp; \ 375 + and %sp, 1, %g3; \ 376 + brnz,pn %g3, (. - (128 + 4)); \ 377 + srl %sp, 0, %sp; \ 376 378 stwa %l0, [%sp + %g0] ASI; \ 377 379 mov 0x04, %g3; \ 378 380 stwa %l1, [%sp + %g3] ASI; \ ··· 400 398 stwa %i6, [%g1 + %g0] ASI; \ 401 399 stwa %i7, [%g1 + %g3] ASI; \ 402 400 saved; \ 403 - retry; nop; nop; \ 401 + retry; \ 404 402 b,a,pt %xcc, spill_fixup_dax; \ 405 403 b,a,pt %xcc, spill_fixup_mna; \ 406 404 b,a,pt %xcc, spill_fixup; 407 405 408 406 #define SPILL_2_GENERIC_ETRAP \ 409 407 etrap_user_spill_32bit: \ 410 - srl %sp, 0, %sp; \ 408 + and %sp, 1, %g3; \ 409 + brnz,pn %g3, etrap_user_spill_64bit; \ 410 + srl %sp, 0, %sp; \ 411 411 stwa %l0, [%sp + 0x00] %asi; \ 412 412 stwa %l1, [%sp + 0x04] %asi; \ 413 413 stwa %l2, [%sp + 0x08] %asi; \ ··· 431 427 ba,pt %xcc, etrap_save; \ 432 428 wrpr %g1, %cwp; \ 433 429 nop; nop; nop; nop; \ 434 - nop; nop; nop; nop; \ 430 + nop; nop; \ 435 431 ba,a,pt %xcc, etrap_spill_fixup_32bit; \ 436 432 ba,a,pt %xcc, etrap_spill_fixup_32bit; \ 437 433 ba,a,pt %xcc, etrap_spill_fixup_32bit; ··· 596 592 597 593 /* Normal 32bit fill */ 598 594 #define FILL_2_GENERIC(ASI) \ 599 - srl %sp, 0, %sp; \ 595 + and %sp, 1, %g3; \ 596 + brnz,pn %g3, (. 
- (128 + 4)); \ 597 + srl %sp, 0, %sp; \ 600 598 lduwa [%sp + %g0] ASI, %l0; \ 601 599 mov 0x04, %g2; \ 602 600 mov 0x08, %g3; \ ··· 622 616 lduwa [%g1 + %g3] ASI, %i6; \ 623 617 lduwa [%g1 + %g5] ASI, %i7; \ 624 618 restored; \ 625 - retry; nop; nop; nop; nop; \ 619 + retry; nop; nop; \ 626 620 b,a,pt %xcc, fill_fixup_dax; \ 627 621 b,a,pt %xcc, fill_fixup_mna; \ 628 622 b,a,pt %xcc, fill_fixup; 629 623 630 624 #define FILL_2_GENERIC_RTRAP \ 631 625 user_rtt_fill_32bit: \ 632 - srl %sp, 0, %sp; \ 626 + and %sp, 1, %g3; \ 627 + brnz,pn %g3, user_rtt_fill_64bit; \ 628 + srl %sp, 0, %sp; \ 633 629 lduwa [%sp + 0x00] %asi, %l0; \ 634 630 lduwa [%sp + 0x04] %asi, %l1; \ 635 631 lduwa [%sp + 0x08] %asi, %l2; \ ··· 651 643 ba,pt %xcc, user_rtt_pre_restore; \ 652 644 restored; \ 653 645 nop; nop; nop; nop; nop; \ 654 - nop; nop; nop; nop; nop; \ 646 + nop; nop; nop; \ 655 647 ba,a,pt %xcc, user_rtt_fill_fixup; \ 656 648 ba,a,pt %xcc, user_rtt_fill_fixup; \ 657 649 ba,a,pt %xcc, user_rtt_fill_fixup;
+6 -1
arch/sparc/include/uapi/asm/unistd.h
··· 405 405 #define __NR_setns 337 406 406 #define __NR_process_vm_readv 338 407 407 #define __NR_process_vm_writev 339 408 + #define __NR_kern_features 340 409 + #define __NR_kcmp 341 408 410 409 - #define NR_syscalls 340 411 + #define NR_syscalls 342 412 + 413 + /* Bitmask values returned from kern_features system call. */ 414 + #define KERN_FEATURE_MIXED_MODE_STACK 0x00000001 410 415 411 416 #ifdef __32bit_syscall_numbers__ 412 417 /* Sparc 32-bit only has the "setresuid32", "getresuid32" variants,
+7
arch/sparc/kernel/entry.h
··· 59 59 extern struct popc_6insn_patch_entry __popc_6insn_patch, 60 60 __popc_6insn_patch_end; 61 61 62 + struct pause_patch_entry { 63 + unsigned int addr; 64 + unsigned int insns[3]; 65 + }; 66 + extern struct pause_patch_entry __pause_3insn_patch, 67 + __pause_3insn_patch_end; 68 + 62 69 extern void __init per_cpu_patch(void); 63 70 extern void sun4v_patch_1insn_range(struct sun4v_1insn_patch_entry *, 64 71 struct sun4v_1insn_patch_entry *);
+4 -2
arch/sparc/kernel/leon_kernel.c
··· 56 56 static void leon_handle_ext_irq(unsigned int irq, struct irq_desc *desc) 57 57 { 58 58 unsigned int eirq; 59 + struct irq_bucket *p; 59 60 int cpu = sparc_leon3_cpuid(); 60 61 61 62 eirq = leon_eirq_get(cpu); 62 - if ((eirq & 0x10) && irq_map[eirq]->irq) /* bit4 tells if IRQ happened */ 63 - generic_handle_irq(irq_map[eirq]->irq); 63 + p = irq_map[eirq]; 64 + if ((eirq & 0x10) && p && p->irq) /* bit4 tells if IRQ happened */ 65 + generic_handle_irq(p->irq); 64 66 } 65 67 66 68 /* The extended IRQ controller has been found, this function registers it */
+16 -6
arch/sparc/kernel/perf_event.c
··· 1762 1762 1763 1763 ufp = regs->u_regs[UREG_I6] & 0xffffffffUL; 1764 1764 do { 1765 - struct sparc_stackf32 *usf, sf; 1766 1765 unsigned long pc; 1767 1766 1768 - usf = (struct sparc_stackf32 *) ufp; 1769 - if (__copy_from_user_inatomic(&sf, usf, sizeof(sf))) 1770 - break; 1767 + if (thread32_stack_is_64bit(ufp)) { 1768 + struct sparc_stackf *usf, sf; 1771 1769 1772 - pc = sf.callers_pc; 1773 - ufp = (unsigned long)sf.fp; 1770 + ufp += STACK_BIAS; 1771 + usf = (struct sparc_stackf *) ufp; 1772 + if (__copy_from_user_inatomic(&sf, usf, sizeof(sf))) 1773 + break; 1774 + pc = sf.callers_pc & 0xffffffff; 1775 + ufp = ((unsigned long) sf.fp) & 0xffffffff; 1776 + } else { 1777 + struct sparc_stackf32 *usf, sf; 1778 + usf = (struct sparc_stackf32 *) ufp; 1779 + if (__copy_from_user_inatomic(&sf, usf, sizeof(sf))) 1780 + break; 1781 + pc = sf.callers_pc; 1782 + ufp = (unsigned long)sf.fp; 1783 + } 1774 1784 perf_callchain_store(entry, pc); 1775 1785 } while (entry->nr < PERF_MAX_STACK_DEPTH); 1776 1786 }
+23 -19
arch/sparc/kernel/process_64.c
··· 452 452 /* It's a bit more tricky when 64-bit tasks are involved... */ 453 453 static unsigned long clone_stackframe(unsigned long csp, unsigned long psp) 454 454 { 455 + bool stack_64bit = test_thread_64bit_stack(psp); 455 456 unsigned long fp, distance, rval; 456 457 457 - if (!(test_thread_flag(TIF_32BIT))) { 458 + if (stack_64bit) { 458 459 csp += STACK_BIAS; 459 460 psp += STACK_BIAS; 460 461 __get_user(fp, &(((struct reg_window __user *)psp)->ins[6])); 461 462 fp += STACK_BIAS; 463 + if (test_thread_flag(TIF_32BIT)) 464 + fp &= 0xffffffff; 462 465 } else 463 466 __get_user(fp, &(((struct reg_window32 __user *)psp)->ins[6])); 464 467 ··· 475 472 rval = (csp - distance); 476 473 if (copy_in_user((void __user *) rval, (void __user *) psp, distance)) 477 474 rval = 0; 478 - else if (test_thread_flag(TIF_32BIT)) { 475 + else if (!stack_64bit) { 479 476 if (put_user(((u32)csp), 480 477 &(((struct reg_window32 __user *)rval)->ins[6]))) 481 478 rval = 0; ··· 510 507 511 508 flush_user_windows(); 512 509 if ((window = get_thread_wsaved()) != 0) { 513 - int winsize = sizeof(struct reg_window); 514 - int bias = 0; 515 - 516 - if (test_thread_flag(TIF_32BIT)) 517 - winsize = sizeof(struct reg_window32); 518 - else 519 - bias = STACK_BIAS; 520 - 521 510 window -= 1; 522 511 do { 523 - unsigned long sp = (t->rwbuf_stkptrs[window] + bias); 524 512 struct reg_window *rwin = &t->reg_window[window]; 513 + int winsize = sizeof(struct reg_window); 514 + unsigned long sp; 515 + 516 + sp = t->rwbuf_stkptrs[window]; 517 + 518 + if (test_thread_64bit_stack(sp)) 519 + sp += STACK_BIAS; 520 + else 521 + winsize = sizeof(struct reg_window32); 525 522 526 523 if (!copy_to_user((char __user *)sp, rwin, winsize)) { 527 524 shift_window_buffer(window, get_thread_wsaved() - 1, t); ··· 547 544 { 548 545 struct thread_info *t = current_thread_info(); 549 546 unsigned long window; 550 - int winsize = sizeof(struct reg_window); 551 - int bias = 0; 552 - 553 - if 
(test_thread_flag(TIF_32BIT)) 554 - winsize = sizeof(struct reg_window32); 555 - else 556 - bias = STACK_BIAS; 557 547 558 548 flush_user_windows(); 559 549 window = get_thread_wsaved(); ··· 554 558 if (likely(window != 0)) { 555 559 window -= 1; 556 560 do { 557 - unsigned long sp = (t->rwbuf_stkptrs[window] + bias); 558 561 struct reg_window *rwin = &t->reg_window[window]; 562 + int winsize = sizeof(struct reg_window); 563 + unsigned long sp; 564 + 565 + sp = t->rwbuf_stkptrs[window]; 566 + 567 + if (test_thread_64bit_stack(sp)) 568 + sp += STACK_BIAS; 569 + else 570 + winsize = sizeof(struct reg_window32); 559 571 560 572 if (unlikely(sp & 0x7UL)) 561 573 stack_unaligned(sp);
+2 -2
arch/sparc/kernel/ptrace_64.c
··· 151 151 { 152 152 unsigned long rw_addr = regs->u_regs[UREG_I6]; 153 153 154 - if (test_tsk_thread_flag(current, TIF_32BIT)) { 154 + if (!test_thread_64bit_stack(rw_addr)) { 155 155 struct reg_window32 win32; 156 156 int i; 157 157 ··· 176 176 { 177 177 unsigned long rw_addr = regs->u_regs[UREG_I6]; 178 178 179 - if (test_tsk_thread_flag(current, TIF_32BIT)) { 179 + if (!test_thread_64bit_stack(rw_addr)) { 180 180 struct reg_window32 win32; 181 181 int i; 182 182
+21
arch/sparc/kernel/setup_64.c
··· 316 316 } 317 317 } 318 318 319 + static void __init pause_patch(void) 320 + { 321 + struct pause_patch_entry *p; 322 + 323 + p = &__pause_3insn_patch; 324 + while (p < &__pause_3insn_patch_end) { 325 + unsigned long i, addr = p->addr; 326 + 327 + for (i = 0; i < 3; i++) { 328 + *(unsigned int *) (addr + (i * 4)) = p->insns[i]; 329 + wmb(); 330 + __asm__ __volatile__("flush %0" 331 + : : "r" (addr + (i * 4))); 332 + } 333 + 334 + p++; 335 + } 336 + } 337 + 319 338 #ifdef CONFIG_SMP 320 339 void __init boot_cpu_id_too_large(int cpu) 321 340 { ··· 547 528 548 529 if (sparc64_elf_hwcap & AV_SPARC_POPC) 549 530 popc_patch(); 531 + if (sparc64_elf_hwcap & AV_SPARC_PAUSE) 532 + pause_patch(); 550 533 } 551 534 552 535 void __init setup_arch(char **cmdline_p)
+5
arch/sparc/kernel/sys_sparc_64.c
··· 751 751 : "cc"); 752 752 return __res; 753 753 } 754 + 755 + asmlinkage long sys_kern_features(void) 756 + { 757 + return KERN_FEATURE_MIXED_MODE_STACK; 758 + }
+1
arch/sparc/kernel/systbls_32.S
··· 85 85 /*325*/ .long sys_pwritev, sys_rt_tgsigqueueinfo, sys_perf_event_open, sys_recvmmsg, sys_fanotify_init 86 86 /*330*/ .long sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, sys_open_by_handle_at, sys_clock_adjtime 87 87 /*335*/ .long sys_syncfs, sys_sendmmsg, sys_setns, sys_process_vm_readv, sys_process_vm_writev 88 + /*340*/ .long sys_ni_syscall, sys_kcmp
+2
arch/sparc/kernel/systbls_64.S
··· 86 86 .word compat_sys_pwritev, compat_sys_rt_tgsigqueueinfo, sys_perf_event_open, compat_sys_recvmmsg, sys_fanotify_init 87 87 /*330*/ .word sys32_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, compat_sys_open_by_handle_at, compat_sys_clock_adjtime 88 88 .word sys_syncfs, compat_sys_sendmmsg, sys_setns, compat_sys_process_vm_readv, compat_sys_process_vm_writev 89 + /*340*/ .word sys_kern_features, sys_kcmp 89 90 90 91 #endif /* CONFIG_COMPAT */ 91 92 ··· 164 163 .word sys_pwritev, sys_rt_tgsigqueueinfo, sys_perf_event_open, sys_recvmmsg, sys_fanotify_init 165 164 /*330*/ .word sys_fanotify_mark, sys_prlimit64, sys_name_to_handle_at, sys_open_by_handle_at, sys_clock_adjtime 166 165 .word sys_syncfs, sys_sendmmsg, sys_setns, sys_process_vm_readv, sys_process_vm_writev 166 + /*340*/ .word sys_kern_features, sys_kcmp
+23 -13
arch/sparc/kernel/unaligned_64.c
··· 113 113 114 114 static unsigned long fetch_reg(unsigned int reg, struct pt_regs *regs) 115 115 { 116 - unsigned long value; 116 + unsigned long value, fp; 117 117 118 118 if (reg < 16) 119 119 return (!reg ? 0 : regs->u_regs[reg]); 120 + 121 + fp = regs->u_regs[UREG_FP]; 122 + 120 123 if (regs->tstate & TSTATE_PRIV) { 121 124 struct reg_window *win; 122 - win = (struct reg_window *)(regs->u_regs[UREG_FP] + STACK_BIAS); 125 + win = (struct reg_window *)(fp + STACK_BIAS); 123 126 value = win->locals[reg - 16]; 124 - } else if (test_thread_flag(TIF_32BIT)) { 127 + } else if (!test_thread_64bit_stack(fp)) { 125 128 struct reg_window32 __user *win32; 126 - win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP])); 129 + win32 = (struct reg_window32 __user *)((unsigned long)((u32)fp)); 127 130 get_user(value, &win32->locals[reg - 16]); 128 131 } else { 129 132 struct reg_window __user *win; 130 - win = (struct reg_window __user *)(regs->u_regs[UREG_FP] + STACK_BIAS); 133 + win = (struct reg_window __user *)(fp + STACK_BIAS); 131 134 get_user(value, &win->locals[reg - 16]); 132 135 } 133 136 return value; ··· 138 135 139 136 static unsigned long *fetch_reg_addr(unsigned int reg, struct pt_regs *regs) 140 137 { 138 + unsigned long fp; 139 + 141 140 if (reg < 16) 142 141 return &regs->u_regs[reg]; 142 + 143 + fp = regs->u_regs[UREG_FP]; 144 + 143 145 if (regs->tstate & TSTATE_PRIV) { 144 146 struct reg_window *win; 145 - win = (struct reg_window *)(regs->u_regs[UREG_FP] + STACK_BIAS); 147 + win = (struct reg_window *)(fp + STACK_BIAS); 146 148 return &win->locals[reg - 16]; 147 - } else if (test_thread_flag(TIF_32BIT)) { 149 + } else if (!test_thread_64bit_stack(fp)) { 148 150 struct reg_window32 *win32; 149 - win32 = (struct reg_window32 *)((unsigned long)((u32)regs->u_regs[UREG_FP])); 151 + win32 = (struct reg_window32 *)((unsigned long)((u32)fp)); 150 152 return (unsigned long *)&win32->locals[reg - 16]; 151 153 } else { 152 154 struct 
reg_window *win; 153 - win = (struct reg_window *)(regs->u_regs[UREG_FP] + STACK_BIAS); 155 + win = (struct reg_window *)(fp + STACK_BIAS); 154 156 return &win->locals[reg - 16]; 155 157 } 156 158 } ··· 400 392 if (rd) 401 393 regs->u_regs[rd] = ret; 402 394 } else { 403 - if (test_thread_flag(TIF_32BIT)) { 395 + unsigned long fp = regs->u_regs[UREG_FP]; 396 + 397 + if (!test_thread_64bit_stack(fp)) { 404 398 struct reg_window32 __user *win32; 405 - win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP])); 399 + win32 = (struct reg_window32 __user *)((unsigned long)((u32)fp)); 406 400 put_user(ret, &win32->locals[rd - 16]); 407 401 } else { 408 402 struct reg_window __user *win; 409 - win = (struct reg_window __user *)(regs->u_regs[UREG_FP] + STACK_BIAS); 403 + win = (struct reg_window __user *)(fp + STACK_BIAS); 410 404 put_user(ret, &win->locals[rd - 16]); 411 405 } 412 406 } ··· 564 554 reg[0] = 0; 565 555 if ((insn & 0x780000) == 0x180000) 566 556 reg[1] = 0; 567 - } else if (test_thread_flag(TIF_32BIT)) { 557 + } else if (!test_thread_64bit_stack(regs->u_regs[UREG_FP])) { 568 558 put_user(0, (int __user *) reg); 569 559 if ((insn & 0x780000) == 0x180000) 570 560 put_user(0, ((int __user *) reg) + 1);
+14 -9
arch/sparc/kernel/visemul.c
··· 149 149 150 150 static unsigned long fetch_reg(unsigned int reg, struct pt_regs *regs) 151 151 { 152 - unsigned long value; 152 + unsigned long value, fp; 153 153 154 154 if (reg < 16) 155 155 return (!reg ? 0 : regs->u_regs[reg]); 156 + 157 + fp = regs->u_regs[UREG_FP]; 158 + 156 159 if (regs->tstate & TSTATE_PRIV) { 157 160 struct reg_window *win; 158 - win = (struct reg_window *)(regs->u_regs[UREG_FP] + STACK_BIAS); 161 + win = (struct reg_window *)(fp + STACK_BIAS); 159 162 value = win->locals[reg - 16]; 160 - } else if (test_thread_flag(TIF_32BIT)) { 163 + } else if (!test_thread_64bit_stack(fp)) { 161 164 struct reg_window32 __user *win32; 162 - win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP])); 165 + win32 = (struct reg_window32 __user *)((unsigned long)((u32)fp)); 163 166 get_user(value, &win32->locals[reg - 16]); 164 167 } else { 165 168 struct reg_window __user *win; 166 - win = (struct reg_window __user *)(regs->u_regs[UREG_FP] + STACK_BIAS); 169 + win = (struct reg_window __user *)(fp + STACK_BIAS); 167 170 get_user(value, &win->locals[reg - 16]); 168 171 } 169 172 return value; ··· 175 172 static inline unsigned long __user *__fetch_reg_addr_user(unsigned int reg, 176 173 struct pt_regs *regs) 177 174 { 175 + unsigned long fp = regs->u_regs[UREG_FP]; 176 + 178 177 BUG_ON(reg < 16); 179 178 BUG_ON(regs->tstate & TSTATE_PRIV); 180 179 181 - if (test_thread_flag(TIF_32BIT)) { 180 + if (!test_thread_64bit_stack(fp)) { 182 181 struct reg_window32 __user *win32; 183 - win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP])); 182 + win32 = (struct reg_window32 __user *)((unsigned long)((u32)fp)); 184 183 return (unsigned long __user *)&win32->locals[reg - 16]; 185 184 } else { 186 185 struct reg_window __user *win; 187 - win = (struct reg_window __user *)(regs->u_regs[UREG_FP] + STACK_BIAS); 186 + win = (struct reg_window __user *)(fp + STACK_BIAS); 188 187 return &win->locals[reg - 16]; 189 188 
} 190 189 } ··· 209 204 } else { 210 205 unsigned long __user *rd_user = __fetch_reg_addr_user(rd, regs); 211 206 212 - if (test_thread_flag(TIF_32BIT)) 207 + if (!test_thread_64bit_stack(regs->u_regs[UREG_FP])) 213 208 __put_user((u32)val, (u32 __user *)rd_user); 214 209 else 215 210 __put_user(val, rd_user);
+5
arch/sparc/kernel/vmlinux.lds.S
··· 132 132 *(.popc_6insn_patch) 133 133 __popc_6insn_patch_end = .; 134 134 } 135 + .pause_3insn_patch : { 136 + __pause_3insn_patch = .; 137 + *(.pause_3insn_patch) 138 + __pause_3insn_patch_end = .; 139 + } 135 140 PERCPU_SECTION(SMP_CACHE_BYTES) 136 141 137 142 . = ALIGN(PAGE_SIZE);
+2
arch/sparc/kernel/winfixup.S
··· 43 43 spill_fixup_dax: 44 44 TRAP_LOAD_THREAD_REG(%g6, %g1) 45 45 ldx [%g6 + TI_FLAGS], %g1 46 + andcc %sp, 0x1, %g0 47 + movne %icc, 0, %g1 46 48 andcc %g1, _TIF_32BIT, %g0 47 49 ldub [%g6 + TI_WSAVED], %g1 48 50 sll %g1, 3, %g3
+15 -1
arch/sparc/lib/atomic_64.S
··· 1 1 /* atomic.S: These things are too big to do inline. 2 2 * 3 - * Copyright (C) 1999, 2007 David S. Miller (davem@davemloft.net) 3 + * Copyright (C) 1999, 2007 2012 David S. Miller (davem@davemloft.net) 4 4 */ 5 5 6 6 #include <linux/linkage.h> ··· 117 117 sub %g1, %o0, %o0 118 118 2: BACKOFF_SPIN(%o2, %o3, 1b) 119 119 ENDPROC(atomic64_sub_ret) 120 + 121 + ENTRY(atomic64_dec_if_positive) /* %o0 = atomic_ptr */ 122 + BACKOFF_SETUP(%o2) 123 + 1: ldx [%o0], %g1 124 + brlez,pn %g1, 3f 125 + sub %g1, 1, %g7 126 + casx [%o0], %g1, %g7 127 + cmp %g1, %g7 128 + bne,pn %xcc, BACKOFF_LABEL(2f, 1b) 129 + nop 130 + 3: retl 131 + sub %g1, 1, %o0 132 + 2: BACKOFF_SPIN(%o2, %o3, 1b) 133 + ENDPROC(atomic64_dec_if_positive)
+1
arch/sparc/lib/ksyms.c
··· 116 116 EXPORT_SYMBOL(atomic64_add_ret); 117 117 EXPORT_SYMBOL(atomic64_sub); 118 118 EXPORT_SYMBOL(atomic64_sub_ret); 119 + EXPORT_SYMBOL(atomic64_dec_if_positive); 119 120 120 121 /* Atomic bit operations. */ 121 122 EXPORT_SYMBOL(test_and_set_bit);
+1 -1
arch/sparc/math-emu/math_64.c
··· 320 320 XR = 0; 321 321 else if (freg < 16) 322 322 XR = regs->u_regs[freg]; 323 - else if (test_thread_flag(TIF_32BIT)) { 323 + else if (!test_thread_64bit_stack(regs->u_regs[UREG_FP])) { 324 324 struct reg_window32 __user *win32; 325 325 flushw_user (); 326 326 win32 = (struct reg_window32 __user *)((unsigned long)((u32)regs->u_regs[UREG_FP]));
+6 -1
arch/unicore32/Kconfig
··· 16 16 select ARCH_WANT_FRAME_POINTERS 17 17 select GENERIC_IOMAP 18 18 select MODULES_USE_ELF_REL 19 + select GENERIC_KERNEL_THREAD 20 + select GENERIC_KERNEL_EXECVE 19 21 help 20 22 UniCore-32 is 32-bit Instruction Set Architecture, 21 23 including a series of low-power-consumption RISC chip ··· 65 63 66 64 config ARCH_MAY_HAVE_PC_FDC 67 65 bool 66 + 67 + config ZONE_DMA 68 + def_bool y 68 69 69 70 config NEED_DMA_MAP_STATE 70 71 def_bool y ··· 221 216 bool 222 217 depends on !ARCH_FPGA 223 218 select GENERIC_GPIO 224 - select GPIO_SYSFS if EXPERIMENTAL 219 + select GPIO_SYSFS 225 220 default y 226 221 227 222 if PUV3_NB0916
-1
arch/unicore32/include/asm/Kbuild
··· 1 - include include/asm-generic/Kbuild.asm 2 1 3 2 generic-y += atomic.h 4 3 generic-y += auxvec.h
-5
arch/unicore32/include/asm/bug.h
··· 19 19 extern void uc32_notify_die(const char *str, struct pt_regs *regs, 20 20 struct siginfo *info, unsigned long err, unsigned long trap); 21 21 22 - extern asmlinkage void __backtrace(void); 23 - extern asmlinkage void c_backtrace(unsigned long fp, int pmode); 24 - 25 - extern void __show_regs(struct pt_regs *); 26 - 27 22 #endif /* __UNICORE_BUG_H__ */
arch/unicore32/include/asm/byteorder.h arch/unicore32/include/uapi/asm/byteorder.h
+1 -1
arch/unicore32/include/asm/cmpxchg.h
··· 35 35 : "memory", "cc"); 36 36 break; 37 37 default: 38 - ret = __xchg_bad_pointer(); 38 + __xchg_bad_pointer(); 39 39 } 40 40 41 41 return ret;
-1
arch/unicore32/include/asm/kvm_para.h
··· 1 - #include <asm-generic/kvm_para.h>
-5
arch/unicore32/include/asm/processor.h
··· 72 72 73 73 #define cpu_relax() barrier() 74 74 75 - /* 76 - * Create a new kernel thread 77 - */ 78 - extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags); 79 - 80 75 #define task_pt_regs(p) \ 81 76 ((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1) 82 77
+1 -75
arch/unicore32/include/asm/ptrace.h
··· 12 12 #ifndef __UNICORE_PTRACE_H__ 13 13 #define __UNICORE_PTRACE_H__ 14 14 15 - #define PTRACE_GET_THREAD_AREA 22 16 - 17 - /* 18 - * PSR bits 19 - */ 20 - #define USER_MODE 0x00000010 21 - #define REAL_MODE 0x00000011 22 - #define INTR_MODE 0x00000012 23 - #define PRIV_MODE 0x00000013 24 - #define ABRT_MODE 0x00000017 25 - #define EXTN_MODE 0x0000001b 26 - #define SUSR_MODE 0x0000001f 27 - #define MODE_MASK 0x0000001f 28 - #define PSR_R_BIT 0x00000040 29 - #define PSR_I_BIT 0x00000080 30 - #define PSR_V_BIT 0x10000000 31 - #define PSR_C_BIT 0x20000000 32 - #define PSR_Z_BIT 0x40000000 33 - #define PSR_S_BIT 0x80000000 34 - 35 - /* 36 - * Groups of PSR bits 37 - */ 38 - #define PSR_f 0xff000000 /* Flags */ 39 - #define PSR_c 0x000000ff /* Control */ 15 + #include <uapi/asm/ptrace.h> 40 16 41 17 #ifndef __ASSEMBLY__ 42 - 43 - /* 44 - * This struct defines the way the registers are stored on the 45 - * stack during a system call. Note that sizeof(struct pt_regs) 46 - * has to be a multiple of 8. 
47 - */ 48 - struct pt_regs { 49 - unsigned long uregs[34]; 50 - }; 51 - 52 - #define UCreg_asr uregs[32] 53 - #define UCreg_pc uregs[31] 54 - #define UCreg_lr uregs[30] 55 - #define UCreg_sp uregs[29] 56 - #define UCreg_ip uregs[28] 57 - #define UCreg_fp uregs[27] 58 - #define UCreg_26 uregs[26] 59 - #define UCreg_25 uregs[25] 60 - #define UCreg_24 uregs[24] 61 - #define UCreg_23 uregs[23] 62 - #define UCreg_22 uregs[22] 63 - #define UCreg_21 uregs[21] 64 - #define UCreg_20 uregs[20] 65 - #define UCreg_19 uregs[19] 66 - #define UCreg_18 uregs[18] 67 - #define UCreg_17 uregs[17] 68 - #define UCreg_16 uregs[16] 69 - #define UCreg_15 uregs[15] 70 - #define UCreg_14 uregs[14] 71 - #define UCreg_13 uregs[13] 72 - #define UCreg_12 uregs[12] 73 - #define UCreg_11 uregs[11] 74 - #define UCreg_10 uregs[10] 75 - #define UCreg_09 uregs[9] 76 - #define UCreg_08 uregs[8] 77 - #define UCreg_07 uregs[7] 78 - #define UCreg_06 uregs[6] 79 - #define UCreg_05 uregs[5] 80 - #define UCreg_04 uregs[4] 81 - #define UCreg_03 uregs[3] 82 - #define UCreg_02 uregs[2] 83 - #define UCreg_01 uregs[1] 84 - #define UCreg_00 uregs[0] 85 - #define UCreg_ORIG_00 uregs[33] 86 - 87 - #ifdef __KERNEL__ 88 18 89 19 #define user_mode(regs) \ 90 20 (processor_mode(regs) == USER_MODE) ··· 55 125 56 126 #define instruction_pointer(regs) ((regs)->UCreg_pc) 57 127 58 - #endif /* __KERNEL__ */ 59 - 60 128 #endif /* __ASSEMBLY__ */ 61 - 62 129 #endif 63 -
arch/unicore32/include/asm/sigcontext.h arch/unicore32/include/uapi/asm/sigcontext.h
+1
arch/unicore32/include/asm/unistd.h arch/unicore32/include/uapi/asm/unistd.h
··· 12 12 13 13 /* Use the standard ABI for syscalls. */ 14 14 #include <asm-generic/unistd.h> 15 + #define __ARCH_WANT_SYS_EXECVE
+7
arch/unicore32/include/uapi/asm/Kbuild
··· 1 1 # UAPI Header export list 2 2 include include/uapi/asm-generic/Kbuild.asm 3 3 4 + header-y += byteorder.h 5 + header-y += kvm_para.h 6 + header-y += ptrace.h 7 + header-y += sigcontext.h 8 + header-y += unistd.h 9 + 10 + generic-y += kvm_para.h
+90
arch/unicore32/include/uapi/asm/ptrace.h
··· 1 + /* 2 + * linux/arch/unicore32/include/asm/ptrace.h 3 + * 4 + * Code specific to PKUnity SoC and UniCore ISA 5 + * 6 + * Copyright (C) 2001-2010 GUAN Xue-tao 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + */ 12 + #ifndef _UAPI__UNICORE_PTRACE_H__ 13 + #define _UAPI__UNICORE_PTRACE_H__ 14 + 15 + #define PTRACE_GET_THREAD_AREA 22 16 + 17 + /* 18 + * PSR bits 19 + */ 20 + #define USER_MODE 0x00000010 21 + #define REAL_MODE 0x00000011 22 + #define INTR_MODE 0x00000012 23 + #define PRIV_MODE 0x00000013 24 + #define ABRT_MODE 0x00000017 25 + #define EXTN_MODE 0x0000001b 26 + #define SUSR_MODE 0x0000001f 27 + #define MODE_MASK 0x0000001f 28 + #define PSR_R_BIT 0x00000040 29 + #define PSR_I_BIT 0x00000080 30 + #define PSR_V_BIT 0x10000000 31 + #define PSR_C_BIT 0x20000000 32 + #define PSR_Z_BIT 0x40000000 33 + #define PSR_S_BIT 0x80000000 34 + 35 + /* 36 + * Groups of PSR bits 37 + */ 38 + #define PSR_f 0xff000000 /* Flags */ 39 + #define PSR_c 0x000000ff /* Control */ 40 + 41 + #ifndef __ASSEMBLY__ 42 + 43 + /* 44 + * This struct defines the way the registers are stored on the 45 + * stack during a system call. Note that sizeof(struct pt_regs) 46 + * has to be a multiple of 8. 
47 + */ 48 + struct pt_regs { 49 + unsigned long uregs[34]; 50 + }; 51 + 52 + #define UCreg_asr uregs[32] 53 + #define UCreg_pc uregs[31] 54 + #define UCreg_lr uregs[30] 55 + #define UCreg_sp uregs[29] 56 + #define UCreg_ip uregs[28] 57 + #define UCreg_fp uregs[27] 58 + #define UCreg_26 uregs[26] 59 + #define UCreg_25 uregs[25] 60 + #define UCreg_24 uregs[24] 61 + #define UCreg_23 uregs[23] 62 + #define UCreg_22 uregs[22] 63 + #define UCreg_21 uregs[21] 64 + #define UCreg_20 uregs[20] 65 + #define UCreg_19 uregs[19] 66 + #define UCreg_18 uregs[18] 67 + #define UCreg_17 uregs[17] 68 + #define UCreg_16 uregs[16] 69 + #define UCreg_15 uregs[15] 70 + #define UCreg_14 uregs[14] 71 + #define UCreg_13 uregs[13] 72 + #define UCreg_12 uregs[12] 73 + #define UCreg_11 uregs[11] 74 + #define UCreg_10 uregs[10] 75 + #define UCreg_09 uregs[9] 76 + #define UCreg_08 uregs[8] 77 + #define UCreg_07 uregs[7] 78 + #define UCreg_06 uregs[6] 79 + #define UCreg_05 uregs[5] 80 + #define UCreg_04 uregs[4] 81 + #define UCreg_03 uregs[3] 82 + #define UCreg_02 uregs[2] 83 + #define UCreg_01 uregs[1] 84 + #define UCreg_00 uregs[0] 85 + #define UCreg_ORIG_00 uregs[33] 86 + 87 + 88 + #endif /* __ASSEMBLY__ */ 89 + 90 + #endif /* _UAPI__UNICORE_PTRACE_H__ */
+7 -13
arch/unicore32/kernel/entry.S
··· 573 573 */ 574 574 ENTRY(ret_from_fork) 575 575 b.l schedule_tail 576 - get_thread_info tsk 577 - ldw r1, [tsk+], #TI_FLAGS @ check for syscall tracing 578 - mov why, #1 579 - cand.a r1, #_TIF_SYSCALL_TRACE @ are we tracing syscalls? 580 - beq ret_slow_syscall 581 - mov r1, sp 582 - mov r0, #1 @ trace exit [IP = 1] 583 - b.l syscall_trace 584 576 b ret_slow_syscall 585 577 ENDPROC(ret_from_fork) 578 + 579 + ENTRY(ret_from_kernel_thread) 580 + b.l schedule_tail 581 + mov r0, r5 582 + adr lr, ret_slow_syscall 583 + mov pc, r4 584 + ENDPROC(ret_from_kernel_thread) 586 585 587 586 /*============================================================================= 588 587 * SWI handler ··· 667 668 .word cr_alignment 668 669 #endif 669 670 .ltorg 670 - 671 - ENTRY(sys_execve) 672 - add r3, sp, #S_OFF 673 - b __sys_execve 674 - ENDPROC(sys_execve) 675 671 676 672 ENTRY(sys_clone) 677 673 add ip, sp, #S_OFF
+14 -44
arch/unicore32/kernel/process.c
··· 258 258 } 259 259 260 260 asmlinkage void ret_from_fork(void) __asm__("ret_from_fork"); 261 + asmlinkage void ret_from_kernel_thread(void) __asm__("ret_from_kernel_thread"); 261 262 262 263 int 263 264 copy_thread(unsigned long clone_flags, unsigned long stack_start, ··· 267 266 struct thread_info *thread = task_thread_info(p); 268 267 struct pt_regs *childregs = task_pt_regs(p); 269 268 270 - *childregs = *regs; 271 - childregs->UCreg_00 = 0; 272 - childregs->UCreg_sp = stack_start; 273 - 274 269 memset(&thread->cpu_context, 0, sizeof(struct cpu_context_save)); 275 270 thread->cpu_context.sp = (unsigned long)childregs; 276 - thread->cpu_context.pc = (unsigned long)ret_from_fork; 271 + if (unlikely(!regs)) { 272 + thread->cpu_context.pc = (unsigned long)ret_from_kernel_thread; 273 + thread->cpu_context.r4 = stack_start; 274 + thread->cpu_context.r5 = stk_sz; 275 + memset(childregs, 0, sizeof(struct pt_regs)); 276 + } else { 277 + thread->cpu_context.pc = (unsigned long)ret_from_fork; 278 + *childregs = *regs; 279 + childregs->UCreg_00 = 0; 280 + childregs->UCreg_sp = stack_start; 277 281 278 - if (clone_flags & CLONE_SETTLS) 279 - childregs->UCreg_16 = regs->UCreg_03; 280 - 282 + if (clone_flags & CLONE_SETTLS) 283 + childregs->UCreg_16 = regs->UCreg_03; 284 + } 281 285 return 0; 282 286 } 283 287 ··· 310 304 return used_math != 0; 311 305 } 312 306 EXPORT_SYMBOL(dump_fpu); 313 - 314 - /* 315 - * Shuffle the argument into the correct register before calling the 316 - * thread function. r1 is the thread argument, r2 is the pointer to 317 - * the thread function, and r3 points to the exit function. 318 - */ 319 - asm(".pushsection .text\n" 320 - " .align\n" 321 - " .type kernel_thread_helper, #function\n" 322 - "kernel_thread_helper:\n" 323 - " mov.a asr, r7\n" 324 - " mov r0, r4\n" 325 - " mov lr, r6\n" 326 - " mov pc, r5\n" 327 - " .size kernel_thread_helper, . - kernel_thread_helper\n" 328 - " .popsection"); 329 - 330 - /* 331 - * Create a kernel thread. 
332 - */ 333 - pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags) 334 - { 335 - struct pt_regs regs; 336 - 337 - memset(&regs, 0, sizeof(regs)); 338 - 339 - regs.UCreg_04 = (unsigned long)arg; 340 - regs.UCreg_05 = (unsigned long)fn; 341 - regs.UCreg_06 = (unsigned long)do_exit; 342 - regs.UCreg_07 = PRIV_MODE; 343 - regs.UCreg_pc = (unsigned long)kernel_thread_helper; 344 - regs.UCreg_asr = regs.UCreg_07 | PSR_I_BIT; 345 - 346 - return do_fork(flags|CLONE_VM|CLONE_UNTRACED, 0, &regs, 0, NULL, NULL); 347 - } 348 - EXPORT_SYMBOL(kernel_thread); 349 307 350 308 unsigned long get_wchan(struct task_struct *p) 351 309 {
+6
arch/unicore32/kernel/setup.h
··· 30 30 extern void kernel_thread_helper(void); 31 31 32 32 extern void __init early_signal_init(void); 33 + 34 + extern asmlinkage void __backtrace(void); 35 + extern asmlinkage void c_backtrace(unsigned long fp, int pmode); 36 + 37 + extern void __show_regs(struct pt_regs *); 38 + 33 39 #endif
-63
arch/unicore32/kernel/sys.c
··· 42 42 parent_tid, child_tid); 43 43 } 44 44 45 - /* sys_execve() executes a new program. 46 - * This is called indirectly via a small wrapper 47 - */ 48 - asmlinkage long __sys_execve(const char __user *filename, 49 - const char __user *const __user *argv, 50 - const char __user *const __user *envp, 51 - struct pt_regs *regs) 52 - { 53 - int error; 54 - struct filename *fn; 55 - 56 - fn = getname(filename); 57 - error = PTR_ERR(fn); 58 - if (IS_ERR(fn)) 59 - goto out; 60 - error = do_execve(fn->name, argv, envp, regs); 61 - putname(fn); 62 - out: 63 - return error; 64 - } 65 - 66 - int kernel_execve(const char *filename, 67 - const char *const argv[], 68 - const char *const envp[]) 69 - { 70 - struct pt_regs regs; 71 - int ret; 72 - 73 - memset(&regs, 0, sizeof(struct pt_regs)); 74 - ret = do_execve(filename, 75 - (const char __user *const __user *)argv, 76 - (const char __user *const __user *)envp, &regs); 77 - if (ret < 0) 78 - goto out; 79 - 80 - /* 81 - * Save argc to the register structure for userspace. 82 - */ 83 - regs.UCreg_00 = ret; 84 - 85 - /* 86 - * We were successful. We won't be returning to our caller, but 87 - * instead to user space by manipulating the kernel stack. 88 - */ 89 - asm("add r0, %0, %1\n\t" 90 - "mov r1, %2\n\t" 91 - "mov r2, %3\n\t" 92 - "mov r22, #0\n\t" /* not a syscall */ 93 - "mov r23, %0\n\t" /* thread structure */ 94 - "b.l memmove\n\t" /* copy regs to top of stack */ 95 - "mov sp, r0\n\t" /* reposition stack pointer */ 96 - "b ret_to_user" 97 - : 98 - : "r" (current_thread_info()), 99 - "Ir" (THREAD_START_SP - sizeof(regs)), 100 - "r" (&regs), 101 - "Ir" (sizeof(regs)) 102 - : "r0", "r1", "r2", "r3", "ip", "lr", "memory"); 103 - 104 - out: 105 - return ret; 106 - } 107 - 108 45 /* Note: used by the compat code even in 64-bit Linux. */ 109 46 SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len, 110 47 unsigned long, prot, unsigned long, flags,
+27 -10
arch/unicore32/mm/fault.c
··· 168 168 } 169 169 170 170 static int __do_pf(struct mm_struct *mm, unsigned long addr, unsigned int fsr, 171 - struct task_struct *tsk) 171 + unsigned int flags, struct task_struct *tsk) 172 172 { 173 173 struct vm_area_struct *vma; 174 174 int fault; ··· 194 194 * If for any reason at all we couldn't handle the fault, make 195 195 * sure we exit gracefully rather than endlessly redo the fault. 196 196 */ 197 - fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, 198 - (!(fsr ^ 0x12)) ? FAULT_FLAG_WRITE : 0); 199 - if (unlikely(fault & VM_FAULT_ERROR)) 200 - return fault; 201 - if (fault & VM_FAULT_MAJOR) 202 - tsk->maj_flt++; 203 - else 204 - tsk->min_flt++; 197 + fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, flags); 205 198 return fault; 206 199 207 200 check_stack: ··· 209 216 struct task_struct *tsk; 210 217 struct mm_struct *mm; 211 218 int fault, sig, code; 219 + unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE | 220 + ((!(fsr ^ 0x12)) ? FAULT_FLAG_WRITE : 0); 212 221 213 222 tsk = current; 214 223 mm = tsk->mm; ··· 231 236 if (!user_mode(regs) 232 237 && !search_exception_tables(regs->UCreg_pc)) 233 238 goto no_context; 239 + retry: 234 240 down_read(&mm->mmap_sem); 235 241 } else { 236 242 /* ··· 247 251 #endif 248 252 } 249 253 250 - fault = __do_pf(mm, addr, fsr, tsk); 254 + fault = __do_pf(mm, addr, fsr, flags, tsk); 255 + 256 + /* If we need to retry but a fatal signal is pending, handle the 257 + * signal first. We do not need to release the mmap_sem because 258 + * it would already be released in __lock_page_or_retry in 259 + * mm/filemap.c. */ 260 + if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) 261 + return 0; 262 + 263 + if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY)) { 264 + if (fault & VM_FAULT_MAJOR) 265 + tsk->maj_flt++; 266 + else 267 + tsk->min_flt++; 268 + if (fault & VM_FAULT_RETRY) { 269 + /* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk 270 + * of starvation. 
*/ 271 + flags &= ~FAULT_FLAG_ALLOW_RETRY; 272 + goto retry; 273 + } 274 + } 275 + 251 276 up_read(&mm->mmap_sem); 252 277 253 278 /*
+7 -14
arch/x86/include/asm/xen/hypercall.h
··· 359 359 return _hypercall4(int, update_va_mapping, va, 360 360 new_val.pte, new_val.pte >> 32, flags); 361 361 } 362 + extern int __must_check xen_event_channel_op_compat(int, void *); 362 363 363 364 static inline int 364 365 HYPERVISOR_event_channel_op(int cmd, void *arg) 365 366 { 366 367 int rc = _hypercall2(int, event_channel_op, cmd, arg); 367 - if (unlikely(rc == -ENOSYS)) { 368 - struct evtchn_op op; 369 - op.cmd = cmd; 370 - memcpy(&op.u, arg, sizeof(op.u)); 371 - rc = _hypercall1(int, event_channel_op_compat, &op); 372 - memcpy(arg, &op.u, sizeof(op.u)); 373 - } 368 + if (unlikely(rc == -ENOSYS)) 369 + rc = xen_event_channel_op_compat(cmd, arg); 374 370 return rc; 375 371 } 376 372 ··· 382 386 return _hypercall3(int, console_io, cmd, count, str); 383 387 } 384 388 389 + extern int __must_check HYPERVISOR_physdev_op_compat(int, void *); 390 + 385 391 static inline int 386 392 HYPERVISOR_physdev_op(int cmd, void *arg) 387 393 { 388 394 int rc = _hypercall2(int, physdev_op, cmd, arg); 389 - if (unlikely(rc == -ENOSYS)) { 390 - struct physdev_op op; 391 - op.cmd = cmd; 392 - memcpy(&op.u, arg, sizeof(op.u)); 393 - rc = _hypercall1(int, physdev_op_compat, &op); 394 - memcpy(arg, &op.u, sizeof(op.u)); 395 - } 395 + if (unlikely(rc == -ENOSYS)) 396 + rc = HYPERVISOR_physdev_op_compat(cmd, arg); 396 397 return rc; 397 398 } 398 399
+3
arch/x86/kvm/cpuid.h
··· 24 24 { 25 25 struct kvm_cpuid_entry2 *best; 26 26 27 + if (!static_cpu_has(X86_FEATURE_XSAVE)) 28 + return 0; 29 + 27 30 best = kvm_find_cpuid_entry(vcpu, 1, 0); 28 31 return best && (best->ecx & bit(X86_FEATURE_XSAVE)); 29 32 }
+7 -4
arch/x86/kvm/vmx.c
··· 6549 6549 } 6550 6550 } 6551 6551 6552 - exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL); 6553 6552 /* Exposing INVPCID only when PCID is exposed */ 6554 6553 best = kvm_find_cpuid_entry(vcpu, 0x7, 0); 6555 6554 if (vmx_invpcid_supported() && 6556 6555 best && (best->ebx & bit(X86_FEATURE_INVPCID)) && 6557 6556 guest_cpuid_has_pcid(vcpu)) { 6557 + exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL); 6558 6558 exec_control |= SECONDARY_EXEC_ENABLE_INVPCID; 6559 6559 vmcs_write32(SECONDARY_VM_EXEC_CONTROL, 6560 6560 exec_control); 6561 6561 } else { 6562 - exec_control &= ~SECONDARY_EXEC_ENABLE_INVPCID; 6563 - vmcs_write32(SECONDARY_VM_EXEC_CONTROL, 6564 - exec_control); 6562 + if (cpu_has_secondary_exec_ctrls()) { 6563 + exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL); 6564 + exec_control &= ~SECONDARY_EXEC_ENABLE_INVPCID; 6565 + vmcs_write32(SECONDARY_VM_EXEC_CONTROL, 6566 + exec_control); 6567 + } 6565 6568 if (best) 6566 6569 best->ebx &= ~bit(X86_FEATURE_INVPCID); 6567 6570 }
+3
arch/x86/kvm/x86.c
··· 5781 5781 int pending_vec, max_bits, idx; 5782 5782 struct desc_ptr dt; 5783 5783 5784 + if (!guest_cpuid_has_xsave(vcpu) && (sregs->cr4 & X86_CR4_OSXSAVE)) 5785 + return -EINVAL; 5786 + 5784 5787 dt.size = sregs->idt.limit; 5785 5788 dt.address = sregs->idt.base; 5786 5789 kvm_x86_ops->set_idt(vcpu, &dt);
+8 -3
crypto/cryptd.c
··· 137 137 struct crypto_async_request *req, *backlog; 138 138 139 139 cpu_queue = container_of(work, struct cryptd_cpu_queue, work); 140 - /* Only handle one request at a time to avoid hogging crypto 141 - * workqueue. preempt_disable/enable is used to prevent 142 - * being preempted by cryptd_enqueue_request() */ 140 + /* 141 + * Only handle one request at a time to avoid hogging crypto workqueue. 142 + * preempt_disable/enable is used to prevent being preempted by 143 + * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent 144 + * cryptd_enqueue_request() being accessed from software interrupts. 145 + */ 146 + local_bh_disable(); 143 147 preempt_disable(); 144 148 backlog = crypto_get_backlog(&cpu_queue->queue); 145 149 req = crypto_dequeue_request(&cpu_queue->queue); 146 150 preempt_enable(); 151 + local_bh_enable(); 147 152 148 153 if (!req) 149 154 return;
+7 -4
drivers/acpi/video.c
··· 1345 1345 acpi_video_bus_get_devices(struct acpi_video_bus *video, 1346 1346 struct acpi_device *device) 1347 1347 { 1348 - int status; 1348 + int status = 0; 1349 1349 struct acpi_device *dev; 1350 1350 1351 - status = acpi_video_device_enumerate(video); 1352 - if (status) 1353 - return status; 1351 + /* 1352 + * There are systems where video module known to work fine regardless 1353 + * of broken _DOD and ignoring returned value here doesn't cause 1354 + * any issues later. 1355 + */ 1356 + acpi_video_device_enumerate(video); 1354 1357 1355 1358 list_for_each_entry(dev, &device->children, node) { 1356 1359
+7
drivers/base/platform.c
··· 83 83 */ 84 84 int platform_get_irq(struct platform_device *dev, unsigned int num) 85 85 { 86 + #ifdef CONFIG_SPARC 87 + /* sparc does not have irqs represented as IORESOURCE_IRQ resources */ 88 + if (!dev || num >= dev->archdata.num_irqs) 89 + return -ENXIO; 90 + return dev->archdata.irqs[num]; 91 + #else 86 92 struct resource *r = platform_get_resource(dev, IORESOURCE_IRQ, num); 87 93 88 94 return r ? r->start : -ENXIO; 95 + #endif 89 96 } 90 97 EXPORT_SYMBOL_GPL(platform_get_irq); 91 98
+1
drivers/bluetooth/ath3k.c
··· 67 67 { USB_DEVICE(0x13d3, 0x3304) }, 68 68 { USB_DEVICE(0x0930, 0x0215) }, 69 69 { USB_DEVICE(0x0489, 0xE03D) }, 70 + { USB_DEVICE(0x0489, 0xE027) }, 70 71 71 72 /* Atheros AR9285 Malbec with sflash firmware */ 72 73 { USB_DEVICE(0x03F0, 0x311D) },
+1
drivers/bluetooth/btusb.c
··· 124 124 { USB_DEVICE(0x13d3, 0x3304), .driver_info = BTUSB_IGNORE }, 125 125 { USB_DEVICE(0x0930, 0x0215), .driver_info = BTUSB_IGNORE }, 126 126 { USB_DEVICE(0x0489, 0xe03d), .driver_info = BTUSB_IGNORE }, 127 + { USB_DEVICE(0x0489, 0xe027), .driver_info = BTUSB_IGNORE }, 127 128 128 129 /* Atheros AR9285 Malbec with sflash firmware */ 129 130 { USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE },
+65 -3
drivers/bus/omap-ocp2scp.c
··· 22 22 #include <linux/pm_runtime.h> 23 23 #include <linux/of.h> 24 24 #include <linux/of_platform.h> 25 + #include <linux/platform_data/omap_ocp2scp.h> 26 + 27 + /** 28 + * _count_resources - count for the number of resources 29 + * @res: struct resource * 30 + * 31 + * Count and return the number of resources populated for the device that is 32 + * connected to ocp2scp. 33 + */ 34 + static unsigned _count_resources(struct resource *res) 35 + { 36 + int cnt = 0; 37 + 38 + while (res->start != res->end) { 39 + cnt++; 40 + res++; 41 + } 42 + 43 + return cnt; 44 + } 25 45 26 46 static int ocp2scp_remove_devices(struct device *dev, void *c) 27 47 { ··· 54 34 55 35 static int __devinit omap_ocp2scp_probe(struct platform_device *pdev) 56 36 { 57 - int ret; 58 - struct device_node *np = pdev->dev.of_node; 37 + int ret; 38 + unsigned res_cnt, i; 39 + struct device_node *np = pdev->dev.of_node; 40 + struct platform_device *pdev_child; 41 + struct omap_ocp2scp_platform_data *pdata = pdev->dev.platform_data; 42 + struct omap_ocp2scp_dev *dev; 59 43 60 44 if (np) { 61 45 ret = of_platform_populate(np, NULL, NULL, &pdev->dev); 62 46 if (ret) { 63 - dev_err(&pdev->dev, "failed to add resources for ocp2scp child\n"); 47 + dev_err(&pdev->dev, 48 + "failed to add resources for ocp2scp child\n"); 64 49 goto err0; 65 50 } 51 + } else if (pdata) { 52 + for (i = 0, dev = *pdata->devices; i < pdata->dev_cnt; i++, 53 + dev++) { 54 + res_cnt = _count_resources(dev->res); 55 + 56 + pdev_child = platform_device_alloc(dev->drv_name, 57 + PLATFORM_DEVID_AUTO); 58 + if (!pdev_child) { 59 + dev_err(&pdev->dev, 60 + "failed to allocate mem for ocp2scp child\n"); 61 + goto err0; 62 + } 63 + 64 + ret = platform_device_add_resources(pdev_child, 65 + dev->res, res_cnt); 66 + if (ret) { 67 + dev_err(&pdev->dev, 68 + "failed to add resources for ocp2scp child\n"); 69 + goto err1; 70 + } 71 + 72 + pdev_child->dev.parent = &pdev->dev; 73 + 74 + ret = platform_device_add(pdev_child); 75 + if (ret) { 
76 + dev_err(&pdev->dev, 77 + "failed to register ocp2scp child device\n"); 78 + goto err1; 79 + } 80 + } 81 + } else { 82 + dev_err(&pdev->dev, "OCP2SCP initialized without plat data\n"); 83 + return -EINVAL; 66 84 } 85 + 67 86 pm_runtime_enable(&pdev->dev); 68 87 69 88 return 0; 89 + 90 + err1: 91 + platform_device_put(pdev_child); 70 92 71 93 err0: 72 94 device_for_each_child(&pdev->dev, NULL, ocp2scp_remove_devices);
+46 -4
drivers/clk/ux500/u8500_clk.c
··· 40 40 CLK_IS_ROOT|CLK_IGNORE_UNUSED, 41 41 32768); 42 42 clk_register_clkdev(clk, "clk32k", NULL); 43 - clk_register_clkdev(clk, NULL, "rtc-pl031"); 43 + clk_register_clkdev(clk, "apb_pclk", "rtc-pl031"); 44 44 45 45 /* PRCMU clocks */ 46 46 fw_version = prcmu_get_fw_version(); ··· 228 228 229 229 clk = clk_reg_prcc_pclk("p1_pclk2", "per1clk", U8500_CLKRST1_BASE, 230 230 BIT(2), 0); 231 + clk_register_clkdev(clk, "apb_pclk", "nmk-i2c.1"); 232 + 231 233 clk = clk_reg_prcc_pclk("p1_pclk3", "per1clk", U8500_CLKRST1_BASE, 232 234 BIT(3), 0); 235 + clk_register_clkdev(clk, "apb_pclk", "msp0"); 236 + clk_register_clkdev(clk, "apb_pclk", "ux500-msp-i2s.0"); 237 + 233 238 clk = clk_reg_prcc_pclk("p1_pclk4", "per1clk", U8500_CLKRST1_BASE, 234 239 BIT(4), 0); 240 + clk_register_clkdev(clk, "apb_pclk", "msp1"); 241 + clk_register_clkdev(clk, "apb_pclk", "ux500-msp-i2s.1"); 235 242 236 243 clk = clk_reg_prcc_pclk("p1_pclk5", "per1clk", U8500_CLKRST1_BASE, 237 244 BIT(5), 0); ··· 246 239 247 240 clk = clk_reg_prcc_pclk("p1_pclk6", "per1clk", U8500_CLKRST1_BASE, 248 241 BIT(6), 0); 242 + clk_register_clkdev(clk, "apb_pclk", "nmk-i2c.2"); 249 243 250 244 clk = clk_reg_prcc_pclk("p1_pclk7", "per1clk", U8500_CLKRST1_BASE, 251 245 BIT(7), 0); ··· 254 246 255 247 clk = clk_reg_prcc_pclk("p1_pclk8", "per1clk", U8500_CLKRST1_BASE, 256 248 BIT(8), 0); 249 + clk_register_clkdev(clk, "apb_pclk", "slimbus0"); 257 250 258 251 clk = clk_reg_prcc_pclk("p1_pclk9", "per1clk", U8500_CLKRST1_BASE, 259 252 BIT(9), 0); ··· 264 255 265 256 clk = clk_reg_prcc_pclk("p1_pclk10", "per1clk", U8500_CLKRST1_BASE, 266 257 BIT(10), 0); 258 + clk_register_clkdev(clk, "apb_pclk", "nmk-i2c.4"); 259 + 267 260 clk = clk_reg_prcc_pclk("p1_pclk11", "per1clk", U8500_CLKRST1_BASE, 268 261 BIT(11), 0); 262 + clk_register_clkdev(clk, "apb_pclk", "msp3"); 263 + clk_register_clkdev(clk, "apb_pclk", "ux500-msp-i2s.3"); 269 264 270 265 clk = clk_reg_prcc_pclk("p2_pclk0", "per2clk", U8500_CLKRST2_BASE, 271 266 BIT(0), 
0); 267 + clk_register_clkdev(clk, "apb_pclk", "nmk-i2c.3"); 272 268 273 269 clk = clk_reg_prcc_pclk("p2_pclk1", "per2clk", U8500_CLKRST2_BASE, 274 270 BIT(1), 0); ··· 293 279 294 280 clk = clk_reg_prcc_pclk("p2_pclk5", "per2clk", U8500_CLKRST2_BASE, 295 281 BIT(5), 0); 282 + clk_register_clkdev(clk, "apb_pclk", "msp2"); 283 + clk_register_clkdev(clk, "apb_pclk", "ux500-msp-i2s.2"); 296 284 297 285 clk = clk_reg_prcc_pclk("p2_pclk6", "per2clk", U8500_CLKRST2_BASE, 298 286 BIT(6), 0); 299 287 clk_register_clkdev(clk, "apb_pclk", "sdi1"); 300 - 301 288 302 289 clk = clk_reg_prcc_pclk("p2_pclk7", "per2clk", U8500_CLKRST2_BASE, 303 290 BIT(7), 0); ··· 331 316 332 317 clk = clk_reg_prcc_pclk("p3_pclk1", "per3clk", U8500_CLKRST3_BASE, 333 318 BIT(1), 0); 319 + clk_register_clkdev(clk, "apb_pclk", "ssp0"); 320 + 334 321 clk = clk_reg_prcc_pclk("p3_pclk2", "per3clk", U8500_CLKRST3_BASE, 335 322 BIT(2), 0); 323 + clk_register_clkdev(clk, "apb_pclk", "ssp1"); 324 + 336 325 clk = clk_reg_prcc_pclk("p3_pclk3", "per3clk", U8500_CLKRST3_BASE, 337 326 BIT(3), 0); 327 + clk_register_clkdev(clk, "apb_pclk", "nmk-i2c.0"); 338 328 339 329 clk = clk_reg_prcc_pclk("p3_pclk4", "per3clk", U8500_CLKRST3_BASE, 340 330 BIT(4), 0); ··· 421 401 422 402 clk = clk_reg_prcc_kclk("p1_i2c1_kclk", "i2cclk", 423 403 U8500_CLKRST1_BASE, BIT(2), CLK_SET_RATE_GATE); 404 + clk_register_clkdev(clk, NULL, "nmk-i2c.1"); 405 + 424 406 clk = clk_reg_prcc_kclk("p1_msp0_kclk", "msp02clk", 425 407 U8500_CLKRST1_BASE, BIT(3), CLK_SET_RATE_GATE); 408 + clk_register_clkdev(clk, NULL, "msp0"); 409 + clk_register_clkdev(clk, NULL, "ux500-msp-i2s.0"); 410 + 426 411 clk = clk_reg_prcc_kclk("p1_msp1_kclk", "msp1clk", 427 412 U8500_CLKRST1_BASE, BIT(4), CLK_SET_RATE_GATE); 413 + clk_register_clkdev(clk, NULL, "msp1"); 414 + clk_register_clkdev(clk, NULL, "ux500-msp-i2s.1"); 428 415 429 416 clk = clk_reg_prcc_kclk("p1_sdi0_kclk", "sdmmcclk", 430 417 U8500_CLKRST1_BASE, BIT(5), CLK_SET_RATE_GATE); ··· 439 412 440 413 clk 
= clk_reg_prcc_kclk("p1_i2c2_kclk", "i2cclk", 441 414 U8500_CLKRST1_BASE, BIT(6), CLK_SET_RATE_GATE); 415 + clk_register_clkdev(clk, NULL, "nmk-i2c.2"); 416 + 442 417 clk = clk_reg_prcc_kclk("p1_slimbus0_kclk", "slimclk", 443 - U8500_CLKRST1_BASE, BIT(3), CLK_SET_RATE_GATE); 444 - /* FIXME: Redefinition of BIT(3). */ 418 + U8500_CLKRST1_BASE, BIT(8), CLK_SET_RATE_GATE); 419 + clk_register_clkdev(clk, NULL, "slimbus0"); 420 + 445 421 clk = clk_reg_prcc_kclk("p1_i2c4_kclk", "i2cclk", 446 422 U8500_CLKRST1_BASE, BIT(9), CLK_SET_RATE_GATE); 423 + clk_register_clkdev(clk, NULL, "nmk-i2c.4"); 424 + 447 425 clk = clk_reg_prcc_kclk("p1_msp3_kclk", "msp1clk", 448 426 U8500_CLKRST1_BASE, BIT(10), CLK_SET_RATE_GATE); 427 + clk_register_clkdev(clk, NULL, "msp3"); 428 + clk_register_clkdev(clk, NULL, "ux500-msp-i2s.3"); 449 429 450 430 /* Periph2 */ 451 431 clk = clk_reg_prcc_kclk("p2_i2c3_kclk", "i2cclk", 452 432 U8500_CLKRST2_BASE, BIT(0), CLK_SET_RATE_GATE); 433 + clk_register_clkdev(clk, NULL, "nmk-i2c.3"); 453 434 454 435 clk = clk_reg_prcc_kclk("p2_sdi4_kclk", "sdmmcclk", 455 436 U8500_CLKRST2_BASE, BIT(2), CLK_SET_RATE_GATE); ··· 465 430 466 431 clk = clk_reg_prcc_kclk("p2_msp2_kclk", "msp02clk", 467 432 U8500_CLKRST2_BASE, BIT(3), CLK_SET_RATE_GATE); 433 + clk_register_clkdev(clk, NULL, "msp2"); 434 + clk_register_clkdev(clk, NULL, "ux500-msp-i2s.2"); 468 435 469 436 clk = clk_reg_prcc_kclk("p2_sdi1_kclk", "sdmmcclk", 470 437 U8500_CLKRST2_BASE, BIT(4), CLK_SET_RATE_GATE); ··· 487 450 /* Periph3 */ 488 451 clk = clk_reg_prcc_kclk("p3_ssp0_kclk", "sspclk", 489 452 U8500_CLKRST3_BASE, BIT(1), CLK_SET_RATE_GATE); 453 + clk_register_clkdev(clk, NULL, "ssp0"); 454 + 490 455 clk = clk_reg_prcc_kclk("p3_ssp1_kclk", "sspclk", 491 456 U8500_CLKRST3_BASE, BIT(2), CLK_SET_RATE_GATE); 457 + clk_register_clkdev(clk, NULL, "ssp1"); 458 + 492 459 clk = clk_reg_prcc_kclk("p3_i2c0_kclk", "i2cclk", 493 460 U8500_CLKRST3_BASE, BIT(3), CLK_SET_RATE_GATE); 461 + clk_register_clkdev(clk, 
NULL, "nmk-i2c.0"); 494 462 495 463 clk = clk_reg_prcc_kclk("p3_sdi2_kclk", "sdmmcclk", 496 464 U8500_CLKRST3_BASE, BIT(4), CLK_SET_RATE_GATE);
+1 -1
drivers/gpio/Kconfig
··· 47 47 48 48 config OF_GPIO 49 49 def_bool y 50 - depends on OF && !SPARC 50 + depends on OF 51 51 52 52 config DEBUG_GPIO 53 53 bool "Debug GPIO calls"
+32 -16
drivers/gpu/drm/drm_fops.c
··· 121 121 int minor_id = iminor(inode); 122 122 struct drm_minor *minor; 123 123 int retcode = 0; 124 + int need_setup = 0; 125 + struct address_space *old_mapping; 124 126 125 127 minor = idr_find(&drm_minors_idr, minor_id); 126 128 if (!minor) ··· 134 132 if (drm_device_is_unplugged(dev)) 135 133 return -ENODEV; 136 134 137 - retcode = drm_open_helper(inode, filp, dev); 138 - if (!retcode) { 139 - atomic_inc(&dev->counts[_DRM_STAT_OPENS]); 140 - if (!dev->open_count++) 141 - retcode = drm_setup(dev); 142 - } 143 - if (!retcode) { 144 - mutex_lock(&dev->struct_mutex); 145 - if (dev->dev_mapping == NULL) 146 - dev->dev_mapping = &inode->i_data; 147 - /* ihold ensures nobody can remove inode with our i_data */ 148 - ihold(container_of(dev->dev_mapping, struct inode, i_data)); 149 - inode->i_mapping = dev->dev_mapping; 150 - filp->f_mapping = dev->dev_mapping; 151 - mutex_unlock(&dev->struct_mutex); 152 - } 135 + if (!dev->open_count++) 136 + need_setup = 1; 137 + mutex_lock(&dev->struct_mutex); 138 + old_mapping = dev->dev_mapping; 139 + if (old_mapping == NULL) 140 + dev->dev_mapping = &inode->i_data; 141 + /* ihold ensures nobody can remove inode with our i_data */ 142 + ihold(container_of(dev->dev_mapping, struct inode, i_data)); 143 + inode->i_mapping = dev->dev_mapping; 144 + filp->f_mapping = dev->dev_mapping; 145 + mutex_unlock(&dev->struct_mutex); 153 146 147 + retcode = drm_open_helper(inode, filp, dev); 148 + if (retcode) 149 + goto err_undo; 150 + atomic_inc(&dev->counts[_DRM_STAT_OPENS]); 151 + if (need_setup) { 152 + retcode = drm_setup(dev); 153 + if (retcode) 154 + goto err_undo; 155 + } 156 + return 0; 157 + 158 + err_undo: 159 + mutex_lock(&dev->struct_mutex); 160 + filp->f_mapping = old_mapping; 161 + inode->i_mapping = old_mapping; 162 + iput(container_of(dev->dev_mapping, struct inode, i_data)); 163 + dev->dev_mapping = old_mapping; 164 + mutex_unlock(&dev->struct_mutex); 165 + dev->open_count--; 154 166 return retcode; 155 167 } 156 168 
EXPORT_SYMBOL(drm_open);
+1 -1
drivers/gpu/drm/exynos/Kconfig
··· 1 1 config DRM_EXYNOS 2 2 tristate "DRM Support for Samsung SoC EXYNOS Series" 3 - depends on DRM && PLAT_SAMSUNG 3 + depends on DRM && (PLAT_SAMSUNG || ARCH_MULTIPLATFORM) 4 4 select DRM_KMS_HELPER 5 5 select FB_CFB_FILLRECT 6 6 select FB_CFB_COPYAREA
+1
drivers/gpu/drm/exynos/exynos_drm_connector.c
··· 374 374 exynos_connector->encoder_id = encoder->base.id; 375 375 exynos_connector->manager = manager; 376 376 exynos_connector->dpms = DRM_MODE_DPMS_OFF; 377 + connector->dpms = DRM_MODE_DPMS_OFF; 377 378 connector->encoder = encoder; 378 379 379 380 err = drm_mode_connector_attach_encoder(connector, encoder);
+17 -16
drivers/gpu/drm/exynos/exynos_drm_encoder.c
··· 43 43 * @manager: specific encoder has its own manager to control a hardware 44 44 * appropriately and we can access a hardware drawing on this manager. 45 45 * @dpms: store the encoder dpms value. 46 + * @updated: indicate whether overlay data updating is needed or not. 46 47 */ 47 48 struct exynos_drm_encoder { 48 49 struct drm_crtc *old_crtc; 49 50 struct drm_encoder drm_encoder; 50 51 struct exynos_drm_manager *manager; 51 - int dpms; 52 + int dpms; 53 + bool updated; 52 54 }; 53 55 54 56 static void exynos_drm_connector_power(struct drm_encoder *encoder, int mode) ··· 87 85 switch (mode) { 88 86 case DRM_MODE_DPMS_ON: 89 87 if (manager_ops && manager_ops->apply) 90 - manager_ops->apply(manager->dev); 88 + if (!exynos_encoder->updated) 89 + manager_ops->apply(manager->dev); 90 + 91 91 exynos_drm_connector_power(encoder, mode); 92 92 exynos_encoder->dpms = mode; 93 93 break; ··· 98 94 case DRM_MODE_DPMS_OFF: 99 95 exynos_drm_connector_power(encoder, mode); 100 96 exynos_encoder->dpms = mode; 97 + exynos_encoder->updated = false; 101 98 break; 102 99 default: 103 100 DRM_ERROR("unspecified mode %d\n", mode); ··· 210 205 211 206 static void exynos_drm_encoder_commit(struct drm_encoder *encoder) 212 207 { 213 - struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder); 208 + struct exynos_drm_encoder *exynos_encoder = to_exynos_encoder(encoder); 209 + struct exynos_drm_manager *manager = exynos_encoder->manager; 214 210 struct exynos_drm_manager_ops *manager_ops = manager->ops; 215 211 216 212 DRM_DEBUG_KMS("%s\n", __FILE__); 217 213 218 214 if (manager_ops && manager_ops->commit) 219 215 manager_ops->commit(manager->dev); 216 + 217 + /* 218 + * this will avoid one issue that overlay data is updated to 219 + * real hardware two times. 220 + * And this variable will be used to check if the data was 221 + * already updated or not by exynos_drm_encoder_dpms function. 
222 + */ 223 + exynos_encoder->updated = true; 220 224 } 221 225 222 226 static void exynos_drm_encoder_disable(struct drm_encoder *encoder) ··· 413 399 414 400 if (manager_ops && manager_ops->dpms) 415 401 manager_ops->dpms(manager->dev, mode); 416 - 417 - /* 418 - * set current mode to new one so that data aren't updated into 419 - * registers by drm_helper_connector_dpms two times. 420 - * 421 - * in case that drm_crtc_helper_set_mode() is called, 422 - * overlay_ops->commit() and manager_ops->commit() callbacks 423 - * can be called two times, first at drm_crtc_helper_set_mode() 424 - * and second at drm_helper_connector_dpms(). 425 - * so with this setting, when drm_helper_connector_dpms() is called 426 - * encoder->funcs->dpms() will be ignored. 427 - */ 428 - exynos_encoder->dpms = mode; 429 402 430 403 /* 431 404 * if this condition is ok then it means that the crtc is already
+1 -1
drivers/gpu/drm/exynos/exynos_mixer.c
··· 1142 1142 const struct of_device_id *match; 1143 1143 match = of_match_node(of_match_ptr(mixer_match_types), 1144 1144 pdev->dev.of_node); 1145 - drv = match->data; 1145 + drv = (struct mixer_drv_data *)match->data; 1146 1146 } else { 1147 1147 drv = (struct mixer_drv_data *) 1148 1148 platform_get_device_id(pdev)->driver_data;
+2 -1
drivers/gpu/drm/i915/i915_dma.c
··· 1505 1505 goto put_gmch; 1506 1506 } 1507 1507 1508 - i915_kick_out_firmware_fb(dev_priv); 1508 + if (drm_core_check_feature(dev, DRIVER_MODESET)) 1509 + i915_kick_out_firmware_fb(dev_priv); 1509 1510 1510 1511 pci_set_master(dev->pdev); 1511 1512
+2 -2
drivers/gpu/drm/i915/intel_crt.c
··· 143 143 int old_dpms; 144 144 145 145 /* PCH platforms and VLV only support on/off. */ 146 - if (INTEL_INFO(dev)->gen < 5 && mode != DRM_MODE_DPMS_ON) 146 + if (INTEL_INFO(dev)->gen >= 5 && mode != DRM_MODE_DPMS_ON) 147 147 mode = DRM_MODE_DPMS_OFF; 148 148 149 149 if (mode == connector->dpms) ··· 729 729 730 730 crt->base.type = INTEL_OUTPUT_ANALOG; 731 731 crt->base.cloneable = true; 732 - if (IS_HASWELL(dev)) 732 + if (IS_HASWELL(dev) || IS_I830(dev)) 733 733 crt->base.crtc_mask = (1 << 0); 734 734 else 735 735 crt->base.crtc_mask = (1 << 0) | (1 << 1) | (1 << 2);
+11
drivers/gpu/drm/i915/intel_display.c
··· 3841 3841 } 3842 3842 } 3843 3843 3844 + if (intel_encoder->type == INTEL_OUTPUT_EDP) { 3845 + /* Use VBT settings if we have an eDP panel */ 3846 + unsigned int edp_bpc = dev_priv->edp.bpp / 3; 3847 + 3848 + if (edp_bpc < display_bpc) { 3849 + DRM_DEBUG_KMS("clamping display bpc (was %d) to eDP (%d)\n", display_bpc, edp_bpc); 3850 + display_bpc = edp_bpc; 3851 + } 3852 + continue; 3853 + } 3854 + 3844 3855 /* 3845 3856 * HDMI is either 12 or 8, so if the display lets 10bpc sneak 3846 3857 * through, clamp it down. (Note: >12bpc will be caught below.)
+11 -3
drivers/gpu/drm/i915/intel_overlay.c
··· 341 341 intel_ring_emit(ring, flip_addr); 342 342 intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP); 343 343 /* turn overlay off */ 344 - intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_OFF); 345 - intel_ring_emit(ring, flip_addr); 346 - intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP); 344 + if (IS_I830(dev)) { 345 + /* Workaround: Don't disable the overlay fully, since otherwise 346 + * it dies on the next OVERLAY_ON cmd. */ 347 + intel_ring_emit(ring, MI_NOOP); 348 + intel_ring_emit(ring, MI_NOOP); 349 + intel_ring_emit(ring, MI_NOOP); 350 + } else { 351 + intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_OFF); 352 + intel_ring_emit(ring, flip_addr); 353 + intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP); 354 + } 347 355 intel_ring_advance(ring); 348 356 349 357 return intel_overlay_do_wait_request(overlay, intel_overlay_off_tail);
+1 -1
drivers/gpu/drm/i915/intel_panel.c
··· 435 435 props.type = BACKLIGHT_RAW; 436 436 props.max_brightness = _intel_panel_get_max_backlight(dev); 437 437 if (props.max_brightness == 0) { 438 - DRM_ERROR("Failed to get maximum backlight value\n"); 438 + DRM_DEBUG_DRIVER("Failed to get maximum backlight value\n"); 439 439 return -ENODEV; 440 440 } 441 441 dev_priv->backlight =
+61 -23
drivers/gpu/drm/i915/intel_sdvo.c
··· 894 894 } 895 895 #endif 896 896 897 + static bool intel_sdvo_write_infoframe(struct intel_sdvo *intel_sdvo, 898 + unsigned if_index, uint8_t tx_rate, 899 + uint8_t *data, unsigned length) 900 + { 901 + uint8_t set_buf_index[2] = { if_index, 0 }; 902 + uint8_t hbuf_size, tmp[8]; 903 + int i; 904 + 905 + if (!intel_sdvo_set_value(intel_sdvo, 906 + SDVO_CMD_SET_HBUF_INDEX, 907 + set_buf_index, 2)) 908 + return false; 909 + 910 + if (!intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_HBUF_INFO, 911 + &hbuf_size, 1)) 912 + return false; 913 + 914 + /* Buffer size is 0 based, hooray! */ 915 + hbuf_size++; 916 + 917 + DRM_DEBUG_KMS("writing sdvo hbuf: %i, hbuf_size %i, hbuf_size: %i\n", 918 + if_index, length, hbuf_size); 919 + 920 + for (i = 0; i < hbuf_size; i += 8) { 921 + memset(tmp, 0, 8); 922 + if (i < length) 923 + memcpy(tmp, data + i, min_t(unsigned, 8, length - i)); 924 + 925 + if (!intel_sdvo_set_value(intel_sdvo, 926 + SDVO_CMD_SET_HBUF_DATA, 927 + tmp, 8)) 928 + return false; 929 + } 930 + 931 + return intel_sdvo_set_value(intel_sdvo, 932 + SDVO_CMD_SET_HBUF_TXRATE, 933 + &tx_rate, 1); 934 + } 935 + 897 936 static bool intel_sdvo_set_avi_infoframe(struct intel_sdvo *intel_sdvo) 898 937 { 899 938 struct dip_infoframe avi_if = { ··· 940 901 .ver = DIP_VERSION_AVI, 941 902 .len = DIP_LEN_AVI, 942 903 }; 943 - uint8_t tx_rate = SDVO_HBUF_TX_VSYNC; 944 - uint8_t set_buf_index[2] = { 1, 0 }; 945 904 uint8_t sdvo_data[4 + sizeof(avi_if.body.avi)]; 946 - uint64_t *data = (uint64_t *)sdvo_data; 947 - unsigned i; 948 905 949 906 intel_dip_infoframe_csum(&avi_if); 950 907 ··· 950 915 sdvo_data[3] = avi_if.checksum; 951 916 memcpy(&sdvo_data[4], &avi_if.body, sizeof(avi_if.body.avi)); 952 917 953 - if (!intel_sdvo_set_value(intel_sdvo, 954 - SDVO_CMD_SET_HBUF_INDEX, 955 - set_buf_index, 2)) 956 - return false; 957 - 958 - for (i = 0; i < sizeof(sdvo_data); i += 8) { 959 - if (!intel_sdvo_set_value(intel_sdvo, 960 - SDVO_CMD_SET_HBUF_DATA, 961 - data, 8)) 962 - return 
false; 963 - data++; 964 - } 965 - 966 - return intel_sdvo_set_value(intel_sdvo, 967 - SDVO_CMD_SET_HBUF_TXRATE, 968 - &tx_rate, 1); 918 + return intel_sdvo_write_infoframe(intel_sdvo, SDVO_HBUF_INDEX_AVI_IF, 919 + SDVO_HBUF_TX_VSYNC, 920 + sdvo_data, sizeof(sdvo_data)); 969 921 } 970 922 971 923 static bool intel_sdvo_set_tv_format(struct intel_sdvo *intel_sdvo) ··· 2382 2360 return true; 2383 2361 } 2384 2362 2363 + static void intel_sdvo_output_cleanup(struct intel_sdvo *intel_sdvo) 2364 + { 2365 + struct drm_device *dev = intel_sdvo->base.base.dev; 2366 + struct drm_connector *connector, *tmp; 2367 + 2368 + list_for_each_entry_safe(connector, tmp, 2369 + &dev->mode_config.connector_list, head) { 2370 + if (intel_attached_encoder(connector) == &intel_sdvo->base) 2371 + intel_sdvo_destroy(connector); 2372 + } 2373 + } 2374 + 2385 2375 static bool intel_sdvo_tv_create_property(struct intel_sdvo *intel_sdvo, 2386 2376 struct intel_sdvo_connector *intel_sdvo_connector, 2387 2377 int type) ··· 2717 2683 intel_sdvo->caps.output_flags) != true) { 2718 2684 DRM_DEBUG_KMS("SDVO output failed to setup on %s\n", 2719 2685 SDVO_NAME(intel_sdvo)); 2720 - goto err; 2686 + /* Output_setup can leave behind connectors! */ 2687 + goto err_output; 2721 2688 } 2722 2689 2723 2690 /* Only enable the hotplug irq if we need it, to work around noisy ··· 2731 2696 2732 2697 /* Set the input timing to the screen. Assume always input 0. */ 2733 2698 if (!intel_sdvo_set_target_input(intel_sdvo)) 2734 - goto err; 2699 + goto err_output; 2735 2700 2736 2701 if (!intel_sdvo_get_input_pixel_clock_range(intel_sdvo, 2737 2702 &intel_sdvo->pixel_clock_min, 2738 2703 &intel_sdvo->pixel_clock_max)) 2739 - goto err; 2704 + goto err_output; 2740 2705 2741 2706 DRM_DEBUG_KMS("%s device VID/DID: %02X:%02X.%02X, " 2742 2707 "clock range %dMHz - %dMHz, " ··· 2755 2720 intel_sdvo->caps.output_flags & 2756 2721 (SDVO_OUTPUT_TMDS1 | SDVO_OUTPUT_RGB1) ? 
'Y' : 'N'); 2757 2722 return true; 2723 + 2724 + err_output: 2725 + intel_sdvo_output_cleanup(intel_sdvo); 2758 2726 2759 2727 err: 2760 2728 drm_encoder_cleanup(&intel_encoder->base);
+2
drivers/gpu/drm/i915/intel_sdvo_regs.h
··· 708 708 #define SDVO_CMD_SET_AUDIO_STAT 0x91 709 709 #define SDVO_CMD_GET_AUDIO_STAT 0x92 710 710 #define SDVO_CMD_SET_HBUF_INDEX 0x93 711 + #define SDVO_HBUF_INDEX_ELD 0 712 + #define SDVO_HBUF_INDEX_AVI_IF 1 711 713 #define SDVO_CMD_GET_HBUF_INDEX 0x94 712 714 #define SDVO_CMD_GET_HBUF_INFO 0x95 713 715 #define SDVO_CMD_SET_HBUF_AV_SPLIT 0x96
+12 -8
drivers/gpu/drm/nouveau/core/engine/disp/nv50.c
··· 22 22 * Authors: Ben Skeggs 23 23 */ 24 24 25 + #include <subdev/bar.h> 26 + 25 27 #include <engine/software.h> 26 28 #include <engine/disp.h> 27 29 ··· 39 37 static void 40 38 nv50_disp_intr_vblank(struct nv50_disp_priv *priv, int crtc) 41 39 { 40 + struct nouveau_bar *bar = nouveau_bar(priv); 42 41 struct nouveau_disp *disp = &priv->base; 43 42 struct nouveau_software_chan *chan, *temp; 44 43 unsigned long flags; ··· 49 46 if (chan->vblank.crtc != crtc) 50 47 continue; 51 48 52 - nv_wr32(priv, 0x001704, chan->vblank.channel); 53 - nv_wr32(priv, 0x001710, 0x80000000 | chan->vblank.ctxdma); 54 - 55 49 if (nv_device(priv)->chipset == 0x50) { 50 + nv_wr32(priv, 0x001704, chan->vblank.channel); 51 + nv_wr32(priv, 0x001710, 0x80000000 | chan->vblank.ctxdma); 52 + bar->flush(bar); 56 53 nv_wr32(priv, 0x001570, chan->vblank.offset); 57 54 nv_wr32(priv, 0x001574, chan->vblank.value); 58 55 } else { 59 - if (nv_device(priv)->chipset >= 0xc0) { 60 - nv_wr32(priv, 0x06000c, 61 - upper_32_bits(chan->vblank.offset)); 62 - } 63 - nv_wr32(priv, 0x060010, chan->vblank.offset); 56 + nv_wr32(priv, 0x001718, 0x80000000 | chan->vblank.channel); 57 + bar->flush(bar); 58 + nv_wr32(priv, 0x06000c, 59 + upper_32_bits(chan->vblank.offset)); 60 + nv_wr32(priv, 0x060010, 61 + lower_32_bits(chan->vblank.offset)); 64 62 nv_wr32(priv, 0x060014, chan->vblank.value); 65 63 } 66 64
+2 -2
drivers/gpu/drm/nouveau/core/engine/graph/nv40.c
··· 156 156 static int 157 157 nv40_graph_context_fini(struct nouveau_object *object, bool suspend) 158 158 { 159 - struct nv04_graph_priv *priv = (void *)object->engine; 160 - struct nv04_graph_chan *chan = (void *)object; 159 + struct nv40_graph_priv *priv = (void *)object->engine; 160 + struct nv40_graph_chan *chan = (void *)object; 161 161 u32 inst = 0x01000000 | nv_gpuobj(chan)->addr >> 4; 162 162 int ret = 0; 163 163
+1 -1
drivers/gpu/drm/nouveau/core/engine/mpeg/nv40.c
··· 38 38 }; 39 39 40 40 struct nv40_mpeg_chan { 41 - struct nouveau_mpeg base; 41 + struct nouveau_mpeg_chan base; 42 42 }; 43 43 44 44 /*******************************************************************************
+1 -1
drivers/gpu/drm/nouveau/core/subdev/vm/nv41.c
··· 67 67 static void 68 68 nv41_vm_flush(struct nouveau_vm *vm) 69 69 { 70 - struct nv04_vm_priv *priv = (void *)vm->vmm; 70 + struct nv04_vmmgr_priv *priv = (void *)vm->vmm; 71 71 72 72 mutex_lock(&nv_subdev(priv)->mutex); 73 73 nv_wr32(priv, 0x100810, 0x00000022);
+1 -1
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 355 355 * valid - it's not (rh#613284) 356 356 */ 357 357 if (nv_encoder->dcb->lvdsconf.use_acpi_for_edid) { 358 - if (!(nv_connector->edid = nouveau_acpi_edid(dev, connector))) { 358 + if ((nv_connector->edid = nouveau_acpi_edid(dev, connector))) { 359 359 status = connector_status_connected; 360 360 goto out; 361 361 }
+31 -23
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1696 1696 return ATOM_PPLL2; 1697 1697 DRM_ERROR("unable to allocate a PPLL\n"); 1698 1698 return ATOM_PPLL_INVALID; 1699 - } else { 1700 - if (ASIC_IS_AVIVO(rdev)) { 1701 - /* in DP mode, the DP ref clock can come from either PPLL 1702 - * depending on the asic: 1703 - * DCE3: PPLL1 or PPLL2 1704 - */ 1705 - if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(radeon_crtc->encoder))) { 1706 - /* use the same PPLL for all DP monitors */ 1707 - pll = radeon_get_shared_dp_ppll(crtc); 1708 - if (pll != ATOM_PPLL_INVALID) 1709 - return pll; 1710 - } else { 1711 - /* use the same PPLL for all monitors with the same clock */ 1712 - pll = radeon_get_shared_nondp_ppll(crtc); 1713 - if (pll != ATOM_PPLL_INVALID) 1714 - return pll; 1715 - } 1716 - /* all other cases */ 1717 - pll_in_use = radeon_get_pll_use_mask(crtc); 1699 + } else if (ASIC_IS_AVIVO(rdev)) { 1700 + /* in DP mode, the DP ref clock can come from either PPLL 1701 + * depending on the asic: 1702 + * DCE3: PPLL1 or PPLL2 1703 + */ 1704 + if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(radeon_crtc->encoder))) { 1705 + /* use the same PPLL for all DP monitors */ 1706 + pll = radeon_get_shared_dp_ppll(crtc); 1707 + if (pll != ATOM_PPLL_INVALID) 1708 + return pll; 1709 + } else { 1710 + /* use the same PPLL for all monitors with the same clock */ 1711 + pll = radeon_get_shared_nondp_ppll(crtc); 1712 + if (pll != ATOM_PPLL_INVALID) 1713 + return pll; 1714 + } 1715 + /* all other cases */ 1716 + pll_in_use = radeon_get_pll_use_mask(crtc); 1717 + /* the order shouldn't matter here, but we probably 1718 + * need this until we have atomic modeset 1719 + */ 1720 + if (rdev->flags & RADEON_IS_IGP) { 1718 1721 if (!(pll_in_use & (1 << ATOM_PPLL1))) 1719 1722 return ATOM_PPLL1; 1720 1723 if (!(pll_in_use & (1 << ATOM_PPLL2))) 1721 1724 return ATOM_PPLL2; 1722 - DRM_ERROR("unable to allocate a PPLL\n"); 1723 - return ATOM_PPLL_INVALID; 1724 1725 } else { 1725 - /* on pre-R5xx asics, the crtc to pll mapping is hardcoded */ 
1726 - return radeon_crtc->crtc_id; 1726 + if (!(pll_in_use & (1 << ATOM_PPLL2))) 1727 + return ATOM_PPLL2; 1728 + if (!(pll_in_use & (1 << ATOM_PPLL1))) 1729 + return ATOM_PPLL1; 1727 1730 } 1731 + DRM_ERROR("unable to allocate a PPLL\n"); 1732 + return ATOM_PPLL_INVALID; 1733 + } else { 1734 + /* on pre-R5xx asics, the crtc to pll mapping is hardcoded */ 1735 + return radeon_crtc->crtc_id; 1728 1736 } 1729 1737 } 1730 1738
+1 -1
drivers/gpu/drm/radeon/atombios_encoders.c
··· 1625 1625 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); 1626 1626 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1627 1627 /* some early dce3.2 boards have a bug in their transmitter control table */ 1628 - if ((rdev->family != CHIP_RV710) || (rdev->family != CHIP_RV730)) 1628 + if ((rdev->family != CHIP_RV710) && (rdev->family != CHIP_RV730)) 1629 1629 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); 1630 1630 } 1631 1631 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) {
+1 -1
drivers/gpu/drm/radeon/evergreen.c
··· 1372 1372 WREG32(BIF_FB_EN, FB_READ_EN | FB_WRITE_EN); 1373 1373 1374 1374 for (i = 0; i < rdev->num_crtc; i++) { 1375 - if (save->crtc_enabled) { 1375 + if (save->crtc_enabled[i]) { 1376 1376 if (ASIC_IS_DCE6(rdev)) { 1377 1377 tmp = RREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i]); 1378 1378 tmp |= EVERGREEN_CRTC_BLANK_DATA_EN;
+4 -1
drivers/gpu/drm/radeon/evergreen_cs.c
··· 264 264 /* macro tile width & height */ 265 265 palign = (8 * surf->bankw * track->npipes) * surf->mtilea; 266 266 halign = (8 * surf->bankh * surf->nbanks) / surf->mtilea; 267 - mtileb = (palign / 8) * (halign / 8) * tileb;; 267 + mtileb = (palign / 8) * (halign / 8) * tileb; 268 268 mtile_pr = surf->nbx / palign; 269 269 mtile_ps = (mtile_pr * surf->nby) / halign; 270 270 surf->layer_size = mtile_ps * mtileb * slice_pt; ··· 2725 2725 /* check config regs */ 2726 2726 switch (reg) { 2727 2727 case GRBM_GFX_INDEX: 2728 + case CP_STRMOUT_CNTL: 2729 + case CP_COHER_CNTL: 2730 + case CP_COHER_SIZE: 2728 2731 case VGT_VTX_VECT_EJECT_REG: 2729 2732 case VGT_CACHE_INVALIDATION: 2730 2733 case VGT_GS_VERTEX_REUSE:
+4
drivers/gpu/drm/radeon/evergreend.h
··· 91 91 #define FB_READ_EN (1 << 0) 92 92 #define FB_WRITE_EN (1 << 1) 93 93 94 + #define CP_STRMOUT_CNTL 0x84FC 95 + 96 + #define CP_COHER_CNTL 0x85F0 97 + #define CP_COHER_SIZE 0x85F4 94 98 #define CP_COHER_BASE 0x85F8 95 99 #define CP_STALLED_STAT1 0x8674 96 100 #define CP_STALLED_STAT2 0x8678
+2 -2
drivers/gpu/drm/radeon/radeon_atpx_handler.c
··· 352 352 } 353 353 354 354 /** 355 - * radeon_atpx_switchto - switch to the requested GPU 355 + * radeon_atpx_power_state - power down/up the requested GPU 356 356 * 357 - * @id: GPU to switch to 357 + * @id: GPU to power down/up 358 358 * @state: requested power state (0 = off, 1 = on) 359 359 * 360 360 * Execute the necessary ATPX function to power down/up the discrete GPU
+21 -7
drivers/gpu/drm/radeon/radeon_connectors.c
··· 941 941 struct drm_mode_object *obj; 942 942 int i; 943 943 enum drm_connector_status ret = connector_status_disconnected; 944 - bool dret = false; 944 + bool dret = false, broken_edid = false; 945 945 946 946 if (!force && radeon_check_hpd_status_unchanged(connector)) 947 947 return connector->status; ··· 965 965 ret = connector_status_disconnected; 966 966 DRM_ERROR("%s: detected RS690 floating bus bug, stopping ddc detect\n", drm_get_connector_name(connector)); 967 967 radeon_connector->ddc_bus = NULL; 968 + } else { 969 + ret = connector_status_connected; 970 + broken_edid = true; /* defer use_digital to later */ 968 971 } 969 972 } else { 970 973 radeon_connector->use_digital = !!(radeon_connector->edid->input & DRM_EDID_INPUT_DIGITAL); ··· 1050 1047 1051 1048 encoder_funcs = encoder->helper_private; 1052 1049 if (encoder_funcs->detect) { 1053 - if (ret != connector_status_connected) { 1054 - ret = encoder_funcs->detect(encoder, connector); 1055 - if (ret == connector_status_connected) { 1056 - radeon_connector->use_digital = false; 1050 + if (!broken_edid) { 1051 + if (ret != connector_status_connected) { 1052 + /* deal with analog monitors without DDC */ 1053 + ret = encoder_funcs->detect(encoder, connector); 1054 + if (ret == connector_status_connected) { 1055 + radeon_connector->use_digital = false; 1056 + } 1057 + if (ret != connector_status_disconnected) 1058 + radeon_connector->detected_by_load = true; 1057 1059 } 1058 - if (ret != connector_status_disconnected) 1059 - radeon_connector->detected_by_load = true; 1060 + } else { 1061 + enum drm_connector_status lret; 1062 + /* assume digital unless load detected otherwise */ 1063 + radeon_connector->use_digital = true; 1064 + lret = encoder_funcs->detect(encoder, connector); 1065 + DRM_DEBUG_KMS("load_detect %x returned: %x\n",encoder->encoder_type,lret); 1066 + if (lret == connector_status_connected) 1067 + radeon_connector->use_digital = false; 1060 1068 } 1061 1069 break; 1062 1070 }
+13 -2
drivers/gpu/drm/radeon/radeon_legacy_crtc.c
··· 295 295 struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 296 296 struct drm_device *dev = crtc->dev; 297 297 struct radeon_device *rdev = dev->dev_private; 298 + uint32_t crtc_ext_cntl = 0; 298 299 uint32_t mask; 299 300 300 301 if (radeon_crtc->crtc_id) ··· 308 307 RADEON_CRTC_VSYNC_DIS | 309 308 RADEON_CRTC_HSYNC_DIS); 310 309 310 + /* 311 + * On all dual CRTC GPUs this bit controls the CRTC of the primary DAC. 312 + * Therefore it is set in the DAC DMPS function. 313 + * This is different for GPU's with a single CRTC but a primary and a 314 + * TV DAC: here it controls the single CRTC no matter where it is 315 + * routed. Therefore we set it here. 316 + */ 317 + if (rdev->flags & RADEON_SINGLE_CRTC) 318 + crtc_ext_cntl = RADEON_CRTC_CRT_ON; 319 + 311 320 switch (mode) { 312 321 case DRM_MODE_DPMS_ON: 313 322 radeon_crtc->enabled = true; ··· 328 317 else { 329 318 WREG32_P(RADEON_CRTC_GEN_CNTL, RADEON_CRTC_EN, ~(RADEON_CRTC_EN | 330 319 RADEON_CRTC_DISP_REQ_EN_B)); 331 - WREG32_P(RADEON_CRTC_EXT_CNTL, 0, ~mask); 320 + WREG32_P(RADEON_CRTC_EXT_CNTL, crtc_ext_cntl, ~(mask | crtc_ext_cntl)); 332 321 } 333 322 drm_vblank_post_modeset(dev, radeon_crtc->crtc_id); 334 323 radeon_crtc_load_lut(crtc); ··· 342 331 else { 343 332 WREG32_P(RADEON_CRTC_GEN_CNTL, RADEON_CRTC_DISP_REQ_EN_B, ~(RADEON_CRTC_EN | 344 333 RADEON_CRTC_DISP_REQ_EN_B)); 345 - WREG32_P(RADEON_CRTC_EXT_CNTL, mask, ~mask); 334 + WREG32_P(RADEON_CRTC_EXT_CNTL, mask, ~(mask | crtc_ext_cntl)); 346 335 } 347 336 radeon_crtc->enabled = false; 348 337 /* adjust pm to dpms changes AFTER disabling crtcs */
+147 -28
drivers/gpu/drm/radeon/radeon_legacy_encoders.c
··· 537 537 break; 538 538 } 539 539 540 - WREG32(RADEON_CRTC_EXT_CNTL, crtc_ext_cntl); 540 + /* handled in radeon_crtc_dpms() */ 541 + if (!(rdev->flags & RADEON_SINGLE_CRTC)) 542 + WREG32(RADEON_CRTC_EXT_CNTL, crtc_ext_cntl); 541 543 WREG32(RADEON_DAC_CNTL, dac_cntl); 542 544 WREG32(RADEON_DAC_MACRO_CNTL, dac_macro_cntl); 543 545 ··· 664 662 665 663 if (ASIC_IS_R300(rdev)) 666 664 tmp |= (0x1b6 << RADEON_DAC_FORCE_DATA_SHIFT); 665 + else if (ASIC_IS_RV100(rdev)) 666 + tmp |= (0x1ac << RADEON_DAC_FORCE_DATA_SHIFT); 667 667 else 668 668 tmp |= (0x180 << RADEON_DAC_FORCE_DATA_SHIFT); 669 669 ··· 675 671 tmp |= RADEON_DAC_RANGE_CNTL_PS2 | RADEON_DAC_CMP_EN; 676 672 WREG32(RADEON_DAC_CNTL, tmp); 677 673 674 + tmp = dac_macro_cntl; 678 675 tmp &= ~(RADEON_DAC_PDWN_R | 679 676 RADEON_DAC_PDWN_G | 680 677 RADEON_DAC_PDWN_B); ··· 1097 1092 } else { 1098 1093 if (is_tv) 1099 1094 WREG32(RADEON_TV_MASTER_CNTL, tv_master_cntl); 1100 - else 1095 + /* handled in radeon_crtc_dpms() */ 1096 + else if (!(rdev->flags & RADEON_SINGLE_CRTC)) 1101 1097 WREG32(RADEON_CRTC2_GEN_CNTL, crtc2_gen_cntl); 1102 1098 WREG32(RADEON_TV_DAC_CNTL, tv_dac_cntl); 1103 1099 } ··· 1422 1416 return found; 1423 1417 } 1424 1418 1419 + static bool radeon_legacy_ext_dac_detect(struct drm_encoder *encoder, 1420 + struct drm_connector *connector) 1421 + { 1422 + struct drm_device *dev = encoder->dev; 1423 + struct radeon_device *rdev = dev->dev_private; 1424 + uint32_t gpio_monid, fp2_gen_cntl, disp_output_cntl, crtc2_gen_cntl; 1425 + uint32_t disp_lin_trans_grph_a, disp_lin_trans_grph_b, disp_lin_trans_grph_c; 1426 + uint32_t disp_lin_trans_grph_d, disp_lin_trans_grph_e, disp_lin_trans_grph_f; 1427 + uint32_t tmp, crtc2_h_total_disp, crtc2_v_total_disp; 1428 + uint32_t crtc2_h_sync_strt_wid, crtc2_v_sync_strt_wid; 1429 + bool found = false; 1430 + int i; 1431 + 1432 + /* save the regs we need */ 1433 + gpio_monid = RREG32(RADEON_GPIO_MONID); 1434 + fp2_gen_cntl = RREG32(RADEON_FP2_GEN_CNTL); 1435 + 
disp_output_cntl = RREG32(RADEON_DISP_OUTPUT_CNTL); 1436 + crtc2_gen_cntl = RREG32(RADEON_CRTC2_GEN_CNTL); 1437 + disp_lin_trans_grph_a = RREG32(RADEON_DISP_LIN_TRANS_GRPH_A); 1438 + disp_lin_trans_grph_b = RREG32(RADEON_DISP_LIN_TRANS_GRPH_B); 1439 + disp_lin_trans_grph_c = RREG32(RADEON_DISP_LIN_TRANS_GRPH_C); 1440 + disp_lin_trans_grph_d = RREG32(RADEON_DISP_LIN_TRANS_GRPH_D); 1441 + disp_lin_trans_grph_e = RREG32(RADEON_DISP_LIN_TRANS_GRPH_E); 1442 + disp_lin_trans_grph_f = RREG32(RADEON_DISP_LIN_TRANS_GRPH_F); 1443 + crtc2_h_total_disp = RREG32(RADEON_CRTC2_H_TOTAL_DISP); 1444 + crtc2_v_total_disp = RREG32(RADEON_CRTC2_V_TOTAL_DISP); 1445 + crtc2_h_sync_strt_wid = RREG32(RADEON_CRTC2_H_SYNC_STRT_WID); 1446 + crtc2_v_sync_strt_wid = RREG32(RADEON_CRTC2_V_SYNC_STRT_WID); 1447 + 1448 + tmp = RREG32(RADEON_GPIO_MONID); 1449 + tmp &= ~RADEON_GPIO_A_0; 1450 + WREG32(RADEON_GPIO_MONID, tmp); 1451 + 1452 + WREG32(RADEON_FP2_GEN_CNTL, (RADEON_FP2_ON | 1453 + RADEON_FP2_PANEL_FORMAT | 1454 + R200_FP2_SOURCE_SEL_TRANS_UNIT | 1455 + RADEON_FP2_DVO_EN | 1456 + R200_FP2_DVO_RATE_SEL_SDR)); 1457 + 1458 + WREG32(RADEON_DISP_OUTPUT_CNTL, (RADEON_DISP_DAC_SOURCE_RMX | 1459 + RADEON_DISP_TRANS_MATRIX_GRAPHICS)); 1460 + 1461 + WREG32(RADEON_CRTC2_GEN_CNTL, (RADEON_CRTC2_EN | 1462 + RADEON_CRTC2_DISP_REQ_EN_B)); 1463 + 1464 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_A, 0x00000000); 1465 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_B, 0x000003f0); 1466 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_C, 0x00000000); 1467 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_D, 0x000003f0); 1468 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_E, 0x00000000); 1469 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_F, 0x000003f0); 1470 + 1471 + WREG32(RADEON_CRTC2_H_TOTAL_DISP, 0x01000008); 1472 + WREG32(RADEON_CRTC2_H_SYNC_STRT_WID, 0x00000800); 1473 + WREG32(RADEON_CRTC2_V_TOTAL_DISP, 0x00080001); 1474 + WREG32(RADEON_CRTC2_V_SYNC_STRT_WID, 0x00000080); 1475 + 1476 + for (i = 0; i < 200; i++) { 1477 + tmp = RREG32(RADEON_GPIO_MONID); 1478 + if (tmp & 
RADEON_GPIO_Y_0) 1479 + found = true; 1480 + 1481 + if (found) 1482 + break; 1483 + 1484 + if (!drm_can_sleep()) 1485 + mdelay(1); 1486 + else 1487 + msleep(1); 1488 + } 1489 + 1490 + /* restore the regs we used */ 1491 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_A, disp_lin_trans_grph_a); 1492 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_B, disp_lin_trans_grph_b); 1493 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_C, disp_lin_trans_grph_c); 1494 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_D, disp_lin_trans_grph_d); 1495 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_E, disp_lin_trans_grph_e); 1496 + WREG32(RADEON_DISP_LIN_TRANS_GRPH_F, disp_lin_trans_grph_f); 1497 + WREG32(RADEON_CRTC2_H_TOTAL_DISP, crtc2_h_total_disp); 1498 + WREG32(RADEON_CRTC2_V_TOTAL_DISP, crtc2_v_total_disp); 1499 + WREG32(RADEON_CRTC2_H_SYNC_STRT_WID, crtc2_h_sync_strt_wid); 1500 + WREG32(RADEON_CRTC2_V_SYNC_STRT_WID, crtc2_v_sync_strt_wid); 1501 + WREG32(RADEON_CRTC2_GEN_CNTL, crtc2_gen_cntl); 1502 + WREG32(RADEON_DISP_OUTPUT_CNTL, disp_output_cntl); 1503 + WREG32(RADEON_FP2_GEN_CNTL, fp2_gen_cntl); 1504 + WREG32(RADEON_GPIO_MONID, gpio_monid); 1505 + 1506 + return found; 1507 + } 1508 + 1425 1509 static enum drm_connector_status radeon_legacy_tv_dac_detect(struct drm_encoder *encoder, 1426 1510 struct drm_connector *connector) 1427 1511 { 1428 1512 struct drm_device *dev = encoder->dev; 1429 1513 struct radeon_device *rdev = dev->dev_private; 1430 - uint32_t crtc2_gen_cntl, tv_dac_cntl, dac_cntl2, dac_ext_cntl; 1431 - uint32_t disp_hw_debug, disp_output_cntl, gpiopad_a, pixclks_cntl, tmp; 1514 + uint32_t crtc2_gen_cntl = 0, tv_dac_cntl, dac_cntl2, dac_ext_cntl; 1515 + uint32_t gpiopad_a = 0, pixclks_cntl, tmp; 1516 + uint32_t disp_output_cntl = 0, disp_hw_debug = 0, crtc_ext_cntl = 0; 1432 1517 enum drm_connector_status found = connector_status_disconnected; 1433 1518 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1434 1519 struct radeon_encoder_tv_dac *tv_dac = radeon_encoder->enc_priv; ··· 1556 1459 return 
connector_status_disconnected; 1557 1460 } 1558 1461 1462 + /* R200 uses an external DAC for secondary DAC */ 1463 + if (rdev->family == CHIP_R200) { 1464 + if (radeon_legacy_ext_dac_detect(encoder, connector)) 1465 + found = connector_status_connected; 1466 + return found; 1467 + } 1468 + 1559 1469 /* save the regs we need */ 1560 1470 pixclks_cntl = RREG32_PLL(RADEON_PIXCLKS_CNTL); 1561 - gpiopad_a = ASIC_IS_R300(rdev) ? RREG32(RADEON_GPIOPAD_A) : 0; 1562 - disp_output_cntl = ASIC_IS_R300(rdev) ? RREG32(RADEON_DISP_OUTPUT_CNTL) : 0; 1563 - disp_hw_debug = ASIC_IS_R300(rdev) ? 0 : RREG32(RADEON_DISP_HW_DEBUG); 1564 - crtc2_gen_cntl = RREG32(RADEON_CRTC2_GEN_CNTL); 1471 + 1472 + if (rdev->flags & RADEON_SINGLE_CRTC) { 1473 + crtc_ext_cntl = RREG32(RADEON_CRTC_EXT_CNTL); 1474 + } else { 1475 + if (ASIC_IS_R300(rdev)) { 1476 + gpiopad_a = RREG32(RADEON_GPIOPAD_A); 1477 + disp_output_cntl = RREG32(RADEON_DISP_OUTPUT_CNTL); 1478 + } else { 1479 + disp_hw_debug = RREG32(RADEON_DISP_HW_DEBUG); 1480 + } 1481 + crtc2_gen_cntl = RREG32(RADEON_CRTC2_GEN_CNTL); 1482 + } 1565 1483 tv_dac_cntl = RREG32(RADEON_TV_DAC_CNTL); 1566 1484 dac_ext_cntl = RREG32(RADEON_DAC_EXT_CNTL); 1567 1485 dac_cntl2 = RREG32(RADEON_DAC_CNTL2); ··· 1585 1473 | RADEON_PIX2CLK_DAC_ALWAYS_ONb); 1586 1474 WREG32_PLL(RADEON_PIXCLKS_CNTL, tmp); 1587 1475 1588 - if (ASIC_IS_R300(rdev)) 1589 - WREG32_P(RADEON_GPIOPAD_A, 1, ~1); 1590 - 1591 - tmp = crtc2_gen_cntl & ~RADEON_CRTC2_PIX_WIDTH_MASK; 1592 - tmp |= RADEON_CRTC2_CRT2_ON | 1593 - (2 << RADEON_CRTC2_PIX_WIDTH_SHIFT); 1594 - 1595 - WREG32(RADEON_CRTC2_GEN_CNTL, tmp); 1596 - 1597 - if (ASIC_IS_R300(rdev)) { 1598 - tmp = disp_output_cntl & ~RADEON_DISP_TVDAC_SOURCE_MASK; 1599 - tmp |= RADEON_DISP_TVDAC_SOURCE_CRTC2; 1600 - WREG32(RADEON_DISP_OUTPUT_CNTL, tmp); 1476 + if (rdev->flags & RADEON_SINGLE_CRTC) { 1477 + tmp = crtc_ext_cntl | RADEON_CRTC_CRT_ON; 1478 + WREG32(RADEON_CRTC_EXT_CNTL, tmp); 1601 1479 } else { 1602 - tmp = disp_hw_debug & 
~RADEON_CRT2_DISP1_SEL; 1603 - WREG32(RADEON_DISP_HW_DEBUG, tmp); 1480 + tmp = crtc2_gen_cntl & ~RADEON_CRTC2_PIX_WIDTH_MASK; 1481 + tmp |= RADEON_CRTC2_CRT2_ON | 1482 + (2 << RADEON_CRTC2_PIX_WIDTH_SHIFT); 1483 + WREG32(RADEON_CRTC2_GEN_CNTL, tmp); 1484 + 1485 + if (ASIC_IS_R300(rdev)) { 1486 + WREG32_P(RADEON_GPIOPAD_A, 1, ~1); 1487 + tmp = disp_output_cntl & ~RADEON_DISP_TVDAC_SOURCE_MASK; 1488 + tmp |= RADEON_DISP_TVDAC_SOURCE_CRTC2; 1489 + WREG32(RADEON_DISP_OUTPUT_CNTL, tmp); 1490 + } else { 1491 + tmp = disp_hw_debug & ~RADEON_CRT2_DISP1_SEL; 1492 + WREG32(RADEON_DISP_HW_DEBUG, tmp); 1493 + } 1604 1494 } 1605 1495 1606 1496 tmp = RADEON_TV_DAC_NBLANK | ··· 1644 1530 WREG32(RADEON_DAC_CNTL2, dac_cntl2); 1645 1531 WREG32(RADEON_DAC_EXT_CNTL, dac_ext_cntl); 1646 1532 WREG32(RADEON_TV_DAC_CNTL, tv_dac_cntl); 1647 - WREG32(RADEON_CRTC2_GEN_CNTL, crtc2_gen_cntl); 1648 1533 1649 - if (ASIC_IS_R300(rdev)) { 1650 - WREG32(RADEON_DISP_OUTPUT_CNTL, disp_output_cntl); 1651 - WREG32_P(RADEON_GPIOPAD_A, gpiopad_a, ~1); 1534 + if (rdev->flags & RADEON_SINGLE_CRTC) { 1535 + WREG32(RADEON_CRTC_EXT_CNTL, crtc_ext_cntl); 1652 1536 } else { 1653 - WREG32(RADEON_DISP_HW_DEBUG, disp_hw_debug); 1537 + WREG32(RADEON_CRTC2_GEN_CNTL, crtc2_gen_cntl); 1538 + if (ASIC_IS_R300(rdev)) { 1539 + WREG32(RADEON_DISP_OUTPUT_CNTL, disp_output_cntl); 1540 + WREG32_P(RADEON_GPIOPAD_A, gpiopad_a, ~1); 1541 + } else { 1542 + WREG32(RADEON_DISP_HW_DEBUG, disp_hw_debug); 1543 + } 1654 1544 } 1545 + 1655 1546 WREG32_PLL(RADEON_PIXCLKS_CNTL, pixclks_cntl); 1656 1547 1657 1548 return found;
+1
drivers/gpu/drm/radeon/si.c
··· 2474 2474 /* check config regs */ 2475 2475 switch (reg) { 2476 2476 case GRBM_GFX_INDEX: 2477 + case CP_STRMOUT_CNTL: 2477 2478 case VGT_VTX_VECT_EJECT_REG: 2478 2479 case VGT_CACHE_INVALIDATION: 2479 2480 case VGT_ESGS_RING_SIZE:
+1
drivers/gpu/drm/radeon/sid.h
··· 424 424 # define RDERR_INT_ENABLE (1 << 0) 425 425 # define GUI_IDLE_INT_ENABLE (1 << 19) 426 426 427 + #define CP_STRMOUT_CNTL 0x84FC 427 428 #define SCRATCH_REG0 0x8500 428 429 #define SCRATCH_REG1 0x8504 429 430 #define SCRATCH_REG2 0x8508
+4 -1
drivers/gpu/drm/ttm/ttm_page_alloc.c
··· 749 749 /* clear the pages coming from the pool if requested */ 750 750 if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) { 751 751 list_for_each_entry(p, &plist, lru) { 752 - clear_page(page_address(p)); 752 + if (PageHighMem(p)) 753 + clear_highpage(p); 754 + else 755 + clear_page(page_address(p)); 753 756 } 754 757 } 755 758
-4
drivers/gpu/drm/ttm/ttm_tt.c
··· 308 308 if (unlikely(to_page == NULL)) 309 309 goto out_err; 310 310 311 - preempt_disable(); 312 311 copy_highpage(to_page, from_page); 313 - preempt_enable(); 314 312 page_cache_release(from_page); 315 313 } 316 314 ··· 356 358 ret = PTR_ERR(to_page); 357 359 goto out_err; 358 360 } 359 - preempt_disable(); 360 361 copy_highpage(to_page, from_page); 361 - preempt_enable(); 362 362 set_page_dirty(to_page); 363 363 mark_page_accessed(to_page); 364 364 page_cache_release(to_page);
+1 -1
drivers/gpu/drm/udl/udl_drv.h
··· 104 104 105 105 int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr, 106 106 const char *front, char **urb_buf_ptr, 107 - u32 byte_offset, u32 byte_width, 107 + u32 byte_offset, u32 device_byte_offset, u32 byte_width, 108 108 int *ident_ptr, int *sent_ptr); 109 109 110 110 int udl_dumb_create(struct drm_file *file_priv,
+7 -5
drivers/gpu/drm/udl/udl_fb.c
··· 114 114 list_for_each_entry(cur, &fbdefio->pagelist, lru) { 115 115 116 116 if (udl_render_hline(dev, (ufbdev->ufb.base.bits_per_pixel / 8), 117 - &urb, (char *) info->fix.smem_start, 118 - &cmd, cur->index << PAGE_SHIFT, 119 - PAGE_SIZE, &bytes_identical, &bytes_sent)) 117 + &urb, (char *) info->fix.smem_start, 118 + &cmd, cur->index << PAGE_SHIFT, 119 + cur->index << PAGE_SHIFT, 120 + PAGE_SIZE, &bytes_identical, &bytes_sent)) 120 121 goto error; 121 122 bytes_rendered += PAGE_SIZE; 122 123 } ··· 188 187 for (i = y; i < y + height ; i++) { 189 188 const int line_offset = fb->base.pitches[0] * i; 190 189 const int byte_offset = line_offset + (x * bpp); 191 - 190 + const int dev_byte_offset = (fb->base.width * bpp * i) + (x * bpp); 192 191 if (udl_render_hline(dev, bpp, &urb, 193 192 (char *) fb->obj->vmapping, 194 - &cmd, byte_offset, width * bpp, 193 + &cmd, byte_offset, dev_byte_offset, 194 + width * bpp, 195 195 &bytes_identical, &bytes_sent)) 196 196 goto error; 197 197 }
+3 -2
drivers/gpu/drm/udl/udl_transfer.c
··· 213 213 */ 214 214 int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr, 215 215 const char *front, char **urb_buf_ptr, 216 - u32 byte_offset, u32 byte_width, 216 + u32 byte_offset, u32 device_byte_offset, 217 + u32 byte_width, 217 218 int *ident_ptr, int *sent_ptr) 218 219 { 219 220 const u8 *line_start, *line_end, *next_pixel; 220 - u32 base16 = 0 + (byte_offset / bpp) * 2; 221 + u32 base16 = 0 + (device_byte_offset / bpp) * 2; 221 222 struct urb *urb = *urb_ptr; 222 223 u8 *cmd = *urb_buf_ptr; 223 224 u8 *cmd_end = (u8 *) urb->transfer_buffer + urb->transfer_buffer_length;
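For context (not part of the patch): the udl change above separates the offset into the shadow framebuffer, which advances by the line pitch, from the offset sent to the device, which advances by the visible width — the two diverge whenever pitch exceeds `width * bpp`. A minimal sketch of that arithmetic, with hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers sketching the offset math in the udl fix above.
 * The source (shadow framebuffer) offset follows the line pitch; the
 * device offset follows the visible width. */
static uint32_t src_byte_offset(uint32_t pitch, uint32_t bpp,
				uint32_t x, uint32_t y)
{
	return pitch * y + x * bpp;
}

static uint32_t dev_byte_offset(uint32_t width, uint32_t bpp,
				uint32_t x, uint32_t y)
{
	return width * bpp * y + x * bpp;
}

/* base16 as computed in udl_render_hline(): a 16bpp pixel address
 * derived from the device byte offset, not the source one. */
static uint32_t base16(uint32_t device_byte_offset, uint32_t bpp)
{
	return (device_byte_offset / bpp) * 2;
}
```

With, say, a 1024-pixel-wide 16bpp mode on a framebuffer whose pitch is 4096 bytes, the offsets agree on the first scanline and diverge from the second one onward — which is why the original code, using `byte_offset` for `base16`, drew lines at the wrong device addresses.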
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c
··· 306 306 307 307 BUG_ON(!atomic_read(&bo->reserved)); 308 308 BUG_ON(old_mem_type != TTM_PL_VRAM && 309 - old_mem_type != VMW_PL_FLAG_GMR); 309 + old_mem_type != VMW_PL_GMR); 310 310 311 311 pl_flags = TTM_PL_FLAG_VRAM | VMW_PL_FLAG_GMR | TTM_PL_FLAG_CACHED; 312 312 if (pin)
+5
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 1098 1098 struct drm_device *dev = pci_get_drvdata(pdev); 1099 1099 struct vmw_private *dev_priv = vmw_priv(dev); 1100 1100 1101 + mutex_lock(&dev_priv->hw_mutex); 1102 + vmw_write(dev_priv, SVGA_REG_ID, SVGA_ID_2); 1103 + (void) vmw_read(dev_priv, SVGA_REG_ID); 1104 + mutex_unlock(&dev_priv->hw_mutex); 1105 + 1101 1106 /** 1102 1107 * Reclaim 3d reference held by fbdev and potentially 1103 1108 * start fifo.
+2
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
··· 110 110 memcpy_fromio(bounce, &fifo_mem[SVGA_FIFO_3D_CAPS], size); 111 111 112 112 ret = copy_to_user(buffer, bounce, size); 113 + if (ret) 114 + ret = -EFAULT; 113 115 vfree(bounce); 114 116 115 117 if (unlikely(ret != 0))
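Background to the vmwgfx_ioctl.c hunk: `copy_to_user()` returns the number of bytes it could *not* copy, not an errno, so a nonzero return must be translated to `-EFAULT` before being handed back to userspace. A sketch of that conversion, using a hypothetical helper with `-14` standing in for `-EFAULT`:

```c
#include <assert.h>

/* Hypothetical helper mirroring the fix above: a nonzero
 * "bytes not copied" count from copy_to_user() becomes -EFAULT
 * (numerically -14 on Linux); zero stays success. */
#define SKETCH_EFAULT 14

static long copy_result_to_errno(unsigned long not_copied)
{
	return not_copied ? -(long)SKETCH_EFAULT : 0;
}
```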
+3 -3
drivers/hid/hid-microsoft.c
··· 46 46 rdesc[559] = 0x45; 47 47 } 48 48 /* the same as above (s/usage/physical/) */ 49 - if ((quirks & MS_RDESC_3K) && *rsize == 106 && 50 - !memcmp((char []){ 0x19, 0x00, 0x29, 0xff }, 51 - &rdesc[94], 4)) { 49 + if ((quirks & MS_RDESC_3K) && *rsize == 106 && rdesc[94] == 0x19 && 50 + rdesc[95] == 0x00 && rdesc[96] == 0x29 && 51 + rdesc[97] == 0xff) { 52 52 rdesc[94] = 0x35; 53 53 rdesc[96] = 0x45; 54 54 }
+43 -26
drivers/hid/hidraw.c
··· 42 42 static struct class *hidraw_class; 43 43 static struct hidraw *hidraw_table[HIDRAW_MAX_DEVICES]; 44 44 static DEFINE_MUTEX(minors_lock); 45 - static void drop_ref(struct hidraw *hid, int exists_bit); 46 45 47 46 static ssize_t hidraw_read(struct file *file, char __user *buffer, size_t count, loff_t *ppos) 48 47 { ··· 113 114 __u8 *buf; 114 115 int ret = 0; 115 116 116 - if (!hidraw_table[minor] || !hidraw_table[minor]->exist) { 117 + if (!hidraw_table[minor]) { 117 118 ret = -ENODEV; 118 119 goto out; 119 120 } ··· 261 262 } 262 263 263 264 mutex_lock(&minors_lock); 264 - if (!hidraw_table[minor] || !hidraw_table[minor]->exist) { 265 + if (!hidraw_table[minor]) { 265 266 err = -ENODEV; 266 267 goto out_unlock; 267 268 } ··· 298 299 static int hidraw_release(struct inode * inode, struct file * file) 299 300 { 300 301 unsigned int minor = iminor(inode); 302 + struct hidraw *dev; 301 303 struct hidraw_list *list = file->private_data; 304 + int ret; 305 + int i; 302 306 303 - drop_ref(hidraw_table[minor], 0); 307 + mutex_lock(&minors_lock); 308 + if (!hidraw_table[minor]) { 309 + ret = -ENODEV; 310 + goto unlock; 311 + } 312 + 304 313 list_del(&list->node); 314 + dev = hidraw_table[minor]; 315 + if (!--dev->open) { 316 + if (list->hidraw->exist) { 317 + hid_hw_power(dev->hid, PM_HINT_NORMAL); 318 + hid_hw_close(dev->hid); 319 + } else { 320 + kfree(list->hidraw); 321 + } 322 + } 323 + 324 + for (i = 0; i < HIDRAW_BUFFER_SIZE; ++i) 325 + kfree(list->buffer[i].value); 305 326 kfree(list); 306 - return 0; 327 + ret = 0; 328 + unlock: 329 + mutex_unlock(&minors_lock); 330 + 331 + return ret; 307 332 } 308 333 309 334 static long hidraw_ioctl(struct file *file, unsigned int cmd, ··· 529 506 void hidraw_disconnect(struct hid_device *hid) 530 507 { 531 508 struct hidraw *hidraw = hid->hidraw; 532 - drop_ref(hidraw, 1); 509 + 510 + mutex_lock(&minors_lock); 511 + hidraw->exist = 0; 512 + 513 + device_destroy(hidraw_class, MKDEV(hidraw_major, hidraw->minor)); 514 + 
515 + hidraw_table[hidraw->minor] = NULL; 516 + 517 + if (hidraw->open) { 518 + hid_hw_close(hid); 519 + wake_up_interruptible(&hidraw->wait); 520 + } else { 521 + kfree(hidraw); 522 + } 523 + mutex_unlock(&minors_lock); 533 524 } 534 525 EXPORT_SYMBOL_GPL(hidraw_disconnect); 535 526 ··· 591 554 class_destroy(hidraw_class); 592 555 unregister_chrdev_region(dev_id, HIDRAW_MAX_DEVICES); 593 556 594 - } 595 - 596 - static void drop_ref(struct hidraw *hidraw, int exists_bit) 597 - { 598 - mutex_lock(&minors_lock); 599 - if (exists_bit) { 600 - hid_hw_close(hidraw->hid); 601 - hidraw->exist = 0; 602 - if (hidraw->open) 603 - wake_up_interruptible(&hidraw->wait); 604 - } else { 605 - --hidraw->open; 606 - } 607 - 608 - if (!hidraw->open && !hidraw->exist) { 609 - device_destroy(hidraw_class, MKDEV(hidraw_major, hidraw->minor)); 610 - hidraw_table[hidraw->minor] = NULL; 611 - kfree(hidraw); 612 - } 613 - mutex_unlock(&minors_lock); 614 557 }
+1 -1
drivers/hwmon/asb100.c
··· 32 32 * ASB100-A supports pwm1, while plain ASB100 does not. There is no known 33 33 * way for the driver to tell which one is there. 34 34 * 35 - * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 35 + * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 36 36 * asb100 7 3 1 4 0x31 0x0694 yes no 37 37 */ 38 38
+1
drivers/hwmon/w83627ehf.c
··· 2083 2083 mutex_init(&data->lock); 2084 2084 mutex_init(&data->update_lock); 2085 2085 data->name = w83627ehf_device_names[sio_data->kind]; 2086 + data->bank = 0xff; /* Force initial bank selection */ 2086 2087 platform_set_drvdata(pdev, data); 2087 2088 2088 2089 /* 627EHG and 627EHF have 10 voltage inputs; 627DHG and 667HG have 9 */
+1 -1
drivers/hwmon/w83627hf.c
··· 25 25 /* 26 26 * Supports following chips: 27 27 * 28 - * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 28 + * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 29 29 * w83627hf 9 3 2 3 0x20 0x5ca3 no yes(LPC) 30 30 * w83627thf 7 3 3 3 0x90 0x5ca3 no yes(LPC) 31 31 * w83637hf 7 3 3 3 0x80 0x5ca3 no yes(LPC)
+1 -1
drivers/hwmon/w83781d.c
··· 24 24 /* 25 25 * Supports following chips: 26 26 * 27 - * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 27 + * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 28 28 * as99127f 7 3 0 3 0x31 0x12c3 yes no 29 29 * as99127f rev.2 (type_name = as99127f) 0x31 0x5ca3 yes no 30 30 * w83781d 7 3 0 3 0x10-1 0x5ca3 yes yes
+1 -1
drivers/hwmon/w83791d.c
··· 22 22 /* 23 23 * Supports following chips: 24 24 * 25 - * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 25 + * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 26 26 * w83791d 10 5 5 3 0x71 0x5ca3 yes no 27 27 * 28 28 * The w83791d chip appears to be part way between the 83781d and the
+1 -1
drivers/hwmon/w83792d.c
··· 31 31 /* 32 32 * Supports following chips: 33 33 * 34 - * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 34 + * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 35 35 * w83792d 9 7 7 3 0x7a 0x5ca3 yes no 36 36 */ 37 37
+1 -1
drivers/hwmon/w83l786ng.c
··· 20 20 /* 21 21 * Supports following chips: 22 22 * 23 - * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 23 + * Chip #vin #fanin #pwm #temp wchipid vendid i2c ISA 24 24 * w83l786ng 3 2 2 2 0x7b 0x5ca3 yes no 25 25 */ 26 26
+14 -172
drivers/i2c/busses/i2c-mxs.c
··· 1 1 /* 2 2 * Freescale MXS I2C bus driver 3 3 * 4 - * Copyright (C) 2011 Wolfram Sang, Pengutronix e.K. 4 + * Copyright (C) 2011-2012 Wolfram Sang, Pengutronix e.K. 5 5 * 6 6 * based on a (non-working) driver which was: 7 7 * ··· 34 34 #include <linux/fsl/mxs-dma.h> 35 35 36 36 #define DRIVER_NAME "mxs-i2c" 37 - 38 - static bool use_pioqueue; 39 - module_param(use_pioqueue, bool, 0); 40 - MODULE_PARM_DESC(use_pioqueue, "Use PIOQUEUE mode for transfer instead of DMA"); 41 37 42 38 #define MXS_I2C_CTRL0 (0x00) 43 39 #define MXS_I2C_CTRL0_SET (0x04) ··· 70 74 MXS_I2C_CTRL1_MASTER_LOSS_IRQ | \ 71 75 MXS_I2C_CTRL1_SLAVE_STOP_IRQ | \ 72 76 MXS_I2C_CTRL1_SLAVE_IRQ) 73 - 74 - #define MXS_I2C_QUEUECTRL (0x60) 75 - #define MXS_I2C_QUEUECTRL_SET (0x64) 76 - #define MXS_I2C_QUEUECTRL_CLR (0x68) 77 - 78 - #define MXS_I2C_QUEUECTRL_QUEUE_RUN 0x20 79 - #define MXS_I2C_QUEUECTRL_PIO_QUEUE_MODE 0x04 80 - 81 - #define MXS_I2C_QUEUESTAT (0x70) 82 - #define MXS_I2C_QUEUESTAT_RD_QUEUE_EMPTY 0x00002000 83 - #define MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK 0x0000001F 84 - 85 - #define MXS_I2C_QUEUECMD (0x80) 86 - 87 - #define MXS_I2C_QUEUEDATA (0x90) 88 - 89 - #define MXS_I2C_DATA (0xa0) 90 77 91 78 92 79 #define MXS_CMD_I2C_SELECT (MXS_I2C_CTRL0_RETAIN_CLOCK | \ ··· 132 153 const struct mxs_i2c_speed_config *speed; 133 154 134 155 /* DMA support components */ 135 - bool dma_mode; 136 156 int dma_channel; 137 157 struct dma_chan *dmach; 138 158 struct mxs_dma_data dma_data; ··· 150 172 writel(i2c->speed->timing2, i2c->regs + MXS_I2C_TIMING2); 151 173 152 174 writel(MXS_I2C_IRQ_MASK << 8, i2c->regs + MXS_I2C_CTRL1_SET); 153 - if (i2c->dma_mode) 154 - writel(MXS_I2C_QUEUECTRL_PIO_QUEUE_MODE, 155 - i2c->regs + MXS_I2C_QUEUECTRL_CLR); 156 - else 157 - writel(MXS_I2C_QUEUECTRL_PIO_QUEUE_MODE, 158 - i2c->regs + MXS_I2C_QUEUECTRL_SET); 159 - } 160 - 161 - static void mxs_i2c_pioq_setup_read(struct mxs_i2c_dev *i2c, u8 addr, int len, 162 - int flags) 163 - { 164 - u32 data; 165 - 166 - 
writel(MXS_CMD_I2C_SELECT, i2c->regs + MXS_I2C_QUEUECMD); 167 - 168 - data = (addr << 1) | I2C_SMBUS_READ; 169 - writel(data, i2c->regs + MXS_I2C_DATA); 170 - 171 - data = MXS_CMD_I2C_READ | MXS_I2C_CTRL0_XFER_COUNT(len) | flags; 172 - writel(data, i2c->regs + MXS_I2C_QUEUECMD); 173 - } 174 - 175 - static void mxs_i2c_pioq_setup_write(struct mxs_i2c_dev *i2c, 176 - u8 addr, u8 *buf, int len, int flags) 177 - { 178 - u32 data; 179 - int i, shifts_left; 180 - 181 - data = MXS_CMD_I2C_WRITE | MXS_I2C_CTRL0_XFER_COUNT(len + 1) | flags; 182 - writel(data, i2c->regs + MXS_I2C_QUEUECMD); 183 - 184 - /* 185 - * We have to copy the slave address (u8) and buffer (arbitrary number 186 - * of u8) into the data register (u32). To achieve that, the u8 are put 187 - * into the MSBs of 'data' which is then shifted for the next u8. When 188 - * appropriate, 'data' is written to MXS_I2C_DATA. So, the first u32 189 - * looks like this: 190 - * 191 - * 3 2 1 0 192 - * 10987654|32109876|54321098|76543210 193 - * --------+--------+--------+-------- 194 - * buffer+2|buffer+1|buffer+0|slave_addr 195 - */ 196 - 197 - data = ((addr << 1) | I2C_SMBUS_WRITE) << 24; 198 - 199 - for (i = 0; i < len; i++) { 200 - data >>= 8; 201 - data |= buf[i] << 24; 202 - if ((i & 3) == 2) 203 - writel(data, i2c->regs + MXS_I2C_DATA); 204 - } 205 - 206 - /* Write out the remaining bytes if any */ 207 - shifts_left = 24 - (i & 3) * 8; 208 - if (shifts_left) 209 - writel(data >> shifts_left, i2c->regs + MXS_I2C_DATA); 210 - } 211 - 212 - /* 213 - * TODO: should be replaceable with a waitqueue and RD_QUEUE_IRQ (setting the 214 - * rd_threshold to 1). Couldn't get this to work, though. 
215 - */ 216 - static int mxs_i2c_wait_for_data(struct mxs_i2c_dev *i2c) 217 - { 218 - unsigned long timeout = jiffies + msecs_to_jiffies(1000); 219 - 220 - while (readl(i2c->regs + MXS_I2C_QUEUESTAT) 221 - & MXS_I2C_QUEUESTAT_RD_QUEUE_EMPTY) { 222 - if (time_after(jiffies, timeout)) 223 - return -ETIMEDOUT; 224 - cond_resched(); 225 - } 226 - 227 - return 0; 228 - } 229 - 230 - static int mxs_i2c_finish_read(struct mxs_i2c_dev *i2c, u8 *buf, int len) 231 - { 232 - u32 uninitialized_var(data); 233 - int i; 234 - 235 - for (i = 0; i < len; i++) { 236 - if ((i & 3) == 0) { 237 - if (mxs_i2c_wait_for_data(i2c)) 238 - return -ETIMEDOUT; 239 - data = readl(i2c->regs + MXS_I2C_QUEUEDATA); 240 - } 241 - buf[i] = data & 0xff; 242 - data >>= 8; 243 - } 244 - 245 - return 0; 246 175 } 247 176 248 177 static void mxs_i2c_dma_finish(struct mxs_i2c_dev *i2c) ··· 317 432 init_completion(&i2c->cmd_complete); 318 433 i2c->cmd_err = 0; 319 434 320 - if (i2c->dma_mode) { 321 - ret = mxs_i2c_dma_setup_xfer(adap, msg, flags); 322 - if (ret) 323 - return ret; 324 - } else { 325 - if (msg->flags & I2C_M_RD) { 326 - mxs_i2c_pioq_setup_read(i2c, msg->addr, 327 - msg->len, flags); 328 - } else { 329 - mxs_i2c_pioq_setup_write(i2c, msg->addr, msg->buf, 330 - msg->len, flags); 331 - } 332 - 333 - writel(MXS_I2C_QUEUECTRL_QUEUE_RUN, 334 - i2c->regs + MXS_I2C_QUEUECTRL_SET); 335 - } 435 + ret = mxs_i2c_dma_setup_xfer(adap, msg, flags); 436 + if (ret) 437 + return ret; 336 438 337 439 ret = wait_for_completion_timeout(&i2c->cmd_complete, 338 440 msecs_to_jiffies(1000)); 339 441 if (ret == 0) 340 442 goto timeout; 341 443 342 - if (!i2c->dma_mode && !i2c->cmd_err && (msg->flags & I2C_M_RD)) { 343 - ret = mxs_i2c_finish_read(i2c, msg->buf, msg->len); 344 - if (ret) 345 - goto timeout; 346 - } 347 - 348 444 if (i2c->cmd_err == -ENXIO) 349 445 mxs_i2c_reset(i2c); 350 - else 351 - writel(MXS_I2C_QUEUECTRL_QUEUE_RUN, 352 - i2c->regs + MXS_I2C_QUEUECTRL_CLR); 353 446 354 447 dev_dbg(i2c->dev, "Done 
with err=%d\n", i2c->cmd_err); 355 448 ··· 335 472 336 473 timeout: 337 474 dev_dbg(i2c->dev, "Timeout!\n"); 338 - if (i2c->dma_mode) 339 - mxs_i2c_dma_finish(i2c); 475 + mxs_i2c_dma_finish(i2c); 340 476 mxs_i2c_reset(i2c); 341 477 return -ETIMEDOUT; 342 478 } ··· 364 502 { 365 503 struct mxs_i2c_dev *i2c = dev_id; 366 504 u32 stat = readl(i2c->regs + MXS_I2C_CTRL1) & MXS_I2C_IRQ_MASK; 367 - bool is_last_cmd; 368 505 369 506 if (!stat) 370 507 return IRQ_NONE; ··· 375 514 MXS_I2C_CTRL1_SLAVE_STOP_IRQ | MXS_I2C_CTRL1_SLAVE_IRQ)) 376 515 /* MXS_I2C_CTRL1_OVERSIZE_XFER_TERM_IRQ is only for slaves */ 377 516 i2c->cmd_err = -EIO; 378 - 379 - if (!i2c->dma_mode) { 380 - is_last_cmd = (readl(i2c->regs + MXS_I2C_QUEUESTAT) & 381 - MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK) == 0; 382 - 383 - if (is_last_cmd || i2c->cmd_err) 384 - complete(&i2c->cmd_complete); 385 - } 386 517 387 518 writel(stat, i2c->regs + MXS_I2C_CTRL1_CLR); 388 519 ··· 409 556 int ret; 410 557 411 558 /* 412 - * The MXS I2C DMA mode is prefered and enabled by default. 413 - * The PIO mode is still supported, but should be used only 414 - * for debuging purposes etc. 415 - */ 416 - i2c->dma_mode = !use_pioqueue; 417 - if (!i2c->dma_mode) 418 - dev_info(dev, "Using PIOQUEUE mode for I2C transfers!\n"); 419 - 420 - /* 421 559 * TODO: This is a temporary solution and should be changed 422 560 * to use generic DMA binding later when the helpers get in. 
423 561 */ 424 562 ret = of_property_read_u32(node, "fsl,i2c-dma-channel", 425 563 &i2c->dma_channel); 426 564 if (ret) { 427 - dev_warn(dev, "Failed to get DMA channel, using PIOQUEUE!\n"); 428 - i2c->dma_mode = 0; 565 + dev_err(dev, "Failed to get DMA channel!\n"); 566 + return -ENODEV; 429 567 } 430 568 431 569 ret = of_property_read_u32(node, "clock-frequency", &speed); ··· 478 634 } 479 635 480 636 /* Setup the DMA */ 481 - if (i2c->dma_mode) { 482 - dma_cap_zero(mask); 483 - dma_cap_set(DMA_SLAVE, mask); 484 - i2c->dma_data.chan_irq = dmairq; 485 - i2c->dmach = dma_request_channel(mask, mxs_i2c_dma_filter, i2c); 486 - if (!i2c->dmach) { 487 - dev_err(dev, "Failed to request dma\n"); 488 - return -ENODEV; 489 - } 637 + dma_cap_zero(mask); 638 + dma_cap_set(DMA_SLAVE, mask); 639 + i2c->dma_data.chan_irq = dmairq; 640 + i2c->dmach = dma_request_channel(mask, mxs_i2c_dma_filter, i2c); 641 + if (!i2c->dmach) { 642 + dev_err(dev, "Failed to request dma\n"); 643 + return -ENODEV; 490 644 } 491 645 492 646 platform_set_drvdata(pdev, i2c);
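The removed PIOQUEUE write path packed the slave address and payload bytes LSB-first into 32-bit words, as its block comment describes (slave_addr in bits 7:0, buffer+0 in bits 15:8, and so on). A standalone sketch of that packing:

```c
#include <assert.h>
#include <stdint.h>

/* Pack four bytes LSB-first into one u32, matching the layout shown in
 * the removed mxs_i2c_pioq_setup_write() comment:
 *   bits  7:0  -> b0 (slave_addr)
 *   bits 15:8  -> b1 (buffer+0)
 *   bits 23:16 -> b2 (buffer+1)
 *   bits 31:24 -> b3 (buffer+2)
 */
static uint32_t pioq_pack(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
	return (uint32_t)b0 | ((uint32_t)b1 << 8) |
	       ((uint32_t)b2 << 16) | ((uint32_t)b3 << 24);
}
```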
+7 -2
drivers/i2c/busses/i2c-nomadik.c
··· 644 644 645 645 pm_runtime_get_sync(&dev->adev->dev); 646 646 647 - clk_enable(dev->clk); 647 + status = clk_prepare_enable(dev->clk); 648 + if (status) { 649 + dev_err(&dev->adev->dev, "can't prepare_enable clock\n"); 650 + goto out_clk; 651 + } 648 652 649 653 status = init_hw(dev); 650 654 if (status) ··· 675 671 } 676 672 677 673 out: 678 - clk_disable(dev->clk); 674 + clk_disable_unprepare(dev->clk); 675 + out_clk: 679 676 pm_runtime_put_sync(&dev->adev->dev); 680 677 681 678 dev->busy = false;
+1 -1
drivers/i2c/busses/i2c-tegra.c
··· 742 742 } 743 743 744 744 ret = devm_request_irq(&pdev->dev, i2c_dev->irq, 745 - tegra_i2c_isr, 0, pdev->name, i2c_dev); 745 + tegra_i2c_isr, 0, dev_name(&pdev->dev), i2c_dev); 746 746 if (ret) { 747 747 dev_err(&pdev->dev, "Failed to request irq %i\n", i2c_dev->irq); 748 748 return ret;
+1 -1
drivers/i2c/muxes/i2c-mux-pinctrl.c
··· 169 169 mux->busses = devm_kzalloc(&pdev->dev, 170 170 sizeof(mux->busses) * mux->pdata->bus_count, 171 171 GFP_KERNEL); 172 - if (!mux->states) { 172 + if (!mux->busses) { 173 173 dev_err(&pdev->dev, "Cannot allocate busses\n"); 174 174 ret = -ENOMEM; 175 175 goto err;
+2 -1
drivers/irqchip/irq-bcm2835.c
··· 168 168 } 169 169 170 170 static struct of_device_id irq_of_match[] __initconst = { 171 - { .compatible = "brcm,bcm2835-armctrl-ic", .data = armctrl_of_init } 171 + { .compatible = "brcm,bcm2835-armctrl-ic", .data = armctrl_of_init }, 172 + { } 172 173 }; 173 174 174 175 void __init bcm2835_init_irq(void)
-21
drivers/leds/ledtrig-cpu.c
··· 33 33 struct led_trigger_cpu { 34 34 char name[MAX_NAME_LEN]; 35 35 struct led_trigger *_trig; 36 - struct mutex lock; 37 - int lock_is_inited; 38 36 }; 39 37 40 38 static DEFINE_PER_CPU(struct led_trigger_cpu, cpu_trig); ··· 47 49 void ledtrig_cpu(enum cpu_led_event ledevt) 48 50 { 49 51 struct led_trigger_cpu *trig = &__get_cpu_var(cpu_trig); 50 - 51 - /* mutex lock should be initialized before calling mutex_call() */ 52 - if (!trig->lock_is_inited) 53 - return; 54 - 55 - mutex_lock(&trig->lock); 56 52 57 53 /* Locate the correct CPU LED */ 58 54 switch (ledevt) { ··· 67 75 /* Will leave the LED as it is */ 68 76 break; 69 77 } 70 - 71 - mutex_unlock(&trig->lock); 72 78 } 73 79 EXPORT_SYMBOL(ledtrig_cpu); 74 80 ··· 107 117 for_each_possible_cpu(cpu) { 108 118 struct led_trigger_cpu *trig = &per_cpu(cpu_trig, cpu); 109 119 110 - mutex_init(&trig->lock); 111 - 112 120 snprintf(trig->name, MAX_NAME_LEN, "cpu%d", cpu); 113 121 114 - mutex_lock(&trig->lock); 115 122 led_trigger_register_simple(trig->name, &trig->_trig); 116 - trig->lock_is_inited = 1; 117 - mutex_unlock(&trig->lock); 118 123 } 119 124 120 125 register_syscore_ops(&ledtrig_cpu_syscore_ops); ··· 127 142 for_each_possible_cpu(cpu) { 128 143 struct led_trigger_cpu *trig = &per_cpu(cpu_trig, cpu); 129 144 130 - mutex_lock(&trig->lock); 131 - 132 145 led_trigger_unregister_simple(trig->_trig); 133 146 trig->_trig = NULL; 134 147 memset(trig->name, 0, MAX_NAME_LEN); 135 - trig->lock_is_inited = 0; 136 - 137 - mutex_unlock(&trig->lock); 138 - mutex_destroy(&trig->lock); 139 148 } 140 149 141 150 unregister_syscore_ops(&ledtrig_cpu_syscore_ops);
+4 -4
drivers/mmc/host/dw_mmc-exynos.c
··· 208 208 MMC_CAP_CMD23, 209 209 }; 210 210 211 - static struct dw_mci_drv_data exynos5250_drv_data = { 211 + static const struct dw_mci_drv_data exynos5250_drv_data = { 212 212 .caps = exynos5250_dwmmc_caps, 213 213 .init = dw_mci_exynos_priv_init, 214 214 .setup_clock = dw_mci_exynos_setup_clock, ··· 220 220 221 221 static const struct of_device_id dw_mci_exynos_match[] = { 222 222 { .compatible = "samsung,exynos5250-dw-mshc", 223 - .data = (void *)&exynos5250_drv_data, }, 223 + .data = &exynos5250_drv_data, }, 224 224 {}, 225 225 }; 226 - MODULE_DEVICE_TABLE(of, dw_mci_pltfm_match); 226 + MODULE_DEVICE_TABLE(of, dw_mci_exynos_match); 227 227 228 228 int dw_mci_exynos_probe(struct platform_device *pdev) 229 229 { 230 - struct dw_mci_drv_data *drv_data; 230 + const struct dw_mci_drv_data *drv_data; 231 231 const struct of_device_id *match; 232 232 233 233 match = of_match_node(dw_mci_exynos_match, pdev->dev.of_node);
+3 -3
drivers/mmc/host/dw_mmc-pltfm.c
··· 24 24 #include "dw_mmc.h" 25 25 26 26 int dw_mci_pltfm_register(struct platform_device *pdev, 27 - struct dw_mci_drv_data *drv_data) 27 + const struct dw_mci_drv_data *drv_data) 28 28 { 29 29 struct dw_mci *host; 30 30 struct resource *regs; ··· 50 50 if (!host->regs) 51 51 return -ENOMEM; 52 52 53 - if (host->drv_data->init) { 54 - ret = host->drv_data->init(host); 53 + if (drv_data && drv_data->init) { 54 + ret = drv_data->init(host); 55 55 if (ret) 56 56 return ret; 57 57 }
+1 -1
drivers/mmc/host/dw_mmc-pltfm.h
··· 13 13 #define _DW_MMC_PLTFM_H_ 14 14 15 15 extern int dw_mci_pltfm_register(struct platform_device *pdev, 16 - struct dw_mci_drv_data *drv_data); 16 + const struct dw_mci_drv_data *drv_data); 17 17 extern int __devexit dw_mci_pltfm_remove(struct platform_device *pdev); 18 18 extern const struct dev_pm_ops dw_mci_pltfm_pmops; 19 19
+34 -28
drivers/mmc/host/dw_mmc.c
··· 232 232 { 233 233 struct mmc_data *data; 234 234 struct dw_mci_slot *slot = mmc_priv(mmc); 235 + struct dw_mci_drv_data *drv_data = slot->host->drv_data; 235 236 u32 cmdr; 236 237 cmd->error = -EINPROGRESS; 237 238 ··· 262 261 cmdr |= SDMMC_CMD_DAT_WR; 263 262 } 264 263 265 - if (slot->host->drv_data->prepare_command) 266 - slot->host->drv_data->prepare_command(slot->host, &cmdr); 264 + if (drv_data && drv_data->prepare_command) 265 + drv_data->prepare_command(slot->host, &cmdr); 267 266 268 267 return cmdr; 269 268 } ··· 435 434 return 0; 436 435 } 437 436 438 - static struct dw_mci_dma_ops dw_mci_idmac_ops = { 437 + static const struct dw_mci_dma_ops dw_mci_idmac_ops = { 439 438 .init = dw_mci_idmac_init, 440 439 .start = dw_mci_idmac_start_dma, 441 440 .stop = dw_mci_idmac_stop_dma, ··· 773 772 static void dw_mci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) 774 773 { 775 774 struct dw_mci_slot *slot = mmc_priv(mmc); 775 + struct dw_mci_drv_data *drv_data = slot->host->drv_data; 776 776 u32 regs; 777 777 778 778 /* set default 1 bit mode */ ··· 809 807 slot->clock = ios->clock; 810 808 } 811 809 812 - if (slot->host->drv_data->set_ios) 813 - slot->host->drv_data->set_ios(slot->host, ios); 810 + if (drv_data && drv_data->set_ios) 811 + drv_data->set_ios(slot->host, ios); 814 812 815 813 switch (ios->power_mode) { 816 814 case MMC_POWER_UP: ··· 1817 1815 { 1818 1816 struct mmc_host *mmc; 1819 1817 struct dw_mci_slot *slot; 1818 + struct dw_mci_drv_data *drv_data = host->drv_data; 1820 1819 int ctrl_id, ret; 1821 1820 u8 bus_width; 1822 1821 ··· 1857 1854 } else { 1858 1855 ctrl_id = to_platform_device(host->dev)->id; 1859 1856 } 1860 - if (host->drv_data && host->drv_data->caps) 1861 - mmc->caps |= host->drv_data->caps[ctrl_id]; 1857 + if (drv_data && drv_data->caps) 1858 + mmc->caps |= drv_data->caps[ctrl_id]; 1862 1859 1863 1860 if (host->pdata->caps2) 1864 1861 mmc->caps2 = host->pdata->caps2; ··· 1870 1867 else 1871 1868 bus_width = 1; 1872 1869 1873 
- if (host->drv_data->setup_bus) { 1870 + if (drv_data && drv_data->setup_bus) { 1874 1871 struct device_node *slot_np; 1875 1872 slot_np = dw_mci_of_find_slot_node(host->dev, slot->id); 1876 - ret = host->drv_data->setup_bus(host, slot_np, bus_width); 1873 + ret = drv_data->setup_bus(host, slot_np, bus_width); 1877 1874 if (ret) 1878 1875 goto err_setup_bus; 1879 1876 } ··· 1971 1968 /* Determine which DMA interface to use */ 1972 1969 #ifdef CONFIG_MMC_DW_IDMAC 1973 1970 host->dma_ops = &dw_mci_idmac_ops; 1974 - dev_info(&host->dev, "Using internal DMA controller.\n"); 1971 + dev_info(host->dev, "Using internal DMA controller.\n"); 1975 1972 #endif 1976 1973 1977 1974 if (!host->dma_ops) ··· 2038 2035 struct dw_mci_board *pdata; 2039 2036 struct device *dev = host->dev; 2040 2037 struct device_node *np = dev->of_node; 2038 + struct dw_mci_drv_data *drv_data = host->drv_data; 2041 2039 int idx, ret; 2042 2040 2043 2041 pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL); ··· 2066 2062 2067 2063 of_property_read_u32(np, "card-detect-delay", &pdata->detect_delay_ms); 2068 2064 2069 - if (host->drv_data->parse_dt) { 2070 - ret = host->drv_data->parse_dt(host); 2065 + if (drv_data && drv_data->parse_dt) { 2066 + ret = drv_data->parse_dt(host); 2071 2067 if (ret) 2072 2068 return ERR_PTR(ret); 2073 2069 } ··· 2084 2080 2085 2081 int dw_mci_probe(struct dw_mci *host) 2086 2082 { 2083 + struct dw_mci_drv_data *drv_data = host->drv_data; 2087 2084 int width, i, ret = 0; 2088 2085 u32 fifo_size; 2089 2086 int init_slots = 0; ··· 2132 2127 else 2133 2128 host->bus_hz = clk_get_rate(host->ciu_clk); 2134 2129 2135 - if (host->drv_data->setup_clock) { 2136 - ret = host->drv_data->setup_clock(host); 2130 + if (drv_data && drv_data->setup_clock) { 2131 + ret = drv_data->setup_clock(host); 2137 2132 if (ret) { 2138 2133 dev_err(host->dev, 2139 2134 "implementation specific clock setup failed\n"); ··· 2233 2228 else 2234 2229 host->num_slots = ((mci_readl(host, HCON) >> 1) & 0x1F) + 1; 2235 2230 2231 + /* 2232 + * Enable interrupts for command done, data over, data empty, card det, 2233 + * receive ready and error such as transmit, receive timeout, crc error 2234 + */ 2235 + mci_writel(host, RINTSTS, 0xFFFFFFFF); 2236 + mci_writel(host, INTMASK, SDMMC_INT_CMD_DONE | SDMMC_INT_DATA_OVER | 2237 + SDMMC_INT_TXDR | SDMMC_INT_RXDR | 2238 + DW_MCI_ERROR_FLAGS | SDMMC_INT_CD); 2239 + mci_writel(host, CTRL, SDMMC_CTRL_INT_ENABLE); /* Enable mci interrupt */ 2240 + 2241 + dev_info(host->dev, "DW MMC controller at irq %d, " 2242 + "%d bit host data width, " 2243 + "%u deep fifo\n", 2244 + host->irq, width, fifo_size); 2245 + 2236 2246 /* We need at least one slot to succeed */ 2237 2247 for (i = 0; i < host->num_slots; i++) { 2238 2248 ret = dw_mci_init_slot(host, i); ··· 2277 2257 else 2278 2258 host->data_offset = DATA_240A_OFFSET; 2279 2259 2280 - /* 2281 - * Enable interrupts for command done, data over, data empty, card det, 2282 - * receive ready and error such as transmit, receive timeout, crc error 2283 - */ 2284 - mci_writel(host, RINTSTS, 0xFFFFFFFF); 2285 - mci_writel(host, INTMASK, SDMMC_INT_CMD_DONE | SDMMC_INT_DATA_OVER | 2286 - SDMMC_INT_TXDR | SDMMC_INT_RXDR | 2287 - DW_MCI_ERROR_FLAGS | SDMMC_INT_CD); 2288 - mci_writel(host, CTRL, SDMMC_CTRL_INT_ENABLE); /* Enable mci interrupt */ 2289 - 2290 - dev_info(host->dev, "DW MMC controller at irq %d, " 2291 - "%d bit host data width, " 2292 - "%u deep fifo\n", 2293 - host->irq, width, fifo_size); 2294 2260 if (host->quirks & DW_MCI_QUIRK_IDMAC_DTO) 2295 2261 dev_info(host->dev, "Internal DMAC interrupt fix enabled.\n"); 2296 2262
+1 -1
drivers/mmc/host/mxcmmc.c
··· 1134 1134 MODULE_DESCRIPTION("i.MX Multimedia Card Interface Driver"); 1135 1135 MODULE_AUTHOR("Sascha Hauer, Pengutronix"); 1136 1136 MODULE_LICENSE("GPL"); 1137 - MODULE_ALIAS("platform:imx-mmc"); 1137 + MODULE_ALIAS("platform:mxc-mmc");
+12 -7
drivers/mmc/host/omap_hsmmc.c
··· 178 178 179 179 static int omap_hsmmc_card_detect(struct device *dev, int slot) 180 180 { 181 - struct omap_mmc_platform_data *mmc = dev->platform_data; 181 + struct omap_hsmmc_host *host = dev_get_drvdata(dev); 182 + struct omap_mmc_platform_data *mmc = host->pdata; 182 183 183 184 /* NOTE: assumes card detect signal is active-low */ 184 185 return !gpio_get_value_cansleep(mmc->slots[0].switch_pin); ··· 187 186 188 187 static int omap_hsmmc_get_wp(struct device *dev, int slot) 189 188 { 190 - struct omap_mmc_platform_data *mmc = dev->platform_data; 189 + struct omap_hsmmc_host *host = dev_get_drvdata(dev); 190 + struct omap_mmc_platform_data *mmc = host->pdata; 191 191 192 192 /* NOTE: assumes write protect signal is active-high */ 193 193 return gpio_get_value_cansleep(mmc->slots[0].gpio_wp); ··· 196 194 197 195 static int omap_hsmmc_get_cover_state(struct device *dev, int slot) 198 196 { 199 - struct omap_mmc_platform_data *mmc = dev->platform_data; 197 + struct omap_hsmmc_host *host = dev_get_drvdata(dev); 198 + struct omap_mmc_platform_data *mmc = host->pdata; 200 199 201 200 /* NOTE: assumes card detect signal is active-low */ 202 201 return !gpio_get_value_cansleep(mmc->slots[0].switch_pin); ··· 207 204 208 205 static int omap_hsmmc_suspend_cdirq(struct device *dev, int slot) 209 206 { 210 - struct omap_mmc_platform_data *mmc = dev->platform_data; 207 + struct omap_hsmmc_host *host = dev_get_drvdata(dev); 208 + struct omap_mmc_platform_data *mmc = host->pdata; 211 209 212 210 disable_irq(mmc->slots[0].card_detect_irq); 213 211 return 0; ··· 216 212 217 213 static int omap_hsmmc_resume_cdirq(struct device *dev, int slot) 218 214 { 219 - struct omap_mmc_platform_data *mmc = dev->platform_data; 215 + struct omap_hsmmc_host *host = dev_get_drvdata(dev); 216 + struct omap_mmc_platform_data *mmc = host->pdata; 220 217 221 218 enable_irq(mmc->slots[0].card_detect_irq); 222 219 return 0; ··· 2014 2009 clk_put(host->dbclk); 2015 2010 } 2016 2011 2017 - mmc_free_host(host->mmc); 2012 + omap_hsmmc_gpio_free(host->pdata); 2018 2013 iounmap(host->base); 2019 - omap_hsmmc_gpio_free(pdev->dev.platform_data); 2014 + mmc_free_host(host->mmc); 2020 2015 2021 2016 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2022 2017 if (res)
+20 -18
drivers/mmc/host/sdhci-dove.c
··· 19 19 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 20 */ 21 21 22 + #include <linux/err.h> 22 23 #include <linux/io.h> 23 24 #include <linux/clk.h> 24 25 #include <linux/err.h> ··· 85 84 struct sdhci_dove_priv *priv; 86 85 int ret; 87 86 88 - ret = sdhci_pltfm_register(pdev, &sdhci_dove_pdata); 89 - if (ret) 90 - goto sdhci_dove_register_fail; 91 - 92 87 priv = devm_kzalloc(&pdev->dev, sizeof(struct sdhci_dove_priv), 93 88 GFP_KERNEL); 94 89 if (!priv) { 95 90 dev_err(&pdev->dev, "unable to allocate private data"); 96 - ret = -ENOMEM; 97 - goto sdhci_dove_allocate_fail; 91 + return -ENOMEM; 98 92 } 93 + 94 + priv->clk = clk_get(&pdev->dev, NULL); 95 + if (!IS_ERR(priv->clk)) 96 + clk_prepare_enable(priv->clk); 97 + 98 + ret = sdhci_pltfm_register(pdev, &sdhci_dove_pdata); 99 + if (ret) 100 + goto sdhci_dove_register_fail; 99 101 100 102 host = platform_get_drvdata(pdev); 101 103 pltfm_host = sdhci_priv(host); 102 104 pltfm_host->priv = priv; 103 105 104 - priv->clk = clk_get(&pdev->dev, NULL); 105 - if (!IS_ERR(priv->clk)) 106 - clk_prepare_enable(priv->clk); 107 106 return 0; 108 107 109 - sdhci_dove_allocate_fail: 110 - sdhci_pltfm_unregister(pdev); 111 108 sdhci_dove_register_fail: 109 + if (!IS_ERR(priv->clk)) { 110 + clk_disable_unprepare(priv->clk); 111 + clk_put(priv->clk); 112 + } 112 113 return ret; 113 114 } 114 115 ··· 120 117 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 121 118 struct sdhci_dove_priv *priv = pltfm_host->priv; 122 119 123 - if (priv->clk) { 124 - if (!IS_ERR(priv->clk)) { 125 - clk_disable_unprepare(priv->clk); 126 - clk_put(priv->clk); 127 - } 128 - devm_kfree(&pdev->dev, priv->clk); 120 + sdhci_pltfm_unregister(pdev); 121 + 122 + if (!IS_ERR(priv->clk)) { 123 + clk_disable_unprepare(priv->clk); 124 + clk_put(priv->clk); 129 125 } 130 - return sdhci_pltfm_unregister(pdev); 126 + return 0; 131 127 } 132 128 133 129 static const struct of_device_id sdhci_dove_of_match_table[] __devinitdata = {
+11
drivers/mmc/host/sdhci-of-esdhc.c
··· 169 169 } 170 170 #endif 171 171 172 + static void esdhc_of_platform_init(struct sdhci_host *host) 173 + { 174 + u32 vvn; 175 + 176 + vvn = in_be32(host->ioaddr + SDHCI_SLOT_INT_STATUS); 177 + vvn = (vvn & SDHCI_VENDOR_VER_MASK) >> SDHCI_VENDOR_VER_SHIFT; 178 + if (vvn == VENDOR_V_22) 179 + host->quirks2 |= SDHCI_QUIRK2_HOST_NO_CMD23; 180 + } 181 + 172 182 static struct sdhci_ops sdhci_esdhc_ops = { 173 183 .read_l = esdhc_readl, 174 184 .read_w = esdhc_readw, ··· 190 180 .enable_dma = esdhc_of_enable_dma, 191 181 .get_max_clock = esdhc_of_get_max_clock, 192 182 .get_min_clock = esdhc_of_get_min_clock, 183 + .platform_init = esdhc_of_platform_init, 193 184 #ifdef CONFIG_PM 194 185 .platform_suspend = esdhc_of_suspend, 195 186 .platform_resume = esdhc_of_resume,
+1 -1
drivers/mmc/host/sdhci-pci.c
··· 1196 1196 return ERR_PTR(-ENODEV); 1197 1197 } 1198 1198 1199 - if (pci_resource_len(pdev, bar) != 0x100) { 1199 + if (pci_resource_len(pdev, bar) < 0x100) { 1200 1200 dev_err(&pdev->dev, "Invalid iomem size. You may " 1201 1201 "experience problems.\n"); 1202 1202 }
+7
drivers/mmc/host/sdhci-pltfm.c
··· 150 150 goto err_remap; 151 151 } 152 152 153 + /* 154 + * Some platforms need to probe the controller to be able to 155 + * determine which caps should be used. 156 + */ 157 + if (host->ops && host->ops->platform_init) 158 + host->ops->platform_init(host); 159 + 153 160 platform_set_drvdata(pdev, host); 154 161 155 162 return host;
+16 -14
drivers/mmc/host/sdhci-s3c.c
··· 211 211 if (ourhost->cur_clk != best_src) { 212 212 struct clk *clk = ourhost->clk_bus[best_src]; 213 213 214 - clk_enable(clk); 215 - clk_disable(ourhost->clk_bus[ourhost->cur_clk]); 214 + clk_prepare_enable(clk); 215 + clk_disable_unprepare(ourhost->clk_bus[ourhost->cur_clk]); 216 216 217 217 /* turn clock off to card before changing clock source */ 218 218 writew(0, host->ioaddr + SDHCI_CLOCK_CONTROL); ··· 607 607 } 608 608 609 609 /* enable the local io clock and keep it running for the moment. */ 610 - clk_enable(sc->clk_io); 610 + clk_prepare_enable(sc->clk_io); 611 611 612 612 for (clks = 0, ptr = 0; ptr < MAX_BUS_CLK; ptr++) { 613 613 struct clk *clk; ··· 638 638 } 639 639 640 640 #ifndef CONFIG_PM_RUNTIME 641 - clk_enable(sc->clk_bus[sc->cur_clk]); 641 + clk_prepare_enable(sc->clk_bus[sc->cur_clk]); 642 642 #endif 643 643 644 644 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ··· 747 747 sdhci_s3c_setup_card_detect_gpio(sc); 748 748 749 749 #ifdef CONFIG_PM_RUNTIME 750 - clk_disable(sc->clk_io); 750 + if (pdata->cd_type != S3C_SDHCI_CD_INTERNAL) 751 + clk_disable_unprepare(sc->clk_io); 751 752 #endif 752 753 return 0; 753 754 754 755 err_req_regs: 755 756 #ifndef CONFIG_PM_RUNTIME 756 - clk_disable(sc->clk_bus[sc->cur_clk]); 757 + clk_disable_unprepare(sc->clk_bus[sc->cur_clk]); 757 758 #endif 758 759 for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) { 759 760 if (sc->clk_bus[ptr]) { ··· 763 762 } 764 763 765 764 err_no_busclks: 766 - clk_disable(sc->clk_io); 765 + clk_disable_unprepare(sc->clk_io); 767 766 clk_put(sc->clk_io); 768 767 769 768 err_io_clk: ··· 795 794 gpio_free(sc->ext_cd_gpio); 796 795 797 796 #ifdef CONFIG_PM_RUNTIME 798 - clk_enable(sc->clk_io); 797 + if (pdata->cd_type != S3C_SDHCI_CD_INTERNAL) 798 + clk_prepare_enable(sc->clk_io); 799 799 #endif 800 800 sdhci_remove_host(host, 1); 801 801 ··· 804 802 pm_runtime_disable(&pdev->dev); 805 803 806 804 #ifndef CONFIG_PM_RUNTIME 807 805 - clk_disable_unprepare(sc->clk_bus[sc->cur_clk]); 808 806 #endif 809 807 for (ptr = 0; ptr < MAX_BUS_CLK; ptr++) { 810 808 if (sc->clk_bus[ptr]) { 811 809 clk_put(sc->clk_bus[ptr]); 812 810 } 813 811 } 814 812 clk_disable_unprepare(sc->clk_io); 815 813 clk_put(sc->clk_io); 816 814 817 815 if (pdev->dev.of_node) { ··· 851 849 852 850 ret = sdhci_runtime_suspend_host(host); 853 851 854 - clk_disable(ourhost->clk_bus[ourhost->cur_clk]); 855 - clk_disable(busclk); 852 + clk_disable_unprepare(ourhost->clk_bus[ourhost->cur_clk]); 853 + clk_disable_unprepare(busclk); 856 854 return ret; 857 855 } ··· 863 861 struct clk *busclk = ourhost->clk_io; 864 862 int ret; 865 863 866 - clk_enable(busclk); 867 - clk_enable(ourhost->clk_bus[ourhost->cur_clk]); 864 + clk_prepare_enable(busclk); 865 + clk_prepare_enable(ourhost->clk_bus[ourhost->cur_clk]); 868 866 ret = sdhci_runtime_resume_host(host); 869 867 return ret; 870 868 }
+27 -15
drivers/mmc/host/sdhci.c
··· 1315 1315 */ 1316 1316 if ((host->flags & SDHCI_NEEDS_RETUNING) && 1317 1317 !(present_state & (SDHCI_DOING_WRITE | SDHCI_DOING_READ))) { 1318 - /* eMMC uses cmd21 while sd and sdio use cmd19 */ 1319 - tuning_opcode = mmc->card->type == MMC_TYPE_MMC ? 1320 - MMC_SEND_TUNING_BLOCK_HS200 : 1321 - MMC_SEND_TUNING_BLOCK; 1322 - spin_unlock_irqrestore(&host->lock, flags); 1323 - sdhci_execute_tuning(mmc, tuning_opcode); 1324 - spin_lock_irqsave(&host->lock, flags); 1318 + if (mmc->card) { 1319 + /* eMMC uses cmd21 but sd and sdio use cmd19 */ 1320 + tuning_opcode = 1321 + mmc->card->type == MMC_TYPE_MMC ? 1322 + MMC_SEND_TUNING_BLOCK_HS200 : 1323 + MMC_SEND_TUNING_BLOCK; 1324 + spin_unlock_irqrestore(&host->lock, flags); 1325 + sdhci_execute_tuning(mmc, tuning_opcode); 1326 + spin_lock_irqsave(&host->lock, flags); 1325 1327 1326 - /* Restore original mmc_request structure */ 1327 - host->mrq = mrq; 1328 + /* Restore original mmc_request structure */ 1329 + host->mrq = mrq; 1330 + } 1328 1331 } 1329 1332 1330 1333 if (mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23)) ··· 2840 2837 if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA)) 2841 2838 mmc->caps |= MMC_CAP_4_BIT_DATA; 2842 2839 2840 + if (host->quirks2 & SDHCI_QUIRK2_HOST_NO_CMD23) 2841 + mmc->caps &= ~MMC_CAP_CMD23; 2842 + 2843 2843 if (caps[0] & SDHCI_CAN_DO_HISPD) 2844 2844 mmc->caps |= MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED; 2845 2845 ··· 2852 2846 2853 2847 /* If vqmmc regulator and no 1.8V signalling, then there's no UHS */ 2854 2848 host->vqmmc = regulator_get(mmc_dev(mmc), "vqmmc"); 2855 - if (IS_ERR(host->vqmmc)) { 2856 - pr_info("%s: no vqmmc regulator found\n", mmc_hostname(mmc)); 2857 - host->vqmmc = NULL; 2849 + if (IS_ERR_OR_NULL(host->vqmmc)) { 2850 + if (PTR_ERR(host->vqmmc) < 0) { 2851 + pr_info("%s: no vqmmc regulator found\n", 2852 + mmc_hostname(mmc)); 2853 + host->vqmmc = NULL; 2854 + } 2858 2855 } 2859 2856 else if (regulator_is_supported_voltage(host->vqmmc, 1800000, 1800000)) 2860 2857 regulator_enable(host->vqmmc); ··· 2913 2904 ocr_avail = 0; 2914 2905 2915 2906 host->vmmc = regulator_get(mmc_dev(mmc), "vmmc"); 2916 - if (IS_ERR(host->vmmc)) { 2917 - pr_info("%s: no vmmc regulator found\n", mmc_hostname(mmc)); 2918 - host->vmmc = NULL; 2907 + if (IS_ERR_OR_NULL(host->vmmc)) { 2908 + if (PTR_ERR(host->vmmc) < 0) { 2909 + pr_info("%s: no vmmc regulator found\n", 2910 + mmc_hostname(mmc)); 2911 + host->vmmc = NULL; 2912 + } 2919 2913 } else 2920 2914 regulator_enable(host->vmmc); 2921 2915
+1
drivers/mmc/host/sdhci.h
··· 278 278 void (*hw_reset)(struct sdhci_host *host); 279 279 void (*platform_suspend)(struct sdhci_host *host); 280 280 void (*platform_resume)(struct sdhci_host *host); 281 + void (*platform_init)(struct sdhci_host *host); 281 282 }; 282 283 283 284 #ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
+1 -1
drivers/mmc/host/sh_mmcif.c
··· 1466 1466 1467 1467 platform_set_drvdata(pdev, NULL); 1468 1468 1469 + clk_disable(host->hclk); 1469 1470 mmc_free_host(host->mmc); 1470 1471 pm_runtime_put_sync(&pdev->dev); 1471 - clk_disable(host->hclk); 1472 1472 pm_runtime_disable(&pdev->dev); 1473 1473 1474 1474 return 0;
+8 -20
drivers/net/ethernet/jme.c
··· 1860 1860 jme_clear_pm(jme); 1861 1861 JME_NAPI_ENABLE(jme); 1862 1862 1863 - tasklet_enable(&jme->linkch_task); 1864 - tasklet_enable(&jme->txclean_task); 1865 - tasklet_hi_enable(&jme->rxclean_task); 1866 - tasklet_hi_enable(&jme->rxempty_task); 1863 + tasklet_init(&jme->linkch_task, jme_link_change_tasklet, 1864 + (unsigned long) jme); 1865 + tasklet_init(&jme->txclean_task, jme_tx_clean_tasklet, 1866 + (unsigned long) jme); 1867 + tasklet_init(&jme->rxclean_task, jme_rx_clean_tasklet, 1868 + (unsigned long) jme); 1869 + tasklet_init(&jme->rxempty_task, jme_rx_empty_tasklet, 1870 + (unsigned long) jme); 1867 1871 1868 1872 rc = jme_request_irq(jme); 1869 1873 if (rc) ··· 3083 3079 tasklet_init(&jme->pcc_task, 3084 3080 jme_pcc_tasklet, 3085 3081 (unsigned long) jme); 3086 - tasklet_init(&jme->linkch_task, 3087 - jme_link_change_tasklet, 3088 - (unsigned long) jme); 3089 - tasklet_init(&jme->txclean_task, 3090 - jme_tx_clean_tasklet, 3091 - (unsigned long) jme); 3092 - tasklet_init(&jme->rxclean_task, 3093 - jme_rx_clean_tasklet, 3094 - (unsigned long) jme); 3095 - tasklet_init(&jme->rxempty_task, 3096 - jme_rx_empty_tasklet, 3097 - (unsigned long) jme); 3098 - tasklet_disable_nosync(&jme->linkch_task); 3099 - tasklet_disable_nosync(&jme->txclean_task); 3100 - tasklet_disable_nosync(&jme->rxclean_task); 3101 - tasklet_disable_nosync(&jme->rxempty_task); 3102 3082 jme->dpi.cur = PCC_P1; 3103 3083 3104 3084 jme->reg_ghc = 0;
+4 -12
drivers/net/ethernet/micrel/ksz884x.c
··· 5459 5459 rc = request_irq(dev->irq, netdev_intr, IRQF_SHARED, dev->name, dev); 5460 5460 if (rc) 5461 5461 return rc; 5462 - tasklet_enable(&hw_priv->rx_tasklet); 5463 - tasklet_enable(&hw_priv->tx_tasklet); 5462 + tasklet_init(&hw_priv->rx_tasklet, rx_proc_task, 5463 + (unsigned long) hw_priv); 5464 + tasklet_init(&hw_priv->tx_tasklet, tx_proc_task, 5465 + (unsigned long) hw_priv); 5464 5466 5465 5467 hw->promiscuous = 0; 5466 5468 hw->all_multi = 0; ··· 7034 7032 7035 7033 spin_lock_init(&hw_priv->hwlock); 7036 7034 mutex_init(&hw_priv->lock); 7037 - 7038 - /* tasklet is enabled. */ 7039 - tasklet_init(&hw_priv->rx_tasklet, rx_proc_task, 7040 - (unsigned long) hw_priv); 7041 - tasklet_init(&hw_priv->tx_tasklet, tx_proc_task, 7042 - (unsigned long) hw_priv); 7043 - 7044 - /* tasklet_enable will decrement the atomic counter. */ 7045 - tasklet_disable(&hw_priv->rx_tasklet); 7046 - tasklet_disable(&hw_priv->tx_tasklet); 7047 7035 7048 7036 for (i = 0; i < TOTAL_PORT_NUM; i++) 7049 7037 init_waitqueue_head(&hw_priv->counter[i].counter);
+15 -2
drivers/net/ethernet/smsc/smsc911x.c
··· 2110 2110 static int __devinit smsc911x_init(struct net_device *dev) 2111 2111 { 2112 2112 struct smsc911x_data *pdata = netdev_priv(dev); 2113 - unsigned int byte_test; 2113 + unsigned int byte_test, mask; 2114 2114 unsigned int to = 100; 2115 2115 2116 2116 SMSC_TRACE(pdata, probe, "Driver Parameters:"); ··· 2130 2130 /* 2131 2131 * poll the READY bit in PMT_CTRL. Any other access to the device is 2132 2132 * forbidden while this bit isn't set. Try for 100ms 2133 + * 2134 + * Note that this test is done before the WORD_SWAP register is 2135 + * programmed. So in some configurations the READY bit is at 16 before 2136 + * WORD_SWAP is written to. This issue is worked around by waiting 2137 + * until either bit 0 or bit 16 gets set in PMT_CTRL. 2138 + * 2139 + * SMSC has confirmed that checking bit 16 (marked as reserved in 2140 + * the datasheet) is fine since these bits "will either never be set 2141 + * or can only go high after READY does (so also indicate the device 2142 + * is ready)". 2133 2143 */ 2134 - while (!(smsc911x_reg_read(pdata, PMT_CTRL) & PMT_CTRL_READY_) && --to) 2144 + 2145 + mask = PMT_CTRL_READY_ | swahw32(PMT_CTRL_READY_); 2146 + while (!(smsc911x_reg_read(pdata, PMT_CTRL) & mask) && --to) 2135 2147 udelay(1000); 2148 + 2136 2149 if (to == 0) { 2137 2150 pr_err("Device not READY in 100ms aborting\n"); 2138 2151 return -ENODEV;
+1 -1
drivers/net/ethernet/tile/tilegx.c
··· 917 917 ingress_irq = rc; 918 918 tile_irq_activate(ingress_irq, TILE_IRQ_PERCPU); 919 919 rc = request_irq(ingress_irq, tile_net_handle_ingress_irq, 920 - 0, NULL, NULL); 920 + 0, "tile_net", NULL); 921 921 if (rc != 0) { 922 922 netdev_err(dev, "request_irq failed: %d\n", rc); 923 923 destroy_irq(ingress_irq);
+6 -6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 942 942 phy_start(lp->phy_dev); 943 943 } 944 944 945 + /* Enable tasklets for Axi DMA error handling */ 946 + tasklet_init(&lp->dma_err_tasklet, axienet_dma_err_handler, 947 + (unsigned long) lp); 948 + 945 949 /* Enable interrupts for Axi DMA Tx */ 946 950 ret = request_irq(lp->tx_irq, axienet_tx_irq, 0, ndev->name, ndev); 947 951 if (ret) ··· 954 950 ret = request_irq(lp->rx_irq, axienet_rx_irq, 0, ndev->name, ndev); 955 951 if (ret) 956 952 goto err_rx_irq; 957 - /* Enable tasklets for Axi DMA error handling */ 958 - tasklet_enable(&lp->dma_err_tasklet); 953 + 959 954 return 0; 960 955 961 956 err_rx_irq: ··· 963 960 if (lp->phy_dev) 964 961 phy_disconnect(lp->phy_dev); 965 962 lp->phy_dev = NULL; 963 + tasklet_kill(&lp->dma_err_tasklet); 966 964 dev_err(lp->dev, "request_irq() failed\n"); 967 965 return ret; 968 966 } ··· 1616 1612 dev_err(lp->dev, "register_netdev() error (%i)\n", ret); 1617 1613 goto err_iounmap_2; 1618 1614 } 1619 - 1620 - tasklet_init(&lp->dma_err_tasklet, axienet_dma_err_handler, 1621 - (unsigned long) lp); 1622 - tasklet_disable(&lp->dma_err_tasklet); 1623 1615 1624 1616 return 0; 1625 1617
-1
drivers/net/phy/mdio-bitbang.c
··· 234 234 struct mdiobb_ctrl *ctrl = bus->priv; 235 235 236 236 module_put(ctrl->ops->owner); 237 - mdiobus_unregister(bus); 238 237 mdiobus_free(bus); 239 238 } 240 239 EXPORT_SYMBOL(free_mdio_bitbang);
+18 -4
drivers/net/usb/cdc_ncm.c
··· 440 440 ((!ctx->mbim_desc) && ((ctx->ether_desc == NULL) || (ctx->control != intf)))) 441 441 goto error; 442 442 443 - /* claim interfaces, if any */ 444 - temp = usb_driver_claim_interface(driver, ctx->data, dev); 445 - if (temp) 446 - goto error; 443 + /* claim data interface, if different from control */ 444 + if (ctx->data != ctx->control) { 445 + temp = usb_driver_claim_interface(driver, ctx->data, dev); 446 + if (temp) 447 + goto error; 448 + } 447 449 448 450 iface_no = ctx->data->cur_altsetting->desc.bInterfaceNumber; 449 451 ··· 520 518 hrtimer_cancel(&ctx->tx_timer); 521 519 522 520 tasklet_kill(&ctx->bh); 521 + 522 + /* handle devices with combined control and data interface */ 523 + if (ctx->control == ctx->data) 524 + ctx->data = NULL; 523 525 524 526 /* disconnect master --> disconnect slave */ 525 527 if (intf == ctx->control && ctx->data) { ··· 1190 1184 .bInterfaceSubClass = USB_CDC_SUBCLASS_NCM, 1191 1185 .bInterfaceProtocol = USB_CDC_PROTO_NONE, 1192 1186 .driver_info = (unsigned long) &wwan_info, 1187 + }, 1188 + 1189 + /* Huawei NCM devices disguised as vendor specific */ 1190 + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x16), 1191 + .driver_info = (unsigned long)&wwan_info, 1192 + }, 1193 + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x46), 1194 + .driver_info = (unsigned long)&wwan_info, 1193 1195 }, 1194 1196 1195 1197 /* Generic CDC-NCM devices */
+2 -2
drivers/net/usb/smsc95xx.c
··· 203 203 /* set the address, index & direction (read from PHY) */ 204 204 phy_id &= dev->mii.phy_id_mask; 205 205 idx &= dev->mii.reg_num_mask; 206 - addr = (phy_id << 11) | (idx << 6) | MII_READ_; 206 + addr = (phy_id << 11) | (idx << 6) | MII_READ_ | MII_BUSY_; 207 207 ret = smsc95xx_write_reg(dev, MII_ADDR, addr); 208 208 check_warn_goto_done(ret, "Error writing MII_ADDR"); 209 209 ··· 240 240 /* set the address, index & direction (write to PHY) */ 241 241 phy_id &= dev->mii.phy_id_mask; 242 242 idx &= dev->mii.reg_num_mask; 243 - addr = (phy_id << 11) | (idx << 6) | MII_WRITE_; 243 + addr = (phy_id << 11) | (idx << 6) | MII_WRITE_ | MII_BUSY_; 244 244 ret = smsc95xx_write_reg(dev, MII_ADDR, addr); 245 245 check_warn_goto_done(ret, "Error writing MII_ADDR"); 246 246
+7 -3
drivers/net/vxlan.c
··· 1 1 /* 2 - * VXLAN: Virtual eXtensiable Local Area Network 2 + * VXLAN: Virtual eXtensible Local Area Network 3 3 * 4 4 * Copyright (c) 2012 Vyatta Inc. 5 5 * ··· 50 50 51 51 #define VXLAN_N_VID (1u << 24) 52 52 #define VXLAN_VID_MASK (VXLAN_N_VID - 1) 53 - /* VLAN + IP header + UDP + VXLAN */ 54 - #define VXLAN_HEADROOM (4 + 20 + 8 + 8) 53 + /* IP header + UDP + VXLAN + Ethernet header */ 54 + #define VXLAN_HEADROOM (20 + 8 + 8 + 14) 55 55 56 56 #define VXLAN_FLAGS 0x08000000 /* struct vxlanhdr.vx_flags required value. */ 57 57 ··· 1102 1102 1103 1103 if (!tb[IFLA_MTU]) 1104 1104 dev->mtu = lowerdev->mtu - VXLAN_HEADROOM; 1105 + 1106 + /* update header length based on lower device */ 1107 + dev->hard_header_len = lowerdev->hard_header_len + 1108 + VXLAN_HEADROOM; 1105 1109 } 1106 1110 1107 1111 if (data[IFLA_VXLAN_TOS])
+1 -1
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 4197 4197 4198 4198 static void brcmf_wiphy_pno_params(struct wiphy *wiphy) 4199 4199 { 4200 - #ifndef CONFIG_BRCMFISCAN 4200 + #ifndef CONFIG_BRCMISCAN 4201 4201 /* scheduled scan settings */ 4202 4202 wiphy->max_sched_scan_ssids = BRCMF_PNO_MAX_PFN_COUNT; 4203 4203 wiphy->max_match_sets = BRCMF_PNO_MAX_PFN_COUNT;
+1 -1
drivers/net/wireless/iwlwifi/dvm/mac80211.c
··· 521 521 ieee80211_get_tx_rate(hw, IEEE80211_SKB_CB(skb))->bitrate); 522 522 523 523 if (iwlagn_tx_skb(priv, control->sta, skb)) 524 - dev_kfree_skb_any(skb); 524 + ieee80211_free_txskb(hw, skb); 525 525 } 526 526 527 527 static void iwlagn_mac_update_tkip_key(struct ieee80211_hw *hw,
+1 -1
drivers/net/wireless/iwlwifi/dvm/main.c
··· 2113 2113 2114 2114 info = IEEE80211_SKB_CB(skb); 2115 2115 iwl_trans_free_tx_cmd(priv->trans, info->driver_data[1]); 2116 - dev_kfree_skb_any(skb); 2116 + ieee80211_free_txskb(priv->hw, skb); 2117 2117 } 2118 2118 2119 2119 static void iwl_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state)
+21 -2
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 321 321 dma_map_page(trans->dev, page, 0, 322 322 PAGE_SIZE << trans_pcie->rx_page_order, 323 323 DMA_FROM_DEVICE); 324 + if (dma_mapping_error(trans->dev, rxb->page_dma)) { 325 + rxb->page = NULL; 326 + spin_lock_irqsave(&rxq->lock, flags); 327 + list_add(&rxb->list, &rxq->rx_used); 328 + spin_unlock_irqrestore(&rxq->lock, flags); 329 + __free_pages(page, trans_pcie->rx_page_order); 330 + return; 331 + } 324 332 /* dma address must be no more than 36 bits */ 325 333 BUG_ON(rxb->page_dma & ~DMA_BIT_MASK(36)); 326 334 /* and also 256 byte aligned! */ ··· 497 489 dma_map_page(trans->dev, rxb->page, 0, 498 490 PAGE_SIZE << trans_pcie->rx_page_order, 499 491 DMA_FROM_DEVICE); 500 - list_add_tail(&rxb->list, &rxq->rx_free); 501 - rxq->free_count++; 492 + if (dma_mapping_error(trans->dev, rxb->page_dma)) { 493 + /* 494 + * free the page(s) as well to not break 495 + * the invariant that the items on the used 496 + * list have no page(s) 497 + */ 498 + __free_pages(rxb->page, trans_pcie->rx_page_order); 499 + rxb->page = NULL; 500 + list_add_tail(&rxb->list, &rxq->rx_used); 501 + } else { 502 + list_add_tail(&rxb->list, &rxq->rx_free); 503 + rxq->free_count++; 504 + } 502 505 } else 503 506 list_add_tail(&rxb->list, &rxq->rx_used); 504 507 spin_unlock_irqrestore(&rxq->lock, flags);
-3
drivers/pci/bus.c
··· 320 320 } else 321 321 next = dev->bus_list.next; 322 322 323 - /* Run device routines with the device locked */ 324 - device_lock(&dev->dev); 325 323 retval = cb(dev, userdata); 326 - device_unlock(&dev->dev); 327 324 if (retval) 328 325 break; 329 326 }
+2 -10
drivers/pci/pci-driver.c
··· 398 398 struct pci_dev *pci_dev = to_pci_dev(dev); 399 399 struct pci_driver *drv = pci_dev->driver; 400 400 401 + pm_runtime_resume(dev); 402 + 401 403 if (drv && drv->shutdown) 402 404 drv->shutdown(pci_dev); 403 405 pci_msi_shutdown(pci_dev); ··· 410 408 * continue to do DMA 411 409 */ 412 410 pci_disable_device(pci_dev); 413 - 414 - /* 415 - * Devices may be enabled to wake up by runtime PM, but they need not 416 - * be supposed to wake up the system from its "power off" state (e.g. 417 - * ACPI S5). Therefore disable wakeup for all devices that aren't 418 - * supposed to wake up the system at this point. The state argument 419 - * will be ignored by pci_enable_wake(). 420 - */ 421 - if (!device_may_wakeup(dev)) 422 - pci_enable_wake(pci_dev, PCI_UNKNOWN, false); 423 411 } 424 412 425 413 #ifdef CONFIG_PM
-34
drivers/pci/pci-sysfs.c
··· 458 458 } 459 459 struct device_attribute vga_attr = __ATTR_RO(boot_vga); 460 460 461 - static void 462 - pci_config_pm_runtime_get(struct pci_dev *pdev) 463 - { 464 - struct device *dev = &pdev->dev; 465 - struct device *parent = dev->parent; 466 - 467 - if (parent) 468 - pm_runtime_get_sync(parent); 469 - pm_runtime_get_noresume(dev); 470 - /* 471 - * pdev->current_state is set to PCI_D3cold during suspending, 472 - * so wait until suspending completes 473 - */ 474 - pm_runtime_barrier(dev); 475 - /* 476 - * Only need to resume devices in D3cold, because config 477 - * registers are still accessible for devices suspended but 478 - * not in D3cold. 479 - */ 480 - if (pdev->current_state == PCI_D3cold) 481 - pm_runtime_resume(dev); 482 - } 483 - 484 - static void 485 - pci_config_pm_runtime_put(struct pci_dev *pdev) 486 - { 487 - struct device *dev = &pdev->dev; 488 - struct device *parent = dev->parent; 489 - 490 - pm_runtime_put(dev); 491 - if (parent) 492 - pm_runtime_put_sync(parent); 493 - } 494 - 495 461 static ssize_t 496 462 pci_read_config(struct file *filp, struct kobject *kobj, 497 463 struct bin_attribute *bin_attr,
+32
drivers/pci/pci.c
··· 1858 1858 } 1859 1859 EXPORT_SYMBOL_GPL(pci_dev_run_wake); 1860 1860 1861 + void pci_config_pm_runtime_get(struct pci_dev *pdev) 1862 + { 1863 + struct device *dev = &pdev->dev; 1864 + struct device *parent = dev->parent; 1865 + 1866 + if (parent) 1867 + pm_runtime_get_sync(parent); 1868 + pm_runtime_get_noresume(dev); 1869 + /* 1870 + * pdev->current_state is set to PCI_D3cold during suspending, 1871 + * so wait until suspending completes 1872 + */ 1873 + pm_runtime_barrier(dev); 1874 + /* 1875 + * Only need to resume devices in D3cold, because config 1876 + * registers are still accessible for devices suspended but 1877 + * not in D3cold. 1878 + */ 1879 + if (pdev->current_state == PCI_D3cold) 1880 + pm_runtime_resume(dev); 1881 + } 1882 + 1883 + void pci_config_pm_runtime_put(struct pci_dev *pdev) 1884 + { 1885 + struct device *dev = &pdev->dev; 1886 + struct device *parent = dev->parent; 1887 + 1888 + pm_runtime_put(dev); 1889 + if (parent) 1890 + pm_runtime_put_sync(parent); 1891 + } 1892 + 1861 1893 /** 1862 1894 * pci_pm_init - Initialize PM functions of given PCI device 1863 1895 * @dev: PCI device to handle.
+2
drivers/pci/pci.h
··· 72 72 extern int pci_finish_runtime_suspend(struct pci_dev *dev); 73 73 extern int __pci_pme_wakeup(struct pci_dev *dev, void *ign); 74 74 extern void pci_wakeup_bus(struct pci_bus *bus); 75 + extern void pci_config_pm_runtime_get(struct pci_dev *dev); 76 + extern void pci_config_pm_runtime_put(struct pci_dev *dev); 75 77 extern void pci_pm_init(struct pci_dev *dev); 76 78 extern void platform_pci_wakeup_init(struct pci_dev *dev); 77 79 extern void pci_allocate_cap_save_buffers(struct pci_dev *dev);
+16 -4
drivers/pci/pcie/aer/aerdrv_core.c
··· 213 213 struct aer_broadcast_data *result_data; 214 214 result_data = (struct aer_broadcast_data *) data; 215 215 216 + device_lock(&dev->dev); 216 217 dev->error_state = result_data->state; 217 218 218 219 if (!dev->driver || ··· 232 231 dev->driver ? 233 232 "no AER-aware driver" : "no driver"); 234 233 } 235 - return 0; 234 + goto out; 236 235 } 237 236 238 237 err_handler = dev->driver->err_handler; 239 238 vote = err_handler->error_detected(dev, result_data->state); 240 239 result_data->result = merge_result(result_data->result, vote); 240 + out: 241 + device_unlock(&dev->dev); 241 242 return 0; 242 243 } 243 244 ··· 250 247 struct aer_broadcast_data *result_data; 251 248 result_data = (struct aer_broadcast_data *) data; 252 249 250 + device_lock(&dev->dev); 253 251 if (!dev->driver || 254 252 !dev->driver->err_handler || 255 253 !dev->driver->err_handler->mmio_enabled) 256 - return 0; 254 + goto out; 257 255 258 256 err_handler = dev->driver->err_handler; 259 257 vote = err_handler->mmio_enabled(dev); 260 258 result_data->result = merge_result(result_data->result, vote); 259 + out: 260 + device_unlock(&dev->dev); 261 261 return 0; 262 262 } 263 263 ··· 271 265 struct aer_broadcast_data *result_data; 272 266 result_data = (struct aer_broadcast_data *) data; 273 267 268 + device_lock(&dev->dev); 274 269 if (!dev->driver || 275 270 !dev->driver->err_handler || 276 271 !dev->driver->err_handler->slot_reset) 277 - return 0; 272 + goto out; 278 273 279 274 err_handler = dev->driver->err_handler; 280 275 vote = err_handler->slot_reset(dev); 281 276 result_data->result = merge_result(result_data->result, vote); 277 + out: 278 + device_unlock(&dev->dev); 282 279 return 0; 283 280 } 284 281 ··· 289 280 { 290 281 const struct pci_error_handlers *err_handler; 291 282 283 + device_lock(&dev->dev); 292 284 dev->error_state = pci_channel_io_normal; 293 285 294 286 if (!dev->driver || 295 287 !dev->driver->err_handler || 296 288 !dev->driver->err_handler->resume) 297 - 
return 0; 289 + goto out; 298 290 299 291 err_handler = dev->driver->err_handler; 300 292 err_handler->resume(dev); 293 + out: 294 + device_unlock(&dev->dev); 301 295 return 0; 302 296 } 303 297
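The aerdrv_core.c hunk above takes `device_lock()` at the top of each broadcast callback and converts every early `return 0` into `goto out`, so the matching `device_unlock()` runs on all exit paths. A minimal userspace sketch of that pattern (all names here are illustrative; the "lock" is modelled as a depth counter so the balance is checkable):

```c
#include <stddef.h>

/* Hypothetical per-device state standing in for struct pci_dev. */
struct dev {
    int lock_depth;                  /* >0 while the "device lock" is held */
    int (*callback)(struct dev *);
    int state;
};

static void dev_lock(struct dev *d)   { d->lock_depth++; }
static void dev_unlock(struct dev *d) { d->lock_depth--; }

/* Every exit path funnels through "out", so the unlock taken at entry is
 * never skipped -- this is what replacing "return 0" with "goto out" buys
 * in the AER broadcast callbacks. */
int broadcast_one(struct dev *d, int new_state)
{
    int ret = 0;

    dev_lock(d);
    d->state = new_state;

    if (d->callback == NULL)
        goto out;                     /* early exit still releases the lock */

    ret = d->callback(d);
out:
    dev_unlock(d);
    return ret;
}

int lockdemo(void)
{
    struct dev d = { 0, NULL, 0 };
    broadcast_one(&d, 3);             /* NULL callback takes the early path */
    return d.lock_depth == 0 ? d.state : -1;
}
```

If any branch returned directly instead of jumping to `out`, `lockdemo()` would see a non-zero `lock_depth` and report the imbalance.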
+2 -1
drivers/pci/pcie/portdrv_core.c
··· 272 272 } 273 273 274 274 /* Hot-Plug Capable */ 275 - if (cap_mask & PCIE_PORT_SERVICE_HP) { 275 + if ((cap_mask & PCIE_PORT_SERVICE_HP) && 276 + dev->pcie_flags_reg & PCI_EXP_FLAGS_SLOT) { 276 277 pcie_capability_read_dword(dev, PCI_EXP_SLTCAP, &reg32); 277 278 if (reg32 & PCI_EXP_SLTCAP_HPC) { 278 279 services |= PCIE_PORT_SERVICE_HP;
+8
drivers/pci/proc.c
··· 76 76 if (!access_ok(VERIFY_WRITE, buf, cnt)) 77 77 return -EINVAL; 78 78 79 + pci_config_pm_runtime_get(dev); 80 + 79 81 if ((pos & 1) && cnt) { 80 82 unsigned char val; 81 83 pci_user_read_config_byte(dev, pos, &val); ··· 123 121 cnt--; 124 122 } 125 123 124 + pci_config_pm_runtime_put(dev); 125 + 126 126 *ppos = pos; 127 127 return nbytes; 128 128 } ··· 149 145 150 146 if (!access_ok(VERIFY_READ, buf, cnt)) 151 147 return -EINVAL; 148 + 149 + pci_config_pm_runtime_get(dev); 152 150 153 151 if ((pos & 1) && cnt) { 154 152 unsigned char val; ··· 196 190 pos++; 197 191 cnt--; 198 192 } 193 + 194 + pci_config_pm_runtime_put(dev); 199 195 200 196 *ppos = pos; 201 197 i_size_write(ino, dp->size);
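The proc.c hunk brackets the config-space read and write loops with `pci_config_pm_runtime_get()`/`_put()`, so a runtime-suspended device is woken before its config space is touched and allowed to suspend again afterwards. A toy usage-counter sketch of that bracketing (names are illustrative, not the kernel API):

```c
/* >0 means the device is being held awake by an in-flight accessor. */
static int pm_usage;

static void config_pm_get(void) { pm_usage++; }  /* wake / hold the device */
static void config_pm_put(void) { pm_usage--; }  /* allow suspend again */

/* Accessor pairs get/put around the whole body, as the hunk does. */
int proc_read_config(int *val)
{
    config_pm_get();
    *val = 42;                       /* stand-in for the register access */
    config_pm_put();
    return pm_usage;                 /* balanced: back to the entry value */
}

int pmdemo(void)
{
    int v = 0;
    int balance = proc_read_config(&v);
    return balance == 0 && v == 42;
}
```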
+2
drivers/pinctrl/Kconfig
··· 179 179 180 180 config PINCTRL_SAMSUNG 181 181 bool "Samsung pinctrl driver" 182 + depends on OF && GPIOLIB 182 183 select PINMUX 183 184 select PINCONF 184 185 185 186 config PINCTRL_EXYNOS4 186 187 bool "Pinctrl driver data for Exynos4 SoC" 188 + depends on OF && GPIOLIB 187 189 select PINCTRL_SAMSUNG 188 190 189 191 config PINCTRL_MVEBU
+1 -1
drivers/pinctrl/spear/pinctrl-spear.c
··· 244 244 else 245 245 temp = ~muxreg->val; 246 246 247 - val |= temp; 247 + val |= muxreg->mask & temp; 248 248 pmx_writel(pmx, val, muxreg->reg); 249 249 } 250 250 }
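The one-line pinctrl-spear.c fix changes `val |= temp` to `val |= muxreg->mask & temp`, i.e. a masked read-modify-write: only bits inside the field a muxreg owns may be written back. The general shape of that update, as a standalone sketch:

```c
#include <stdint.h>

/* Masked read-modify-write: clear the owned field, then OR in only
 * (mask & bits), so stray bits in "bits" can never leak into
 * neighbouring fields of the shared register. */
uint32_t apply_field(uint32_t reg, uint32_t mask, uint32_t bits)
{
    reg &= ~mask;            /* clear the field we own */
    reg |= mask & bits;      /* write only bits inside the mask */
    return reg;
}
```

Without the `mask &`, a `bits` value wider than the field (as `~muxreg->val` can be) would corrupt adjacent mux fields.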
+320 -45
drivers/pinctrl/spear/pinctrl-spear1310.c
··· 25 25 }; 26 26 27 27 /* registers */ 28 - #define PERIP_CFG 0x32C 29 - #define MCIF_SEL_SHIFT 3 28 + #define PERIP_CFG 0x3B0 29 + #define MCIF_SEL_SHIFT 5 30 30 #define MCIF_SEL_SD (0x1 << MCIF_SEL_SHIFT) 31 31 #define MCIF_SEL_CF (0x2 << MCIF_SEL_SHIFT) 32 32 #define MCIF_SEL_XD (0x3 << MCIF_SEL_SHIFT) ··· 164 164 #define PMX_SSP0_CS0_MASK (1 << 29) 165 165 #define PMX_SSP0_CS1_2_MASK (1 << 30) 166 166 167 + #define PAD_DIRECTION_SEL_0 0x65C 168 + #define PAD_DIRECTION_SEL_1 0x660 169 + #define PAD_DIRECTION_SEL_2 0x664 170 + 167 171 /* combined macros */ 168 172 #define PMX_GMII_MASK (PMX_GMIICLK_MASK | \ 169 173 PMX_GMIICOL_CRS_XFERER_MIITXCLK_MASK | \ ··· 241 237 .reg = PAD_FUNCTION_EN_0, 242 238 .mask = PMX_I2C0_MASK, 243 239 .val = PMX_I2C0_MASK, 240 + }, { 241 + .reg = PAD_DIRECTION_SEL_0, 242 + .mask = PMX_I2C0_MASK, 243 + .val = PMX_I2C0_MASK, 244 244 }, 245 245 }; 246 246 ··· 277 269 .reg = PAD_FUNCTION_EN_0, 278 270 .mask = PMX_SSP0_MASK, 279 271 .val = PMX_SSP0_MASK, 272 + }, { 273 + .reg = PAD_DIRECTION_SEL_0, 274 + .mask = PMX_SSP0_MASK, 275 + .val = PMX_SSP0_MASK, 280 276 }, 281 277 }; 282 278 ··· 306 294 .reg = PAD_FUNCTION_EN_2, 307 295 .mask = PMX_SSP0_CS0_MASK, 308 296 .val = PMX_SSP0_CS0_MASK, 297 + }, { 298 + .reg = PAD_DIRECTION_SEL_2, 299 + .mask = PMX_SSP0_CS0_MASK, 300 + .val = PMX_SSP0_CS0_MASK, 309 301 }, 310 302 }; 311 303 ··· 333 317 static struct spear_muxreg ssp0_cs1_2_muxreg[] = { 334 318 { 335 319 .reg = PAD_FUNCTION_EN_2, 320 + .mask = PMX_SSP0_CS1_2_MASK, 321 + .val = PMX_SSP0_CS1_2_MASK, 322 + }, { 323 + .reg = PAD_DIRECTION_SEL_2, 336 324 .mask = PMX_SSP0_CS1_2_MASK, 337 325 .val = PMX_SSP0_CS1_2_MASK, 338 326 }, ··· 372 352 .reg = PAD_FUNCTION_EN_0, 373 353 .mask = PMX_I2S0_MASK, 374 354 .val = PMX_I2S0_MASK, 355 + }, { 356 + .reg = PAD_DIRECTION_SEL_0, 357 + .mask = PMX_I2S0_MASK, 358 + .val = PMX_I2S0_MASK, 375 359 }, 376 360 }; 377 361 ··· 406 382 static struct spear_muxreg i2s1_muxreg[] = { 407 383 { 408 384 .reg = 
PAD_FUNCTION_EN_1, 385 + .mask = PMX_I2S1_MASK, 386 + .val = PMX_I2S1_MASK, 387 + }, { 388 + .reg = PAD_DIRECTION_SEL_1, 409 389 .mask = PMX_I2S1_MASK, 410 390 .val = PMX_I2S1_MASK, 411 391 }, ··· 446 418 .reg = PAD_FUNCTION_EN_0, 447 419 .mask = PMX_CLCD1_MASK, 448 420 .val = PMX_CLCD1_MASK, 421 + }, { 422 + .reg = PAD_DIRECTION_SEL_0, 423 + .mask = PMX_CLCD1_MASK, 424 + .val = PMX_CLCD1_MASK, 449 425 }, 450 426 }; 451 427 ··· 475 443 .reg = PAD_FUNCTION_EN_1, 476 444 .mask = PMX_CLCD2_MASK, 477 445 .val = PMX_CLCD2_MASK, 446 + }, { 447 + .reg = PAD_DIRECTION_SEL_1, 448 + .mask = PMX_CLCD2_MASK, 449 + .val = PMX_CLCD2_MASK, 478 450 }, 479 451 }; 480 452 ··· 497 461 .nmodemuxs = ARRAY_SIZE(clcd_high_res_modemux), 498 462 }; 499 463 500 - static const char *const clcd_grps[] = { "clcd_grp", "clcd_high_res" }; 464 + static const char *const clcd_grps[] = { "clcd_grp", "clcd_high_res_grp" }; 501 465 static struct spear_function clcd_function = { 502 466 .name = "clcd", 503 467 .groups = clcd_grps, ··· 513 477 .val = PMX_EGPIO_0_GRP_MASK, 514 478 }, { 515 479 .reg = PAD_FUNCTION_EN_1, 480 + .mask = PMX_EGPIO_1_GRP_MASK, 481 + .val = PMX_EGPIO_1_GRP_MASK, 482 + }, { 483 + .reg = PAD_DIRECTION_SEL_0, 484 + .mask = PMX_EGPIO_0_GRP_MASK, 485 + .val = PMX_EGPIO_0_GRP_MASK, 486 + }, { 487 + .reg = PAD_DIRECTION_SEL_1, 516 488 .mask = PMX_EGPIO_1_GRP_MASK, 517 489 .val = PMX_EGPIO_1_GRP_MASK, 518 490 }, ··· 555 511 .reg = PAD_FUNCTION_EN_0, 556 512 .mask = PMX_SMI_MASK, 557 513 .val = PMX_SMI_MASK, 514 + }, { 515 + .reg = PAD_DIRECTION_SEL_0, 516 + .mask = PMX_SMI_MASK, 517 + .val = PMX_SMI_MASK, 558 518 }, 559 519 }; 560 520 ··· 585 537 .val = PMX_SMI_MASK, 586 538 }, { 587 539 .reg = PAD_FUNCTION_EN_1, 540 + .mask = PMX_SMINCS2_MASK | PMX_SMINCS3_MASK, 541 + .val = PMX_SMINCS2_MASK | PMX_SMINCS3_MASK, 542 + }, { 543 + .reg = PAD_DIRECTION_SEL_0, 544 + .mask = PMX_SMI_MASK, 545 + .val = PMX_SMI_MASK, 546 + }, { 547 + .reg = PAD_DIRECTION_SEL_1, 588 548 .mask = 
PMX_SMINCS2_MASK | PMX_SMINCS3_MASK, 589 549 .val = PMX_SMINCS2_MASK | PMX_SMINCS3_MASK, 590 550 }, ··· 627 571 static struct spear_muxreg gmii_muxreg[] = { 628 572 { 629 573 .reg = PAD_FUNCTION_EN_0, 574 + .mask = PMX_GMII_MASK, 575 + .val = PMX_GMII_MASK, 576 + }, { 577 + .reg = PAD_DIRECTION_SEL_0, 630 578 .mask = PMX_GMII_MASK, 631 579 .val = PMX_GMII_MASK, 632 580 }, ··· 675 615 .reg = PAD_FUNCTION_EN_2, 676 616 .mask = PMX_RGMII_REG2_MASK, 677 617 .val = 0, 618 + }, { 619 + .reg = PAD_DIRECTION_SEL_0, 620 + .mask = PMX_RGMII_REG0_MASK, 621 + .val = PMX_RGMII_REG0_MASK, 622 + }, { 623 + .reg = PAD_DIRECTION_SEL_1, 624 + .mask = PMX_RGMII_REG1_MASK, 625 + .val = PMX_RGMII_REG1_MASK, 626 + }, { 627 + .reg = PAD_DIRECTION_SEL_2, 628 + .mask = PMX_RGMII_REG2_MASK, 629 + .val = PMX_RGMII_REG2_MASK, 678 630 }, 679 631 }; 680 632 ··· 721 649 .reg = PAD_FUNCTION_EN_1, 722 650 .mask = PMX_SMII_0_1_2_MASK, 723 651 .val = 0, 652 + }, { 653 + .reg = PAD_DIRECTION_SEL_1, 654 + .mask = PMX_SMII_0_1_2_MASK, 655 + .val = PMX_SMII_0_1_2_MASK, 724 656 }, 725 657 }; 726 658 ··· 757 681 .reg = PAD_FUNCTION_EN_1, 758 682 .mask = PMX_NFCE2_MASK, 759 683 .val = 0, 684 + }, { 685 + .reg = PAD_DIRECTION_SEL_1, 686 + .mask = PMX_NFCE2_MASK, 687 + .val = PMX_NFCE2_MASK, 760 688 }, 761 689 }; 762 690 ··· 801 721 .reg = PAD_FUNCTION_EN_1, 802 722 .mask = PMX_NAND8BIT_1_MASK, 803 723 .val = PMX_NAND8BIT_1_MASK, 724 + }, { 725 + .reg = PAD_DIRECTION_SEL_0, 726 + .mask = PMX_NAND8BIT_0_MASK, 727 + .val = PMX_NAND8BIT_0_MASK, 728 + }, { 729 + .reg = PAD_DIRECTION_SEL_1, 730 + .mask = PMX_NAND8BIT_1_MASK, 731 + .val = PMX_NAND8BIT_1_MASK, 804 732 }, 805 733 }; 806 734 ··· 835 747 .reg = PAD_FUNCTION_EN_1, 836 748 .mask = PMX_NAND16BIT_1_MASK, 837 749 .val = PMX_NAND16BIT_1_MASK, 750 + }, { 751 + .reg = PAD_DIRECTION_SEL_1, 752 + .mask = PMX_NAND16BIT_1_MASK, 753 + .val = PMX_NAND16BIT_1_MASK, 838 754 }, 839 755 }; 840 756 ··· 862 770 static struct spear_muxreg nand_4_chips_muxreg[] = { 863 771 
{ 864 772 .reg = PAD_FUNCTION_EN_1, 773 + .mask = PMX_NAND_4CHIPS_MASK, 774 + .val = PMX_NAND_4CHIPS_MASK, 775 + }, { 776 + .reg = PAD_DIRECTION_SEL_1, 865 777 .mask = PMX_NAND_4CHIPS_MASK, 866 778 .val = PMX_NAND_4CHIPS_MASK, 867 779 }, ··· 929 833 .reg = PAD_FUNCTION_EN_1, 930 834 .mask = PMX_KBD_ROWCOL68_MASK, 931 835 .val = PMX_KBD_ROWCOL68_MASK, 836 + }, { 837 + .reg = PAD_DIRECTION_SEL_1, 838 + .mask = PMX_KBD_ROWCOL68_MASK, 839 + .val = PMX_KBD_ROWCOL68_MASK, 932 840 }, 933 841 }; 934 842 ··· 966 866 .reg = PAD_FUNCTION_EN_0, 967 867 .mask = PMX_UART0_MASK, 968 868 .val = PMX_UART0_MASK, 869 + }, { 870 + .reg = PAD_DIRECTION_SEL_0, 871 + .mask = PMX_UART0_MASK, 872 + .val = PMX_UART0_MASK, 969 873 }, 970 874 }; 971 875 ··· 993 889 static struct spear_muxreg uart0_modem_muxreg[] = { 994 890 { 995 891 .reg = PAD_FUNCTION_EN_1, 892 + .mask = PMX_UART0_MODEM_MASK, 893 + .val = PMX_UART0_MODEM_MASK, 894 + }, { 895 + .reg = PAD_DIRECTION_SEL_1, 996 896 .mask = PMX_UART0_MODEM_MASK, 997 897 .val = PMX_UART0_MODEM_MASK, 998 898 }, ··· 1031 923 .reg = PAD_FUNCTION_EN_1, 1032 924 .mask = PMX_GPT0_TMR0_MASK, 1033 925 .val = PMX_GPT0_TMR0_MASK, 926 + }, { 927 + .reg = PAD_DIRECTION_SEL_1, 928 + .mask = PMX_GPT0_TMR0_MASK, 929 + .val = PMX_GPT0_TMR0_MASK, 1034 930 }, 1035 931 }; 1036 932 ··· 1058 946 static struct spear_muxreg gpt0_tmr1_muxreg[] = { 1059 947 { 1060 948 .reg = PAD_FUNCTION_EN_1, 949 + .mask = PMX_GPT0_TMR1_MASK, 950 + .val = PMX_GPT0_TMR1_MASK, 951 + }, { 952 + .reg = PAD_DIRECTION_SEL_1, 1061 953 .mask = PMX_GPT0_TMR1_MASK, 1062 954 .val = PMX_GPT0_TMR1_MASK, 1063 955 }, ··· 1096 980 .reg = PAD_FUNCTION_EN_1, 1097 981 .mask = PMX_GPT1_TMR0_MASK, 1098 982 .val = PMX_GPT1_TMR0_MASK, 983 + }, { 984 + .reg = PAD_DIRECTION_SEL_1, 985 + .mask = PMX_GPT1_TMR0_MASK, 986 + .val = PMX_GPT1_TMR0_MASK, 1099 987 }, 1100 988 }; 1101 989 ··· 1123 1003 static struct spear_muxreg gpt1_tmr1_muxreg[] = { 1124 1004 { 1125 1005 .reg = PAD_FUNCTION_EN_1, 1006 + .mask = 
PMX_GPT1_TMR1_MASK, 1007 + .val = PMX_GPT1_TMR1_MASK, 1008 + }, { 1009 + .reg = PAD_DIRECTION_SEL_1, 1126 1010 .mask = PMX_GPT1_TMR1_MASK, 1127 1011 .val = PMX_GPT1_TMR1_MASK, 1128 1012 }, ··· 1171 1047 .val = PMX_MCIFALL_1_MASK, \ 1172 1048 }, { \ 1173 1049 .reg = PAD_FUNCTION_EN_2, \ 1050 + .mask = PMX_MCIFALL_2_MASK, \ 1051 + .val = PMX_MCIFALL_2_MASK, \ 1052 + }, { \ 1053 + .reg = PAD_DIRECTION_SEL_0, \ 1054 + .mask = PMX_MCI_DATA8_15_MASK, \ 1055 + .val = PMX_MCI_DATA8_15_MASK, \ 1056 + }, { \ 1057 + .reg = PAD_DIRECTION_SEL_1, \ 1058 + .mask = PMX_MCIFALL_1_MASK | PMX_NFWPRT1_MASK | \ 1059 + PMX_NFWPRT2_MASK, \ 1060 + .val = PMX_MCIFALL_1_MASK | PMX_NFWPRT1_MASK | \ 1061 + PMX_NFWPRT2_MASK, \ 1062 + }, { \ 1063 + .reg = PAD_DIRECTION_SEL_2, \ 1174 1064 .mask = PMX_MCIFALL_2_MASK, \ 1175 1065 .val = PMX_MCIFALL_2_MASK, \ 1176 1066 } ··· 1292 1154 .reg = PAD_FUNCTION_EN_2, 1293 1155 .mask = PMX_TOUCH_XY_MASK, 1294 1156 .val = PMX_TOUCH_XY_MASK, 1157 + }, { 1158 + .reg = PAD_DIRECTION_SEL_2, 1159 + .mask = PMX_TOUCH_XY_MASK, 1160 + .val = PMX_TOUCH_XY_MASK, 1295 1161 }, 1296 1162 }; 1297 1163 ··· 1329 1187 .reg = PAD_FUNCTION_EN_0, 1330 1188 .mask = PMX_I2C0_MASK, 1331 1189 .val = 0, 1190 + }, { 1191 + .reg = PAD_DIRECTION_SEL_0, 1192 + .mask = PMX_I2C0_MASK, 1193 + .val = PMX_I2C0_MASK, 1332 1194 }, 1333 1195 }; 1334 1196 ··· 1359 1213 .mask = PMX_MCIDATA1_MASK | 1360 1214 PMX_MCIDATA2_MASK, 1361 1215 .val = 0, 1216 + }, { 1217 + .reg = PAD_DIRECTION_SEL_1, 1218 + .mask = PMX_MCIDATA1_MASK | 1219 + PMX_MCIDATA2_MASK, 1220 + .val = PMX_MCIDATA1_MASK | 1221 + PMX_MCIDATA2_MASK, 1362 1222 }, 1363 1223 }; 1364 1224 ··· 1398 1246 .reg = PAD_FUNCTION_EN_0, 1399 1247 .mask = PMX_I2S0_MASK, 1400 1248 .val = 0, 1249 + }, { 1250 + .reg = PAD_DIRECTION_SEL_0, 1251 + .mask = PMX_I2S0_MASK, 1252 + .val = PMX_I2S0_MASK, 1401 1253 }, 1402 1254 }; 1403 1255 ··· 1434 1278 .reg = PAD_FUNCTION_EN_0, 1435 1279 .mask = PMX_I2S0_MASK | PMX_CLCD1_MASK, 1436 1280 .val = 0, 1281 + }, { 
1282 + .reg = PAD_DIRECTION_SEL_0, 1283 + .mask = PMX_I2S0_MASK | PMX_CLCD1_MASK, 1284 + .val = PMX_I2S0_MASK | PMX_CLCD1_MASK, 1437 1285 }, 1438 1286 }; 1439 1287 ··· 1470 1310 .reg = PAD_FUNCTION_EN_0, 1471 1311 .mask = PMX_CLCD1_MASK, 1472 1312 .val = 0, 1313 + }, { 1314 + .reg = PAD_DIRECTION_SEL_0, 1315 + .mask = PMX_CLCD1_MASK, 1316 + .val = PMX_CLCD1_MASK, 1473 1317 }, 1474 1318 }; 1475 1319 ··· 1508 1344 .reg = PAD_FUNCTION_EN_0, 1509 1345 .mask = PMX_CLCD1_MASK, 1510 1346 .val = 0, 1347 + }, { 1348 + .reg = PAD_DIRECTION_SEL_0, 1349 + .mask = PMX_CLCD1_MASK, 1350 + .val = PMX_CLCD1_MASK, 1511 1351 }, 1512 1352 }; 1513 1353 ··· 1544 1376 .reg = PAD_FUNCTION_EN_0, 1545 1377 .mask = PMX_CLCD1_MASK, 1546 1378 .val = 0, 1379 + }, { 1380 + .reg = PAD_DIRECTION_SEL_0, 1381 + .mask = PMX_CLCD1_MASK, 1382 + .val = PMX_CLCD1_MASK, 1547 1383 }, 1548 1384 }; 1549 1385 ··· 1581 1409 .reg = PAD_FUNCTION_EN_0, 1582 1410 .mask = PMX_CLCD1_MASK | PMX_SMI_MASK, 1583 1411 .val = 0, 1412 + }, { 1413 + .reg = PAD_DIRECTION_SEL_0, 1414 + .mask = PMX_CLCD1_MASK | PMX_SMI_MASK, 1415 + .val = PMX_CLCD1_MASK | PMX_SMI_MASK, 1584 1416 }, 1585 1417 }; 1586 1418 ··· 1611 1435 .reg = PAD_FUNCTION_EN_1, 1612 1436 .mask = PMX_I2S1_MASK | PMX_MCIDATA3_MASK, 1613 1437 .val = 0, 1438 + }, { 1439 + .reg = PAD_DIRECTION_SEL_1, 1440 + .mask = PMX_I2S1_MASK | PMX_MCIDATA3_MASK, 1441 + .val = PMX_I2S1_MASK | PMX_MCIDATA3_MASK, 1614 1442 }, 1615 1443 }; 1616 1444 ··· 1649 1469 .reg = PAD_FUNCTION_EN_0, 1650 1470 .mask = PMX_SMI_MASK, 1651 1471 .val = 0, 1472 + }, { 1473 + .reg = PAD_DIRECTION_SEL_0, 1474 + .mask = PMX_SMI_MASK, 1475 + .val = PMX_SMI_MASK, 1652 1476 }, 1653 1477 }; 1654 1478 ··· 1683 1499 .reg = PAD_FUNCTION_EN_2, 1684 1500 .mask = PMX_MCIDATA5_MASK, 1685 1501 .val = 0, 1502 + }, { 1503 + .reg = PAD_DIRECTION_SEL_1, 1504 + .mask = PMX_MCIDATA4_MASK, 1505 + .val = PMX_MCIDATA4_MASK, 1506 + }, { 1507 + .reg = PAD_DIRECTION_SEL_2, 1508 + .mask = PMX_MCIDATA5_MASK, 1509 + .val = 
PMX_MCIDATA5_MASK, 1686 1510 }, 1687 1511 }; 1688 1512 ··· 1718 1526 .mask = PMX_MCIDATA6_MASK | 1719 1527 PMX_MCIDATA7_MASK, 1720 1528 .val = 0, 1529 + }, { 1530 + .reg = PAD_DIRECTION_SEL_2, 1531 + .mask = PMX_MCIDATA6_MASK | 1532 + PMX_MCIDATA7_MASK, 1533 + .val = PMX_MCIDATA6_MASK | 1534 + PMX_MCIDATA7_MASK, 1721 1535 }, 1722 1536 }; 1723 1537 ··· 1758 1560 .reg = PAD_FUNCTION_EN_1, 1759 1561 .mask = PMX_KBD_ROWCOL25_MASK, 1760 1562 .val = 0, 1563 + }, { 1564 + .reg = PAD_DIRECTION_SEL_1, 1565 + .mask = PMX_KBD_ROWCOL25_MASK, 1566 + .val = PMX_KBD_ROWCOL25_MASK, 1761 1567 }, 1762 1568 }; 1763 1569 ··· 1789 1587 .mask = PMX_MCIIORDRE_MASK | 1790 1588 PMX_MCIIOWRWE_MASK, 1791 1589 .val = 0, 1590 + }, { 1591 + .reg = PAD_DIRECTION_SEL_2, 1592 + .mask = PMX_MCIIORDRE_MASK | 1593 + PMX_MCIIOWRWE_MASK, 1594 + .val = PMX_MCIIORDRE_MASK | 1595 + PMX_MCIIOWRWE_MASK, 1792 1596 }, 1793 1597 }; 1794 1598 ··· 1821 1613 .mask = PMX_MCIRESETCF_MASK | 1822 1614 PMX_MCICS0CE_MASK, 1823 1615 .val = 0, 1616 + }, { 1617 + .reg = PAD_DIRECTION_SEL_2, 1618 + .mask = PMX_MCIRESETCF_MASK | 1619 + PMX_MCICS0CE_MASK, 1620 + .val = PMX_MCIRESETCF_MASK | 1621 + PMX_MCICS0CE_MASK, 1824 1622 }, 1825 1623 }; 1826 1624 ··· 1865 1651 .reg = PAD_FUNCTION_EN_1, 1866 1652 .mask = PMX_NFRSTPWDWN3_MASK, 1867 1653 .val = 0, 1654 + }, { 1655 + .reg = PAD_DIRECTION_SEL_0, 1656 + .mask = PMX_NFRSTPWDWN2_MASK, 1657 + .val = PMX_NFRSTPWDWN2_MASK, 1658 + }, { 1659 + .reg = PAD_DIRECTION_SEL_1, 1660 + .mask = PMX_NFRSTPWDWN3_MASK, 1661 + .val = PMX_NFRSTPWDWN3_MASK, 1868 1662 }, 1869 1663 }; 1870 1664 ··· 1899 1677 .reg = PAD_FUNCTION_EN_2, 1900 1678 .mask = PMX_MCICFINTR_MASK | PMX_MCIIORDY_MASK, 1901 1679 .val = 0, 1680 + }, { 1681 + .reg = PAD_DIRECTION_SEL_2, 1682 + .mask = PMX_MCICFINTR_MASK | PMX_MCIIORDY_MASK, 1683 + .val = PMX_MCICFINTR_MASK | PMX_MCIIORDY_MASK, 1902 1684 }, 1903 1685 }; 1904 1686 ··· 1937 1711 .reg = PAD_FUNCTION_EN_2, 1938 1712 .mask = PMX_MCICS1_MASK | PMX_MCIDMAACK_MASK, 1939 
1713 .val = 0, 1714 + }, { 1715 + .reg = PAD_DIRECTION_SEL_2, 1716 + .mask = PMX_MCICS1_MASK | PMX_MCIDMAACK_MASK, 1717 + .val = PMX_MCICS1_MASK | PMX_MCIDMAACK_MASK, 1940 1718 }, 1941 1719 }; 1942 1720 ··· 1967 1737 .reg = PAD_FUNCTION_EN_1, 1968 1738 .mask = PMX_KBD_ROWCOL25_MASK, 1969 1739 .val = 0, 1740 + }, { 1741 + .reg = PAD_DIRECTION_SEL_1, 1742 + .mask = PMX_KBD_ROWCOL25_MASK, 1743 + .val = PMX_KBD_ROWCOL25_MASK, 1970 1744 }, 1971 1745 }; 1972 1746 ··· 1997 1763 .ngroups = ARRAY_SIZE(can1_grps), 1998 1764 }; 1999 1765 2000 - /* Pad multiplexing for pci device */ 2001 - static const unsigned pci_sata_pins[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 18, 1766 + /* Pad multiplexing for (ras-ip) pci device */ 1767 + static const unsigned pci_pins[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 18, 2002 1768 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 2003 1769 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 2004 1770 55, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99 }; 2005 - #define PCI_SATA_MUXREG \ 2006 - { \ 2007 - .reg = PAD_FUNCTION_EN_0, \ 2008 - .mask = PMX_MCI_DATA8_15_MASK, \ 2009 - .val = 0, \ 2010 - }, { \ 2011 - .reg = PAD_FUNCTION_EN_1, \ 2012 - .mask = PMX_PCI_REG1_MASK, \ 2013 - .val = 0, \ 2014 - }, { \ 2015 - .reg = PAD_FUNCTION_EN_2, \ 2016 - .mask = PMX_PCI_REG2_MASK, \ 2017 - .val = 0, \ 2018 - } 2019 1771 2020 - /* pad multiplexing for pcie0 device */ 1772 + static struct spear_muxreg pci_muxreg[] = { 1773 + { 1774 + .reg = PAD_FUNCTION_EN_0, 1775 + .mask = PMX_MCI_DATA8_15_MASK, 1776 + .val = 0, 1777 + }, { 1778 + .reg = PAD_FUNCTION_EN_1, 1779 + .mask = PMX_PCI_REG1_MASK, 1780 + .val = 0, 1781 + }, { 1782 + .reg = PAD_FUNCTION_EN_2, 1783 + .mask = PMX_PCI_REG2_MASK, 1784 + .val = 0, 1785 + }, { 1786 + .reg = PAD_DIRECTION_SEL_0, 1787 + .mask = PMX_MCI_DATA8_15_MASK, 1788 + .val = PMX_MCI_DATA8_15_MASK, 1789 + }, { 1790 + .reg = PAD_DIRECTION_SEL_1, 1791 + .mask = PMX_PCI_REG1_MASK, 1792 
+ .val = PMX_PCI_REG1_MASK, 1793 + }, { 1794 + .reg = PAD_DIRECTION_SEL_2, 1795 + .mask = PMX_PCI_REG2_MASK, 1796 + .val = PMX_PCI_REG2_MASK, 1797 + }, 1798 + }; 1799 + 1800 + static struct spear_modemux pci_modemux[] = { 1801 + { 1802 + .muxregs = pci_muxreg, 1803 + .nmuxregs = ARRAY_SIZE(pci_muxreg), 1804 + }, 1805 + }; 1806 + 1807 + static struct spear_pingroup pci_pingroup = { 1808 + .name = "pci_grp", 1809 + .pins = pci_pins, 1810 + .npins = ARRAY_SIZE(pci_pins), 1811 + .modemuxs = pci_modemux, 1812 + .nmodemuxs = ARRAY_SIZE(pci_modemux), 1813 + }; 1814 + 1815 + static const char *const pci_grps[] = { "pci_grp" }; 1816 + static struct spear_function pci_function = { 1817 + .name = "pci", 1818 + .groups = pci_grps, 1819 + .ngroups = ARRAY_SIZE(pci_grps), 1820 + }; 1821 + 1822 + /* pad multiplexing for (fix-part) pcie0 device */ 2021 1823 static struct spear_muxreg pcie0_muxreg[] = { 2022 - PCI_SATA_MUXREG, 2023 1824 { 2024 1825 .reg = PCIE_SATA_CFG, 2025 1826 .mask = PCIE_CFG_VAL(0), ··· 2071 1802 2072 1803 static struct spear_pingroup pcie0_pingroup = { 2073 1804 .name = "pcie0_grp", 2074 - .pins = pci_sata_pins, 2075 - .npins = ARRAY_SIZE(pci_sata_pins), 2076 1805 .modemuxs = pcie0_modemux, 2077 1806 .nmodemuxs = ARRAY_SIZE(pcie0_modemux), 2078 1807 }; 2079 1808 2080 - /* pad multiplexing for pcie1 device */ 1809 + /* pad multiplexing for (fix-part) pcie1 device */ 2081 1810 static struct spear_muxreg pcie1_muxreg[] = { 2082 - PCI_SATA_MUXREG, 2083 1811 { 2084 1812 .reg = PCIE_SATA_CFG, 2085 1813 .mask = PCIE_CFG_VAL(1), ··· 2093 1827 2094 1828 static struct spear_pingroup pcie1_pingroup = { 2095 1829 .name = "pcie1_grp", 2096 - .pins = pci_sata_pins, 2097 - .npins = ARRAY_SIZE(pci_sata_pins), 2098 1830 .modemuxs = pcie1_modemux, 2099 1831 .nmodemuxs = ARRAY_SIZE(pcie1_modemux), 2100 1832 }; 2101 1833 2102 - /* pad multiplexing for pcie2 device */ 1834 + /* pad multiplexing for (fix-part) pcie2 device */ 2103 1835 static struct spear_muxreg pcie2_muxreg[] = { 
2104 - PCI_SATA_MUXREG, 2105 1836 { 2106 1837 .reg = PCIE_SATA_CFG, 2107 1838 .mask = PCIE_CFG_VAL(2), ··· 2115 1852 2116 1853 static struct spear_pingroup pcie2_pingroup = { 2117 1854 .name = "pcie2_grp", 2118 - .pins = pci_sata_pins, 2119 - .npins = ARRAY_SIZE(pci_sata_pins), 2120 1855 .modemuxs = pcie2_modemux, 2121 1856 .nmodemuxs = ARRAY_SIZE(pcie2_modemux), 2122 1857 }; 2123 1858 2124 - static const char *const pci_grps[] = { "pcie0_grp", "pcie1_grp", "pcie2_grp" }; 2125 - static struct spear_function pci_function = { 2126 - .name = "pci", 2127 - .groups = pci_grps, 2128 - .ngroups = ARRAY_SIZE(pci_grps), 1859 + static const char *const pcie_grps[] = { "pcie0_grp", "pcie1_grp", "pcie2_grp" 1860 + }; 1861 + static struct spear_function pcie_function = { 1862 + .name = "pci_express", 1863 + .groups = pcie_grps, 1864 + .ngroups = ARRAY_SIZE(pcie_grps), 2129 1865 }; 2130 1866 2131 1867 /* pad multiplexing for sata0 device */ 2132 1868 static struct spear_muxreg sata0_muxreg[] = { 2133 - PCI_SATA_MUXREG, 2134 1869 { 2135 1870 .reg = PCIE_SATA_CFG, 2136 1871 .mask = SATA_CFG_VAL(0), ··· 2145 1884 2146 1885 static struct spear_pingroup sata0_pingroup = { 2147 1886 .name = "sata0_grp", 2148 - .pins = pci_sata_pins, 2149 - .npins = ARRAY_SIZE(pci_sata_pins), 2150 1887 .modemuxs = sata0_modemux, 2151 1888 .nmodemuxs = ARRAY_SIZE(sata0_modemux), 2152 1889 }; 2153 1890 2154 1891 /* pad multiplexing for sata1 device */ 2155 1892 static struct spear_muxreg sata1_muxreg[] = { 2156 - PCI_SATA_MUXREG, 2157 1893 { 2158 1894 .reg = PCIE_SATA_CFG, 2159 1895 .mask = SATA_CFG_VAL(1), ··· 2167 1909 2168 1910 static struct spear_pingroup sata1_pingroup = { 2169 1911 .name = "sata1_grp", 2170 - .pins = pci_sata_pins, 2171 - .npins = ARRAY_SIZE(pci_sata_pins), 2172 1912 .modemuxs = sata1_modemux, 2173 1913 .nmodemuxs = ARRAY_SIZE(sata1_modemux), 2174 1914 }; 2175 1915 2176 1916 /* pad multiplexing for sata2 device */ 2177 1917 static struct spear_muxreg sata2_muxreg[] = { 2178 - 
PCI_SATA_MUXREG, 2179 1918 { 2180 1919 .reg = PCIE_SATA_CFG, 2181 1920 .mask = SATA_CFG_VAL(2), ··· 2189 1934 2190 1935 static struct spear_pingroup sata2_pingroup = { 2191 1936 .name = "sata2_grp", 2192 - .pins = pci_sata_pins, 2193 - .npins = ARRAY_SIZE(pci_sata_pins), 2194 1937 .modemuxs = sata2_modemux, 2195 1938 .nmodemuxs = ARRAY_SIZE(sata2_modemux), 2196 1939 }; ··· 2210 1957 PMX_KBD_COL0_MASK | PMX_NFIO8_15_MASK | PMX_NFCE1_MASK | 2211 1958 PMX_NFCE2_MASK, 2212 1959 .val = 0, 1960 + }, { 1961 + .reg = PAD_DIRECTION_SEL_1, 1962 + .mask = PMX_KBD_ROWCOL25_MASK | PMX_KBD_COL1_MASK | 1963 + PMX_KBD_COL0_MASK | PMX_NFIO8_15_MASK | PMX_NFCE1_MASK | 1964 + PMX_NFCE2_MASK, 1965 + .val = PMX_KBD_ROWCOL25_MASK | PMX_KBD_COL1_MASK | 1966 + PMX_KBD_COL0_MASK | PMX_NFIO8_15_MASK | PMX_NFCE1_MASK | 1967 + PMX_NFCE2_MASK, 2213 1968 }, 2214 1969 }; 2215 1970 ··· 2244 1983 .mask = PMX_MCIADDR0ALE_MASK | PMX_MCIADDR2_MASK | 2245 1984 PMX_MCICECF_MASK | PMX_MCICEXD_MASK, 2246 1985 .val = 0, 1986 + }, { 1987 + .reg = PAD_DIRECTION_SEL_2, 1988 + .mask = PMX_MCIADDR0ALE_MASK | PMX_MCIADDR2_MASK | 1989 + PMX_MCICECF_MASK | PMX_MCICEXD_MASK, 1990 + .val = PMX_MCIADDR0ALE_MASK | PMX_MCIADDR2_MASK | 1991 + PMX_MCICECF_MASK | PMX_MCICEXD_MASK, 2247 1992 }, 2248 1993 }; 2249 1994 ··· 2284 2017 .mask = PMX_MCICDCF1_MASK | PMX_MCICDCF2_MASK | PMX_MCICDXD_MASK 2285 2018 | PMX_MCILEDS_MASK, 2286 2019 .val = 0, 2020 + }, { 2021 + .reg = PAD_DIRECTION_SEL_2, 2022 + .mask = PMX_MCICDCF1_MASK | PMX_MCICDCF2_MASK | PMX_MCICDXD_MASK 2023 + | PMX_MCILEDS_MASK, 2024 + .val = PMX_MCICDCF1_MASK | PMX_MCICDCF2_MASK | PMX_MCICDXD_MASK 2025 + | PMX_MCILEDS_MASK, 2287 2026 }, 2288 2027 }; 2289 2028 ··· 2366 2093 &can0_dis_sd_pingroup, 2367 2094 &can1_dis_sd_pingroup, 2368 2095 &can1_dis_kbd_pingroup, 2096 + &pci_pingroup, 2369 2097 &pcie0_pingroup, 2370 2098 &pcie1_pingroup, 2371 2099 &pcie2_pingroup, ··· 2412 2138 &can0_function, 2413 2139 &can1_function, 2414 2140 &pci_function, 2141 + 
&pcie_function, 2415 2142 &sata_function, 2416 2143 &ssp1_function, 2417 2144 &gpt64_function,
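Most of the spear1310 churn above adds `PAD_DIRECTION_SEL_*` entries next to the existing `PAD_FUNCTION_EN_*` entries in each group's `spear_muxreg` table: an array of {register, mask, value} triples that the core walks, doing a masked update per entry. A compilable model of that table-driven scheme (register offsets and field values below are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal model of struct spear_muxreg. */
struct muxreg { size_t reg; uint32_t mask; uint32_t val; };

/* Walk the table, masked-updating one register per entry, as the
 * pinctrl core does for a selected group. */
void apply_muxregs(uint32_t *regs, const struct muxreg *t, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t v = regs[t[i].reg];
        v &= ~t[i].mask;
        v |= t[i].mask & t[i].val;
        regs[t[i].reg] = v;
    }
}

/* Two entries per group, mirroring the patch: a function-enable field
 * plus its new direction-select field in a second register. */
uint32_t muxdemo(size_t which)
{
    uint32_t regs[2] = { 0xFFFFFFFF, 0x0 };
    const struct muxreg tbl[] = {
        { 0, 0x0000000F, 0x5 },   /* PAD_FUNCTION_EN-style entry */
        { 1, 0x000000F0, 0xF0 },  /* PAD_DIRECTION_SEL-style entry */
    };
    apply_muxregs(regs, tbl, 2);
    return regs[which];
}
```

Adding a register to a group is then purely a data change (one more table row), which is why the diff is large but mechanical.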
+39 -2
drivers/pinctrl/spear/pinctrl-spear1340.c
··· 213 213 * Pad multiplexing for making all pads as gpio's. This is done to override the 214 214 * values passed from bootloader and start from scratch. 215 215 */ 216 - static const unsigned pads_as_gpio_pins[] = { 251 }; 216 + static const unsigned pads_as_gpio_pins[] = { 12, 88, 89, 251 }; 217 217 static struct spear_muxreg pads_as_gpio_muxreg[] = { 218 218 { 219 219 .reg = PAD_FUNCTION_EN_1, ··· 1692 1692 .nmodemuxs = ARRAY_SIZE(clcd_modemux), 1693 1693 }; 1694 1694 1695 - static const char *const clcd_grps[] = { "clcd_grp" }; 1695 + /* Disable cld runtime to save panel damage */ 1696 + static struct spear_muxreg clcd_sleep_muxreg[] = { 1697 + { 1698 + .reg = PAD_SHARED_IP_EN_1, 1699 + .mask = ARM_TRACE_MASK | MIPHY_DBG_MASK, 1700 + .val = 0, 1701 + }, { 1702 + .reg = PAD_FUNCTION_EN_5, 1703 + .mask = CLCD_REG4_MASK | CLCD_AND_ARM_TRACE_REG4_MASK, 1704 + .val = 0x0, 1705 + }, { 1706 + .reg = PAD_FUNCTION_EN_6, 1707 + .mask = CLCD_AND_ARM_TRACE_REG5_MASK, 1708 + .val = 0x0, 1709 + }, { 1710 + .reg = PAD_FUNCTION_EN_7, 1711 + .mask = CLCD_AND_ARM_TRACE_REG6_MASK, 1712 + .val = 0x0, 1713 + }, 1714 + }; 1715 + 1716 + static struct spear_modemux clcd_sleep_modemux[] = { 1717 + { 1718 + .muxregs = clcd_sleep_muxreg, 1719 + .nmuxregs = ARRAY_SIZE(clcd_sleep_muxreg), 1720 + }, 1721 + }; 1722 + 1723 + static struct spear_pingroup clcd_sleep_pingroup = { 1724 + .name = "clcd_sleep_grp", 1725 + .pins = clcd_pins, 1726 + .npins = ARRAY_SIZE(clcd_pins), 1727 + .modemuxs = clcd_sleep_modemux, 1728 + .nmodemuxs = ARRAY_SIZE(clcd_sleep_modemux), 1729 + }; 1730 + 1731 + static const char *const clcd_grps[] = { "clcd_grp", "clcd_sleep_grp" }; 1696 1732 static struct spear_function clcd_function = { 1697 1733 .name = "clcd", 1698 1734 .groups = clcd_grps, ··· 1929 1893 &sdhci_pingroup, 1930 1894 &cf_pingroup, 1931 1895 &xd_pingroup, 1896 + &clcd_sleep_pingroup, 1932 1897 &clcd_pingroup, 1933 1898 &arm_trace_pingroup, 1934 1899 &miphy_dbg_pingroup,
+6 -2
drivers/pinctrl/spear/pinctrl-spear320.c
··· 2240 2240 .mask = PMX_SSP_CS_MASK, 2241 2241 .val = 0, 2242 2242 }, { 2243 + .reg = MODE_CONFIG_REG, 2244 + .mask = PMX_PWM_MASK, 2245 + .val = PMX_PWM_MASK, 2246 + }, { 2243 2247 .reg = IP_SEL_PAD_30_39_REG, 2244 2248 .mask = PMX_PL_34_MASK, 2245 2249 .val = PMX_PWM2_PL_34_VAL, ··· 2960 2956 }; 2961 2957 2962 2958 /* Pad multiplexing for cadence mii 1_2 as smii or rmii device */ 2963 - static const unsigned smii0_1_pins[] = { 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 2959 + static const unsigned rmii0_1_pins[] = { 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 2964 2960 21, 22, 23, 24, 25, 26, 27 }; 2965 - static const unsigned rmii0_1_pins[] = { 10, 11, 21, 22, 23, 24, 25, 26, 27 }; 2961 + static const unsigned smii0_1_pins[] = { 10, 11, 21, 22, 23, 24, 25, 26, 27 }; 2966 2962 static struct spear_muxreg mii0_1_muxreg[] = { 2967 2963 { 2968 2964 .reg = PMX_CONFIG_REG,
+1
drivers/pinctrl/spear/pinctrl-spear3xx.h
··· 15 15 #include "pinctrl-spear.h" 16 16 17 17 /* pad mux declarations */ 18 + #define PMX_PWM_MASK (1 << 16) 18 19 #define PMX_FIRDA_MASK (1 << 14) 19 20 #define PMX_I2C_MASK (1 << 13) 20 21 #define PMX_SSP_CS_MASK (1 << 12)
+1 -1
drivers/rapidio/rio.c
··· 401 401 /** 402 402 * rio_map_inb_region -- Map inbound memory region. 403 403 * @mport: Master port. 404 - * @lstart: physical address of memory region to be mapped 404 + * @local: physical address of memory region to be mapped 405 405 * @rbase: RIO base address assigned to this window 406 406 * @size: Size of the memory region 407 407 * @rflags: Flags for mapping.
+20 -13
drivers/regulator/core.c
··· 1381 1381 } 1382 1382 EXPORT_SYMBOL_GPL(regulator_get_exclusive); 1383 1383 1384 - /** 1385 - * regulator_put - "free" the regulator source 1386 - * @regulator: regulator source 1387 - * 1388 - * Note: drivers must ensure that all regulator_enable calls made on this 1389 - * regulator source are balanced by regulator_disable calls prior to calling 1390 - * this function. 1391 - */ 1392 - void regulator_put(struct regulator *regulator) 1384 + /* Locks held by regulator_put() */ 1385 + static void _regulator_put(struct regulator *regulator) 1393 1386 { 1394 1387 struct regulator_dev *rdev; 1395 1388 1396 1389 if (regulator == NULL || IS_ERR(regulator)) 1397 1390 return; 1398 1391 1399 - mutex_lock(&regulator_list_mutex); 1400 1392 rdev = regulator->rdev; 1401 1393 1402 1394 debugfs_remove_recursive(regulator->debugfs); ··· 1404 1412 rdev->exclusive = 0; 1405 1413 1406 1414 module_put(rdev->owner); 1415 + } 1416 + 1417 + /** 1418 + * regulator_put - "free" the regulator source 1419 + * @regulator: regulator source 1420 + * 1421 + * Note: drivers must ensure that all regulator_enable calls made on this 1422 + * regulator source are balanced by regulator_disable calls prior to calling 1423 + * this function. 
1424 + */ 1425 + void regulator_put(struct regulator *regulator) 1426 + { 1427 + mutex_lock(&regulator_list_mutex); 1428 + _regulator_put(regulator); 1407 1429 mutex_unlock(&regulator_list_mutex); 1408 1430 } 1409 1431 EXPORT_SYMBOL_GPL(regulator_put); ··· 1980 1974 if (!(rdev->constraints->valid_ops_mask & REGULATOR_CHANGE_VOLTAGE)) { 1981 1975 ret = regulator_get_voltage(regulator); 1982 1976 if (ret >= 0) 1983 - return (min_uV >= ret && ret <= max_uV); 1977 + return (min_uV <= ret && ret <= max_uV); 1984 1978 else 1985 1979 return ret; 1986 1980 } ··· 3371 3365 if (ret != 0) { 3372 3366 rdev_err(rdev, "Failed to request enable GPIO%d: %d\n", 3373 3367 config->ena_gpio, ret); 3374 - goto clean; 3368 + goto wash; 3375 3369 } 3376 3370 3377 3371 rdev->ena_gpio = config->ena_gpio; ··· 3451 3445 3452 3446 scrub: 3453 3447 if (rdev->supply) 3454 - regulator_put(rdev->supply); 3448 + _regulator_put(rdev->supply); 3455 3449 if (rdev->ena_gpio) 3456 3450 gpio_free(rdev->ena_gpio); 3457 3451 kfree(rdev->constraints); 3452 + wash: 3458 3453 device_unregister(&rdev->dev); 3459 3454 /* device core frees rdev */ 3460 3455 rdev = ERR_PTR(ret);
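The regulator hunk splits `regulator_put()` into a public wrapper that takes `regulator_list_mutex` and a `_regulator_put()` that assumes the caller already holds it, so the `scrub:` error path (which runs with the mutex held) can drop the supply without self-deadlocking. It also fixes an inverted range test (`min_uV >= ret` became `min_uV <= ret`). A sketch of both, with the mutex modelled as a flag and all names illustrative:

```c
static int list_lock;                  /* 1 while the "mutex" is held */
static int live_objects;

static void _obj_put(void)             /* caller must hold list_lock */
{
    live_objects--;
}

static void obj_put(void)              /* public entry point: locks itself */
{
    list_lock = 1;
    _obj_put();
    list_lock = 0;
}

int putdemo(void)
{
    live_objects = 2;
    list_lock = 1;                     /* already inside a locked section, */
    _obj_put();                        /* ...so call the bare helper */
    list_lock = 0;
    obj_put();                         /* unlocked context uses the wrapper */
    return live_objects;
}

/* The corrected range check: min <= v <= max. */
int in_range(int min, int v, int max)
{
    return min <= v && v <= max;
}
```

The locked/unlocked helper split (often written `__foo()`/`foo()` in the kernel) is the standard way to reuse teardown code from both contexts.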
+5 -7
drivers/s390/char/con3215.c
··· 44 44 #define RAW3215_NR_CCWS 3 45 45 #define RAW3215_TIMEOUT HZ/10 /* time for delayed output */ 46 46 47 - #define RAW3215_FIXED 1 /* 3215 console device is not be freed */ 48 47 #define RAW3215_WORKING 4 /* set if a request is being worked on */ 49 48 #define RAW3215_THROTTLED 8 /* set if reading is disabled */ 50 49 #define RAW3215_STOPPED 16 /* set if writing is disabled */ ··· 338 339 struct tty_struct *tty; 339 340 340 341 tty = tty_port_tty_get(&raw->port); 341 - tty_wakeup(tty); 342 - tty_kref_put(tty); 342 + if (tty) { 343 + tty_wakeup(tty); 344 + tty_kref_put(tty); 345 + } 343 346 } 344 347 345 348 /* ··· 630 629 DECLARE_WAITQUEUE(wait, current); 631 630 unsigned long flags; 632 631 633 - if (!(raw->port.flags & ASYNC_INITIALIZED) || 634 - (raw->flags & RAW3215_FIXED)) 632 + if (!(raw->port.flags & ASYNC_INITIALIZED)) 635 633 return; 636 634 /* Wait for outstanding requests, then free irq */ 637 635 spin_lock_irqsave(get_ccwdev_lock(raw->cdev), flags); ··· 925 925 raw->cdev = cdev; 926 926 dev_set_drvdata(&cdev->dev, raw); 927 927 cdev->handler = raw3215_irq; 928 - 929 - raw->flags |= RAW3215_FIXED; 930 928 931 929 /* Request the console irq */ 932 930 if (raw3215_startup(raw) != 0) {
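The con3215 hunk guards `tty_wakeup()` behind a NULL check because `tty_port_tty_get()` can return NULL when no tty is attached, and the put must only run when the get succeeded. A toy get/put refcount sketch of that contract (illustrative types, not the tty layer):

```c
#include <stddef.h>

struct tty { int refs; };

struct tty *tty_get(struct tty *t)      /* may legitimately return NULL */
{
    if (t)
        t->refs++;
    return t;
}

void tty_put(struct tty *t)
{
    if (t)
        t->refs--;
}

int ttydemo(int attached)
{
    struct tty real = { 1 };
    struct tty *t = tty_get(attached ? &real : NULL);
    if (t) {
        /* the wakeup work would go here, only with a live reference */
        tty_put(t);
    }
    return attached ? real.refs : 0;    /* balanced back to 1 when attached */
}
```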
-3
drivers/s390/cio/css.h
··· 112 112 extern void css_reiterate_subchannels(void); 113 113 void css_update_ssd_info(struct subchannel *sch); 114 114 115 - #define __MAX_SUBCHANNEL 65535 116 - #define __MAX_SSID 3 117 - 118 115 struct channel_subsystem { 119 116 u8 cssid; 120 117 int valid;
+1 -7
drivers/s390/cio/device.c
··· 1424 1424 } 1425 1425 if (device_is_disconnected(cdev)) 1426 1426 return IO_SCH_REPROBE; 1427 - if (cdev->online) 1427 + if (cdev->online && !cdev->private->flags.resuming) 1428 1428 return IO_SCH_VERIFY; 1429 1429 if (cdev->private->state == DEV_STATE_NOT_OPER) 1430 1430 return IO_SCH_UNREG_ATTACH; ··· 1469 1469 rc = 0; 1470 1470 goto out_unlock; 1471 1471 case IO_SCH_VERIFY: 1472 - if (cdev->private->flags.resuming == 1) { 1473 - if (cio_enable_subchannel(sch, (u32)(addr_t)sch)) { 1474 - ccw_device_set_notoper(cdev); 1475 - break; 1476 - } 1477 - } 1478 1472 /* Trigger path verification. */ 1479 1473 io_subchannel_verify(sch); 1480 1474 rc = 0;
+1 -2
drivers/s390/cio/idset.c
··· 125 125 126 126 void idset_add_set(struct idset *to, struct idset *from) 127 127 { 128 - int len = min(__BITOPS_WORDS(to->num_ssid * to->num_id), 129 - __BITOPS_WORDS(from->num_ssid * from->num_id)); 128 + int len = min(to->num_ssid * to->num_id, from->num_ssid * from->num_id); 130 129 131 130 bitmap_or(to->bitmap, to->bitmap, from->bitmap, len); 132 131 }
+22 -2
drivers/s390/net/qeth_core_main.c
··· 2942 2942 QETH_DBF_TEXT(SETUP, 2, "qipasscb"); 2943 2943 2944 2944 cmd = (struct qeth_ipa_cmd *) data; 2945 + 2946 + switch (cmd->hdr.return_code) { 2947 + case IPA_RC_NOTSUPP: 2948 + case IPA_RC_L2_UNSUPPORTED_CMD: 2949 + QETH_DBF_TEXT(SETUP, 2, "ipaunsup"); 2950 + card->options.ipa4.supported_funcs |= IPA_SETADAPTERPARMS; 2951 + card->options.ipa6.supported_funcs |= IPA_SETADAPTERPARMS; 2952 + return 0; 2953 + default: 2954 + if (cmd->hdr.return_code) { 2955 + QETH_DBF_MESSAGE(1, "%s IPA_CMD_QIPASSIST: Unhandled " 2956 + "rc=%d\n", 2957 + dev_name(&card->gdev->dev), 2958 + cmd->hdr.return_code); 2959 + return 0; 2960 + } 2961 + } 2962 + 2945 2963 if (cmd->hdr.prot_version == QETH_PROT_IPV4) { 2946 2964 card->options.ipa4.supported_funcs = cmd->hdr.ipa_supported; 2947 2965 card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled; 2948 - } else { 2966 + } else if (cmd->hdr.prot_version == QETH_PROT_IPV6) { 2949 2967 card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported; 2950 2968 card->options.ipa6.enabled_funcs = cmd->hdr.ipa_enabled; 2951 - } 2969 + } else 2970 + QETH_DBF_MESSAGE(1, "%s IPA_CMD_QIPASSIST: Flawed LIC detected" 2971 + "\n", dev_name(&card->gdev->dev)); 2952 2972 QETH_DBF_TEXT(SETUP, 2, "suppenbl"); 2953 2973 QETH_DBF_TEXT_(SETUP, 2, "%08x", (__u32)cmd->hdr.ipa_supported); 2954 2974 QETH_DBF_TEXT_(SETUP, 2, "%08x", (__u32)cmd->hdr.ipa_enabled);
+8 -5
drivers/s390/net/qeth_l2_main.c
··· 626 626 QETH_DBF_TEXT(SETUP, 2, "doL2init"); 627 627 QETH_DBF_TEXT_(SETUP, 2, "doL2%s", CARD_BUS_ID(card)); 628 628 629 - rc = qeth_query_setadapterparms(card); 630 - if (rc) { 631 - QETH_DBF_MESSAGE(2, "could not query adapter parameters on " 632 - "device %s: x%x\n", CARD_BUS_ID(card), rc); 629 + if (qeth_is_supported(card, IPA_SETADAPTERPARMS)) { 630 + rc = qeth_query_setadapterparms(card); 631 + if (rc) { 632 + QETH_DBF_MESSAGE(2, "could not query adapter " 633 + "parameters on device %s: x%x\n", 634 + CARD_BUS_ID(card), rc); 635 + } 633 636 } 634 637 635 638 if (card->info.type == QETH_CARD_TYPE_IQD || ··· 679 676 return -ERESTARTSYS; 680 677 } 681 678 rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]); 682 - if (!rc) 679 + if (!rc || (rc == IPA_RC_L2_MAC_NOT_FOUND)) 683 680 rc = qeth_l2_send_setmac(card, addr->sa_data); 684 681 return rc ? -EINVAL : 0; 685 682 }
+1 -12
drivers/scsi/qlogicpti.c
··· 1294 1294 static const struct of_device_id qpti_match[]; 1295 1295 static int __devinit qpti_sbus_probe(struct platform_device *op) 1296 1296 { 1297 - const struct of_device_id *match; 1298 - struct scsi_host_template *tpnt; 1299 1297 struct device_node *dp = op->dev.of_node; 1300 1298 struct Scsi_Host *host; 1301 1299 struct qlogicpti *qpti; 1302 1300 static int nqptis; 1303 1301 const char *fcode; 1304 - 1305 - match = of_match_device(qpti_match, &op->dev); 1306 - if (!match) 1307 - return -EINVAL; 1308 - tpnt = match->data; 1309 1302 1310 1303 /* Sometimes Antares cards come up not completely 1311 1304 * setup, and we get a report of a zero IRQ. ··· 1306 1313 if (op->archdata.irqs[0] == 0) 1307 1314 return -ENODEV; 1308 1315 1309 - host = scsi_host_alloc(tpnt, sizeof(struct qlogicpti)); 1316 + host = scsi_host_alloc(&qpti_template, sizeof(struct qlogicpti)); 1310 1317 if (!host) 1311 1318 return -ENOMEM; 1312 1319 ··· 1438 1445 static const struct of_device_id qpti_match[] = { 1439 1446 { 1440 1447 .name = "ptisp", 1441 - .data = &qpti_template, 1442 1448 }, 1443 1449 { 1444 1450 .name = "PTI,ptisp", 1445 - .data = &qpti_template, 1446 1451 }, 1447 1452 { 1448 1453 .name = "QLGC,isp", 1449 - .data = &qpti_template, 1450 1454 }, 1451 1455 { 1452 1456 .name = "SUNW,isp", 1453 - .data = &qpti_template, 1454 1457 }, 1455 1458 {}, 1456 1459 };
+1 -3
drivers/staging/android/android_alarm.h
··· 51 51 #define ANDROID_ALARM_WAIT _IO('a', 1) 52 52 53 53 #define ALARM_IOW(c, type, size) _IOW('a', (c) | ((type) << 4), size) 54 - #define ALARM_IOR(c, type, size) _IOR('a', (c) | ((type) << 4), size) 55 - 56 54 /* Set alarm */ 57 55 #define ANDROID_ALARM_SET(type) ALARM_IOW(2, type, struct timespec) 58 56 #define ANDROID_ALARM_SET_AND_WAIT(type) ALARM_IOW(3, type, struct timespec) 59 - #define ANDROID_ALARM_GET_TIME(type) ALARM_IOR(4, type, struct timespec) 57 + #define ANDROID_ALARM_GET_TIME(type) ALARM_IOW(4, type, struct timespec) 60 58 #define ANDROID_ALARM_SET_RTC _IOW('a', 5, struct timespec) 61 59 #define ANDROID_ALARM_BASE_CMD(cmd) (cmd & ~(_IOC(0, 0, 0xf0, 0))) 62 60 #define ANDROID_ALARM_IOCTL_TO_TYPE(cmd) (_IOC_NR(cmd) >> 4)
+1 -1
drivers/thermal/exynos_thermal.c
··· 815 815 }, 816 816 { }, 817 817 }; 818 - MODULE_DEVICE_TABLE(platform, exynos4_tmu_driver_ids); 818 + MODULE_DEVICE_TABLE(platform, exynos_tmu_driver_ids); 819 819 820 820 static inline struct exynos_tmu_platform_data *exynos_get_driver_data( 821 821 struct platform_device *pdev)
+1 -1
drivers/thermal/rcar_thermal.c
··· 210 210 goto error_free_priv; 211 211 } 212 212 213 - zone = thermal_zone_device_register("rcar_thermal", 0, priv, 213 + zone = thermal_zone_device_register("rcar_thermal", 0, 0, priv, 214 214 &rcar_thermal_zone_ops, 0, 0); 215 215 if (IS_ERR(zone)) { 216 216 dev_err(&pdev->dev, "thermal zone device is NULL\n");
-7
drivers/tty/hvc/hvc_console.c
··· 424 424 { 425 425 struct hvc_struct *hp = tty->driver_data; 426 426 unsigned long flags; 427 - int temp_open_count; 428 427 429 428 if (!hp) 430 429 return; ··· 443 444 return; 444 445 } 445 446 446 - temp_open_count = hp->port.count; 447 447 hp->port.count = 0; 448 448 spin_unlock_irqrestore(&hp->port.lock, flags); 449 449 tty_port_tty_set(&hp->port, NULL); ··· 451 453 452 454 if (hp->ops->notifier_hangup) 453 455 hp->ops->notifier_hangup(hp, hp->data); 454 - 455 - while(temp_open_count) { 456 - --temp_open_count; 457 - tty_port_put(&hp->port); 458 - } 459 456 } 460 457 461 458 /*
+1
drivers/tty/serial/max310x.c
··· 1239 1239 static const struct spi_device_id max310x_id_table[] = { 1240 1240 { "max3107", MAX310X_TYPE_MAX3107 }, 1241 1241 { "max3108", MAX310X_TYPE_MAX3108 }, 1242 + { } 1242 1243 }; 1243 1244 MODULE_DEVICE_TABLE(spi, max310x_id_table); 1244 1245
+16
drivers/usb/core/hcd.c
··· 2151 2151 irqreturn_t usb_hcd_irq (int irq, void *__hcd) 2152 2152 { 2153 2153 struct usb_hcd *hcd = __hcd; 2154 + unsigned long flags; 2154 2155 irqreturn_t rc; 2156 + 2157 + /* IRQF_DISABLED doesn't work correctly with shared IRQs 2158 + * when the first handler doesn't use it. So let's just 2159 + * assume it's never used. 2160 + */ 2161 + local_irq_save(flags); 2155 2162 2156 2163 if (unlikely(HCD_DEAD(hcd) || !HCD_HW_ACCESSIBLE(hcd))) 2157 2164 rc = IRQ_NONE; ··· 2167 2160 else 2168 2161 rc = IRQ_HANDLED; 2169 2162 2163 + local_irq_restore(flags); 2170 2164 return rc; 2171 2165 } 2172 2166 EXPORT_SYMBOL_GPL(usb_hcd_irq); ··· 2355 2347 int retval; 2356 2348 2357 2349 if (hcd->driver->irq) { 2350 + 2351 + /* IRQF_DISABLED doesn't work as advertised when used together 2352 + * with IRQF_SHARED. As usb_hcd_irq() will always disable 2353 + * interrupts we can remove it here. 2354 + */ 2355 + if (irqflags & IRQF_SHARED) 2356 + irqflags &= ~IRQF_DISABLED; 2357 + 2358 2358 snprintf(hcd->irq_descr, sizeof(hcd->irq_descr), "%s:usb%d", 2359 2359 hcd->driver->description, hcd->self.busnum); 2360 2360 retval = request_irq(irqnum, &usb_hcd_irq, irqflags,
+9 -6
drivers/usb/early/ehci-dbgp.c
··· 20 20 #include <linux/usb/ehci_def.h> 21 21 #include <linux/delay.h> 22 22 #include <linux/serial_core.h> 23 + #include <linux/kconfig.h> 23 24 #include <linux/kgdb.h> 24 25 #include <linux/kthread.h> 25 26 #include <asm/io.h> ··· 615 614 return -ENODEV; 616 615 } 617 616 618 - int dbgp_external_startup(struct usb_hcd *hcd) 619 - { 620 - return xen_dbgp_external_startup(hcd) ?: _dbgp_external_startup(); 621 - } 622 - EXPORT_SYMBOL_GPL(dbgp_external_startup); 623 - 624 617 static int ehci_reset_port(int port) 625 618 { 626 619 u32 portsc; ··· 974 979 .index = -1, 975 980 }; 976 981 982 + #if IS_ENABLED(CONFIG_USB_EHCI_HCD) 977 983 int dbgp_reset_prep(struct usb_hcd *hcd) 978 984 { 979 985 int ret = xen_dbgp_reset_prep(hcd); ··· 1002 1006 return 0; 1003 1007 } 1004 1008 EXPORT_SYMBOL_GPL(dbgp_reset_prep); 1009 + 1010 + int dbgp_external_startup(struct usb_hcd *hcd) 1011 + { 1012 + return xen_dbgp_external_startup(hcd) ?: _dbgp_external_startup(); 1013 + } 1014 + EXPORT_SYMBOL_GPL(dbgp_external_startup); 1015 + #endif /* USB_EHCI_HCD */ 1005 1016 1006 1017 #ifdef CONFIG_KGDB 1007 1018
+1 -1
drivers/usb/host/ehci-ls1x.c
··· 113 113 goto err_put_hcd; 114 114 } 115 115 116 - ret = usb_add_hcd(hcd, irq, IRQF_SHARED); 116 + ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED); 117 117 if (ret) 118 118 goto err_put_hcd; 119 119
+1 -1
drivers/usb/host/ohci-xls.c
··· 56 56 goto err3; 57 57 } 58 58 59 - retval = usb_add_hcd(hcd, irq, IRQF_SHARED); 59 + retval = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED); 60 60 if (retval != 0) 61 61 goto err4; 62 62 return retval;
+26 -4
drivers/usb/musb/musb_gadget.c
··· 707 707 fifo_count = musb_readw(epio, MUSB_RXCOUNT); 708 708 709 709 /* 710 - * use mode 1 only if we expect data of at least ep packet_sz 711 - * and have not yet received a short packet 710 + * Enable Mode 1 on RX transfers only when short_not_ok flag 711 + * is set. Currently short_not_ok flag is set only from 712 + * file_storage and f_mass_storage drivers 712 713 */ 713 - if ((request->length - request->actual >= musb_ep->packet_sz) && 714 - (fifo_count >= musb_ep->packet_sz)) 714 + 715 + if (request->short_not_ok && fifo_count == musb_ep->packet_sz) 715 716 use_mode_1 = 1; 716 717 else 717 718 use_mode_1 = 0; ··· 727 726 728 727 c = musb->dma_controller; 729 728 channel = musb_ep->dma; 729 + 730 + /* We use DMA Req mode 0 in rx_csr, and DMA controller operates in 731 + * mode 0 only. So we do not get endpoint interrupts due to DMA 732 + * completion. We only get interrupts from DMA controller. 733 + * 734 + * We could operate in DMA mode 1 if we knew the size of the transfer 735 + * in advance. For mass storage class, request->length = what the host 736 + * sends, so that'd work. But for pretty much everything else, 737 + * request->length is routinely more than what the host sends. For 738 + * most of these gadgets, end of transfer is signified either by a short packet, 739 + * or filling the last byte of the buffer. (Sending extra data in 740 + * that last packet should trigger an overflow fault.) But in mode 1, 741 + * we don't get DMA completion interrupt for short packets. 742 + * 743 + * Theoretically, we could enable DMAReq irq (MUSB_RXCSR_DMAMODE = 1), 744 + * to get endpoint interrupt on every DMA req, but that didn't seem 745 + * to work reliably. 746 + * 747 + * REVISIT an updated g_file_storage can set req->short_not_ok, which 748 + * then becomes usable as a runtime "use mode 1" hint... 749 + */ 730 750 731 751 /* Experimental: Mode1 works with mass storage use cases */ 732 752 if (use_mode_1) {
+1 -1
drivers/usb/musb/ux500.c
··· 65 65 struct platform_device *musb; 66 66 struct ux500_glue *glue; 67 67 struct clk *clk; 68 - 68 + int musbid; 69 69 int ret = -ENOMEM; 70 70 71 71 glue = kzalloc(sizeof(*glue), GFP_KERNEL);
+2 -2
drivers/usb/otg/Kconfig
··· 58 58 59 59 config TWL4030_USB 60 60 tristate "TWL4030 USB Transceiver Driver" 61 - depends on TWL4030_CORE && REGULATOR_TWL4030 61 + depends on TWL4030_CORE && REGULATOR_TWL4030 && USB_MUSB_OMAP2PLUS 62 62 select USB_OTG_UTILS 63 63 help 64 64 Enable this to support the USB OTG transceiver on TWL4030 ··· 68 68 69 69 config TWL6030_USB 70 70 tristate "TWL6030 USB Transceiver Driver" 71 - depends on TWL4030_CORE && OMAP_USB2 71 + depends on TWL4030_CORE && OMAP_USB2 && USB_MUSB_OMAP2PLUS 72 72 select USB_OTG_UTILS 73 73 help 74 74 Enable this to support the USB OTG transceiver on TWL6030
+1 -2
drivers/usb/serial/keyspan.c
··· 2430 2430 static int keyspan_port_probe(struct usb_serial_port *port) 2431 2431 { 2432 2432 struct usb_serial *serial = port->serial; 2433 - struct keyspan_port_private *s_priv; 2433 + struct keyspan_serial_private *s_priv; 2434 2434 struct keyspan_port_private *p_priv; 2435 2435 const struct keyspan_device_details *d_details; 2436 2436 struct callbacks *cback; ··· 2445 2445 if (!p_priv) 2446 2446 return -ENOMEM; 2447 2447 2448 - s_priv = usb_get_serial_data(port->serial); 2449 2448 p_priv->device_details = d_details; 2450 2449 2451 2450 /* Setup values for the various callback routines */
+9
drivers/usb/serial/option.c
··· 158 158 #define NOVATELWIRELESS_PRODUCT_EVDO_EMBEDDED_HIGHSPEED 0x8001 159 159 #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_FULLSPEED 0x9000 160 160 #define NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_HIGHSPEED 0x9001 161 + #define NOVATELWIRELESS_PRODUCT_E362 0x9010 161 162 #define NOVATELWIRELESS_PRODUCT_G1 0xA001 162 163 #define NOVATELWIRELESS_PRODUCT_G1_M 0xA002 163 164 #define NOVATELWIRELESS_PRODUCT_G2 0xA010 ··· 193 192 #define DELL_PRODUCT_5730_MINICARD_SPRINT 0x8180 194 193 #define DELL_PRODUCT_5730_MINICARD_TELUS 0x8181 195 194 #define DELL_PRODUCT_5730_MINICARD_VZW 0x8182 195 + 196 + #define DELL_PRODUCT_5800_MINICARD_VZW 0x8195 /* Novatel E362 */ 197 + #define DELL_PRODUCT_5800_V2_MINICARD_VZW 0x8196 /* Novatel E362 */ 196 198 197 199 #define KYOCERA_VENDOR_ID 0x0c88 198 200 #define KYOCERA_PRODUCT_KPC650 0x17da ··· 287 283 /* ALCATEL PRODUCTS */ 288 284 #define ALCATEL_VENDOR_ID 0x1bbb 289 285 #define ALCATEL_PRODUCT_X060S_X200 0x0000 286 + #define ALCATEL_PRODUCT_X220_X500D 0x0017 290 287 291 288 #define PIRELLI_VENDOR_ID 0x1266 292 289 #define PIRELLI_PRODUCT_C100_1 0x1002 ··· 711 706 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G2) }, 712 707 /* Novatel Ovation MC551 a.k.a.
Verizon USB551L */ 713 708 { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) }, 709 + { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_E362, 0xff, 0xff, 0xff) }, 714 710 715 711 { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) }, 716 712 { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) }, ··· 734 728 { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5730_MINICARD_SPRINT) }, /* Dell Wireless 5730 Mobile Broadband EVDO/HSPA Mini-Card */ 735 729 { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5730_MINICARD_TELUS) }, /* Dell Wireless 5730 Mobile Broadband EVDO/HSPA Mini-Card */ 736 730 { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5730_MINICARD_VZW) }, /* Dell Wireless 5730 Mobile Broadband EVDO/HSPA Mini-Card */ 731 + { USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_MINICARD_VZW, 0xff, 0xff, 0xff) }, 732 + { USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_V2_MINICARD_VZW, 0xff, 0xff, 0xff) }, 737 733 { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) }, /* ADU-E100, ADU-310 */ 738 734 { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) }, 739 735 { USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) }, ··· 1165 1157 { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200), 1166 1158 .driver_info = (kernel_ulong_t)&alcatel_x200_blacklist 1167 1159 }, 1160 + { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D) }, 1168 1161 { USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) }, 1169 1162 { USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) }, 1170 1163 { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
+5 -5
drivers/usb/serial/usb_wwan.c
··· 455 455 struct usb_serial *serial = port->serial; 456 456 struct urb *urb; 457 457 458 - if (endpoint == -1) 459 - return NULL; /* endpoint not needed */ 460 - 461 458 urb = usb_alloc_urb(0, GFP_KERNEL); /* No ISO */ 462 459 if (urb == NULL) { 463 460 dev_dbg(&serial->interface->dev, ··· 486 489 init_usb_anchor(&portdata->delayed); 487 490 488 491 for (i = 0; i < N_IN_URB; i++) { 492 + if (!port->bulk_in_size) 493 + break; 494 + 489 495 buffer = (u8 *)__get_free_page(GFP_KERNEL); 490 496 if (!buffer) 491 497 goto bail_out_error; ··· 502 502 } 503 503 504 504 for (i = 0; i < N_OUT_URB; i++) { 505 - if (port->bulk_out_endpointAddress == -1) 506 - continue; 505 + if (!port->bulk_out_size) 506 + break; 507 507 508 508 buffer = kmalloc(OUT_BUFLEN, GFP_KERNEL); 509 509 if (!buffer)
+3 -1
drivers/virtio/virtio.c
··· 225 225 226 226 void unregister_virtio_device(struct virtio_device *dev) 227 227 { 228 + int index = dev->index; /* save for after device release */ 229 + 228 230 device_unregister(&dev->dev); 229 - ida_simple_remove(&virtio_index_ida, dev->index); 231 + ida_simple_remove(&virtio_index_ida, index); 230 232 } 231 233 EXPORT_SYMBOL_GPL(unregister_virtio_device); 232 234
+1
drivers/xen/Makefile
··· 2 2 obj-y += manage.o balloon.o 3 3 obj-$(CONFIG_HOTPLUG_CPU) += cpu_hotplug.o 4 4 endif 5 + obj-$(CONFIG_X86) += fallback.o 5 6 obj-y += grant-table.o features.o events.o 6 7 obj-y += xenbus/ 7 8
+1 -1
drivers/xen/events.c
··· 1395 1395 { 1396 1396 struct pt_regs *old_regs = set_irq_regs(regs); 1397 1397 1398 + irq_enter(); 1398 1399 #ifdef CONFIG_X86 1399 1400 exit_idle(); 1400 1401 #endif 1401 - irq_enter(); 1402 1402 1403 1403 __xen_evtchn_do_upcall(); 1404 1404
+80
drivers/xen/fallback.c
··· 1 + #include <linux/kernel.h> 2 + #include <linux/string.h> 3 + #include <linux/bug.h> 4 + #include <linux/export.h> 5 + #include <asm/hypervisor.h> 6 + #include <asm/xen/hypercall.h> 7 + 8 + int xen_event_channel_op_compat(int cmd, void *arg) 9 + { 10 + struct evtchn_op op; 11 + int rc; 12 + 13 + op.cmd = cmd; 14 + memcpy(&op.u, arg, sizeof(op.u)); 15 + rc = _hypercall1(int, event_channel_op_compat, &op); 16 + 17 + switch (cmd) { 18 + case EVTCHNOP_close: 19 + case EVTCHNOP_send: 20 + case EVTCHNOP_bind_vcpu: 21 + case EVTCHNOP_unmask: 22 + /* no output */ 23 + break; 24 + 25 + #define COPY_BACK(eop) \ 26 + case EVTCHNOP_##eop: \ 27 + memcpy(arg, &op.u.eop, sizeof(op.u.eop)); \ 28 + break 29 + 30 + COPY_BACK(bind_interdomain); 31 + COPY_BACK(bind_virq); 32 + COPY_BACK(bind_pirq); 33 + COPY_BACK(status); 34 + COPY_BACK(alloc_unbound); 35 + COPY_BACK(bind_ipi); 36 + #undef COPY_BACK 37 + 38 + default: 39 + WARN_ON(rc != -ENOSYS); 40 + break; 41 + } 42 + 43 + return rc; 44 + } 45 + EXPORT_SYMBOL_GPL(xen_event_channel_op_compat); 46 + 47 + int HYPERVISOR_physdev_op_compat(int cmd, void *arg) 48 + { 49 + struct physdev_op op; 50 + int rc; 51 + 52 + op.cmd = cmd; 53 + memcpy(&op.u, arg, sizeof(op.u)); 54 + rc = _hypercall1(int, physdev_op_compat, &op); 55 + 56 + switch (cmd) { 57 + case PHYSDEVOP_IRQ_UNMASK_NOTIFY: 58 + case PHYSDEVOP_set_iopl: 59 + case PHYSDEVOP_set_iobitmap: 60 + case PHYSDEVOP_apic_write: 61 + /* no output */ 62 + break; 63 + 64 + #define COPY_BACK(pop, fld) \ 65 + case PHYSDEVOP_##pop: \ 66 + memcpy(arg, &op.u.fld, sizeof(op.u.fld)); \ 67 + break 68 + 69 + COPY_BACK(irq_status_query, irq_status_query); 70 + COPY_BACK(apic_read, apic_op); 71 + COPY_BACK(ASSIGN_VECTOR, irq_op); 72 + #undef COPY_BACK 73 + 74 + default: 75 + WARN_ON(rc != -ENOSYS); 76 + break; 77 + } 78 + 79 + return rc; 80 + }
+20 -29
fs/cifs/cifsacl.c
··· 225 225 } 226 226 227 227 static void 228 + cifs_copy_sid(struct cifs_sid *dst, const struct cifs_sid *src) 229 + { 230 + memcpy(dst, src, sizeof(*dst)); 231 + dst->num_subauth = min_t(u8, src->num_subauth, NUM_SUBAUTHS); 232 + } 233 + 234 + static void 228 235 id_rb_insert(struct rb_root *root, struct cifs_sid *sidptr, 229 236 struct cifs_sid_id **psidid, char *typestr) 230 237 { ··· 255 248 } 256 249 } 257 250 258 - memcpy(&(*psidid)->sid, sidptr, sizeof(struct cifs_sid)); 251 + cifs_copy_sid(&(*psidid)->sid, sidptr); 259 252 (*psidid)->time = jiffies - (SID_MAP_RETRY + 1); 260 253 (*psidid)->refcount = 0; 261 254 ··· 361 354 * any fields of the node after a reference is put . 362 355 */ 363 356 if (test_bit(SID_ID_MAPPED, &psidid->state)) { 364 - memcpy(ssid, &psidid->sid, sizeof(struct cifs_sid)); 357 + cifs_copy_sid(ssid, &psidid->sid); 365 358 psidid->time = jiffies; /* update ts for accessing */ 366 359 goto id_sid_out; 367 360 } ··· 377 370 if (IS_ERR(sidkey)) { 378 371 rc = -EINVAL; 379 372 cFYI(1, "%s: Can't map and id to a SID", __func__); 373 + } else if (sidkey->datalen < sizeof(struct cifs_sid)) { 374 + rc = -EIO; 375 + cFYI(1, "%s: Downcall contained malformed key " 376 + "(datalen=%hu)", __func__, sidkey->datalen); 380 377 } else { 381 378 lsid = (struct cifs_sid *)sidkey->payload.data; 382 - memcpy(&psidid->sid, lsid, 383 - sidkey->datalen < sizeof(struct cifs_sid) ? 384 - sidkey->datalen : sizeof(struct cifs_sid)); 385 - memcpy(ssid, &psidid->sid, 386 - sidkey->datalen < sizeof(struct cifs_sid) ?
387 - sidkey->datalen : sizeof(struct cifs_sid)); 379 + cifs_copy_sid(&psidid->sid, lsid); 380 + cifs_copy_sid(ssid, &psidid->sid); 388 381 set_bit(SID_ID_MAPPED, &psidid->state); 389 382 key_put(sidkey); 390 383 kfree(psidid->sidstr); ··· 403 396 return rc; 404 397 } 405 398 if (test_bit(SID_ID_MAPPED, &psidid->state)) 406 - memcpy(ssid, &psidid->sid, sizeof(struct cifs_sid)); 399 + cifs_copy_sid(ssid, &psidid->sid); 407 400 else 408 401 rc = -EINVAL; 409 402 } ··· 682 675 static void copy_sec_desc(const struct cifs_ntsd *pntsd, 683 676 struct cifs_ntsd *pnntsd, __u32 sidsoffset) 684 677 { 685 - int i; 686 - 687 678 struct cifs_sid *owner_sid_ptr, *group_sid_ptr; 688 679 struct cifs_sid *nowner_sid_ptr, *ngroup_sid_ptr; 689 680 ··· 697 692 owner_sid_ptr = (struct cifs_sid *)((char *)pntsd + 698 693 le32_to_cpu(pntsd->osidoffset)); 699 694 nowner_sid_ptr = (struct cifs_sid *)((char *)pnntsd + sidsoffset); 700 - 701 - nowner_sid_ptr->revision = owner_sid_ptr->revision; 702 - nowner_sid_ptr->num_subauth = owner_sid_ptr->num_subauth; 703 - for (i = 0; i < 6; i++) 704 - nowner_sid_ptr->authority[i] = owner_sid_ptr->authority[i]; 705 - for (i = 0; i < 5; i++) 706 - nowner_sid_ptr->sub_auth[i] = owner_sid_ptr->sub_auth[i]; 695 + cifs_copy_sid(nowner_sid_ptr, owner_sid_ptr); 707 696 708 697 /* copy group sid */ 709 698 group_sid_ptr = (struct cifs_sid *)((char *)pntsd + 710 699 le32_to_cpu(pntsd->gsidoffset)); 711 700 ngroup_sid_ptr = (struct cifs_sid *)((char *)pnntsd + sidsoffset + 712 701 sizeof(struct cifs_sid)); 713 - 714 - ngroup_sid_ptr->revision = group_sid_ptr->revision; 715 - ngroup_sid_ptr->num_subauth = group_sid_ptr->num_subauth; 716 - for (i = 0; i < 6; i++) 717 - ngroup_sid_ptr->authority[i] = group_sid_ptr->authority[i]; 718 - for (i = 0; i < 5; i++) 719 - ngroup_sid_ptr->sub_auth[i] = group_sid_ptr->sub_auth[i]; 702 + cifs_copy_sid(ngroup_sid_ptr, group_sid_ptr); 720 703 721 704 return; 722 705 } ··· 1113 1120 kfree(nowner_sid_ptr); 1114 1121 return rc;
1115 1122 } 1116 - memcpy(owner_sid_ptr, nowner_sid_ptr, 1117 - sizeof(struct cifs_sid)); 1123 + cifs_copy_sid(owner_sid_ptr, nowner_sid_ptr); 1118 1124 kfree(nowner_sid_ptr); 1119 1125 *aclflag = CIFS_ACL_OWNER; 1120 1126 } ··· 1131 1139 kfree(ngroup_sid_ptr); 1132 1140 return rc; 1133 1141 } 1134 - memcpy(group_sid_ptr, ngroup_sid_ptr, 1135 - sizeof(struct cifs_sid)); 1142 + cifs_copy_sid(group_sid_ptr, ngroup_sid_ptr); 1136 1143 kfree(ngroup_sid_ptr); 1137 1144 *aclflag = CIFS_ACL_GROUP; 1138 1145 }
+10 -1
fs/cifs/dir.c
··· 398 398 * in network traffic in the other paths. 399 399 */ 400 400 if (!(oflags & O_CREAT)) { 401 - struct dentry *res = cifs_lookup(inode, direntry, 0); 401 + struct dentry *res; 402 + 403 + /* 404 + * Check for hashed negative dentry. We have already revalidated 405 + * the dentry and it is fine. No need to perform another lookup. 406 + */ 407 + if (!d_unhashed(direntry)) 408 + return -ENOENT; 409 + 410 + res = cifs_lookup(inode, direntry, 0); 402 411 if (IS_ERR(res)) 403 412 return PTR_ERR(res); 404 413
+3 -35
fs/eventpoll.c
··· 346 346 /* Tells if the epoll_ctl(2) operation needs an event copy from userspace */ 347 347 static inline int ep_op_has_event(int op) 348 348 { 349 - return op == EPOLL_CTL_ADD || op == EPOLL_CTL_MOD; 349 + return op != EPOLL_CTL_DEL; 350 350 } 351 351 352 352 /* Initialize the poll safe wake up structure */ ··· 674 674 atomic_long_dec(&ep->user->epoll_watches); 675 675 676 676 return 0; 677 - } 678 - 679 - /* 680 - * Disables a "struct epitem" in the eventpoll set. Returns -EBUSY if the item 681 - * had no event flags set, indicating that another thread may be currently 682 - * handling that item's events (in the case that EPOLLONESHOT was being 683 - * used). Otherwise a zero result indicates that the item has been disabled 684 - * from receiving events. A disabled item may be re-enabled via 685 - * EPOLL_CTL_MOD. Must be called with "mtx" held. 686 - */ 687 - static int ep_disable(struct eventpoll *ep, struct epitem *epi) 688 - { 689 - int result = 0; 690 - unsigned long flags; 691 - 692 - spin_lock_irqsave(&ep->lock, flags); 693 - if (epi->event.events & ~EP_PRIVATE_BITS) { 694 - if (ep_is_linked(&epi->rdllink)) 695 - list_del_init(&epi->rdllink); 696 - /* Ensure ep_poll_callback will not add epi back onto ready 697 - list: */ 698 - epi->event.events &= EP_PRIVATE_BITS; 699 - } 700 - else 701 - result = -EBUSY; 702 - spin_unlock_irqrestore(&ep->lock, flags); 703 - 704 - return result; 705 677 } 706 678 707 679 static void ep_free(struct eventpoll *ep) ··· 1019 1047 rb_link_node(&epi->rbn, parent, p); 1020 1048 rb_insert_color(&epi->rbn, &ep->rbr); 1021 1049 } 1050 + 1051 + 1022 1052 1023 1053 #define PATH_ARR_SIZE 5 1024 1054 /* ··· 1785 1811 epds.events |= POLLERR | POLLHUP; 1786 1812 error = ep_modify(ep, epi, &epds); 1787 1813 } else 1788 - error = -ENOENT; 1789 - break; 1790 - case EPOLL_CTL_DISABLE: 1791 - if (epi) 1792 - error = ep_disable(ep, epi); 1793 - else 1794 1814 error = -ENOENT; 1795 1815 break; 1796 1816 }
+5 -9
fs/gfs2/file.c
··· 516 516 struct gfs2_holder i_gh; 517 517 int error; 518 518 519 - gfs2_holder_init(ip->i_gl, LM_ST_SHARED, LM_FLAG_ANY, &i_gh); 520 - error = gfs2_glock_nq(&i_gh); 521 - if (error == 0) { 522 - file_accessed(file); 523 - gfs2_glock_dq(&i_gh); 524 - } 525 - gfs2_holder_uninit(&i_gh); 519 + error = gfs2_glock_nq_init(ip->i_gl, LM_ST_SHARED, LM_FLAG_ANY, 520 + &i_gh); 526 521 if (error) 527 522 return error; 523 + /* grab lock to update inode */ 524 + gfs2_glock_dq_uninit(&i_gh); 525 + file_accessed(file); 528 526 } 529 527 vma->vm_ops = &gfs2_vm_ops; 530 528 ··· 675 677 size_t writesize = iov_length(iov, nr_segs); 676 678 struct dentry *dentry = file->f_dentry; 677 679 struct gfs2_inode *ip = GFS2_I(dentry->d_inode); 678 - struct gfs2_sbd *sdp; 679 680 int ret; 680 681 681 - sdp = GFS2_SB(file->f_mapping->host); 682 682 ret = gfs2_rs_alloc(ip); 683 683 if (ret) 684 684 return ret;
+2 -14
fs/gfs2/lops.c
··· 393 393 struct gfs2_meta_header *mh; 394 394 struct gfs2_trans *tr; 395 395 396 - lock_buffer(bd->bd_bh); 397 - gfs2_log_lock(sdp); 398 396 tr = current->journal_info; 399 397 tr->tr_touched = 1; 400 398 if (!list_empty(&bd->bd_list)) 401 - goto out; 399 + return; 402 400 set_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags); 403 401 set_bit(GLF_DIRTY, &bd->bd_gl->gl_flags); 404 402 mh = (struct gfs2_meta_header *)bd->bd_bh->b_data; ··· 412 414 sdp->sd_log_num_buf++; 413 415 list_add(&bd->bd_list, &sdp->sd_log_le_buf); 414 416 tr->tr_num_buf_new++; 415 - out: 416 - gfs2_log_unlock(sdp); 417 - unlock_buffer(bd->bd_bh); 418 417 } 419 418 420 419 static void gfs2_check_magic(struct buffer_head *bh) ··· 616 621 617 622 static void revoke_lo_before_commit(struct gfs2_sbd *sdp) 618 623 { 619 - struct gfs2_log_descriptor *ld; 620 624 struct gfs2_meta_header *mh; 621 625 unsigned int offset; 622 626 struct list_head *head = &sdp->sd_log_le_revoke; ··· 628 634 629 635 length = gfs2_struct2blk(sdp, sdp->sd_log_num_revoke, sizeof(u64)); 630 636 page = gfs2_get_log_desc(sdp, GFS2_LOG_DESC_REVOKE, length, sdp->sd_log_num_revoke); 631 - ld = page_address(page); 632 637 offset = sizeof(struct gfs2_log_descriptor); 633 638 634 639 list_for_each_entry(bd, head, bd_list) { ··· 770 777 struct address_space *mapping = bd->bd_bh->b_page->mapping; 771 778 struct gfs2_inode *ip = GFS2_I(mapping->host); 772 779 773 - lock_buffer(bd->bd_bh); 774 - gfs2_log_lock(sdp); 775 780 if (tr) 776 781 tr->tr_touched = 1; 777 782 if (!list_empty(&bd->bd_list)) 778 - goto out; 783 + return; 779 784 set_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags); 780 785 set_bit(GLF_DIRTY, &bd->bd_gl->gl_flags); 781 786 if (gfs2_is_jdata(ip)) { ··· 784 793 } else { 785 794 list_add_tail(&bd->bd_list, &sdp->sd_log_le_ordered); 786 795 } 787 - out: 788 - gfs2_log_unlock(sdp); 789 - unlock_buffer(bd->bd_bh); 790 796 } 791 797 792 798 /**
+5 -2
fs/gfs2/quota.c
··· 497 497 struct gfs2_quota_data **qd; 498 498 int error; 499 499 500 - if (ip->i_res == NULL) 501 - gfs2_rs_alloc(ip); 500 + if (ip->i_res == NULL) { 501 + error = gfs2_rs_alloc(ip); 502 + if (error) 503 + return error; 504 + } 502 505 503 506 qd = ip->i_res->rs_qa_qd; 504 507
+21 -12
fs/gfs2/rgrp.c
··· 553 553 */ 554 554 int gfs2_rs_alloc(struct gfs2_inode *ip) 555 555 { 556 - int error = 0; 557 556 struct gfs2_blkreserv *res; 558 557 559 558 if (ip->i_res) ··· 560 561 561 562 res = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS); 562 563 if (!res) 563 - error = -ENOMEM; 564 + return -ENOMEM; 564 565 565 566 RB_CLEAR_NODE(&res->rs_node); 566 567 ··· 570 571 else 571 572 ip->i_res = res; 572 573 up_write(&ip->i_rw_mutex); 573 - return error; 574 + return 0; 574 575 } 575 576 576 577 static void dump_rs(struct seq_file *seq, const struct gfs2_blkreserv *rs) ··· 1262 1263 int ret = 0; 1263 1264 u64 amt; 1264 1265 u64 trimmed = 0; 1266 + u64 start, end, minlen; 1265 1267 unsigned int x; 1268 + unsigned bs_shift = sdp->sd_sb.sb_bsize_shift; 1266 1269 1267 1270 if (!capable(CAP_SYS_ADMIN)) 1268 1271 return -EPERM; ··· 1272 1271 if (!blk_queue_discard(q)) 1273 1272 return -EOPNOTSUPP; 1274 1273 1275 - if (argp == NULL) { 1276 - r.start = 0; 1277 - r.len = ULLONG_MAX; 1278 - r.minlen = 0; 1279 - } else if (copy_from_user(&r, argp, sizeof(r))) 1274 + if (copy_from_user(&r, argp, sizeof(r))) 1280 1275 return -EFAULT; 1281 1276 1282 1277 ret = gfs2_rindex_update(sdp); 1283 1278 if (ret) 1284 1279 return ret; 1285 1280 1286 - rgd = gfs2_blk2rgrpd(sdp, r.start, 0); 1287 - rgd_end = gfs2_blk2rgrpd(sdp, r.start + r.len, 0); 1281 + start = r.start >> bs_shift; 1282 + end = start + (r.len >> bs_shift); 1283 + minlen = max_t(u64, r.minlen, 1284 + q->limits.discard_granularity) >> bs_shift; 1285 + 1286 + rgd = gfs2_blk2rgrpd(sdp, start, 0); 1287 + rgd_end = gfs2_blk2rgrpd(sdp, end - 1, 0); 1288 + 1289 + if (end <= start || 1290 + minlen > sdp->sd_max_rg_data || 1291 + start > rgd_end->rd_data0 + rgd_end->rd_data) 1292 + return -EINVAL; 1288 1293 1289 1294 while (1) { ··· 1302 1295 /* Trim each bitmap in the rgrp */ 1303 1296 for (x = 0; x < rgd->rd_length; x++) { 1304 1297 struct gfs2_bitmap *bi = rgd->rd_bits + x; 1305 - ret = gfs2_rgrp_send_discards(sdp,
rgd->rd_data0, NULL, bi, r.minlen, &amt); 1298 + ret = gfs2_rgrp_send_discards(sdp, 1299 + rgd->rd_data0, NULL, bi, minlen, 1300 + &amt); 1306 1301 if (ret) { 1307 1302 gfs2_glock_dq_uninit(&gh); 1308 1303 goto out; ··· 1333 1324 1334 1325 out: 1335 1326 r.len = trimmed << 9; 1336 - if (argp && copy_to_user(argp, &r, sizeof(r))) 1327 + if (copy_to_user(argp, &r, sizeof(r))) 1337 1328 return -EFAULT; 1338 1329 1339 1330 return ret;
+2 -1
fs/gfs2/super.c
··· 810 810 return; 811 811 } 812 812 need_unlock = 1; 813 - } 813 + } else if (WARN_ON_ONCE(ip->i_gl->gl_state != LM_ST_EXCLUSIVE)) 814 + return; 814 815 815 816 if (current->journal_info == NULL) { 816 817 ret = gfs2_trans_begin(sdp, RES_DINODE, 0);
+8
fs/gfs2/trans.c
··· 155 155 struct gfs2_sbd *sdp = gl->gl_sbd; 156 156 struct gfs2_bufdata *bd; 157 157 158 + lock_buffer(bh); 159 + gfs2_log_lock(sdp); 158 160 bd = bh->b_private; 159 161 if (bd) 160 162 gfs2_assert(sdp, bd->bd_gl == gl); 161 163 else { 164 + gfs2_log_unlock(sdp); 165 + unlock_buffer(bh); 162 166 gfs2_attach_bufdata(gl, bh, meta); 163 167 bd = bh->b_private; 168 + lock_buffer(bh); 169 + gfs2_log_lock(sdp); 164 170 } 165 171 lops_add(sdp, bd); 172 + gfs2_log_unlock(sdp); 173 + unlock_buffer(bh); 166 174 } 167 175 168 176 void gfs2_trans_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd)
+3 -2
fs/nfs/dns_resolve.c
··· 217 217 { 218 218 char buf1[NFS_DNS_HOSTNAME_MAXLEN+1]; 219 219 struct nfs_dns_ent key, *item; 220 - unsigned long ttl; 220 + unsigned int ttl; 221 221 ssize_t len; 222 222 int ret = -EINVAL; 223 223 ··· 240 240 key.namelen = len; 241 241 memset(&key.h, 0, sizeof(key.h)); 242 242 243 - ttl = get_expiry(&buf); 243 + if (get_uint(&buf, &ttl) < 0) 244 + goto out; 244 245 if (ttl == 0) 245 246 goto out; 246 247 key.h.expiry_time = ttl + seconds_since_boot();
+4 -1
fs/nfs/inode.c
··· 685 685 if (ctx->cred != NULL) 686 686 put_rpccred(ctx->cred); 687 687 dput(ctx->dentry); 688 - nfs_sb_deactive(sb); 688 + if (is_sync) 689 + nfs_sb_deactive(sb); 690 + else 691 + nfs_sb_deactive_async(sb); 689 692 kfree(ctx->mdsthreshold); 690 693 kfree(ctx); 691 694 }
+4 -2
fs/nfs/internal.h
··· 351 351 extern void __exit unregister_nfs_fs(void); 352 352 extern void nfs_sb_active(struct super_block *sb); 353 353 extern void nfs_sb_deactive(struct super_block *sb); 354 + extern void nfs_sb_deactive_async(struct super_block *sb); 354 355 355 356 /* namespace.c */ 357 + #define NFS_PATH_CANONICAL 1 356 358 extern char *nfs_path(char **p, struct dentry *dentry, 357 - char *buffer, ssize_t buflen); 359 + char *buffer, ssize_t buflen, unsigned flags); 358 360 extern struct vfsmount *nfs_d_automount(struct path *path); 359 361 struct vfsmount *nfs_submount(struct nfs_server *, struct dentry *, 360 362 struct nfs_fh *, struct nfs_fattr *); ··· 500 498 char *buffer, ssize_t buflen) 501 499 { 502 500 char *dummy; 503 - return nfs_path(&dummy, dentry, buffer, buflen); 501 + return nfs_path(&dummy, dentry, buffer, buflen, NFS_PATH_CANONICAL); 504 502 } 505 503 506 504 /*
+1 -1
fs/nfs/mount_clnt.c
··· 181 181 else 182 182 msg.rpc_proc = &mnt_clnt->cl_procinfo[MOUNTPROC_MNT]; 183 183 184 - status = rpc_call_sync(mnt_clnt, &msg, 0); 184 + status = rpc_call_sync(mnt_clnt, &msg, RPC_TASK_SOFT|RPC_TASK_TIMEOUT); 185 185 rpc_shutdown_client(mnt_clnt); 186 186 187 187 if (status < 0)
+14 -5
fs/nfs/namespace.c
··· 33 33 * @dentry - pointer to dentry 34 34 * @buffer - result buffer 35 35 * @buflen - length of buffer 36 + * @flags - options (see below) 36 37 * 37 38 * Helper function for constructing the server pathname 38 39 * by arbitrary hashed dentry. ··· 41 40 * This is mainly for use in figuring out the path on the 42 41 * server side when automounting on top of an existing partition 43 42 * and in generating /proc/mounts and friends. 43 + * 44 + * Supported flags: 45 + * NFS_PATH_CANONICAL: ensure there is exactly one slash after 46 + * the original device (export) name 47 + * (if unset, the original name is returned verbatim) 44 48 */ 45 - char *nfs_path(char **p, struct dentry *dentry, char *buffer, ssize_t buflen) 49 + char *nfs_path(char **p, struct dentry *dentry, char *buffer, ssize_t buflen, 50 + unsigned flags) 46 51 { 47 52 char *end; 48 53 int namelen; ··· 81 74 rcu_read_unlock(); 82 75 goto rename_retry; 83 76 } 84 - if (*end != '/') { 77 + if ((flags & NFS_PATH_CANONICAL) && *end != '/') { 85 78 if (--buflen < 0) { 86 79 spin_unlock(&dentry->d_lock); 87 80 rcu_read_unlock(); ··· 98 91 return end; 99 92 } 100 93 namelen = strlen(base); 101 - /* Strip off excess slashes in base string */ 102 - while (namelen > 0 && base[namelen - 1] == '/') 103 - namelen--; 94 + if (flags & NFS_PATH_CANONICAL) { 95 + /* Strip off excess slashes in base string */ 96 + while (namelen > 0 && base[namelen - 1] == '/') 97 + namelen--; 98 + } 104 99 buflen -= namelen; 105 100 if (buflen < 0) { 106 101 spin_unlock(&dentry->d_lock);
+2 -1
fs/nfs/nfs4namespace.c
··· 81 81 static char *nfs4_path(struct dentry *dentry, char *buffer, ssize_t buflen) 82 82 { 83 83 char *limit; 84 - char *path = nfs_path(&limit, dentry, buffer, buflen); 84 + char *path = nfs_path(&limit, dentry, buffer, buflen, 85 + NFS_PATH_CANONICAL); 85 86 if (!IS_ERR(path)) { 86 87 char *path_component = nfs_path_component(path, limit); 87 88 if (path_component)
+28 -18
fs/nfs/nfs4proc.c
··· 339 339 dprintk("%s ERROR: %d Reset session\n", __func__, 340 340 errorcode); 341 341 nfs4_schedule_session_recovery(clp->cl_session, errorcode); 342 - exception->retry = 1; 343 - break; 342 + goto wait_on_recovery; 344 343 #endif /* defined(CONFIG_NFS_V4_1) */ 345 344 case -NFS4ERR_FILE_OPEN: 346 345 if (exception->timeout > HZ) { ··· 1571 1572 data->timestamp = jiffies; 1572 1573 if (nfs4_setup_sequence(data->o_arg.server, 1573 1574 &data->o_arg.seq_args, 1574 - &data->o_res.seq_res, task)) 1575 - return; 1576 - rpc_call_start(task); 1575 + &data->o_res.seq_res, 1576 + task) != 0) 1577 + nfs_release_seqid(data->o_arg.seqid); 1578 + else 1579 + rpc_call_start(task); 1577 1580 return; 1578 1581 unlock_no_action: 1579 1582 rcu_read_unlock(); ··· 1749 1748 1750 1749 /* even though OPEN succeeded, access is denied. Close the file */ 1751 1750 nfs4_close_state(state, fmode); 1752 - return -NFS4ERR_ACCESS; 1751 + return -EACCES; 1753 1752 } 1754 1753 1755 1754 /* ··· 2197 2196 nfs4_put_open_state(calldata->state); 2198 2197 nfs_free_seqid(calldata->arg.seqid); 2199 2198 nfs4_put_state_owner(sp); 2200 - nfs_sb_deactive(sb); 2199 + nfs_sb_deactive_async(sb); 2201 2200 kfree(calldata); 2202 2201 } 2203 2202 ··· 2297 2296 if (nfs4_setup_sequence(NFS_SERVER(inode), 2298 2297 &calldata->arg.seq_args, 2299 2298 &calldata->res.seq_res, 2300 - task)) 2301 - goto out; 2302 - rpc_call_start(task); 2299 + task) != 0) 2300 + nfs_release_seqid(calldata->arg.seqid); 2301 + else 2302 + rpc_call_start(task); 2303 2303 out: 2304 2304 dprintk("%s: done!\n", __func__); 2305 2305 } ··· 4531 4529 if (nfs4_async_handle_error(task, calldata->server, NULL) == -EAGAIN) 4532 4530 rpc_restart_call_prepare(task); 4533 4531 } 4532 + nfs_release_seqid(calldata->arg.seqid); 4534 4533 } 4535 4534 4536 4535 static void nfs4_locku_prepare(struct rpc_task *task, void *data) ··· 4548 4545 calldata->timestamp = jiffies; 4549 4546 if (nfs4_setup_sequence(calldata->server, 4550 4547 
&calldata->arg.seq_args, 4551 - &calldata->res.seq_res, task)) 4552 - return; 4553 - rpc_call_start(task); 4548 + &calldata->res.seq_res, 4549 + task) != 0) 4550 + nfs_release_seqid(calldata->arg.seqid); 4551 + else 4552 + rpc_call_start(task); 4554 4553 } 4555 4554 4556 4555 static const struct rpc_call_ops nfs4_locku_ops = { ··· 4697 4692 /* Do we need to do an open_to_lock_owner? */ 4698 4693 if (!(data->arg.lock_seqid->sequence->flags & NFS_SEQID_CONFIRMED)) { 4699 4694 if (nfs_wait_on_sequence(data->arg.open_seqid, task) != 0) 4700 - return; 4695 + goto out_release_lock_seqid; 4701 4696 data->arg.open_stateid = &state->stateid; 4702 4697 data->arg.new_lock_owner = 1; 4703 4698 data->res.open_seqid = data->arg.open_seqid; ··· 4706 4701 data->timestamp = jiffies; 4707 4702 if (nfs4_setup_sequence(data->server, 4708 4703 &data->arg.seq_args, 4709 - &data->res.seq_res, task)) 4704 + &data->res.seq_res, 4705 + task) == 0) { 4706 + rpc_call_start(task); 4710 4707 return; 4711 - rpc_call_start(task); 4712 - dprintk("%s: done!, ret = %d\n", __func__, data->rpc_status); 4708 + } 4709 + nfs_release_seqid(data->arg.open_seqid); 4710 + out_release_lock_seqid: 4711 + nfs_release_seqid(data->arg.lock_seqid); 4712 + dprintk("%s: done!, ret = %d\n", __func__, task->tk_status); 4713 4713 } 4714 4714 4715 4715 static void nfs4_recover_lock_prepare(struct rpc_task *task, void *calldata) ··· 5677 5667 tbl->slots = new; 5678 5668 tbl->max_slots = max_slots; 5679 5669 } 5680 - tbl->highest_used_slotid = -1; /* no slot is currently used */ 5670 + tbl->highest_used_slotid = NFS4_NO_SLOT; 5681 5671 for (i = 0; i < tbl->max_slots; i++) 5682 5672 tbl->slots[i].seq_nr = ivalue; 5683 5673 spin_unlock(&tbl->slot_tbl_lock);
+2 -2
fs/nfs/pnfs.c
··· 925 925 if (likely(nfsi->layout == NULL)) { /* Won the race? */ 926 926 nfsi->layout = new; 927 927 return new; 928 - } 929 - pnfs_free_layout_hdr(new); 928 + } else if (new != NULL) 929 + pnfs_free_layout_hdr(new); 930 930 out_existing: 931 931 pnfs_get_layout_hdr(nfsi->layout); 932 932 return nfsi->layout;
+50 -1
fs/nfs/super.c
··· 54 54 #include <linux/parser.h> 55 55 #include <linux/nsproxy.h> 56 56 #include <linux/rcupdate.h> 57 + #include <linux/kthread.h> 57 58 58 59 #include <asm/uaccess.h> 59 60 ··· 416 415 } 417 416 EXPORT_SYMBOL_GPL(nfs_sb_deactive); 418 417 418 + static int nfs_deactivate_super_async_work(void *ptr) 419 + { 420 + struct super_block *sb = ptr; 421 + 422 + deactivate_super(sb); 423 + module_put_and_exit(0); 424 + return 0; 425 + } 426 + 427 + /* 428 + * same effect as deactivate_super, but will do final unmount in kthread 429 + * context 430 + */ 431 + static void nfs_deactivate_super_async(struct super_block *sb) 432 + { 433 + struct task_struct *task; 434 + char buf[INET6_ADDRSTRLEN + 1]; 435 + struct nfs_server *server = NFS_SB(sb); 436 + struct nfs_client *clp = server->nfs_client; 437 + 438 + if (!atomic_add_unless(&sb->s_active, -1, 1)) { 439 + rcu_read_lock(); 440 + snprintf(buf, sizeof(buf), 441 + rpc_peeraddr2str(clp->cl_rpcclient, RPC_DISPLAY_ADDR)); 442 + rcu_read_unlock(); 443 + 444 + __module_get(THIS_MODULE); 445 + task = kthread_run(nfs_deactivate_super_async_work, sb, 446 + "%s-deactivate-super", buf); 447 + if (IS_ERR(task)) { 448 + pr_err("%s: kthread_run: %ld\n", 449 + __func__, PTR_ERR(task)); 450 + /* make synchronous call and hope for the best */ 451 + deactivate_super(sb); 452 + module_put(THIS_MODULE); 453 + } 454 + } 455 + } 456 + 457 + void nfs_sb_deactive_async(struct super_block *sb) 458 + { 459 + struct nfs_server *server = NFS_SB(sb); 460 + 461 + if (atomic_dec_and_test(&server->active)) 462 + nfs_deactivate_super_async(sb); 463 + } 464 + EXPORT_SYMBOL_GPL(nfs_sb_deactive_async); 465 + 419 466 /* 420 467 * Deliver file system statistics to userspace 421 468 */ ··· 820 771 int err = 0; 821 772 if (!page) 822 773 return -ENOMEM; 823 - devname = nfs_path(&dummy, root, page, PAGE_SIZE); 774 + devname = nfs_path(&dummy, root, page, PAGE_SIZE, 0); 824 775 if (IS_ERR(devname)) 825 776 err = PTR_ERR(devname); 826 777 else
+1 -1
fs/nfs/unlink.c
··· 95 95 96 96 nfs_dec_sillycount(data->dir); 97 97 nfs_free_unlinkdata(data); 98 - nfs_sb_deactive(sb); 98 + nfs_sb_deactive_async(sb); 99 99 } 100 100 101 101 static void nfs_unlink_prepare(struct rpc_task *task, void *calldata)
+1
fs/notify/fanotify/fanotify.c
··· 21 21 if ((old->path.mnt == new->path.mnt) && 22 22 (old->path.dentry == new->path.dentry)) 23 23 return true; 24 + break; 24 25 case (FSNOTIFY_EVENT_NONE): 25 26 return true; 26 27 default:
+109
fs/proc/base.c
··· 873 873 .release = mem_release, 874 874 }; 875 875 876 + static ssize_t oom_adj_read(struct file *file, char __user *buf, size_t count, 877 + loff_t *ppos) 878 + { 879 + struct task_struct *task = get_proc_task(file->f_path.dentry->d_inode); 880 + char buffer[PROC_NUMBUF]; 881 + int oom_adj = OOM_ADJUST_MIN; 882 + size_t len; 883 + unsigned long flags; 884 + 885 + if (!task) 886 + return -ESRCH; 887 + if (lock_task_sighand(task, &flags)) { 888 + if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MAX) 889 + oom_adj = OOM_ADJUST_MAX; 890 + else 891 + oom_adj = (task->signal->oom_score_adj * -OOM_DISABLE) / 892 + OOM_SCORE_ADJ_MAX; 893 + unlock_task_sighand(task, &flags); 894 + } 895 + put_task_struct(task); 896 + len = snprintf(buffer, sizeof(buffer), "%d\n", oom_adj); 897 + return simple_read_from_buffer(buf, count, ppos, buffer, len); 898 + } 899 + 900 + static ssize_t oom_adj_write(struct file *file, const char __user *buf, 901 + size_t count, loff_t *ppos) 902 + { 903 + struct task_struct *task; 904 + char buffer[PROC_NUMBUF]; 905 + int oom_adj; 906 + unsigned long flags; 907 + int err; 908 + 909 + memset(buffer, 0, sizeof(buffer)); 910 + if (count > sizeof(buffer) - 1) 911 + count = sizeof(buffer) - 1; 912 + if (copy_from_user(buffer, buf, count)) { 913 + err = -EFAULT; 914 + goto out; 915 + } 916 + 917 + err = kstrtoint(strstrip(buffer), 0, &oom_adj); 918 + if (err) 919 + goto out; 920 + if ((oom_adj < OOM_ADJUST_MIN || oom_adj > OOM_ADJUST_MAX) && 921 + oom_adj != OOM_DISABLE) { 922 + err = -EINVAL; 923 + goto out; 924 + } 925 + 926 + task = get_proc_task(file->f_path.dentry->d_inode); 927 + if (!task) { 928 + err = -ESRCH; 929 + goto out; 930 + } 931 + 932 + task_lock(task); 933 + if (!task->mm) { 934 + err = -EINVAL; 935 + goto err_task_lock; 936 + } 937 + 938 + if (!lock_task_sighand(task, &flags)) { 939 + err = -ESRCH; 940 + goto err_task_lock; 941 + } 942 + 943 + /* 944 + * Scale /proc/pid/oom_score_adj appropriately ensuring that a maximum 945 + * 
value is always attainable. 946 + */ 947 + if (oom_adj == OOM_ADJUST_MAX) 948 + oom_adj = OOM_SCORE_ADJ_MAX; 949 + else 950 + oom_adj = (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE; 951 + 952 + if (oom_adj < task->signal->oom_score_adj && 953 + !capable(CAP_SYS_RESOURCE)) { 954 + err = -EACCES; 955 + goto err_sighand; 956 + } 957 + 958 + /* 959 + * /proc/pid/oom_adj is provided for legacy purposes, ask users to use 960 + * /proc/pid/oom_score_adj instead. 961 + */ 962 + printk_once(KERN_WARNING "%s (%d): /proc/%d/oom_adj is deprecated, please use /proc/%d/oom_score_adj instead.\n", 963 + current->comm, task_pid_nr(current), task_pid_nr(task), 964 + task_pid_nr(task)); 965 + 966 + task->signal->oom_score_adj = oom_adj; 967 + trace_oom_score_adj_update(task); 968 + err_sighand: 969 + unlock_task_sighand(task, &flags); 970 + err_task_lock: 971 + task_unlock(task); 972 + put_task_struct(task); 973 + out: 974 + return err < 0 ? err : count; 975 + } 976 + 977 + static const struct file_operations proc_oom_adj_operations = { 978 + .read = oom_adj_read, 979 + .write = oom_adj_write, 980 + .llseek = generic_file_llseek, 981 + }; 982 + 876 983 static ssize_t oom_score_adj_read(struct file *file, char __user *buf, 877 984 size_t count, loff_t *ppos) 878 985 { ··· 2705 2598 REG("cgroup", S_IRUGO, proc_cgroup_operations), 2706 2599 #endif 2707 2600 INF("oom_score", S_IRUGO, proc_oom_score), 2601 + REG("oom_adj", S_IRUGO|S_IWUSR, proc_oom_adj_operations), 2708 2602 REG("oom_score_adj", S_IRUGO|S_IWUSR, proc_oom_score_adj_operations), 2709 2603 #ifdef CONFIG_AUDITSYSCALL 2710 2604 REG("loginuid", S_IWUSR|S_IRUGO, proc_loginuid_operations), ··· 3072 2964 REG("cgroup", S_IRUGO, proc_cgroup_operations), 3073 2965 #endif 3074 2966 INF("oom_score", S_IRUGO, proc_oom_score), 2967 + REG("oom_adj", S_IRUGO|S_IWUSR, proc_oom_adj_operations), 3075 2968 REG("oom_score_adj", S_IRUGO|S_IWUSR, proc_oom_score_adj_operations), 3076 2969 #ifdef CONFIG_AUDITSYSCALL 3077 2970 REG("loginuid", 
S_IWUSR|S_IRUGO, proc_loginuid_operations),
+2 -1
fs/pstore/platform.c
··· 161 161 162 162 while (s < e) { 163 163 unsigned long flags; 164 + u64 id; 164 165 165 166 if (c > psinfo->bufsize) 166 167 c = psinfo->bufsize; ··· 173 172 spin_lock_irqsave(&psinfo->buf_lock, flags); 174 173 } 175 174 memcpy(psinfo->buf, s, c); 176 - psinfo->write(PSTORE_TYPE_CONSOLE, 0, NULL, 0, c, psinfo); 175 + psinfo->write(PSTORE_TYPE_CONSOLE, 0, &id, 0, c, psinfo); 177 176 spin_unlock_irqrestore(&psinfo->buf_lock, flags); 178 177 s += c; 179 178 c = e - s;
+10 -2
fs/ubifs/find.c
··· 681 681 if (!lprops) { 682 682 lprops = ubifs_fast_find_freeable(c); 683 683 if (!lprops) { 684 - ubifs_assert(c->freeable_cnt == 0); 685 - if (c->lst.empty_lebs - c->lst.taken_empty_lebs > 0) { 684 + /* 685 + * The first condition means the following: go scan the 686 + * LPT if there are uncategorized lprops, which means 687 + * there may be freeable LEBs there (UBIFS does not 688 + * store the information about freeable LEBs in the 689 + * master node). 690 + */ 691 + if (c->in_a_category_cnt != c->main_lebs || 692 + c->lst.empty_lebs - c->lst.taken_empty_lebs > 0) { 693 + ubifs_assert(c->freeable_cnt == 0); 686 694 lprops = scan_for_leb_for_idx(c); 687 695 if (IS_ERR(lprops)) { 688 696 err = PTR_ERR(lprops);
+6
fs/ubifs/lprops.c
··· 300 300 default: 301 301 ubifs_assert(0); 302 302 } 303 + 303 304 lprops->flags &= ~LPROPS_CAT_MASK; 304 305 lprops->flags |= cat; 306 + c->in_a_category_cnt += 1; 307 + ubifs_assert(c->in_a_category_cnt <= c->main_lebs); 305 308 } 306 309 307 310 /** ··· 337 334 default: 338 335 ubifs_assert(0); 339 336 } 337 + 338 + c->in_a_category_cnt -= 1; 339 + ubifs_assert(c->in_a_category_cnt >= 0); 340 340 } 341 341 342 342 /**
+3
fs/ubifs/ubifs.h
··· 1183 1183 * @freeable_list: list of freeable non-index LEBs (free + dirty == @leb_size) 1184 1184 * @frdi_idx_list: list of freeable index LEBs (free + dirty == @leb_size) 1185 1185 * @freeable_cnt: number of freeable LEBs in @freeable_list 1186 + * @in_a_category_cnt: count of lprops which are in a certain category, which 1187 + * basically means that they were loaded from the flash 1186 1188 * 1187 1189 * @ltab_lnum: LEB number of LPT's own lprops table 1188 1190 * @ltab_offs: offset of LPT's own lprops table ··· 1414 1412 struct list_head freeable_list; 1415 1413 struct list_head frdi_idx_list; 1416 1414 int freeable_cnt; 1415 + int in_a_category_cnt; 1417 1416 1418 1417 int ltab_lnum; 1419 1418 int ltab_offs;
+2 -41
fs/xfs/xfs_alloc.c
··· 1866 1866 /* 1867 1867 * Initialize the args structure. 1868 1868 */ 1869 + memset(&targs, 0, sizeof(targs)); 1869 1870 targs.tp = tp; 1870 1871 targs.mp = mp; 1871 1872 targs.agbp = agbp; ··· 2208 2207 * group or loop over the allocation groups to find the result. 2209 2208 */ 2210 2209 int /* error */ 2211 - __xfs_alloc_vextent( 2210 + xfs_alloc_vextent( 2212 2211 xfs_alloc_arg_t *args) /* allocation argument structure */ 2213 2212 { 2214 2213 xfs_agblock_t agsize; /* allocation group size */ ··· 2416 2415 error0: 2417 2416 xfs_perag_put(args->pag); 2418 2417 return error; 2419 - } 2420 - 2421 - static void 2422 - xfs_alloc_vextent_worker( 2423 - struct work_struct *work) 2424 - { 2425 - struct xfs_alloc_arg *args = container_of(work, 2426 - struct xfs_alloc_arg, work); 2427 - unsigned long pflags; 2428 - 2429 - /* we are in a transaction context here */ 2430 - current_set_flags_nested(&pflags, PF_FSTRANS); 2431 - 2432 - args->result = __xfs_alloc_vextent(args); 2433 - complete(args->done); 2434 - 2435 - current_restore_flags_nested(&pflags, PF_FSTRANS); 2436 - } 2437 - 2438 - /* 2439 - * Data allocation requests often come in with little stack to work on. Push 2440 - * them off to a worker thread so there is lots of stack to use. Metadata 2441 - * requests, OTOH, are generally from low stack usage paths, so avoid the 2442 - * context switch overhead here. 2443 - */ 2444 - int 2445 - xfs_alloc_vextent( 2446 - struct xfs_alloc_arg *args) 2447 - { 2448 - DECLARE_COMPLETION_ONSTACK(done); 2449 - 2450 - if (!args->userdata) 2451 - return __xfs_alloc_vextent(args); 2452 - 2453 - 2454 - args->done = &done; 2455 - INIT_WORK_ONSTACK(&args->work, xfs_alloc_vextent_worker); 2456 - queue_work(xfs_alloc_wq, &args->work); 2457 - wait_for_completion(&done); 2458 - return args->result; 2459 2418 } 2460 2419 2461 2420 /*
-3
fs/xfs/xfs_alloc.h
··· 120 120 char isfl; /* set if is freelist blocks - !acctg */ 121 121 char userdata; /* set if this is user data */ 122 122 xfs_fsblock_t firstblock; /* io first block allocated */ 123 - struct completion *done; 124 - struct work_struct work; 125 - int result; 126 123 } xfs_alloc_arg_t; 127 124 128 125 /*
+2
fs/xfs/xfs_alloc_btree.c
··· 121 121 xfs_extent_busy_insert(cur->bc_tp, be32_to_cpu(agf->agf_seqno), bno, 1, 122 122 XFS_EXTENT_BUSY_SKIP_DISCARD); 123 123 xfs_trans_agbtree_delta(cur->bc_tp, -1); 124 + 125 + xfs_trans_binval(cur->bc_tp, bp); 124 126 return 0; 125 127 } 126 128
+54 -9
fs/xfs/xfs_bmap.c
··· 2437 2437 * Normal allocation, done through xfs_alloc_vextent. 2438 2438 */ 2439 2439 tryagain = isaligned = 0; 2440 + memset(&args, 0, sizeof(args)); 2440 2441 args.tp = ap->tp; 2441 2442 args.mp = mp; 2442 2443 args.fsbno = ap->blkno; ··· 3083 3082 * Convert to a btree with two levels, one record in root. 3084 3083 */ 3085 3084 XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_BTREE); 3085 + memset(&args, 0, sizeof(args)); 3086 3086 args.tp = tp; 3087 3087 args.mp = mp; 3088 3088 args.firstblock = *firstblock; ··· 3239 3237 xfs_buf_t *bp; /* buffer for extent block */ 3240 3238 xfs_bmbt_rec_host_t *ep;/* extent record pointer */ 3241 3239 3240 + memset(&args, 0, sizeof(args)); 3242 3241 args.tp = tp; 3243 3242 args.mp = ip->i_mount; 3244 3243 args.firstblock = *firstblock; ··· 4619 4616 4620 4617 4621 4618 STATIC int 4622 - xfs_bmapi_allocate( 4623 - struct xfs_bmalloca *bma, 4624 - int flags) 4619 + __xfs_bmapi_allocate( 4620 + struct xfs_bmalloca *bma) 4625 4621 { 4626 4622 struct xfs_mount *mp = bma->ip->i_mount; 4627 - int whichfork = (flags & XFS_BMAPI_ATTRFORK) ? 4623 + int whichfork = (bma->flags & XFS_BMAPI_ATTRFORK) ? 4628 4624 XFS_ATTR_FORK : XFS_DATA_FORK; 4629 4625 struct xfs_ifork *ifp = XFS_IFORK_PTR(bma->ip, whichfork); 4630 4626 int tmp_logflags = 0; ··· 4656 4654 * Indicate if this is the first user data in the file, or just any 4657 4655 * user data. 4658 4656 */ 4659 - if (!(flags & XFS_BMAPI_METADATA)) { 4657 + if (!(bma->flags & XFS_BMAPI_METADATA)) { 4660 4658 bma->userdata = (bma->offset == 0) ? 4661 4659 XFS_ALLOC_INITIAL_USER_DATA : XFS_ALLOC_USERDATA; 4662 4660 } 4663 4661 4664 - bma->minlen = (flags & XFS_BMAPI_CONTIG) ? bma->length : 1; 4662 + bma->minlen = (bma->flags & XFS_BMAPI_CONTIG) ? bma->length : 1; 4665 4663 4666 4664 /* 4667 4665 * Only want to do the alignment at the eof if it is userdata and 4668 4666 * allocation length is larger than a stripe unit. 
4669 4667 */ 4670 4668 if (mp->m_dalign && bma->length >= mp->m_dalign && 4671 - !(flags & XFS_BMAPI_METADATA) && whichfork == XFS_DATA_FORK) { 4669 + !(bma->flags & XFS_BMAPI_METADATA) && whichfork == XFS_DATA_FORK) { 4672 4670 error = xfs_bmap_isaeof(bma, whichfork); 4673 4671 if (error) 4674 4672 return error; 4675 4673 } 4674 + 4675 + if (bma->flags & XFS_BMAPI_STACK_SWITCH) 4676 + bma->stack_switch = 1; 4676 4677 4677 4678 error = xfs_bmap_alloc(bma); 4678 4679 if (error) ··· 4711 4706 * A wasdelay extent has been initialized, so shouldn't be flagged 4712 4707 * as unwritten. 4713 4708 */ 4714 - if (!bma->wasdel && (flags & XFS_BMAPI_PREALLOC) && 4709 + if (!bma->wasdel && (bma->flags & XFS_BMAPI_PREALLOC) && 4715 4710 xfs_sb_version_hasextflgbit(&mp->m_sb)) 4716 4711 bma->got.br_state = XFS_EXT_UNWRITTEN; 4717 4712 ··· 4737 4732 ASSERT(bma->got.br_state == XFS_EXT_NORM || 4738 4733 bma->got.br_state == XFS_EXT_UNWRITTEN); 4739 4734 return 0; 4735 + } 4736 + 4737 + static void 4738 + xfs_bmapi_allocate_worker( 4739 + struct work_struct *work) 4740 + { 4741 + struct xfs_bmalloca *args = container_of(work, 4742 + struct xfs_bmalloca, work); 4743 + unsigned long pflags; 4744 + 4745 + /* we are in a transaction context here */ 4746 + current_set_flags_nested(&pflags, PF_FSTRANS); 4747 + 4748 + args->result = __xfs_bmapi_allocate(args); 4749 + complete(args->done); 4750 + 4751 + current_restore_flags_nested(&pflags, PF_FSTRANS); 4752 + } 4753 + 4754 + /* 4755 + * Some allocation requests often come in with little stack to work on. Push 4756 + * them off to a worker thread so there is lots of stack to use. Otherwise just 4757 + * call directly to avoid the context switch overhead here. 
4758 + */ 4759 + int 4760 + xfs_bmapi_allocate( 4761 + struct xfs_bmalloca *args) 4762 + { 4763 + DECLARE_COMPLETION_ONSTACK(done); 4764 + 4765 + if (!args->stack_switch) 4766 + return __xfs_bmapi_allocate(args); 4767 + 4768 + 4769 + args->done = &done; 4770 + INIT_WORK_ONSTACK(&args->work, xfs_bmapi_allocate_worker); 4771 + queue_work(xfs_alloc_wq, &args->work); 4772 + wait_for_completion(&done); 4773 + return args->result; 4740 4774 } 4741 4775 4742 4776 STATIC int ··· 4963 4919 bma.conv = !!(flags & XFS_BMAPI_CONVERT); 4964 4920 bma.wasdel = wasdelay; 4965 4921 bma.offset = bno; 4922 + bma.flags = flags; 4966 4923 4967 4924 /* 4968 4925 * There's a 32/64 bit type mismatch between the ··· 4979 4934 4980 4935 ASSERT(len > 0); 4981 4936 ASSERT(bma.length > 0); 4982 - error = xfs_bmapi_allocate(&bma, flags); 4937 + error = xfs_bmapi_allocate(&bma); 4983 4938 if (error) 4984 4939 goto error0; 4985 4940 if (bma.blkno == NULLFSBLOCK)
+8 -1
fs/xfs/xfs_bmap.h
··· 77 77 * from written to unwritten, otherwise convert from unwritten to written. 78 78 */ 79 79 #define XFS_BMAPI_CONVERT 0x040 80 + #define XFS_BMAPI_STACK_SWITCH 0x080 80 81 81 82 #define XFS_BMAPI_FLAGS \ 82 83 { XFS_BMAPI_ENTIRE, "ENTIRE" }, \ ··· 86 85 { XFS_BMAPI_PREALLOC, "PREALLOC" }, \ 87 86 { XFS_BMAPI_IGSTATE, "IGSTATE" }, \ 88 87 { XFS_BMAPI_CONTIG, "CONTIG" }, \ 89 - { XFS_BMAPI_CONVERT, "CONVERT" } 88 + { XFS_BMAPI_CONVERT, "CONVERT" }, \ 89 + { XFS_BMAPI_STACK_SWITCH, "STACK_SWITCH" } 90 90 91 91 92 92 static inline int xfs_bmapi_aflag(int w) ··· 135 133 char userdata;/* set if is user data */ 136 134 char aeof; /* allocated space at eof */ 137 135 char conv; /* overwriting unwritten extents */ 136 + char stack_switch; 137 + int flags; 138 + struct completion *done; 139 + struct work_struct work; 140 + int result; 138 141 } xfs_bmalloca_t; 139 142 140 143 /*
+18
fs/xfs/xfs_buf_item.c
··· 526 526 } 527 527 xfs_buf_relse(bp); 528 528 } else if (freed && remove) { 529 + /* 530 + * There are currently two references to the buffer - the active 531 + * LRU reference and the buf log item. What we are about to do 532 + * here - simulate a failed IO completion - requires 3 533 + * references. 534 + * 535 + * The LRU reference is removed by the xfs_buf_stale() call. The 536 + * buf item reference is removed by the xfs_buf_iodone() 537 + * callback that is run by xfs_buf_do_callbacks() during ioend 538 + * processing (via the bp->b_iodone callback), and then finally 539 + * the ioend processing will drop the IO reference if the buffer 540 + * is marked XBF_ASYNC. 541 + * 542 + * Hence we need to take an additional reference here so that IO 543 + * completion processing doesn't free the buffer prematurely. 544 + */ 529 545 xfs_buf_lock(bp); 546 + xfs_buf_hold(bp); 547 + bp->b_flags |= XBF_ASYNC; 530 548 xfs_buf_ioerror(bp, EIO); 531 549 XFS_BUF_UNDONE(bp); 532 550 xfs_buf_stale(bp);
+19 -2
fs/xfs/xfs_fsops.c
··· 399 399 400 400 /* update secondary superblocks. */ 401 401 for (agno = 1; agno < nagcount; agno++) { 402 - error = xfs_trans_read_buf(mp, NULL, mp->m_ddev_targp, 402 + error = 0; 403 + /* 404 + * new secondary superblocks need to be zeroed, not read from 405 + * disk as the contents of the new area we are growing into is 406 + * completely unknown. 407 + */ 408 + if (agno < oagcount) { 409 + error = xfs_trans_read_buf(mp, NULL, mp->m_ddev_targp, 403 410 XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), 404 411 XFS_FSS_TO_BB(mp, 1), 0, &bp); 412 + } else { 413 + bp = xfs_trans_get_buf(NULL, mp->m_ddev_targp, 414 + XFS_AGB_TO_DADDR(mp, agno, XFS_SB_BLOCK(mp)), 415 + XFS_FSS_TO_BB(mp, 1), 0); 416 + if (bp) 417 + xfs_buf_zero(bp, 0, BBTOB(bp->b_length)); 418 + else 419 + error = ENOMEM; 420 + } 421 + 405 422 if (error) { 406 423 xfs_warn(mp, 407 424 "error %d reading secondary superblock for ag %d", ··· 440 423 break; /* no point in continuing */ 441 424 } 442 425 } 443 - return 0; 426 + return error; 444 427 445 428 error0: 446 429 xfs_trans_cancel(tp, XFS_TRANS_ABORT);
+1
fs/xfs/xfs_ialloc.c
··· 250 250 /* boundary */ 251 251 struct xfs_perag *pag; 252 252 253 + memset(&args, 0, sizeof(args)); 253 254 args.tp = tp; 254 255 args.mp = tp->t_mountp; 255 256
+2 -1
fs/xfs/xfs_inode.c
··· 1509 1509 * to mark all the active inodes on the buffer stale. 1510 1510 */ 1511 1511 bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno, 1512 - mp->m_bsize * blks_per_cluster, 0); 1512 + mp->m_bsize * blks_per_cluster, 1513 + XBF_UNMAPPED); 1513 1514 1514 1515 if (!bp) 1515 1516 return ENOMEM;
+1 -1
fs/xfs/xfs_ioctl.c
··· 70 70 int hsize; 71 71 xfs_handle_t handle; 72 72 struct inode *inode; 73 - struct fd f; 73 + struct fd f = {0}; 74 74 struct path path; 75 75 int error; 76 76 struct xfs_inode *ip;
+3 -1
fs/xfs/xfs_iomap.c
··· 584 584 * pointer that the caller gave to us. 585 585 */ 586 586 error = xfs_bmapi_write(tp, ip, map_start_fsb, 587 - count_fsb, 0, &first_block, 1, 587 + count_fsb, 588 + XFS_BMAPI_STACK_SWITCH, 589 + &first_block, 1, 588 590 imap, &nimaps, &free_list); 589 591 if (error) 590 592 goto trans_cancel;
+16 -3
fs/xfs/xfs_log.c
··· 2387 2387 2388 2388 2389 2389 /* 2390 - * update the last_sync_lsn before we drop the 2390 + * Completion of an iclog IO does not imply that 2391 + * a transaction has completed, as transactions 2392 + * can be large enough to span many iclogs. We 2393 + * cannot change the tail of the log half way 2394 + * through a transaction as this may be the only 2395 + * transaction in the log and moving the tail to 2396 + * point to the middle of it will prevent 2397 + * recovery from finding the start of the 2398 + * transaction. Hence we should only update the 2399 + * last_sync_lsn if this iclog contains 2400 + * transaction completion callbacks on it. 2401 + * 2402 + * We have to do this before we drop the 2391 2403 * icloglock to ensure we are the only one that 2392 2404 * can update it. 2393 2405 */ 2394 2406 ASSERT(XFS_LSN_CMP(atomic64_read(&log->l_last_sync_lsn), 2395 2407 be64_to_cpu(iclog->ic_header.h_lsn)) <= 0); 2396 - atomic64_set(&log->l_last_sync_lsn, 2397 - be64_to_cpu(iclog->ic_header.h_lsn)); 2408 + if (iclog->ic_callback) 2409 + atomic64_set(&log->l_last_sync_lsn, 2410 + be64_to_cpu(iclog->ic_header.h_lsn)); 2398 2411 2399 2412 } else 2400 2413 ioerrors++;
+1 -1
fs/xfs/xfs_log_recover.c
··· 3541 3541 * - order is important. 3542 3542 */ 3543 3543 error = xlog_bread_offset(log, 0, 3544 - bblks - split_bblks, hbp, 3544 + bblks - split_bblks, dbp, 3545 3545 offset + BBTOB(split_bblks)); 3546 3546 if (error) 3547 3547 goto bread_err2;
+2 -2
include/linux/clk-provider.h
··· 335 335 struct clk_hw *__clk_get_hw(struct clk *clk); 336 336 u8 __clk_get_num_parents(struct clk *clk); 337 337 struct clk *__clk_get_parent(struct clk *clk); 338 - inline int __clk_get_enable_count(struct clk *clk); 339 - inline int __clk_get_prepare_count(struct clk *clk); 338 + int __clk_get_enable_count(struct clk *clk); 339 + int __clk_get_prepare_count(struct clk *clk); 340 340 unsigned long __clk_get_rate(struct clk *clk); 341 341 unsigned long __clk_get_flags(struct clk *clk); 342 342 int __clk_is_enabled(struct clk *clk);
-4
include/linux/mm.h
··· 1684 1684 static inline bool page_is_guard(struct page *page) { return false; } 1685 1685 #endif /* CONFIG_DEBUG_PAGEALLOC */ 1686 1686 1687 - extern void reset_zone_present_pages(void); 1688 - extern void fixup_zone_present_pages(int nid, unsigned long start_pfn, 1689 - unsigned long end_pfn); 1690 - 1691 1687 #endif /* __KERNEL__ */ 1692 1688 #endif /* _LINUX_MM_H */
+3 -3
include/linux/mmc/dw_mmc.h
··· 137 137 138 138 dma_addr_t sg_dma; 139 139 void *sg_cpu; 140 - struct dw_mci_dma_ops *dma_ops; 140 + const struct dw_mci_dma_ops *dma_ops; 141 141 #ifdef CONFIG_MMC_DW_IDMAC 142 142 unsigned int ring_size; 143 143 #else ··· 162 162 u16 data_offset; 163 163 struct device *dev; 164 164 struct dw_mci_board *pdata; 165 - struct dw_mci_drv_data *drv_data; 165 + const struct dw_mci_drv_data *drv_data; 166 166 void *priv; 167 167 struct clk *biu_clk; 168 168 struct clk *ciu_clk; ··· 186 186 187 187 struct regulator *vmmc; /* Power regulator */ 188 188 unsigned long irq_flags; /* IRQ flags */ 189 - unsigned int irq; 189 + int irq; 190 190 }; 191 191 192 192 /* DMA ops for Internal/External DMAC interface */
+1
include/linux/mmc/sdhci.h
··· 91 91 unsigned int quirks2; /* More deviations from spec. */ 92 92 93 93 #define SDHCI_QUIRK2_HOST_OFF_CARD_ON (1<<0) 94 + #define SDHCI_QUIRK2_HOST_NO_CMD23 (1<<1) 94 95 95 96 int irq; /* Device IRQ */ 96 97 void __iomem *ioaddr; /* Mapped address */
+1 -1
include/linux/mmzone.h
··· 752 752 unsigned long size, 753 753 enum memmap_context context); 754 754 755 - extern void lruvec_init(struct lruvec *lruvec, struct zone *zone); 755 + extern void lruvec_init(struct lruvec *lruvec); 756 756 757 757 static inline struct zone *lruvec_zone(struct lruvec *lruvec) 758 758 {
+2
include/linux/of_address.h
··· 28 28 #endif 29 29 30 30 #else /* CONFIG_OF_ADDRESS */ 31 + #ifndef of_address_to_resource 31 32 static inline int of_address_to_resource(struct device_node *dev, int index, 32 33 struct resource *r) 33 34 { 34 35 return -EINVAL; 35 36 } 37 + #endif 36 38 static inline struct device_node *of_find_matching_node_by_address( 37 39 struct device_node *from, 38 40 const struct of_device_id *matches,
+31
include/linux/platform_data/omap_ocp2scp.h
··· 1 + /* 2 + * omap_ocp2scp.h -- ocp2scp header file 3 + * 4 + * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + * 10 + * Author: Kishon Vijay Abraham I <kishon@ti.com> 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + */ 18 + 19 + #ifndef __DRIVERS_OMAP_OCP2SCP_H 20 + #define __DRIVERS_OMAP_OCP2SCP_H 21 + 22 + struct omap_ocp2scp_dev { 23 + const char *drv_name; 24 + struct resource *res; 25 + }; 26 + 27 + struct omap_ocp2scp_platform_data { 28 + int dev_cnt; 29 + struct omap_ocp2scp_dev **devices; 30 + }; 31 + #endif /* __DRIVERS_OMAP_OCP2SCP_H */
+2
include/linux/rio.h
··· 275 275 * struct rio_net - RIO network info 276 276 * @node: Node in global list of RIO networks 277 277 * @devices: List of devices in this network 278 + * @switches: List of switches in this network 278 279 * @mports: List of master ports accessing this network 279 280 * @hport: Default port for accessing this network 280 281 * @id: RIO network ID 282 + * @destid_table: destID allocation table 281 283 */ 282 284 struct rio_net { 283 285 struct list_head node; /* node in list of networks */
-1
include/uapi/linux/eventpoll.h
··· 25 25 #define EPOLL_CTL_ADD 1 26 26 #define EPOLL_CTL_DEL 2 27 27 #define EPOLL_CTL_MOD 3 28 - #define EPOLL_CTL_DISABLE 4 29 28 30 29 /* 31 30 * Request the handling of system wakeup events so as to prevent system suspends
+9
include/uapi/linux/oom.h
··· 8 8 #define OOM_SCORE_ADJ_MIN (-1000) 9 9 #define OOM_SCORE_ADJ_MAX 1000 10 10 11 + /* 12 + * /proc/<pid>/oom_adj set to -17 protects from the oom killer for legacy 13 + * purposes. 14 + */ 15 + #define OOM_DISABLE (-17) 16 + /* inclusive */ 17 + #define OOM_ADJUST_MIN (-16) 18 + #define OOM_ADJUST_MAX 15 19 + 11 20 #endif /* _UAPI__INCLUDE_LINUX_OOM_H */
+32 -2
include/xen/hvm.h
··· 5 5 #include <xen/interface/hvm/params.h> 6 6 #include <asm/xen/hypercall.h> 7 7 8 + static const char *param_name(int op) 9 + { 10 + #define PARAM(x) [HVM_PARAM_##x] = #x 11 + static const char *const names[] = { 12 + PARAM(CALLBACK_IRQ), 13 + PARAM(STORE_PFN), 14 + PARAM(STORE_EVTCHN), 15 + PARAM(PAE_ENABLED), 16 + PARAM(IOREQ_PFN), 17 + PARAM(BUFIOREQ_PFN), 18 + PARAM(TIMER_MODE), 19 + PARAM(HPET_ENABLED), 20 + PARAM(IDENT_PT), 21 + PARAM(DM_DOMAIN), 22 + PARAM(ACPI_S_STATE), 23 + PARAM(VM86_TSS), 24 + PARAM(VPT_ALIGN), 25 + PARAM(CONSOLE_PFN), 26 + PARAM(CONSOLE_EVTCHN), 27 + }; 28 + #undef PARAM 29 + 30 + if (op >= ARRAY_SIZE(names)) 31 + return "unknown"; 32 + 33 + if (!names[op]) 34 + return "reserved"; 35 + 36 + return names[op]; 37 + } 8 38 static inline int hvm_get_parameter(int idx, uint64_t *value) 9 39 { 10 40 struct xen_hvm_param xhv; ··· 44 14 xhv.index = idx; 45 15 r = HYPERVISOR_hvm_op(HVMOP_get_param, &xhv); 46 16 if (r < 0) { 47 - printk(KERN_ERR "Cannot get hvm parameter %d: %d!\n", 48 - idx, r); 17 + printk(KERN_ERR "Cannot get hvm parameter %s (%d): %d!\n", 18 + param_name(idx), idx, r); 49 19 return r; 50 20 } 51 21 *value = xhv.value;
+22 -19
kernel/futex.c
··· 716 716 struct futex_pi_state **ps, 717 717 struct task_struct *task, int set_waiters) 718 718 { 719 - int lock_taken, ret, ownerdied = 0; 719 + int lock_taken, ret, force_take = 0; 720 720 u32 uval, newval, curval, vpid = task_pid_vnr(task); 721 721 722 722 retry: ··· 755 755 newval = curval | FUTEX_WAITERS; 756 756 757 757 /* 758 - * There are two cases, where a futex might have no owner (the 759 - * owner TID is 0): OWNER_DIED. We take over the futex in this 760 - * case. We also do an unconditional take over, when the owner 761 - * of the futex died. 762 - * 763 - * This is safe as we are protected by the hash bucket lock ! 758 + * Should we force take the futex? See below. 764 759 */ 765 - if (unlikely(ownerdied || !(curval & FUTEX_TID_MASK))) { 766 - /* Keep the OWNER_DIED bit */ 760 + if (unlikely(force_take)) { 761 + /* 762 + * Keep the OWNER_DIED and the WAITERS bit and set the 763 + * new TID value. 764 + */ 767 765 newval = (curval & ~FUTEX_TID_MASK) | vpid; 768 - ownerdied = 0; 766 + force_take = 0; 769 767 lock_taken = 1; 770 768 } 771 769 ··· 773 775 goto retry; 774 776 775 777 /* 776 - * We took the lock due to owner died take over. 778 + * We took the lock due to forced take over. 777 779 */ 778 780 if (unlikely(lock_taken)) 779 781 return 1; ··· 788 790 switch (ret) { 789 791 case -ESRCH: 790 792 /* 791 - * No owner found for this futex. Check if the 792 - * OWNER_DIED bit is set to figure out whether 793 - * this is a robust futex or not. 793 + * We failed to find an owner for this 794 + * futex. So we have no pi_state to block 795 + * on. This can happen in two cases: 796 + * 797 + * 1) The owner died 798 + * 2) A stale FUTEX_WAITERS bit 799 + * 800 + * Re-read the futex value. 794 801 */ 795 802 if (get_futex_value_locked(&curval, uaddr)) 796 803 return -EFAULT; 797 804 798 805 /* 799 - * We simply start over in case of a robust 800 - * futex. The code above will take the futex 801 - * and return happy. 
806 + * If the owner died or we have a stale 807 + * WAITERS bit the owner TID in the user space 808 + * futex is 0. 802 809 */ 803 - if (curval & FUTEX_OWNER_DIED) { 804 - ownerdied = 1; 810 + if (!(curval & FUTEX_TID_MASK)) { 811 + force_take = 1; 805 812 goto retry; 806 813 } 807 814 default:
+16 -11
kernel/module.c
··· 2293 2293 src = (void *)info->hdr + symsect->sh_offset; 2294 2294 nsrc = symsect->sh_size / sizeof(*src); 2295 2295 2296 + /* strtab always starts with a nul, so offset 0 is the empty string. */ 2297 + strtab_size = 1; 2298 + 2296 2299 /* Compute total space required for the core symbols' strtab. */ 2297 - for (ndst = i = strtab_size = 1; i < nsrc; ++i, ++src) 2298 - if (is_core_symbol(src, info->sechdrs, info->hdr->e_shnum)) { 2299 - strtab_size += strlen(&info->strtab[src->st_name]) + 1; 2300 + for (ndst = i = 0; i < nsrc; i++) { 2301 + if (i == 0 || 2302 + is_core_symbol(src+i, info->sechdrs, info->hdr->e_shnum)) { 2303 + strtab_size += strlen(&info->strtab[src[i].st_name])+1; 2300 2304 ndst++; 2301 2305 } 2306 + } 2302 2307 2303 2308 /* Append room for core symbols at end of core part. */ 2304 2309 info->symoffs = ALIGN(mod->core_size, symsect->sh_addralign ?: 1); ··· 2337 2332 mod->core_symtab = dst = mod->module_core + info->symoffs; 2338 2333 mod->core_strtab = s = mod->module_core + info->stroffs; 2339 2334 src = mod->symtab; 2340 - *dst = *src; 2341 2335 *s++ = 0; 2342 - for (ndst = i = 1; i < mod->num_symtab; ++i, ++src) { 2343 - if (!is_core_symbol(src, info->sechdrs, info->hdr->e_shnum)) 2344 - continue; 2345 - 2346 - dst[ndst] = *src; 2347 - dst[ndst++].st_name = s - mod->core_strtab; 2348 - s += strlcpy(s, &mod->strtab[src->st_name], KSYM_NAME_LEN) + 1; 2336 + for (ndst = i = 0; i < mod->num_symtab; i++) { 2337 + if (i == 0 || 2338 + is_core_symbol(src+i, info->sechdrs, info->hdr->e_shnum)) { 2339 + dst[ndst] = src[i]; 2340 + dst[ndst++].st_name = s - mod->core_strtab; 2341 + s += strlcpy(s, &mod->strtab[src[i].st_name], 2342 + KSYM_NAME_LEN) + 1; 2343 + } 2349 2344 } 2350 2345 mod->core_num_syms = ndst; 2351 2346 }
+1 -9
mm/bootmem.c
··· 198 198 int order = ilog2(BITS_PER_LONG); 199 199 200 200 __free_pages_bootmem(pfn_to_page(start), order); 201 - fixup_zone_present_pages(page_to_nid(pfn_to_page(start)), 202 - start, start + BITS_PER_LONG); 203 201 count += BITS_PER_LONG; 204 202 start += BITS_PER_LONG; 205 203 } else { ··· 208 210 if (vec & 1) { 209 211 page = pfn_to_page(start + off); 210 212 __free_pages_bootmem(page, 0); 211 - fixup_zone_present_pages( 212 - page_to_nid(page), 213 - start + off, start + off + 1); 214 213 count++; 215 214 } 216 215 vec >>= 1; ··· 221 226 pages = bdata->node_low_pfn - bdata->node_min_pfn; 222 227 pages = bootmem_bootmap_pages(pages); 223 228 count += pages; 224 - while (pages--) { 225 - fixup_zone_present_pages(page_to_nid(page), 226 - page_to_pfn(page), page_to_pfn(page) + 1); 229 + while (pages--) 227 230 __free_pages_bootmem(page++, 0); 228 - } 229 231 230 232 bdebug("nid=%td released=%lx\n", bdata - bootmem_node_data, count); 231 233
+1 -1
mm/highmem.c
··· 98 98 { 99 99 unsigned long addr = (unsigned long)vaddr; 100 100 101 - if (addr >= PKMAP_ADDR(0) && addr <= PKMAP_ADDR(LAST_PKMAP)) { 101 + if (addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP)) { 102 102 int i = (addr - PKMAP_ADDR(0)) >> PAGE_SHIFT; 103 103 return pte_page(pkmap_page_table[i]); 104 104 }
+50 -17
mm/memcontrol.c
··· 1055 1055 struct mem_cgroup *memcg) 1056 1056 { 1057 1057 struct mem_cgroup_per_zone *mz; 1058 + struct lruvec *lruvec; 1058 1059 1059 - if (mem_cgroup_disabled()) 1060 - return &zone->lruvec; 1060 + if (mem_cgroup_disabled()) { 1061 + lruvec = &zone->lruvec; 1062 + goto out; 1063 + } 1061 1064 1062 1065 mz = mem_cgroup_zoneinfo(memcg, zone_to_nid(zone), zone_idx(zone)); 1063 - return &mz->lruvec; 1066 + lruvec = &mz->lruvec; 1067 + out: 1068 + /* 1069 + * Since a node can be onlined after the mem_cgroup was created, 1070 + * we have to be prepared to initialize lruvec->zone here; 1071 + * and if offlined then reonlined, we need to reinitialize it. 1072 + */ 1073 + if (unlikely(lruvec->zone != zone)) 1074 + lruvec->zone = zone; 1075 + return lruvec; 1064 1076 } 1065 1077 1066 1078 /* ··· 1099 1087 struct mem_cgroup_per_zone *mz; 1100 1088 struct mem_cgroup *memcg; 1101 1089 struct page_cgroup *pc; 1090 + struct lruvec *lruvec; 1102 1091 1103 - if (mem_cgroup_disabled()) 1104 - return &zone->lruvec; 1092 + if (mem_cgroup_disabled()) { 1093 + lruvec = &zone->lruvec; 1094 + goto out; 1095 + } 1105 1096 1106 1097 pc = lookup_page_cgroup(page); 1107 1098 memcg = pc->mem_cgroup; ··· 1122 1107 pc->mem_cgroup = memcg = root_mem_cgroup; 1123 1108 1124 1109 mz = page_cgroup_zoneinfo(memcg, page); 1125 - return &mz->lruvec; 1110 + lruvec = &mz->lruvec; 1111 + out: 1112 + /* 1113 + * Since a node can be onlined after the mem_cgroup was created, 1114 + * we have to be prepared to initialize lruvec->zone here; 1115 + * and if offlined then reonlined, we need to reinitialize it. 
1116 + */ 1117 + if (unlikely(lruvec->zone != zone)) 1118 + lruvec->zone = zone; 1119 + return lruvec; 1126 1120 } 1127 1121 1128 1122 /** ··· 1476 1452 static u64 mem_cgroup_get_limit(struct mem_cgroup *memcg) 1477 1453 { 1478 1454 u64 limit; 1479 - u64 memsw; 1480 1455 1481 1456 limit = res_counter_read_u64(&memcg->res, RES_LIMIT); 1482 - limit += total_swap_pages << PAGE_SHIFT; 1483 1457 1484 - memsw = res_counter_read_u64(&memcg->memsw, RES_LIMIT); 1485 1458 /* 1486 - * If memsw is finite and limits the amount of swap space available 1487 - * to this memcg, return that limit. 1459 + * Do not consider swap space if we cannot swap due to swappiness 1488 1460 */ 1489 - return min(limit, memsw); 1461 + if (mem_cgroup_swappiness(memcg)) { 1462 + u64 memsw; 1463 + 1464 + limit += total_swap_pages << PAGE_SHIFT; 1465 + memsw = res_counter_read_u64(&memcg->memsw, RES_LIMIT); 1466 + 1467 + /* 1468 + * If memsw is finite and limits the amount of swap space 1469 + * available to this memcg, return that limit. 
1470 + */ 1471 + limit = min(limit, memsw); 1472 + } 1473 + 1474 + return limit; 1490 1475 } 1491 1476 1492 1477 void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask, ··· 3721 3688 static bool mem_cgroup_force_empty_list(struct mem_cgroup *memcg, 3722 3689 int node, int zid, enum lru_list lru) 3723 3690 { 3724 - struct mem_cgroup_per_zone *mz; 3691 + struct lruvec *lruvec; 3725 3692 unsigned long flags, loop; 3726 3693 struct list_head *list; 3727 3694 struct page *busy; 3728 3695 struct zone *zone; 3729 3696 3730 3697 zone = &NODE_DATA(node)->node_zones[zid]; 3731 - mz = mem_cgroup_zoneinfo(memcg, node, zid); 3732 - list = &mz->lruvec.lists[lru]; 3698 + lruvec = mem_cgroup_zone_lruvec(zone, memcg); 3699 + list = &lruvec->lists[lru]; 3733 3700 3734 - loop = mz->lru_size[lru]; 3701 + loop = mem_cgroup_get_lru_size(lruvec, lru); 3735 3702 /* give some margin against EBUSY etc...*/ 3736 3703 loop += 256; 3737 3704 busy = NULL; ··· 4769 4736 4770 4737 for (zone = 0; zone < MAX_NR_ZONES; zone++) { 4771 4738 mz = &pn->zoneinfo[zone]; 4772 - lruvec_init(&mz->lruvec, &NODE_DATA(node)->node_zones[zone]); 4739 + lruvec_init(&mz->lruvec); 4773 4740 mz->usage_in_excess = 0; 4774 4741 mz->on_tree = false; 4775 4742 mz->memcg = memcg;
+4 -6
mm/memory.c
··· 2527 2527 int ret = 0; 2528 2528 int page_mkwrite = 0; 2529 2529 struct page *dirty_page = NULL; 2530 - unsigned long mmun_start; /* For mmu_notifiers */ 2531 - unsigned long mmun_end; /* For mmu_notifiers */ 2532 - bool mmun_called = false; /* For mmu_notifiers */ 2530 + unsigned long mmun_start = 0; /* For mmu_notifiers */ 2531 + unsigned long mmun_end = 0; /* For mmu_notifiers */ 2533 2532 2534 2533 old_page = vm_normal_page(vma, address, orig_pte); 2535 2534 if (!old_page) { ··· 2707 2708 goto oom_free_new; 2708 2709 2709 2710 mmun_start = address & PAGE_MASK; 2710 - mmun_end = (address & PAGE_MASK) + PAGE_SIZE; 2711 - mmun_called = true; 2711 + mmun_end = mmun_start + PAGE_SIZE; 2712 2712 mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 2713 2713 2714 2714 /* ··· 2776 2778 page_cache_release(new_page); 2777 2779 unlock: 2778 2780 pte_unmap_unlock(page_table, ptl); 2779 - if (mmun_called) 2781 + if (mmun_end > mmun_start) 2780 2782 mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 2781 2783 if (old_page) { 2782 2784 /*
-7
mm/memory_hotplug.c
··· 106 106 void __ref put_page_bootmem(struct page *page) 107 107 { 108 108 unsigned long type; 109 - struct zone *zone; 110 109 111 110 type = (unsigned long) page->lru.next; 112 111 BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || ··· 116 117 set_page_private(page, 0); 117 118 INIT_LIST_HEAD(&page->lru); 118 119 __free_pages_bootmem(page, 0); 119 - 120 - zone = page_zone(page); 121 - zone_span_writelock(zone); 122 - zone->present_pages++; 123 - zone_span_writeunlock(zone); 124 - totalram_pages++; 125 120 } 126 121 127 122 }
+2
mm/mmap.c
··· 334 334 struct vm_area_struct *vma = mm->mmap; 335 335 while (vma) { 336 336 struct anon_vma_chain *avc; 337 + vma_lock_anon_vma(vma); 337 338 list_for_each_entry(avc, &vma->anon_vma_chain, same_vma) 338 339 anon_vma_interval_tree_verify(avc); 340 + vma_unlock_anon_vma(vma); 339 341 vma = vma->vm_next; 340 342 i++; 341 343 }
+1 -5
mm/mmzone.c
··· 87 87 } 88 88 #endif /* CONFIG_ARCH_HAS_HOLES_MEMORYMODEL */ 89 89 90 - void lruvec_init(struct lruvec *lruvec, struct zone *zone) 90 + void lruvec_init(struct lruvec *lruvec) 91 91 { 92 92 enum lru_list lru; 93 93 ··· 95 95 96 96 for_each_lru(lru) 97 97 INIT_LIST_HEAD(&lruvec->lists[lru]); 98 - 99 - #ifdef CONFIG_MEMCG 100 - lruvec->zone = zone; 101 - #endif 102 98 }
-3
mm/nobootmem.c
··· 116 116 return 0; 117 117 118 118 __free_pages_memory(start_pfn, end_pfn); 119 - fixup_zone_present_pages(pfn_to_nid(start >> PAGE_SHIFT), 120 - start_pfn, end_pfn); 121 119 122 120 return end_pfn - start_pfn; 123 121 } ··· 126 128 phys_addr_t start, end, size; 127 129 u64 i; 128 130 129 - reset_zone_present_pages(); 130 131 for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) 131 132 count += __free_memory_core(start, end); 132 133
+1 -35
mm/page_alloc.c
··· 4505 4505 zone->zone_pgdat = pgdat; 4506 4506 4507 4507 zone_pcp_init(zone); 4508 - lruvec_init(&zone->lruvec, zone); 4508 + lruvec_init(&zone->lruvec); 4509 4509 if (!size) 4510 4510 continue; 4511 4511 ··· 6097 6097 page->mapping, page->index); 6098 6098 dump_page_flags(page->flags); 6099 6099 mem_cgroup_print_bad_page(page); 6100 - } 6101 - 6102 - /* reset zone->present_pages */ 6103 - void reset_zone_present_pages(void) 6104 - { 6105 - struct zone *z; 6106 - int i, nid; 6107 - 6108 - for_each_node_state(nid, N_HIGH_MEMORY) { 6109 - for (i = 0; i < MAX_NR_ZONES; i++) { 6110 - z = NODE_DATA(nid)->node_zones + i; 6111 - z->present_pages = 0; 6112 - } 6113 - } 6114 - } 6115 - 6116 - /* calculate zone's present pages in buddy system */ 6117 - void fixup_zone_present_pages(int nid, unsigned long start_pfn, 6118 - unsigned long end_pfn) 6119 - { 6120 - struct zone *z; 6121 - unsigned long zone_start_pfn, zone_end_pfn; 6122 - int i; 6123 - 6124 - for (i = 0; i < MAX_NR_ZONES; i++) { 6125 - z = NODE_DATA(nid)->node_zones + i; 6126 - zone_start_pfn = z->zone_start_pfn; 6127 - zone_end_pfn = zone_start_pfn + z->spanned_pages; 6128 - 6129 - /* if the two regions intersect */ 6130 - if (!(zone_start_pfn >= end_pfn || zone_end_pfn <= start_pfn)) 6131 - z->present_pages += min(end_pfn, zone_end_pfn) - 6132 - max(start_pfn, zone_start_pfn); 6133 - } 6134 6100 }
+15 -3
mm/shmem.c
··· 643 643 kfree(info->symlink); 644 644 645 645 simple_xattrs_free(&info->xattrs); 646 - BUG_ON(inode->i_blocks); 646 + WARN_ON(inode->i_blocks); 647 647 shmem_free_inode(inode->i_sb); 648 648 clear_inode(inode); 649 649 } ··· 1145 1145 if (!error) { 1146 1146 error = shmem_add_to_page_cache(page, mapping, index, 1147 1147 gfp, swp_to_radix_entry(swap)); 1148 - /* We already confirmed swap, and make no allocation */ 1149 - VM_BUG_ON(error); 1148 + /* 1149 + * We already confirmed swap under page lock, and make 1150 + * no memory allocation here, so usually no possibility 1151 + * of error; but free_swap_and_cache() only trylocks a 1152 + * page, so it is just possible that the entry has been 1153 + * truncated or holepunched since swap was confirmed. 1154 + * shmem_undo_range() will have done some of the 1155 + * unaccounting, now delete_from_swap_cache() will do 1156 + * the rest (including mem_cgroup_uncharge_swapcache). 1157 + * Reset swap.val? No, leave it so "failed" goes back to 1158 + * "repeat": reading a hole and writing should succeed. 1159 + */ 1160 + if (error) 1161 + delete_from_swap_cache(page); 1150 1162 } 1151 1163 if (error) 1152 1164 goto failed;
+2 -2
mm/swapfile.c
··· 1494 1494 BUG_ON(!current->mm); 1495 1495 1496 1496 pathname = getname(specialfile); 1497 - err = PTR_ERR(pathname); 1498 1497 if (IS_ERR(pathname)) 1499 - goto out; 1498 + return PTR_ERR(pathname); 1500 1499 1501 1500 victim = file_open_name(pathname, O_RDWR|O_LARGEFILE, 0); 1502 1501 err = PTR_ERR(victim); ··· 1607 1608 out_dput: 1608 1609 filp_close(victim, NULL); 1609 1610 out: 1611 + putname(pathname); 1610 1612 return err; 1611 1613 } 1612 1614
+2 -25
mm/vmscan.c
··· 1760 1760 return false; 1761 1761 } 1762 1762 1763 - #ifdef CONFIG_COMPACTION 1764 - /* 1765 - * If compaction is deferred for sc->order then scale the number of pages 1766 - * reclaimed based on the number of consecutive allocation failures 1767 - */ 1768 - static unsigned long scale_for_compaction(unsigned long pages_for_compaction, 1769 - struct lruvec *lruvec, struct scan_control *sc) 1770 - { 1771 - struct zone *zone = lruvec_zone(lruvec); 1772 - 1773 - if (zone->compact_order_failed <= sc->order) 1774 - pages_for_compaction <<= zone->compact_defer_shift; 1775 - return pages_for_compaction; 1776 - } 1777 - #else 1778 - static unsigned long scale_for_compaction(unsigned long pages_for_compaction, 1779 - struct lruvec *lruvec, struct scan_control *sc) 1780 - { 1781 - return pages_for_compaction; 1782 - } 1783 - #endif 1784 - 1785 1763 /* 1786 1764 * Reclaim/compaction is used for high-order allocation requests. It reclaims 1787 1765 * order-0 pages before compacting the zone. should_continue_reclaim() returns ··· 1807 1829 * inactive lists are large enough, continue reclaiming 1808 1830 */ 1809 1831 pages_for_compaction = (2UL << sc->order); 1810 - 1811 - pages_for_compaction = scale_for_compaction(pages_for_compaction, 1812 - lruvec, sc); 1813 1832 inactive_lru_pages = get_lru_size(lruvec, LRU_INACTIVE_FILE); 1814 1833 if (nr_swap_pages > 0) 1815 1834 inactive_lru_pages += get_lru_size(lruvec, LRU_INACTIVE_ANON); ··· 2992 3017 &balanced_classzone_idx); 2993 3018 } 2994 3019 } 3020 + 3021 + current->reclaim_state = NULL; 2995 3022 return 0; 2996 3023 } 2997 3024
+6 -6
net/batman-adv/soft-interface.c
··· 347 347 
 348 348 soft_iface->last_rx = jiffies; 
 349 349 
 350 + /* Let the bridge loop avoidance check the packet. If it will 
 351 + * not handle it, we can safely push it up. 
 352 + */ 
 353 + if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) 
 354 + goto out; 
 355 + 
 350 356 if (orig_node) 
 351 357 batadv_tt_add_temporary_global_entry(bat_priv, orig_node, 
 352 358 ethhdr->h_source); 
 353 359 
 354 360 if (batadv_is_ap_isolated(bat_priv, ethhdr->h_source, ethhdr->h_dest)) 
 355 361 goto dropped; 
 356 - 
 357 - /* Let the bridge loop avoidance check the packet. If it will 
 358 - * not handle it, we can safely push it up. 
 359 - */ 
 360 - if (batadv_bla_rx(bat_priv, skb, vid, is_bcast)) 
 361 - goto out; 
 362 362 
 363 363 netif_rx(skb); 
 364 364 goto out;
+14 -1
net/batman-adv/translation-table.c
··· 861 861 */ 862 862 common->flags &= ~BATADV_TT_CLIENT_TEMP; 863 863 864 + /* the change can carry possible "attribute" flags like the 865 + * TT_CLIENT_WIFI, therefore they have to be copied in the 866 + * client entry 867 + */ 868 + tt_global_entry->common.flags |= flags; 869 + 864 870 /* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only 865 871 * one originator left in the list and we previously received a 866 872 * delete + roaming change for this originator. ··· 1580 1574 1581 1575 memcpy(tt_change->addr, tt_common_entry->addr, 1582 1576 ETH_ALEN); 1583 - tt_change->flags = BATADV_NO_FLAGS; 1577 + tt_change->flags = tt_common_entry->flags; 1584 1578 1585 1579 tt_count++; 1586 1580 tt_change++; ··· 2550 2544 const unsigned char *addr) 2551 2545 { 2552 2546 bool ret = false; 2547 + 2548 + /* if the originator is a backbone node (meaning it belongs to the same 2549 + * LAN of this node) the temporary client must not be added because to 2550 + * reach such destination the node must use the LAN instead of the mesh 2551 + */ 2552 + if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig)) 2553 + goto out; 2553 2554 2554 2555 if (!batadv_tt_global_add(bat_priv, orig_node, addr, 2555 2556 BATADV_TT_CLIENT_TEMP,
+2 -2
net/bluetooth/hci_core.c
··· 1754 1754 if (hdev->dev_type != HCI_AMP) 1755 1755 set_bit(HCI_AUTO_OFF, &hdev->dev_flags); 1756 1756 1757 - schedule_work(&hdev->power_on); 1758 - 1759 1757 hci_notify(hdev, HCI_DEV_REG); 1760 1758 hci_dev_hold(hdev); 1759 + 1760 + schedule_work(&hdev->power_on); 1761 1761 1762 1762 return id; 1763 1763
+7 -5
net/bluetooth/mgmt.c
··· 326 326 struct hci_dev *d; 327 327 size_t rp_len; 328 328 u16 count; 329 - int i, err; 329 + int err; 330 330 331 331 BT_DBG("sock %p", sk); 332 332 ··· 347 347 return -ENOMEM; 348 348 } 349 349 350 - rp->num_controllers = cpu_to_le16(count); 351 - 352 - i = 0; 350 + count = 0; 353 351 list_for_each_entry(d, &hci_dev_list, list) { 354 352 if (test_bit(HCI_SETUP, &d->dev_flags)) 355 353 continue; ··· 355 357 if (!mgmt_valid_hdev(d)) 356 358 continue; 357 359 358 - rp->index[i++] = cpu_to_le16(d->id); 360 + rp->index[count++] = cpu_to_le16(d->id); 359 361 BT_DBG("Added hci%u", d->id); 360 362 } 363 + 364 + rp->num_controllers = cpu_to_le16(count); 365 + rp_len = sizeof(*rp) + (2 * count); 361 366 362 367 read_unlock(&hci_dev_list_lock); 363 368 ··· 1367 1366 continue; 1368 1367 1369 1368 list_del(&match->list); 1369 + kfree(match); 1370 1370 found++; 1371 1371 } 1372 1372
+1 -1
net/bluetooth/smp.c
··· 267 267 268 268 clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->hcon->flags); 269 269 mgmt_auth_failed(conn->hcon->hdev, conn->dst, hcon->type, 270 - hcon->dst_type, reason); 270 + hcon->dst_type, HCI_ERROR_AUTH_FAILURE); 271 271 272 272 cancel_delayed_work_sync(&conn->security_timer); 273 273
+3 -1
net/core/dev.c
··· 2895 2895 if (unlikely(tcpu != next_cpu) && 2896 2896 (tcpu == RPS_NO_CPU || !cpu_online(tcpu) || 2897 2897 ((int)(per_cpu(softnet_data, tcpu).input_queue_head - 2898 - rflow->last_qtail)) >= 0)) 2898 + rflow->last_qtail)) >= 0)) { 2899 + tcpu = next_cpu; 2899 2900 rflow = set_rps_cpu(dev, skb, rflow, next_cpu); 2901 + } 2900 2902 2901 2903 if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) { 2902 2904 *rflowp = rflow;
+2 -1
net/core/dev_addr_lists.c
··· 319 319 */ 320 320 ha = list_first_entry(&dev->dev_addrs.list, 321 321 struct netdev_hw_addr, list); 322 - if (ha->addr == dev->dev_addr && ha->refcount == 1) 322 + if (!memcmp(ha->addr, addr, dev->addr_len) && 323 + ha->type == addr_type && ha->refcount == 1) 323 324 return -ENOENT; 324 325 325 326 err = __hw_addr_del(&dev->dev_addrs, addr, dev->addr_len,
+22 -13
net/ipv4/ip_sockglue.c
··· 457 457 struct inet_sock *inet = inet_sk(sk); 458 458 int val = 0, err; 459 459 460 - if (((1<<optname) & ((1<<IP_PKTINFO) | (1<<IP_RECVTTL) | 461 - (1<<IP_RECVOPTS) | (1<<IP_RECVTOS) | 462 - (1<<IP_RETOPTS) | (1<<IP_TOS) | 463 - (1<<IP_TTL) | (1<<IP_HDRINCL) | 464 - (1<<IP_MTU_DISCOVER) | (1<<IP_RECVERR) | 465 - (1<<IP_ROUTER_ALERT) | (1<<IP_FREEBIND) | 466 - (1<<IP_PASSSEC) | (1<<IP_TRANSPARENT) | 467 - (1<<IP_MINTTL) | (1<<IP_NODEFRAG))) || 468 - optname == IP_UNICAST_IF || 469 - optname == IP_MULTICAST_TTL || 470 - optname == IP_MULTICAST_ALL || 471 - optname == IP_MULTICAST_LOOP || 472 - optname == IP_RECVORIGDSTADDR) { 460 + switch (optname) { 461 + case IP_PKTINFO: 462 + case IP_RECVTTL: 463 + case IP_RECVOPTS: 464 + case IP_RECVTOS: 465 + case IP_RETOPTS: 466 + case IP_TOS: 467 + case IP_TTL: 468 + case IP_HDRINCL: 469 + case IP_MTU_DISCOVER: 470 + case IP_RECVERR: 471 + case IP_ROUTER_ALERT: 472 + case IP_FREEBIND: 473 + case IP_PASSSEC: 474 + case IP_TRANSPARENT: 475 + case IP_MINTTL: 476 + case IP_NODEFRAG: 477 + case IP_UNICAST_IF: 478 + case IP_MULTICAST_TTL: 479 + case IP_MULTICAST_ALL: 480 + case IP_MULTICAST_LOOP: 481 + case IP_RECVORIGDSTADDR: 473 482 if (optlen >= sizeof(int)) { 474 483 if (get_user(val, (int __user *) optval)) 475 484 return -EFAULT;
+5
net/ipv4/ip_vti.c
··· 324 324 if (tunnel != NULL) { 325 325 struct pcpu_tstats *tstats; 326 326 327 + if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) 328 + return -1; 329 + 327 330 tstats = this_cpu_ptr(tunnel->dev->tstats); 328 331 u64_stats_update_begin(&tstats->syncp); 329 332 tstats->rx_packets++; 330 333 tstats->rx_bytes += skb->len; 331 334 u64_stats_update_end(&tstats->syncp); 332 335 336 + skb->mark = 0; 337 + secpath_reset(skb); 333 338 skb->dev = tunnel->dev; 334 339 return 1; 335 340 }
+2 -2
net/ipv4/tcp.c
··· 1213 1213 wait_for_sndbuf: 1214 1214 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 1215 1215 wait_for_memory: 1216 - if (copied && likely(!tp->repair)) 1216 + if (copied) 1217 1217 tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH); 1218 1218 1219 1219 if ((err = sk_stream_wait_memory(sk, &timeo)) != 0) ··· 1224 1224 } 1225 1225 1226 1226 out: 1227 - if (copied && likely(!tp->repair)) 1227 + if (copied) 1228 1228 tcp_push(sk, flags, mss_now, tp->nonagle); 1229 1229 release_sock(sk); 1230 1230 return copied + copied_syn;
+10 -5
net/ipv4/tcp_input.c
··· 5320 5320 goto discard; 5321 5321 } 5322 5322 5323 - /* ts_recent update must be made after we are sure that the packet 5324 - * is in window. 5325 - */ 5326 - tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 5327 - 5328 5323 /* step 3: check security and precedence [ignored] */ 5329 5324 5330 5325 /* step 4: Check for a SYN ··· 5553 5558 step5: 5554 5559 if (th->ack && tcp_ack(sk, skb, FLAG_SLOWPATH) < 0) 5555 5560 goto discard; 5561 + 5562 + /* ts_recent update must be made after we are sure that the packet 5563 + * is in window. 5564 + */ 5565 + tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 5556 5566 5557 5567 tcp_rcv_rtt_measure_ts(sk, skb); 5558 5568 ··· 6136 6136 } 6137 6137 } else 6138 6138 goto discard; 6139 + 6140 + /* ts_recent update must be made after we are sure that the packet 6141 + * is in window. 6142 + */ 6143 + tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq); 6139 6144 6140 6145 /* step 6: check the URG bit */ 6141 6146 tcp_urg(sk, skb, th);
+9 -3
net/ipv4/tcp_metrics.c
··· 1 1 #include <linux/rcupdate.h> 2 2 #include <linux/spinlock.h> 3 3 #include <linux/jiffies.h> 4 - #include <linux/bootmem.h> 5 4 #include <linux/module.h> 6 5 #include <linux/cache.h> 7 6 #include <linux/slab.h> ··· 8 9 #include <linux/tcp.h> 9 10 #include <linux/hash.h> 10 11 #include <linux/tcp_metrics.h> 12 + #include <linux/vmalloc.h> 11 13 12 14 #include <net/inet_connection_sock.h> 13 15 #include <net/net_namespace.h> ··· 1034 1034 net->ipv4.tcp_metrics_hash_log = order_base_2(slots); 1035 1035 size = sizeof(struct tcpm_hash_bucket) << net->ipv4.tcp_metrics_hash_log; 1036 1036 1037 - net->ipv4.tcp_metrics_hash = kzalloc(size, GFP_KERNEL); 1037 + net->ipv4.tcp_metrics_hash = kzalloc(size, GFP_KERNEL | __GFP_NOWARN); 1038 + if (!net->ipv4.tcp_metrics_hash) 1039 + net->ipv4.tcp_metrics_hash = vzalloc(size); 1040 + 1038 1041 if (!net->ipv4.tcp_metrics_hash) 1039 1042 return -ENOMEM; 1040 1043 ··· 1058 1055 tm = next; 1059 1056 } 1060 1057 } 1061 - kfree(net->ipv4.tcp_metrics_hash); 1058 + if (is_vmalloc_addr(net->ipv4.tcp_metrics_hash)) 1059 + vfree(net->ipv4.tcp_metrics_hash); 1060 + else 1061 + kfree(net->ipv4.tcp_metrics_hash); 1062 1062 } 1063 1063 1064 1064 static __net_initdata struct pernet_operations tcp_net_metrics_ops = {
+4
net/ipv4/tcp_output.c
··· 1986 1986 tso_segs = tcp_init_tso_segs(sk, skb, mss_now); 1987 1987 BUG_ON(!tso_segs); 1988 1988 1989 + if (unlikely(tp->repair) && tp->repair_queue == TCP_SEND_QUEUE) 1990 + goto repair; /* Skip network transmission */ 1991 + 1989 1992 cwnd_quota = tcp_cwnd_test(tp, skb); 1990 1993 if (!cwnd_quota) 1991 1994 break; ··· 2029 2026 if (unlikely(tcp_transmit_skb(sk, skb, 1, gfp))) 2030 2027 break; 2031 2028 2029 + repair: 2032 2030 /* Advance the send_head. This one is sent out. 2033 2031 * This call will increment packets_out. 2034 2032 */
+1
net/ipv6/ipv6_sockglue.c
··· 827 827 if (val < 0 || val > 255) 828 828 goto e_inval; 829 829 np->min_hopcount = val; 830 + retv = 0; 830 831 break; 831 832 case IPV6_DONTFRAG: 832 833 np->dontfrag = valbool;
+3
net/mac80211/cfg.c
··· 2613 2613 else 2614 2614 local->probe_req_reg--; 2615 2615 2616 + if (!local->open_count) 2617 + break; 2618 + 2616 2619 ieee80211_queue_work(&local->hw, &local->reconfig_filter); 2617 2620 break; 2618 2621 default:
+2
net/mac80211/ieee80211_i.h
··· 1377 1377 struct net_device *dev); 1378 1378 netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb, 1379 1379 struct net_device *dev); 1380 + void ieee80211_purge_tx_queue(struct ieee80211_hw *hw, 1381 + struct sk_buff_head *skbs); 1380 1382 1381 1383 /* HT */ 1382 1384 void ieee80211_apply_htcap_overrides(struct ieee80211_sub_if_data *sdata,
+4 -2
net/mac80211/main.c
··· 920 920 local->hw.wiphy->cipher_suites, 921 921 sizeof(u32) * local->hw.wiphy->n_cipher_suites, 922 922 GFP_KERNEL); 923 - if (!suites) 924 - return -ENOMEM; 923 + if (!suites) { 924 + result = -ENOMEM; 925 + goto fail_wiphy_register; 926 + } 925 927 for (r = 0; r < local->hw.wiphy->n_cipher_suites; r++) { 926 928 u32 suite = local->hw.wiphy->cipher_suites[r]; 927 929 if (suite == WLAN_CIPHER_SUITE_WEP40 ||
+1 -1
net/mac80211/scan.c
··· 934 934 struct cfg80211_sched_scan_request *req) 935 935 { 936 936 struct ieee80211_local *local = sdata->local; 937 - struct ieee80211_sched_scan_ies sched_scan_ies; 937 + struct ieee80211_sched_scan_ies sched_scan_ies = {}; 938 938 int ret, i; 939 939 940 940 mutex_lock(&local->mtx);
+8 -3
net/mac80211/sta_info.c
··· 122 122 123 123 for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { 124 124 local->total_ps_buffered -= skb_queue_len(&sta->ps_tx_buf[ac]); 125 - __skb_queue_purge(&sta->ps_tx_buf[ac]); 126 - __skb_queue_purge(&sta->tx_filtered[ac]); 125 + ieee80211_purge_tx_queue(&local->hw, &sta->ps_tx_buf[ac]); 126 + ieee80211_purge_tx_queue(&local->hw, &sta->tx_filtered[ac]); 127 127 } 128 128 129 129 #ifdef CONFIG_MAC80211_MESH ··· 146 146 tid_tx = rcu_dereference_raw(sta->ampdu_mlme.tid_tx[i]); 147 147 if (!tid_tx) 148 148 continue; 149 - __skb_queue_purge(&tid_tx->pending); 149 + ieee80211_purge_tx_queue(&local->hw, &tid_tx->pending); 150 150 kfree(tid_tx); 151 151 } 152 152 ··· 982 982 struct ieee80211_local *local = sdata->local; 983 983 struct sk_buff_head pending; 984 984 int filtered = 0, buffered = 0, ac; 985 + unsigned long flags; 985 986 986 987 clear_sta_flag(sta, WLAN_STA_SP); 987 988 ··· 998 997 for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) { 999 998 int count = skb_queue_len(&pending), tmp; 1000 999 1000 + spin_lock_irqsave(&sta->tx_filtered[ac].lock, flags); 1001 1001 skb_queue_splice_tail_init(&sta->tx_filtered[ac], &pending); 1002 + spin_unlock_irqrestore(&sta->tx_filtered[ac].lock, flags); 1002 1003 tmp = skb_queue_len(&pending); 1003 1004 filtered += tmp - count; 1004 1005 count = tmp; 1005 1006 1007 + spin_lock_irqsave(&sta->ps_tx_buf[ac].lock, flags); 1006 1008 skb_queue_splice_tail_init(&sta->ps_tx_buf[ac], &pending); 1009 + spin_unlock_irqrestore(&sta->ps_tx_buf[ac].lock, flags); 1007 1010 tmp = skb_queue_len(&pending); 1008 1011 buffered += tmp - count; 1009 1012 }
+9
net/mac80211/status.c
··· 669 669 dev_kfree_skb_any(skb); 670 670 } 671 671 EXPORT_SYMBOL(ieee80211_free_txskb); 672 + 673 + void ieee80211_purge_tx_queue(struct ieee80211_hw *hw, 674 + struct sk_buff_head *skbs) 675 + { 676 + struct sk_buff *skb; 677 + 678 + while ((skb = __skb_dequeue(skbs))) 679 + ieee80211_free_txskb(hw, skb); 680 + }
+6 -3
net/mac80211/tx.c
··· 1361 1361 if (tx->skb) 1362 1362 ieee80211_free_txskb(&tx->local->hw, tx->skb); 1363 1363 else 1364 - __skb_queue_purge(&tx->skbs); 1364 + ieee80211_purge_tx_queue(&tx->local->hw, &tx->skbs); 1365 1365 return -1; 1366 1366 } else if (unlikely(res == TX_QUEUED)) { 1367 1367 I802_DEBUG_INC(tx->local->tx_handlers_queued); ··· 2160 2160 */ 2161 2161 void ieee80211_clear_tx_pending(struct ieee80211_local *local) 2162 2162 { 2163 + struct sk_buff *skb; 2163 2164 int i; 2164 2165 2165 - for (i = 0; i < local->hw.queues; i++) 2166 - skb_queue_purge(&local->pending[i]); 2166 + for (i = 0; i < local->hw.queues; i++) { 2167 + while ((skb = skb_dequeue(&local->pending[i])) != NULL) 2168 + ieee80211_free_txskb(&local->hw, skb); 2169 + } 2167 2170 } 2168 2171 2169 2172 /*
+2
net/mac80211/util.c
··· 1534 1534 list_for_each_entry(sdata, &local->interfaces, list) { 1535 1535 if (sdata->vif.type != NL80211_IFTYPE_STATION) 1536 1536 continue; 1537 + if (!sdata->u.mgd.associated) 1538 + continue; 1537 1539 1538 1540 ieee80211_send_nullfunc(local, sdata, 0); 1539 1541 }
+4 -4
net/sctp/proc.c
··· 102 102 .open = sctp_snmp_seq_open, 103 103 .read = seq_read, 104 104 .llseek = seq_lseek, 105 - .release = single_release, 105 + .release = single_release_net, 106 106 }; 107 107 108 108 /* Set up the proc fs entry for 'snmp' object. */ ··· 251 251 .open = sctp_eps_seq_open, 252 252 .read = seq_read, 253 253 .llseek = seq_lseek, 254 - .release = seq_release, 254 + .release = seq_release_net, 255 255 }; 256 256 257 257 /* Set up the proc fs entry for 'eps' object. */ ··· 372 372 .open = sctp_assocs_seq_open, 373 373 .read = seq_read, 374 374 .llseek = seq_lseek, 375 - .release = seq_release, 375 + .release = seq_release_net, 376 376 }; 377 377 378 378 /* Set up the proc fs entry for 'assocs' object. */ ··· 517 517 .open = sctp_remaddr_seq_open, 518 518 .read = seq_read, 519 519 .llseek = seq_lseek, 520 - .release = seq_release, 520 + .release = seq_release_net, 521 521 }; 522 522 523 523 int __net_init sctp_remaddr_proc_init(struct net *net)
+1 -1
net/sunrpc/backchannel_rqst.c
··· 172 172 xprt_free_allocation(req); 173 173 174 174 dprintk("RPC: setup backchannel transport failed\n"); 175 - return -1; 175 + return -ENOMEM; 176 176 } 177 177 EXPORT_SYMBOL_GPL(xprt_setup_backchannel); 178 178
+2 -3
net/wireless/reg.c
··· 141 141 .reg_rules = { 142 142 /* IEEE 802.11b/g, channels 1..11 */ 143 143 REG_RULE(2412-10, 2462+10, 40, 6, 20, 0), 144 - /* IEEE 802.11b/g, channels 12..13. No HT40 145 - * channel fits here. */ 146 - REG_RULE(2467-10, 2472+10, 20, 6, 20, 144 + /* IEEE 802.11b/g, channels 12..13. */ 145 + REG_RULE(2467-10, 2472+10, 40, 6, 20, 147 146 NL80211_RRF_PASSIVE_SCAN | 148 147 NL80211_RRF_NO_IBSS), 149 148 /* IEEE 802.11 channel 14 - Only JP enables
+2 -1
scripts/Makefile.modinst
··· 16 16 __modinst: $(modules) 17 17 @: 18 18 19 + # Don't stop modules_install if we can't sign external modules. 19 20 quiet_cmd_modules_install = INSTALL $@ 20 - cmd_modules_install = mkdir -p $(2); cp $@ $(2) ; $(mod_strip_cmd) $(2)/$(notdir $@) ; $(mod_sign_cmd) $(2)/$(notdir $@) 21 + cmd_modules_install = mkdir -p $(2); cp $@ $(2) ; $(mod_strip_cmd) $(2)/$(notdir $@) ; $(mod_sign_cmd) $(2)/$(notdir $@) $(patsubst %,|| true,$(KBUILD_EXTMOD)) 21 22 22 23 # Modules built outside the kernel source tree go into extra by default 23 24 INSTALL_MOD_DIR ?= extra
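The `$(patsubst %,|| true,$(KBUILD_EXTMOD))` suffix in the hunk above expands to `|| true` only when `KBUILD_EXTMOD` is non-empty, i.e. only for external-module installs, so a failed `mod_sign_cmd` aborts in-tree installs but is tolerated for out-of-tree modules. A small shell sketch of the resulting behavior (the `expand` helper is illustrative, not part of the patch):

```shell
# Emulate $(patsubst %,|| true,$(KBUILD_EXTMOD)): patsubst rewrites each
# word of its input, so a non-empty KBUILD_EXTMOD yields "|| true" while
# an empty one yields nothing at all.
expand() {
    if [ -n "$1" ]; then
        printf '%s' '|| true'
    fi
}

test -z "$(expand '')"                            # in-tree: signing errors fatal
test "$(expand /lib/modules/extmod)" = '|| true'  # external: errors tolerated
echo ok
```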
+4 -2
scripts/checkpatch.pl
··· 1890 1890 } 1891 1891 1892 1892 if ($realfile =~ m@^(drivers/net/|net/)@ && 1893 - $rawline !~ m@^\+[ \t]*(\/\*|\*\/)@ && 1894 - $rawline =~ m@^\+[ \t]*.+\*\/[ \t]*$@) { 1893 + $rawline !~ m@^\+[ \t]*\*/[ \t]*$@ && #trailing */ 1894 + $rawline !~ m@^\+.*/\*.*\*/[ \t]*$@ && #inline /*...*/ 1895 + $rawline !~ m@^\+.*\*{2,}/[ \t]*$@ && #trailing **/ 1896 + $rawline =~ m@^\+[ \t]*.+\*\/[ \t]*$@) { #non blank */ 1895 1897 WARN("NETWORKING_BLOCK_COMMENT_STYLE", 1896 1898 "networking block comments put the trailing */ on a separate line\n" . $herecurr); 1897 1899 }
+2 -3
scripts/kconfig/expr.h
··· 12 12 13 13 #include <assert.h> 14 14 #include <stdio.h> 15 - #include <sys/queue.h> 15 + #include "list.h" 16 16 #ifndef __cplusplus 17 17 #include <stdbool.h> 18 18 #endif ··· 175 175 #define MENU_ROOT 0x0002 176 176 177 177 struct jump_key { 178 - CIRCLEQ_ENTRY(jump_key) entries; 178 + struct list_head entries; 179 179 size_t offset; 180 180 struct menu *target; 181 181 int index; 182 182 }; 183 - CIRCLEQ_HEAD(jk_head, jump_key); 184 183 185 184 #define JUMP_NB 9 186 185
+91
scripts/kconfig/list.h
··· 1 + #ifndef LIST_H 2 + #define LIST_H 3 + 4 + /* 5 + * Copied from include/linux/... 6 + */ 7 + 8 + #undef offsetof 9 + #define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) 10 + 11 + /** 12 + * container_of - cast a member of a structure out to the containing structure 13 + * @ptr: the pointer to the member. 14 + * @type: the type of the container struct this is embedded in. 15 + * @member: the name of the member within the struct. 16 + * 17 + */ 18 + #define container_of(ptr, type, member) ({ \ 19 + const typeof( ((type *)0)->member ) *__mptr = (ptr); \ 20 + (type *)( (char *)__mptr - offsetof(type,member) );}) 21 + 22 + 23 + struct list_head { 24 + struct list_head *next, *prev; 25 + }; 26 + 27 + 28 + #define LIST_HEAD_INIT(name) { &(name), &(name) } 29 + 30 + #define LIST_HEAD(name) \ 31 + struct list_head name = LIST_HEAD_INIT(name) 32 + 33 + /** 34 + * list_entry - get the struct for this entry 35 + * @ptr: the &struct list_head pointer. 36 + * @type: the type of the struct this is embedded in. 37 + * @member: the name of the list_struct within the struct. 38 + */ 39 + #define list_entry(ptr, type, member) \ 40 + container_of(ptr, type, member) 41 + 42 + /** 43 + * list_for_each_entry - iterate over list of given type 44 + * @pos: the type * to use as a loop cursor. 45 + * @head: the head for your list. 46 + * @member: the name of the list_struct within the struct. 47 + */ 48 + #define list_for_each_entry(pos, head, member) \ 49 + for (pos = list_entry((head)->next, typeof(*pos), member); \ 50 + &pos->member != (head); \ 51 + pos = list_entry(pos->member.next, typeof(*pos), member)) 52 + 53 + /** 54 + * list_empty - tests whether a list is empty 55 + * @head: the list to test. 56 + */ 57 + static inline int list_empty(const struct list_head *head) 58 + { 59 + return head->next == head; 60 + } 61 + 62 + /* 63 + * Insert a new entry between two known consecutive entries. 
64 + * 65 + * This is only for internal list manipulation where we know 66 + * the prev/next entries already! 67 + */ 68 + static inline void __list_add(struct list_head *_new, 69 + struct list_head *prev, 70 + struct list_head *next) 71 + { 72 + next->prev = _new; 73 + _new->next = next; 74 + _new->prev = prev; 75 + prev->next = _new; 76 + } 77 + 78 + /** 79 + * list_add_tail - add a new entry 80 + * @new: new entry to be added 81 + * @head: list head to add it before 82 + * 83 + * Insert a new entry before the specified head. 84 + * This is useful for implementing queues. 85 + */ 86 + static inline void list_add_tail(struct list_head *_new, struct list_head *head) 87 + { 88 + __list_add(_new, head->prev, head); 89 + } 90 + 91 + #endif
+2 -2
scripts/kconfig/lkc_proto.h
··· 21 21 P(menu_get_parent_menu,struct menu *,(struct menu *menu)); 22 22 P(menu_has_help,bool,(struct menu *menu)); 23 23 P(menu_get_help,const char *,(struct menu *menu)); 24 - P(get_symbol_str, void, (struct gstr *r, struct symbol *sym, struct jk_head 24 + P(get_symbol_str, void, (struct gstr *r, struct symbol *sym, struct list_head 25 25 *head)); 26 - P(get_relations_str, struct gstr, (struct symbol **sym_arr, struct jk_head 26 + P(get_relations_str, struct gstr, (struct symbol **sym_arr, struct list_head 27 27 *head)); 28 28 P(menu_get_ext_help,void,(struct menu *menu, struct gstr *help)); 29 29
+3 -3
scripts/kconfig/mconf.c
··· 312 312 313 313 314 314 struct search_data { 315 - struct jk_head *head; 315 + struct list_head *head; 316 316 struct menu **targets; 317 317 int *keys; 318 318 }; ··· 323 323 struct jump_key *pos; 324 324 int k = 0; 325 325 326 - CIRCLEQ_FOREACH(pos, data->head, entries) { 326 + list_for_each_entry(pos, data->head, entries) { 327 327 if (pos->offset >= start && pos->offset < end) { 328 328 char header[4]; 329 329 ··· 375 375 376 376 sym_arr = sym_re_search(dialog_input); 377 377 do { 378 - struct jk_head head = CIRCLEQ_HEAD_INITIALIZER(head); 378 + LIST_HEAD(head); 379 379 struct menu *targets[JUMP_NB]; 380 380 int keys[JUMP_NB + 1], i; 381 381 struct search_data data = {
+8 -6
scripts/kconfig/menu.c
··· 508 508 } 509 509 510 510 static void get_prompt_str(struct gstr *r, struct property *prop, 511 - struct jk_head *head) 511 + struct list_head *head) 512 512 { 513 513 int i, j; 514 514 struct menu *submenu[8], *menu, *location = NULL; ··· 544 544 } else 545 545 jump->target = location; 546 546 547 - if (CIRCLEQ_EMPTY(head)) 547 + if (list_empty(head)) 548 548 jump->index = 0; 549 549 else 550 - jump->index = CIRCLEQ_LAST(head)->index + 1; 550 + jump->index = list_entry(head->prev, struct jump_key, 551 + entries)->index + 1; 551 552 552 - CIRCLEQ_INSERT_TAIL(head, jump, entries); 553 + list_add_tail(&jump->entries, head); 553 554 } 554 555 555 556 if (i > 0) { ··· 574 573 /* 575 574 * head is optional and may be NULL 576 575 */ 577 - void get_symbol_str(struct gstr *r, struct symbol *sym, struct jk_head *head) 576 + void get_symbol_str(struct gstr *r, struct symbol *sym, 577 + struct list_head *head) 578 578 { 579 579 bool hit; 580 580 struct property *prop; ··· 614 612 str_append(r, "\n\n"); 615 613 } 616 614 617 - struct gstr get_relations_str(struct symbol **sym_arr, struct jk_head *head) 615 + struct gstr get_relations_str(struct symbol **sym_arr, struct list_head *head) 618 616 { 619 617 struct symbol *sym; 620 618 struct gstr res = str_new();
+13 -5
security/device_cgroup.c
··· 164 164 struct dev_exception_item *ex, *tmp; 165 165 166 166 list_for_each_entry_safe(ex, tmp, &dev_cgroup->exceptions, list) { 167 - list_del(&ex->list); 168 - kfree(ex); 167 + list_del_rcu(&ex->list); 168 + kfree_rcu(ex, rcu); 169 169 } 170 170 } 171 171 ··· 298 298 struct dev_exception_item *ex; 299 299 bool match = false; 300 300 301 - list_for_each_entry(ex, &dev_cgroup->exceptions, list) { 301 + list_for_each_entry_rcu(ex, &dev_cgroup->exceptions, list) { 302 302 if ((refex->type & DEV_BLOCK) && !(ex->type & DEV_BLOCK)) 303 303 continue; 304 304 if ((refex->type & DEV_CHAR) && !(ex->type & DEV_CHAR)) ··· 352 352 */ 353 353 static inline int may_allow_all(struct dev_cgroup *parent) 354 354 { 355 + if (!parent) 356 + return 1; 355 357 return parent->behavior == DEVCG_DEFAULT_ALLOW; 356 358 } 357 359 ··· 378 376 int count, rc; 379 377 struct dev_exception_item ex; 380 378 struct cgroup *p = devcgroup->css.cgroup; 381 - struct dev_cgroup *parent = cgroup_to_devcgroup(p->parent); 379 + struct dev_cgroup *parent = NULL; 382 380 383 381 if (!capable(CAP_SYS_ADMIN)) 384 382 return -EPERM; 383 + 384 + if (p->parent) 385 + parent = cgroup_to_devcgroup(p->parent); 385 386 386 387 memset(&ex, 0, sizeof(ex)); 387 388 b = buffer; ··· 396 391 if (!may_allow_all(parent)) 397 392 return -EPERM; 398 393 dev_exception_clean(devcgroup); 394 + devcgroup->behavior = DEVCG_DEFAULT_ALLOW; 395 + if (!parent) 396 + break; 397 + 399 398 rc = dev_exceptions_copy(&devcgroup->exceptions, 400 399 &parent->exceptions); 401 400 if (rc) 402 401 return rc; 403 - devcgroup->behavior = DEVCG_DEFAULT_ALLOW; 404 402 break; 405 403 case DEVCG_DENY: 406 404 dev_exception_clean(devcgroup);
+1
sound/core/oss/mixer_oss.c
··· 76 76 snd_card_unref(card); 77 77 return -EFAULT; 78 78 } 79 + snd_card_unref(card); 79 80 return 0; 80 81 } 81 82
+1
sound/core/oss/pcm_oss.c
··· 2454 2454 mutex_unlock(&pcm->open_mutex); 2455 2455 if (err < 0) 2456 2456 goto __error; 2457 + snd_card_unref(pcm->card); 2457 2458 return err; 2458 2459 2459 2460 __error:
+4 -2
sound/core/pcm_native.c
··· 2122 2122 pcm = snd_lookup_minor_data(iminor(inode), 2123 2123 SNDRV_DEVICE_TYPE_PCM_PLAYBACK); 2124 2124 err = snd_pcm_open(file, pcm, SNDRV_PCM_STREAM_PLAYBACK); 2125 - snd_card_unref(pcm->card); 2125 + if (pcm) 2126 + snd_card_unref(pcm->card); 2126 2127 return err; 2127 2128 } 2128 2129 ··· 2136 2135 pcm = snd_lookup_minor_data(iminor(inode), 2137 2136 SNDRV_DEVICE_TYPE_PCM_CAPTURE); 2138 2137 err = snd_pcm_open(file, pcm, SNDRV_PCM_STREAM_CAPTURE); 2139 - snd_card_unref(pcm->card); 2138 + if (pcm) 2139 + snd_card_unref(pcm->card); 2140 2140 return err; 2141 2141 } 2142 2142
+1 -1
sound/core/sound.c
··· 114 114 mreg = snd_minors[minor]; 115 115 if (mreg && mreg->type == type) { 116 116 private_data = mreg->private_data; 117 - if (mreg->card_ptr) 117 + if (private_data && mreg->card_ptr) 118 118 atomic_inc(&mreg->card_ptr->refcount); 119 119 } else 120 120 private_data = NULL;
+1 -1
sound/core/sound_oss.c
··· 54 54 mreg = snd_oss_minors[minor]; 55 55 if (mreg && mreg->type == type) { 56 56 private_data = mreg->private_data; 57 - if (mreg->card_ptr) 57 + if (private_data && mreg->card_ptr) 58 58 atomic_inc(&mreg->card_ptr->refcount); 59 59 } else 60 60 private_data = NULL;
+1 -1
sound/i2c/other/ak4113.c
··· 426 426 }, 427 427 { 428 428 .iface = SNDRV_CTL_ELEM_IFACE_PCM, 429 - .name = "IEC958 Preample Capture Default", 429 + .name = "IEC958 Preamble Capture Default", 430 430 .access = SNDRV_CTL_ELEM_ACCESS_READ | 431 431 SNDRV_CTL_ELEM_ACCESS_VOLATILE, 432 432 .info = snd_ak4113_spdif_pinfo,
+1 -1
sound/i2c/other/ak4114.c
··· 401 401 }, 402 402 { 403 403 .iface = SNDRV_CTL_ELEM_IFACE_PCM, 404 - .name = "IEC958 Preample Capture Default", 404 + .name = "IEC958 Preamble Capture Default", 405 405 .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE, 406 406 .info = snd_ak4114_spdif_pinfo, 407 407 .get = snd_ak4114_spdif_pget,
+1 -1
sound/i2c/other/ak4117.c
··· 380 380 }, 381 381 { 382 382 .iface = SNDRV_CTL_ELEM_IFACE_PCM, 383 - .name = "IEC958 Preample Capture Default", 383 + .name = "IEC958 Preamble Capture Default", 384 384 .access = SNDRV_CTL_ELEM_ACCESS_READ | SNDRV_CTL_ELEM_ACCESS_VOLATILE, 385 385 .info = snd_ak4117_spdif_pinfo, 386 386 .get = snd_ak4117_spdif_pget,
+9 -2
sound/pci/es1968.c
··· 2581 2581 struct es1968 *chip = tea->private_data; 2582 2582 unsigned long io = chip->io_port + GPIO_DATA; 2583 2583 u16 val = inw(io); 2584 + u8 ret; 2584 2585 2585 - return (val & STR_DATA) ? TEA575X_DATA : 0 | 2586 - (val & STR_MOST) ? TEA575X_MOST : 0; 2586 + ret = 0; 2587 + if (val & STR_DATA) 2588 + ret |= TEA575X_DATA; 2589 + if (val & STR_MOST) 2590 + ret |= TEA575X_MOST; 2591 + return ret; 2587 2592 } 2588 2593 2589 2594 static void snd_es1968_tea575x_set_direction(struct snd_tea575x *tea, bool output) ··· 2660 2655 { TYPE_MAESTRO2E, 0x1179 }, 2661 2656 { TYPE_MAESTRO2E, 0x14c0 }, /* HP omnibook 4150 */ 2662 2657 { TYPE_MAESTRO2E, 0x1558 }, 2658 + { TYPE_MAESTRO2E, 0x125d }, /* a PCI card, e.g. Terratec DMX */ 2659 + { TYPE_MAESTRO2, 0x125d }, /* a PCI card, e.g. SF64-PCE2 */ 2663 2660 }; 2664 2661 2665 2662 static struct ess_device_list mpu_blacklist[] __devinitdata = {
+7 -2
sound/pci/fm801.c
··· 767 767 struct fm801 *chip = tea->private_data; 768 768 unsigned short reg = inw(FM801_REG(chip, GPIO_CTRL)); 769 769 struct snd_fm801_tea575x_gpio gpio = *get_tea575x_gpio(chip); 770 + u8 ret; 770 771 771 - return (reg & FM801_GPIO_GP(gpio.data)) ? TEA575X_DATA : 0 | 772 - (reg & FM801_GPIO_GP(gpio.most)) ? TEA575X_MOST : 0; 772 + ret = 0; 773 + if (reg & FM801_GPIO_GP(gpio.data)) 774 + ret |= TEA575X_DATA; 775 + if (reg & FM801_GPIO_GP(gpio.most)) 776 + ret |= TEA575X_MOST; 777 + return ret; 773 778 } 774 779 775 780 static void snd_fm801_tea575x_set_direction(struct snd_tea575x *tea, bool output)
+2
sound/pci/hda/hda_intel.c
··· 3563 3563 /* Teradici */ 3564 3564 { PCI_DEVICE(0x6549, 0x1200), 3565 3565 .driver_data = AZX_DRIVER_TERA | AZX_DCAPS_NO_64BIT }, 3566 + { PCI_DEVICE(0x6549, 0x2200), 3567 + .driver_data = AZX_DRIVER_TERA | AZX_DCAPS_NO_64BIT }, 3566 3568 /* Creative X-Fi (CA0110-IBG) */ 3567 3569 /* CTHDA chips */ 3568 3570 { PCI_DEVICE(0x1102, 0x0010),
+1
sound/pci/hda/patch_analog.c
··· 545 545 if (spec->multiout.dig_out_nid) { 546 546 info++; 547 547 codec->num_pcms++; 548 + codec->spdif_status_reset = 1; 548 549 info->name = "AD198x Digital"; 549 550 info->pcm_type = HDA_PCM_TYPE_SPDIF; 550 551 info->stream[SNDRV_PCM_STREAM_PLAYBACK] = ad198x_pcm_digital_playback;
+12 -9
sound/pci/hda/patch_cirrus.c
··· 101 101 #define CS420X_VENDOR_NID 0x11 102 102 #define CS_DIG_OUT1_PIN_NID 0x10 103 103 #define CS_DIG_OUT2_PIN_NID 0x15 104 - #define CS_DMIC1_PIN_NID 0x12 105 - #define CS_DMIC2_PIN_NID 0x0e 104 + #define CS_DMIC1_PIN_NID 0x0e 105 + #define CS_DMIC2_PIN_NID 0x12 106 106 107 107 /* coef indices */ 108 108 #define IDX_SPDIF_STAT 0x0000 ··· 1079 1079 cs_automic(codec, NULL); 1080 1080 1081 1081 coef = 0x000a; /* ADC1/2 - Digital and Analog Soft Ramp */ 1082 + cs_vendor_coef_set(codec, IDX_ADC_CFG, coef); 1083 + 1084 + coef = cs_vendor_coef_get(codec, IDX_BEEP_CFG); 1082 1085 if (is_active_pin(codec, CS_DMIC2_PIN_NID)) 1083 - coef |= 0x0500; /* DMIC2 2 chan on, GPIO1 off */ 1086 + coef |= 1 << 4; /* DMIC2 2 chan on, GPIO1 off */ 1084 1087 if (is_active_pin(codec, CS_DMIC1_PIN_NID)) 1085 - coef |= 0x1800; /* DMIC1 2 chan on, GPIO0 off 1088 + coef |= 1 << 3; /* DMIC1 2 chan on, GPIO0 off 1086 1089 * No effect if SPDIF_OUT2 is 1087 1090 * selected in IDX_SPDIF_CTL. 1088 1091 */ 1089 - cs_vendor_coef_set(codec, IDX_ADC_CFG, coef); 1092 + 1093 + cs_vendor_coef_set(codec, IDX_BEEP_CFG, coef); 1090 1094 } else { 1091 1095 if (spec->mic_detect) 1092 1096 cs_automic(codec, NULL); ··· 1111 1107 | 0x0400 /* Disable Coefficient Auto increment */ 1112 1108 )}, 1113 1109 /* Beep */ 1114 - {0x11, AC_VERB_SET_COEF_INDEX, IDX_DAC_CFG}, 1110 + {0x11, AC_VERB_SET_COEF_INDEX, IDX_BEEP_CFG}, 1115 1111 {0x11, AC_VERB_SET_PROC_COEF, 0x0007}, /* Enable Beep thru DAC1/2/3 */ 1116 1112 1117 1113 {} /* terminator */ ··· 1732 1728 1733 1729 } 1734 1730 1735 - static struct snd_kcontrol_new cs421x_capture_source = { 1736 - 1731 + static const struct snd_kcontrol_new cs421x_capture_source = { 1737 1732 .iface = SNDRV_CTL_ELEM_IFACE_MIXER, 1738 1733 .name = "Capture Source", 1739 1734 .access = SNDRV_CTL_ELEM_ACCESS_READWRITE, ··· 1949 1946 } 1950 1947 #endif 1951 1948 1952 - static struct hda_codec_ops cs421x_patch_ops = { 1949 + static const struct hda_codec_ops cs421x_patch_ops = { 1953 
1950 .build_controls = cs421x_build_controls, 1954 1951 .build_pcms = cs_build_pcms, 1955 1952 .init = cs421x_init,
+14 -13
sound/pci/hda/patch_realtek.c
··· 5407 5407 SND_PCI_QUIRK(0x106b, 0x4000, "MacbookPro 5,1", ALC889_FIXUP_IMAC91_VREF), 5408 5408 SND_PCI_QUIRK(0x106b, 0x4100, "Macmini 3,1", ALC889_FIXUP_IMAC91_VREF), 5409 5409 SND_PCI_QUIRK(0x106b, 0x4200, "Mac Pro 5,1", ALC885_FIXUP_MACPRO_GPIO), 5410 + SND_PCI_QUIRK(0x106b, 0x4300, "iMac 9,1", ALC889_FIXUP_IMAC91_VREF), 5410 5411 SND_PCI_QUIRK(0x106b, 0x4600, "MacbookPro 5,2", ALC889_FIXUP_IMAC91_VREF), 5411 5412 SND_PCI_QUIRK(0x106b, 0x4900, "iMac 9,1 Aluminum", ALC889_FIXUP_IMAC91_VREF), 5412 5413 SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC889_FIXUP_IMAC91_VREF), ··· 5841 5840 return alc_parse_auto_config(codec, alc269_ignore, ssids); 5842 5841 } 5843 5842 5844 - static void alc269_toggle_power_output(struct hda_codec *codec, int power_up) 5843 + static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up) 5845 5844 { 5846 5845 int val = alc_read_coef_idx(codec, 0x04); 5847 5846 if (power_up) ··· 5858 5857 if (spec->codec_variant != ALC269_TYPE_ALC269VB) 5859 5858 return; 5860 5859 5861 - if ((alc_get_coef0(codec) & 0x00ff) == 0x017) 5862 - alc269_toggle_power_output(codec, 0); 5863 - if ((alc_get_coef0(codec) & 0x00ff) == 0x018) { 5864 - alc269_toggle_power_output(codec, 0); 5860 + if (spec->codec_variant == ALC269_TYPE_ALC269VB) 5861 + alc269vb_toggle_power_output(codec, 0); 5862 + if (spec->codec_variant == ALC269_TYPE_ALC269VB && 5863 + (alc_get_coef0(codec) & 0x00ff) == 0x018) { 5865 5864 msleep(150); 5866 5865 } 5867 5866 } ··· 5871 5870 { 5872 5871 struct alc_spec *spec = codec->spec; 5873 5872 5874 - if (spec->codec_variant == ALC269_TYPE_ALC269VB || 5873 + if (spec->codec_variant == ALC269_TYPE_ALC269VB) 5874 + alc269vb_toggle_power_output(codec, 0); 5875 + if (spec->codec_variant == ALC269_TYPE_ALC269VB && 5875 5876 (alc_get_coef0(codec) & 0x00ff) == 0x018) { 5876 - alc269_toggle_power_output(codec, 0); 5877 5877 msleep(150); 5878 5878 } 5879 5879 5880 5880 codec->patch_ops.init(codec); 5881 5881 5882 - if 
(spec->codec_variant == ALC269_TYPE_ALC269VB || 5882 + if (spec->codec_variant == ALC269_TYPE_ALC269VB) 5883 + alc269vb_toggle_power_output(codec, 1); 5884 + if (spec->codec_variant == ALC269_TYPE_ALC269VB && 5883 5885 (alc_get_coef0(codec) & 0x00ff) == 0x017) { 5884 - alc269_toggle_power_output(codec, 1); 5885 5886 msleep(200); 5886 5887 } 5887 - 5888 - if (spec->codec_variant == ALC269_TYPE_ALC269VB || 5889 - (alc_get_coef0(codec) & 0x00ff) == 0x018) 5890 - alc269_toggle_power_output(codec, 1); 5891 5888 5892 5889 snd_hda_codec_resume_amp(codec); 5893 5890 snd_hda_codec_resume_cache(codec); ··· 7078 7079 .patch = patch_alc662 }, 7079 7080 { .id = 0x10ec0663, .name = "ALC663", .patch = patch_alc662 }, 7080 7081 { .id = 0x10ec0665, .name = "ALC665", .patch = patch_alc662 }, 7082 + { .id = 0x10ec0668, .name = "ALC668", .patch = patch_alc662 }, 7081 7083 { .id = 0x10ec0670, .name = "ALC670", .patch = patch_alc662 }, 7082 7084 { .id = 0x10ec0680, .name = "ALC680", .patch = patch_alc680 }, 7083 7085 { .id = 0x10ec0880, .name = "ALC880", .patch = patch_alc880 }, ··· 7096 7096 { .id = 0x10ec0889, .name = "ALC889", .patch = patch_alc882 }, 7097 7097 { .id = 0x10ec0892, .name = "ALC892", .patch = patch_alc662 }, 7098 7098 { .id = 0x10ec0899, .name = "ALC898", .patch = patch_alc882 }, 7099 + { .id = 0x10ec0900, .name = "ALC1150", .patch = patch_alc882 }, 7099 7100 {} /* terminator */ 7100 7101 }; 7101 7102
+29 -7
sound/pci/hda/patch_via.c
··· 1809 1809 { 1810 1810 struct via_spec *spec = codec->spec; 1811 1811 const struct auto_pin_cfg *cfg = &spec->autocfg; 1812 - int i, dac_num; 1812 + int i; 1813 1813 hda_nid_t nid; 1814 1814 1815 + spec->multiout.num_dacs = 0; 1815 1816 spec->multiout.dac_nids = spec->private_dac_nids; 1816 - dac_num = 0; 1817 1817 for (i = 0; i < cfg->line_outs; i++) { 1818 1818 hda_nid_t dac = 0; 1819 1819 nid = cfg->line_out_pins[i]; ··· 1824 1824 if (!i && parse_output_path(codec, nid, dac, 1, 1825 1825 &spec->out_mix_path)) 1826 1826 dac = spec->out_mix_path.path[0]; 1827 - if (dac) { 1828 - spec->private_dac_nids[i] = dac; 1829 - dac_num++; 1830 - } 1827 + if (dac) 1828 + spec->private_dac_nids[spec->multiout.num_dacs++] = dac; 1831 1829 } 1832 1830 if (!spec->out_path[0].depth && spec->out_mix_path.depth) { 1833 1831 spec->out_path[0] = spec->out_mix_path; 1834 1832 spec->out_mix_path.depth = 0; 1835 1833 } 1836 - spec->multiout.num_dacs = dac_num; 1837 1834 return 0; 1838 1835 } 1839 1836 ··· 3625 3628 */ 3626 3629 enum { 3627 3630 VIA_FIXUP_INTMIC_BOOST, 3631 + VIA_FIXUP_ASUS_G75, 3628 3632 }; 3629 3633 3630 3634 static void via_fixup_intmic_boost(struct hda_codec *codec, ··· 3640 3642 .type = HDA_FIXUP_FUNC, 3641 3643 .v.func = via_fixup_intmic_boost, 3642 3644 }, 3645 + [VIA_FIXUP_ASUS_G75] = { 3646 + .type = HDA_FIXUP_PINS, 3647 + .v.pins = (const struct hda_pintbl[]) { 3648 + /* set 0x24 and 0x33 as speakers */ 3649 + { 0x24, 0x991301f0 }, 3650 + { 0x33, 0x991301f1 }, /* subwoofer */ 3651 + { } 3652 + } 3653 + }, 3643 3654 }; 3644 3655 3645 3656 static const struct snd_pci_quirk vt2002p_fixups[] = { 3657 + SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75), 3646 3658 SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST), 3647 3659 {} 3648 3660 }; 3661 + 3662 + /* NIDs 0x24 and 0x33 on VT1802 have connections to non-existing NID 0x3e 3663 + * Replace this with mixer NID 0x1c 3664 + */ 3665 + static void fix_vt1802_connections(struct hda_codec 
*codec) 3666 + { 3667 + static hda_nid_t conn_24[] = { 0x14, 0x1c }; 3668 + static hda_nid_t conn_33[] = { 0x1c }; 3669 + 3670 + snd_hda_override_conn_list(codec, 0x24, ARRAY_SIZE(conn_24), conn_24); 3671 + snd_hda_override_conn_list(codec, 0x33, ARRAY_SIZE(conn_33), conn_33); 3672 + } 3649 3673 3650 3674 /* patch for vt2002P */ 3651 3675 static int patch_vt2002P(struct hda_codec *codec) ··· 3683 3663 spec->aa_mix_nid = 0x21; 3684 3664 override_mic_boost(codec, 0x2b, 0, 3, 40); 3685 3665 override_mic_boost(codec, 0x29, 0, 3, 40); 3666 + if (spec->codec_type == VT1802) 3667 + fix_vt1802_connections(codec); 3686 3668 add_secret_dac_path(codec); 3687 3669 3688 3670 snd_hda_pick_fixup(codec, NULL, vt2002p_fixups, via_fixups);
+3 -2
sound/pci/rme9652/hdspm.c
··· 3979 3979 case 8: /* SYNC IN */ 3980 3980 val = hdspm_sync_in_sync_check(hdspm); break; 3981 3981 default: 3982 - val = hdspm_s1_sync_check(hdspm, ucontrol->id.index-1); 3982 + val = hdspm_s1_sync_check(hdspm, 3983 + kcontrol->private_value-1); 3983 3984 } 3984 3985 break; 3985 3986 ··· 4900 4899 insel = "Coaxial"; 4901 4900 break; 4902 4901 default: 4903 - insel = "Unkown"; 4902 + insel = "Unknown"; 4904 4903 } 4905 4904 4906 4905 snd_iprintf(buffer,
+2 -3
sound/soc/codecs/cs42l52.c
··· 763 763 if ((freq >= CS42L52_MIN_CLK) && (freq <= CS42L52_MAX_CLK)) { 764 764 cs42l52->sysclk = freq; 765 765 } else { 766 - dev_err(codec->dev, "Invalid freq paramter\n"); 766 + dev_err(codec->dev, "Invalid freq parameter\n"); 767 767 return -EINVAL; 768 768 } 769 769 return 0; ··· 773 773 { 774 774 struct snd_soc_codec *codec = codec_dai->codec; 775 775 struct cs42l52_private *cs42l52 = snd_soc_codec_get_drvdata(codec); 776 - int ret = 0; 777 776 u8 iface = 0; 778 777 779 778 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { ··· 821 822 case SND_SOC_DAIFMT_NB_IF: 822 823 break; 823 824 default: 824 - ret = -EINVAL; 825 + return -EINVAL; 825 826 } 826 827 cs42l52->config.format = iface; 827 828 snd_soc_write(codec, CS42L52_IFACE_CTL1, cs42l52->config.format);
+551 -1
sound/soc/codecs/wm5102.c
··· 42 42 static DECLARE_TLV_DB_SCALE(digital_tlv, -6400, 50, 0); 43 43 static DECLARE_TLV_DB_SCALE(noise_tlv, 0, 600, 0); 44 44 45 + static const struct reg_default wm5102_sysclk_reva_patch[] = { 46 + { 0x3000, 0x2225 }, 47 + { 0x3001, 0x3a03 }, 48 + { 0x3002, 0x0225 }, 49 + { 0x3003, 0x0801 }, 50 + { 0x3004, 0x6249 }, 51 + { 0x3005, 0x0c04 }, 52 + { 0x3006, 0x0225 }, 53 + { 0x3007, 0x5901 }, 54 + { 0x3008, 0xe249 }, 55 + { 0x3009, 0x030d }, 56 + { 0x300a, 0x0249 }, 57 + { 0x300b, 0x2c01 }, 58 + { 0x300c, 0xe249 }, 59 + { 0x300d, 0x4342 }, 60 + { 0x300e, 0xe249 }, 61 + { 0x300f, 0x73c0 }, 62 + { 0x3010, 0x4249 }, 63 + { 0x3011, 0x0c00 }, 64 + { 0x3012, 0x0225 }, 65 + { 0x3013, 0x1f01 }, 66 + { 0x3014, 0x0225 }, 67 + { 0x3015, 0x1e01 }, 68 + { 0x3016, 0x0225 }, 69 + { 0x3017, 0xfa00 }, 70 + { 0x3018, 0x0000 }, 71 + { 0x3019, 0xf000 }, 72 + { 0x301a, 0x0000 }, 73 + { 0x301b, 0xf000 }, 74 + { 0x301c, 0x0000 }, 75 + { 0x301d, 0xf000 }, 76 + { 0x301e, 0x0000 }, 77 + { 0x301f, 0xf000 }, 78 + { 0x3020, 0x0000 }, 79 + { 0x3021, 0xf000 }, 80 + { 0x3022, 0x0000 }, 81 + { 0x3023, 0xf000 }, 82 + { 0x3024, 0x0000 }, 83 + { 0x3025, 0xf000 }, 84 + { 0x3026, 0x0000 }, 85 + { 0x3027, 0xf000 }, 86 + { 0x3028, 0x0000 }, 87 + { 0x3029, 0xf000 }, 88 + { 0x302a, 0x0000 }, 89 + { 0x302b, 0xf000 }, 90 + { 0x302c, 0x0000 }, 91 + { 0x302d, 0xf000 }, 92 + { 0x302e, 0x0000 }, 93 + { 0x302f, 0xf000 }, 94 + { 0x3030, 0x0225 }, 95 + { 0x3031, 0x1a01 }, 96 + { 0x3032, 0x0225 }, 97 + { 0x3033, 0x1e00 }, 98 + { 0x3034, 0x0225 }, 99 + { 0x3035, 0x1f00 }, 100 + { 0x3036, 0x6225 }, 101 + { 0x3037, 0xf800 }, 102 + { 0x3038, 0x0000 }, 103 + { 0x3039, 0xf000 }, 104 + { 0x303a, 0x0000 }, 105 + { 0x303b, 0xf000 }, 106 + { 0x303c, 0x0000 }, 107 + { 0x303d, 0xf000 }, 108 + { 0x303e, 0x0000 }, 109 + { 0x303f, 0xf000 }, 110 + { 0x3040, 0x2226 }, 111 + { 0x3041, 0x3a03 }, 112 + { 0x3042, 0x0226 }, 113 + { 0x3043, 0x0801 }, 114 + { 0x3044, 0x6249 }, 115 + { 0x3045, 0x0c06 }, 116 + { 0x3046, 0x0226 }, 117 + { 
0x3047, 0x5901 }, 118 + { 0x3048, 0xe249 }, 119 + { 0x3049, 0x030d }, 120 + { 0x304a, 0x0249 }, 121 + { 0x304b, 0x2c01 }, 122 + { 0x304c, 0xe249 }, 123 + { 0x304d, 0x4342 }, 124 + { 0x304e, 0xe249 }, 125 + { 0x304f, 0x73c0 }, 126 + { 0x3050, 0x4249 }, 127 + { 0x3051, 0x0c00 }, 128 + { 0x3052, 0x0226 }, 129 + { 0x3053, 0x1f01 }, 130 + { 0x3054, 0x0226 }, 131 + { 0x3055, 0x1e01 }, 132 + { 0x3056, 0x0226 }, 133 + { 0x3057, 0xfa00 }, 134 + { 0x3058, 0x0000 }, 135 + { 0x3059, 0xf000 }, 136 + { 0x305a, 0x0000 }, 137 + { 0x305b, 0xf000 }, 138 + { 0x305c, 0x0000 }, 139 + { 0x305d, 0xf000 }, 140 + { 0x305e, 0x0000 }, 141 + { 0x305f, 0xf000 }, 142 + { 0x3060, 0x0000 }, 143 + { 0x3061, 0xf000 }, 144 + { 0x3062, 0x0000 }, 145 + { 0x3063, 0xf000 }, 146 + { 0x3064, 0x0000 }, 147 + { 0x3065, 0xf000 }, 148 + { 0x3066, 0x0000 }, 149 + { 0x3067, 0xf000 }, 150 + { 0x3068, 0x0000 }, 151 + { 0x3069, 0xf000 }, 152 + { 0x306a, 0x0000 }, 153 + { 0x306b, 0xf000 }, 154 + { 0x306c, 0x0000 }, 155 + { 0x306d, 0xf000 }, 156 + { 0x306e, 0x0000 }, 157 + { 0x306f, 0xf000 }, 158 + { 0x3070, 0x0226 }, 159 + { 0x3071, 0x1a01 }, 160 + { 0x3072, 0x0226 }, 161 + { 0x3073, 0x1e00 }, 162 + { 0x3074, 0x0226 }, 163 + { 0x3075, 0x1f00 }, 164 + { 0x3076, 0x6226 }, 165 + { 0x3077, 0xf800 }, 166 + { 0x3078, 0x0000 }, 167 + { 0x3079, 0xf000 }, 168 + { 0x307a, 0x0000 }, 169 + { 0x307b, 0xf000 }, 170 + { 0x307c, 0x0000 }, 171 + { 0x307d, 0xf000 }, 172 + { 0x307e, 0x0000 }, 173 + { 0x307f, 0xf000 }, 174 + { 0x3080, 0x2227 }, 175 + { 0x3081, 0x3a03 }, 176 + { 0x3082, 0x0227 }, 177 + { 0x3083, 0x0801 }, 178 + { 0x3084, 0x6255 }, 179 + { 0x3085, 0x0c04 }, 180 + { 0x3086, 0x0227 }, 181 + { 0x3087, 0x5901 }, 182 + { 0x3088, 0xe255 }, 183 + { 0x3089, 0x030d }, 184 + { 0x308a, 0x0255 }, 185 + { 0x308b, 0x2c01 }, 186 + { 0x308c, 0xe255 }, 187 + { 0x308d, 0x4342 }, 188 + { 0x308e, 0xe255 }, 189 + { 0x308f, 0x73c0 }, 190 + { 0x3090, 0x4255 }, 191 + { 0x3091, 0x0c00 }, 192 + { 0x3092, 0x0227 }, 193 + { 0x3093, 0x1f01 }, 194 + 
{ 0x3094, 0x0227 }, 195 + { 0x3095, 0x1e01 }, 196 + { 0x3096, 0x0227 }, 197 + { 0x3097, 0xfa00 }, 198 + { 0x3098, 0x0000 }, 199 + { 0x3099, 0xf000 }, 200 + { 0x309a, 0x0000 }, 201 + { 0x309b, 0xf000 }, 202 + { 0x309c, 0x0000 }, 203 + { 0x309d, 0xf000 }, 204 + { 0x309e, 0x0000 }, 205 + { 0x309f, 0xf000 }, 206 + { 0x30a0, 0x0000 }, 207 + { 0x30a1, 0xf000 }, 208 + { 0x30a2, 0x0000 }, 209 + { 0x30a3, 0xf000 }, 210 + { 0x30a4, 0x0000 }, 211 + { 0x30a5, 0xf000 }, 212 + { 0x30a6, 0x0000 }, 213 + { 0x30a7, 0xf000 }, 214 + { 0x30a8, 0x0000 }, 215 + { 0x30a9, 0xf000 }, 216 + { 0x30aa, 0x0000 }, 217 + { 0x30ab, 0xf000 }, 218 + { 0x30ac, 0x0000 }, 219 + { 0x30ad, 0xf000 }, 220 + { 0x30ae, 0x0000 }, 221 + { 0x30af, 0xf000 }, 222 + { 0x30b0, 0x0227 }, 223 + { 0x30b1, 0x1a01 }, 224 + { 0x30b2, 0x0227 }, 225 + { 0x30b3, 0x1e00 }, 226 + { 0x30b4, 0x0227 }, 227 + { 0x30b5, 0x1f00 }, 228 + { 0x30b6, 0x6227 }, 229 + { 0x30b7, 0xf800 }, 230 + { 0x30b8, 0x0000 }, 231 + { 0x30b9, 0xf000 }, 232 + { 0x30ba, 0x0000 }, 233 + { 0x30bb, 0xf000 }, 234 + { 0x30bc, 0x0000 }, 235 + { 0x30bd, 0xf000 }, 236 + { 0x30be, 0x0000 }, 237 + { 0x30bf, 0xf000 }, 238 + { 0x30c0, 0x2228 }, 239 + { 0x30c1, 0x3a03 }, 240 + { 0x30c2, 0x0228 }, 241 + { 0x30c3, 0x0801 }, 242 + { 0x30c4, 0x6255 }, 243 + { 0x30c5, 0x0c06 }, 244 + { 0x30c6, 0x0228 }, 245 + { 0x30c7, 0x5901 }, 246 + { 0x30c8, 0xe255 }, 247 + { 0x30c9, 0x030d }, 248 + { 0x30ca, 0x0255 }, 249 + { 0x30cb, 0x2c01 }, 250 + { 0x30cc, 0xe255 }, 251 + { 0x30cd, 0x4342 }, 252 + { 0x30ce, 0xe255 }, 253 + { 0x30cf, 0x73c0 }, 254 + { 0x30d0, 0x4255 }, 255 + { 0x30d1, 0x0c00 }, 256 + { 0x30d2, 0x0228 }, 257 + { 0x30d3, 0x1f01 }, 258 + { 0x30d4, 0x0228 }, 259 + { 0x30d5, 0x1e01 }, 260 + { 0x30d6, 0x0228 }, 261 + { 0x30d7, 0xfa00 }, 262 + { 0x30d8, 0x0000 }, 263 + { 0x30d9, 0xf000 }, 264 + { 0x30da, 0x0000 }, 265 + { 0x30db, 0xf000 }, 266 + { 0x30dc, 0x0000 }, 267 + { 0x30dd, 0xf000 }, 268 + { 0x30de, 0x0000 }, 269 + { 0x30df, 0xf000 }, 270 + { 0x30e0, 0x0000 }, 271 
+ { 0x30e1, 0xf000 }, 272 + { 0x30e2, 0x0000 }, 273 + { 0x30e3, 0xf000 }, 274 + { 0x30e4, 0x0000 }, 275 + { 0x30e5, 0xf000 }, 276 + { 0x30e6, 0x0000 }, 277 + { 0x30e7, 0xf000 }, 278 + { 0x30e8, 0x0000 }, 279 + { 0x30e9, 0xf000 }, 280 + { 0x30ea, 0x0000 }, 281 + { 0x30eb, 0xf000 }, 282 + { 0x30ec, 0x0000 }, 283 + { 0x30ed, 0xf000 }, 284 + { 0x30ee, 0x0000 }, 285 + { 0x30ef, 0xf000 }, 286 + { 0x30f0, 0x0228 }, 287 + { 0x30f1, 0x1a01 }, 288 + { 0x30f2, 0x0228 }, 289 + { 0x30f3, 0x1e00 }, 290 + { 0x30f4, 0x0228 }, 291 + { 0x30f5, 0x1f00 }, 292 + { 0x30f6, 0x6228 }, 293 + { 0x30f7, 0xf800 }, 294 + { 0x30f8, 0x0000 }, 295 + { 0x30f9, 0xf000 }, 296 + { 0x30fa, 0x0000 }, 297 + { 0x30fb, 0xf000 }, 298 + { 0x30fc, 0x0000 }, 299 + { 0x30fd, 0xf000 }, 300 + { 0x30fe, 0x0000 }, 301 + { 0x30ff, 0xf000 }, 302 + { 0x3100, 0x222b }, 303 + { 0x3101, 0x3a03 }, 304 + { 0x3102, 0x222b }, 305 + { 0x3103, 0x5803 }, 306 + { 0x3104, 0xe26f }, 307 + { 0x3105, 0x030d }, 308 + { 0x3106, 0x626f }, 309 + { 0x3107, 0x2c01 }, 310 + { 0x3108, 0xe26f }, 311 + { 0x3109, 0x4342 }, 312 + { 0x310a, 0xe26f }, 313 + { 0x310b, 0x73c0 }, 314 + { 0x310c, 0x026f }, 315 + { 0x310d, 0x0c00 }, 316 + { 0x310e, 0x022b }, 317 + { 0x310f, 0x1f01 }, 318 + { 0x3110, 0x022b }, 319 + { 0x3111, 0x1e01 }, 320 + { 0x3112, 0x022b }, 321 + { 0x3113, 0xfa00 }, 322 + { 0x3114, 0x0000 }, 323 + { 0x3115, 0xf000 }, 324 + { 0x3116, 0x0000 }, 325 + { 0x3117, 0xf000 }, 326 + { 0x3118, 0x0000 }, 327 + { 0x3119, 0xf000 }, 328 + { 0x311a, 0x0000 }, 329 + { 0x311b, 0xf000 }, 330 + { 0x311c, 0x0000 }, 331 + { 0x311d, 0xf000 }, 332 + { 0x311e, 0x0000 }, 333 + { 0x311f, 0xf000 }, 334 + { 0x3120, 0x022b }, 335 + { 0x3121, 0x0a01 }, 336 + { 0x3122, 0x022b }, 337 + { 0x3123, 0x1e00 }, 338 + { 0x3124, 0x022b }, 339 + { 0x3125, 0x1f00 }, 340 + { 0x3126, 0x622b }, 341 + { 0x3127, 0xf800 }, 342 + { 0x3128, 0x0000 }, 343 + { 0x3129, 0xf000 }, 344 + { 0x312a, 0x0000 }, 345 + { 0x312b, 0xf000 }, 346 + { 0x312c, 0x0000 }, 347 + { 0x312d, 0xf000 }, 
348 + { 0x312e, 0x0000 }, 349 + { 0x312f, 0xf000 }, 350 + { 0x3130, 0x0000 }, 351 + { 0x3131, 0xf000 }, 352 + { 0x3132, 0x0000 }, 353 + { 0x3133, 0xf000 }, 354 + { 0x3134, 0x0000 }, 355 + { 0x3135, 0xf000 }, 356 + { 0x3136, 0x0000 }, 357 + { 0x3137, 0xf000 }, 358 + { 0x3138, 0x0000 }, 359 + { 0x3139, 0xf000 }, 360 + { 0x313a, 0x0000 }, 361 + { 0x313b, 0xf000 }, 362 + { 0x313c, 0x0000 }, 363 + { 0x313d, 0xf000 }, 364 + { 0x313e, 0x0000 }, 365 + { 0x313f, 0xf000 }, 366 + { 0x3140, 0x0000 }, 367 + { 0x3141, 0xf000 }, 368 + { 0x3142, 0x0000 }, 369 + { 0x3143, 0xf000 }, 370 + { 0x3144, 0x0000 }, 371 + { 0x3145, 0xf000 }, 372 + { 0x3146, 0x0000 }, 373 + { 0x3147, 0xf000 }, 374 + { 0x3148, 0x0000 }, 375 + { 0x3149, 0xf000 }, 376 + { 0x314a, 0x0000 }, 377 + { 0x314b, 0xf000 }, 378 + { 0x314c, 0x0000 }, 379 + { 0x314d, 0xf000 }, 380 + { 0x314e, 0x0000 }, 381 + { 0x314f, 0xf000 }, 382 + { 0x3150, 0x0000 }, 383 + { 0x3151, 0xf000 }, 384 + { 0x3152, 0x0000 }, 385 + { 0x3153, 0xf000 }, 386 + { 0x3154, 0x0000 }, 387 + { 0x3155, 0xf000 }, 388 + { 0x3156, 0x0000 }, 389 + { 0x3157, 0xf000 }, 390 + { 0x3158, 0x0000 }, 391 + { 0x3159, 0xf000 }, 392 + { 0x315a, 0x0000 }, 393 + { 0x315b, 0xf000 }, 394 + { 0x315c, 0x0000 }, 395 + { 0x315d, 0xf000 }, 396 + { 0x315e, 0x0000 }, 397 + { 0x315f, 0xf000 }, 398 + { 0x3160, 0x0000 }, 399 + { 0x3161, 0xf000 }, 400 + { 0x3162, 0x0000 }, 401 + { 0x3163, 0xf000 }, 402 + { 0x3164, 0x0000 }, 403 + { 0x3165, 0xf000 }, 404 + { 0x3166, 0x0000 }, 405 + { 0x3167, 0xf000 }, 406 + { 0x3168, 0x0000 }, 407 + { 0x3169, 0xf000 }, 408 + { 0x316a, 0x0000 }, 409 + { 0x316b, 0xf000 }, 410 + { 0x316c, 0x0000 }, 411 + { 0x316d, 0xf000 }, 412 + { 0x316e, 0x0000 }, 413 + { 0x316f, 0xf000 }, 414 + { 0x3170, 0x0000 }, 415 + { 0x3171, 0xf000 }, 416 + { 0x3172, 0x0000 }, 417 + { 0x3173, 0xf000 }, 418 + { 0x3174, 0x0000 }, 419 + { 0x3175, 0xf000 }, 420 + { 0x3176, 0x0000 }, 421 + { 0x3177, 0xf000 }, 422 + { 0x3178, 0x0000 }, 423 + { 0x3179, 0xf000 }, 424 + { 0x317a, 0x0000 
}, 425 + { 0x317b, 0xf000 }, 426 + { 0x317c, 0x0000 }, 427 + { 0x317d, 0xf000 }, 428 + { 0x317e, 0x0000 }, 429 + { 0x317f, 0xf000 }, 430 + { 0x3180, 0x2001 }, 431 + { 0x3181, 0xf101 }, 432 + { 0x3182, 0x0000 }, 433 + { 0x3183, 0xf000 }, 434 + { 0x3184, 0x0000 }, 435 + { 0x3185, 0xf000 }, 436 + { 0x3186, 0x0000 }, 437 + { 0x3187, 0xf000 }, 438 + { 0x3188, 0x0000 }, 439 + { 0x3189, 0xf000 }, 440 + { 0x318a, 0x0000 }, 441 + { 0x318b, 0xf000 }, 442 + { 0x318c, 0x0000 }, 443 + { 0x318d, 0xf000 }, 444 + { 0x318e, 0x0000 }, 445 + { 0x318f, 0xf000 }, 446 + { 0x3190, 0x0000 }, 447 + { 0x3191, 0xf000 }, 448 + { 0x3192, 0x0000 }, 449 + { 0x3193, 0xf000 }, 450 + { 0x3194, 0x0000 }, 451 + { 0x3195, 0xf000 }, 452 + { 0x3196, 0x0000 }, 453 + { 0x3197, 0xf000 }, 454 + { 0x3198, 0x0000 }, 455 + { 0x3199, 0xf000 }, 456 + { 0x319a, 0x0000 }, 457 + { 0x319b, 0xf000 }, 458 + { 0x319c, 0x0000 }, 459 + { 0x319d, 0xf000 }, 460 + { 0x319e, 0x0000 }, 461 + { 0x319f, 0xf000 }, 462 + { 0x31a0, 0x0000 }, 463 + { 0x31a1, 0xf000 }, 464 + { 0x31a2, 0x0000 }, 465 + { 0x31a3, 0xf000 }, 466 + { 0x31a4, 0x0000 }, 467 + { 0x31a5, 0xf000 }, 468 + { 0x31a6, 0x0000 }, 469 + { 0x31a7, 0xf000 }, 470 + { 0x31a8, 0x0000 }, 471 + { 0x31a9, 0xf000 }, 472 + { 0x31aa, 0x0000 }, 473 + { 0x31ab, 0xf000 }, 474 + { 0x31ac, 0x0000 }, 475 + { 0x31ad, 0xf000 }, 476 + { 0x31ae, 0x0000 }, 477 + { 0x31af, 0xf000 }, 478 + { 0x31b0, 0x0000 }, 479 + { 0x31b1, 0xf000 }, 480 + { 0x31b2, 0x0000 }, 481 + { 0x31b3, 0xf000 }, 482 + { 0x31b4, 0x0000 }, 483 + { 0x31b5, 0xf000 }, 484 + { 0x31b6, 0x0000 }, 485 + { 0x31b7, 0xf000 }, 486 + { 0x31b8, 0x0000 }, 487 + { 0x31b9, 0xf000 }, 488 + { 0x31ba, 0x0000 }, 489 + { 0x31bb, 0xf000 }, 490 + { 0x31bc, 0x0000 }, 491 + { 0x31bd, 0xf000 }, 492 + { 0x31be, 0x0000 }, 493 + { 0x31bf, 0xf000 }, 494 + { 0x31c0, 0x0000 }, 495 + { 0x31c1, 0xf000 }, 496 + { 0x31c2, 0x0000 }, 497 + { 0x31c3, 0xf000 }, 498 + { 0x31c4, 0x0000 }, 499 + { 0x31c5, 0xf000 }, 500 + { 0x31c6, 0x0000 }, 501 + { 0x31c7, 
0xf000 }, 502 + { 0x31c8, 0x0000 }, 503 + { 0x31c9, 0xf000 }, 504 + { 0x31ca, 0x0000 }, 505 + { 0x31cb, 0xf000 }, 506 + { 0x31cc, 0x0000 }, 507 + { 0x31cd, 0xf000 }, 508 + { 0x31ce, 0x0000 }, 509 + { 0x31cf, 0xf000 }, 510 + { 0x31d0, 0x0000 }, 511 + { 0x31d1, 0xf000 }, 512 + { 0x31d2, 0x0000 }, 513 + { 0x31d3, 0xf000 }, 514 + { 0x31d4, 0x0000 }, 515 + { 0x31d5, 0xf000 }, 516 + { 0x31d6, 0x0000 }, 517 + { 0x31d7, 0xf000 }, 518 + { 0x31d8, 0x0000 }, 519 + { 0x31d9, 0xf000 }, 520 + { 0x31da, 0x0000 }, 521 + { 0x31db, 0xf000 }, 522 + { 0x31dc, 0x0000 }, 523 + { 0x31dd, 0xf000 }, 524 + { 0x31de, 0x0000 }, 525 + { 0x31df, 0xf000 }, 526 + { 0x31e0, 0x0000 }, 527 + { 0x31e1, 0xf000 }, 528 + { 0x31e2, 0x0000 }, 529 + { 0x31e3, 0xf000 }, 530 + { 0x31e4, 0x0000 }, 531 + { 0x31e5, 0xf000 }, 532 + { 0x31e6, 0x0000 }, 533 + { 0x31e7, 0xf000 }, 534 + { 0x31e8, 0x0000 }, 535 + { 0x31e9, 0xf000 }, 536 + { 0x31ea, 0x0000 }, 537 + { 0x31eb, 0xf000 }, 538 + { 0x31ec, 0x0000 }, 539 + { 0x31ed, 0xf000 }, 540 + { 0x31ee, 0x0000 }, 541 + { 0x31ef, 0xf000 }, 542 + { 0x31f0, 0x0000 }, 543 + { 0x31f1, 0xf000 }, 544 + { 0x31f2, 0x0000 }, 545 + { 0x31f3, 0xf000 }, 546 + { 0x31f4, 0x0000 }, 547 + { 0x31f5, 0xf000 }, 548 + { 0x31f6, 0x0000 }, 549 + { 0x31f7, 0xf000 }, 550 + { 0x31f8, 0x0000 }, 551 + { 0x31f9, 0xf000 }, 552 + { 0x31fa, 0x0000 }, 553 + { 0x31fb, 0xf000 }, 554 + { 0x31fc, 0x0000 }, 555 + { 0x31fd, 0xf000 }, 556 + { 0x31fe, 0x0000 }, 557 + { 0x31ff, 0xf000 }, 558 + { 0x024d, 0xff50 }, 559 + { 0x0252, 0xff50 }, 560 + { 0x0259, 0x0112 }, 561 + { 0x025e, 0x0112 }, 562 + }; 563 + 564 + static int wm5102_sysclk_ev(struct snd_soc_dapm_widget *w, 565 + struct snd_kcontrol *kcontrol, int event) 566 + { 567 + struct snd_soc_codec *codec = w->codec; 568 + struct arizona *arizona = dev_get_drvdata(codec->dev); 569 + struct regmap *regmap = codec->control_data; 570 + const struct reg_default *patch = NULL; 571 + int i, patch_size; 572 + 573 + switch (arizona->rev) { 574 + case 0: 575 + patch = 
wm5102_sysclk_reva_patch; 576 + patch_size = ARRAY_SIZE(wm5102_sysclk_reva_patch); 577 + break; 578 + } 579 + 580 + switch (event) { 581 + case SND_SOC_DAPM_POST_PMU: 582 + if (patch) 583 + for (i = 0; i < patch_size; i++) 584 + regmap_write(regmap, patch[i].reg, 585 + patch[i].def); 586 + break; 587 + 588 + default: 589 + break; 590 + } 591 + 592 + return 0; 593 + } 594 + 45 595 static const struct snd_kcontrol_new wm5102_snd_controls[] = { 46 596 SOC_SINGLE("IN1 High Performance Switch", ARIZONA_IN1L_CONTROL, 47 597 ARIZONA_IN1_OSR_SHIFT, 1, 0), ··· 847 297 848 298 static const struct snd_soc_dapm_widget wm5102_dapm_widgets[] = { 849 299 SND_SOC_DAPM_SUPPLY("SYSCLK", ARIZONA_SYSTEM_CLOCK_1, ARIZONA_SYSCLK_ENA_SHIFT, 850 - 0, NULL, 0), 300 + 0, wm5102_sysclk_ev, SND_SOC_DAPM_POST_PMU), 851 301 SND_SOC_DAPM_SUPPLY("ASYNCCLK", ARIZONA_ASYNC_CLOCK_1, 852 302 ARIZONA_ASYNC_CLK_ENA_SHIFT, 0, NULL, 0), 853 303 SND_SOC_DAPM_SUPPLY("OPCLK", ARIZONA_OUTPUT_SYSTEM_CLOCK,
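The `wm5102_sysclk_ev()` handler added above replays a revision-specific register patch every time SYSCLK powers up (`SND_SOC_DAPM_POST_PMU`). The control flow can be sketched outside the kernel with a fake register map; `fake_regmap_write`, `sysclk_ev`, and the dapm_event enum below are illustrative stand-ins, not ASoC API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the kernel's struct reg_default. */
struct reg_default { uint16_t reg; uint16_t def; };

#define FAKE_REGMAP_SIZE 0x400
static uint16_t fake_regmap[FAKE_REGMAP_SIZE];

static void fake_regmap_write(uint16_t reg, uint16_t val)
{
	if (reg < FAKE_REGMAP_SIZE)
		fake_regmap[reg] = val;
}

/* Revision A needs this patch replayed after every SYSCLK power-up
 * (a short excerpt of the table in the hunk above). */
static const struct reg_default reva_patch[] = {
	{ 0x024d, 0xff50 },
	{ 0x0252, 0xff50 },
	{ 0x0259, 0x0112 },
};

enum dapm_event { DAPM_POST_PMU, DAPM_PRE_PMD };

/* Mirrors wm5102_sysclk_ev(): select a patch by silicon revision,
 * then write it out only on the post-power-up event. */
static void sysclk_ev(int rev, enum dapm_event event)
{
	const struct reg_default *patch = NULL;
	size_t i, patch_size = 0;

	if (rev == 0) {
		patch = reva_patch;
		patch_size = sizeof(reva_patch) / sizeof(reva_patch[0]);
	}

	if (event == DAPM_POST_PMU && patch)
		for (i = 0; i < patch_size; i++)
			fake_regmap_write(patch[i].reg, patch[i].def);
}
```

The same gating is why the widget declaration changes from a `NULL` event handler to `wm5102_sysclk_ev` with the `SND_SOC_DAPM_POST_PMU` flag: without the flag the handler would never be invoked at the point where the patch must be applied.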
+1 -1
sound/soc/codecs/wm8978.c
··· 782 782 wm8978->mclk_idx = -1; 783 783 f_sel = wm8978->f_mclk; 784 784 } else { 785 - if (!wm8978->f_pllout) { 785 + if (!wm8978->f_opclk) { 786 786 /* We only enter here, if OPCLK is not used */ 787 787 int ret = wm8978_configure_pll(codec); 788 788 if (ret < 0)
+1 -1
sound/soc/codecs/wm8994.c
··· 3722 3722 } while (count--); 3723 3723 3724 3724 if (count == 0) 3725 - dev_warn(codec->dev, "No impedence range reported for jack\n"); 3725 + dev_warn(codec->dev, "No impedance range reported for jack\n"); 3726 3726 3727 3727 #ifndef CONFIG_SND_SOC_WM8994_MODULE 3728 3728 trace_snd_soc_jack_irq(dev_name(codec->dev));
+13 -4
sound/soc/mxs/mxs-saif.c
··· 523 523 524 524 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 525 525 /* 526 - * write a data to saif data register to trigger 527 - * the transfer 526 + * write data to saif data register to trigger 527 + * the transfer. 528 + * For 24-bit format the 32-bit FIFO register stores 529 + * only one channel, so we need to write twice. 530 + * This is also safe for the other non 24-bit formats. 528 531 */ 532 + __raw_writel(0, saif->base + SAIF_DATA); 529 533 __raw_writel(0, saif->base + SAIF_DATA); 530 534 } else { 531 535 /* 532 - * read a data from saif data register to trigger 533 - * the receive 536 + * read data from saif data register to trigger 537 + * the receive. 538 + * For 24-bit format the 32-bit FIFO register stores 539 + * only one channel, so we need to read twice. 540 + * This is also safe for the other non 24-bit formats. 534 541 */ 542 + __raw_readl(saif->base + SAIF_DATA); 535 543 __raw_readl(saif->base + SAIF_DATA); 536 544 } 537 545 ··· 820 812 MODULE_AUTHOR("Freescale Semiconductor, Inc."); 821 813 MODULE_DESCRIPTION("MXS ASoC SAIF driver"); 822 814 MODULE_LICENSE("GPL"); 815 + MODULE_ALIAS("platform:mxs-saif");
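The mxs-saif change above primes the FIFO twice because, for 24-bit formats, each 32-bit write to SAIF_DATA carries only one channel, so a stereo frame needs two writes to start the transfer (and the extra write is harmless for narrower formats). A toy model of the playback trigger; `saif_data_write`, `saif_trigger_tx`, and the slot counter are illustrative, not driver API:

```c
#include <assert.h>

/* Toy FIFO: counts how many channel slots have been primed. */
static int fifo_slots;

static void saif_data_write(unsigned int val)
{
	(void)val;	/* only the write itself matters, not the value */
	fifo_slots++;
}

/* One write fills one channel slot in 24-bit mode, so kicking off a
 * stereo transfer takes two writes; in 16-bit mode a single 32-bit
 * write already holds both channels, and the second write is safe. */
static void saif_trigger_tx(void)
{
	saif_data_write(0);
	saif_data_write(0);
}
```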
+2
sound/soc/samsung/Kconfig
··· 207 207 select SND_SOC_WM5102 208 208 select SND_SOC_WM5110 209 209 select SND_SOC_WM9081 210 + select SND_SOC_WM0010 211 + select SND_SOC_WM1250_EV1 210 212 211 213 config SND_SOC_LOWLAND 212 214 tristate "Audio support for Wolfson Lowland"
+1 -1
sound/soc/samsung/bells.c
··· 247 247 { 248 248 .name = "Sub", 249 249 .stream_name = "Sub", 250 - .cpu_dai_name = "wm5110-aif3", 250 + .cpu_dai_name = "wm5102-aif3", 251 251 .codec_dai_name = "wm9081-hifi", 252 252 .codec_name = "wm9081.1-006c", 253 253 .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF
+3 -2
sound/soc/soc-core.c
··· 2786 2786 val = (ucontrol->value.integer.value[0] + min) & mask; 2787 2787 val = val << shift; 2788 2788 2789 - if (snd_soc_update_bits_locked(codec, reg, val_mask, val)) 2790 - return err; 2789 + err = snd_soc_update_bits_locked(codec, reg, val_mask, val); 2790 + if (err < 0) 2791 + return err; 2791 2792 2792 2793 if (snd_soc_volsw_is_stereo(mc)) { 2793 2794 val_mask = mask << rshift;
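The soc-core.c hunk above fixes two problems at once: the old code returned a stale `err` that was never assigned, and it treated a return of 1 from `snd_soc_update_bits_locked()` (meaning "register changed", i.e. success) as a failure, skipping the stereo half of the control. A reduced model of the corrected shape; `update_bits_stub` and `put_volsw` are illustrative, not the real ASoC functions:

```c
#include <assert.h>

/* Convention shared with snd_soc_update_bits_locked():
 *   1 = register changed, 0 = no change, <0 = error. */
static int update_bits_stub(int fail)
{
	return fail ? -5 /* stands in for -EIO */ : 1;
}

/* Fixed shape: capture the return value and propagate only real
 * errors, so a successful change (1) falls through to the second
 * (stereo) register write. */
static int put_volsw(int fail)
{
	int err;

	err = update_bits_stub(fail);
	if (err < 0)
		return err;

	/* ... the stereo half would be written here ... */
	return 0;
}
```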
+1 -1
sound/soc/soc-dapm.c
··· 3745 3745 { 3746 3746 struct snd_soc_codec *codec; 3747 3747 3748 - list_for_each_entry(codec, &card->codec_dev_list, list) { 3748 + list_for_each_entry(codec, &card->codec_dev_list, card_list) { 3749 3749 soc_dapm_shutdown_codec(&codec->dapm); 3750 3750 if (codec->dapm.bias_level == SND_SOC_BIAS_STANDBY) 3751 3751 snd_soc_dapm_set_bias_level(&codec->dapm,
+3 -3
sound/usb/card.c
··· 559 559 return; 560 560 561 561 card = chip->card; 562 - mutex_lock(&register_mutex); 563 562 down_write(&chip->shutdown_rwsem); 564 563 chip->shutdown = 1; 564 + up_write(&chip->shutdown_rwsem); 565 + 566 + mutex_lock(&register_mutex); 565 567 chip->num_interfaces--; 566 568 if (chip->num_interfaces <= 0) { 567 569 snd_card_disconnect(card); ··· 584 582 snd_usb_mixer_disconnect(p); 585 583 } 586 584 usb_chip[chip->index] = NULL; 587 - up_write(&chip->shutdown_rwsem); 588 585 mutex_unlock(&register_mutex); 589 586 snd_card_free_when_closed(card); 590 587 } else { 591 - up_write(&chip->shutdown_rwsem); 592 588 mutex_unlock(&register_mutex); 593 589 } 594 590 }
+13
sound/usb/endpoint.c
··· 35 35 36 36 #define EP_FLAG_ACTIVATED 0 37 37 #define EP_FLAG_RUNNING 1 38 + #define EP_FLAG_STOPPING 2 38 39 39 40 /* 40 41 * snd_usb_endpoint is a model that abstracts everything related to an ··· 503 502 if (alive) 504 503 snd_printk(KERN_ERR "timeout: still %d active urbs on EP #%x\n", 505 504 alive, ep->ep_num); 505 + clear_bit(EP_FLAG_STOPPING, &ep->flags); 506 506 507 507 return 0; 508 + } 509 + 510 + /* sync the pending stop operation; 511 + * this function itself doesn't trigger the stop operation 512 + */ 513 + void snd_usb_endpoint_sync_pending_stop(struct snd_usb_endpoint *ep) 514 + { 515 + if (ep && test_bit(EP_FLAG_STOPPING, &ep->flags)) 516 + wait_clear_urbs(ep); 508 517 } 509 518 510 519 /* ··· 929 918 930 919 if (wait) 931 920 wait_clear_urbs(ep); 921 + else 922 + set_bit(EP_FLAG_STOPPING, &ep->flags); 932 923 } 933 924 } 934 925
+1
sound/usb/endpoint.h
··· 19 19 int snd_usb_endpoint_start(struct snd_usb_endpoint *ep, int can_sleep); 20 20 void snd_usb_endpoint_stop(struct snd_usb_endpoint *ep, 21 21 int force, int can_sleep, int wait); 22 + void snd_usb_endpoint_sync_pending_stop(struct snd_usb_endpoint *ep); 22 23 int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep); 23 24 int snd_usb_endpoint_deactivate(struct snd_usb_endpoint *ep); 24 25 void snd_usb_endpoint_free(struct list_head *head);
+3
sound/usb/pcm.c
··· 568 568 goto unlock; 569 569 } 570 570 571 + snd_usb_endpoint_sync_pending_stop(subs->sync_endpoint); 572 + snd_usb_endpoint_sync_pending_stop(subs->data_endpoint); 573 + 571 574 ret = set_format(subs, subs->cur_audiofmt); 572 575 if (ret < 0) 573 576 goto unlock;
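The endpoint.c/endpoint.h/pcm.c changes above introduce an `EP_FLAG_STOPPING` mark: a non-blocking stop leaves the flag set, and `snd_usb_endpoint_sync_pending_stop()` lets a later prepare call wait for the URBs to drain before reconfiguring the stream. The flag protocol can be sketched with plain bit masks (the real code uses `set_bit`/`test_bit` on bit indices, and `wait_clear_urbs()` actually polls hardware; everything below is a simplified model):

```c
#include <assert.h>

#define EP_FLAG_RUNNING  (1u << 1)
#define EP_FLAG_STOPPING (1u << 2)

struct ep { unsigned int flags; int active_urbs; };

/* Model of wait_clear_urbs(): block until the URBs drain, then drop
 * the STOPPING mark so a later sync is a no-op. */
static void wait_clear_urbs(struct ep *ep)
{
	ep->active_urbs = 0;
	ep->flags &= ~EP_FLAG_STOPPING;
}

/* An async stop records that URBs are still in flight instead of
 * waiting in place. */
static void ep_stop(struct ep *ep, int wait)
{
	ep->flags &= ~EP_FLAG_RUNNING;
	if (wait)
		wait_clear_urbs(ep);
	else
		ep->flags |= EP_FLAG_STOPPING;
}

/* Mirrors snd_usb_endpoint_sync_pending_stop(): waits only when a
 * non-blocking stop is still pending. */
static void sync_pending_stop(struct ep *ep)
{
	if (ep && (ep->flags & EP_FLAG_STOPPING))
		wait_clear_urbs(ep);
}
```

This is why the pcm.c hunk calls the sync helper for both the sync and data endpoints before `set_format()`: reprogramming an endpoint whose old URBs are still completing would race with the hardware.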
+19 -9
tools/power/x86/turbostat/turbostat.c
··· 206 206 retval = pread(fd, msr, sizeof *msr, offset); 207 207 close(fd); 208 208 209 - if (retval != sizeof *msr) 209 + if (retval != sizeof *msr) { 210 + fprintf(stderr, "%s offset 0x%zx read failed\n", pathname, offset); 210 211 return -1; 212 + } 211 213 212 214 return 0; 213 215 } ··· 1103 1101 1104 1102 restart: 1105 1103 retval = for_all_cpus(get_counters, EVEN_COUNTERS); 1106 - if (retval) { 1104 + if (retval < -1) { 1105 + exit(retval); 1106 + } else if (retval == -1) { 1107 1107 re_initialize(); 1108 1108 goto restart; 1109 1109 } ··· 1118 1114 } 1119 1115 sleep(interval_sec); 1120 1116 retval = for_all_cpus(get_counters, ODD_COUNTERS); 1121 - if (retval) { 1117 + if (retval < -1) { 1118 + exit(retval); 1119 + } else if (retval == -1) { 1122 1120 re_initialize(); 1123 1121 goto restart; 1124 1122 } ··· 1132 1126 flush_stdout(); 1133 1127 sleep(interval_sec); 1134 1128 retval = for_all_cpus(get_counters, EVEN_COUNTERS); 1135 - if (retval) { 1129 + if (retval < -1) { 1130 + exit(retval); 1131 + } else if (retval == -1) { 1136 1132 re_initialize(); 1137 1133 goto restart; 1138 1134 } ··· 1553 1545 int fork_it(char **argv) 1554 1546 { 1555 1547 pid_t child_pid; 1548 + int status; 1556 1549 1557 - for_all_cpus(get_counters, EVEN_COUNTERS); 1550 + status = for_all_cpus(get_counters, EVEN_COUNTERS); 1551 + if (status) 1552 + exit(status); 1558 1553 /* clear affinity side-effect of get_counters() */ 1559 1554 sched_setaffinity(0, cpu_present_setsize, cpu_present_set); 1560 1555 gettimeofday(&tv_even, (struct timezone *)NULL); ··· 1567 1556 /* child */ 1568 1557 execvp(argv[0], argv); 1569 1558 } else { 1570 - int status; 1571 1559 1572 1560 /* parent */ 1573 1561 if (child_pid == -1) { ··· 1578 1568 signal(SIGQUIT, SIG_IGN); 1579 1569 if (waitpid(child_pid, &status, 0) == -1) { 1580 1570 perror("wait"); 1581 - exit(1); 1571 + exit(status); 1582 1572 } 1583 1573 } 1584 1574 /* ··· 1595 1585 1596 1586 fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + 
tv_delta.tv_usec/1000000.0); 1597 1587 1598 - return 0; 1588 + return status; 1599 1589 } 1600 1590 1601 1591 void cmdline(int argc, char **argv) ··· 1604 1594 1605 1595 progname = argv[0]; 1606 1596 1607 - while ((opt = getopt(argc, argv, "+pPSvisc:sC:m:M:")) != -1) { 1597 + while ((opt = getopt(argc, argv, "+pPSvi:sc:sC:m:M:")) != -1) { 1608 1598 switch (opt) { 1609 1599 case 'p': 1610 1600 show_core_only++;
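The turbostat hunks above split `for_all_cpus()` failures into two classes: -1 means the CPU topology changed and the counters should be re-initialized before retrying, while anything below -1 is fatal and is passed to `exit()`. That dispatch, repeated in each of the three loop sites, can be modelled as follows (`handle_retval` and the action enum are illustrative, not turbostat code):

```c
#include <assert.h>

enum action { ACT_CONTINUE, ACT_REINITIALIZE, ACT_FATAL };

/* Return-value convention used by the patched measurement loop:
 *    0  -> counters read fine, keep going
 *   -1  -> a CPU appeared/disappeared: rebuild topology, restart
 *  < -1 -> unrecoverable (e.g. an MSR read failed): exit(retval) */
static enum action handle_retval(int retval)
{
	if (retval < -1)
		return ACT_FATAL;
	if (retval == -1)
		return ACT_REINITIALIZE;
	return ACT_CONTINUE;
}
```

Centralizing the convention this way is also why `fork_it()` now checks the status of its initial `get_counters` pass instead of silently ignoring it.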
+1 -1
tools/testing/selftests/Makefile
··· 1 - TARGETS = breakpoints kcmp mqueue vm cpu-hotplug memory-hotplug epoll 1 + TARGETS = breakpoints kcmp mqueue vm cpu-hotplug memory-hotplug 2 2 3 3 all: 4 4 for TARGET in $(TARGETS); do \
-11
tools/testing/selftests/epoll/Makefile
··· 1 - # Makefile for epoll selftests 2 - 3 - all: test_epoll 4 - %: %.c 5 - gcc -pthread -g -o $@ $^ 6 - 7 - run_tests: all 8 - ./test_epoll 9 - 10 - clean: 11 - $(RM) test_epoll
-344
tools/testing/selftests/epoll/test_epoll.c
··· 1 - /* 2 - * tools/testing/selftests/epoll/test_epoll.c 3 - * 4 - * Copyright 2012 Adobe Systems Incorporated 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * Paton J. Lewis <palewis@adobe.com> 12 - * 13 - */ 14 - 15 - #include <errno.h> 16 - #include <fcntl.h> 17 - #include <pthread.h> 18 - #include <stdio.h> 19 - #include <stdlib.h> 20 - #include <unistd.h> 21 - #include <sys/epoll.h> 22 - #include <sys/socket.h> 23 - 24 - /* 25 - * A pointer to an epoll_item_private structure will be stored in the epoll 26 - * item's event structure so that we can get access to the epoll_item_private 27 - * data after calling epoll_wait: 28 - */ 29 - struct epoll_item_private { 30 - int index; /* Position of this struct within the epoll_items array. */ 31 - int fd; 32 - uint32_t events; 33 - pthread_mutex_t mutex; /* Guards the following variables... */ 34 - int stop; 35 - int status; /* Stores any error encountered while handling item. */ 36 - /* The following variable allows us to test whether we have encountered 37 - a problem while attempting to cancel and delete the associated 38 - event. When the test program exits, 'deleted' should be exactly 39 - one. If it is greater than one, then the failed test reflects a real 40 - world situation where we would have tried to access the epoll item's 41 - private data after deleting it: */ 42 - int deleted; 43 - }; 44 - 45 - struct epoll_item_private *epoll_items; 46 - 47 - /* 48 - * Delete the specified item from the epoll set. 
In a real-world secneario this 49 - * is where we would free the associated data structure, but in this testing 50 - * environment we retain the structure so that we can test for double-deletion: 51 - */ 52 - void delete_item(int index) 53 - { 54 - __sync_fetch_and_add(&epoll_items[index].deleted, 1); 55 - } 56 - 57 - /* 58 - * A pointer to a read_thread_data structure will be passed as the argument to 59 - * each read thread: 60 - */ 61 - struct read_thread_data { 62 - int stop; 63 - int status; /* Indicates any error encountered by the read thread. */ 64 - int epoll_set; 65 - }; 66 - 67 - /* 68 - * The function executed by the read threads: 69 - */ 70 - void *read_thread_function(void *function_data) 71 - { 72 - struct read_thread_data *thread_data = 73 - (struct read_thread_data *)function_data; 74 - struct epoll_event event_data; 75 - struct epoll_item_private *item_data; 76 - char socket_data; 77 - 78 - /* Handle events until we encounter an error or this thread's 'stop' 79 - condition is set: */ 80 - while (1) { 81 - int result = epoll_wait(thread_data->epoll_set, 82 - &event_data, 83 - 1, /* Number of desired events */ 84 - 1000); /* Timeout in ms */ 85 - if (result < 0) { 86 - /* Breakpoints signal all threads. 
Ignore that while 87 - debugging: */ 88 - if (errno == EINTR) 89 - continue; 90 - thread_data->status = errno; 91 - return 0; 92 - } else if (thread_data->stop) 93 - return 0; 94 - else if (result == 0) /* Timeout */ 95 - continue; 96 - 97 - /* We need the mutex here because checking for the stop 98 - condition and re-enabling the epoll item need to be done 99 - together as one atomic operation when EPOLL_CTL_DISABLE is 100 - available: */ 101 - item_data = (struct epoll_item_private *)event_data.data.ptr; 102 - pthread_mutex_lock(&item_data->mutex); 103 - 104 - /* Remove the item from the epoll set if we want to stop 105 - handling that event: */ 106 - if (item_data->stop) 107 - delete_item(item_data->index); 108 - else { 109 - /* Clear the data that was written to the other end of 110 - our non-blocking socket: */ 111 - do { 112 - if (read(item_data->fd, &socket_data, 1) < 1) { 113 - if ((errno == EAGAIN) || 114 - (errno == EWOULDBLOCK)) 115 - break; 116 - else 117 - goto error_unlock; 118 - } 119 - } while (item_data->events & EPOLLET); 120 - 121 - /* The item was one-shot, so re-enable it: */ 122 - event_data.events = item_data->events; 123 - if (epoll_ctl(thread_data->epoll_set, 124 - EPOLL_CTL_MOD, 125 - item_data->fd, 126 - &event_data) < 0) 127 - goto error_unlock; 128 - } 129 - 130 - pthread_mutex_unlock(&item_data->mutex); 131 - } 132 - 133 - error_unlock: 134 - thread_data->status = item_data->status = errno; 135 - pthread_mutex_unlock(&item_data->mutex); 136 - return 0; 137 - } 138 - 139 - /* 140 - * A pointer to a write_thread_data structure will be passed as the argument to 141 - * the write thread: 142 - */ 143 - struct write_thread_data { 144 - int stop; 145 - int status; /* Indicates any error encountered by the write thread. */ 146 - int n_fds; 147 - int *fds; 148 - }; 149 - 150 - /* 151 - * The function executed by the write thread. It writes a single byte to each 152 - * socket in turn until the stop condition for this thread is set. 
If writing to 153 - * a socket would block (i.e. errno was EAGAIN), we leave that socket alone for 154 - * the moment and just move on to the next socket in the list. We don't care 155 - * about the order in which we deliver events to the epoll set. In fact we don't 156 - * care about the data we're writing to the pipes at all; we just want to 157 - * trigger epoll events: 158 - */ 159 - void *write_thread_function(void *function_data) 160 - { 161 - const char data = 'X'; 162 - int index; 163 - struct write_thread_data *thread_data = 164 - (struct write_thread_data *)function_data; 165 - while (!thread_data->stop) 166 - for (index = 0; 167 - !thread_data->stop && (index < thread_data->n_fds); 168 - ++index) 169 - if ((write(thread_data->fds[index], &data, 1) < 1) && 170 - (errno != EAGAIN) && 171 - (errno != EWOULDBLOCK)) { 172 - thread_data->status = errno; 173 - return; 174 - } 175 - } 176 - 177 - /* 178 - * Arguments are currently ignored: 179 - */ 180 - int main(int argc, char **argv) 181 - { 182 - const int n_read_threads = 100; 183 - const int n_epoll_items = 500; 184 - int index; 185 - int epoll_set = epoll_create1(0); 186 - struct write_thread_data write_thread_data = { 187 - 0, 0, n_epoll_items, malloc(n_epoll_items * sizeof(int)) 188 - }; 189 - struct read_thread_data *read_thread_data = 190 - malloc(n_read_threads * sizeof(struct read_thread_data)); 191 - pthread_t *read_threads = malloc(n_read_threads * sizeof(pthread_t)); 192 - pthread_t write_thread; 193 - 194 - printf("-----------------\n"); 195 - printf("Runing test_epoll\n"); 196 - printf("-----------------\n"); 197 - 198 - epoll_items = malloc(n_epoll_items * sizeof(struct epoll_item_private)); 199 - 200 - if (epoll_set < 0 || epoll_items == 0 || write_thread_data.fds == 0 || 201 - read_thread_data == 0 || read_threads == 0) 202 - goto error; 203 - 204 - if (sysconf(_SC_NPROCESSORS_ONLN) < 2) { 205 - printf("Error: please run this test on a multi-core system.\n"); 206 - goto error; 207 - } 208 - 
209 - /* Create the socket pairs and epoll items: */ 210 - for (index = 0; index < n_epoll_items; ++index) { 211 - int socket_pair[2]; 212 - struct epoll_event event_data; 213 - if (socketpair(AF_UNIX, 214 - SOCK_STREAM | SOCK_NONBLOCK, 215 - 0, 216 - socket_pair) < 0) 217 - goto error; 218 - write_thread_data.fds[index] = socket_pair[0]; 219 - epoll_items[index].index = index; 220 - epoll_items[index].fd = socket_pair[1]; 221 - if (pthread_mutex_init(&epoll_items[index].mutex, NULL) != 0) 222 - goto error; 223 - /* We always use EPOLLONESHOT because this test is currently 224 - structured to demonstrate the need for EPOLL_CTL_DISABLE, 225 - which only produces useful information in the EPOLLONESHOT 226 - case (without EPOLLONESHOT, calling epoll_ctl with 227 - EPOLL_CTL_DISABLE will never return EBUSY). If support for 228 - testing events without EPOLLONESHOT is desired, it should 229 - probably be implemented in a separate unit test. */ 230 - epoll_items[index].events = EPOLLIN | EPOLLONESHOT; 231 - if (index < n_epoll_items / 2) 232 - epoll_items[index].events |= EPOLLET; 233 - epoll_items[index].stop = 0; 234 - epoll_items[index].status = 0; 235 - epoll_items[index].deleted = 0; 236 - event_data.events = epoll_items[index].events; 237 - event_data.data.ptr = &epoll_items[index]; 238 - if (epoll_ctl(epoll_set, 239 - EPOLL_CTL_ADD, 240 - epoll_items[index].fd, 241 - &event_data) < 0) 242 - goto error; 243 - } 244 - 245 - /* Create and start the read threads: */ 246 - for (index = 0; index < n_read_threads; ++index) { 247 - read_thread_data[index].stop = 0; 248 - read_thread_data[index].status = 0; 249 - read_thread_data[index].epoll_set = epoll_set; 250 - if (pthread_create(&read_threads[index], 251 - NULL, 252 - read_thread_function, 253 - &read_thread_data[index]) != 0) 254 - goto error; 255 - } 256 - 257 - if (pthread_create(&write_thread, 258 - NULL, 259 - write_thread_function, 260 - &write_thread_data) != 0) 261 - goto error; 262 - 263 - /* Cancel all event 
pollers: */ 264 - #ifdef EPOLL_CTL_DISABLE 265 - for (index = 0; index < n_epoll_items; ++index) { 266 - pthread_mutex_lock(&epoll_items[index].mutex); 267 - ++epoll_items[index].stop; 268 - if (epoll_ctl(epoll_set, 269 - EPOLL_CTL_DISABLE, 270 - epoll_items[index].fd, 271 - NULL) == 0) 272 - delete_item(index); 273 - else if (errno != EBUSY) { 274 - pthread_mutex_unlock(&epoll_items[index].mutex); 275 - goto error; 276 - } 277 - /* EBUSY means events were being handled; allow the other thread 278 - to delete the item. */ 279 - pthread_mutex_unlock(&epoll_items[index].mutex); 280 - } 281 - #else 282 - for (index = 0; index < n_epoll_items; ++index) { 283 - pthread_mutex_lock(&epoll_items[index].mutex); 284 - ++epoll_items[index].stop; 285 - pthread_mutex_unlock(&epoll_items[index].mutex); 286 - /* Wait in case a thread running read_thread_function is 287 - currently executing code between epoll_wait and 288 - pthread_mutex_lock with this item. Note that a longer delay 289 - would make double-deletion less likely (at the expense of 290 - performance), but there is no guarantee that any delay would 291 - ever be sufficient. Note also that we delete all event 292 - pollers at once for testing purposes, but in a real-world 293 - environment we are likely to want to be able to cancel event 294 - pollers at arbitrary times. Therefore we can't improve this 295 - situation by just splitting this loop into two loops 296 - (i.e. signal 'stop' for all items, sleep, and then delete all 297 - items). 
We also can't fix the problem via EPOLL_CTL_DEL 298 - because that command can't prevent the case where some other 299 - thread is executing read_thread_function within the region 300 - mentioned above: */ 301 - usleep(1); 302 - pthread_mutex_lock(&epoll_items[index].mutex); 303 - if (!epoll_items[index].deleted) 304 - delete_item(index); 305 - pthread_mutex_unlock(&epoll_items[index].mutex); 306 - } 307 - #endif 308 - 309 - /* Shut down the read threads: */ 310 - for (index = 0; index < n_read_threads; ++index) 311 - __sync_fetch_and_add(&read_thread_data[index].stop, 1); 312 - for (index = 0; index < n_read_threads; ++index) { 313 - if (pthread_join(read_threads[index], NULL) != 0) 314 - goto error; 315 - if (read_thread_data[index].status) 316 - goto error; 317 - } 318 - 319 - /* Shut down the write thread: */ 320 - __sync_fetch_and_add(&write_thread_data.stop, 1); 321 - if ((pthread_join(write_thread, NULL) != 0) || write_thread_data.status) 322 - goto error; 323 - 324 - /* Check for final error conditions: */ 325 - for (index = 0; index < n_epoll_items; ++index) { 326 - if (epoll_items[index].status != 0) 327 - goto error; 328 - if (pthread_mutex_destroy(&epoll_items[index].mutex) < 0) 329 - goto error; 330 - } 331 - for (index = 0; index < n_epoll_items; ++index) 332 - if (epoll_items[index].deleted != 1) { 333 - printf("Error: item data deleted %1d times.\n", 334 - epoll_items[index].deleted); 335 - goto error; 336 - } 337 - 338 - printf("[PASS]\n"); 339 - return 0; 340 - 341 - error: 342 - printf("[FAIL]\n"); 343 - return errno; 344 - }