Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: Merge tag 'v3.4-rc3' into for-3.5

Linux 3.4-rc3 contains a number of Tegra changes which conflict
annoyingly with the new development that's going on for Tegra, so merge
it up to resolve those conflicts.

Conflicts:
sound/soc/soc-core.c
sound/soc/tegra/tegra_i2s.c
sound/soc/tegra/tegra_spdif.c

+3380 -2638
+7 -7
Documentation/ABI/stable/sysfs-driver-usb-usbtmc
···
-What:		/sys/bus/usb/drivers/usbtmc/devices/*/interface_capabilities
-What:		/sys/bus/usb/drivers/usbtmc/devices/*/device_capabilities
+What:		/sys/bus/usb/drivers/usbtmc/*/interface_capabilities
+What:		/sys/bus/usb/drivers/usbtmc/*/device_capabilities
 Date:		August 2008
 Contact:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 Description:
···
 		The files are read only.


-What:		/sys/bus/usb/drivers/usbtmc/devices/*/usb488_interface_capabilities
-What:		/sys/bus/usb/drivers/usbtmc/devices/*/usb488_device_capabilities
+What:		/sys/bus/usb/drivers/usbtmc/*/usb488_interface_capabilities
+What:		/sys/bus/usb/drivers/usbtmc/*/usb488_device_capabilities
 Date:		August 2008
 Contact:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 Description:
···
 		The files are read only.


-What:		/sys/bus/usb/drivers/usbtmc/devices/*/TermChar
+What:		/sys/bus/usb/drivers/usbtmc/*/TermChar
 Date:		August 2008
 Contact:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 Description:
···
 		sent to the device or not.


-What:		/sys/bus/usb/drivers/usbtmc/devices/*/TermCharEnabled
+What:		/sys/bus/usb/drivers/usbtmc/*/TermCharEnabled
 Date:		August 2008
 Contact:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 Description:
···
 		published by the USB-IF.


-What:		/sys/bus/usb/drivers/usbtmc/devices/*/auto_abort
+What:		/sys/bus/usb/drivers/usbtmc/*/auto_abort
 Date:		August 2008
 Contact:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 Description:
+18
Documentation/ABI/testing/sysfs-block-rssd
···
+What:		/sys/block/rssd*/registers
+Date:		March 2012
+KernelVersion:	3.3
+Contact:	Asai Thambi S P <asamymuthupa@micron.com>
+Description:	This is a read-only file. Dumps below driver information and
+		hardware registers.
+		    - S ACTive
+		    - Command Issue
+		    - Allocated
+		    - Completed
+		    - PORT IRQ STAT
+		    - HOST IRQ STAT
+
+What:		/sys/block/rssd*/status
+Date:		April 2012
+KernelVersion:	3.4
+Contact:	Asai Thambi S P <asamymuthupa@micron.com>
+Description:	This is a read-only file. Indicates the status of the device.
+8
Documentation/ABI/testing/sysfs-cfq-target-latency
···
+What:		/sys/block/<device>/iosched/target_latency
+Date:		March 2012
+contact:	Tao Ma <boyu.mt@taobao.com>
+Description:
+		The /sys/block/<device>/iosched/target_latency only exists
+		when the user sets cfq to /sys/block/<device>/scheduler.
+		It contains an estimated latency time for the cfq. cfq will
+		use it to calculate the time slice used for every task.
+2 -3
Documentation/cgroups/memory.txt
···

 Features:
  - accounting anonymous pages, file caches, swap caches usage and limiting them.
- - private LRU and reclaim routine. (system's global LRU and private LRU
-   work independently from each other)
+ - pages are linked to per-memcg LRU exclusively, and there is no global LRU.
  - optionally, memory+swap usage can be accounted and limited.
  - hierarchical accounting
  - soft limit
···
 2.2.1 Accounting details

 All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
-Some pages which are never reclaimable and will not be on the global LRU
+Some pages which are never reclaimable and will not be on the LRU
 are not accounted. We just account pages under usual VM management.

 RSS pages are accounted at page_fault unless they've already been accounted
+8
Documentation/feature-removal-schedule.txt
···
	of ASLR. It was only ever intended for debugging, so it should be
	removed.
 Who:	Kees Cook <keescook@chromium.org>
+
+----------------------------
+
+What:	setitimer accepts user NULL pointer (value)
+When:	3.6
+Why:	setitimer is not returning -EFAULT if user pointer is NULL. This
+	violates the spec.
+Who:	Sasikantha Babu <sasikanth.v19@gmail.com>
+1 -1
Documentation/filesystems/vfs.txt
···
 struct file_system_type {
	const char *name;
	int fs_flags;
-	struct dentry (*mount) (struct file_system_type *, int,
+	struct dentry *(*mount) (struct file_system_type *, int,
			const char *, void *);
	void (*kill_sb) (struct super_block *);
	struct module *owner;
+3 -1
Documentation/sound/alsa/HD-Audio-Models.txt
···

 ALC882/883/885/888/889
 ======================
-  N/A
+  acer-aspire-4930g	Acer Aspire 4930G/5930G/6530G/6930G/7730G
+  acer-aspire-8930g	Acer Aspire 8330G/6935G
+  acer-aspire		Acer Aspire others

 ALC861/660
 ==========
+22
Documentation/usb/URB.txt
···
 they will get a -EPERM error.  Thus you can be sure that when
 usb_kill_urb() returns, the URB is totally idle.

+There is a lifetime issue to consider.  An URB may complete at any
+time, and the completion handler may free the URB.  If this happens
+while usb_unlink_urb or usb_kill_urb is running, it will cause a
+memory-access violation.  The driver is responsible for avoiding this,
+which often means some sort of lock will be needed to prevent the URB
+from being deallocated while it is still in use.
+
+On the other hand, since usb_unlink_urb may end up calling the
+completion handler, the handler must not take any lock that is held
+when usb_unlink_urb is invoked.  The general solution to this problem
+is to increment the URB's reference count while holding the lock, then
+drop the lock and call usb_unlink_urb or usb_kill_urb, and then
+decrement the URB's reference count.  You increment the reference
+count by calling
+
+	struct urb *usb_get_urb(struct urb *urb)
+
+(ignore the return value; it is the same as the argument) and
+decrement the reference count by calling usb_free_urb.  Of course,
+none of this is necessary if there's no danger of the URB being freed
+by the completion handler.
+

 1.7. What about the completion handler?
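The reference-counting discipline this URB.txt addition describes can be sketched in plain userspace C. Everything below is illustrative, not the kernel API: the hypothetical `fake_urb` type stands in for `struct urb`, and `fake_urb_get`/`fake_urb_put` model the usb_get_urb/usb_free_urb pairing the text names.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for a kernel URB: just a refcount. */
struct fake_urb {
	int refcount;
};

struct fake_urb *fake_urb_alloc(void)
{
	struct fake_urb *u = malloc(sizeof(*u));
	u->refcount = 1;
	return u;
}

/* Like usb_get_urb(): take an extra reference, return the argument. */
struct fake_urb *fake_urb_get(struct fake_urb *u)
{
	u->refcount++;
	return u;
}

/* Like usb_free_urb(): drop a reference, free on the last one.
 * Returns 1 if the object was actually freed (for demonstration only). */
int fake_urb_put(struct fake_urb *u)
{
	if (--u->refcount == 0) {
		free(u);
		return 1;
	}
	return 0;
}
```

The cancelling path takes an extra reference under its lock, drops the lock, cancels, and then puts its reference, so a concurrent completion handler's put cannot free the object mid-cancel.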
+3 -3
Documentation/usb/usbmon.txt
···
 d5ea89a0 3575914555 S Ci:1:001:0 s a3 00 0000 0003 0004 4 <
 d5ea89a0 3575914560 C Ci:1:001:0 0 4 = 01050000

-An output bulk transfer to send a SCSI command 0x5E in a 31-byte Bulk wrapper
-to a storage device at address 5:
+An output bulk transfer to send a SCSI command 0x28 (READ_10) in a 31-byte
+Bulk wrapper to a storage device at address 5:

-dd65f0e8 4128379752 S Bo:1:005:2 -115 31 = 55534243 5e000000 00000000 00000600 00000000 00000000 00000000 000000
+dd65f0e8 4128379752 S Bo:1:005:2 -115 31 = 55534243 ad000000 00800000 80010a28 20000000 20000040 00000000 000000
 dd65f0e8 4128379808 C Bo:1:005:2 0 31 >

 * Raw binary format and API
+13 -8
MAINTAINERS
···
 M:	Johan Hedberg <johan.hedberg@gmail.com>
 L:	linux-bluetooth@vger.kernel.org
 W:	http://www.bluez.org/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/padovan/bluetooth.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jh/bluetooth.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
 S:	Maintained
 F:	drivers/bluetooth/
···
 M:	Johan Hedberg <johan.hedberg@gmail.com>
 L:	linux-bluetooth@vger.kernel.org
 W:	http://www.bluez.org/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/padovan/bluetooth.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jh/bluetooth.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
 S:	Maintained
 F:	net/bluetooth/
 F:	include/net/bluetooth/
···
 F:	drivers/net/ethernet/myricom/myri10ge/

 NATSEMI ETHERNET DRIVER (DP8381x)
-M:	Tim Hockin <thockin@hockin.org>
-S:	Maintained
+S:	Orphan
 F:	drivers/net/ethernet/natsemi/natsemi.c

 NATIVE INSTRUMENTS USB SOUND INTERFACE DRIVER
···
 F:	arch/arm/mach-omap2/clockdomain44xx.c

 OMAP AUDIO SUPPORT
+M:	Peter Ujfalusi <peter.ujfalusi@ti.com>
 M:	Jarkko Nikula <jarkko.nikula@bitmer.com>
 L:	alsa-devel@alsa-project.org (subscribers-only)
 L:	linux-omap@vger.kernel.org
···
 F:	drivers/i2c/busses/i2c-pca-*
 F:	include/linux/i2c-algo-pca.h
 F:	include/linux/i2c-pca-platform.h
+
+PCDP - PRIMARY CONSOLE AND DEBUG PORT
+M:	Khalid Aziz <khalid.aziz@hp.com>
+S:	Maintained
+F:	drivers/firmware/pcdp.*

 PCI ERROR RECOVERY
 M:	Linas Vepstas <linasvepstas@gmail.com>
···
 F:	drivers/staging/olpc_dcon/

 STAGING - OZMO DEVICES USB OVER WIFI DRIVER
+M:	Rupesh Gujare <rgujare@ozmodevices.com>
 M:	Chris Kelly <ckelly@ozmodevices.com>
 S:	Maintained
 F:	drivers/staging/ozwpan/
···

 WOLFSON MICROELECTRONICS DRIVERS
 M:	Mark Brown <broonie@opensource.wolfsonmicro.com>
-M:	Ian Lartey <ian@opensource.wolfsonmicro.com>
-M:	Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
+L:	patches@opensource.wolfsonmicro.com
 T:	git git://opensource.wolfsonmicro.com/linux-2.6-asoc
 T:	git git://opensource.wolfsonmicro.com/linux-2.6-audioplus
 W:	http://opensource.wolfsonmicro.com/content/linux-drivers-wolfson-devices
+1 -1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Saber-toothed Squirrel

 # *DOCUMENTATION*
+1 -67
arch/alpha/include/asm/atomic.h
···
 #include <linux/types.h>
 #include <asm/barrier.h>
+#include <asm/cmpxchg.h>

 /*
  * Atomic operations that C can't guarantee us.  Useful for
···
	smp_mb();
	return result;
 }
-
-/*
- * Atomic exchange routines.
- */
-
-#define __ASM__MB
-#define ____xchg(type, args...)		__xchg ## type ## _local(args)
-#define ____cmpxchg(type, args...)	__cmpxchg ## type ## _local(args)
-#include <asm/xchg.h>
-
-#define xchg_local(ptr,x)						\
-  ({									\
-     __typeof__(*(ptr)) _x_ = (x);					\
-     (__typeof__(*(ptr))) __xchg_local((ptr), (unsigned long)_x_,	\
-				       sizeof(*(ptr)));			\
-  })
-
-#define cmpxchg_local(ptr, o, n)					\
-  ({									\
-     __typeof__(*(ptr)) _o_ = (o);					\
-     __typeof__(*(ptr)) _n_ = (n);					\
-     (__typeof__(*(ptr))) __cmpxchg_local((ptr), (unsigned long)_o_,	\
-					  (unsigned long)_n_,		\
-					  sizeof(*(ptr)));		\
-  })
-
-#define cmpxchg64_local(ptr, o, n)					\
-  ({									\
-	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg_local((ptr), (o), (n));					\
-  })
-
-#ifdef CONFIG_SMP
-#undef __ASM__MB
-#define __ASM__MB	"\tmb\n"
-#endif
-#undef ____xchg
-#undef ____cmpxchg
-#define ____xchg(type, args...)		__xchg ##type(args)
-#define ____cmpxchg(type, args...)	__cmpxchg ##type(args)
-#include <asm/xchg.h>
-
-#define xchg(ptr,x)							\
-  ({									\
-     __typeof__(*(ptr)) _x_ = (x);					\
-     (__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_,		\
-				 sizeof(*(ptr)));			\
-  })
-
-#define cmpxchg(ptr, o, n)						\
-  ({									\
-     __typeof__(*(ptr)) _o_ = (o);					\
-     __typeof__(*(ptr)) _n_ = (n);					\
-     (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,		\
-				    (unsigned long)_n_, sizeof(*(ptr)));\
-  })
-
-#define cmpxchg64(ptr, o, n)						\
-  ({									\
-	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg((ptr), (o), (n));					\
-  })
-
-#undef __ASM__MB
-#undef ____cmpxchg
-
-#define __HAVE_ARCH_CMPXCHG 1

 #define atomic64_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
+71
arch/alpha/include/asm/cmpxchg.h
···
+#ifndef _ALPHA_CMPXCHG_H
+#define _ALPHA_CMPXCHG_H
+
+/*
+ * Atomic exchange routines.
+ */
+
+#define __ASM__MB
+#define ____xchg(type, args...)		__xchg ## type ## _local(args)
+#define ____cmpxchg(type, args...)	__cmpxchg ## type ## _local(args)
+#include <asm/xchg.h>
+
+#define xchg_local(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_local((ptr), (unsigned long)_x_,	\
+					  sizeof(*(ptr)));		\
+})
+
+#define cmpxchg_local(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_local((ptr), (unsigned long)_o_,	\
+					     (unsigned long)_n_,	\
+					     sizeof(*(ptr)));		\
+})
+
+#define cmpxchg64_local(ptr, o, n)					\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	cmpxchg_local((ptr), (o), (n));					\
+})
+
+#ifdef CONFIG_SMP
+#undef __ASM__MB
+#define __ASM__MB	"\tmb\n"
+#endif
+#undef ____xchg
+#undef ____cmpxchg
+#define ____xchg(type, args...)		__xchg ##type(args)
+#define ____cmpxchg(type, args...)	__cmpxchg ##type(args)
+#include <asm/xchg.h>
+
+#define xchg(ptr, x)							\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_,		\
+				    sizeof(*(ptr)));			\
+})
+
+#define cmpxchg(ptr, o, n)						\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,	\
+				       (unsigned long)_n_, sizeof(*(ptr)));\
+})
+
+#define cmpxchg64(ptr, o, n)						\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	cmpxchg((ptr), (o), (n));					\
+})
+
+#undef __ASM__MB
+#undef ____cmpxchg
+
+#define __HAVE_ARCH_CMPXCHG 1
+
+#endif /* _ALPHA_CMPXCHG_H */
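For readers unfamiliar with the semantics these macros provide: `cmpxchg(ptr, old, new)` atomically stores `new` at `*ptr` only if `*ptr` currently equals `old`, and always returns the value `*ptr` held before the attempt. A minimal userspace analogue, using C11 atomics rather than the Alpha ll/sc sequences (the `cmpxchg_demo` name and the use of `stdatomic.h` are this sketch's assumptions, not the kernel code):

```c
#include <stdatomic.h>

/* Returns the value *ptr held before the attempt, like the kernel's
 * cmpxchg(): the caller knows the swap happened iff the return value
 * equals the 'old' it passed in. */
long cmpxchg_demo(atomic_long *ptr, long old, long new)
{
	long expected = old;

	/* On failure, C11 writes the observed value back into 'expected'. */
	atomic_compare_exchange_strong(ptr, &expected, new);
	return expected;
}
```

Callers typically loop: re-read the current value, compute the desired new value, and retry until the returned prior value matches what they read.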
+2 -2
arch/alpha/include/asm/xchg.h
···
-#ifndef _ALPHA_ATOMIC_H
+#ifndef _ALPHA_CMPXCHG_H
 #error Do not include xchg.h directly!
 #else
 /*
  * xchg/xchg_local and cmpxchg/cmpxchg_local share the same code
  * except that local version do not have the expensive memory barrier.
- * So this file is included twice from asm/system.h.
+ * So this file is included twice from asm/cmpxchg.h.
  */

 /*
+2
arch/arm/boot/compressed/atags_to_fdt.c
···
		} else if (atag->hdr.tag == ATAG_MEM) {
			if (memcount >= sizeof(mem_reg_property)/4)
				continue;
+			if (!atag->u.mem.size)
+				continue;
			mem_reg_property[memcount++] = cpu_to_fdt32(atag->u.mem.start);
			mem_reg_property[memcount++] = cpu_to_fdt32(atag->u.mem.size);
		} else if (atag->hdr.tag == ATAG_INITRD2) {
+1 -1
arch/arm/boot/compressed/head.S
···
		add	r0, r0, #0x100
		mov	r1, r6
		sub	r2, sp, r6
-		blne	atags_to_fdt
+		bleq	atags_to_fdt

		ldmfd	sp!, {r0-r3, ip, lr}
		sub	sp, sp, #0x10000
-1
arch/arm/boot/dts/at91sam9g20.dtsi
···
			#interrupt-cells = <2>;
			compatible = "atmel,at91rm9200-aic";
			interrupt-controller;
-			interrupt-parent;
			reg = <0xfffff000 0x200>;
		};
-1
arch/arm/boot/dts/at91sam9g45.dtsi
···
			#interrupt-cells = <2>;
			compatible = "atmel,at91rm9200-aic";
			interrupt-controller;
-			interrupt-parent;
			reg = <0xfffff000 0x200>;
		};
-1
arch/arm/boot/dts/at91sam9x5.dtsi
···
			#interrupt-cells = <2>;
			compatible = "atmel,at91rm9200-aic";
			interrupt-controller;
-			interrupt-parent;
			reg = <0xfffff000 0x200>;
		};
-1
arch/arm/boot/dts/db8500.dtsi
···
			#interrupt-cells = <3>;
			#address-cells = <1>;
			interrupt-controller;
-			interrupt-parent;
			reg = <0xa0411000 0x1000>,
			      <0xa0410100 0x100>;
		};
-1
arch/arm/boot/dts/highbank.dts
···
			#size-cells = <0>;
			#address-cells = <1>;
			interrupt-controller;
-			interrupt-parent;
			reg = <0xfff11000 0x1000>,
			      <0xfff10100 0x100>;
		};
+4 -5
arch/arm/common/vic.c
···

 /*
  * Handle each interrupt in a single VIC.  Returns non-zero if we've
- * handled at least one interrupt.  This does a single read of the
- * status register and handles all interrupts in order from LSB first.
+ * handled at least one interrupt.  This reads the status register
+ * before handling each interrupt, which is necessary given that
+ * handle_IRQ may briefly re-enable interrupts for soft IRQ handling.
  */
 static int handle_one_vic(struct vic_device *vic, struct pt_regs *regs)
 {
	u32 stat, irq;
	int handled = 0;

-	stat = readl_relaxed(vic->base + VIC_IRQ_STATUS);
-	while (stat) {
+	while ((stat = readl_relaxed(vic->base + VIC_IRQ_STATUS))) {
		irq = ffs(stat) - 1;
		handle_IRQ(irq_find_mapping(vic->domain, irq), regs);
-		stat &= ~(1 << irq);
		handled = 1;
	}
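The point of this VIC change is that re-reading the status register on every loop iteration picks up interrupts that become pending while earlier ones are being handled, whereas the old single-read loop could miss them. A hypothetical userspace simulation (the `pending` variable, `drain_interrupts`, and the mid-loop injection are all inventions of this sketch, standing in for the hardware register):

```c
/* Simulated "status register": reading returns the current pending bits. */
static unsigned int pending;

static unsigned int read_status(void) { return pending; }
static void ack_irq(int irq) { pending &= ~(1u << irq); }

/* Drain pending interrupts, re-reading status before each one, as the
 * patched handle_one_vic() does; bits raised mid-loop are not missed. */
static int drain_interrupts(int *handled_log, int max)
{
	unsigned int stat;
	int n = 0;

	while ((stat = read_status()) != 0 && n < max) {
		int irq = __builtin_ffs(stat) - 1;	/* lowest set bit, like ffs() */

		ack_irq(irq);
		handled_log[n++] = irq;
		if (irq == 0)
			pending |= 1u << 5;	/* a new IRQ arrives mid-loop */
	}
	return n;
}
```

With the old pattern (snapshot `stat` once, clear bits locally), the IRQ raised while handling IRQ 0 would only be seen on the next top-level entry into the handler.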
+1 -1
arch/arm/include/asm/jump_label.h
···
 #define JUMP_LABEL_NOP	"nop"
 #endif

-static __always_inline bool arch_static_branch(struct jump_label_key *key)
+static __always_inline bool arch_static_branch(struct static_key *key)
 {
	asm goto("1:\n\t"
		 JUMP_LABEL_NOP "\n\t"
+15 -1
arch/arm/kernel/setup.c
···
	 */
	size -= start & ~PAGE_MASK;
	bank->start = PAGE_ALIGN(start);
-	bank->size  = size & PAGE_MASK;
+
+#ifndef CONFIG_LPAE
+	if (bank->start + size < bank->start) {
+		printk(KERN_CRIT "Truncating memory at 0x%08llx to fit in "
+			"32-bit physical address space\n", (long long)start);
+		/*
+		 * To ensure bank->start + bank->size is representable in
+		 * 32 bits, we use ULONG_MAX as the upper limit rather than 4GB.
+		 * This means we lose a page after masking.
+		 */
+		size = ULONG_MAX - bank->start;
+	}
+#endif
+
+	bank->size = size & PAGE_MASK;

	/*
	 * Check whether this memory region has non-zero size or
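The arithmetic behind this setup.c hunk is easy to check in isolation: if `start + size` wraps past the top of a 32-bit address space, the size is clamped so the sum stays representable, at the cost of a page after alignment masking. A sketch with explicit fixed-width types (the kernel detects wrap-around via `bank->start + size < bank->start` on native `unsigned long`; this hypothetical `clamp_bank_size` uses a 64-bit comparison instead to make the overflow visible on any host):

```c
#include <stdint.h>

/* Clamp 'size' so that start + size does not overflow 32 bits. */
static uint32_t clamp_bank_size(uint32_t start, uint64_t size)
{
	if ((uint64_t)start + size > UINT32_MAX)
		size = (uint64_t)UINT32_MAX - start;
	return (uint32_t)size;
}
```

For example, a 4 GiB bank starting at 0x80000000 is clamped to 0x7fffffff bytes, while a bank that already fits passes through unchanged.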
+5 -1
arch/arm/kernel/smp_twd.c
···
	 * The twd clock events must be reprogrammed to account for the new
	 * frequency.  The timer is local to a cpu, so cross-call to the
	 * changing cpu.
+	 *
+	 * Only wait for it to finish, if the cpu is active to avoid
+	 * deadlock when cpu1 is spinning on while(!cpu_active(cpu1)) during
+	 * booting of that cpu.
	 */
	if (state == CPUFREQ_POSTCHANGE || state == CPUFREQ_RESUMECHANGE)
		smp_call_function_single(freqs->cpu, twd_update_frequency,
-			NULL, 1);
+			NULL, cpu_active(freqs->cpu));

	return NOTIFY_OK;
 }
+2
arch/arm/mach-exynos/Kconfig
···

 config MACH_EXYNOS4_DT
	bool "Samsung Exynos4 Machine using device tree"
+	depends on ARCH_EXYNOS4
	select CPU_EXYNOS4210
	select USE_OF
	select ARM_AMBA
···

 config MACH_EXYNOS5_DT
	bool "SAMSUNG EXYNOS5 Machine using device tree"
+	depends on ARCH_EXYNOS5
	select SOC_EXYNOS5250
	select USE_OF
	select ARM_AMBA
+2
arch/arm/mach-exynos/include/mach/irqs.h
···
 #define IRQ_MFC			EXYNOS4_IRQ_MFC
 #define IRQ_SDO			EXYNOS4_IRQ_SDO

+#define IRQ_I2S0		EXYNOS4_IRQ_I2S0
+
 #define IRQ_ADC			EXYNOS4_IRQ_ADC0
 #define IRQ_TC			EXYNOS4_IRQ_PEN0
+4
arch/arm/mach-exynos/include/mach/map.h
···
 #define EXYNOS4_PA_MDMA1		0x12840000
 #define EXYNOS4_PA_PDMA0		0x12680000
 #define EXYNOS4_PA_PDMA1		0x12690000
+#define EXYNOS5_PA_MDMA0		0x10800000
+#define EXYNOS5_PA_MDMA1		0x11C10000
+#define EXYNOS5_PA_PDMA0		0x121A0000
+#define EXYNOS5_PA_PDMA1		0x121B0000

 #define EXYNOS4_PA_SYSMMU_MDMA		0x10A40000
 #define EXYNOS4_PA_SYSMMU_SSS		0x10A50000
+6
arch/arm/mach-exynos/include/mach/regs-clock.h
···

 /* For EXYNOS5250 */

+#define EXYNOS5_APLL_LOCK		EXYNOS_CLKREG(0x00000)
 #define EXYNOS5_APLL_CON0		EXYNOS_CLKREG(0x00100)
 #define EXYNOS5_CLKSRC_CPU		EXYNOS_CLKREG(0x00200)
+#define EXYNOS5_CLKMUX_STATCPU		EXYNOS_CLKREG(0x00400)
 #define EXYNOS5_CLKDIV_CPU0		EXYNOS_CLKREG(0x00500)
+#define EXYNOS5_CLKDIV_CPU1		EXYNOS_CLKREG(0x00504)
+#define EXYNOS5_CLKDIV_STATCPU0		EXYNOS_CLKREG(0x00600)
+#define EXYNOS5_CLKDIV_STATCPU1		EXYNOS_CLKREG(0x00604)
+
 #define EXYNOS5_MPLL_CON0		EXYNOS_CLKREG(0x04100)
 #define EXYNOS5_CLKSRC_CORE1		EXYNOS_CLKREG(0x04204)
+1 -1
arch/arm/mach-exynos/mach-exynos5-dt.c
···
				"exynos4210-uart.3", NULL),
	OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_PDMA0, "dma-pl330.0", NULL),
	OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_PDMA1, "dma-pl330.1", NULL),
-	OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_PDMA1, "dma-pl330.2", NULL),
+	OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_MDMA1, "dma-pl330.2", NULL),
	{},
 };
+2 -44
arch/arm/mach-exynos/mach-nuri.c
···
 };

 /* TSP */
-static u8 mxt_init_vals[] = {
-	/* MXT_GEN_COMMAND(6) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	/* MXT_GEN_POWER(7) */
-	0x20, 0xff, 0x32,
-	/* MXT_GEN_ACQUIRE(8) */
-	0x0a, 0x00, 0x05, 0x00, 0x00, 0x00, 0x09, 0x23,
-	/* MXT_TOUCH_MULTI(9) */
-	0x00, 0x00, 0x00, 0x13, 0x0b, 0x00, 0x00, 0x00, 0x02, 0x00,
-	0x00, 0x01, 0x01, 0x0e, 0x0a, 0x0a, 0x0a, 0x0a, 0x00, 0x00,
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	0x00,
-	/* MXT_TOUCH_KEYARRAY(15) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00,
-	0x00,
-	/* MXT_SPT_GPIOPWM(19) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	/* MXT_PROCI_GRIPFACE(20) */
-	0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50, 0x28, 0x04,
-	0x0f, 0x0a,
-	/* MXT_PROCG_NOISE(22) */
-	0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x23, 0x00,
-	0x00, 0x05, 0x0f, 0x19, 0x23, 0x2d, 0x03,
-	/* MXT_TOUCH_PROXIMITY(23) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	0x00, 0x00, 0x00, 0x00, 0x00,
-	/* MXT_PROCI_ONETOUCH(24) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	/* MXT_SPT_SELFTEST(25) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	0x00, 0x00, 0x00, 0x00,
-	/* MXT_PROCI_TWOTOUCH(27) */
-	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
-	/* MXT_SPT_CTECONFIG(28) */
-	0x00, 0x00, 0x02, 0x08, 0x10, 0x00,
-};
-
 static struct mxt_platform_data mxt_platform_data = {
-	.config			= mxt_init_vals,
-	.config_length		= ARRAY_SIZE(mxt_init_vals),
-
	.x_line			= 18,
	.y_line			= 11,
	.x_size			= 1024,
···

 static struct regulator_init_data __initdata max8997_ldo8_data = {
	.constraints	= {
-		.name		= "VUSB/VDAC_3.3V_C210",
+		.name		= "VUSB+VDAC_3.3V_C210",
		.min_uV		= 3300000,
		.max_uV		= 3300000,
		.valid_ops_mask	= REGULATOR_CHANGE_STATUS,
···

 static void __init nuri_map_io(void)
 {
+	clk_xusbxti.rate = 24000000;
	exynos_init_io(NULL, 0);
	s3c24xx_init_clocks(24000000);
	s3c24xx_init_uarts(nuri_uartcfgs, ARRAY_SIZE(nuri_uartcfgs));
···
	nuri_camera_init();

	nuri_ehci_init();
-	clk_xusbxti.rate = 24000000;

	/* Last */
	platform_add_devices(nuri_devices, ARRAY_SIZE(nuri_devices));
+2
arch/arm/mach-exynos/mach-universal_c210.c
···
 #include <asm/mach-types.h>

 #include <plat/regs-serial.h>
+#include <plat/clock.h>
 #include <plat/cpu.h>
 #include <plat/devs.h>
 #include <plat/iic.h>
···

 static void __init universal_map_io(void)
 {
+	clk_xusbxti.rate = 24000000;
	exynos_init_io(NULL, 0);
	s3c24xx_init_clocks(24000000);
	s3c24xx_init_uarts(universal_uartcfgs, ARRAY_SIZE(universal_uartcfgs));
-3
arch/arm/mach-msm/board-halibut.c
···
 static void __init halibut_fixup(struct tag *tags, char **cmdline,
				 struct meminfo *mi)
 {
-	mi->nr_banks=1;
-	mi->bank[0].start = PHYS_OFFSET;
-	mi->bank[0].size = (101*1024*1024);
 }

 static void __init halibut_map_io(void)
+1
arch/arm/mach-msm/board-trout-panel.c
···

 #include <asm/io.h>
 #include <asm/mach-types.h>
+#include <asm/system_info.h>

 #include <mach/msm_fb.h>
 #include <mach/vreg.h>
+1
arch/arm/mach-msm/board-trout.c
···
 #include <linux/platform_device.h>
 #include <linux/clkdev.h>

+#include <asm/system_info.h>
 #include <asm/mach-types.h>
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
+1 -1
arch/arm/mach-msm/proc_comm.c
···
  * and unknown state.  This function should be called early to
  * wait on the ARM9.
  */
-void __init proc_comm_boot_wait(void)
+void __devinit proc_comm_boot_wait(void)
 {
	void __iomem *base = MSM_SHARED_RAM_BASE;
-80
arch/arm/mach-omap2/clkt2xxx_virt_prcm_set.c
···

	return 0;
 }
-
-#ifdef CONFIG_CPU_FREQ
-/*
- * Walk PRCM rate table and fillout cpufreq freq_table
- * XXX This should be replaced by an OPP layer in the near future
- */
-static struct cpufreq_frequency_table *freq_table;
-
-void omap2_clk_init_cpufreq_table(struct cpufreq_frequency_table **table)
-{
-	const struct prcm_config *prcm;
-	int i = 0;
-	int tbl_sz = 0;
-
-	if (!cpu_is_omap24xx())
-		return;
-
-	for (prcm = rate_table; prcm->mpu_speed; prcm++) {
-		if (!(prcm->flags & cpu_mask))
-			continue;
-		if (prcm->xtal_speed != sclk->rate)
-			continue;
-
-		/* don't put bypass rates in table */
-		if (prcm->dpll_speed == prcm->xtal_speed)
-			continue;
-
-		tbl_sz++;
-	}
-
-	/*
-	 * XXX Ensure that we're doing what CPUFreq expects for this error
-	 * case and the following one
-	 */
-	if (tbl_sz == 0) {
-		pr_warning("%s: no matching entries in rate_table\n",
-			   __func__);
-		return;
-	}
-
-	/* Include the CPUFREQ_TABLE_END terminator entry */
-	tbl_sz++;
-
-	freq_table = kzalloc(sizeof(struct cpufreq_frequency_table) * tbl_sz,
-			     GFP_ATOMIC);
-	if (!freq_table) {
-		pr_err("%s: could not kzalloc frequency table\n", __func__);
-		return;
-	}
-
-	for (prcm = rate_table; prcm->mpu_speed; prcm++) {
-		if (!(prcm->flags & cpu_mask))
-			continue;
-		if (prcm->xtal_speed != sclk->rate)
-			continue;
-
-		/* don't put bypass rates in table */
-		if (prcm->dpll_speed == prcm->xtal_speed)
-			continue;
-
-		freq_table[i].index = i;
-		freq_table[i].frequency = prcm->mpu_speed / 1000;
-		i++;
-	}
-
-	freq_table[i].index = i;
-	freq_table[i].frequency = CPUFREQ_TABLE_END;
-
-	*table = &freq_table[0];
-}
-
-void omap2_clk_exit_cpufreq_table(struct cpufreq_frequency_table **table)
-{
-	if (!cpu_is_omap24xx())
-		return;
-
-	kfree(freq_table);
-}
-
-#endif
-5
arch/arm/mach-omap2/clock.c
···
	.clk_set_rate		= omap2_clk_set_rate,
	.clk_set_parent		= omap2_clk_set_parent,
	.clk_disable_unused	= omap2_clk_disable_unused,
-#ifdef CONFIG_CPU_FREQ
-	/* These will be removed when the OPP code is integrated */
-	.clk_init_cpufreq_table	= omap2_clk_init_cpufreq_table,
-	.clk_exit_cpufreq_table	= omap2_clk_exit_cpufreq_table,
-#endif
 };
-8
arch/arm/mach-omap2/clock.h
···
 extern const struct clksel_rate gfx_l3_rates[];
 extern const struct clksel_rate dsp_ick_rates[];

-#if defined(CONFIG_ARCH_OMAP2) && defined(CONFIG_CPU_FREQ)
-extern void omap2_clk_init_cpufreq_table(struct cpufreq_frequency_table **table);
-extern void omap2_clk_exit_cpufreq_table(struct cpufreq_frequency_table **table);
-#else
-#define omap2_clk_init_cpufreq_table	0
-#define omap2_clk_exit_cpufreq_table	0
-#endif
-
 extern const struct clkops clkops_omap2_iclk_dflt_wait;
 extern const struct clkops clkops_omap2_iclk_dflt;
 extern const struct clkops clkops_omap2_iclk_idle_only;
-2
arch/arm/mach-s5pv210/dma.c
···
 #include <mach/irqs.h>
 #include <mach/dma.h>

-static u64 dma_dmamask = DMA_BIT_MASK(32);
-
 static u8 pdma0_peri[] = {
	DMACH_UART0_RX,
	DMACH_UART0_TX,
+2 -2
arch/arm/mach-s5pv210/mach-aquila.c
···
	.gpio_defaults[8] = 0x0100,
	.gpio_defaults[9] = 0x0100,
	.gpio_defaults[10] = 0x0100,
-	.ldo[0]	= { S5PV210_MP03(6), NULL, &wm8994_ldo1_data },	/* XM0FRNB_2 */
-	.ldo[1]	= { 0, NULL, &wm8994_ldo2_data },
+	.ldo[0]	= { S5PV210_MP03(6), &wm8994_ldo1_data },	/* XM0FRNB_2 */
+	.ldo[1]	= { 0, &wm8994_ldo2_data },
 };

 /* GPIO I2C PMIC */
+2 -2
arch/arm/mach-s5pv210/mach-goni.c
···
	.gpio_defaults[8] = 0x0100,
	.gpio_defaults[9] = 0x0100,
	.gpio_defaults[10] = 0x0100,
-	.ldo[0]	= { S5PV210_MP03(6), NULL, &wm8994_ldo1_data },	/* XM0FRNB_2 */
-	.ldo[1]	= { 0, NULL, &wm8994_ldo2_data },
+	.ldo[0]	= { S5PV210_MP03(6), &wm8994_ldo1_data },	/* XM0FRNB_2 */
+	.ldo[1]	= { 0, &wm8994_ldo2_data },
 };

 /* GPIO I2C PMIC */
+1 -1
arch/arm/mm/Kconfig
···
	bool "Select the High exception vector"
	help
	  Say Y here to select high exception vector(0xFFFF0000~).
-	  The exception vector can be vary depending on the platform
+	  The exception vector can vary depending on the platform
	  design in nommu mode.  If your platform needs to select
	  high exception vector, say Y.
	  Otherwise or if you are unsure, say N, and the low exception
+1 -1
arch/arm/mm/fault.c
···
	 */

	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
-	if (flags & FAULT_FLAG_ALLOW_RETRY) {
+	if (!(fault & VM_FAULT_ERROR) && flags & FAULT_FLAG_ALLOW_RETRY) {
		if (fault & VM_FAULT_MAJOR) {
			tsk->maj_flt++;
			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
+2
arch/arm/mm/nommu.c
···
 #include <asm/sections.h>
 #include <asm/page.h>
 #include <asm/setup.h>
+#include <asm/traps.h>
 #include <asm/mach/arch.h>

 #include "mm.h"
···
  */
 void __init paging_init(struct machine_desc *mdesc)
 {
+	early_trap_init((void *)CONFIG_VECTORS_BASE);
	bootmem_init();
 }
+12
arch/arm/mm/proc-v7.S
···
	mcr	p15, 0, r5, c10, c2, 0		@ write PRRR
	mcr	p15, 0, r6, c10, c2, 1		@ write NMRR
 #endif
+#ifndef CONFIG_ARM_THUMBEE
+	mrc	p15, 0, r0, c0, c1, 0		@ read ID_PFR0 for ThumbEE
+	and	r0, r0, #(0xf << 12)		@ ThumbEE enabled field
+	teq	r0, #(1 << 12)			@ check if ThumbEE is present
+	bne	1f
+	mov	r5, #0
+	mcr	p14, 6, r5, c1, c0, 0		@ Initialize TEEHBR to 0
+	mrc	p14, 6, r0, c0, c0, 0		@ load TEECR
+	orr	r0, r0, #1			@ set the 1st bit in order to
+	mcr	p14, 6, r0, c0, c0, 0		@ stop userspace TEEHBR access
+1:
+#endif
	adr	r5, v7_crval
	ldmia	r5, {r5, r6}
 #ifdef CONFIG_CPU_ENDIAN_BE8
-26
arch/arm/plat-omap/clock.c
··· 398 398 .ops = &clkops_null, 399 399 }; 400 400 401 - #ifdef CONFIG_CPU_FREQ 402 - void clk_init_cpufreq_table(struct cpufreq_frequency_table **table) 403 - { 404 - unsigned long flags; 405 - 406 - if (!arch_clock || !arch_clock->clk_init_cpufreq_table) 407 - return; 408 - 409 - spin_lock_irqsave(&clockfw_lock, flags); 410 - arch_clock->clk_init_cpufreq_table(table); 411 - spin_unlock_irqrestore(&clockfw_lock, flags); 412 - } 413 - 414 - void clk_exit_cpufreq_table(struct cpufreq_frequency_table **table) 415 - { 416 - unsigned long flags; 417 - 418 - if (!arch_clock || !arch_clock->clk_exit_cpufreq_table) 419 - return; 420 - 421 - spin_lock_irqsave(&clockfw_lock, flags); 422 - arch_clock->clk_exit_cpufreq_table(table); 423 - spin_unlock_irqrestore(&clockfw_lock, flags); 424 - } 425 - #endif 426 - 427 401 /* 428 402 * 429 403 */
-10
arch/arm/plat-omap/include/plat/clock.h
··· 272 272 #endif 273 273 }; 274 274 275 - struct cpufreq_frequency_table; 276 - 277 275 struct clk_functions { 278 276 int (*clk_enable)(struct clk *clk); 279 277 void (*clk_disable)(struct clk *clk); ··· 281 283 void (*clk_allow_idle)(struct clk *clk); 282 284 void (*clk_deny_idle)(struct clk *clk); 283 285 void (*clk_disable_unused)(struct clk *clk); 284 - #ifdef CONFIG_CPU_FREQ 285 - void (*clk_init_cpufreq_table)(struct cpufreq_frequency_table **); 286 - void (*clk_exit_cpufreq_table)(struct cpufreq_frequency_table **); 287 - #endif 288 286 }; 289 287 290 288 extern int mpurate; ··· 295 301 extern unsigned long followparent_recalc(struct clk *clk); 296 302 extern void clk_enable_init_clocks(void); 297 303 unsigned long omap_fixed_divisor_recalc(struct clk *clk); 298 - #ifdef CONFIG_CPU_FREQ 299 - extern void clk_init_cpufreq_table(struct cpufreq_frequency_table **table); 300 - extern void clk_exit_cpufreq_table(struct cpufreq_frequency_table **table); 301 - #endif 302 304 extern struct clk *omap_clk_get_by_name(const char *name); 303 305 extern int omap_clk_enable_autoidle_all(void); 304 306 extern int omap_clk_disable_autoidle_all(void);
+1
arch/arm/plat-samsung/Kconfig
··· 302 302 config SAMSUNG_PM_DEBUG 303 303 bool "S3C2410 PM Suspend debug" 304 304 depends on PM 305 + select DEBUG_LL 305 306 help 306 307 Say Y here if you want verbose debugging from the PM Suspend and 307 308 Resume code. See <file:Documentation/arm/Samsung-S3C24XX/Suspend.txt>
-4
arch/c6x/include/asm/irq.h
··· 42 42 /* This number is used when no interrupt has been assigned */ 43 43 #define NO_IRQ 0 44 44 45 - struct irq_data; 46 - extern irq_hw_number_t irqd_to_hwirq(struct irq_data *d); 47 - extern irq_hw_number_t virq_to_hw(unsigned int virq); 48 - 49 45 extern void __init init_pic_c64xplus(void); 50 46 51 47 extern void init_IRQ(void);
-13
arch/c6x/kernel/irq.c
··· 130 130 seq_printf(p, "%*s: %10lu\n", prec, "Err", irq_err_count); 131 131 return 0; 132 132 } 133 - 134 - irq_hw_number_t irqd_to_hwirq(struct irq_data *d) 135 - { 136 - return d->hwirq; 137 - } 138 - EXPORT_SYMBOL_GPL(irqd_to_hwirq); 139 - 140 - irq_hw_number_t virq_to_hw(unsigned int virq) 141 - { 142 - struct irq_data *irq_data = irq_get_irq_data(virq); 143 - return WARN_ON(!irq_data) ? 0 : irq_data->hwirq; 144 - } 145 - EXPORT_SYMBOL_GPL(virq_to_hw);
+147 -1
arch/ia64/include/asm/cmpxchg.h
··· 1 - #include <asm/intrinsics.h> 1 + #ifndef _ASM_IA64_CMPXCHG_H 2 + #define _ASM_IA64_CMPXCHG_H 3 + 4 + /* 5 + * Compare/Exchange, forked from asm/intrinsics.h 6 + * which was: 7 + * 8 + * Copyright (C) 2002-2003 Hewlett-Packard Co 9 + * David Mosberger-Tang <davidm@hpl.hp.com> 10 + */ 11 + 12 + #ifndef __ASSEMBLY__ 13 + 14 + #include <linux/types.h> 15 + /* include compiler specific intrinsics */ 16 + #include <asm/ia64regs.h> 17 + #ifdef __INTEL_COMPILER 18 + # include <asm/intel_intrin.h> 19 + #else 20 + # include <asm/gcc_intrin.h> 21 + #endif 22 + 23 + /* 24 + * This function doesn't exist, so you'll get a linker error if 25 + * something tries to do an invalid xchg(). 26 + */ 27 + extern void ia64_xchg_called_with_bad_pointer(void); 28 + 29 + #define __xchg(x, ptr, size) \ 30 + ({ \ 31 + unsigned long __xchg_result; \ 32 + \ 33 + switch (size) { \ 34 + case 1: \ 35 + __xchg_result = ia64_xchg1((__u8 *)ptr, x); \ 36 + break; \ 37 + \ 38 + case 2: \ 39 + __xchg_result = ia64_xchg2((__u16 *)ptr, x); \ 40 + break; \ 41 + \ 42 + case 4: \ 43 + __xchg_result = ia64_xchg4((__u32 *)ptr, x); \ 44 + break; \ 45 + \ 46 + case 8: \ 47 + __xchg_result = ia64_xchg8((__u64 *)ptr, x); \ 48 + break; \ 49 + default: \ 50 + ia64_xchg_called_with_bad_pointer(); \ 51 + } \ 52 + __xchg_result; \ 53 + }) 54 + 55 + #define xchg(ptr, x) \ 56 + ((__typeof__(*(ptr))) __xchg((unsigned long) (x), (ptr), sizeof(*(ptr)))) 57 + 58 + /* 59 + * Atomic compare and exchange. Compare OLD with MEM, if identical, 60 + * store NEW in MEM. Return the initial value in MEM. Success is 61 + * indicated by comparing RETURN with OLD. 62 + */ 63 + 64 + #define __HAVE_ARCH_CMPXCHG 1 65 + 66 + /* 67 + * This function doesn't exist, so you'll get a linker error 68 + * if something tries to do an invalid cmpxchg(). 
69 + */ 70 + extern long ia64_cmpxchg_called_with_bad_pointer(void); 71 + 72 + #define ia64_cmpxchg(sem, ptr, old, new, size) \ 73 + ({ \ 74 + __u64 _o_, _r_; \ 75 + \ 76 + switch (size) { \ 77 + case 1: \ 78 + _o_ = (__u8) (long) (old); \ 79 + break; \ 80 + case 2: \ 81 + _o_ = (__u16) (long) (old); \ 82 + break; \ 83 + case 4: \ 84 + _o_ = (__u32) (long) (old); \ 85 + break; \ 86 + case 8: \ 87 + _o_ = (__u64) (long) (old); \ 88 + break; \ 89 + default: \ 90 + break; \ 91 + } \ 92 + switch (size) { \ 93 + case 1: \ 94 + _r_ = ia64_cmpxchg1_##sem((__u8 *) ptr, new, _o_); \ 95 + break; \ 96 + \ 97 + case 2: \ 98 + _r_ = ia64_cmpxchg2_##sem((__u16 *) ptr, new, _o_); \ 99 + break; \ 100 + \ 101 + case 4: \ 102 + _r_ = ia64_cmpxchg4_##sem((__u32 *) ptr, new, _o_); \ 103 + break; \ 104 + \ 105 + case 8: \ 106 + _r_ = ia64_cmpxchg8_##sem((__u64 *) ptr, new, _o_); \ 107 + break; \ 108 + \ 109 + default: \ 110 + _r_ = ia64_cmpxchg_called_with_bad_pointer(); \ 111 + break; \ 112 + } \ 113 + (__typeof__(old)) _r_; \ 114 + }) 115 + 116 + #define cmpxchg_acq(ptr, o, n) \ 117 + ia64_cmpxchg(acq, (ptr), (o), (n), sizeof(*(ptr))) 118 + #define cmpxchg_rel(ptr, o, n) \ 119 + ia64_cmpxchg(rel, (ptr), (o), (n), sizeof(*(ptr))) 120 + 121 + /* for compatibility with other platforms: */ 122 + #define cmpxchg(ptr, o, n) cmpxchg_acq((ptr), (o), (n)) 123 + #define cmpxchg64(ptr, o, n) cmpxchg_acq((ptr), (o), (n)) 124 + 125 + #define cmpxchg_local cmpxchg 126 + #define cmpxchg64_local cmpxchg64 127 + 128 + #ifdef CONFIG_IA64_DEBUG_CMPXCHG 129 + # define CMPXCHG_BUGCHECK_DECL int _cmpxchg_bugcheck_count = 128; 130 + # define CMPXCHG_BUGCHECK(v) \ 131 + do { \ 132 + if (_cmpxchg_bugcheck_count-- <= 0) { \ 133 + void *ip; \ 134 + extern int printk(const char *fmt, ...); \ 135 + ip = (void *) ia64_getreg(_IA64_REG_IP); \ 136 + printk("CMPXCHG_BUGCHECK: stuck at %p on word %p\n", ip, (v));\ 137 + break; \ 138 + } \ 139 + } while (0) 140 + #else /* !CONFIG_IA64_DEBUG_CMPXCHG */ 141 + # define CMPXCHG_BUGCHECK_DECL 142 + # define CMPXCHG_BUGCHECK(v) 143 + #endif /* !CONFIG_IA64_DEBUG_CMPXCHG */ 144 + 145 + #endif /* !__ASSEMBLY__ */ 146 + 147 + #endif /* _ASM_IA64_CMPXCHG_H */
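The new asm/cmpxchg.h above carries the classic compare-and-exchange contract: the macro returns the value found in memory, and the caller infers success by comparing that return with the expected old value. As a hedged, portable sketch of those semantics (using the GCC/Clang `__atomic` builtin rather than the ia64-only intrinsics the header actually calls):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the cmpxchg_acq() contract from the header above, expressed
 * with a compiler builtin instead of ia64_cmpxchg4_acq(). Returns the
 * value observed in *ptr; the exchange happened iff that equals 'old'. */
static uint32_t cmpxchg32_sketch(uint32_t *ptr, uint32_t old, uint32_t new_val)
{
    /* On failure the builtin writes the observed value back into 'old'. */
    __atomic_compare_exchange_n(ptr, &old, new_val, 0,
                                __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
    return old;
}
```

The acquire ordering mirrors the header's choice of `cmpxchg_acq` as the default `cmpxchg`.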
+1 -113
arch/ia64/include/asm/intrinsics.h
··· 18 18 #else 19 19 # include <asm/gcc_intrin.h> 20 20 #endif 21 + #include <asm/cmpxchg.h> 21 22 22 23 #define ia64_native_get_psr_i() (ia64_native_getreg(_IA64_REG_PSR) & IA64_PSR_I) 23 24 ··· 81 80 }) 82 81 83 82 #define ia64_fetch_and_add(i,v) (ia64_fetchadd(i, v, rel) + (i)) /* return new value */ 84 - 85 - /* 86 - * This function doesn't exist, so you'll get a linker error if 87 - * something tries to do an invalid xchg(). 88 - */ 89 - extern void ia64_xchg_called_with_bad_pointer (void); 90 - 91 - #define __xchg(x,ptr,size) \ 92 - ({ \ 93 - unsigned long __xchg_result; \ 94 - \ 95 - switch (size) { \ 96 - case 1: \ 97 - __xchg_result = ia64_xchg1((__u8 *)ptr, x); \ 98 - break; \ 99 - \ 100 - case 2: \ 101 - __xchg_result = ia64_xchg2((__u16 *)ptr, x); \ 102 - break; \ 103 - \ 104 - case 4: \ 105 - __xchg_result = ia64_xchg4((__u32 *)ptr, x); \ 106 - break; \ 107 - \ 108 - case 8: \ 109 - __xchg_result = ia64_xchg8((__u64 *)ptr, x); \ 110 - break; \ 111 - default: \ 112 - ia64_xchg_called_with_bad_pointer(); \ 113 - } \ 114 - __xchg_result; \ 115 - }) 116 - 117 - #define xchg(ptr,x) \ 118 - ((__typeof__(*(ptr))) __xchg ((unsigned long) (x), (ptr), sizeof(*(ptr)))) 119 - 120 - /* 121 - * Atomic compare and exchange. Compare OLD with MEM, if identical, 122 - * store NEW in MEM. Return the initial value in MEM. Success is 123 - * indicated by comparing RETURN with OLD. 124 - */ 125 - 126 - #define __HAVE_ARCH_CMPXCHG 1 127 - 128 - /* 129 - * This function doesn't exist, so you'll get a linker error 130 - * if something tries to do an invalid cmpxchg(). 
131 - */ 132 - extern long ia64_cmpxchg_called_with_bad_pointer (void); 133 - 134 - #define ia64_cmpxchg(sem,ptr,old,new,size) \ 135 - ({ \ 136 - __u64 _o_, _r_; \ 137 - \ 138 - switch (size) { \ 139 - case 1: _o_ = (__u8 ) (long) (old); break; \ 140 - case 2: _o_ = (__u16) (long) (old); break; \ 141 - case 4: _o_ = (__u32) (long) (old); break; \ 142 - case 8: _o_ = (__u64) (long) (old); break; \ 143 - default: break; \ 144 - } \ 145 - switch (size) { \ 146 - case 1: \ 147 - _r_ = ia64_cmpxchg1_##sem((__u8 *) ptr, new, _o_); \ 148 - break; \ 149 - \ 150 - case 2: \ 151 - _r_ = ia64_cmpxchg2_##sem((__u16 *) ptr, new, _o_); \ 152 - break; \ 153 - \ 154 - case 4: \ 155 - _r_ = ia64_cmpxchg4_##sem((__u32 *) ptr, new, _o_); \ 156 - break; \ 157 - \ 158 - case 8: \ 159 - _r_ = ia64_cmpxchg8_##sem((__u64 *) ptr, new, _o_); \ 160 - break; \ 161 - \ 162 - default: \ 163 - _r_ = ia64_cmpxchg_called_with_bad_pointer(); \ 164 - break; \ 165 - } \ 166 - (__typeof__(old)) _r_; \ 167 - }) 168 - 169 - #define cmpxchg_acq(ptr, o, n) \ 170 - ia64_cmpxchg(acq, (ptr), (o), (n), sizeof(*(ptr))) 171 - #define cmpxchg_rel(ptr, o, n) \ 172 - ia64_cmpxchg(rel, (ptr), (o), (n), sizeof(*(ptr))) 173 - 174 - /* for compatibility with other platforms: */ 175 - #define cmpxchg(ptr, o, n) cmpxchg_acq((ptr), (o), (n)) 176 - #define cmpxchg64(ptr, o, n) cmpxchg_acq((ptr), (o), (n)) 177 - 178 - #define cmpxchg_local cmpxchg 179 - #define cmpxchg64_local cmpxchg64 180 - 181 - #ifdef CONFIG_IA64_DEBUG_CMPXCHG 182 - # define CMPXCHG_BUGCHECK_DECL int _cmpxchg_bugcheck_count = 128; 183 - # define CMPXCHG_BUGCHECK(v) \ 184 - do { \ 185 - if (_cmpxchg_bugcheck_count-- <= 0) { \ 186 - void *ip; \ 187 - extern int printk(const char *fmt, ...); \ 188 - ip = (void *) ia64_getreg(_IA64_REG_IP); \ 189 - printk("CMPXCHG_BUGCHECK: stuck at %p on word %p\n", ip, (v)); \ 190 - break; \ 191 - } \ 192 - } while (0) 193 - #else /* !CONFIG_IA64_DEBUG_CMPXCHG */ 194 - # define CMPXCHG_BUGCHECK_DECL 195 - # define CMPXCHG_BUGCHECK(v) 196 - #endif /* !CONFIG_IA64_DEBUG_CMPXCHG */ 197 83 198 84 #endif 199 85
-2
arch/powerpc/include/asm/irq.h
··· 33 33 /* Same thing, used by the generic IRQ code */ 34 34 #define NR_IRQS_LEGACY NUM_ISA_INTERRUPTS 35 35 36 - struct irq_data; 37 - extern irq_hw_number_t irqd_to_hwirq(struct irq_data *d); 38 36 extern irq_hw_number_t virq_to_hw(unsigned int virq); 39 37 40 38 /**
+21 -18
arch/powerpc/kernel/entry_32.S
··· 206 206 andi. r10,r10,MSR_EE /* Did EE change? */ 207 207 beq 1f 208 208 209 - /* Save handler and return address into the 2 unused words 210 - * of the STACK_FRAME_OVERHEAD (sneak sneak sneak). Everything 211 - * else can be recovered from the pt_regs except r3 which for 212 - * normal interrupts has been set to pt_regs and for syscalls 213 - * is an argument, so we temporarily use ORIG_GPR3 to save it 214 - */ 215 - stw r9,8(r1) 216 - stw r11,12(r1) 217 - stw r3,ORIG_GPR3(r1) 218 209 /* 219 210 * The trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1. 220 211 * If from user mode there is only one stack frame on the stack, and 221 212 * accessing CALLER_ADDR1 will cause oops. So we need create a dummy 222 213 * stack frame to make trace_hardirqs_off happy. 214 + * 215 + * This is handy because we also need to save a bunch of GPRs, 216 + * r3 can be different from GPR3(r1) at this point, r9 and r11 217 + * contains the old MSR and handler address respectively, 218 + * r4 & r5 can contain page fault arguments that need to be passed 219 + * along as well. r12, CCR, CTR, XER etc... are left clobbered as 220 + * they aren't useful past this point (aren't syscall arguments), 221 + * the rest is restored from the exception frame. 223 222 */ 223 + stwu r1,-32(r1) 224 + stw r9,8(r1) 225 + stw r11,12(r1) 226 + stw r3,16(r1) 227 + stw r4,20(r1) 228 + stw r5,24(r1) 224 229 andi. r12,r12,MSR_PR 225 - beq 11f 226 - stwu r1,-16(r1) 230 + b 11f 227 231 bl trace_hardirqs_off 228 - addi r1,r1,16 229 232 b 12f 230 - 231 233 11: 232 234 bl trace_hardirqs_off 233 235 12: 236 + lwz r5,24(r1) 237 + lwz r4,20(r1) 238 + lwz r3,16(r1) 239 + lwz r11,12(r1) 240 + lwz r9,8(r1) 241 + addi r1,r1,32 234 242 lwz r0,GPR0(r1) 235 - lwz r3,ORIG_GPR3(r1) 236 - lwz r4,GPR4(r1) 237 - lwz r5,GPR5(r1) 238 243 lwz r6,GPR6(r1) 239 244 lwz r7,GPR7(r1) 240 245 lwz r8,GPR8(r1) 241 - lwz r9,8(r1) 242 - lwz r11,12(r1) 243 246 1: mtctr r11 244 247 mtlr r9 245 248 bctr /* jump to handler */
-6
arch/powerpc/kernel/irq.c
··· 560 560 local_irq_restore(flags); 561 561 } 562 562 563 - irq_hw_number_t irqd_to_hwirq(struct irq_data *d) 564 - { 565 - return d->hwirq; 566 - } 567 - EXPORT_SYMBOL_GPL(irqd_to_hwirq); 568 - 569 563 irq_hw_number_t virq_to_hw(unsigned int virq) 570 564 { 571 565 struct irq_data *irq_data = irq_get_irq_data(virq);
+2 -2
arch/powerpc/kernel/process.c
··· 1235 1235 ctrl |= CTRL_RUNLATCH; 1236 1236 mtspr(SPRN_CTRLT, ctrl); 1237 1237 1238 - ti->local_flags |= TLF_RUNLATCH; 1238 + ti->local_flags |= _TLF_RUNLATCH; 1239 1239 } 1240 1240 1241 1241 /* Called with hard IRQs off */ ··· 1244 1244 struct thread_info *ti = current_thread_info(); 1245 1245 unsigned long ctrl; 1246 1246 1247 - ti->local_flags &= ~TLF_RUNLATCH; 1247 + ti->local_flags &= ~_TLF_RUNLATCH; 1248 1248 1249 1249 ctrl = mfspr(SPRN_CTRLF); 1250 1250 ctrl &= ~CTRL_RUNLATCH;
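The process.c hunk above fixes a bit-number/bit-mask mixup: in powerpc's thread_info.h the `TLF_*` names are bit numbers and the `_TLF_*` names are the corresponding masks, so or-ing in `TLF_RUNLATCH` set the wrong bit. A hedged sketch of the convention (the value 4 is illustrative of the pattern, not quoted from the header):

```c
#include <assert.h>

/* Mirrors the powerpc convention: TLF_* is a bit NUMBER, _TLF_* the mask. */
#define TLF_RUNLATCH   4                    /* bit number (illustrative) */
#define _TLF_RUNLATCH  (1 << TLF_RUNLATCH)  /* the mask the code must use */

static unsigned long set_runlatch_buggy(unsigned long flags)
{
    return flags | TLF_RUNLATCH;   /* ors in 4, i.e. sets bit 2 -- wrong */
}

static unsigned long set_runlatch_fixed(unsigned long flags)
{
    return flags | _TLF_RUNLATCH;  /* sets bit 4, as intended */
}
```

The same reasoning applies to the `&= ~TLF_RUNLATCH` clear path the patch corrects.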
+1 -1
arch/powerpc/platforms/cell/axon_msi.c
··· 392 392 } 393 393 memset(msic->fifo_virt, 0xff, MSIC_FIFO_SIZE_BYTES); 394 394 395 - msic->irq_domain = irq_domain_add_nomap(dn, &msic_host_ops, msic); 395 + msic->irq_domain = irq_domain_add_nomap(dn, 0, &msic_host_ops, msic); 396 396 if (!msic->irq_domain) { 397 397 printk(KERN_ERR "axon_msi: couldn't allocate irq_domain for %s\n", 398 398 dn->full_name);
+1 -1
arch/powerpc/platforms/cell/beat_interrupt.c
··· 239 239 ppc_md.get_irq = beatic_get_irq; 240 240 241 241 /* Allocate an irq host */ 242 - beatic_host = irq_domain_add_nomap(NULL, &beatic_pic_host_ops, NULL); 242 + beatic_host = irq_domain_add_nomap(NULL, 0, &beatic_pic_host_ops, NULL); 243 243 BUG_ON(beatic_host == NULL); 244 244 irq_set_default_host(beatic_host); 245 245 }
+1 -1
arch/powerpc/platforms/powermac/smp.c
··· 192 192 { 193 193 int rc = -ENOMEM; 194 194 195 - psurge_host = irq_domain_add_nomap(NULL, &psurge_host_ops, NULL); 195 + psurge_host = irq_domain_add_nomap(NULL, 0, &psurge_host_ops, NULL); 196 196 197 197 if (psurge_host) 198 198 psurge_secondary_virq = irq_create_direct_mapping(psurge_host);
+1 -2
arch/powerpc/platforms/ps3/interrupt.c
··· 753 753 unsigned cpu; 754 754 struct irq_domain *host; 755 755 756 - host = irq_domain_add_nomap(NULL, &ps3_host_ops, NULL); 756 + host = irq_domain_add_nomap(NULL, PS3_PLUG_MAX + 1, &ps3_host_ops, NULL); 757 757 irq_set_default_host(host); 758 - irq_set_virq_count(PS3_PLUG_MAX + 1); 759 758 760 759 for_each_possible_cpu(cpu) { 761 760 struct ps3_private *pd = &per_cpu(ps3_private, cpu);
+1 -1
arch/sparc/kernel/ds.c
··· 1264 1264 return vio_register_driver(&ds_driver); 1265 1265 } 1266 1266 1267 - subsys_initcall(ds_init); 1267 + fs_initcall(ds_init);
-13
arch/sparc/kernel/leon_pci.c
··· 45 45 46 46 void __devinit pcibios_fixup_bus(struct pci_bus *pbus) 47 47 { 48 - struct leon_pci_info *info = pbus->sysdata; 49 48 struct pci_dev *dev; 50 49 int i, has_io, has_mem; 51 50 u16 cmd; ··· 109 110 { 110 111 return pci_enable_resources(dev, mask); 111 112 } 112 - 113 - struct device_node *pci_device_to_OF_node(struct pci_dev *pdev) 114 - { 115 - /* 116 - * Currently the OpenBoot nodes are not connected with the PCI device, 117 - * this is because the LEON PROM does not create PCI nodes. Eventually 118 - * this will change and the same approach as pcic.c can be used to 119 - * match PROM nodes with pci devices. 120 - */ 121 - return NULL; 122 - } 123 - EXPORT_SYMBOL(pci_device_to_OF_node); 124 113 125 114 void __devinit pcibios_update_irq(struct pci_dev *dev, int irq) 126 115 {
-7
arch/sparc/kernel/rtrap_64.S
··· 20 20 21 21 .text 22 22 .align 32 23 - __handle_softirq: 24 - call do_softirq 25 - nop 26 - ba,a,pt %xcc, __handle_softirq_continue 27 - nop 28 23 __handle_preemption: 29 24 call schedule 30 25 wrpr %g0, RTRAP_PSTATE, %pstate ··· 84 89 cmp %l1, 0 85 90 86 91 /* mm/ultra.S:xcall_report_regs KNOWS about this load. */ 87 - bne,pn %icc, __handle_softirq 88 92 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 89 - __handle_softirq_continue: 90 93 rtrap_xcall: 91 94 sethi %hi(0xf << 20), %l4 92 95 and %l1, %l4, %l4
+30 -7
arch/sparc/mm/fault_32.c
··· 225 225 unsigned long g2; 226 226 int from_user = !(regs->psr & PSR_PS); 227 227 int fault, code; 228 + unsigned int flags = (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE | 229 + (write ? FAULT_FLAG_WRITE : 0)); 228 230 229 231 if(text_fault) 230 232 address = regs->pc; ··· 253 251 254 252 perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); 255 253 254 + retry: 256 255 down_read(&mm->mmap_sem); 257 256 258 257 /* ··· 292 289 * make sure we exit gracefully rather than endlessly redo 293 290 * the fault. 294 291 */ 295 - fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0); 292 + fault = handle_mm_fault(mm, vma, address, flags); 293 + 294 + if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) 295 + return; 296 + 296 297 if (unlikely(fault & VM_FAULT_ERROR)) { 297 298 if (fault & VM_FAULT_OOM) 298 299 goto out_of_memory; ··· 304 297 goto do_sigbus; 305 298 BUG(); 306 299 } 307 - if (fault & VM_FAULT_MAJOR) { 308 - current->maj_flt++; 309 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); 310 - } else { 311 - current->min_flt++; 312 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); 300 + 301 + if (flags & FAULT_FLAG_ALLOW_RETRY) { 302 + if (fault & VM_FAULT_MAJOR) { 303 + current->maj_flt++; 304 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 305 + 1, regs, address); 306 + } else { 307 + current->min_flt++; 308 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 309 + 1, regs, address); 310 + } 311 + if (fault & VM_FAULT_RETRY) { 312 + flags &= ~FAULT_FLAG_ALLOW_RETRY; 313 + 314 + /* No need to up_read(&mm->mmap_sem) as we would 315 + * have already released it in __lock_page_or_retry 316 + * in mm/filemap.c. 317 + */ 318 + 319 + goto retry; 320 + } 313 321 } 322 + 314 323 up_read(&mm->mmap_sem); 315 324 return; 316 325
+30 -7
arch/sparc/mm/fault_64.c
··· 279 279 unsigned int insn = 0; 280 280 int si_code, fault_code, fault; 281 281 unsigned long address, mm_rss; 282 + unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; 282 283 283 284 fault_code = get_thread_fault_code(); 284 285 ··· 334 333 insn = get_fault_insn(regs, insn); 335 334 goto handle_kernel_fault; 336 335 } 336 + 337 + retry: 337 338 down_read(&mm->mmap_sem); 338 339 } 339 340 ··· 426 423 goto bad_area; 427 424 } 428 425 429 - fault = handle_mm_fault(mm, vma, address, (fault_code & FAULT_CODE_WRITE) ? FAULT_FLAG_WRITE : 0); 426 + flags |= ((fault_code & FAULT_CODE_WRITE) ? FAULT_FLAG_WRITE : 0); 427 + fault = handle_mm_fault(mm, vma, address, flags); 428 + 429 + if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) 430 + return; 431 + 430 432 if (unlikely(fault & VM_FAULT_ERROR)) { 431 433 if (fault & VM_FAULT_OOM) 432 434 goto out_of_memory; ··· 439 431 goto do_sigbus; 440 432 BUG(); 441 433 } 442 - if (fault & VM_FAULT_MAJOR) { 443 - current->maj_flt++; 444 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); 445 - } else { 446 - current->min_flt++; 447 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); 434 + 435 + if (flags & FAULT_FLAG_ALLOW_RETRY) { 436 + if (fault & VM_FAULT_MAJOR) { 437 + current->maj_flt++; 438 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 439 + 1, regs, address); 440 + } else { 441 + current->min_flt++; 442 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 443 + 1, regs, address); 444 + } 445 + if (fault & VM_FAULT_RETRY) { 446 + flags &= ~FAULT_FLAG_ALLOW_RETRY; 447 + 448 + /* No need to up_read(&mm->mmap_sem) as we would 449 + * have already released it in __lock_page_or_retry 450 + * in mm/filemap.c. 451 + */ 452 + 453 + goto retry; 454 + } 448 455 } 449 456 up_read(&mm->mmap_sem); 450 457
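Both sparc fault handlers above adopt the retry pattern already used by other architectures: the first `handle_mm_fault()` pass may drop `mmap_sem` and return `VM_FAULT_RETRY` while waiting on page I/O; the handler then clears `FAULT_FLAG_ALLOW_RETRY` so the second pass cannot loop forever, and only does maj/min accounting while the flag is still set. A hedged userspace sketch of that control flow (all names here are illustrative stand-ins, not kernel API):

```c
#include <assert.h>

enum { FLT_ALLOW_RETRY = 1, FLT_RETRY = 2, FLT_MAJOR = 4 };

/* Stand-in for handle_mm_fault(): the first retryable attempt blocks on
 * I/O (major fault, caller must retry); the second finds the page. */
static int fake_handle_mm_fault(int flags, int *attempts)
{
    (*attempts)++;
    if ((flags & FLT_ALLOW_RETRY) && *attempts == 1)
        return FLT_MAJOR | FLT_RETRY;
    return 0;
}

/* Mirrors the hunk's ordering: account while ALLOW_RETRY is set, then
 * clear the flag and retry at most once. */
static int fault_with_retry(int *attempts, int *maj_flt)
{
    int flags = FLT_ALLOW_RETRY;
    int fault;
retry:
    fault = fake_handle_mm_fault(flags, attempts);
    if (flags & FLT_ALLOW_RETRY) {
        if (fault & FLT_MAJOR)
            (*maj_flt)++;           /* counted exactly once */
        if (fault & FLT_RETRY) {
            flags &= ~FLT_ALLOW_RETRY;
            goto retry;
        }
    }
    return fault;
}
```

Clearing the flag before the `goto retry` is what guarantees termination and single-counting.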
+1 -3
arch/tile/kernel/proc.c
··· 146 146 }, 147 147 {} 148 148 }; 149 - #endif 150 149 151 150 static struct ctl_path tile_path[] = { 152 151 { .procname = "tile" }, ··· 154 155 155 156 static int __init proc_sys_tile_init(void) 156 157 { 157 - #ifndef __tilegx__ /* FIXME: GX: no support for unaligned access yet */ 158 158 register_sysctl_paths(tile_path, unaligned_table); 159 - #endif 160 159 return 0; 161 160 } 162 161 163 162 arch_initcall(proc_sys_tile_init); 163 + #endif
+2
arch/tile/kernel/smpboot.c
··· 196 196 /* This must be done before setting cpu_online_mask */ 197 197 wmb(); 198 198 199 + notify_cpu_starting(smp_processor_id()); 200 + 199 201 /* 200 202 * We need to hold call_lock, so there is no inconsistency 201 203 * between the time smp_call_function() determines number of
-35
arch/um/drivers/cow.h
··· 3 3 4 4 #include <asm/types.h> 5 5 6 - #if defined(__KERNEL__) 7 - 8 - # include <asm/byteorder.h> 9 - 10 - # if defined(__BIG_ENDIAN) 11 - # define ntohll(x) (x) 12 - # define htonll(x) (x) 13 - # elif defined(__LITTLE_ENDIAN) 14 - # define ntohll(x) be64_to_cpu(x) 15 - # define htonll(x) cpu_to_be64(x) 16 - # else 17 - # error "Could not determine byte order" 18 - # endif 19 - 20 - #else 21 - /* For the definition of ntohl, htonl and __BYTE_ORDER */ 22 - #include <endian.h> 23 - #include <netinet/in.h> 24 - #if defined(__BYTE_ORDER) 25 - 26 - # if __BYTE_ORDER == __BIG_ENDIAN 27 - # define ntohll(x) (x) 28 - # define htonll(x) (x) 29 - # elif __BYTE_ORDER == __LITTLE_ENDIAN 30 - # define ntohll(x) bswap_64(x) 31 - # define htonll(x) bswap_64(x) 32 - # else 33 - # error "Could not determine byte order: __BYTE_ORDER uncorrectly defined" 34 - # endif 35 - 36 - #else /* ! defined(__BYTE_ORDER) */ 37 - # error "Could not determine byte order: __BYTE_ORDER not defined" 38 - #endif 39 - #endif /* ! defined(__KERNEL__) */ 40 - 41 6 extern int init_cow_file(int fd, char *cow_file, char *backing_file, 42 7 int sectorsize, int alignment, int *bitmap_offset_out, 43 8 unsigned long *bitmap_len_out, int *data_offset_out);
+21 -22
arch/um/drivers/cow_user.c
··· 8 8 * that. 9 9 */ 10 10 #include <unistd.h> 11 - #include <byteswap.h> 12 11 #include <errno.h> 13 12 #include <string.h> 14 13 #include <arpa/inet.h> 15 - #include <asm/types.h> 14 + #include <endian.h> 16 15 #include "cow.h" 17 16 #include "cow_sys.h" 18 17 ··· 213 214 "header\n"); 214 215 goto out; 215 216 } 216 - header->magic = htonl(COW_MAGIC); 217 - header->version = htonl(COW_VERSION); 217 + header->magic = htobe32(COW_MAGIC); 218 + header->version = htobe32(COW_VERSION); 218 219 219 220 err = -EINVAL; 220 221 if (strlen(backing_file) > sizeof(header->backing_file) - 1) { ··· 245 246 goto out_free; 246 247 } 247 248 248 - header->mtime = htonl(modtime); 249 - header->size = htonll(*size); 250 - header->sectorsize = htonl(sectorsize); 251 - header->alignment = htonl(alignment); 249 + header->mtime = htobe32(modtime); 250 + header->size = htobe64(*size); 251 + header->sectorsize = htobe32(sectorsize); 252 + header->alignment = htobe32(alignment); 252 253 header->cow_format = COW_BITMAP; 253 254 254 255 err = cow_write_file(fd, header, sizeof(*header)); ··· 300 301 magic = header->v1.magic; 301 302 if (magic == COW_MAGIC) 302 303 version = header->v1.version; 303 - else if (magic == ntohl(COW_MAGIC)) 304 - version = ntohl(header->v1.version); 304 + else if (magic == be32toh(COW_MAGIC)) 305 + version = be32toh(header->v1.version); 305 306 /* No error printed because the non-COW case comes through here */ 306 307 else goto out; 307 308 ··· 326 327 "header\n"); 327 328 goto out; 328 329 } 329 - *mtime_out = ntohl(header->v2.mtime); 330 - *size_out = ntohll(header->v2.size); 331 - *sectorsize_out = ntohl(header->v2.sectorsize); 330 + *mtime_out = be32toh(header->v2.mtime); 331 + *size_out = be64toh(header->v2.size); 332 + *sectorsize_out = be32toh(header->v2.sectorsize); 332 333 *bitmap_offset_out = sizeof(header->v2); 333 334 *align_out = *sectorsize_out; 334 335 file = header->v2.backing_file; ··· 340 341 "header\n"); 341 342 goto out; 342 343 } 343 - *mtime_out = ntohl(header->v3.mtime); 344 - *size_out = ntohll(header->v3.size); 345 - *sectorsize_out = ntohl(header->v3.sectorsize); 346 - *align_out = ntohl(header->v3.alignment); 344 + *mtime_out = be32toh(header->v3.mtime); 345 + *size_out = be64toh(header->v3.size); 346 + *sectorsize_out = be32toh(header->v3.sectorsize); 347 + *align_out = be32toh(header->v3.alignment); 347 348 if (*align_out == 0) { 348 349 cow_printf("read_cow_header - invalid COW header, " 349 350 "align == 0\n"); ··· 365 366 * this was used until Dec2005 - 64bits are needed to represent 366 367 * 2038+. I.e. we can safely do this truncating cast. 367 368 * 368 - * Additionally, we must use ntohl() instead of ntohll(), since 369 + * Additionally, we must use be32toh() instead of be64toh(), since 369 370 * the program used to use the former (tested - I got mtime 370 371 * mismatch "0 vs whatever"). 371 372 * 372 373 * Ever heard about bug-to-bug-compatibility ? ;-) */ 373 - *mtime_out = (time32_t) ntohl(header->v3_b.mtime); 374 - 375 - *size_out = ntohll(header->v3_b.size); 376 - *sectorsize_out = ntohl(header->v3_b.sectorsize); 377 - *align_out = ntohl(header->v3_b.alignment); 374 + *mtime_out = (time32_t) be32toh(header->v3_b.mtime); 375 + 376 + *size_out = be64toh(header->v3_b.size); 377 + *sectorsize_out = be32toh(header->v3_b.sectorsize); 378 + *align_out = be32toh(header->v3_b.alignment); 378 379 if (*align_out == 0) { 379 380 cow_printf("read_cow_header - invalid COW header, " 380 381 "align == 0\n");
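The cow_user.c hunk above swaps the homegrown `ntohll()`/`htonll()` macros for glibc's `<endian.h>` family, which covers 64-bit fields (`htobe64`/`be64toh` for `header->size`) out of the box. A hedged sketch of the on-disk round-trip those conversions implement (the buffer copy stands in for the file write/read):

```c
#define _DEFAULT_SOURCE
#include <assert.h>
#include <endian.h>   /* htobe32/htobe64 and inverses, as used by the patch */
#include <stdint.h>
#include <string.h>

/* Convert a host-order 64-bit header field to its big-endian on-disk
 * form, "write" it, "read" it back, and convert to host order again. */
static uint64_t store_and_load_be64(uint64_t host_val)
{
    uint64_t wire = htobe64(host_val);   /* big-endian on-disk form */
    unsigned char buf[8];
    memcpy(buf, &wire, sizeof(buf));     /* bytes that hit the file */
    uint64_t back;
    memcpy(&back, buf, sizeof(back));
    return be64toh(back);                /* host order again */
}
```

Because both directions go through a fixed big-endian wire format, the round-trip is identity on any host, which is exactly the property the COW header format depends on.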
+1
arch/um/drivers/mconsole_kern.c
··· 22 22 #include <linux/workqueue.h> 23 23 #include <linux/mutex.h> 24 24 #include <asm/uaccess.h> 25 + #include <asm/switch_to.h> 25 26 26 27 #include "init.h" 27 28 #include "irq_kern.h"
+2 -1
arch/um/include/asm/Kbuild
··· 1 1 generic-y += bug.h cputime.h device.h emergency-restart.h futex.h hardirq.h 2 2 generic-y += hw_irq.h irq_regs.h kdebug.h percpu.h sections.h topology.h xor.h 3 - generic-y += ftrace.h pci.h io.h param.h delay.h mutex.h current.h 3 + generic-y += ftrace.h pci.h io.h param.h delay.h mutex.h current.h exec.h 4 + generic-y += switch_to.h
+4 -3
arch/um/kernel/Makefile
··· 3 3 # Licensed under the GPL 4 4 # 5 5 6 - CPPFLAGS_vmlinux.lds := -DSTART=$(LDS_START) \ 7 - -DELF_ARCH=$(LDS_ELF_ARCH) \ 8 - -DELF_FORMAT=$(LDS_ELF_FORMAT) 6 + CPPFLAGS_vmlinux.lds := -DSTART=$(LDS_START) \ 7 + -DELF_ARCH=$(LDS_ELF_ARCH) \ 8 + -DELF_FORMAT=$(LDS_ELF_FORMAT) \ 9 + $(LDS_EXTRA) 9 10 extra-y := vmlinux.lds 10 11 clean-files := 11 12
+1 -5
arch/um/kernel/process.c
··· 88 88 89 89 extern void arch_switch_to(struct task_struct *to); 90 90 91 - void *_switch_to(void *prev, void *next, void *last) 91 + void *__switch_to(struct task_struct *from, struct task_struct *to) 92 92 { 93 - struct task_struct *from = prev; 94 - struct task_struct *to = next; 95 - 96 93 to->thread.prev_sched = from; 97 94 set_current(to); 98 95 ··· 108 111 } while (current->thread.saved_task); 109 112 110 113 return current->thread.prev_sched; 111 - 112 114 } 113 115 114 116 void interrupt_end(void)
-1
arch/um/kernel/skas/mmu.c
··· 103 103 104 104 void uml_setup_stubs(struct mm_struct *mm) 105 105 { 106 - struct page **pages; 107 106 int err, ret; 108 107 109 108 if (!skas_needs_stub)
+3
arch/x86/Makefile.um
··· 14 14 15 15 export LDFLAGS 16 16 17 + LDS_EXTRA := -Ui386 18 + export LDS_EXTRA 19 + 17 20 # First of all, tune CFLAGS for the specific CPU. This actually sets cflags-y. 18 21 include $(srctree)/arch/x86/Makefile_32.cpu 19 22
+2 -2
arch/x86/include/asm/cmpxchg.h
··· 43 43 switch (sizeof(*(ptr))) { \ 44 44 case __X86_CASE_B: \ 45 45 asm volatile (lock #op "b %b0, %1\n" \ 46 - : "+r" (__ret), "+m" (*(ptr)) \ 46 + : "+q" (__ret), "+m" (*(ptr)) \ 47 47 : : "memory", "cc"); \ 48 48 break; \ 49 49 case __X86_CASE_W: \ ··· 173 173 switch (sizeof(*(ptr))) { \ 174 174 case __X86_CASE_B: \ 175 175 asm volatile (lock "addb %b1, %0\n" \ 176 - : "+m" (*(ptr)) : "ri" (inc) \ 176 + : "+m" (*(ptr)) : "qi" (inc) \ 177 177 : "memory", "cc"); \ 178 178 break; \ 179 179 case __X86_CASE_W: \
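The constraint change above ("+r" to "+q", "ri" to "qi") matters only on 32-bit x86, where just %eax..%edx have byte sub-registers, so a byte-sized asm operand must be constrained to a byte-addressable register. As a hedged, portable sketch of the *semantics* the patched `__xchg_op`/add macros implement for the byte case (compiler builtins here, not the hand-written asm the patch actually fixes):

```c
#include <assert.h>
#include <stdint.h>

/* Atomic byte exchange: returns the previous value of *p. */
static uint8_t xchg_byte(uint8_t *p, uint8_t v)
{
    return __atomic_exchange_n(p, v, __ATOMIC_SEQ_CST);
}

/* Atomic byte add: returns the value of *p before the addition. */
static uint8_t add_byte(uint8_t *p, uint8_t inc)
{
    return __atomic_fetch_add(p, inc, __ATOMIC_SEQ_CST);
}
```

With builtins the compiler picks a valid register itself; the kernel's macro has to state the "q" class explicitly because the register choice is baked into the asm template.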
+2
arch/x86/include/asm/uaccess.h
··· 557 557 558 558 extern unsigned long 559 559 copy_from_user_nmi(void *to, const void __user *from, unsigned long n); 560 + extern __must_check long 561 + strncpy_from_user(char *dst, const char __user *src, long count); 560 562 561 563 /* 562 564 * movsl can be slow when source and dest are not both 8-byte aligned
-5
arch/x86/include/asm/uaccess_32.h
··· 213 213 return n; 214 214 } 215 215 216 - long __must_check strncpy_from_user(char *dst, const char __user *src, 217 - long count); 218 - long __must_check __strncpy_from_user(char *dst, 219 - const char __user *src, long count); 220 - 221 216 /** 222 217 * strlen_user: - Get the size of a string in user space. 223 218 * @str: The string to measure.
-4
arch/x86/include/asm/uaccess_64.h
··· 208 208 } 209 209 } 210 210 211 - __must_check long 212 - strncpy_from_user(char *dst, const char __user *src, long count); 213 - __must_check long 214 - __strncpy_from_user(char *dst, const char __user *src, long count); 215 211 __must_check long strnlen_user(const char __user *str, long n); 216 212 __must_check long __strnlen_user(const char __user *str, long n); 217 213 __must_check long strlen_user(const char __user *str);
+3 -3
arch/x86/kernel/vsyscall_64.c
··· 216 216 current_thread_info()->sig_on_uaccess_error = 1; 217 217 218 218 /* 219 - * 0 is a valid user pointer (in the access_ok sense) on 32-bit and 219 + * NULL is a valid user pointer (in the access_ok sense) on 32-bit and 220 220 * 64-bit, so we don't need to special-case it here. For all the 221 - * vsyscalls, 0 means "don't write anything" not "write it at 221 + * vsyscalls, NULL means "don't write anything" not "write it at 222 222 * address 0". 223 223 */ 224 224 ret = -EFAULT; ··· 247 247 248 248 ret = sys_getcpu((unsigned __user *)regs->di, 249 249 (unsigned __user *)regs->si, 250 - 0); 250 + NULL); 251 251 break; 252 252 } 253 253
+103
arch/x86/lib/usercopy.c
··· 7 7 #include <linux/highmem.h> 8 8 #include <linux/module.h> 9 9 10 + #include <asm/word-at-a-time.h> 11 + 10 12 /* 11 13 * best effort, GUP based copy_from_user() that is NMI-safe 12 14 */ ··· 43 41 return len; 44 42 } 45 43 EXPORT_SYMBOL_GPL(copy_from_user_nmi); 44 + 45 + static inline unsigned long count_bytes(unsigned long mask) 46 + { 47 + mask = (mask - 1) & ~mask; 48 + mask >>= 7; 49 + return count_masked_bytes(mask); 50 + } 51 + 52 + /* 53 + * Do a strncpy, return length of string without final '\0'. 54 + * 'count' is the user-supplied count (return 'count' if we 55 + * hit it), 'max' is the address space maximum (and we return 56 + * -EFAULT if we hit it). 57 + */ 58 + static inline long do_strncpy_from_user(char *dst, const char __user *src, long count, unsigned long max) 59 + { 60 + long res = 0; 61 + 62 + /* 63 + * Truncate 'max' to the user-specified limit, so that 64 + * we only have one limit we need to check in the loop 65 + */ 66 + if (max > count) 67 + max = count; 68 + 69 + while (max >= sizeof(unsigned long)) { 70 + unsigned long c; 71 + 72 + /* Fall back to byte-at-a-time if we get a page fault */ 73 + if (unlikely(__get_user(c,(unsigned long __user *)(src+res)))) 74 + break; 75 + /* This can write a few bytes past the NUL character, but that's ok */ 76 + *(unsigned long *)(dst+res) = c; 77 + c = has_zero(c); 78 + if (c) 79 + return res + count_bytes(c); 80 + res += sizeof(unsigned long); 81 + max -= sizeof(unsigned long); 82 + } 83 + 84 + while (max) { 85 + char c; 86 + 87 + if (unlikely(__get_user(c,src+res))) 88 + return -EFAULT; 89 + dst[res] = c; 90 + if (!c) 91 + return res; 92 + res++; 93 + max--; 94 + } 95 + 96 + /* 97 + * Uhhuh. We hit 'max'. But was that the user-specified maximum 98 + * too? If so, that's ok - we got as much as the user asked for. 99 + */ 100 + if (res >= count) 101 + return res; 102 + 103 + /* 104 + * Nope: we hit the address space limit, and we still had more 105 + * characters the caller would have wanted. That's an EFAULT. 106 + */ 107 + return -EFAULT; 108 + } 109 + 110 + /** 111 + * strncpy_from_user: - Copy a NUL terminated string from userspace. 112 + * @dst: Destination address, in kernel space. This buffer must be at 113 + * least @count bytes long. 114 + * @src: Source address, in user space. 115 + * @count: Maximum number of bytes to copy, including the trailing NUL. 116 + * 117 + * Copies a NUL-terminated string from userspace to kernel space. 118 + * 119 + * On success, returns the length of the string (not including the trailing 120 + * NUL). 121 + * 122 + * If access to userspace fails, returns -EFAULT (some data may have been 123 + * copied). 124 + * 125 + * If @count is smaller than the length of the string, copies @count bytes 126 + * and returns @count. 127 + */ 128 + long 129 + strncpy_from_user(char *dst, const char __user *src, long count) 130 + { 131 + unsigned long max_addr, src_addr; 132 + 133 + if (unlikely(count <= 0)) 134 + return 0; 135 + 136 + max_addr = current_thread_info()->addr_limit.seg; 137 + src_addr = (unsigned long)src; 138 + if (likely(src_addr < max_addr)) { 139 + unsigned long max = max_addr - src_addr; 140 + return do_strncpy_from_user(dst, src, count, max); 141 + } 142 + return -EFAULT; 143 + } 144 + EXPORT_SYMBOL(strncpy_from_user);
-87
arch/x86/lib/usercopy_32.c
··· 33 33 __movsl_is_ok((unsigned long)(a1), (unsigned long)(a2), (n)) 34 34 35 35 /* 36 - * Copy a null terminated string from userspace. 37 - */ 38 - 39 - #define __do_strncpy_from_user(dst, src, count, res) \ 40 - do { \ 41 - int __d0, __d1, __d2; \ 42 - might_fault(); \ 43 - __asm__ __volatile__( \ 44 - " testl %1,%1\n" \ 45 - " jz 2f\n" \ 46 - "0: lodsb\n" \ 47 - " stosb\n" \ 48 - " testb %%al,%%al\n" \ 49 - " jz 1f\n" \ 50 - " decl %1\n" \ 51 - " jnz 0b\n" \ 52 - "1: subl %1,%0\n" \ 53 - "2:\n" \ 54 - ".section .fixup,\"ax\"\n" \ 55 - "3: movl %5,%0\n" \ 56 - " jmp 2b\n" \ 57 - ".previous\n" \ 58 - _ASM_EXTABLE(0b,3b) \ 59 - : "=&d"(res), "=&c"(count), "=&a" (__d0), "=&S" (__d1), \ 60 - "=&D" (__d2) \ 61 - : "i"(-EFAULT), "0"(count), "1"(count), "3"(src), "4"(dst) \ 62 - : "memory"); \ 63 - } while (0) 64 - 65 - /** 66 - * __strncpy_from_user: - Copy a NUL terminated string from userspace, with less checking. 67 - * @dst: Destination address, in kernel space. This buffer must be at 68 - * least @count bytes long. 69 - * @src: Source address, in user space. 70 - * @count: Maximum number of bytes to copy, including the trailing NUL. 71 - * 72 - * Copies a NUL-terminated string from userspace to kernel space. 73 - * Caller must check the specified block with access_ok() before calling 74 - * this function. 75 - * 76 - * On success, returns the length of the string (not including the trailing 77 - * NUL). 78 - * 79 - * If access to userspace fails, returns -EFAULT (some data may have been 80 - * copied). 81 - * 82 - * If @count is smaller than the length of the string, copies @count bytes 83 - * and returns @count. 84 - */ 85 - long 86 - __strncpy_from_user(char *dst, const char __user *src, long count) 87 - { 88 - long res; 89 - __do_strncpy_from_user(dst, src, count, res); 90 - return res; 91 - } 92 - EXPORT_SYMBOL(__strncpy_from_user); 93 - 94 - /** 95 - * strncpy_from_user: - Copy a NUL terminated string from userspace. 
96 - * @dst: Destination address, in kernel space. This buffer must be at 97 - * least @count bytes long. 98 - * @src: Source address, in user space. 99 - * @count: Maximum number of bytes to copy, including the trailing NUL. 100 - * 101 - * Copies a NUL-terminated string from userspace to kernel space. 102 - * 103 - * On success, returns the length of the string (not including the trailing 104 - * NUL). 105 - * 106 - * If access to userspace fails, returns -EFAULT (some data may have been 107 - * copied). 108 - * 109 - * If @count is smaller than the length of the string, copies @count bytes 110 - * and returns @count. 111 - */ 112 - long 113 - strncpy_from_user(char *dst, const char __user *src, long count) 114 - { 115 - long res = -EFAULT; 116 - if (access_ok(VERIFY_READ, src, 1)) 117 - __do_strncpy_from_user(dst, src, count, res); 118 - return res; 119 - } 120 - EXPORT_SYMBOL(strncpy_from_user); 121 - 122 - /* 123 36 * Zero Userspace 124 37 */ 125 38
-49
arch/x86/lib/usercopy_64.c
··· 9 9 #include <asm/uaccess.h> 10 10 11 11 /* 12 - * Copy a null terminated string from userspace. 13 - */ 14 - 15 - #define __do_strncpy_from_user(dst,src,count,res) \ 16 - do { \ 17 - long __d0, __d1, __d2; \ 18 - might_fault(); \ 19 - __asm__ __volatile__( \ 20 - " testq %1,%1\n" \ 21 - " jz 2f\n" \ 22 - "0: lodsb\n" \ 23 - " stosb\n" \ 24 - " testb %%al,%%al\n" \ 25 - " jz 1f\n" \ 26 - " decq %1\n" \ 27 - " jnz 0b\n" \ 28 - "1: subq %1,%0\n" \ 29 - "2:\n" \ 30 - ".section .fixup,\"ax\"\n" \ 31 - "3: movq %5,%0\n" \ 32 - " jmp 2b\n" \ 33 - ".previous\n" \ 34 - _ASM_EXTABLE(0b,3b) \ 35 - : "=&r"(res), "=&c"(count), "=&a" (__d0), "=&S" (__d1), \ 36 - "=&D" (__d2) \ 37 - : "i"(-EFAULT), "0"(count), "1"(count), "3"(src), "4"(dst) \ 38 - : "memory"); \ 39 - } while (0) 40 - 41 - long 42 - __strncpy_from_user(char *dst, const char __user *src, long count) 43 - { 44 - long res; 45 - __do_strncpy_from_user(dst, src, count, res); 46 - return res; 47 - } 48 - EXPORT_SYMBOL(__strncpy_from_user); 49 - 50 - long 51 - strncpy_from_user(char *dst, const char __user *src, long count) 52 - { 53 - long res = -EFAULT; 54 - if (access_ok(VERIFY_READ, src, 1)) 55 - return __strncpy_from_user(dst, src, count); 56 - return res; 57 - } 58 - EXPORT_SYMBOL(strncpy_from_user); 59 - 60 - /* 61 12 * Zero Userspace 62 13 */ 63 14
+75
arch/x86/um/asm/barrier.h
··· 1 + #ifndef _ASM_UM_BARRIER_H_ 2 + #define _ASM_UM_BARRIER_H_ 3 + 4 + #include <asm/asm.h> 5 + #include <asm/segment.h> 6 + #include <asm/cpufeature.h> 7 + #include <asm/cmpxchg.h> 8 + #include <asm/nops.h> 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/irqflags.h> 12 + 13 + /* 14 + * Force strict CPU ordering. 15 + * And yes, this is required on UP too when we're talking 16 + * to devices. 17 + */ 18 + #ifdef CONFIG_X86_32 19 + 20 + #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2) 21 + #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2) 22 + #define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM) 23 + 24 + #else /* CONFIG_X86_32 */ 25 + 26 + #define mb() asm volatile("mfence" : : : "memory") 27 + #define rmb() asm volatile("lfence" : : : "memory") 28 + #define wmb() asm volatile("sfence" : : : "memory") 29 + 30 + #endif /* CONFIG_X86_32 */ 31 + 32 + #define read_barrier_depends() do { } while (0) 33 + 34 + #ifdef CONFIG_SMP 35 + 36 + #define smp_mb() mb() 37 + #ifdef CONFIG_X86_PPRO_FENCE 38 + #define smp_rmb() rmb() 39 + #else /* CONFIG_X86_PPRO_FENCE */ 40 + #define smp_rmb() barrier() 41 + #endif /* CONFIG_X86_PPRO_FENCE */ 42 + 43 + #ifdef CONFIG_X86_OOSTORE 44 + #define smp_wmb() wmb() 45 + #else /* CONFIG_X86_OOSTORE */ 46 + #define smp_wmb() barrier() 47 + #endif /* CONFIG_X86_OOSTORE */ 48 + 49 + #define smp_read_barrier_depends() read_barrier_depends() 50 + #define set_mb(var, value) do { (void)xchg(&var, value); } while (0) 51 + 52 + #else /* CONFIG_SMP */ 53 + 54 + #define smp_mb() barrier() 55 + #define smp_rmb() barrier() 56 + #define smp_wmb() barrier() 57 + #define smp_read_barrier_depends() do { } while (0) 58 + #define set_mb(var, value) do { var = value; barrier(); } while (0) 59 + 60 + #endif /* CONFIG_SMP */ 61 + 62 + /* 63 + * Stop RDTSC speculation. 
This is needed when you need to use RDTSC 64 + * (or get_cycles or vread that possibly accesses the TSC) in a defined 65 + * code region. 66 + * 67 + * (Could use an alternative three way for this if there was one.) 68 + */ 69 + static inline void rdtsc_barrier(void) 70 + { 71 + alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC); 72 + alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC); 73 + } 74 + 75 + #endif
-135
arch/x86/um/asm/system.h
··· 1 - #ifndef _ASM_X86_SYSTEM_H_ 2 - #define _ASM_X86_SYSTEM_H_ 3 - 4 - #include <asm/asm.h> 5 - #include <asm/segment.h> 6 - #include <asm/cpufeature.h> 7 - #include <asm/cmpxchg.h> 8 - #include <asm/nops.h> 9 - 10 - #include <linux/kernel.h> 11 - #include <linux/irqflags.h> 12 - 13 - /* entries in ARCH_DLINFO: */ 14 - #ifdef CONFIG_IA32_EMULATION 15 - # define AT_VECTOR_SIZE_ARCH 2 16 - #else 17 - # define AT_VECTOR_SIZE_ARCH 1 18 - #endif 19 - 20 - extern unsigned long arch_align_stack(unsigned long sp); 21 - 22 - void default_idle(void); 23 - 24 - /* 25 - * Force strict CPU ordering. 26 - * And yes, this is required on UP too when we're talking 27 - * to devices. 28 - */ 29 - #ifdef CONFIG_X86_32 30 - /* 31 - * Some non-Intel clones support out of order store. wmb() ceases to be a 32 - * nop for these. 33 - */ 34 - #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2) 35 - #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2) 36 - #define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM) 37 - #else 38 - #define mb() asm volatile("mfence":::"memory") 39 - #define rmb() asm volatile("lfence":::"memory") 40 - #define wmb() asm volatile("sfence" ::: "memory") 41 - #endif 42 - 43 - /** 44 - * read_barrier_depends - Flush all pending reads that subsequents reads 45 - * depend on. 46 - * 47 - * No data-dependent reads from memory-like regions are ever reordered 48 - * over this barrier. All reads preceding this primitive are guaranteed 49 - * to access memory (but not necessarily other CPUs' caches) before any 50 - * reads following this primitive that depend on the data return by 51 - * any of the preceding reads. This primitive is much lighter weight than 52 - * rmb() on most CPUs, and is never heavier weight than is 53 - * rmb(). 54 - * 55 - * These ordering constraints are respected by both the local CPU 56 - * and the compiler. 
57 - * 58 - * Ordering is not guaranteed by anything other than these primitives, 59 - * not even by data dependencies. See the documentation for 60 - * memory_barrier() for examples and URLs to more information. 61 - * 62 - * For example, the following code would force ordering (the initial 63 - * value of "a" is zero, "b" is one, and "p" is "&a"): 64 - * 65 - * <programlisting> 66 - * CPU 0 CPU 1 67 - * 68 - * b = 2; 69 - * memory_barrier(); 70 - * p = &b; q = p; 71 - * read_barrier_depends(); 72 - * d = *q; 73 - * </programlisting> 74 - * 75 - * because the read of "*q" depends on the read of "p" and these 76 - * two reads are separated by a read_barrier_depends(). However, 77 - * the following code, with the same initial values for "a" and "b": 78 - * 79 - * <programlisting> 80 - * CPU 0 CPU 1 81 - * 82 - * a = 2; 83 - * memory_barrier(); 84 - * b = 3; y = b; 85 - * read_barrier_depends(); 86 - * x = a; 87 - * </programlisting> 88 - * 89 - * does not enforce ordering, since there is no data dependency between 90 - * the read of "a" and the read of "b". Therefore, on some CPUs, such 91 - * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb() 92 - * in cases like this where there are no data dependencies. 
93 - **/ 94 - 95 - #define read_barrier_depends() do { } while (0) 96 - 97 - #ifdef CONFIG_SMP 98 - #define smp_mb() mb() 99 - #ifdef CONFIG_X86_PPRO_FENCE 100 - # define smp_rmb() rmb() 101 - #else 102 - # define smp_rmb() barrier() 103 - #endif 104 - #ifdef CONFIG_X86_OOSTORE 105 - # define smp_wmb() wmb() 106 - #else 107 - # define smp_wmb() barrier() 108 - #endif 109 - #define smp_read_barrier_depends() read_barrier_depends() 110 - #define set_mb(var, value) do { (void)xchg(&var, value); } while (0) 111 - #else 112 - #define smp_mb() barrier() 113 - #define smp_rmb() barrier() 114 - #define smp_wmb() barrier() 115 - #define smp_read_barrier_depends() do { } while (0) 116 - #define set_mb(var, value) do { var = value; barrier(); } while (0) 117 - #endif 118 - 119 - /* 120 - * Stop RDTSC speculation. This is needed when you need to use RDTSC 121 - * (or get_cycles or vread that possibly accesses the TSC) in a defined 122 - * code region. 123 - * 124 - * (Could use an alternative three way for this if there was one.) 125 - */ 126 - static inline void rdtsc_barrier(void) 127 - { 128 - alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC); 129 - alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC); 130 - } 131 - 132 - extern void *_switch_to(void *prev, void *next, void *last); 133 - #define switch_to(prev, next, last) prev = _switch_to(prev, next, last) 134 - 135 - #endif
+3 -2
block/blk-core.c
··· 483 483 if (!q) 484 484 return NULL; 485 485 486 - q->id = ida_simple_get(&blk_queue_ida, 0, 0, GFP_KERNEL); 486 + q->id = ida_simple_get(&blk_queue_ida, 0, 0, gfp_mask); 487 487 if (q->id < 0) 488 488 goto fail_q; 489 489 ··· 1277 1277 list_for_each_entry_reverse(rq, &plug->list, queuelist) { 1278 1278 int el_ret; 1279 1279 1280 - (*request_count)++; 1280 + if (rq->q == q) 1281 + (*request_count)++; 1281 1282 1282 1283 if (rq->q != q || !blk_rq_merge_ok(rq, bio)) 1283 1284 continue;
+1 -1
block/blk-throttle.c
··· 1218 1218 struct bio_list bl; 1219 1219 struct bio *bio; 1220 1220 1221 - WARN_ON_ONCE(!queue_is_locked(q)); 1221 + queue_lockdep_assert_held(q); 1222 1222 1223 1223 bio_list_init(&bl); 1224 1224
+8 -2
block/cfq-iosched.c
··· 295 295 unsigned int cfq_slice_idle; 296 296 unsigned int cfq_group_idle; 297 297 unsigned int cfq_latency; 298 + unsigned int cfq_target_latency; 298 299 299 300 /* 300 301 * Fallback dummy cfqq for extreme OOM conditions ··· 605 604 { 606 605 struct cfq_rb_root *st = &cfqd->grp_service_tree; 607 606 608 - return cfq_target_latency * cfqg->weight / st->total_weight; 607 + return cfqd->cfq_target_latency * cfqg->weight / st->total_weight; 609 608 } 610 609 611 610 static inline unsigned ··· 2272 2271 * to have higher weight. A more accurate thing would be to 2273 2272 * calculate system wide asnc/sync ratio. 2274 2273 */ 2275 - tmp = cfq_target_latency * cfqg_busy_async_queues(cfqd, cfqg); 2274 + tmp = cfqd->cfq_target_latency * 2275 + cfqg_busy_async_queues(cfqd, cfqg); 2276 2276 tmp = tmp/cfqd->busy_queues; 2277 2277 slice = min_t(unsigned, slice, tmp); 2278 2278 ··· 3739 3737 cfqd->cfq_back_penalty = cfq_back_penalty; 3740 3738 cfqd->cfq_slice[0] = cfq_slice_async; 3741 3739 cfqd->cfq_slice[1] = cfq_slice_sync; 3740 + cfqd->cfq_target_latency = cfq_target_latency; 3742 3741 cfqd->cfq_slice_async_rq = cfq_slice_async_rq; 3743 3742 cfqd->cfq_slice_idle = cfq_slice_idle; 3744 3743 cfqd->cfq_group_idle = cfq_group_idle; ··· 3791 3788 SHOW_FUNCTION(cfq_slice_async_show, cfqd->cfq_slice[0], 1); 3792 3789 SHOW_FUNCTION(cfq_slice_async_rq_show, cfqd->cfq_slice_async_rq, 0); 3793 3790 SHOW_FUNCTION(cfq_low_latency_show, cfqd->cfq_latency, 0); 3791 + SHOW_FUNCTION(cfq_target_latency_show, cfqd->cfq_target_latency, 1); 3794 3792 #undef SHOW_FUNCTION 3795 3793 3796 3794 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ ··· 3825 3821 STORE_FUNCTION(cfq_slice_async_rq_store, &cfqd->cfq_slice_async_rq, 1, 3826 3822 UINT_MAX, 0); 3827 3823 STORE_FUNCTION(cfq_low_latency_store, &cfqd->cfq_latency, 0, 1, 0); 3824 + STORE_FUNCTION(cfq_target_latency_store, &cfqd->cfq_target_latency, 1, UINT_MAX, 1); 3828 3825 #undef STORE_FUNCTION 3829 3826 3830 3827 #define 
CFQ_ATTR(name) \ ··· 3843 3838 CFQ_ATTR(slice_idle), 3844 3839 CFQ_ATTR(group_idle), 3845 3840 CFQ_ATTR(low_latency), 3841 + CFQ_ATTR(target_latency), 3846 3842 __ATTR_NULL 3847 3843 }; 3848 3844
+3 -3
crypto/Kconfig
··· 627 627 628 628 config CRYPTO_BLOWFISH_X86_64 629 629 tristate "Blowfish cipher algorithm (x86_64)" 630 - depends on (X86 || UML_X86) && 64BIT 630 + depends on X86 && 64BIT 631 631 select CRYPTO_ALGAPI 632 632 select CRYPTO_BLOWFISH_COMMON 633 633 help ··· 657 657 658 658 config CRYPTO_CAMELLIA_X86_64 659 659 tristate "Camellia cipher algorithm (x86_64)" 660 - depends on (X86 || UML_X86) && 64BIT 660 + depends on X86 && 64BIT 661 661 depends on CRYPTO 662 662 select CRYPTO_ALGAPI 663 663 select CRYPTO_LRW ··· 893 893 894 894 config CRYPTO_TWOFISH_X86_64_3WAY 895 895 tristate "Twofish cipher algorithm (x86_64, 3-way parallel)" 896 - depends on (X86 || UML_X86) && 64BIT 896 + depends on X86 && 64BIT 897 897 select CRYPTO_ALGAPI 898 898 select CRYPTO_TWOFISH_COMMON 899 899 select CRYPTO_TWOFISH_X86_64
+1 -41
drivers/amba/bus.c
··· 247 247 /* 248 248 * Hooks to provide runtime PM of the pclk (bus clock). It is safe to 249 249 * enable/disable the bus clock at runtime PM suspend/resume as this 250 - * does not result in loss of context. However, disabling vcore power 251 - * would do, so we leave that to the driver. 250 + * does not result in loss of context. 252 251 */ 253 252 static int amba_pm_runtime_suspend(struct device *dev) 254 253 { ··· 353 354 clk_put(pclk); 354 355 } 355 356 356 - static int amba_get_enable_vcore(struct amba_device *pcdev) 357 - { 358 - struct regulator *vcore = regulator_get(&pcdev->dev, "vcore"); 359 - int ret; 360 - 361 - pcdev->vcore = vcore; 362 - 363 - if (IS_ERR(vcore)) { 364 - /* It is OK not to supply a vcore regulator */ 365 - if (PTR_ERR(vcore) == -ENODEV) 366 - return 0; 367 - return PTR_ERR(vcore); 368 - } 369 - 370 - ret = regulator_enable(vcore); 371 - if (ret) { 372 - regulator_put(vcore); 373 - pcdev->vcore = ERR_PTR(-ENODEV); 374 - } 375 - 376 - return ret; 377 - } 378 - 379 - static void amba_put_disable_vcore(struct amba_device *pcdev) 380 - { 381 - struct regulator *vcore = pcdev->vcore; 382 - 383 - if (!IS_ERR(vcore)) { 384 - regulator_disable(vcore); 385 - regulator_put(vcore); 386 - } 387 - } 388 - 389 357 /* 390 358 * These are the device model conversion veneers; they convert the 391 359 * device model structures to our more specific structures. ··· 365 399 int ret; 366 400 367 401 do { 368 - ret = amba_get_enable_vcore(pcdev); 369 - if (ret) 370 - break; 371 - 372 402 ret = amba_get_enable_pclk(pcdev); 373 403 if (ret) 374 404 break; ··· 382 420 pm_runtime_put_noidle(dev); 383 421 384 422 amba_put_disable_pclk(pcdev); 385 - amba_put_disable_vcore(pcdev); 386 423 } while (0); 387 424 388 425 return ret; ··· 403 442 pm_runtime_put_noidle(dev); 404 443 405 444 amba_put_disable_pclk(pcdev); 406 - amba_put_disable_vcore(pcdev); 407 445 408 446 return ret; 409 447 }
+1 -3
drivers/base/soc.c
··· 15 15 #include <linux/sys_soc.h> 16 16 #include <linux/err.h> 17 17 18 - static DEFINE_IDR(soc_ida); 18 + static DEFINE_IDA(soc_ida); 19 19 static DEFINE_SPINLOCK(soc_lock); 20 20 21 21 static ssize_t soc_info_get(struct device *dev, ··· 168 168 169 169 static int __init soc_bus_register(void) 170 170 { 171 - spin_lock_init(&soc_lock); 172 - 173 171 return bus_register(&soc_bus_type); 174 172 } 175 173 core_initcall(soc_bus_register);
+1 -1
drivers/bcma/Kconfig
··· 29 29 30 30 config BCMA_DRIVER_PCI_HOSTMODE 31 31 bool "Driver for PCI core working in hostmode" 32 - depends on BCMA && MIPS 32 + depends on BCMA && MIPS && BCMA_HOST_PCI 33 33 help 34 34 PCI core hostmode operation (external PCI bus). 35 35
+1
drivers/bcma/driver_pci_host.c
··· 10 10 */ 11 11 12 12 #include "bcma_private.h" 13 + #include <linux/pci.h> 13 14 #include <linux/export.h> 14 15 #include <linux/bcma/bcma.h> 15 16 #include <asm/paccess.h>
+2 -1
drivers/block/cciss_scsi.c
··· 866 866 sh->can_queue = cciss_tape_cmds; 867 867 sh->sg_tablesize = h->maxsgentries; 868 868 sh->max_cmd_len = MAX_COMMAND_SIZE; 869 + sh->max_sectors = h->cciss_max_sectors; 869 870 870 871 ((struct cciss_scsi_adapter_data_t *) 871 872 h->scsi_ctlr)->scsi_host = sh; ··· 1411 1410 /* track how many SG entries we are using */ 1412 1411 if (request_nsgs > h->maxSG) 1413 1412 h->maxSG = request_nsgs; 1414 - c->Header.SGTotal = (__u8) request_nsgs + chained; 1413 + c->Header.SGTotal = (u16) request_nsgs + chained; 1415 1414 if (request_nsgs > h->max_cmd_sgentries) 1416 1415 c->Header.SGList = h->max_cmd_sgentries; 1417 1416 else
+1 -1
drivers/block/mtip32xx/Kconfig
··· 4 4 5 5 config BLK_DEV_PCIESSD_MTIP32XX 6 6 tristate "Block Device Driver for Micron PCIe SSDs" 7 - depends on HOTPLUG_PCI_PCIE 7 + depends on PCI 8 8 help 9 9 This enables the block driver for Micron PCIe SSDs.
+664 -196
drivers/block/mtip32xx/mtip32xx.c
··· 36 36 #include <linux/idr.h> 37 37 #include <linux/kthread.h> 38 38 #include <../drivers/ata/ahci.h> 39 + #include <linux/export.h> 39 40 #include "mtip32xx.h" 40 41 41 42 #define HW_CMD_SLOT_SZ (MTIP_MAX_COMMAND_SLOTS * 32) ··· 45 44 #define HW_PORT_PRIV_DMA_SZ \ 46 45 (HW_CMD_SLOT_SZ + HW_CMD_TBL_AR_SZ + AHCI_RX_FIS_SZ) 47 46 47 + #define HOST_CAP_NZDMA (1 << 19) 48 48 #define HOST_HSORG 0xFC 49 49 #define HSORG_DISABLE_SLOTGRP_INTR (1<<24) 50 50 #define HSORG_DISABLE_SLOTGRP_PXIS (1<<16) ··· 141 139 int group = 0, commandslot = 0, commandindex = 0; 142 140 struct mtip_cmd *command; 143 141 struct mtip_port *port = dd->port; 142 + static int in_progress; 143 + 144 + if (in_progress) 145 + return; 146 + 147 + in_progress = 1; 144 148 145 149 for (group = 0; group < 4; group++) { 146 150 for (commandslot = 0; commandslot < 32; commandslot++) { ··· 173 165 174 166 up(&port->cmd_slot); 175 167 176 - atomic_set(&dd->drv_cleanup_done, true); 168 + set_bit(MTIP_DDF_CLEANUP_BIT, &dd->dd_flag); 169 + in_progress = 0; 177 170 } 178 171 179 172 /* ··· 271 262 && time_before(jiffies, timeout)) 272 263 mdelay(1); 273 264 265 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) 266 + return -1; 267 + 274 268 if (readl(dd->mmio + HOST_CTL) & HOST_RESET) 275 269 return -1; 276 270 ··· 306 294 port->cmd_issue[MTIP_TAG_INDEX(tag)]); 307 295 308 296 spin_unlock_irqrestore(&port->cmd_issue_lock, flags); 297 + 298 + /* Set the command's timeout value.*/ 299 + port->commands[tag].comp_time = jiffies + msecs_to_jiffies( 300 + MTIP_NCQ_COMMAND_TIMEOUT_MS); 309 301 } 310 302 311 303 /* ··· 436 420 writel(0xFFFFFFFF, port->completed[i]); 437 421 438 422 /* Clear any pending interrupts for this port */ 439 - writel(readl(port->mmio + PORT_IRQ_STAT), port->mmio + PORT_IRQ_STAT); 423 + writel(readl(port->dd->mmio + PORT_IRQ_STAT), 424 + port->dd->mmio + PORT_IRQ_STAT); 425 + 426 + /* Clear any pending interrupts on the HBA. 
*/ 427 + writel(readl(port->dd->mmio + HOST_IRQ_STAT), 428 + port->dd->mmio + HOST_IRQ_STAT); 440 429 441 430 /* Enable port interrupts */ 442 431 writel(DEF_PORT_IRQ, port->mmio + PORT_IRQ_MASK); ··· 467 446 while ((readl(port->mmio + PORT_CMD) & PORT_CMD_LIST_ON) 468 447 && time_before(jiffies, timeout)) 469 448 ; 449 + 450 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &port->dd->dd_flag)) 451 + return; 470 452 471 453 /* 472 454 * Chip quirk: escalate to hba reset if ··· 499 475 while (time_before(jiffies, timeout)) 500 476 ; 501 477 478 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &port->dd->dd_flag)) 479 + return; 480 + 502 481 /* Clear PxSCTL.DET */ 503 482 writel(readl(port->mmio + PORT_SCR_CTL) & ~1, 504 483 port->mmio + PORT_SCR_CTL); ··· 513 486 && time_before(jiffies, timeout)) 514 487 ; 515 488 489 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &port->dd->dd_flag)) 490 + return; 491 + 516 492 if ((readl(port->mmio + PORT_SCR_STAT) & 0x01) == 0) 517 493 dev_warn(&port->dd->pdev->dev, 518 494 "COM reset failed\n"); 519 495 520 - /* Clear SError, the PxSERR.DIAG.x should be set so clear it */ 521 - writel(readl(port->mmio + PORT_SCR_ERR), port->mmio + PORT_SCR_ERR); 496 + mtip_init_port(port); 497 + mtip_start_port(port); 522 498 523 - /* Enable the DMA engine */ 524 - mtip_enable_engine(port, 1); 499 + } 500 + 501 + /* 502 + * Helper function for tag logging 503 + */ 504 + static void print_tags(struct driver_data *dd, 505 + char *msg, 506 + unsigned long *tagbits, 507 + int cnt) 508 + { 509 + unsigned char tagmap[128]; 510 + int group, tagmap_len = 0; 511 + 512 + memset(tagmap, 0, sizeof(tagmap)); 513 + for (group = SLOTBITS_IN_LONGS; group > 0; group--) 514 + tagmap_len = sprintf(tagmap + tagmap_len, "%016lX ", 515 + tagbits[group-1]); 516 + dev_warn(&dd->pdev->dev, 517 + "%d command(s) %s: tagmap [%s]", cnt, msg, tagmap); 525 518 } 526 519 527 520 /* ··· 561 514 int tag, cmdto_cnt = 0; 562 515 unsigned int bit, group; 563 516 unsigned int num_command_slots = 
port->dd->slot_groups * 32; 517 + unsigned long to, tagaccum[SLOTBITS_IN_LONGS]; 564 518 565 519 if (unlikely(!port)) 566 520 return; 567 521 568 - if (atomic_read(&port->dd->resumeflag) == true) { 522 + if (test_bit(MTIP_DDF_RESUME_BIT, &port->dd->dd_flag)) { 569 523 mod_timer(&port->cmd_timer, 570 524 jiffies + msecs_to_jiffies(30000)); 571 525 return; 572 526 } 527 + /* clear the tag accumulator */ 528 + memset(tagaccum, 0, SLOTBITS_IN_LONGS * sizeof(long)); 573 529 574 530 for (tag = 0; tag < num_command_slots; tag++) { 575 531 /* ··· 590 540 command = &port->commands[tag]; 591 541 fis = (struct host_to_dev_fis *) command->command; 592 542 593 - dev_warn(&port->dd->pdev->dev, 594 - "Timeout for command tag %d\n", tag); 595 - 543 + set_bit(tag, tagaccum); 596 544 cmdto_cnt++; 597 545 if (cmdto_cnt == 1) 598 - set_bit(MTIP_FLAG_EH_ACTIVE_BIT, &port->flags); 546 + set_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags); 599 547 600 548 /* 601 549 * Clear the completed bit. This should prevent ··· 626 578 } 627 579 } 628 580 629 - if (cmdto_cnt) { 630 - dev_warn(&port->dd->pdev->dev, 631 - "%d commands timed out: restarting port", 632 - cmdto_cnt); 581 + if (cmdto_cnt && !test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) { 582 + print_tags(port->dd, "timed out", tagaccum, cmdto_cnt); 583 + 633 584 mtip_restart_port(port); 634 - clear_bit(MTIP_FLAG_EH_ACTIVE_BIT, &port->flags); 585 + clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags); 635 586 wake_up_interruptible(&port->svc_wait); 587 + } 588 + 589 + if (port->ic_pause_timer) { 590 + to = port->ic_pause_timer + msecs_to_jiffies(1000); 591 + if (time_after(jiffies, to)) { 592 + if (!test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) { 593 + port->ic_pause_timer = 0; 594 + clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags); 595 + clear_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags); 596 + clear_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); 597 + wake_up_interruptible(&port->svc_wait); 598 + } 599 + 600 + 601 + } 636 602 } 637 603 638 604 /* Restart the 
timer */ ··· 743 681 complete(waiting); 744 682 } 745 683 746 - /* 747 - * Helper function for tag logging 748 - */ 749 - static void print_tags(struct driver_data *dd, 750 - char *msg, 751 - unsigned long *tagbits) 684 + static void mtip_null_completion(struct mtip_port *port, 685 + int tag, 686 + void *data, 687 + int status) 752 688 { 753 - unsigned int tag, count = 0; 754 - 755 - for (tag = 0; tag < (dd->slot_groups) * 32; tag++) { 756 - if (test_bit(tag, tagbits)) 757 - count++; 758 - } 759 - if (count) 760 - dev_info(&dd->pdev->dev, "%s [%i tags]\n", msg, count); 689 + return; 761 690 } 762 691 692 + static int mtip_read_log_page(struct mtip_port *port, u8 page, u16 *buffer, 693 + dma_addr_t buffer_dma, unsigned int sectors); 694 + static int mtip_get_smart_attr(struct mtip_port *port, unsigned int id, 695 + struct smart_attr *attrib); 763 696 /* 764 697 * Handle an error. 765 698 * ··· 765 708 */ 766 709 static void mtip_handle_tfe(struct driver_data *dd) 767 710 { 768 - int group, tag, bit, reissue; 711 + int group, tag, bit, reissue, rv; 769 712 struct mtip_port *port; 770 - struct mtip_cmd *command; 713 + struct mtip_cmd *cmd; 771 714 u32 completed; 772 715 struct host_to_dev_fis *fis; 773 716 unsigned long tagaccum[SLOTBITS_IN_LONGS]; 717 + unsigned int cmd_cnt = 0; 718 + unsigned char *buf; 719 + char *fail_reason = NULL; 720 + int fail_all_ncq_write = 0, fail_all_ncq_cmds = 0; 774 721 775 722 dev_warn(&dd->pdev->dev, "Taskfile error\n"); 776 723 ··· 783 722 /* Stop the timer to prevent command timeouts. 
*/ 784 723 del_timer(&port->cmd_timer); 785 724 725 + /* clear the tag accumulator */ 726 + memset(tagaccum, 0, SLOTBITS_IN_LONGS * sizeof(long)); 727 + 786 728 /* Set eh_active */ 787 - set_bit(MTIP_FLAG_EH_ACTIVE_BIT, &port->flags); 729 + set_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags); 788 730 789 731 /* Loop through all the groups */ 790 732 for (group = 0; group < dd->slot_groups; group++) { ··· 795 731 796 732 /* clear completed status register in the hardware.*/ 797 733 writel(completed, port->completed[group]); 798 - 799 - /* clear the tag accumulator */ 800 - memset(tagaccum, 0, SLOTBITS_IN_LONGS * sizeof(long)); 801 734 802 735 /* Process successfully completed commands */ 803 736 for (bit = 0; bit < 32 && completed; bit++) { ··· 806 745 if (tag == MTIP_TAG_INTERNAL) 807 746 continue; 808 747 809 - command = &port->commands[tag]; 810 - if (likely(command->comp_func)) { 748 + cmd = &port->commands[tag]; 749 + if (likely(cmd->comp_func)) { 811 750 set_bit(tag, tagaccum); 812 - atomic_set(&port->commands[tag].active, 0); 813 - command->comp_func(port, 751 + cmd_cnt++; 752 + atomic_set(&cmd->active, 0); 753 + cmd->comp_func(port, 814 754 tag, 815 - command->comp_data, 755 + cmd->comp_data, 816 756 0); 817 757 } else { 818 758 dev_err(&port->dd->pdev->dev, ··· 827 765 } 828 766 } 829 767 } 830 - print_tags(dd, "TFE tags completed:", tagaccum); 768 + 769 + print_tags(dd, "completed (TFE)", tagaccum, cmd_cnt); 831 770 832 771 /* Restart the port */ 833 772 mdelay(20); 834 773 mtip_restart_port(port); 774 + 775 + /* Trying to determine the cause of the error */ 776 + rv = mtip_read_log_page(dd->port, ATA_LOG_SATA_NCQ, 777 + dd->port->log_buf, 778 + dd->port->log_buf_dma, 1); 779 + if (rv) { 780 + dev_warn(&dd->pdev->dev, 781 + "Error in READ LOG EXT (10h) command\n"); 782 + /* non-critical error, don't fail the load */ 783 + } else { 784 + buf = (unsigned char *)dd->port->log_buf; 785 + if (buf[259] & 0x1) { 786 + dev_info(&dd->pdev->dev, 787 + "Write protect bit is 
set.\n"); 788 + set_bit(MTIP_DDF_WRITE_PROTECT_BIT, &dd->dd_flag); 789 + fail_all_ncq_write = 1; 790 + fail_reason = "write protect"; 791 + } 792 + if (buf[288] == 0xF7) { 793 + dev_info(&dd->pdev->dev, 794 + "Exceeded Tmax, drive in thermal shutdown.\n"); 795 + set_bit(MTIP_DDF_OVER_TEMP_BIT, &dd->dd_flag); 796 + fail_all_ncq_cmds = 1; 797 + fail_reason = "thermal shutdown"; 798 + } 799 + if (buf[288] == 0xBF) { 800 + dev_info(&dd->pdev->dev, 801 + "Drive indicates rebuild has failed.\n"); 802 + fail_all_ncq_cmds = 1; 803 + fail_reason = "rebuild failed"; 804 + } 805 + } 835 806 836 807 /* clear the tag accumulator */ 837 808 memset(tagaccum, 0, SLOTBITS_IN_LONGS * sizeof(long)); ··· 874 779 for (bit = 0; bit < 32; bit++) { 875 780 reissue = 1; 876 781 tag = (group << 5) + bit; 782 + cmd = &port->commands[tag]; 877 783 878 784 /* If the active bit is set re-issue the command */ 879 - if (atomic_read(&port->commands[tag].active) == 0) 785 + if (atomic_read(&cmd->active) == 0) 880 786 continue; 881 787 882 - fis = (struct host_to_dev_fis *) 883 - port->commands[tag].command; 788 + fis = (struct host_to_dev_fis *)cmd->command; 884 789 885 790 /* Should re-issue? */ 886 791 if (tag == MTIP_TAG_INTERNAL || 887 792 fis->command == ATA_CMD_SET_FEATURES) 888 793 reissue = 0; 794 + else { 795 + if (fail_all_ncq_cmds || 796 + (fail_all_ncq_write && 797 + fis->command == ATA_CMD_FPDMA_WRITE)) { 798 + dev_warn(&dd->pdev->dev, 799 + " Fail: %s w/tag %d [%s].\n", 800 + fis->command == ATA_CMD_FPDMA_WRITE ? 801 + "write" : "read", 802 + tag, 803 + fail_reason != NULL ? 804 + fail_reason : "unknown"); 805 + atomic_set(&cmd->active, 0); 806 + if (cmd->comp_func) { 807 + cmd->comp_func(port, tag, 808 + cmd->comp_data, 809 + -ENODATA); 810 + } 811 + continue; 812 + } 813 + } 889 814 890 815 /* 891 816 * First check if this command has 892 817 * exceeded its retries. 
893 818 */ 894 - if (reissue && 895 - (port->commands[tag].retries-- > 0)) { 819 + if (reissue && (cmd->retries-- > 0)) { 896 820 897 821 set_bit(tag, tagaccum); 898 822 899 - /* Update the timeout value. */ 900 - port->commands[tag].comp_time = 901 - jiffies + msecs_to_jiffies( 902 - MTIP_NCQ_COMMAND_TIMEOUT_MS); 903 823 /* Re-issue the command. */ 904 824 mtip_issue_ncq_command(port, tag); 905 825 ··· 924 814 /* Retire a command that will not be reissued */ 925 815 dev_warn(&port->dd->pdev->dev, 926 816 "retiring tag %d\n", tag); 927 - atomic_set(&port->commands[tag].active, 0); 817 + atomic_set(&cmd->active, 0); 928 818 929 - if (port->commands[tag].comp_func) 930 - port->commands[tag].comp_func( 819 + if (cmd->comp_func) 820 + cmd->comp_func( 931 821 port, 932 822 tag, 933 - port->commands[tag].comp_data, 823 + cmd->comp_data, 934 824 PORT_IRQ_TF_ERR); 935 825 else 936 826 dev_warn(&port->dd->pdev->dev, ··· 938 828 tag); 939 829 } 940 830 } 941 - print_tags(dd, "TFE tags reissued:", tagaccum); 831 + print_tags(dd, "reissued (TFE)", tagaccum, cmd_cnt); 942 832 943 833 /* clear eh_active */ 944 - clear_bit(MTIP_FLAG_EH_ACTIVE_BIT, &port->flags); 834 + clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags); 945 835 wake_up_interruptible(&port->svc_wait); 946 836 947 837 mod_timer(&port->cmd_timer, ··· 1009 899 struct mtip_port *port = dd->port; 1010 900 struct mtip_cmd *cmd = &port->commands[MTIP_TAG_INTERNAL]; 1011 901 1012 - if (test_bit(MTIP_FLAG_IC_ACTIVE_BIT, &port->flags) && 902 + if (test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags) && 1013 903 (cmd != NULL) && !(readl(port->cmd_issue[MTIP_TAG_INTERNAL]) 1014 904 & (1 << MTIP_TAG_INTERNAL))) { 1015 905 if (cmd->comp_func) { ··· 1020 910 return; 1021 911 } 1022 912 } 1023 - 1024 - dev_warn(&dd->pdev->dev, "IRQ status 0x%x ignored.\n", port_stat); 1025 913 1026 914 return; 1027 915 } ··· 1076 968 /* don't proceed further */ 1077 969 return IRQ_HANDLED; 1078 970 } 971 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 972 + 
&dd->dd_flag)) 973 + return rv; 1079 974 1080 975 mtip_process_errors(dd, port_stat & PORT_IRQ_ERR); 1081 976 } ··· 1126 1015 port->cmd_issue[MTIP_TAG_INDEX(tag)]); 1127 1016 } 1128 1017 1018 + static bool mtip_pause_ncq(struct mtip_port *port, 1019 + struct host_to_dev_fis *fis) 1020 + { 1021 + struct host_to_dev_fis *reply; 1022 + unsigned long task_file_data; 1023 + 1024 + reply = port->rxfis + RX_FIS_D2H_REG; 1025 + task_file_data = readl(port->mmio+PORT_TFDATA); 1026 + 1027 + if ((task_file_data & 1) || (fis->command == ATA_CMD_SEC_ERASE_UNIT)) 1028 + return false; 1029 + 1030 + if (fis->command == ATA_CMD_SEC_ERASE_PREP) { 1031 + set_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags); 1032 + port->ic_pause_timer = jiffies; 1033 + return true; 1034 + } else if ((fis->command == ATA_CMD_DOWNLOAD_MICRO) && 1035 + (fis->features == 0x03)) { 1036 + set_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags); 1037 + port->ic_pause_timer = jiffies; 1038 + return true; 1039 + } else if ((fis->command == ATA_CMD_SEC_ERASE_UNIT) || 1040 + ((fis->command == 0xFC) && 1041 + (fis->features == 0x27 || fis->features == 0x72 || 1042 + fis->features == 0x62 || fis->features == 0x26))) { 1043 + /* Com reset after secure erase or lowlevel format */ 1044 + mtip_restart_port(port); 1045 + return false; 1046 + } 1047 + 1048 + return false; 1049 + } 1050 + 1129 1051 /* 1130 1052 * Wait for port to quiesce 1131 1053 * ··· 1177 1033 1178 1034 to = jiffies + msecs_to_jiffies(timeout); 1179 1035 do { 1180 - if (test_bit(MTIP_FLAG_SVC_THD_ACTIVE_BIT, &port->flags) && 1181 - test_bit(MTIP_FLAG_ISSUE_CMDS_BIT, &port->flags)) { 1036 + if (test_bit(MTIP_PF_SVC_THD_ACTIVE_BIT, &port->flags) && 1037 + test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags)) { 1182 1038 msleep(20); 1183 1039 continue; /* svc thd is actively issuing commands */ 1184 1040 } 1041 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &port->dd->dd_flag)) 1042 + return -EFAULT; 1185 1043 /* 1186 1044 * Ignore s_active bit 0 of array element 0. 
1187 1045 * This bit will always be set ··· 1220 1074 * -EAGAIN Time out waiting for command to complete. 1221 1075 */ 1222 1076 static int mtip_exec_internal_command(struct mtip_port *port, 1223 - void *fis, 1077 + struct host_to_dev_fis *fis, 1224 1078 int fis_len, 1225 1079 dma_addr_t buffer, 1226 1080 int buf_len, ··· 1230 1084 { 1231 1085 struct mtip_cmd_sg *command_sg; 1232 1086 DECLARE_COMPLETION_ONSTACK(wait); 1233 - int rv = 0; 1087 + int rv = 0, ready2go = 1; 1234 1088 struct mtip_cmd *int_cmd = &port->commands[MTIP_TAG_INTERNAL]; 1089 + unsigned long to; 1235 1090 1236 1091 /* Make sure the buffer is 8 byte aligned. This is asic specific. */ 1237 1092 if (buffer & 0x00000007) { ··· 1241 1094 return -EFAULT; 1242 1095 } 1243 1096 1244 - /* Only one internal command should be running at a time */ 1245 - if (test_and_set_bit(MTIP_TAG_INTERNAL, port->allocated)) { 1097 + to = jiffies + msecs_to_jiffies(timeout); 1098 + do { 1099 + ready2go = !test_and_set_bit(MTIP_TAG_INTERNAL, 1100 + port->allocated); 1101 + if (ready2go) 1102 + break; 1103 + mdelay(100); 1104 + } while (time_before(jiffies, to)); 1105 + if (!ready2go) { 1246 1106 dev_warn(&port->dd->pdev->dev, 1247 - "Internal command already active\n"); 1107 + "Internal cmd active. 
new cmd [%02X]\n", fis->command); 1248 1108 return -EBUSY; 1249 1109 } 1250 - set_bit(MTIP_FLAG_IC_ACTIVE_BIT, &port->flags); 1110 + set_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); 1111 + port->ic_pause_timer = 0; 1112 + 1113 + if (fis->command == ATA_CMD_SEC_ERASE_UNIT) 1114 + clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags); 1115 + else if (fis->command == ATA_CMD_DOWNLOAD_MICRO) 1116 + clear_bit(MTIP_PF_DM_ACTIVE_BIT, &port->flags); 1251 1117 1252 1118 if (atomic == GFP_KERNEL) { 1253 - /* wait for io to complete if non atomic */ 1254 - if (mtip_quiesce_io(port, 5000) < 0) { 1255 - dev_warn(&port->dd->pdev->dev, 1256 - "Failed to quiesce IO\n"); 1257 - release_slot(port, MTIP_TAG_INTERNAL); 1258 - clear_bit(MTIP_FLAG_IC_ACTIVE_BIT, &port->flags); 1259 - wake_up_interruptible(&port->svc_wait); 1260 - return -EBUSY; 1119 + if (fis->command != ATA_CMD_STANDBYNOW1) { 1120 + /* wait for io to complete if non atomic */ 1121 + if (mtip_quiesce_io(port, 5000) < 0) { 1122 + dev_warn(&port->dd->pdev->dev, 1123 + "Failed to quiesce IO\n"); 1124 + release_slot(port, MTIP_TAG_INTERNAL); 1125 + clear_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); 1126 + wake_up_interruptible(&port->svc_wait); 1127 + return -EBUSY; 1128 + } 1261 1129 } 1262 1130 1263 1131 /* Set the completion function and data for the command. 
*/ ··· 1282 1120 } else { 1283 1121 /* Clear completion - we're going to poll */ 1284 1122 int_cmd->comp_data = NULL; 1285 - int_cmd->comp_func = NULL; 1123 + int_cmd->comp_func = mtip_null_completion; 1286 1124 } 1287 1125 1288 1126 /* Copy the command to the command table */ ··· 1321 1159 "Internal command did not complete [%d] " 1322 1160 "within timeout of %lu ms\n", 1323 1161 atomic, timeout); 1162 + if (mtip_check_surprise_removal(port->dd->pdev) || 1163 + test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1164 + &port->dd->dd_flag)) { 1165 + rv = -ENXIO; 1166 + goto exec_ic_exit; 1167 + } 1324 1168 rv = -EAGAIN; 1325 1169 } 1326 1170 ··· 1334 1166 & (1 << MTIP_TAG_INTERNAL)) { 1335 1167 dev_warn(&port->dd->pdev->dev, 1336 1168 "Retiring internal command but CI is 1.\n"); 1169 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1170 + &port->dd->dd_flag)) { 1171 + hba_reset_nosleep(port->dd); 1172 + rv = -ENXIO; 1173 + } else { 1174 + mtip_restart_port(port); 1175 + rv = -EAGAIN; 1176 + } 1177 + goto exec_ic_exit; 1337 1178 } 1338 1179 1339 1180 } else { 1340 1181 /* Spin for <timeout> checking if command still outstanding */ 1341 1182 timeout = jiffies + msecs_to_jiffies(timeout); 1342 - 1343 - while ((readl( 1344 - port->cmd_issue[MTIP_TAG_INTERNAL]) 1345 - & (1 << MTIP_TAG_INTERNAL)) 1346 - && time_before(jiffies, timeout)) 1347 - ; 1183 + while ((readl(port->cmd_issue[MTIP_TAG_INTERNAL]) 1184 + & (1 << MTIP_TAG_INTERNAL)) 1185 + && time_before(jiffies, timeout)) { 1186 + if (mtip_check_surprise_removal(port->dd->pdev)) { 1187 + rv = -ENXIO; 1188 + goto exec_ic_exit; 1189 + } 1190 + if ((fis->command != ATA_CMD_STANDBYNOW1) && 1191 + test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1192 + &port->dd->dd_flag)) { 1193 + rv = -ENXIO; 1194 + goto exec_ic_exit; 1195 + } 1196 + } 1348 1197 1349 1198 if (readl(port->cmd_issue[MTIP_TAG_INTERNAL]) 1350 1199 & (1 << MTIP_TAG_INTERNAL)) { 1351 1200 dev_err(&port->dd->pdev->dev, 1352 - "Internal command did not complete [%d]\n", 1353 - atomic); 1201 
+ "Internal command did not complete [atomic]\n"); 1354 1202 rv = -EAGAIN; 1203 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 1204 + &port->dd->dd_flag)) { 1205 + hba_reset_nosleep(port->dd); 1206 + rv = -ENXIO; 1207 + } else { 1208 + mtip_restart_port(port); 1209 + rv = -EAGAIN; 1210 + } 1355 1211 } 1356 1212 } 1357 - 1213 + exec_ic_exit: 1358 1214 /* Clear the allocated and active bits for the internal command. */ 1359 1215 atomic_set(&int_cmd->active, 0); 1360 1216 release_slot(port, MTIP_TAG_INTERNAL); 1361 - clear_bit(MTIP_FLAG_IC_ACTIVE_BIT, &port->flags); 1217 + if (rv >= 0 && mtip_pause_ncq(port, fis)) { 1218 + /* NCQ paused */ 1219 + return rv; 1220 + } 1221 + clear_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags); 1362 1222 wake_up_interruptible(&port->svc_wait); 1363 1223 1364 1224 return rv; ··· 1435 1239 { 1436 1240 int rv = 0; 1437 1241 struct host_to_dev_fis fis; 1242 + 1243 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &port->dd->dd_flag)) 1244 + return -EFAULT; 1438 1245 1439 1246 /* Build the FIS. */ 1440 1247 memset(&fis, 0, sizeof(struct host_to_dev_fis)); ··· 1512 1313 { 1513 1314 int rv; 1514 1315 struct host_to_dev_fis fis; 1316 + unsigned long start; 1515 1317 1516 1318 /* Build the FIS. */ 1517 1319 memset(&fis, 0, sizeof(struct host_to_dev_fis)); ··· 1520 1320 fis.opts = 1 << 7; 1521 1321 fis.command = ATA_CMD_STANDBYNOW1; 1522 1322 1523 - /* Execute the command. Use a 15-second timeout for large drives. */ 1323 + start = jiffies; 1524 1324 rv = mtip_exec_internal_command(port, 1525 1325 &fis, 1526 1326 5, 1527 1327 0, 1528 1328 0, 1529 1329 0, 1530 - GFP_KERNEL, 1330 + GFP_ATOMIC, 1531 1331 15000); 1332 + dbg_printk(MTIP_DRV_NAME "Time taken to complete standby cmd: %d ms\n", 1333 + jiffies_to_msecs(jiffies - start)); 1334 + if (rv) 1335 + dev_warn(&port->dd->pdev->dev, 1336 + "STANDBY IMMEDIATE command failed.\n"); 1337 + 1338 + return rv; 1339 + } 1340 + 1341 + /* 1342 + * Issue a READ LOG EXT command to the device. 
1343 + * 1344 + * @port pointer to the port structure. 1345 + * @page page number to fetch 1346 + * @buffer pointer to buffer 1347 + * @buffer_dma dma address corresponding to @buffer 1348 + * @sectors page length to fetch, in sectors 1349 + * 1350 + * return value 1351 + * @rv return value from mtip_exec_internal_command() 1352 + */ 1353 + static int mtip_read_log_page(struct mtip_port *port, u8 page, u16 *buffer, 1354 + dma_addr_t buffer_dma, unsigned int sectors) 1355 + { 1356 + struct host_to_dev_fis fis; 1357 + 1358 + memset(&fis, 0, sizeof(struct host_to_dev_fis)); 1359 + fis.type = 0x27; 1360 + fis.opts = 1 << 7; 1361 + fis.command = ATA_CMD_READ_LOG_EXT; 1362 + fis.sect_count = sectors & 0xFF; 1363 + fis.sect_cnt_ex = (sectors >> 8) & 0xFF; 1364 + fis.lba_low = page; 1365 + fis.lba_mid = 0; 1366 + fis.device = ATA_DEVICE_OBS; 1367 + 1368 + memset(buffer, 0, sectors * ATA_SECT_SIZE); 1369 + 1370 + return mtip_exec_internal_command(port, 1371 + &fis, 1372 + 5, 1373 + buffer_dma, 1374 + sectors * ATA_SECT_SIZE, 1375 + 0, 1376 + GFP_ATOMIC, 1377 + MTIP_INTERNAL_COMMAND_TIMEOUT_MS); 1378 + } 1379 + 1380 + /* 1381 + * Issue a SMART READ DATA command to the device. 1382 + * 1383 + * @port pointer to the port structure. 
1384 + * @buffer pointer to buffer 1385 + * @buffer_dma dma address corresponding to @buffer 1386 + * 1387 + * return value 1388 + * @rv return value from mtip_exec_internal_command() 1389 + */ 1390 + static int mtip_get_smart_data(struct mtip_port *port, u8 *buffer, 1391 + dma_addr_t buffer_dma) 1392 + { 1393 + struct host_to_dev_fis fis; 1394 + 1395 + memset(&fis, 0, sizeof(struct host_to_dev_fis)); 1396 + fis.type = 0x27; 1397 + fis.opts = 1 << 7; 1398 + fis.command = ATA_CMD_SMART; 1399 + fis.features = 0xD0; 1400 + fis.sect_count = 1; 1401 + fis.lba_mid = 0x4F; 1402 + fis.lba_hi = 0xC2; 1403 + fis.device = ATA_DEVICE_OBS; 1404 + 1405 + return mtip_exec_internal_command(port, 1406 + &fis, 1407 + 5, 1408 + buffer_dma, 1409 + ATA_SECT_SIZE, 1410 + 0, 1411 + GFP_ATOMIC, 1412 + 15000); 1413 + } 1414 + 1415 + /* 1416 + * Get the value of a smart attribute 1417 + * 1418 + * @port pointer to the port structure 1419 + * @id attribute number 1420 + * @attrib pointer to return attrib information corresponding to @id 1421 + * 1422 + * return value 1423 + * -EINVAL NULL buffer passed or unsupported attribute @id. 
1424 + * -EPERM Identify data not valid, SMART not supported or not enabled 1425 + */ 1426 + static int mtip_get_smart_attr(struct mtip_port *port, unsigned int id, 1427 + struct smart_attr *attrib) 1428 + { 1429 + int rv, i; 1430 + struct smart_attr *pattr; 1431 + 1432 + if (!attrib) 1433 + return -EINVAL; 1434 + 1435 + if (!port->identify_valid) { 1436 + dev_warn(&port->dd->pdev->dev, "IDENTIFY DATA not valid\n"); 1437 + return -EPERM; 1438 + } 1439 + if (!(port->identify[82] & 0x1)) { 1440 + dev_warn(&port->dd->pdev->dev, "SMART not supported\n"); 1441 + return -EPERM; 1442 + } 1443 + if (!(port->identify[85] & 0x1)) { 1444 + dev_warn(&port->dd->pdev->dev, "SMART not enabled\n"); 1445 + return -EPERM; 1446 + } 1447 + 1448 + memset(port->smart_buf, 0, ATA_SECT_SIZE); 1449 + rv = mtip_get_smart_data(port, port->smart_buf, port->smart_buf_dma); 1450 + if (rv) { 1451 + dev_warn(&port->dd->pdev->dev, "Failed to get SMART data\n"); 1452 + return rv; 1453 + } 1454 + 1455 + pattr = (struct smart_attr *)(port->smart_buf + 2); 1456 + for (i = 0; i < 29; i++, pattr++) 1457 + if (pattr->attr_id == id) { 1458 + memcpy(attrib, pattr, sizeof(struct smart_attr)); 1459 + break; 1460 + } 1461 + 1462 + if (i == 29) { 1463 + dev_warn(&port->dd->pdev->dev, 1464 + "Query for invalid SMART attribute ID\n"); 1465 + rv = -EINVAL; 1466 + } 1532 1467 1533 1468 return rv; 1534 1469 } ··· 1839 1504 fis.cyl_hi = command[5]; 1840 1505 fis.device = command[6] & ~0x10; /* Clear the dev bit*/ 1841 1506 1842 - 1843 - dbg_printk(MTIP_DRV_NAME "%s: User Command: cmd %x, feat %x, " 1844 - "nsect %x, sect %x, lcyl %x, " 1845 - "hcyl %x, sel %x\n", 1507 + dbg_printk(MTIP_DRV_NAME " %s: User Command: cmd %x, feat %x, nsect %x, sect %x, lcyl %x, hcyl %x, sel %x\n", 1846 1508 __func__, 1847 1509 command[0], 1848 1510 command[1], ··· 1866 1534 command[4] = reply->cyl_low; 1867 1535 command[5] = reply->cyl_hi; 1868 1536 1869 - dbg_printk(MTIP_DRV_NAME "%s: Completion Status: stat %x, " 1870 - "err %x , 
cyl_lo %x cyl_hi %x\n", 1537 + dbg_printk(MTIP_DRV_NAME " %s: Completion Status: stat %x, err %x , cyl_lo %x cyl_hi %x\n", 1871 1538 __func__, 1872 1539 command[0], 1873 1540 command[1], ··· 1909 1578 } 1910 1579 1911 1580 dbg_printk(MTIP_DRV_NAME 1912 - "%s: User Command: cmd %x, sect %x, " 1581 + " %s: User Command: cmd %x, sect %x, " 1913 1582 "feat %x, sectcnt %x\n", 1914 1583 __func__, 1915 1584 command[0], ··· 1938 1607 command[2] = command[3]; 1939 1608 1940 1609 dbg_printk(MTIP_DRV_NAME 1941 - "%s: Completion Status: stat %x, " 1610 + " %s: Completion Status: stat %x, " 1942 1611 "err %x, cmd %x\n", 1943 1612 __func__, 1944 1613 command[0], ··· 2141 1810 } 2142 1811 2143 1812 dbg_printk(MTIP_DRV_NAME 2144 - "taskfile: cmd %x, feat %x, nsect %x," 1813 + " %s: cmd %x, feat %x, nsect %x," 2145 1814 " sect/lbal %x, lcyl/lbam %x, hcyl/lbah %x," 2146 1815 " head/dev %x\n", 1816 + __func__, 2147 1817 fis.command, 2148 1818 fis.features, 2149 1819 fis.sect_count, ··· 2155 1823 2156 1824 switch (fis.command) { 2157 1825 case ATA_CMD_DOWNLOAD_MICRO: 2158 - /* Change timeout for Download Microcode to 60 seconds.*/ 2159 - timeout = 60000; 1826 + /* Change timeout for Download Microcode to 2 minutes */ 1827 + timeout = 120000; 2160 1828 break; 2161 1829 case ATA_CMD_SEC_ERASE_UNIT: 2162 1830 /* Change timeout for Security Erase Unit to 4 minutes.*/ ··· 2172 1840 timeout = 10000; 2173 1841 break; 2174 1842 case ATA_CMD_SMART: 2175 - /* Change timeout for vendor unique command to 10 secs */ 2176 - timeout = 10000; 1843 + /* Change timeout for vendor unique command to 15 secs */ 1844 + timeout = 15000; 2177 1845 break; 2178 1846 default: 2179 1847 timeout = MTIP_IOCTL_COMMAND_TIMEOUT_MS; ··· 2235 1903 req_task->hob_ports[1] = reply->features_ex; 2236 1904 req_task->hob_ports[2] = reply->sect_cnt_ex; 2237 1905 } 2238 - 2239 - /* Com rest after secure erase or lowlevel format */ 2240 - if (((fis.command == ATA_CMD_SEC_ERASE_UNIT) || 2241 - ((fis.command == 0xFC) && 2242 - 
(fis.features == 0x27 || fis.features == 0x72 || 2243 - fis.features == 0x62 || fis.features == 0x26))) && 2244 - !(reply->command & 1)) { 2245 - mtip_restart_port(dd->port); 2246 - } 2247 - 2248 1906 dbg_printk(MTIP_DRV_NAME 2249 - "%s: Completion: stat %x," 1907 + " %s: Completion: stat %x," 2250 1908 "err %x, sect_cnt %x, lbalo %x," 2251 1909 "lbamid %x, lbahi %x, dev %x\n", 2252 1910 __func__, ··· 2402 2080 struct host_to_dev_fis *fis; 2403 2081 struct mtip_port *port = dd->port; 2404 2082 struct mtip_cmd *command = &port->commands[tag]; 2083 + int dma_dir = (dir == READ) ? DMA_FROM_DEVICE : DMA_TO_DEVICE; 2405 2084 2406 2085 /* Map the scatter list for DMA access */ 2407 - if (dir == READ) 2408 - nents = dma_map_sg(&dd->pdev->dev, command->sg, 2409 - nents, DMA_FROM_DEVICE); 2410 - else 2411 - nents = dma_map_sg(&dd->pdev->dev, command->sg, 2412 - nents, DMA_TO_DEVICE); 2086 + nents = dma_map_sg(&dd->pdev->dev, command->sg, nents, dma_dir); 2413 2087 2414 2088 command->scatter_ents = nents; 2415 2089 ··· 2445 2127 */ 2446 2128 command->comp_data = dd; 2447 2129 command->comp_func = mtip_async_complete; 2448 - command->direction = (dir == READ ? DMA_FROM_DEVICE : DMA_TO_DEVICE); 2130 + command->direction = dma_dir; 2449 2131 2450 2132 /* 2451 2133 * Set the completion function and data for the command passed ··· 2458 2140 * To prevent this command from being issued 2459 2141 * if an internal command is in progress or error handling is active. 
2460 2142 */ 2461 - if (unlikely(test_bit(MTIP_FLAG_IC_ACTIVE_BIT, &port->flags) || 2462 - test_bit(MTIP_FLAG_EH_ACTIVE_BIT, &port->flags))) { 2143 + if (port->flags & MTIP_PF_PAUSE_IO) { 2463 2144 set_bit(tag, port->cmds_to_issue); 2464 - set_bit(MTIP_FLAG_ISSUE_CMDS_BIT, &port->flags); 2145 + set_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags); 2465 2146 return; 2466 2147 } 2467 2148 2468 2149 /* Issue the command to the hardware */ 2469 2150 mtip_issue_ncq_command(port, tag); 2470 2151 2471 - /* Set the command's timeout value.*/ 2472 - port->commands[tag].comp_time = jiffies + msecs_to_jiffies( 2473 - MTIP_NCQ_COMMAND_TIMEOUT_MS); 2152 + return; 2474 2153 } 2475 2154 2476 2155 /* ··· 2506 2191 down(&dd->port->cmd_slot); 2507 2192 *tag = get_slot(dd->port); 2508 2193 2194 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag))) { 2195 + up(&dd->port->cmd_slot); 2196 + return NULL; 2197 + } 2509 2198 if (unlikely(*tag < 0)) 2510 2199 return NULL; 2511 2200 ··· 2526 2207 * return value 2527 2208 * The size, in bytes, of the data copied into buf. 
2528 2209 */ 2529 - static ssize_t hw_show_registers(struct device *dev, 2210 + static ssize_t mtip_hw_show_registers(struct device *dev, 2530 2211 struct device_attribute *attr, 2531 2212 char *buf) 2532 2213 { ··· 2535 2216 int size = 0; 2536 2217 int n; 2537 2218 2538 - size += sprintf(&buf[size], "%s:\ns_active:\n", __func__); 2219 + size += sprintf(&buf[size], "S ACTive:\n"); 2539 2220 2540 2221 for (n = 0; n < dd->slot_groups; n++) 2541 2222 size += sprintf(&buf[size], "0x%08x\n", ··· 2559 2240 group_allocated); 2560 2241 } 2561 2242 2562 - size += sprintf(&buf[size], "completed:\n"); 2243 + size += sprintf(&buf[size], "Completed:\n"); 2563 2244 2564 2245 for (n = 0; n < dd->slot_groups; n++) 2565 2246 size += sprintf(&buf[size], "0x%08x\n", 2566 2247 readl(dd->port->completed[n])); 2567 2248 2568 - size += sprintf(&buf[size], "PORT_IRQ_STAT 0x%08x\n", 2249 + size += sprintf(&buf[size], "PORT IRQ STAT : 0x%08x\n", 2569 2250 readl(dd->port->mmio + PORT_IRQ_STAT)); 2570 - size += sprintf(&buf[size], "HOST_IRQ_STAT 0x%08x\n", 2251 + size += sprintf(&buf[size], "HOST IRQ STAT : 0x%08x\n", 2571 2252 readl(dd->mmio + HOST_IRQ_STAT)); 2572 2253 2573 2254 return size; 2574 2255 } 2575 - static DEVICE_ATTR(registers, S_IRUGO, hw_show_registers, NULL); 2256 + 2257 + static ssize_t mtip_hw_show_status(struct device *dev, 2258 + struct device_attribute *attr, 2259 + char *buf) 2260 + { 2261 + struct driver_data *dd = dev_to_disk(dev)->private_data; 2262 + int size = 0; 2263 + 2264 + if (test_bit(MTIP_DDF_OVER_TEMP_BIT, &dd->dd_flag)) 2265 + size += sprintf(buf, "%s", "thermal_shutdown\n"); 2266 + else if (test_bit(MTIP_DDF_WRITE_PROTECT_BIT, &dd->dd_flag)) 2267 + size += sprintf(buf, "%s", "write_protect\n"); 2268 + else 2269 + size += sprintf(buf, "%s", "online\n"); 2270 + 2271 + return size; 2272 + } 2273 + 2274 + static DEVICE_ATTR(registers, S_IRUGO, mtip_hw_show_registers, NULL); 2275 + static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL); 2576 2276 2577 
2277 /* 2578 2278 * Create the sysfs related attributes. ··· 2610 2272 2611 2273 if (sysfs_create_file(kobj, &dev_attr_registers.attr)) 2612 2274 dev_warn(&dd->pdev->dev, 2613 - "Error creating registers sysfs entry\n"); 2275 + "Error creating 'registers' sysfs entry\n"); 2276 + if (sysfs_create_file(kobj, &dev_attr_status.attr)) 2277 + dev_warn(&dd->pdev->dev, 2278 + "Error creating 'status' sysfs entry\n"); 2614 2279 return 0; 2615 2280 } 2616 2281 ··· 2633 2292 return -EINVAL; 2634 2293 2635 2294 sysfs_remove_file(kobj, &dev_attr_registers.attr); 2295 + sysfs_remove_file(kobj, &dev_attr_status.attr); 2636 2296 2637 2297 return 0; 2638 2298 } ··· 2726 2384 "FTL rebuild in progress. Polling for completion.\n"); 2727 2385 2728 2386 start = jiffies; 2729 - dd->ftlrebuildflag = 1; 2730 2387 timeout = jiffies + msecs_to_jiffies(MTIP_FTL_REBUILD_TIMEOUT_MS); 2731 2388 2732 2389 do { 2390 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 2391 + &dd->dd_flag))) 2392 + return -EFAULT; 2733 2393 if (mtip_check_surprise_removal(dd->pdev)) 2734 2394 return -EFAULT; 2735 2395 ··· 2752 2408 dev_warn(&dd->pdev->dev, 2753 2409 "FTL rebuild complete (%d secs).\n", 2754 2410 jiffies_to_msecs(jiffies - start) / 1000); 2755 - dd->ftlrebuildflag = 0; 2756 2411 mtip_block_initialize(dd); 2757 - break; 2412 + return 0; 2758 2413 } 2759 2414 ssleep(10); 2760 2415 } while (time_before(jiffies, timeout)); 2761 2416 2762 2417 /* Check for timeout */ 2763 - if (dd->ftlrebuildflag) { 2764 - dev_err(&dd->pdev->dev, 2418 + dev_err(&dd->pdev->dev, 2765 2419 "Timed out waiting for FTL rebuild to complete (%d secs).\n", 2766 2420 jiffies_to_msecs(jiffies - start) / 1000); 2767 - return -EFAULT; 2768 - } 2769 - 2770 - return 0; 2421 + return -EFAULT; 2771 2422 } 2772 2423 2773 2424 /* ··· 2787 2448 * is in progress nor error handling is active 2788 2449 */ 2789 2450 wait_event_interruptible(port->svc_wait, (port->flags) && 2790 - !test_bit(MTIP_FLAG_IC_ACTIVE_BIT, &port->flags) && 2791 - 
!test_bit(MTIP_FLAG_EH_ACTIVE_BIT, &port->flags)); 2451 + !(port->flags & MTIP_PF_PAUSE_IO)); 2792 2452 2793 2453 if (kthread_should_stop()) 2794 2454 break; 2795 2455 2796 - set_bit(MTIP_FLAG_SVC_THD_ACTIVE_BIT, &port->flags); 2797 - if (test_bit(MTIP_FLAG_ISSUE_CMDS_BIT, &port->flags)) { 2456 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 2457 + &dd->dd_flag))) 2458 + break; 2459 + 2460 + set_bit(MTIP_PF_SVC_THD_ACTIVE_BIT, &port->flags); 2461 + if (test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags)) { 2798 2462 slot = 1; 2799 2463 /* used to restrict the loop to one iteration */ 2800 2464 slot_start = num_cmd_slots; ··· 2822 2480 /* Issue the command to the hardware */ 2823 2481 mtip_issue_ncq_command(port, slot); 2824 2482 2825 - /* Set the command's timeout value.*/ 2826 - port->commands[slot].comp_time = jiffies + 2827 - msecs_to_jiffies(MTIP_NCQ_COMMAND_TIMEOUT_MS); 2828 - 2829 2483 clear_bit(slot, port->cmds_to_issue); 2830 2484 } 2831 2485 2832 - clear_bit(MTIP_FLAG_ISSUE_CMDS_BIT, &port->flags); 2833 - } else if (test_bit(MTIP_FLAG_REBUILD_BIT, &port->flags)) { 2834 - mtip_ftl_rebuild_poll(dd); 2835 - clear_bit(MTIP_FLAG_REBUILD_BIT, &port->flags); 2486 + clear_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags); 2487 + } else if (test_bit(MTIP_PF_REBUILD_BIT, &port->flags)) { 2488 + if (!mtip_ftl_rebuild_poll(dd)) 2489 + set_bit(MTIP_DDF_REBUILD_FAILED_BIT, 2490 + &dd->dd_flag); 2491 + clear_bit(MTIP_PF_REBUILD_BIT, &port->flags); 2836 2492 } 2837 - clear_bit(MTIP_FLAG_SVC_THD_ACTIVE_BIT, &port->flags); 2493 + clear_bit(MTIP_PF_SVC_THD_ACTIVE_BIT, &port->flags); 2838 2494 2839 - if (test_bit(MTIP_FLAG_SVC_THD_SHOULD_STOP_BIT, &port->flags)) 2495 + if (test_bit(MTIP_PF_SVC_THD_STOP_BIT, &port->flags)) 2840 2496 break; 2841 2497 } 2842 2498 return 0; ··· 2853 2513 int i; 2854 2514 int rv; 2855 2515 unsigned int num_command_slots; 2516 + unsigned long timeout, timetaken; 2517 + unsigned char *buf; 2518 + struct smart_attr attr242; 2856 2519 2857 2520 dd->mmio = 
pcim_iomap_table(dd->pdev)[MTIP_ABAR]; 2858 2521 ··· 2890 2547 /* Allocate memory for the command list. */ 2891 2548 dd->port->command_list = 2892 2549 dmam_alloc_coherent(&dd->pdev->dev, 2893 - HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 2), 2550 + HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 4), 2894 2551 &dd->port->command_list_dma, 2895 2552 GFP_KERNEL); 2896 2553 if (!dd->port->command_list) { ··· 2903 2560 /* Clear the memory we have allocated. */ 2904 2561 memset(dd->port->command_list, 2905 2562 0, 2906 - HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 2)); 2563 + HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 4)); 2907 2564 2908 2565 /* Setup the addresse of the RX FIS. */ 2909 2566 dd->port->rxfis = dd->port->command_list + HW_CMD_SLOT_SZ; ··· 2919 2576 dd->port->identify_dma = dd->port->command_tbl_dma + 2920 2577 HW_CMD_TBL_AR_SZ; 2921 2578 2922 - /* Setup the address of the sector buffer. */ 2579 + /* Setup the address of the sector buffer - for some non-ncq cmds */ 2923 2580 dd->port->sector_buffer = (void *) dd->port->identify + ATA_SECT_SIZE; 2924 2581 dd->port->sector_buffer_dma = dd->port->identify_dma + ATA_SECT_SIZE; 2582 + 2583 + /* Setup the address of the log buf - for read log command */ 2584 + dd->port->log_buf = (void *)dd->port->sector_buffer + ATA_SECT_SIZE; 2585 + dd->port->log_buf_dma = dd->port->sector_buffer_dma + ATA_SECT_SIZE; 2586 + 2587 + /* Setup the address of the smart buf - for smart read data command */ 2588 + dd->port->smart_buf = (void *)dd->port->log_buf + ATA_SECT_SIZE; 2589 + dd->port->smart_buf_dma = dd->port->log_buf_dma + ATA_SECT_SIZE; 2590 + 2925 2591 2926 2592 /* Point the command headers at the command tables. */ 2927 2593 for (i = 0; i < num_command_slots; i++) { ··· 2975 2623 dd->port->mmio + i*0x80 + PORT_SDBV; 2976 2624 } 2977 2625 2978 - /* Reset the HBA. 
*/ 2979 - if (mtip_hba_reset(dd) < 0) { 2980 - dev_err(&dd->pdev->dev, 2981 - "Card did not reset within timeout\n"); 2982 - rv = -EIO; 2626 + timetaken = jiffies; 2627 + timeout = jiffies + msecs_to_jiffies(30000); 2628 + while (((readl(dd->port->mmio + PORT_SCR_STAT) & 0x0F) != 0x03) && 2629 + time_before(jiffies, timeout)) { 2630 + mdelay(100); 2631 + } 2632 + if (unlikely(mtip_check_surprise_removal(dd->pdev))) { 2633 + timetaken = jiffies - timetaken; 2634 + dev_warn(&dd->pdev->dev, 2635 + "Surprise removal detected at %u ms\n", 2636 + jiffies_to_msecs(timetaken)); 2637 + rv = -ENODEV; 2638 + goto out2; 2639 + } 2640 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag))) { 2641 + timetaken = jiffies - timetaken; 2642 + dev_warn(&dd->pdev->dev, 2643 + "Removal detected at %u ms\n", 2644 + jiffies_to_msecs(timetaken)); 2645 + rv = -EFAULT; 2983 2646 goto out2; 2647 + } 2648 + 2649 + /* Conditionally reset the HBA. */ 2650 + if (!(readl(dd->mmio + HOST_CAP) & HOST_CAP_NZDMA)) { 2651 + if (mtip_hba_reset(dd) < 0) { 2652 + dev_err(&dd->pdev->dev, 2653 + "Card did not reset within timeout\n"); 2654 + rv = -EIO; 2655 + goto out2; 2656 + } 2657 + } else { 2658 + /* Clear any pending interrupts on the HBA */ 2659 + writel(readl(dd->mmio + HOST_IRQ_STAT), 2660 + dd->mmio + HOST_IRQ_STAT); 2984 2661 } 2985 2662 2986 2663 mtip_init_port(dd->port); ··· 3041 2660 mod_timer(&dd->port->cmd_timer, 3042 2661 jiffies + msecs_to_jiffies(MTIP_TIMEOUT_CHECK_PERIOD)); 3043 2662 2663 + 2664 + if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag)) { 2665 + rv = -EFAULT; 2666 + goto out3; 2667 + } 2668 + 3044 2669 if (mtip_get_identify(dd->port, NULL) < 0) { 3045 2670 rv = -EFAULT; 3046 2671 goto out3; ··· 3054 2667 3055 2668 if (*(dd->port->identify + MTIP_FTL_REBUILD_OFFSET) == 3056 2669 MTIP_FTL_REBUILD_MAGIC) { 3057 - set_bit(MTIP_FLAG_REBUILD_BIT, &dd->port->flags); 2670 + set_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags); 3058 2671 return MTIP_FTL_REBUILD_MAGIC; 
3059 2672 } 3060 2673 mtip_dump_identify(dd->port); 2674 + 2675 + /* check write protect, over temp and rebuild statuses */ 2676 + rv = mtip_read_log_page(dd->port, ATA_LOG_SATA_NCQ, 2677 + dd->port->log_buf, 2678 + dd->port->log_buf_dma, 1); 2679 + if (rv) { 2680 + dev_warn(&dd->pdev->dev, 2681 + "Error in READ LOG EXT (10h) command\n"); 2682 + /* non-critical error, don't fail the load */ 2683 + } else { 2684 + buf = (unsigned char *)dd->port->log_buf; 2685 + if (buf[259] & 0x1) { 2686 + dev_info(&dd->pdev->dev, 2687 + "Write protect bit is set.\n"); 2688 + set_bit(MTIP_DDF_WRITE_PROTECT_BIT, &dd->dd_flag); 2689 + } 2690 + if (buf[288] == 0xF7) { 2691 + dev_info(&dd->pdev->dev, 2692 + "Exceeded Tmax, drive in thermal shutdown.\n"); 2693 + set_bit(MTIP_DDF_OVER_TEMP_BIT, &dd->dd_flag); 2694 + } 2695 + if (buf[288] == 0xBF) { 2696 + dev_info(&dd->pdev->dev, 2697 + "Drive indicates rebuild has failed.\n"); 2698 + /* TODO */ 2699 + } 2700 + } 2701 + 2702 + /* get write protect progress */ 2703 + memset(&attr242, 0, sizeof(struct smart_attr)); 2704 + if (mtip_get_smart_attr(dd->port, 242, &attr242)) 2705 + dev_warn(&dd->pdev->dev, 2706 + "Unable to check write protect progress\n"); 2707 + else 2708 + dev_info(&dd->pdev->dev, 2709 + "Write protect progress: %d%% (%d blocks)\n", 2710 + attr242.cur, attr242.data); 3061 2711 return rv; 3062 2712 3063 2713 out3: ··· 3112 2688 3113 2689 /* Free the command/command header memory. */ 3114 2690 dmam_free_coherent(&dd->pdev->dev, 3115 - HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 2), 2691 + HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 4), 3116 2692 dd->port->command_list, 3117 2693 dd->port->command_list_dma); 3118 2694 out1: ··· 3136 2712 * Send standby immediate (E0h) to the drive so that it 3137 2713 * saves its state. 
3138 2714 */ 3139 - if (atomic_read(&dd->drv_cleanup_done) != true) { 2715 + if (!test_bit(MTIP_DDF_CLEANUP_BIT, &dd->dd_flag)) { 3140 2716 3141 - mtip_standby_immediate(dd->port); 2717 + if (!test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags)) 2718 + if (mtip_standby_immediate(dd->port)) 2719 + dev_warn(&dd->pdev->dev, 2720 + "STANDBY IMMEDIATE failed\n"); 3142 2721 3143 2722 /* de-initialize the port. */ 3144 2723 mtip_deinit_port(dd->port); ··· 3161 2734 3162 2735 /* Free the command/command header memory. */ 3163 2736 dmam_free_coherent(&dd->pdev->dev, 3164 - HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 2), 2737 + HW_PORT_PRIV_DMA_SZ + (ATA_SECT_SIZE * 4), 3165 2738 dd->port->command_list, 3166 2739 dd->port->command_list_dma); 3167 2740 /* Free the memory allocated for the for structure. */ ··· 3319 2892 if (!dd) 3320 2893 return -ENOTTY; 3321 2894 2895 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag))) 2896 + return -ENOTTY; 2897 + 3322 2898 switch (cmd) { 3323 2899 case BLKFLSBUF: 3324 2900 return -ENOTTY; ··· 3355 2925 return -EACCES; 3356 2926 3357 2927 if (!dd) 2928 + return -ENOTTY; 2929 + 2930 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag))) 3358 2931 return -ENOTTY; 3359 2932 3360 2933 switch (cmd) { ··· 3482 3049 int nents = 0; 3483 3050 int tag = 0; 3484 3051 3052 + if (unlikely(dd->dd_flag & MTIP_DDF_STOP_IO)) { 3053 + if (unlikely(test_bit(MTIP_DDF_REMOVE_PENDING_BIT, 3054 + &dd->dd_flag))) { 3055 + bio_endio(bio, -ENXIO); 3056 + return; 3057 + } 3058 + if (unlikely(test_bit(MTIP_DDF_OVER_TEMP_BIT, &dd->dd_flag))) { 3059 + bio_endio(bio, -ENODATA); 3060 + return; 3061 + } 3062 + if (unlikely(test_bit(MTIP_DDF_WRITE_PROTECT_BIT, 3063 + &dd->dd_flag) && 3064 + bio_data_dir(bio))) { 3065 + bio_endio(bio, -ENODATA); 3066 + return; 3067 + } 3068 + } 3069 + 3485 3070 if (unlikely(!bio_has_data(bio))) { 3486 3071 blk_queue_flush(queue, 0); 3487 3072 bio_endio(bio, 0); ··· 3512 3061 3513 3062 if (unlikely((bio)->bi_vcnt > 
MTIP_MAX_SG)) { 3514 3063 dev_warn(&dd->pdev->dev, 3515 - "Maximum number of SGL entries exceeded"); 3064 + "Maximum number of SGL entries exceeded\n"); 3516 3065 bio_io_error(bio); 3517 3066 mtip_hw_release_scatterlist(dd, tag); 3518 3067 return; ··· 3661 3210 kobject_put(kobj); 3662 3211 } 3663 3212 3664 - if (dd->mtip_svc_handler) 3213 + if (dd->mtip_svc_handler) { 3214 + set_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag); 3665 3215 return rv; /* service thread created for handling rebuild */ 3216 + } 3666 3217 3667 3218 start_service_thread: 3668 3219 sprintf(thd_name, "mtip_svc_thd_%02d", index); ··· 3673 3220 dd, thd_name); 3674 3221 3675 3222 if (IS_ERR(dd->mtip_svc_handler)) { 3676 - printk(KERN_ERR "mtip32xx: service thread failed to start\n"); 3223 + dev_err(&dd->pdev->dev, "service thread failed to start\n"); 3677 3224 dd->mtip_svc_handler = NULL; 3678 3225 rv = -EFAULT; 3679 3226 goto kthread_run_error; 3680 3227 } 3228 + 3229 + if (wait_for_rebuild == MTIP_FTL_REBUILD_MAGIC) 3230 + rv = wait_for_rebuild; 3681 3231 3682 3232 return rv; 3683 3233 ··· 3722 3266 struct kobject *kobj; 3723 3267 3724 3268 if (dd->mtip_svc_handler) { 3725 - set_bit(MTIP_FLAG_SVC_THD_SHOULD_STOP_BIT, &dd->port->flags); 3269 + set_bit(MTIP_PF_SVC_THD_STOP_BIT, &dd->port->flags); 3726 3270 wake_up_interruptible(&dd->port->svc_wait); 3727 3271 kthread_stop(dd->mtip_svc_handler); 3728 3272 } 3729 3273 3730 - /* Clean up the sysfs attributes managed by the protocol layer. 
*/ 3731 - kobj = kobject_get(&disk_to_dev(dd->disk)->kobj); 3732 - if (kobj) { 3733 - mtip_hw_sysfs_exit(dd, kobj); 3734 - kobject_put(kobj); 3274 + /* Clean up the sysfs attributes, if created */ 3275 + if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag)) { 3276 + kobj = kobject_get(&disk_to_dev(dd->disk)->kobj); 3277 + if (kobj) { 3278 + mtip_hw_sysfs_exit(dd, kobj); 3279 + kobject_put(kobj); 3280 + } 3735 3281 } 3736 3282 3737 3283 /* ··· 3741 3283 * from /dev 3742 3284 */ 3743 3285 del_gendisk(dd->disk); 3286 + 3287 + spin_lock(&rssd_index_lock); 3288 + ida_remove(&rssd_index_ida, dd->index); 3289 + spin_unlock(&rssd_index_lock); 3290 + 3744 3291 blk_cleanup_queue(dd->queue); 3745 3292 dd->disk = NULL; 3746 3293 dd->queue = NULL; ··· 3775 3312 3776 3313 /* Delete our gendisk structure, and cleanup the blk queue. */ 3777 3314 del_gendisk(dd->disk); 3315 + 3316 + spin_lock(&rssd_index_lock); 3317 + ida_remove(&rssd_index_ida, dd->index); 3318 + spin_unlock(&rssd_index_lock); 3319 + 3778 3320 blk_cleanup_queue(dd->queue); 3779 3321 dd->disk = NULL; 3780 3322 dd->queue = NULL; ··· 3826 3358 "Unable to allocate memory for driver data\n"); 3827 3359 return -ENOMEM; 3828 3360 } 3829 - 3830 - /* Set the atomic variable as 1 in case of SRSI */ 3831 - atomic_set(&dd->drv_cleanup_done, true); 3832 - 3833 - atomic_set(&dd->resumeflag, false); 3834 3361 3835 3362 /* Attach the private data to this PCI device. */ 3836 3363 pci_set_drvdata(pdev, dd); ··· 3883 3420 * instance number. 
3884 3421 */ 3885 3422 instance++; 3886 - 3423 + if (rv != MTIP_FTL_REBUILD_MAGIC) 3424 + set_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag); 3887 3425 goto done; 3888 3426 3889 3427 block_initialize_err: ··· 3898 3434 pci_set_drvdata(pdev, NULL); 3899 3435 return rv; 3900 3436 done: 3901 - /* Set the atomic variable as 0 in case of SRSI */ 3902 - atomic_set(&dd->drv_cleanup_done, true); 3903 - 3904 3437 return rv; 3905 3438 } 3906 3439 ··· 3913 3452 struct driver_data *dd = pci_get_drvdata(pdev); 3914 3453 int counter = 0; 3915 3454 3455 + set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag); 3456 + 3916 3457 if (mtip_check_surprise_removal(pdev)) { 3917 - while (atomic_read(&dd->drv_cleanup_done) == false) { 3458 + while (!test_bit(MTIP_DDF_CLEANUP_BIT, &dd->dd_flag)) { 3918 3459 counter++; 3919 3460 msleep(20); 3920 3461 if (counter == 10) { ··· 3926 3463 } 3927 3464 } 3928 3465 } 3929 - /* Set the atomic variable as 1 in case of SRSI */ 3930 - atomic_set(&dd->drv_cleanup_done, true); 3931 3466 3932 3467 /* Clean up the block layer. */ 3933 3468 mtip_block_remove(dd); ··· 3954 3493 return -EFAULT; 3955 3494 } 3956 3495 3957 - atomic_set(&dd->resumeflag, true); 3496 + set_bit(MTIP_DDF_RESUME_BIT, &dd->dd_flag); 3958 3497 3959 3498 /* Disable ports & interrupts then send standby immediate */ 3960 3499 rv = mtip_block_suspend(dd); ··· 4020 3559 dev_err(&pdev->dev, "Unable to resume\n"); 4021 3560 4022 3561 err: 4023 - atomic_set(&dd->resumeflag, false); 3562 + clear_bit(MTIP_DDF_RESUME_BIT, &dd->dd_flag); 4024 3563 4025 3564 return rv; 4026 3565 } ··· 4069 3608 */ 4070 3609 static int __init mtip_init(void) 4071 3610 { 3611 + int error; 3612 + 4072 3613 printk(KERN_INFO MTIP_DRV_NAME " Version " MTIP_DRV_VERSION "\n"); 4073 3614 4074 3615 /* Allocate a major block device number to use with this driver. 
*/ 4075 - mtip_major = register_blkdev(0, MTIP_DRV_NAME); 4076 - if (mtip_major < 0) { 3616 + error = register_blkdev(0, MTIP_DRV_NAME); 3617 + if (error <= 0) { 4077 3618 printk(KERN_ERR "Unable to register block device (%d)\n", 4078 - mtip_major); 3619 + error); 4079 3620 return -EBUSY; 4080 3621 } 3622 + mtip_major = error; 4081 3623 4082 3624 /* Register our PCI operations. */ 4083 - return pci_register_driver(&mtip_pci_driver); 3625 + error = pci_register_driver(&mtip_pci_driver); 3626 + if (error) 3627 + unregister_blkdev(mtip_major, MTIP_DRV_NAME); 3628 + 3629 + return error; 4084 3630 } 4085 3631 4086 3632 /*
+45 -13
drivers/block/mtip32xx/mtip32xx.h
··· 34 34 /* offset of Device Control register in PCIe extended capabilites space */ 35 35 #define PCIE_CONFIG_EXT_DEVICE_CONTROL_OFFSET 0x48 36 36 37 - /* # of times to retry timed out IOs */ 38 - #define MTIP_MAX_RETRIES 5 37 + /* # of times to retry timed out/failed IOs */ 38 + #define MTIP_MAX_RETRIES 2 39 39 40 40 /* Various timeout values in ms */ 41 41 #define MTIP_NCQ_COMMAND_TIMEOUT_MS 5000 ··· 114 114 #define __force_bit2int (unsigned int __force) 115 115 116 116 /* below are bit numbers in 'flags' defined in mtip_port */ 117 - #define MTIP_FLAG_IC_ACTIVE_BIT 0 118 - #define MTIP_FLAG_EH_ACTIVE_BIT 1 119 - #define MTIP_FLAG_SVC_THD_ACTIVE_BIT 2 120 - #define MTIP_FLAG_ISSUE_CMDS_BIT 4 121 - #define MTIP_FLAG_REBUILD_BIT 5 122 - #define MTIP_FLAG_SVC_THD_SHOULD_STOP_BIT 8 117 + #define MTIP_PF_IC_ACTIVE_BIT 0 /* pio/ioctl */ 118 + #define MTIP_PF_EH_ACTIVE_BIT 1 /* error handling */ 119 + #define MTIP_PF_SE_ACTIVE_BIT 2 /* secure erase */ 120 + #define MTIP_PF_DM_ACTIVE_BIT 3 /* download microcde */ 121 + #define MTIP_PF_PAUSE_IO ((1 << MTIP_PF_IC_ACTIVE_BIT) | \ 122 + (1 << MTIP_PF_EH_ACTIVE_BIT) | \ 123 + (1 << MTIP_PF_SE_ACTIVE_BIT) | \ 124 + (1 << MTIP_PF_DM_ACTIVE_BIT)) 125 + 126 + #define MTIP_PF_SVC_THD_ACTIVE_BIT 4 127 + #define MTIP_PF_ISSUE_CMDS_BIT 5 128 + #define MTIP_PF_REBUILD_BIT 6 129 + #define MTIP_PF_SVC_THD_STOP_BIT 8 130 + 131 + /* below are bit numbers in 'dd_flag' defined in driver_data */ 132 + #define MTIP_DDF_REMOVE_PENDING_BIT 1 133 + #define MTIP_DDF_OVER_TEMP_BIT 2 134 + #define MTIP_DDF_WRITE_PROTECT_BIT 3 135 + #define MTIP_DDF_STOP_IO ((1 << MTIP_DDF_REMOVE_PENDING_BIT) | \ 136 + (1 << MTIP_DDF_OVER_TEMP_BIT) | \ 137 + (1 << MTIP_DDF_WRITE_PROTECT_BIT)) 138 + 139 + #define MTIP_DDF_CLEANUP_BIT 5 140 + #define MTIP_DDF_RESUME_BIT 6 141 + #define MTIP_DDF_INIT_DONE_BIT 7 142 + #define MTIP_DDF_REBUILD_FAILED_BIT 8 143 + 144 + __packed struct smart_attr{ 145 + u8 attr_id; 146 + u16 flags; 147 + u8 cur; 148 + u8 worst; 149 + u32 
data; 150 + u8 res[3]; 151 + }; 123 152 124 153 /* Register Frame Information Structure (FIS), host to device. */ 125 154 struct host_to_dev_fis { ··· 374 345 * when the command slot and all associated data structures 375 346 * are no longer needed. 376 347 */ 348 + u16 *log_buf; 349 + dma_addr_t log_buf_dma; 350 + 351 + u8 *smart_buf; 352 + dma_addr_t smart_buf_dma; 353 + 377 354 unsigned long allocated[SLOTBITS_IN_LONGS]; 378 355 /* 379 356 * used to queue commands when an internal command is in progress ··· 403 368 * Timer used to complete commands that have been active for too long. 404 369 */ 405 370 struct timer_list cmd_timer; 371 + unsigned long ic_pause_timer; 406 372 /* 407 373 * Semaphore used to block threads if there are no 408 374 * command slots available. ··· 440 404 441 405 unsigned slot_groups; /* number of slot groups the product supports */ 442 406 443 - atomic_t drv_cleanup_done; /* Atomic variable for SRSI */ 444 - 445 407 unsigned long index; /* Index to determine the disk name */ 446 408 447 - unsigned int ftlrebuildflag; /* FTL rebuild flag */ 448 - 449 - atomic_t resumeflag; /* Atomic variable to track suspend/resume */ 409 + unsigned long dd_flag; /* NOTE: use atomic bit operations on this */ 450 410 451 411 struct task_struct *mtip_svc_handler; /* task_struct of svc thd */ 452 412 };
+1
drivers/block/virtio_blk.c
··· 351 351 cap_str_10, cap_str_2); 352 352 353 353 set_capacity(vblk->disk, capacity); 354 + revalidate_disk(vblk->disk); 354 355 done: 355 356 mutex_unlock(&vblk->config_lock); 356 357 }
+13 -37
drivers/block/xen-blkback/blkback.c
··· 321 321 static void xen_blkbk_unmap(struct pending_req *req) 322 322 { 323 323 struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST]; 324 + struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST]; 324 325 unsigned int i, invcount = 0; 325 326 grant_handle_t handle; 326 327 int ret; ··· 333 332 gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i), 334 333 GNTMAP_host_map, handle); 335 334 pending_handle(req, i) = BLKBACK_INVALID_HANDLE; 335 + pages[invcount] = virt_to_page(vaddr(req, i)); 336 336 invcount++; 337 337 } 338 338 339 - ret = HYPERVISOR_grant_table_op( 340 - GNTTABOP_unmap_grant_ref, unmap, invcount); 339 + ret = gnttab_unmap_refs(unmap, pages, invcount, false); 341 340 BUG_ON(ret); 342 - /* 343 - * Note, we use invcount, so nr->pages, so we can't index 344 - * using vaddr(req, i). 345 - */ 346 - for (i = 0; i < invcount; i++) { 347 - ret = m2p_remove_override( 348 - virt_to_page(unmap[i].host_addr), false); 349 - if (ret) { 350 - pr_alert(DRV_PFX "Failed to remove M2P override for %lx\n", 351 - (unsigned long)unmap[i].host_addr); 352 - continue; 353 - } 354 - } 355 341 } 356 342 357 343 static int xen_blkbk_map(struct blkif_request *req, ··· 366 378 pending_req->blkif->domid); 367 379 } 368 380 369 - ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nseg); 381 + ret = gnttab_map_refs(map, NULL, &blkbk->pending_page(pending_req, 0), nseg); 370 382 BUG_ON(ret); 371 383 372 384 /* ··· 386 398 if (ret) 387 399 continue; 388 400 389 - ret = m2p_add_override(PFN_DOWN(map[i].dev_bus_addr), 390 - blkbk->pending_page(pending_req, i), NULL); 391 - if (ret) { 392 - pr_alert(DRV_PFX "Failed to install M2P override for %lx (ret: %d)\n", 393 - (unsigned long)map[i].dev_bus_addr, ret); 394 - /* We could switch over to GNTTABOP_copy */ 395 - continue; 396 - } 397 - 398 401 seg[i].buf = map[i].dev_bus_addr | 399 402 (req->u.rw.seg[i].first_sect << 9); 400 403 } ··· 398 419 int err = 0; 399 420 int status = BLKIF_RSP_OKAY; 400 421 struct 
block_device *bdev = blkif->vbd.bdev; 422 + unsigned long secure; 401 423 402 424 blkif->st_ds_req++; 403 425 404 426 xen_blkif_get(blkif); 405 - if (blkif->blk_backend_type == BLKIF_BACKEND_PHY || 406 - blkif->blk_backend_type == BLKIF_BACKEND_FILE) { 407 - unsigned long secure = (blkif->vbd.discard_secure && 408 - (req->u.discard.flag & BLKIF_DISCARD_SECURE)) ? 409 - BLKDEV_DISCARD_SECURE : 0; 410 - err = blkdev_issue_discard(bdev, 411 - req->u.discard.sector_number, 412 - req->u.discard.nr_sectors, 413 - GFP_KERNEL, secure); 414 - } else 415 - err = -EOPNOTSUPP; 427 + secure = (blkif->vbd.discard_secure && 428 + (req->u.discard.flag & BLKIF_DISCARD_SECURE)) ? 429 + BLKDEV_DISCARD_SECURE : 0; 430 + 431 + err = blkdev_issue_discard(bdev, req->u.discard.sector_number, 432 + req->u.discard.nr_sectors, 433 + GFP_KERNEL, secure); 416 434 417 435 if (err == -EOPNOTSUPP) { 418 436 pr_debug(DRV_PFX "discard op failed, not supported\n"); ··· 806 830 int i, mmap_pages; 807 831 int rc = 0; 808 832 809 - if (!xen_pv_domain()) 833 + if (!xen_domain()) 810 834 return -ENODEV; 811 835 812 836 blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
-6
drivers/block/xen-blkback/common.h
··· 146 146 BLKIF_PROTOCOL_X86_64 = 3, 147 147 }; 148 148 149 - enum blkif_backend_type { 150 - BLKIF_BACKEND_PHY = 1, 151 - BLKIF_BACKEND_FILE = 2, 152 - }; 153 - 154 149 struct xen_vbd { 155 150 /* What the domain refers to this vbd as. */ 156 151 blkif_vdev_t handle; ··· 172 177 unsigned int irq; 173 178 /* Comms information. */ 174 179 enum blkif_protocol blk_protocol; 175 - enum blkif_backend_type blk_backend_type; 176 180 union blkif_back_rings blk_rings; 177 181 void *blk_ring; 178 182 /* The VBD attached to this interface. */
+33 -58
drivers/block/xen-blkback/xenbus.c
··· 381 381 err = xenbus_printf(xbt, dev->nodename, "feature-flush-cache", 382 382 "%d", state); 383 383 if (err) 384 - xenbus_dev_fatal(dev, err, "writing feature-flush-cache"); 384 + dev_warn(&dev->dev, "writing feature-flush-cache (%d)", err); 385 385 386 386 return err; 387 387 } 388 388 389 - int xen_blkbk_discard(struct xenbus_transaction xbt, struct backend_info *be) 389 + static void xen_blkbk_discard(struct xenbus_transaction xbt, struct backend_info *be) 390 390 { 391 391 struct xenbus_device *dev = be->dev; 392 392 struct xen_blkif *blkif = be->blkif; 393 - char *type; 394 393 int err; 395 394 int state = 0; 395 + struct block_device *bdev = be->blkif->vbd.bdev; 396 + struct request_queue *q = bdev_get_queue(bdev); 396 397 397 - type = xenbus_read(XBT_NIL, dev->nodename, "type", NULL); 398 - if (!IS_ERR(type)) { 399 - if (strncmp(type, "file", 4) == 0) { 400 - state = 1; 401 - blkif->blk_backend_type = BLKIF_BACKEND_FILE; 398 + if (blk_queue_discard(q)) { 399 + err = xenbus_printf(xbt, dev->nodename, 400 + "discard-granularity", "%u", 401 + q->limits.discard_granularity); 402 + if (err) { 403 + dev_warn(&dev->dev, "writing discard-granularity (%d)", err); 404 + return; 402 405 } 403 - if (strncmp(type, "phy", 3) == 0) { 404 - struct block_device *bdev = be->blkif->vbd.bdev; 405 - struct request_queue *q = bdev_get_queue(bdev); 406 - if (blk_queue_discard(q)) { 407 - err = xenbus_printf(xbt, dev->nodename, 408 - "discard-granularity", "%u", 409 - q->limits.discard_granularity); 410 - if (err) { 411 - xenbus_dev_fatal(dev, err, 412 - "writing discard-granularity"); 413 - goto kfree; 414 - } 415 - err = xenbus_printf(xbt, dev->nodename, 416 - "discard-alignment", "%u", 417 - q->limits.discard_alignment); 418 - if (err) { 419 - xenbus_dev_fatal(dev, err, 420 - "writing discard-alignment"); 421 - goto kfree; 422 - } 423 - state = 1; 424 - blkif->blk_backend_type = BLKIF_BACKEND_PHY; 425 - } 426 - /* Optional. 
*/ 427 - err = xenbus_printf(xbt, dev->nodename, 428 - "discard-secure", "%d", 429 - blkif->vbd.discard_secure); 430 - if (err) { 431 - xenbus_dev_fatal(dev, err, 432 - "writting discard-secure"); 433 - goto kfree; 434 - } 406 + err = xenbus_printf(xbt, dev->nodename, 407 + "discard-alignment", "%u", 408 + q->limits.discard_alignment); 409 + if (err) { 410 + dev_warn(&dev->dev, "writing discard-alignment (%d)", err); 411 + return; 435 412 } 436 - } else { 437 - err = PTR_ERR(type); 438 - xenbus_dev_fatal(dev, err, "reading type"); 439 - goto out; 413 + state = 1; 414 + /* Optional. */ 415 + err = xenbus_printf(xbt, dev->nodename, 416 + "discard-secure", "%d", 417 + blkif->vbd.discard_secure); 418 + if (err) { 419 + dev_warn(&dev->dev, "writing discard-secure (%d)", err); 420 + return; 421 + } 440 422 } 441 - 442 423 err = xenbus_printf(xbt, dev->nodename, "feature-discard", 443 424 "%d", state); 444 425 if (err) 445 - xenbus_dev_fatal(dev, err, "writing feature-discard"); 446 - kfree: 447 - kfree(type); 448 - out: 449 - return err; 426 + dev_warn(&dev->dev, "writing feature-discard (%d)", err); 450 427 } 451 428 int xen_blkbk_barrier(struct xenbus_transaction xbt, 452 429 struct backend_info *be, int state) ··· 434 457 err = xenbus_printf(xbt, dev->nodename, "feature-barrier", 435 458 "%d", state); 436 459 if (err) 437 - xenbus_dev_fatal(dev, err, "writing feature-barrier"); 460 + dev_warn(&dev->dev, "writing feature-barrier (%d)", err); 438 461 439 462 return err; 440 463 } ··· 666 689 return; 667 690 } 668 691 669 - err = xen_blkbk_flush_diskcache(xbt, be, be->blkif->vbd.flush_support); 670 - if (err) 671 - goto abort; 672 - 673 - err = xen_blkbk_discard(xbt, be); 674 - 675 692 /* If we can't advertise it is OK. 
*/ 676 - err = xen_blkbk_barrier(xbt, be, be->blkif->vbd.flush_support); 693 + xen_blkbk_flush_diskcache(xbt, be, be->blkif->vbd.flush_support); 694 + 695 + xen_blkbk_discard(xbt, be); 696 + 697 + xen_blkbk_barrier(xbt, be, be->blkif->vbd.flush_support); 677 698 678 699 err = xenbus_printf(xbt, dev->nodename, "sectors", "%llu", 679 700 (unsigned long long)vbd_sz(&be->blkif->vbd));
+20 -21
drivers/block/xen-blkfront.c
··· 43 43 #include <linux/slab.h> 44 44 #include <linux/mutex.h> 45 45 #include <linux/scatterlist.h> 46 + #include <linux/bitmap.h> 46 47 47 48 #include <xen/xen.h> 48 49 #include <xen/xenbus.h> ··· 82 81 */ 83 82 struct blkfront_info 84 83 { 84 + spinlock_t io_lock; 85 85 struct mutex mutex; 86 86 struct xenbus_device *xbdev; 87 87 struct gendisk *gd; ··· 106 104 unsigned int discard_alignment; 107 105 int is_ready; 108 106 }; 109 - 110 - static DEFINE_SPINLOCK(blkif_io_lock); 111 107 112 108 static unsigned int nr_minors; 113 109 static unsigned long *minors; ··· 177 177 178 178 spin_lock(&minor_lock); 179 179 if (find_next_bit(minors, end, minor) >= end) { 180 - for (; minor < end; ++minor) 181 - __set_bit(minor, minors); 180 + bitmap_set(minors, minor, nr); 182 181 rc = 0; 183 182 } else 184 183 rc = -EBUSY; ··· 192 193 193 194 BUG_ON(end > nr_minors); 194 195 spin_lock(&minor_lock); 195 - for (; minor < end; ++minor) 196 - __clear_bit(minor, minors); 196 + bitmap_clear(minors, minor, nr); 197 197 spin_unlock(&minor_lock); 198 198 } 199 199 ··· 417 419 struct request_queue *rq; 418 420 struct blkfront_info *info = gd->private_data; 419 421 420 - rq = blk_init_queue(do_blkif_request, &blkif_io_lock); 422 + rq = blk_init_queue(do_blkif_request, &info->io_lock); 421 423 if (rq == NULL) 422 424 return -1; 423 425 ··· 634 636 if (info->rq == NULL) 635 637 return; 636 638 637 - spin_lock_irqsave(&blkif_io_lock, flags); 639 + spin_lock_irqsave(&info->io_lock, flags); 638 640 639 641 /* No more blkif_request(). */ 640 642 blk_stop_queue(info->rq); 641 643 642 644 /* No more gnttab callback work. */ 643 645 gnttab_cancel_free_callback(&info->callback); 644 - spin_unlock_irqrestore(&blkif_io_lock, flags); 646 + spin_unlock_irqrestore(&info->io_lock, flags); 645 647 646 648 /* Flush gnttab callback work. Must be done with no locks held. 
*/ 647 649 flush_work_sync(&info->work); ··· 673 675 { 674 676 struct blkfront_info *info = container_of(work, struct blkfront_info, work); 675 677 676 - spin_lock_irq(&blkif_io_lock); 678 + spin_lock_irq(&info->io_lock); 677 679 if (info->connected == BLKIF_STATE_CONNECTED) 678 680 kick_pending_request_queues(info); 679 - spin_unlock_irq(&blkif_io_lock); 681 + spin_unlock_irq(&info->io_lock); 680 682 } 681 683 682 684 static void blkif_free(struct blkfront_info *info, int suspend) 683 685 { 684 686 /* Prevent new requests being issued until we fix things up. */ 685 - spin_lock_irq(&blkif_io_lock); 687 + spin_lock_irq(&info->io_lock); 686 688 info->connected = suspend ? 687 689 BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED; 688 690 /* No more blkif_request(). */ ··· 690 692 blk_stop_queue(info->rq); 691 693 /* No more gnttab callback work. */ 692 694 gnttab_cancel_free_callback(&info->callback); 693 - spin_unlock_irq(&blkif_io_lock); 695 + spin_unlock_irq(&info->io_lock); 694 696 695 697 /* Flush gnttab callback work. Must be done with no locks held. 
*/ 696 698 flush_work_sync(&info->work); ··· 726 728 struct blkfront_info *info = (struct blkfront_info *)dev_id; 727 729 int error; 728 730 729 - spin_lock_irqsave(&blkif_io_lock, flags); 731 + spin_lock_irqsave(&info->io_lock, flags); 730 732 731 733 if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) { 732 - spin_unlock_irqrestore(&blkif_io_lock, flags); 734 + spin_unlock_irqrestore(&info->io_lock, flags); 733 735 return IRQ_HANDLED; 734 736 } 735 737 ··· 814 816 815 817 kick_pending_request_queues(info); 816 818 817 - spin_unlock_irqrestore(&blkif_io_lock, flags); 819 + spin_unlock_irqrestore(&info->io_lock, flags); 818 820 819 821 return IRQ_HANDLED; 820 822 } ··· 989 991 } 990 992 991 993 mutex_init(&info->mutex); 994 + spin_lock_init(&info->io_lock); 992 995 info->xbdev = dev; 993 996 info->vdevice = vdevice; 994 997 info->connected = BLKIF_STATE_DISCONNECTED; ··· 1067 1068 1068 1069 xenbus_switch_state(info->xbdev, XenbusStateConnected); 1069 1070 1070 - spin_lock_irq(&blkif_io_lock); 1071 + spin_lock_irq(&info->io_lock); 1071 1072 1072 1073 /* Now safe for us to use the shared ring */ 1073 1074 info->connected = BLKIF_STATE_CONNECTED; ··· 1078 1079 /* Kick any other new requests queued since we resumed */ 1079 1080 kick_pending_request_queues(info); 1080 1081 1081 - spin_unlock_irq(&blkif_io_lock); 1082 + spin_unlock_irq(&info->io_lock); 1082 1083 1083 1084 return 0; 1084 1085 } ··· 1276 1277 xenbus_switch_state(info->xbdev, XenbusStateConnected); 1277 1278 1278 1279 /* Kick pending requests. 
*/ 1279 - spin_lock_irq(&blkif_io_lock); 1280 + spin_lock_irq(&info->io_lock); 1280 1281 info->connected = BLKIF_STATE_CONNECTED; 1281 1282 kick_pending_request_queues(info); 1282 - spin_unlock_irq(&blkif_io_lock); 1283 + spin_unlock_irq(&info->io_lock); 1283 1284 1284 1285 add_disk(info->gd); 1285 1286 ··· 1409 1410 mutex_lock(&blkfront_mutex); 1410 1411 1411 1412 bdev = bdget_disk(disk, 0); 1412 - bdput(bdev); 1413 1413 1414 1414 if (bdev->bd_openers) 1415 1415 goto out; ··· 1439 1441 } 1440 1442 1441 1443 out: 1444 + bdput(bdev); 1442 1445 mutex_unlock(&blkfront_mutex); 1443 1446 return 0; 1444 1447 }
+4
drivers/bluetooth/ath3k.c
··· 72 72 73 73 /* Atheros AR3012 with sflash firmware*/ 74 74 { USB_DEVICE(0x0CF3, 0x3004) }, 75 + { USB_DEVICE(0x0CF3, 0x311D) }, 75 76 { USB_DEVICE(0x13d3, 0x3375) }, 77 + { USB_DEVICE(0x04CA, 0x3005) }, 76 78 77 79 /* Atheros AR5BBU12 with sflash firmware */ 78 80 { USB_DEVICE(0x0489, 0xE02C) }, ··· 91 89 92 90 /* Atheros AR3012 with sflash firmware*/ 93 91 { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 92 + { USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 }, 94 93 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 94 + { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, 95 95 96 96 { } /* Terminating entry */ 97 97 };
+4 -1
drivers/bluetooth/btusb.c
··· 61 61 { USB_DEVICE_INFO(0xe0, 0x01, 0x01) }, 62 62 63 63 /* Broadcom SoftSailing reporting vendor specific */ 64 - { USB_DEVICE(0x05ac, 0x21e1) }, 64 + { USB_DEVICE(0x0a5c, 0x21e1) }, 65 65 66 66 /* Apple MacBookPro 7,1 */ 67 67 { USB_DEVICE(0x05ac, 0x8213) }, ··· 103 103 /* Broadcom BCM20702A0 */ 104 104 { USB_DEVICE(0x0a5c, 0x21e3) }, 105 105 { USB_DEVICE(0x0a5c, 0x21e6) }, 106 + { USB_DEVICE(0x0a5c, 0x21e8) }, 106 107 { USB_DEVICE(0x0a5c, 0x21f3) }, 107 108 { USB_DEVICE(0x413c, 0x8197) }, 108 109 ··· 130 129 131 130 /* Atheros 3012 with sflash firmware */ 132 131 { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 132 + { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, 133 133 { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, 134 + { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, 134 135 135 136 /* Atheros AR5BBU12 with sflash firmware */ 136 137 { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
+1 -1
drivers/bluetooth/hci_ldisc.c
··· 299 299 hci_uart_close(hdev); 300 300 301 301 if (test_and_clear_bit(HCI_UART_PROTO_SET, &hu->flags)) { 302 - hu->proto->close(hu); 303 302 if (hdev) { 304 303 hci_unregister_dev(hdev); 305 304 hci_free_dev(hdev); 306 305 } 306 + hu->proto->close(hu); 307 307 } 308 308 309 309 kfree(hu);
+2 -2
drivers/char/hpet.c
··· 906 906 hpetp->hp_which, hdp->hd_phys_address, 907 907 hpetp->hp_ntimer > 1 ? "s" : ""); 908 908 for (i = 0; i < hpetp->hp_ntimer; i++) 909 - printk("%s %d", i > 0 ? "," : "", hdp->hd_irq[i]); 910 - printk("\n"); 909 + printk(KERN_CONT "%s %d", i > 0 ? "," : "", hdp->hd_irq[i]); 910 + printk(KERN_CONT "\n"); 911 911 912 912 temp = hpetp->hp_tick_freq; 913 913 remainder = do_div(temp, 1000000);
+8 -3
drivers/char/random.c
··· 1260 1260 uuid = table->data; 1261 1261 if (!uuid) { 1262 1262 uuid = tmp_uuid; 1263 - uuid[8] = 0; 1264 - } 1265 - if (uuid[8] == 0) 1266 1263 generate_random_uuid(uuid); 1264 + } else { 1265 + static DEFINE_SPINLOCK(bootid_spinlock); 1266 + 1267 + spin_lock(&bootid_spinlock); 1268 + if (!uuid[8]) 1269 + generate_random_uuid(uuid); 1270 + spin_unlock(&bootid_spinlock); 1271 + } 1267 1272 1268 1273 sprintf(buf, "%pU", uuid); 1269 1274
+8 -16
drivers/clocksource/acpi_pm.c
··· 23 23 #include <linux/init.h> 24 24 #include <linux/pci.h> 25 25 #include <linux/delay.h> 26 - #include <linux/async.h> 27 26 #include <asm/io.h> 28 27 29 28 /* ··· 179 180 /* Number of reads we try to get two different values */ 180 181 #define ACPI_PM_READ_CHECKS 10000 181 182 182 - static void __init acpi_pm_clocksource_async(void *unused, async_cookie_t cookie) 183 + static int __init init_acpi_pm_clocksource(void) 183 184 { 184 185 cycle_t value1, value2; 185 186 unsigned int i, j = 0; 186 187 188 + if (!pmtmr_ioport) 189 + return -ENODEV; 187 190 188 191 /* "verify" this timing source: */ 189 192 for (j = 0; j < ACPI_PM_MONOTONICITY_CHECKS; j++) { 190 - usleep_range(100 * j, 100 * j + 100); 193 + udelay(100 * j); 191 194 value1 = clocksource_acpi_pm.read(&clocksource_acpi_pm); 192 195 for (i = 0; i < ACPI_PM_READ_CHECKS; i++) { 193 196 value2 = clocksource_acpi_pm.read(&clocksource_acpi_pm); ··· 203 202 " 0x%#llx, 0x%#llx - aborting.\n", 204 203 value1, value2); 205 204 pmtmr_ioport = 0; 206 - return; 205 + return -EINVAL; 207 206 } 208 207 if (i == ACPI_PM_READ_CHECKS) { 209 208 printk(KERN_INFO "PM-Timer failed consistency check " 210 209 " (0x%#llx) - aborting.\n", value1); 211 210 pmtmr_ioport = 0; 212 - return; 211 + return -ENODEV; 213 212 } 214 213 } 215 214 216 215 if (verify_pmtmr_rate() != 0){ 217 216 pmtmr_ioport = 0; 218 - return; 217 + return -ENODEV; 219 218 } 220 219 221 - clocksource_register_hz(&clocksource_acpi_pm, 220 + return clocksource_register_hz(&clocksource_acpi_pm, 222 221 PMTMR_TICKS_PER_SEC); 223 - } 224 - 225 - static int __init init_acpi_pm_clocksource(void) 226 - { 227 - if (!pmtmr_ioport) 228 - return -ENODEV; 229 - 230 - async_schedule(acpi_pm_clocksource_async, NULL); 231 - return 0; 232 222 } 233 223 234 224 /* We use fs_initcall because we want the PCI fixups to have run
+1
drivers/cpufreq/Kconfig.arm
··· 4 4 5 5 config ARM_OMAP2PLUS_CPUFREQ 6 6 bool "TI OMAP2+" 7 + depends on ARCH_OMAP2PLUS 7 8 default ARCH_OMAP2PLUS 8 9 select CPU_FREQ_TABLE 9 10
+14
drivers/dma/dmaengine.c
··· 332 332 } 333 333 EXPORT_SYMBOL(dma_find_channel); 334 334 335 + /* 336 + * net_dma_find_channel - find a channel for net_dma 337 + * net_dma has alignment requirements 338 + */ 339 + struct dma_chan *net_dma_find_channel(void) 340 + { 341 + struct dma_chan *chan = dma_find_channel(DMA_MEMCPY); 342 + if (chan && !is_dma_copy_aligned(chan->device, 1, 1, 1)) 343 + return NULL; 344 + 345 + return chan; 346 + } 347 + EXPORT_SYMBOL(net_dma_find_channel); 348 + 335 349 /** 336 350 * dma_issue_pending_all - flush all pending operations across all channels 337 351 */
+8 -8
drivers/dma/ioat/dma.c
··· 546 546 PCI_DMA_TODEVICE, flags, 0); 547 547 } 548 548 549 - unsigned long ioat_get_current_completion(struct ioat_chan_common *chan) 549 + dma_addr_t ioat_get_current_completion(struct ioat_chan_common *chan) 550 550 { 551 - unsigned long phys_complete; 551 + dma_addr_t phys_complete; 552 552 u64 completion; 553 553 554 554 completion = *chan->completion; ··· 569 569 } 570 570 571 571 bool ioat_cleanup_preamble(struct ioat_chan_common *chan, 572 - unsigned long *phys_complete) 572 + dma_addr_t *phys_complete) 573 573 { 574 574 *phys_complete = ioat_get_current_completion(chan); 575 575 if (*phys_complete == chan->last_completion) ··· 580 580 return true; 581 581 } 582 582 583 - static void __cleanup(struct ioat_dma_chan *ioat, unsigned long phys_complete) 583 + static void __cleanup(struct ioat_dma_chan *ioat, dma_addr_t phys_complete) 584 584 { 585 585 struct ioat_chan_common *chan = &ioat->base; 586 586 struct list_head *_desc, *n; 587 587 struct dma_async_tx_descriptor *tx; 588 588 589 - dev_dbg(to_dev(chan), "%s: phys_complete: %lx\n", 590 - __func__, phys_complete); 589 + dev_dbg(to_dev(chan), "%s: phys_complete: %llx\n", 590 + __func__, (unsigned long long) phys_complete); 591 591 list_for_each_safe(_desc, n, &ioat->used_desc) { 592 592 struct ioat_desc_sw *desc; 593 593 ··· 652 652 static void ioat1_cleanup(struct ioat_dma_chan *ioat) 653 653 { 654 654 struct ioat_chan_common *chan = &ioat->base; 655 - unsigned long phys_complete; 655 + dma_addr_t phys_complete; 656 656 657 657 prefetch(chan->completion); 658 658 ··· 698 698 mod_timer(&chan->timer, jiffies + COMPLETION_TIMEOUT); 699 699 spin_unlock_bh(&ioat->desc_lock); 700 700 } else if (test_bit(IOAT_COMPLETION_PENDING, &chan->state)) { 701 - unsigned long phys_complete; 701 + dma_addr_t phys_complete; 702 702 703 703 spin_lock_bh(&ioat->desc_lock); 704 704 /* if we haven't made progress and we have already
+3 -3
drivers/dma/ioat/dma.h
··· 88 88 struct ioat_chan_common { 89 89 struct dma_chan common; 90 90 void __iomem *reg_base; 91 - unsigned long last_completion; 91 + dma_addr_t last_completion; 92 92 spinlock_t cleanup_lock; 93 93 unsigned long state; 94 94 #define IOAT_COMPLETION_PENDING 0 ··· 310 310 void __devexit ioat_dma_remove(struct ioatdma_device *device); 311 311 struct dca_provider * __devinit ioat_dca_init(struct pci_dev *pdev, 312 312 void __iomem *iobase); 313 - unsigned long ioat_get_current_completion(struct ioat_chan_common *chan); 313 + dma_addr_t ioat_get_current_completion(struct ioat_chan_common *chan); 314 314 void ioat_init_channel(struct ioatdma_device *device, 315 315 struct ioat_chan_common *chan, int idx); 316 316 enum dma_status ioat_dma_tx_status(struct dma_chan *c, dma_cookie_t cookie, ··· 318 318 void ioat_dma_unmap(struct ioat_chan_common *chan, enum dma_ctrl_flags flags, 319 319 size_t len, struct ioat_dma_descriptor *hw); 320 320 bool ioat_cleanup_preamble(struct ioat_chan_common *chan, 321 - unsigned long *phys_complete); 321 + dma_addr_t *phys_complete); 322 322 void ioat_kobject_add(struct ioatdma_device *device, struct kobj_type *type); 323 323 void ioat_kobject_del(struct ioatdma_device *device); 324 324 extern const struct sysfs_ops ioat_sysfs_ops;
+6 -6
drivers/dma/ioat/dma_v2.c
··· 128 128 spin_unlock_bh(&ioat->prep_lock); 129 129 } 130 130 131 - static void __cleanup(struct ioat2_dma_chan *ioat, unsigned long phys_complete) 131 + static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete) 132 132 { 133 133 struct ioat_chan_common *chan = &ioat->base; 134 134 struct dma_async_tx_descriptor *tx; ··· 179 179 static void ioat2_cleanup(struct ioat2_dma_chan *ioat) 180 180 { 181 181 struct ioat_chan_common *chan = &ioat->base; 182 - unsigned long phys_complete; 182 + dma_addr_t phys_complete; 183 183 184 184 spin_lock_bh(&chan->cleanup_lock); 185 185 if (ioat_cleanup_preamble(chan, &phys_complete)) ··· 260 260 static void ioat2_restart_channel(struct ioat2_dma_chan *ioat) 261 261 { 262 262 struct ioat_chan_common *chan = &ioat->base; 263 - unsigned long phys_complete; 263 + dma_addr_t phys_complete; 264 264 265 265 ioat2_quiesce(chan, 0); 266 266 if (ioat_cleanup_preamble(chan, &phys_complete)) ··· 275 275 struct ioat_chan_common *chan = &ioat->base; 276 276 277 277 if (test_bit(IOAT_COMPLETION_PENDING, &chan->state)) { 278 - unsigned long phys_complete; 278 + dma_addr_t phys_complete; 279 279 u64 status; 280 280 281 281 status = ioat_chansts(chan); ··· 572 572 */ 573 573 struct ioat_chan_common *chan = &ioat->base; 574 574 struct dma_chan *c = &chan->common; 575 - const u16 curr_size = ioat2_ring_size(ioat); 575 + const u32 curr_size = ioat2_ring_size(ioat); 576 576 const u16 active = ioat2_ring_active(ioat); 577 - const u16 new_size = 1 << order; 577 + const u32 new_size = 1 << order; 578 578 struct ioat_ring_ent **ring; 579 579 u16 i; 580 580
+2 -2
drivers/dma/ioat/dma_v2.h
··· 74 74 return container_of(chan, struct ioat2_dma_chan, base); 75 75 } 76 76 77 - static inline u16 ioat2_ring_size(struct ioat2_dma_chan *ioat) 77 + static inline u32 ioat2_ring_size(struct ioat2_dma_chan *ioat) 78 78 { 79 79 return 1 << ioat->alloc_order; 80 80 } ··· 91 91 return CIRC_CNT(ioat->head, ioat->issued, ioat2_ring_size(ioat)); 92 92 } 93 93 94 - static inline u16 ioat2_ring_space(struct ioat2_dma_chan *ioat) 94 + static inline u32 ioat2_ring_space(struct ioat2_dma_chan *ioat) 95 95 { 96 96 return ioat2_ring_size(ioat) - ioat2_ring_active(ioat); 97 97 }
+45 -4
drivers/dma/ioat/dma_v3.c
··· 257 257 * The difference from the dma_v2.c __cleanup() is that this routine 258 258 * handles extended descriptors and dma-unmapping raid operations. 259 259 */ 260 - static void __cleanup(struct ioat2_dma_chan *ioat, unsigned long phys_complete) 260 + static void __cleanup(struct ioat2_dma_chan *ioat, dma_addr_t phys_complete) 261 261 { 262 262 struct ioat_chan_common *chan = &ioat->base; 263 263 struct ioat_ring_ent *desc; ··· 314 314 static void ioat3_cleanup(struct ioat2_dma_chan *ioat) 315 315 { 316 316 struct ioat_chan_common *chan = &ioat->base; 317 - unsigned long phys_complete; 317 + dma_addr_t phys_complete; 318 318 319 319 spin_lock_bh(&chan->cleanup_lock); 320 320 if (ioat_cleanup_preamble(chan, &phys_complete)) ··· 333 333 static void ioat3_restart_channel(struct ioat2_dma_chan *ioat) 334 334 { 335 335 struct ioat_chan_common *chan = &ioat->base; 336 - unsigned long phys_complete; 336 + dma_addr_t phys_complete; 337 337 338 338 ioat2_quiesce(chan, 0); 339 339 if (ioat_cleanup_preamble(chan, &phys_complete)) ··· 348 348 struct ioat_chan_common *chan = &ioat->base; 349 349 350 350 if (test_bit(IOAT_COMPLETION_PENDING, &chan->state)) { 351 - unsigned long phys_complete; 351 + dma_addr_t phys_complete; 352 352 u64 status; 353 353 354 354 status = ioat_chansts(chan); ··· 1149 1149 return ioat2_reset_sync(chan, msecs_to_jiffies(200)); 1150 1150 } 1151 1151 1152 + static bool is_jf_ioat(struct pci_dev *pdev) 1153 + { 1154 + switch (pdev->device) { 1155 + case PCI_DEVICE_ID_INTEL_IOAT_JSF0: 1156 + case PCI_DEVICE_ID_INTEL_IOAT_JSF1: 1157 + case PCI_DEVICE_ID_INTEL_IOAT_JSF2: 1158 + case PCI_DEVICE_ID_INTEL_IOAT_JSF3: 1159 + case PCI_DEVICE_ID_INTEL_IOAT_JSF4: 1160 + case PCI_DEVICE_ID_INTEL_IOAT_JSF5: 1161 + case PCI_DEVICE_ID_INTEL_IOAT_JSF6: 1162 + case PCI_DEVICE_ID_INTEL_IOAT_JSF7: 1163 + case PCI_DEVICE_ID_INTEL_IOAT_JSF8: 1164 + case PCI_DEVICE_ID_INTEL_IOAT_JSF9: 1165 + return true; 1166 + default: 1167 + return false; 1168 + } 1169 + } 1170 + 1171 
+ static bool is_snb_ioat(struct pci_dev *pdev) 1172 + { 1173 + switch (pdev->device) { 1174 + case PCI_DEVICE_ID_INTEL_IOAT_SNB0: 1175 + case PCI_DEVICE_ID_INTEL_IOAT_SNB1: 1176 + case PCI_DEVICE_ID_INTEL_IOAT_SNB2: 1177 + case PCI_DEVICE_ID_INTEL_IOAT_SNB3: 1178 + case PCI_DEVICE_ID_INTEL_IOAT_SNB4: 1179 + case PCI_DEVICE_ID_INTEL_IOAT_SNB5: 1180 + case PCI_DEVICE_ID_INTEL_IOAT_SNB6: 1181 + case PCI_DEVICE_ID_INTEL_IOAT_SNB7: 1182 + case PCI_DEVICE_ID_INTEL_IOAT_SNB8: 1183 + case PCI_DEVICE_ID_INTEL_IOAT_SNB9: 1184 + return true; 1185 + default: 1186 + return false; 1187 + } 1188 + } 1189 + 1152 1190 int __devinit ioat3_dma_probe(struct ioatdma_device *device, int dca) 1153 1191 { 1154 1192 struct pci_dev *pdev = device->pdev; ··· 1206 1168 dma->device_issue_pending = ioat2_issue_pending; 1207 1169 dma->device_alloc_chan_resources = ioat2_alloc_chan_resources; 1208 1170 dma->device_free_chan_resources = ioat2_free_chan_resources; 1171 + 1172 + if (is_jf_ioat(pdev) || is_snb_ioat(pdev)) 1173 + dma->copy_align = 6; 1209 1174 1210 1175 dma_cap_set(DMA_INTERRUPT, dma->cap_mask); 1211 1176 dma->device_prep_dma_interrupt = ioat3_prep_interrupt_lock;
+2 -2
drivers/dma/iop-adma.c
··· 1252 1252 struct page **pq_hw = &pq[IOP_ADMA_NUM_SRC_TEST+2]; 1253 1253 /* address conversion buffers (dma_map / page_address) */ 1254 1254 void *pq_sw[IOP_ADMA_NUM_SRC_TEST+2]; 1255 - dma_addr_t pq_src[IOP_ADMA_NUM_SRC_TEST]; 1256 - dma_addr_t pq_dest[2]; 1255 + dma_addr_t pq_src[IOP_ADMA_NUM_SRC_TEST+2]; 1256 + dma_addr_t *pq_dest = &pq_src[IOP_ADMA_NUM_SRC_TEST]; 1257 1257 1258 1258 int i; 1259 1259 struct dma_async_tx_descriptor *tx;
+1 -1
drivers/gpio/Kconfig
··· 430 430 431 431 config GPIO_SODAVILLE 432 432 bool "Intel Sodaville GPIO support" 433 - depends on X86 && PCI && OF && BROKEN 433 + depends on X86 && PCI && OF 434 434 select GPIO_GENERIC 435 435 select GENERIC_IRQ_CHIP 436 436 help
+1 -1
drivers/gpio/gpio-adp5588.c
··· 252 252 if (ret < 0) 253 253 memset(dev->irq_stat, 0, ARRAY_SIZE(dev->irq_stat)); 254 254 255 - for (bank = 0; bank <= ADP5588_BANK(ADP5588_MAXGPIO); 255 + for (bank = 0, bit = 0; bank <= ADP5588_BANK(ADP5588_MAXGPIO); 256 256 bank++, bit = 0) { 257 257 pending = dev->irq_stat[bank] & dev->irq_mask[bank]; 258 258
+8 -8
drivers/gpio/gpio-samsung.c
··· 2382 2382 #endif 2383 2383 }; 2384 2384 2385 - static struct samsung_gpio_chip exynos5_gpios_1[] = { 2386 2385 #ifdef CONFIG_ARCH_EXYNOS5 2386 + static struct samsung_gpio_chip exynos5_gpios_1[] = { 2387 2387 { 2388 2388 .chip = { 2389 2389 .base = EXYNOS5_GPA0(0), ··· 2541 2541 .to_irq = samsung_gpiolib_to_irq, 2542 2542 }, 2543 2543 }, 2544 - #endif 2545 2544 }; 2545 + #endif 2546 2546 2547 - static struct samsung_gpio_chip exynos5_gpios_2[] = { 2548 2547 #ifdef CONFIG_ARCH_EXYNOS5 2548 + static struct samsung_gpio_chip exynos5_gpios_2[] = { 2549 2549 { 2550 2550 .chip = { 2551 2551 .base = EXYNOS5_GPE0(0), ··· 2602 2602 2603 2603 }, 2604 2604 }, 2605 - #endif 2606 2605 }; 2606 + #endif 2607 2607 2608 - static struct samsung_gpio_chip exynos5_gpios_3[] = { 2609 2608 #ifdef CONFIG_ARCH_EXYNOS5 2609 + static struct samsung_gpio_chip exynos5_gpios_3[] = { 2610 2610 { 2611 2611 .chip = { 2612 2612 .base = EXYNOS5_GPV0(0), ··· 2638 2638 .label = "GPV4", 2639 2639 }, 2640 2640 }, 2641 - #endif 2642 2641 }; 2642 + #endif 2643 2643 2644 - static struct samsung_gpio_chip exynos5_gpios_4[] = { 2645 2644 #ifdef CONFIG_ARCH_EXYNOS5 2645 + static struct samsung_gpio_chip exynos5_gpios_4[] = { 2646 2646 { 2647 2647 .chip = { 2648 2648 .base = EXYNOS5_GPZ(0), ··· 2650 2650 .label = "GPZ", 2651 2651 }, 2652 2652 }, 2653 - #endif 2654 2653 }; 2654 + #endif 2655 2655 2656 2656 2657 2657 #if defined(CONFIG_ARCH_EXYNOS) && defined(CONFIG_OF)
+10 -13
drivers/gpio/gpio-sodaville.c
··· 41 41 struct sdv_gpio_chip_data { 42 42 int irq_base; 43 43 void __iomem *gpio_pub_base; 44 - struct irq_domain id; 44 + struct irq_domain *id; 45 45 struct irq_chip_generic *gc; 46 46 struct bgpio_chip bgpio; 47 47 }; ··· 51 51 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 52 52 struct sdv_gpio_chip_data *sd = gc->private; 53 53 void __iomem *type_reg; 54 - u32 irq_offs = d->irq - sd->irq_base; 55 54 u32 reg; 56 55 57 - if (irq_offs < 8) 56 + if (d->hwirq < 8) 58 57 type_reg = sd->gpio_pub_base + GPIT1R0; 59 58 else 60 59 type_reg = sd->gpio_pub_base + GPIT1R1; ··· 62 63 63 64 switch (type) { 64 65 case IRQ_TYPE_LEVEL_HIGH: 65 - reg &= ~BIT(4 * (irq_offs % 8)); 66 + reg &= ~BIT(4 * (d->hwirq % 8)); 66 67 break; 67 68 68 69 case IRQ_TYPE_LEVEL_LOW: 69 - reg |= BIT(4 * (irq_offs % 8)); 70 + reg |= BIT(4 * (d->hwirq % 8)); 70 71 break; 71 72 72 73 default: ··· 90 91 u32 irq_bit = __fls(irq_stat); 91 92 92 93 irq_stat &= ~BIT(irq_bit); 93 - generic_handle_irq(sd->irq_base + irq_bit); 94 + generic_handle_irq(irq_find_mapping(sd->id, irq_bit)); 94 95 } 95 96 96 97 return IRQ_HANDLED; ··· 126 127 } 127 128 128 129 static struct irq_domain_ops irq_domain_sdv_ops = { 129 - .dt_translate = sdv_xlate, 130 + .xlate = sdv_xlate, 130 131 }; 131 132 132 133 static __devinit int sdv_register_irqsupport(struct sdv_gpio_chip_data *sd, ··· 147 148 "sdv_gpio", sd); 148 149 if (ret) 149 150 goto out_free_desc; 150 - 151 - sd->id.irq_base = sd->irq_base; 152 - sd->id.of_node = of_node_get(pdev->dev.of_node); 153 - sd->id.ops = &irq_domain_sdv_ops; 154 151 155 152 /* 156 153 * This gpio irq controller latches level irqs. 
Testing shows that if ··· 174 179 IRQ_GC_INIT_MASK_CACHE, IRQ_NOREQUEST, 175 180 IRQ_LEVEL | IRQ_NOPROBE); 176 181 177 - irq_domain_add(&sd->id); 182 + sd->id = irq_domain_add_legacy(pdev->dev.of_node, SDV_NUM_PUB_GPIOS, 183 + sd->irq_base, 0, &irq_domain_sdv_ops, sd); 184 + if (!sd->id) 185 + goto out_free_irq; 178 186 return 0; 179 187 out_free_irq: 180 188 free_irq(pdev->irq, sd); ··· 258 260 { 259 261 struct sdv_gpio_chip_data *sd = pci_get_drvdata(pdev); 260 262 261 - irq_domain_del(&sd->id); 262 263 free_irq(pdev->irq, sd); 263 264 irq_free_descs(sd->irq_base, SDV_NUM_PUB_GPIOS); 264 265
+14 -33
drivers/gpu/drm/exynos/exynos_drm_buf.c
··· 34 34 static int lowlevel_buffer_allocate(struct drm_device *dev, 35 35 unsigned int flags, struct exynos_drm_gem_buf *buf) 36 36 { 37 - dma_addr_t start_addr, end_addr; 37 + dma_addr_t start_addr; 38 38 unsigned int npages, page_size, i = 0; 39 39 struct scatterlist *sgl; 40 40 int ret = 0; 41 41 42 42 DRM_DEBUG_KMS("%s\n", __FILE__); 43 43 44 - if (flags & EXYNOS_BO_NONCONTIG) { 44 + if (IS_NONCONTIG_BUFFER(flags)) { 45 45 DRM_DEBUG_KMS("not support allocation type.\n"); 46 46 return -EINVAL; 47 47 } ··· 52 52 } 53 53 54 54 if (buf->size >= SZ_1M) { 55 - npages = (buf->size >> SECTION_SHIFT) + 1; 55 + npages = buf->size >> SECTION_SHIFT; 56 56 page_size = SECTION_SIZE; 57 57 } else if (buf->size >= SZ_64K) { 58 - npages = (buf->size >> 16) + 1; 58 + npages = buf->size >> 16; 59 59 page_size = SZ_64K; 60 60 } else { 61 - npages = (buf->size >> PAGE_SHIFT) + 1; 61 + npages = buf->size >> PAGE_SHIFT; 62 62 page_size = PAGE_SIZE; 63 63 } 64 64 ··· 76 76 return -ENOMEM; 77 77 } 78 78 79 - buf->kvaddr = dma_alloc_writecombine(dev->dev, buf->size, 80 - &buf->dma_addr, GFP_KERNEL); 81 - if (!buf->kvaddr) { 82 - DRM_ERROR("failed to allocate buffer.\n"); 83 - ret = -ENOMEM; 84 - goto err1; 85 - } 86 - 87 - start_addr = buf->dma_addr; 88 - end_addr = buf->dma_addr + buf->size; 89 - 90 - buf->pages = kzalloc(sizeof(struct page) * npages, GFP_KERNEL); 91 - if (!buf->pages) { 92 - DRM_ERROR("failed to allocate pages.\n"); 93 - ret = -ENOMEM; 94 - goto err2; 95 - } 96 - 97 - start_addr = buf->dma_addr; 98 - end_addr = buf->dma_addr + buf->size; 79 + buf->kvaddr = dma_alloc_writecombine(dev->dev, buf->size, 80 + &buf->dma_addr, GFP_KERNEL); 81 + if (!buf->kvaddr) { 82 + DRM_ERROR("failed to allocate buffer.\n"); 83 + ret = -ENOMEM; 84 + goto err1; 85 + } 99 86 100 87 buf->pages = kzalloc(sizeof(struct page) * npages, GFP_KERNEL); 101 88 if (!buf->pages) { ··· 92 105 } 93 106 94 107 sgl = buf->sgt->sgl; 108 + start_addr = buf->dma_addr; 95 109 96 110 while (i < npages) { 97 
111 buf->pages[i] = phys_to_page(start_addr); 98 112 sg_set_page(sgl, buf->pages[i], page_size, 0); 99 113 sg_dma_address(sgl) = start_addr; 100 114 start_addr += page_size; 101 - if (end_addr - start_addr < page_size) 102 - break; 103 115 sgl = sg_next(sgl); 104 116 i++; 105 117 } 106 - 107 - buf->pages[i] = phys_to_page(start_addr); 108 - 109 - sgl = sg_next(sgl); 110 - sg_set_page(sgl, buf->pages[i+1], end_addr - start_addr, 0); 111 118 112 119 DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n", 113 120 (unsigned long)buf->kvaddr, ··· 131 150 * non-continuous memory would be released by exynos 132 151 * gem framework. 133 152 */ 134 - if (flags & EXYNOS_BO_NONCONTIG) { 153 + if (IS_NONCONTIG_BUFFER(flags)) { 135 154 DRM_DEBUG_KMS("not support allocation type.\n"); 136 155 return; 137 156 }
+8 -6
drivers/gpu/drm/exynos/exynos_drm_core.c
··· 54 54 * 55 55 * P.S. note that this driver is considered for modularization. 56 56 */ 57 - ret = subdrv->probe(dev, subdrv->manager.dev); 57 + ret = subdrv->probe(dev, subdrv->dev); 58 58 if (ret) 59 59 return ret; 60 60 } 61 61 62 - if (subdrv->is_local) 62 + if (!subdrv->manager) 63 63 return 0; 64 64 65 + subdrv->manager->dev = subdrv->dev; 66 + 65 67 /* create and initialize a encoder for this sub driver. */ 66 - encoder = exynos_drm_encoder_create(dev, &subdrv->manager, 68 + encoder = exynos_drm_encoder_create(dev, subdrv->manager, 67 69 (1 << MAX_CRTC) - 1); 68 70 if (!encoder) { 69 71 DRM_ERROR("failed to create encoder\n"); ··· 188 186 189 187 list_for_each_entry(subdrv, &exynos_drm_subdrv_list, list) { 190 188 if (subdrv->open) { 191 - ret = subdrv->open(dev, subdrv->manager.dev, file); 189 + ret = subdrv->open(dev, subdrv->dev, file); 192 190 if (ret) 193 191 goto err; 194 192 } ··· 199 197 err: 200 198 list_for_each_entry_reverse(subdrv, &subdrv->list, list) { 201 199 if (subdrv->close) 202 - subdrv->close(dev, subdrv->manager.dev, file); 200 + subdrv->close(dev, subdrv->dev, file); 203 201 } 204 202 return ret; 205 203 } ··· 211 209 212 210 list_for_each_entry(subdrv, &exynos_drm_subdrv_list, list) { 213 211 if (subdrv->close) 214 - subdrv->close(dev, subdrv->manager.dev, file); 212 + subdrv->close(dev, subdrv->dev, file); 215 213 } 216 214 } 217 215 EXPORT_SYMBOL_GPL(exynos_drm_subdrv_close);
+5 -5
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 225 225 * Exynos drm sub driver structure. 226 226 * 227 227 * @list: sub driver has its own list object to register to exynos drm driver. 228 + * @dev: pointer to device object for subdrv device driver. 228 229 * @drm_dev: pointer to drm_device and this pointer would be set 229 230 * when sub driver calls exynos_drm_subdrv_register(). 230 - * @is_local: appear encoder and connector disrelated device. 231 + * @manager: subdrv has its own manager to control a hardware appropriately 232 + * and we can access a hardware drawing on this manager. 231 233 * @probe: this callback would be called by exynos drm driver after 232 234 * subdrv is registered to it. 233 235 * @remove: this callback is used to release resources created 234 236 * by probe callback. 235 237 * @open: this would be called with drm device file open. 236 238 * @close: this would be called with drm device file close. 237 - * @manager: subdrv has its own manager to control a hardware appropriately 238 - * and we can access a hardware drawing on this manager. 239 239 * @encoder: encoder object owned by this sub driver. 240 240 * @connector: connector object owned by this sub driver. 241 241 */ 242 242 struct exynos_drm_subdrv { 243 243 struct list_head list; 244 + struct device *dev; 244 245 struct drm_device *drm_dev; 245 - bool is_local; 246 + struct exynos_drm_manager *manager; 246 247 247 248 int (*probe)(struct drm_device *drm_dev, struct device *dev); 248 249 void (*remove)(struct drm_device *dev); ··· 252 251 void (*close)(struct drm_device *drm_dev, struct device *dev, 253 252 struct drm_file *file); 254 253 255 - struct exynos_drm_manager manager; 256 254 struct drm_encoder *encoder; 257 255 struct drm_connector *connector; 258 256 };
+12 -8
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 172 172 static void fimd_apply(struct device *subdrv_dev) 173 173 { 174 174 struct fimd_context *ctx = get_fimd_context(subdrv_dev); 175 - struct exynos_drm_manager *mgr = &ctx->subdrv.manager; 175 + struct exynos_drm_manager *mgr = ctx->subdrv.manager; 176 176 struct exynos_drm_manager_ops *mgr_ops = mgr->ops; 177 177 struct exynos_drm_overlay_ops *ovl_ops = mgr->overlay_ops; 178 178 struct fimd_win_data *win_data; ··· 577 577 .disable = fimd_win_disable, 578 578 }; 579 579 580 + static struct exynos_drm_manager fimd_manager = { 581 + .pipe = -1, 582 + .ops = &fimd_manager_ops, 583 + .overlay_ops = &fimd_overlay_ops, 584 + .display_ops = &fimd_display_ops, 585 + }; 586 + 580 587 static void fimd_finish_pageflip(struct drm_device *drm_dev, int crtc) 581 588 { 582 589 struct exynos_drm_private *dev_priv = drm_dev->dev_private; ··· 635 628 struct fimd_context *ctx = (struct fimd_context *)dev_id; 636 629 struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 637 630 struct drm_device *drm_dev = subdrv->drm_dev; 638 - struct exynos_drm_manager *manager = &subdrv->manager; 631 + struct exynos_drm_manager *manager = subdrv->manager; 639 632 u32 val; 640 633 641 634 val = readl(ctx->regs + VIDINTCON1); ··· 751 744 static int fimd_power_on(struct fimd_context *ctx, bool enable) 752 745 { 753 746 struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 754 - struct device *dev = subdrv->manager.dev; 747 + struct device *dev = subdrv->dev; 755 748 756 749 DRM_DEBUG_KMS("%s\n", __FILE__); 757 750 ··· 874 867 875 868 subdrv = &ctx->subdrv; 876 869 870 + subdrv->dev = dev; 871 + subdrv->manager = &fimd_manager; 877 872 subdrv->probe = fimd_subdrv_probe; 878 873 subdrv->remove = fimd_subdrv_remove; 879 - subdrv->manager.pipe = -1; 880 - subdrv->manager.ops = &fimd_manager_ops; 881 - subdrv->manager.overlay_ops = &fimd_overlay_ops; 882 - subdrv->manager.display_ops = &fimd_display_ops; 883 - subdrv->manager.dev = dev; 884 874 885 875 mutex_init(&ctx->lock); 886 876
+36 -9
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 56 56 return out_msg; 57 57 } 58 58 59 - static unsigned int mask_gem_flags(unsigned int flags) 59 + static int check_gem_flags(unsigned int flags) 60 60 { 61 - return flags &= EXYNOS_BO_NONCONTIG; 61 + if (flags & ~(EXYNOS_BO_MASK)) { 62 + DRM_ERROR("invalid flags.\n"); 63 + return -EINVAL; 64 + } 65 + 66 + return 0; 67 + } 68 + 69 + static unsigned long roundup_gem_size(unsigned long size, unsigned int flags) 70 + { 71 + if (!IS_NONCONTIG_BUFFER(flags)) { 72 + if (size >= SZ_1M) 73 + return roundup(size, SECTION_SIZE); 74 + else if (size >= SZ_64K) 75 + return roundup(size, SZ_64K); 76 + else 77 + goto out; 78 + } 79 + out: 80 + return roundup(size, PAGE_SIZE); 62 81 } 63 82 64 83 static struct page **exynos_gem_get_pages(struct drm_gem_object *obj, ··· 338 319 struct exynos_drm_gem_buf *buf; 339 320 int ret; 340 321 341 - size = roundup(size, PAGE_SIZE); 342 - DRM_DEBUG_KMS("%s: size = 0x%lx\n", __FILE__, size); 322 + if (!size) { 323 + DRM_ERROR("invalid size.\n"); 324 + return ERR_PTR(-EINVAL); 325 + } 343 326 344 - flags = mask_gem_flags(flags); 327 + size = roundup_gem_size(size, flags); 328 + DRM_DEBUG_KMS("%s\n", __FILE__); 329 + 330 + ret = check_gem_flags(flags); 331 + if (ret) 332 + return ERR_PTR(ret); 345 333 346 334 buf = exynos_drm_init_buf(dev, size); 347 335 if (!buf) ··· 357 331 exynos_gem_obj = exynos_drm_gem_init(dev, size); 358 332 if (!exynos_gem_obj) { 359 333 ret = -ENOMEM; 360 - goto err; 334 + goto err_fini_buf; 361 335 } 362 336 363 337 exynos_gem_obj->buffer = buf; ··· 373 347 ret = exynos_drm_gem_get_pages(&exynos_gem_obj->base); 374 348 if (ret < 0) { 375 349 drm_gem_object_release(&exynos_gem_obj->base); 376 - goto err; 350 + goto err_fini_buf; 377 351 } 378 352 } else { 379 353 ret = exynos_drm_alloc_buf(dev, buf, flags); 380 354 if (ret < 0) { 381 355 drm_gem_object_release(&exynos_gem_obj->base); 382 - goto err; 356 + goto err_fini_buf; 383 357 } 384 358 } 385 359 386 360 return exynos_gem_obj; 387 - err: 361 + 362 + 
err_fini_buf: 388 363 exynos_drm_fini_buf(dev, buf); 389 364 return ERR_PTR(ret); 390 365 }
+2
drivers/gpu/drm/exynos/exynos_drm_gem.h
··· 29 29 #define to_exynos_gem_obj(x) container_of(x,\ 30 30 struct exynos_drm_gem_obj, base) 31 31 32 + #define IS_NONCONTIG_BUFFER(f) (f & EXYNOS_BO_NONCONTIG) 33 + 32 34 /* 33 35 * exynos drm gem buffer structure. 34 36 *
+48 -59
drivers/gpu/drm/exynos/exynos_drm_hdmi.c
··· 30 30 struct drm_hdmi_context, subdrv); 31 31 32 32 /* these callback points shoud be set by specific drivers. */ 33 - static struct exynos_hdmi_display_ops *hdmi_display_ops; 34 - static struct exynos_hdmi_manager_ops *hdmi_manager_ops; 35 - static struct exynos_hdmi_overlay_ops *hdmi_overlay_ops; 33 + static struct exynos_hdmi_ops *hdmi_ops; 34 + static struct exynos_mixer_ops *mixer_ops; 36 35 37 36 struct drm_hdmi_context { 38 37 struct exynos_drm_subdrv subdrv; ··· 39 40 struct exynos_drm_hdmi_context *mixer_ctx; 40 41 }; 41 42 42 - void exynos_drm_display_ops_register(struct exynos_hdmi_display_ops 43 - *display_ops) 43 + void exynos_hdmi_ops_register(struct exynos_hdmi_ops *ops) 44 44 { 45 45 DRM_DEBUG_KMS("%s\n", __FILE__); 46 46 47 - if (display_ops) 48 - hdmi_display_ops = display_ops; 47 + if (ops) 48 + hdmi_ops = ops; 49 49 } 50 50 51 - void exynos_drm_manager_ops_register(struct exynos_hdmi_manager_ops 52 - *manager_ops) 51 + void exynos_mixer_ops_register(struct exynos_mixer_ops *ops) 53 52 { 54 53 DRM_DEBUG_KMS("%s\n", __FILE__); 55 54 56 - if (manager_ops) 57 - hdmi_manager_ops = manager_ops; 58 - } 59 - 60 - void exynos_drm_overlay_ops_register(struct exynos_hdmi_overlay_ops 61 - *overlay_ops) 62 - { 63 - DRM_DEBUG_KMS("%s\n", __FILE__); 64 - 65 - if (overlay_ops) 66 - hdmi_overlay_ops = overlay_ops; 55 + if (ops) 56 + mixer_ops = ops; 67 57 } 68 58 69 59 static bool drm_hdmi_is_connected(struct device *dev) ··· 61 73 62 74 DRM_DEBUG_KMS("%s\n", __FILE__); 63 75 64 - if (hdmi_display_ops && hdmi_display_ops->is_connected) 65 - return hdmi_display_ops->is_connected(ctx->hdmi_ctx->ctx); 76 + if (hdmi_ops && hdmi_ops->is_connected) 77 + return hdmi_ops->is_connected(ctx->hdmi_ctx->ctx); 66 78 67 79 return false; 68 80 } ··· 74 86 75 87 DRM_DEBUG_KMS("%s\n", __FILE__); 76 88 77 - if (hdmi_display_ops && hdmi_display_ops->get_edid) 78 - return hdmi_display_ops->get_edid(ctx->hdmi_ctx->ctx, 79 - connector, edid, len); 89 + if (hdmi_ops && 
hdmi_ops->get_edid) 90 + return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector, edid, 91 + len); 80 92 81 93 return 0; 82 94 } ··· 87 99 88 100 DRM_DEBUG_KMS("%s\n", __FILE__); 89 101 90 - if (hdmi_display_ops && hdmi_display_ops->check_timing) 91 - return hdmi_display_ops->check_timing(ctx->hdmi_ctx->ctx, 92 - timing); 102 + if (hdmi_ops && hdmi_ops->check_timing) 103 + return hdmi_ops->check_timing(ctx->hdmi_ctx->ctx, timing); 93 104 94 105 return 0; 95 106 } ··· 99 112 100 113 DRM_DEBUG_KMS("%s\n", __FILE__); 101 114 102 - if (hdmi_display_ops && hdmi_display_ops->power_on) 103 - return hdmi_display_ops->power_on(ctx->hdmi_ctx->ctx, mode); 115 + if (hdmi_ops && hdmi_ops->power_on) 116 + return hdmi_ops->power_on(ctx->hdmi_ctx->ctx, mode); 104 117 105 118 return 0; 106 119 } ··· 117 130 { 118 131 struct drm_hdmi_context *ctx = to_context(subdrv_dev); 119 132 struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 120 - struct exynos_drm_manager *manager = &subdrv->manager; 133 + struct exynos_drm_manager *manager = subdrv->manager; 121 134 122 135 DRM_DEBUG_KMS("%s\n", __FILE__); 123 136 124 - if (hdmi_overlay_ops && hdmi_overlay_ops->enable_vblank) 125 - return hdmi_overlay_ops->enable_vblank(ctx->mixer_ctx->ctx, 126 - manager->pipe); 137 + if (mixer_ops && mixer_ops->enable_vblank) 138 + return mixer_ops->enable_vblank(ctx->mixer_ctx->ctx, 139 + manager->pipe); 127 140 128 141 return 0; 129 142 } ··· 134 147 135 148 DRM_DEBUG_KMS("%s\n", __FILE__); 136 149 137 - if (hdmi_overlay_ops && hdmi_overlay_ops->disable_vblank) 138 - return hdmi_overlay_ops->disable_vblank(ctx->mixer_ctx->ctx); 150 + if (mixer_ops && mixer_ops->disable_vblank) 151 + return mixer_ops->disable_vblank(ctx->mixer_ctx->ctx); 139 152 } 140 153 141 154 static void drm_hdmi_mode_fixup(struct device *subdrv_dev, ··· 147 160 148 161 DRM_DEBUG_KMS("%s\n", __FILE__); 149 162 150 - if (hdmi_manager_ops && hdmi_manager_ops->mode_fixup) 151 - hdmi_manager_ops->mode_fixup(ctx->hdmi_ctx->ctx, connector, 152 
- mode, adjusted_mode); 163 + if (hdmi_ops && hdmi_ops->mode_fixup) 164 + hdmi_ops->mode_fixup(ctx->hdmi_ctx->ctx, connector, mode, 165 + adjusted_mode); 153 166 } 154 167 155 168 static void drm_hdmi_mode_set(struct device *subdrv_dev, void *mode) ··· 158 171 159 172 DRM_DEBUG_KMS("%s\n", __FILE__); 160 173 161 - if (hdmi_manager_ops && hdmi_manager_ops->mode_set) 162 - hdmi_manager_ops->mode_set(ctx->hdmi_ctx->ctx, mode); 174 + if (hdmi_ops && hdmi_ops->mode_set) 175 + hdmi_ops->mode_set(ctx->hdmi_ctx->ctx, mode); 163 176 } 164 177 165 178 static void drm_hdmi_get_max_resol(struct device *subdrv_dev, ··· 169 182 170 183 DRM_DEBUG_KMS("%s\n", __FILE__); 171 184 172 - if (hdmi_manager_ops && hdmi_manager_ops->get_max_resol) 173 - hdmi_manager_ops->get_max_resol(ctx->hdmi_ctx->ctx, width, 174 - height); 185 + if (hdmi_ops && hdmi_ops->get_max_resol) 186 + hdmi_ops->get_max_resol(ctx->hdmi_ctx->ctx, width, height); 175 187 } 176 188 177 189 static void drm_hdmi_commit(struct device *subdrv_dev) ··· 179 193 180 194 DRM_DEBUG_KMS("%s\n", __FILE__); 181 195 182 - if (hdmi_manager_ops && hdmi_manager_ops->commit) 183 - hdmi_manager_ops->commit(ctx->hdmi_ctx->ctx); 196 + if (hdmi_ops && hdmi_ops->commit) 197 + hdmi_ops->commit(ctx->hdmi_ctx->ctx); 184 198 } 185 199 186 200 static void drm_hdmi_dpms(struct device *subdrv_dev, int mode) ··· 195 209 case DRM_MODE_DPMS_STANDBY: 196 210 case DRM_MODE_DPMS_SUSPEND: 197 211 case DRM_MODE_DPMS_OFF: 198 - if (hdmi_manager_ops && hdmi_manager_ops->disable) 199 - hdmi_manager_ops->disable(ctx->hdmi_ctx->ctx); 212 + if (hdmi_ops && hdmi_ops->disable) 213 + hdmi_ops->disable(ctx->hdmi_ctx->ctx); 200 214 break; 201 215 default: 202 216 DRM_DEBUG_KMS("unkown dps mode: %d\n", mode); ··· 221 235 222 236 DRM_DEBUG_KMS("%s\n", __FILE__); 223 237 224 - if (hdmi_overlay_ops && hdmi_overlay_ops->win_mode_set) 225 - hdmi_overlay_ops->win_mode_set(ctx->mixer_ctx->ctx, overlay); 238 + if (mixer_ops && mixer_ops->win_mode_set) 239 + 
mixer_ops->win_mode_set(ctx->mixer_ctx->ctx, overlay); 226 240 } 227 241 228 242 static void drm_mixer_commit(struct device *subdrv_dev, int zpos) ··· 231 245 232 246 DRM_DEBUG_KMS("%s\n", __FILE__); 233 247 234 - if (hdmi_overlay_ops && hdmi_overlay_ops->win_commit) 235 - hdmi_overlay_ops->win_commit(ctx->mixer_ctx->ctx, zpos); 248 + if (mixer_ops && mixer_ops->win_commit) 249 + mixer_ops->win_commit(ctx->mixer_ctx->ctx, zpos); 236 250 } 237 251 238 252 static void drm_mixer_disable(struct device *subdrv_dev, int zpos) ··· 241 255 242 256 DRM_DEBUG_KMS("%s\n", __FILE__); 243 257 244 - if (hdmi_overlay_ops && hdmi_overlay_ops->win_disable) 245 - hdmi_overlay_ops->win_disable(ctx->mixer_ctx->ctx, zpos); 258 + if (mixer_ops && mixer_ops->win_disable) 259 + mixer_ops->win_disable(ctx->mixer_ctx->ctx, zpos); 246 260 } 247 261 248 262 static struct exynos_drm_overlay_ops drm_hdmi_overlay_ops = { ··· 251 265 .disable = drm_mixer_disable, 252 266 }; 253 267 268 + static struct exynos_drm_manager hdmi_manager = { 269 + .pipe = -1, 270 + .ops = &drm_hdmi_manager_ops, 271 + .overlay_ops = &drm_hdmi_overlay_ops, 272 + .display_ops = &drm_hdmi_display_ops, 273 + }; 254 274 255 275 static int hdmi_subdrv_probe(struct drm_device *drm_dev, 256 276 struct device *dev) ··· 324 332 325 333 subdrv = &ctx->subdrv; 326 334 335 + subdrv->dev = dev; 336 + subdrv->manager = &hdmi_manager; 327 337 subdrv->probe = hdmi_subdrv_probe; 328 - subdrv->manager.pipe = -1; 329 - subdrv->manager.ops = &drm_hdmi_manager_ops; 330 - subdrv->manager.overlay_ops = &drm_hdmi_overlay_ops; 331 - subdrv->manager.display_ops = &drm_hdmi_display_ops; 332 - subdrv->manager.dev = dev; 333 338 334 339 platform_set_drvdata(pdev, subdrv); 335 340
+9 -14
drivers/gpu/drm/exynos/exynos_drm_hdmi.h
··· 38 38 void *ctx; 39 39 }; 40 40 41 - struct exynos_hdmi_display_ops { 41 + struct exynos_hdmi_ops { 42 + /* display */ 42 43 bool (*is_connected)(void *ctx); 43 44 int (*get_edid)(void *ctx, struct drm_connector *connector, 44 45 u8 *edid, int len); 45 46 int (*check_timing)(void *ctx, void *timing); 46 47 int (*power_on)(void *ctx, int mode); 47 - }; 48 48 49 - struct exynos_hdmi_manager_ops { 49 + /* manager */ 50 50 void (*mode_fixup)(void *ctx, struct drm_connector *connector, 51 51 struct drm_display_mode *mode, 52 52 struct drm_display_mode *adjusted_mode); ··· 57 57 void (*disable)(void *ctx); 58 58 }; 59 59 60 - struct exynos_hdmi_overlay_ops { 60 + struct exynos_mixer_ops { 61 + /* manager */ 61 62 int (*enable_vblank)(void *ctx, int pipe); 62 63 void (*disable_vblank)(void *ctx); 64 + 65 + /* overlay */ 63 66 void (*win_mode_set)(void *ctx, struct exynos_drm_overlay *overlay); 64 67 void (*win_commit)(void *ctx, int zpos); 65 68 void (*win_disable)(void *ctx, int zpos); 66 69 }; 67 70 68 - extern struct platform_driver hdmi_driver; 69 - extern struct platform_driver mixer_driver; 70 - 71 - void exynos_drm_display_ops_register(struct exynos_hdmi_display_ops 72 - *display_ops); 73 - void exynos_drm_manager_ops_register(struct exynos_hdmi_manager_ops 74 - *manager_ops); 75 - void exynos_drm_overlay_ops_register(struct exynos_hdmi_overlay_ops 76 - *overlay_ops); 77 - 71 + void exynos_hdmi_ops_register(struct exynos_hdmi_ops *ops); 72 + void exynos_mixer_ops_register(struct exynos_mixer_ops *ops); 78 73 #endif
+4
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 24 24 25 25 static const uint32_t formats[] = { 26 26 DRM_FORMAT_XRGB8888, 27 + DRM_FORMAT_ARGB8888, 28 + DRM_FORMAT_NV12, 29 + DRM_FORMAT_NV12M, 30 + DRM_FORMAT_NV12MT, 27 31 }; 28 32 29 33 static int
+12 -8
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 199 199 static void vidi_apply(struct device *subdrv_dev) 200 200 { 201 201 struct vidi_context *ctx = get_vidi_context(subdrv_dev); 202 - struct exynos_drm_manager *mgr = &ctx->subdrv.manager; 202 + struct exynos_drm_manager *mgr = ctx->subdrv.manager; 203 203 struct exynos_drm_manager_ops *mgr_ops = mgr->ops; 204 204 struct exynos_drm_overlay_ops *ovl_ops = mgr->overlay_ops; 205 205 struct vidi_win_data *win_data; ··· 374 374 .disable = vidi_win_disable, 375 375 }; 376 376 377 + static struct exynos_drm_manager vidi_manager = { 378 + .pipe = -1, 379 + .ops = &vidi_manager_ops, 380 + .overlay_ops = &vidi_overlay_ops, 381 + .display_ops = &vidi_display_ops, 382 + }; 383 + 377 384 static void vidi_finish_pageflip(struct drm_device *drm_dev, int crtc) 378 385 { 379 386 struct exynos_drm_private *dev_priv = drm_dev->dev_private; ··· 432 425 struct vidi_context *ctx = container_of(work, struct vidi_context, 433 426 work); 434 427 struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 435 - struct exynos_drm_manager *manager = &subdrv->manager; 428 + struct exynos_drm_manager *manager = subdrv->manager; 436 429 437 430 if (manager->pipe < 0) 438 431 return; ··· 478 471 static int vidi_power_on(struct vidi_context *ctx, bool enable) 479 472 { 480 473 struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 481 - struct device *dev = subdrv->manager.dev; 474 + struct device *dev = subdrv->dev; 482 475 483 476 DRM_DEBUG_KMS("%s\n", __FILE__); 484 477 ··· 618 611 ctx->raw_edid = (struct edid *)fake_edid_info; 619 612 620 613 subdrv = &ctx->subdrv; 614 + subdrv->dev = dev; 615 + subdrv->manager = &vidi_manager; 621 616 subdrv->probe = vidi_subdrv_probe; 622 617 subdrv->remove = vidi_subdrv_remove; 623 - subdrv->manager.pipe = -1; 624 - subdrv->manager.ops = &vidi_manager_ops; 625 - subdrv->manager.overlay_ops = &vidi_overlay_ops; 626 - subdrv->manager.display_ops = &vidi_display_ops; 627 - subdrv->manager.dev = dev; 628 618 629 619 mutex_init(&ctx->lock); 630 620
+20 -22
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 40 40 41 41 #include "exynos_hdmi.h" 42 42 43 - #define HDMI_OVERLAY_NUMBER 3 44 43 #define MAX_WIDTH 1920 45 44 #define MAX_HEIGHT 1080 46 45 #define get_hdmi_context(dev) platform_get_drvdata(to_platform_device(dev)) ··· 1193 1194 1194 1195 static bool hdmi_is_connected(void *ctx) 1195 1196 { 1196 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1197 + struct hdmi_context *hdata = ctx; 1197 1198 u32 val = hdmi_reg_read(hdata, HDMI_HPD_STATUS); 1198 1199 1199 1200 if (val) ··· 1206 1207 u8 *edid, int len) 1207 1208 { 1208 1209 struct edid *raw_edid; 1209 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1210 + struct hdmi_context *hdata = ctx; 1210 1211 1211 1212 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 1212 1213 ··· 1274 1275 1275 1276 static int hdmi_check_timing(void *ctx, void *timing) 1276 1277 { 1277 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1278 + struct hdmi_context *hdata = ctx; 1278 1279 struct fb_videomode *check_timing = timing; 1279 1280 1280 1281 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); ··· 1310 1311 1311 1312 return 0; 1312 1313 } 1313 - 1314 - static struct exynos_hdmi_display_ops display_ops = { 1315 - .is_connected = hdmi_is_connected, 1316 - .get_edid = hdmi_get_edid, 1317 - .check_timing = hdmi_check_timing, 1318 - .power_on = hdmi_display_power_on, 1319 - }; 1320 1314 1321 1315 static void hdmi_set_acr(u32 freq, u8 *acr) 1322 1316 { ··· 1906 1914 struct drm_display_mode *adjusted_mode) 1907 1915 { 1908 1916 struct drm_display_mode *m; 1909 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1917 + struct hdmi_context *hdata = ctx; 1910 1918 int index; 1911 1919 1912 1920 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); ··· 1943 1951 1944 1952 static void hdmi_mode_set(void *ctx, void *mode) 1945 1953 { 1946 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1954 + struct hdmi_context *hdata = ctx; 1947 1955 int conf_idx; 1948 1956 1949 1957 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
··· 1966 1974 1967 1975 static void hdmi_commit(void *ctx) 1968 1976 { 1969 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1977 + struct hdmi_context *hdata = ctx; 1970 1978 1971 1979 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 1972 1980 ··· 1977 1985 1978 1986 static void hdmi_disable(void *ctx) 1979 1987 { 1980 - struct hdmi_context *hdata = (struct hdmi_context *)ctx; 1988 + struct hdmi_context *hdata = ctx; 1981 1989 1982 1990 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 1983 1991 ··· 1988 1996 } 1989 1997 } 1990 1998 1991 - static struct exynos_hdmi_manager_ops manager_ops = { 1999 + static struct exynos_hdmi_ops hdmi_ops = { 2000 + /* display */ 2001 + .is_connected = hdmi_is_connected, 2002 + .get_edid = hdmi_get_edid, 2003 + .check_timing = hdmi_check_timing, 2004 + .power_on = hdmi_display_power_on, 2005 + 2006 + /* manager */ 1992 2007 .mode_fixup = hdmi_mode_fixup, 1993 2008 .mode_set = hdmi_mode_set, 1994 2009 .get_max_resol = hdmi_get_max_resol, ··· 2019 2020 static irqreturn_t hdmi_irq_handler(int irq, void *arg) 2020 2021 { 2021 2022 struct exynos_drm_hdmi_context *ctx = arg; 2022 - struct hdmi_context *hdata = (struct hdmi_context *)ctx->ctx; 2023 + struct hdmi_context *hdata = ctx->ctx; 2023 2024 u32 intc_flag; 2024 2025 2025 2026 intc_flag = hdmi_reg_read(hdata, HDMI_INTC_FLAG); ··· 2172 2173 2173 2174 DRM_DEBUG_KMS("%s\n", __func__); 2174 2175 2175 - hdmi_resource_poweroff((struct hdmi_context *)ctx->ctx); 2176 + hdmi_resource_poweroff(ctx->ctx); 2176 2177 2177 2178 return 0; 2178 2179 } ··· 2183 2184 2184 2185 DRM_DEBUG_KMS("%s\n", __func__); 2185 2186 2186 - hdmi_resource_poweron((struct hdmi_context *)ctx->ctx); 2187 + hdmi_resource_poweron(ctx->ctx); 2187 2188 2188 2189 return 0; 2189 2190 } ··· 2321 2322 hdata->irq = res->start; 2322 2323 2323 2324 /* register specific callbacks to common hdmi. */
2324 - exynos_drm_display_ops_register(&display_ops); 2325 - exynos_drm_manager_ops_register(&manager_ops); 2325 + exynos_hdmi_ops_register(&hdmi_ops); 2326 2326 2327 2327 hdmi_resource_poweron(hdata); 2328 2328 ··· 2349 2351 static int __devexit hdmi_remove(struct platform_device *pdev) 2350 2352 { 2351 2353 struct exynos_drm_hdmi_context *ctx = platform_get_drvdata(pdev); 2352 - struct hdmi_context *hdata = (struct hdmi_context *)ctx->ctx; 2354 + struct hdmi_context *hdata = ctx->ctx; 2353 2355 2354 2356 DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__); 2355 2357
+19 -21
drivers/gpu/drm/exynos/exynos_mixer.c
··· 37 37 #include "exynos_drm_drv.h" 38 38 #include "exynos_drm_hdmi.h" 39 39 40 - #define HDMI_OVERLAY_NUMBER 3 40 + #define MIXER_WIN_NR 3 41 + #define MIXER_DEFAULT_WIN 0 41 42 42 43 #define get_mixer_context(dev) platform_get_drvdata(to_platform_device(dev)) 43 44 ··· 76 75 }; 77 76 78 77 struct mixer_context { 79 - struct fb_videomode *default_timing; 80 - unsigned int default_win; 81 - unsigned int default_bpp; 82 78 unsigned int irq; 83 79 int pipe; 84 80 bool interlace; 85 - bool vp_enabled; 86 81 87 82 struct mixer_resources mixer_res; 88 - struct hdmi_win_data win_data[HDMI_OVERLAY_NUMBER]; 83 + struct hdmi_win_data win_data[MIXER_WIN_NR]; 89 84 }; 90 85 91 86 static const u8 filter_y_horiz_tap8[] = { ··· 640 643 641 644 win = overlay->zpos; 642 645 if (win == DEFAULT_ZPOS) 643 - win = mixer_ctx->default_win; 646 + win = MIXER_DEFAULT_WIN; 644 647 645 - if (win < 0 || win > HDMI_OVERLAY_NUMBER) { 648 + if (win < 0 || win > MIXER_WIN_NR) { 646 649 DRM_ERROR("overlay plane[%d] is wrong\n", win); 647 650 return; 648 651 } ··· 680 683 DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win); 681 684 682 685 if (win == DEFAULT_ZPOS) 683 - win = mixer_ctx->default_win; 686 + win = MIXER_DEFAULT_WIN; 684 687 685 - if (win < 0 || win > HDMI_OVERLAY_NUMBER) { 688 + if (win < 0 || win > MIXER_WIN_NR) { 686 689 DRM_ERROR("overlay plane[%d] is wrong\n", win); 687 690 return; 688 691 } ··· 703 706 DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win); 704 707 705 708 if (win == DEFAULT_ZPOS) 706 - win = mixer_ctx->default_win; 709 + win = MIXER_DEFAULT_WIN; 707 710 708 - if (win < 0 || win > HDMI_OVERLAY_NUMBER) { 711 + if (win < 0 || win > MIXER_WIN_NR) { 709 712 DRM_ERROR("overlay plane[%d] is wrong\n", win); 710 713 return; 711 714 } ··· 719 722 spin_unlock_irqrestore(&res->reg_slock, flags); 720 723 } 721 724 722 - static struct exynos_hdmi_overlay_ops overlay_ops = { 725 + static struct exynos_mixer_ops mixer_ops = { 726 + /* manager */ 723 727 .enable_vblank = mixer_enable_vblank, 724 728 .disable_vblank = mixer_disable_vblank, 729 + 730 + /* overlay */ 725 731 .win_mode_set = mixer_win_mode_set, 726 732 .win_commit = mixer_win_commit, 727 733 .win_disable = mixer_win_disable,
··· 771 771 static irqreturn_t mixer_irq_handler(int irq, void *arg) 772 772 { 773 773 struct exynos_drm_hdmi_context *drm_hdmi_ctx = arg; 774 - struct mixer_context *ctx = 775 - (struct mixer_context *)drm_hdmi_ctx->ctx; 774 + struct mixer_context *ctx = drm_hdmi_ctx->ctx; 776 775 struct mixer_resources *res = &ctx->mixer_res; 777 776 u32 val, val_base; 778 777 ··· 901 902 902 903 DRM_DEBUG_KMS("resume - start\n"); 903 904 904 - mixer_resource_poweron((struct mixer_context *)ctx->ctx); 905 + mixer_resource_poweron(ctx->ctx); 905 906 906 907 return 0; 907 908 } ··· 912 913 913 914 DRM_DEBUG_KMS("suspend - start\n"); 914 915 915 - mixer_resource_poweroff((struct mixer_context *)ctx->ctx); 916 + mixer_resource_poweroff(ctx->ctx); 916 917 917 918 return 0; 918 919 } ··· 925 926 static int __devinit mixer_resources_init(struct exynos_drm_hdmi_context *ctx, 926 927 struct platform_device *pdev) 927 928 { 928 - struct mixer_context *mixer_ctx = 929 - (struct mixer_context *)ctx->ctx; 929 + struct mixer_context *mixer_ctx = ctx->ctx; 930 930 struct device *dev = &pdev->dev; 931 931 struct mixer_resources *mixer_res = &mixer_ctx->mixer_res; 932 932 struct resource *res; ··· 1074 1076 goto fail; 1075 1077 1076 1078 /* register specific callback point to common hdmi. */
1077 - exynos_drm_overlay_ops_register(&overlay_ops); 1079 + exynos_mixer_ops_register(&mixer_ops); 1078 1080 1079 1081 mixer_resource_poweron(ctx); 1080 1082 ··· 1091 1093 struct device *dev = &pdev->dev; 1092 1094 struct exynos_drm_hdmi_context *drm_hdmi_ctx = 1093 1095 platform_get_drvdata(pdev); 1094 - struct mixer_context *ctx = (struct mixer_context *)drm_hdmi_ctx->ctx; 1096 + struct mixer_context *ctx = drm_hdmi_ctx->ctx; 1095 1097 1096 1098 dev_info(dev, "remove successful\n"); 1097 1099
+1 -1
drivers/gpu/drm/i915/i915_drv.c
··· 64 64 "Use semaphores for inter-ring sync (default: -1 (use per-chip defaults))"); 65 65 66 66 int i915_enable_rc6 __read_mostly = -1; 67 - module_param_named(i915_enable_rc6, i915_enable_rc6, int, 0600); 67 + module_param_named(i915_enable_rc6, i915_enable_rc6, int, 0400); 68 68 MODULE_PARM_DESC(i915_enable_rc6, 69 69 "Enable power-saving render C-state 6. " 70 70 "Different stages can be selected via bitmask values "
+2
drivers/gpu/drm/i915/i915_gem.c
··· 1493 1493 { 1494 1494 list_del_init(&obj->ring_list); 1495 1495 obj->last_rendering_seqno = 0; 1496 + obj->last_fenced_seqno = 0; 1496 1497 } 1497 1498 1498 1499 static void ··· 1522 1521 BUG_ON(!list_empty(&obj->gpu_write_list)); 1523 1522 BUG_ON(!obj->active); 1524 1523 obj->ring = NULL; 1524 + obj->last_fenced_ring = NULL; 1525 1525 1526 1526 i915_gem_object_move_off_active(obj); 1527 1527 obj->fenced_gpu_access = false;
+3
drivers/gpu/drm/i915/i915_reg.h
··· 3728 3728 #define GT_FIFO_FREE_ENTRIES 0x120008 3729 3729 #define GT_FIFO_NUM_RESERVED_ENTRIES 20 3730 3730 3731 + #define GEN6_UCGCTL1 0x9400 3732 + # define GEN6_BLBUNIT_CLOCK_GATE_DISABLE (1 << 5) 3733 + 3731 3734 #define GEN6_UCGCTL2 0x9404 3732 3735 # define GEN6_RCZUNIT_CLOCK_GATE_DISABLE (1 << 13) 3733 3736 # define GEN6_RCPBUNIT_CLOCK_GATE_DISABLE (1 << 12)
+50 -19
drivers/gpu/drm/i915/intel_display.c
··· 2245 2245 } 2246 2246 2247 2247 static int 2248 + intel_finish_fb(struct drm_framebuffer *old_fb) 2249 + { 2250 + struct drm_i915_gem_object *obj = to_intel_framebuffer(old_fb)->obj; 2251 + struct drm_i915_private *dev_priv = obj->base.dev->dev_private; 2252 + bool was_interruptible = dev_priv->mm.interruptible; 2253 + int ret; 2254 + 2255 + wait_event(dev_priv->pending_flip_queue, 2256 + atomic_read(&dev_priv->mm.wedged) || 2257 + atomic_read(&obj->pending_flip) == 0); 2258 + 2259 + /* Big Hammer, we also need to ensure that any pending 2260 + * MI_WAIT_FOR_EVENT inside a user batch buffer on the 2261 + * current scanout is retired before unpinning the old 2262 + * framebuffer. 2263 + * 2264 + * This should only fail upon a hung GPU, in which case we 2265 + * can safely continue. 2266 + */ 2267 + dev_priv->mm.interruptible = false; 2268 + ret = i915_gem_object_finish_gpu(obj); 2269 + dev_priv->mm.interruptible = was_interruptible; 2270 + 2271 + return ret; 2272 + } 2273 + 2274 + static int 2248 2275 intel_pipe_set_base(struct drm_crtc *crtc, int x, int y, 2249 2276 struct drm_framebuffer *old_fb) 2250 2277 { ··· 2309 2282 return ret; 2310 2283 } 2311 2284 2312 - if (old_fb) { 2313 - struct drm_i915_private *dev_priv = dev->dev_private; 2314 - struct drm_i915_gem_object *obj = to_intel_framebuffer(old_fb)->obj; 2315 - 2316 - wait_event(dev_priv->pending_flip_queue, 2317 - atomic_read(&dev_priv->mm.wedged) || 2318 - atomic_read(&obj->pending_flip) == 0); 2319 - 2320 - /* Big Hammer, we also need to ensure that any pending 2321 - * MI_WAIT_FOR_EVENT inside a user batch buffer on the 2322 - * current scanout is retired before unpinning the old 2323 - * framebuffer. 2324 - * 2325 - * This should only fail upon a hung GPU, in which case we 2326 - * can safely continue. 
2327 - */ 2328 - ret = i915_gem_object_finish_gpu(obj); 2329 - (void) ret; 2330 - } 2285 + if (old_fb) 2286 + intel_finish_fb(old_fb); 2331 2287 2332 2288 ret = intel_pipe_set_base_atomic(crtc, crtc->fb, x, y, 2333 2289 LEAVE_ATOMIC_MODE_SET); ··· 3380 3370 { 3381 3371 struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; 3382 3372 struct drm_device *dev = crtc->dev; 3373 + 3374 + /* Flush any pending WAITs before we disable the pipe. Note that 3375 + * we need to drop the struct_mutex in order to acquire it again 3376 + * during the lowlevel dpms routines around a couple of the 3377 + * operations. It does not look trivial nor desirable to move 3378 + * that locking higher. So instead we leave a window for the 3379 + * submission of further commands on the fb before we can actually 3380 + * disable it. This race with userspace exists anyway, and we can 3381 + * only rely on the pipe being disabled by userspace after it 3382 + * receives the hotplug notification and has flushed any pending 3383 + * batches. 3384 + */ 3385 + if (crtc->fb) { 3386 + mutex_lock(&dev->struct_mutex); 3387 + intel_finish_fb(crtc->fb); 3388 + mutex_unlock(&dev->struct_mutex); 3389 + } 3383 3390 3384 3391 crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF); 3385 3392 assert_plane_disabled(dev->dev_private, to_intel_crtc(crtc)->plane); ··· 8555 8528 I915_WRITE(WM3_LP_ILK, 0); 8556 8529 I915_WRITE(WM2_LP_ILK, 0); 8557 8530 I915_WRITE(WM1_LP_ILK, 0); 8531 + 8532 + I915_WRITE(GEN6_UCGCTL1, 8533 + I915_READ(GEN6_UCGCTL1) | 8534 + GEN6_BLBUNIT_CLOCK_GATE_DISABLE); 8558 8535 8559 8536 /* According to the BSpec vol1g, bit 12 (RCPBUNIT) clock 8560 8537 * gating disable must be set. Failure to set it results in
+35 -14
drivers/gpu/drm/i915/intel_dp.c
··· 219 219 return (max_link_clock * max_lanes * 8) / 10; 220 220 } 221 221 222 + static bool 223 + intel_dp_adjust_dithering(struct intel_dp *intel_dp, 224 + struct drm_display_mode *mode, 225 + struct drm_display_mode *adjusted_mode) 226 + { 227 + int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_dp)); 228 + int max_lanes = intel_dp_max_lane_count(intel_dp); 229 + int max_rate, mode_rate; 230 + 231 + mode_rate = intel_dp_link_required(mode->clock, 24); 232 + max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes); 233 + 234 + if (mode_rate > max_rate) { 235 + mode_rate = intel_dp_link_required(mode->clock, 18); 236 + if (mode_rate > max_rate) 237 + return false; 238 + 239 + if (adjusted_mode) 240 + adjusted_mode->private_flags 241 + |= INTEL_MODE_DP_FORCE_6BPC; 242 + 243 + return true; 244 + } 245 + 246 + return true; 247 + } 248 + 222 249 static int 223 250 intel_dp_mode_valid(struct drm_connector *connector, 224 251 struct drm_display_mode *mode) 225 252 { 226 253 struct intel_dp *intel_dp = intel_attached_dp(connector); 227 - int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_dp)); 228 - int max_lanes = intel_dp_max_lane_count(intel_dp); 229 - int max_rate, mode_rate; 230 254 231 255 if (is_edp(intel_dp) && intel_dp->panel_fixed_mode) { 232 256 if (mode->hdisplay > intel_dp->panel_fixed_mode->hdisplay) ··· 260 236 return MODE_PANEL; 261 237 } 262 238 263 - mode_rate = intel_dp_link_required(mode->clock, 24); 264 - max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes); 265 - 266 - if (mode_rate > max_rate) { 267 - mode_rate = intel_dp_link_required(mode->clock, 18); 268 - if (mode_rate > max_rate) 269 - return MODE_CLOCK_HIGH; 270 - else 271 - mode->private_flags |= INTEL_MODE_DP_FORCE_6BPC; 272 - } 239 + if (!intel_dp_adjust_dithering(intel_dp, mode, NULL)) 240 + return MODE_CLOCK_HIGH; 273 241 274 242 if (mode->clock < 10000) 275 243 return MODE_CLOCK_LOW;
··· 688 672 int lane_count, clock; 689 673 int max_lane_count = intel_dp_max_lane_count(intel_dp); 690 674 int max_clock = intel_dp_max_link_bw(intel_dp) == DP_LINK_BW_2_7 ? 1 : 0; 691 - int bpp = mode->private_flags & INTEL_MODE_DP_FORCE_6BPC ? 18 : 24; 675 + int bpp; 692 676 static int bws[2] = { DP_LINK_BW_1_62, DP_LINK_BW_2_7 }; 693 677 694 678 if (is_edp(intel_dp) && intel_dp->panel_fixed_mode) { ··· 701 685 */ 702 686 mode->clock = intel_dp->panel_fixed_mode->clock; 703 687 } 688 + 689 + if (!intel_dp_adjust_dithering(intel_dp, mode, adjusted_mode)) 690 + return false; 691 + 692 + bpp = adjusted_mode->private_flags & INTEL_MODE_DP_FORCE_6BPC ? 18 : 24; 704 693 705 694 for (lane_count = 1; lane_count <= max_lane_count; lane_count <<= 1) { 706 695 for (clock = 0; clock <= max_clock; clock++) {
+1 -1
drivers/gpu/drm/i915/intel_i2c.c
··· 390 390 bus->has_gpio = intel_gpio_setup(bus, i); 391 391 392 392 /* XXX force bit banging until GMBUS is fully debugged */ 393 - if (bus->has_gpio && IS_GEN2(dev)) 393 + if (bus->has_gpio) 394 394 bus->force_bit = true; 395 395 } 396 396
+1 -1
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 1038 1038 * of the buffer. 1039 1039 */ 1040 1040 ring->effective_size = ring->size; 1041 - if (IS_I830(ring->dev)) 1041 + if (IS_I830(ring->dev) || IS_845G(ring->dev)) 1042 1042 ring->effective_size -= 128; 1043 1043 1044 1044 return 0;
-1
drivers/gpu/drm/i915/intel_sprite.c
··· 95 95 /* must disable */ 96 96 sprctl |= SPRITE_TRICKLE_FEED_DISABLE; 97 97 sprctl |= SPRITE_ENABLE; 98 - sprctl |= SPRITE_DEST_KEY; 99 98 100 99 /* Sizes are 0 based */ 101 100 src_w--;
+4
drivers/gpu/drm/radeon/atombios_encoders.c
··· 230 230 if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 231 231 return; 232 232 233 + /* some R4xx chips have the wrong frev */ 234 + if (rdev->family <= CHIP_RV410) 235 + frev = 1; 236 + 233 237 switch (frev) { 234 238 case 1: 235 239 switch (crev) {
+1 -1
drivers/gpu/drm/radeon/r100.c
··· 2553 2553 * or the chip could hang on a subsequent access 2554 2554 */ 2555 2555 if (rdev->pll_errata & CHIP_ERRATA_PLL_DELAY) { 2556 - udelay(5000); 2556 + mdelay(5); 2557 2557 } 2558 2558 2559 2559 /* This function is required to workaround a hardware bug in some (all?)
+1 -1
drivers/gpu/drm/radeon/r600.c
··· 2839 2839 /* r7xx asics need to soft reset RLC before halting */ 2840 2840 WREG32(SRBM_SOFT_RESET, SOFT_RESET_RLC); 2841 2841 RREG32(SRBM_SOFT_RESET); 2842 - udelay(15000); 2842 + mdelay(15); 2843 2843 WREG32(SRBM_SOFT_RESET, 0); 2844 2844 RREG32(SRBM_SOFT_RESET); 2845 2845 }
+3 -3
drivers/gpu/drm/radeon/r600_cp.c
··· 407 407 408 408 RADEON_WRITE(R600_GRBM_SOFT_RESET, R600_SOFT_RESET_CP); 409 409 RADEON_READ(R600_GRBM_SOFT_RESET); 410 - DRM_UDELAY(15000); 410 + mdelay(15); 411 411 RADEON_WRITE(R600_GRBM_SOFT_RESET, 0); 412 412 413 413 fw_data = (const __be32 *)dev_priv->me_fw->data; ··· 500 500 501 501 RADEON_WRITE(R600_GRBM_SOFT_RESET, R600_SOFT_RESET_CP); 502 502 RADEON_READ(R600_GRBM_SOFT_RESET); 503 - DRM_UDELAY(15000); 503 + mdelay(15); 504 504 RADEON_WRITE(R600_GRBM_SOFT_RESET, 0); 505 505 506 506 fw_data = (const __be32 *)dev_priv->pfp_fw->data; ··· 1797 1797 1798 1798 RADEON_WRITE(R600_GRBM_SOFT_RESET, R600_SOFT_RESET_CP); 1799 1799 RADEON_READ(R600_GRBM_SOFT_RESET); 1800 - DRM_UDELAY(15000); 1800 + mdelay(15); 1801 1801 RADEON_WRITE(R600_GRBM_SOFT_RESET, 0); 1802 1802 1803 1803
+12 -12
drivers/gpu/drm/radeon/radeon_clocks.c
··· 633 633 tmp &= ~(R300_SCLK_FORCE_VAP); 634 634 tmp |= RADEON_SCLK_FORCE_CP; 635 635 WREG32_PLL(RADEON_SCLK_CNTL, tmp); 636 - udelay(15000); 636 + mdelay(15); 637 637 638 638 tmp = RREG32_PLL(R300_SCLK_CNTL2); 639 639 tmp &= ~(R300_SCLK_FORCE_TCL | ··· 651 651 tmp |= (RADEON_ENGIN_DYNCLK_MODE | 652 652 (0x01 << RADEON_ACTIVE_HILO_LAT_SHIFT)); 653 653 WREG32_PLL(RADEON_CLK_PWRMGT_CNTL, tmp); 654 - udelay(15000); 654 + mdelay(15); 655 655 656 656 tmp = RREG32_PLL(RADEON_CLK_PIN_CNTL); 657 657 tmp |= RADEON_SCLK_DYN_START_CNTL; 658 658 WREG32_PLL(RADEON_CLK_PIN_CNTL, tmp); 659 - udelay(15000); 659 + mdelay(15); 660 660 661 661 /* When DRI is enabled, setting DYN_STOP_LAT to zero can cause some R200 662 662 to lockup randomly, leave them as set by BIOS. ··· 696 696 tmp |= RADEON_SCLK_MORE_FORCEON; 697 697 } 698 698 WREG32_PLL(RADEON_SCLK_MORE_CNTL, tmp); 699 - udelay(15000); 699 + mdelay(15); 700 700 } 701 701 702 702 /* RV200::A11 A12, RV250::A11 A12 */ ··· 709 709 tmp |= RADEON_TCL_BYPASS_DISABLE; 710 710 WREG32_PLL(RADEON_PLL_PWRMGT_CNTL, tmp); 711 711 } 712 - udelay(15000); 712 + mdelay(15); 713 713 714 714 /*enable dynamic mode for display clocks (PIXCLK and PIX2CLK) */ 715 715 tmp = RREG32_PLL(RADEON_PIXCLKS_CNTL); ··· 722 722 RADEON_PIXCLK_TMDS_ALWAYS_ONb); 723 723 724 724 WREG32_PLL(RADEON_PIXCLKS_CNTL, tmp); 725 - udelay(15000); 725 + mdelay(15); 726 726 727 727 tmp = RREG32_PLL(RADEON_VCLK_ECP_CNTL); 728 728 tmp |= (RADEON_PIXCLK_ALWAYS_ONb | 729 729 RADEON_PIXCLK_DAC_ALWAYS_ONb); 730 730 731 731 WREG32_PLL(RADEON_VCLK_ECP_CNTL, tmp); 732 - udelay(15000); 732 + mdelay(15); 733 733 } 734 734 } else { 735 735 /* Turn everything OFF (ForceON to everything) */
··· 861 861 } 862 862 WREG32_PLL(RADEON_SCLK_CNTL, tmp); 863 863 864 - udelay(16000); 864 + mdelay(16); 865 865 866 866 if ((rdev->family == CHIP_R300) || 867 867 (rdev->family == CHIP_R350)) { ··· 870 870 R300_SCLK_FORCE_GA | 871 871 R300_SCLK_FORCE_CBA); 872 872 WREG32_PLL(R300_SCLK_CNTL2, tmp); 873 - udelay(16000); 873 + mdelay(16); 874 874 } 875 875 876 876 if (rdev->flags & RADEON_IS_IGP) { ··· 878 878 tmp &= ~(RADEON_FORCEON_MCLKA | 879 879 RADEON_FORCEON_YCLKA); 880 880 WREG32_PLL(RADEON_MCLK_CNTL, tmp); 881 - udelay(16000); 881 + mdelay(16); 882 882 } 883 883 884 884 if ((rdev->family == CHIP_RV200) || ··· 887 887 tmp = RREG32_PLL(RADEON_SCLK_MORE_CNTL); 888 888 tmp |= RADEON_SCLK_MORE_FORCEON; 889 889 WREG32_PLL(RADEON_SCLK_MORE_CNTL, tmp); 890 - udelay(16000); 890 + mdelay(16); 891 891 } 892 892 893 893 tmp = RREG32_PLL(RADEON_PIXCLKS_CNTL); ··· 900 900 RADEON_PIXCLK_TMDS_ALWAYS_ONb); 901 901 902 902 WREG32_PLL(RADEON_PIXCLKS_CNTL, tmp); 903 - udelay(16000); 903 + mdelay(16); 904 904 905 905 tmp = RREG32_PLL(RADEON_VCLK_ECP_CNTL); 906 906 tmp &= ~(RADEON_PIXCLK_ALWAYS_ONb |
+4 -4
drivers/gpu/drm/radeon/radeon_combios.c
··· 2845 2845 case 4: 2846 2846 val = RBIOS16(index); 2847 2847 index += 2; 2848 - udelay(val * 1000); 2848 + mdelay(val); 2849 2849 break; 2850 2850 case 6: 2851 2851 slave_addr = id & 0xff; ··· 3044 3044 udelay(150); 3045 3045 break; 3046 3046 case 2: 3047 - udelay(1000); 3047 + mdelay(1); 3048 3048 break; 3049 3049 case 3: 3050 3050 while (tmp--) { ··· 3075 3075 /*mclk_cntl |= 0x00001111;*//* ??? */ 3076 3076 WREG32_PLL(RADEON_MCLK_CNTL, 3077 3077 mclk_cntl); 3078 - udelay(10000); 3078 + mdelay(10); 3079 3079 #endif 3080 3080 WREG32_PLL 3081 3081 (RADEON_CLK_PWRMGT_CNTL, 3082 3082 tmp & 3083 3083 ~RADEON_CG_NO1_DEBUG_0); 3084 - udelay(10000); 3084 + mdelay(10); 3085 3085 } 3086 3086 break; 3087 3087 default:
+4
drivers/gpu/drm/radeon/radeon_i2c.c
··· 900 900 struct radeon_i2c_chan *i2c; 901 901 int ret; 902 902 903 + /* don't add the mm_i2c bus unless hw_i2c is enabled */ 904 + if (rec->mm_i2c && (radeon_hw_i2c == 0)) 905 + return NULL; 906 + 903 907 i2c = kzalloc(sizeof(struct radeon_i2c_chan), GFP_KERNEL); 904 908 if (i2c == NULL) 905 909 return NULL;
+6 -6
drivers/gpu/drm/radeon/radeon_legacy_encoders.c
··· 88 88 lvds_pll_cntl = RREG32(RADEON_LVDS_PLL_CNTL); 89 89 lvds_pll_cntl |= RADEON_LVDS_PLL_EN; 90 90 WREG32(RADEON_LVDS_PLL_CNTL, lvds_pll_cntl); 91 - udelay(1000); 91 + mdelay(1); 92 92 93 93 lvds_pll_cntl = RREG32(RADEON_LVDS_PLL_CNTL); 94 94 lvds_pll_cntl &= ~RADEON_LVDS_PLL_RESET; ··· 101 101 (backlight_level << RADEON_LVDS_BL_MOD_LEVEL_SHIFT)); 102 102 if (is_mac) 103 103 lvds_gen_cntl |= RADEON_LVDS_BL_MOD_EN; 104 - udelay(panel_pwr_delay * 1000); 104 + mdelay(panel_pwr_delay); 105 105 WREG32(RADEON_LVDS_GEN_CNTL, lvds_gen_cntl); 106 106 break; 107 107 case DRM_MODE_DPMS_STANDBY: ··· 118 118 WREG32(RADEON_LVDS_GEN_CNTL, lvds_gen_cntl); 119 119 lvds_gen_cntl &= ~(RADEON_LVDS_ON | RADEON_LVDS_BLON | RADEON_LVDS_EN | RADEON_LVDS_DIGON); 120 120 } 121 - udelay(panel_pwr_delay * 1000); 121 + mdelay(panel_pwr_delay); 122 122 WREG32(RADEON_LVDS_GEN_CNTL, lvds_gen_cntl); 123 123 WREG32_PLL(RADEON_PIXCLKS_CNTL, pixclks_cntl); 124 - udelay(panel_pwr_delay * 1000); 124 + mdelay(panel_pwr_delay); 125 125 break; 126 126 } 127 127 ··· 656 656 657 657 WREG32(RADEON_DAC_MACRO_CNTL, tmp); 658 658 659 - udelay(2000); 659 + mdelay(2); 660 660 661 661 if (RREG32(RADEON_DAC_CNTL) & RADEON_DAC_CMP_OUTPUT) 662 662 found = connector_status_connected; ··· 1499 1499 tmp = dac_cntl2 | RADEON_DAC2_DAC2_CLK_SEL | RADEON_DAC2_CMP_EN; 1500 1500 WREG32(RADEON_DAC_CNTL2, tmp); 1501 1501 1502 - udelay(10000); 1502 + mdelay(10); 1503 1503 1504 1504 if (ASIC_IS_R300(rdev)) { 1505 1505 if (RREG32(RADEON_DAC_CNTL2) & RADEON_DAC2_CMP_OUT_B)
+3 -3
drivers/gpu/drm/savage/savage_state.c
··· 988 988 * for locking on FreeBSD. 989 989 */ 990 990 if (cmdbuf->size) { 991 - kcmd_addr = kmalloc(cmdbuf->size * 8, GFP_KERNEL); 991 + kcmd_addr = kmalloc_array(cmdbuf->size, 8, GFP_KERNEL); 992 992 if (kcmd_addr == NULL) 993 993 return -ENOMEM; 994 994 ··· 1015 1015 cmdbuf->vb_addr = kvb_addr; 1016 1016 } 1017 1017 if (cmdbuf->nbox) { 1018 - kbox_addr = kmalloc(cmdbuf->nbox * sizeof(struct drm_clip_rect), 1019 - GFP_KERNEL); 1018 + kbox_addr = kmalloc_array(cmdbuf->nbox, sizeof(struct drm_clip_rect), 1019 + GFP_KERNEL); 1020 1020 if (kbox_addr == NULL) { 1021 1021 ret = -ENOMEM; 1022 1022 goto done;
+1
drivers/hwmon/acpi_power_meter.c
··· 391 391 break; 392 392 default: 393 393 BUG(); 394 + val = ""; 394 395 } 395 396 396 397 return sprintf(buf, "%s\n", val);
+8 -9
drivers/hwmon/pmbus/pmbus_core.c
··· 710 710 * If a negative value is stored in any of the referenced registers, this value 711 711 * reflects an error code which will be returned. 712 712 */ 713 - static int pmbus_get_boolean(struct pmbus_data *data, int index, int *val) 713 + static int pmbus_get_boolean(struct pmbus_data *data, int index) 714 714 { 715 715 u8 s1 = (index >> 24) & 0xff; 716 716 u8 s2 = (index >> 16) & 0xff; 717 717 u8 reg = (index >> 8) & 0xff; 718 718 u8 mask = index & 0xff; 719 - int status; 719 + int ret, status; 720 720 u8 regval; 721 721 722 722 status = data->status[reg]; ··· 725 725 726 726 regval = status & mask; 727 727 if (!s1 && !s2) 728 - *val = !!regval; 728 + ret = !!regval; 729 729 else { 730 730 long v1, v2; 731 731 struct pmbus_sensor *sensor1, *sensor2; ··· 739 739 740 740 v1 = pmbus_reg2data(data, sensor1); 741 741 v2 = pmbus_reg2data(data, sensor2); 742 - *val = !!(regval && v1 >= v2); 742 + ret = !!(regval && v1 >= v2); 743 743 } 744 - return 0; 744 + return ret; 745 745 } 746 746 747 747 static ssize_t pmbus_show_boolean(struct device *dev, ··· 750 750 struct sensor_device_attribute *attr = to_sensor_dev_attr(da); 751 751 struct pmbus_data *data = pmbus_update_device(dev); 752 752 int val; 753 - int err; 754 753 755 - err = pmbus_get_boolean(data, attr->index, &val); 756 - if (err) 757 - return err; 754 + val = pmbus_get_boolean(data, attr->index); 755 + if (val < 0) 756 + return val; 758 757 return snprintf(buf, PAGE_SIZE, "%d\n", val); 759 758 } 760 759
+8 -6
drivers/hwmon/smsc47b397.c
··· 343 343 return err; 344 344 } 345 345 346 - static int __init smsc47b397_find(unsigned short *addr) 346 + static int __init smsc47b397_find(void) 347 347 { 348 348 u8 id, rev; 349 349 char *name; 350 + unsigned short addr; 350 351 351 352 superio_enter(); 352 353 id = force_id ? force_id : superio_inb(SUPERIO_REG_DEVID); ··· 371 370 rev = superio_inb(SUPERIO_REG_DEVREV); 372 371 373 372 superio_select(SUPERIO_REG_LD8); 374 - *addr = (superio_inb(SUPERIO_REG_BASE_MSB) << 8) 373 + addr = (superio_inb(SUPERIO_REG_BASE_MSB) << 8) 375 374 | superio_inb(SUPERIO_REG_BASE_LSB); 376 375 377 376 pr_info("found SMSC %s (base address 0x%04x, revision %u)\n", 378 - name, *addr, rev); 377 + name, addr, rev); 379 378 380 379 superio_exit(); 381 - return 0; 380 + return addr; 382 381 } 383 382 384 383 static int __init smsc47b397_init(void) ··· 386 385 unsigned short address; 387 386 int ret; 388 387 389 - ret = smsc47b397_find(&address); 390 - if (ret) 388 + ret = smsc47b397_find(); 389 + if (ret < 0) 391 390 return ret; 391 + address = ret; 392 392 393 393 ret = platform_driver_register(&smsc47b397_driver); 394 394 if (ret)
+10 -9
drivers/hwmon/smsc47m1.c
··· 491 491 .attrs = smsc47m1_attributes, 492 492 }; 493 493 494 - static int __init smsc47m1_find(unsigned short *addr, 495 - struct smsc47m1_sio_data *sio_data) 494 + static int __init smsc47m1_find(struct smsc47m1_sio_data *sio_data) 496 495 { 497 496 u8 val; 497 + unsigned short addr; 498 498 499 499 superio_enter(); 500 500 val = force_id ? force_id : superio_inb(SUPERIO_REG_DEVID); ··· 546 546 } 547 547 548 548 superio_select(); 549 - *addr = (superio_inb(SUPERIO_REG_BASE) << 8) 549 + addr = (superio_inb(SUPERIO_REG_BASE) << 8) 550 550 | superio_inb(SUPERIO_REG_BASE + 1); 551 - if (*addr == 0) { 551 + if (addr == 0) { 552 552 pr_info("Device address not set, will not use\n"); 553 553 superio_exit(); 554 554 return -ENODEV; ··· 565 565 } 566 566 567 567 superio_exit(); 568 - return 0; 568 + return addr; 569 569 } 570 570 571 571 /* Restore device to its initial state */ ··· 938 938 unsigned short address; 939 939 struct smsc47m1_sio_data sio_data; 940 940 941 - if (smsc47m1_find(&address, &sio_data)) 942 - return -ENODEV; 941 + err = smsc47m1_find(&sio_data); 942 + if (err < 0) 943 + return err; 944 + address = err; 943 945 944 946 /* Sets global pdev as a side effect */ 945 947 err = smsc47m1_device_add(address, &sio_data); 946 948 if (err) 947 - goto exit; 949 + return err; 948 950 949 951 err = platform_driver_probe(&smsc47m1_driver, smsc47m1_probe); 950 952 if (err) ··· 957 955 exit_device: 958 956 platform_device_unregister(pdev); 959 957 smsc47m1_restore(&sio_data); 960 - exit: 961 958 return err; 962 959 } 963 960
-1
drivers/i2c/busses/i2c-designware-pcidrv.c
··· 182 182 pci_restore_state(pdev); 183 183 184 184 i2c_dw_init(i2c); 185 - i2c_dw_enable(i2c); 186 185 return 0; 187 186 } 188 187
+5 -4
drivers/infiniband/core/sysfs.c
··· 179 179 { 180 180 struct ib_port_attr attr; 181 181 char *speed = ""; 182 - int rate = -1; /* in deci-Gb/sec */ 182 + int rate; /* in deci-Gb/sec */ 183 183 ssize_t ret; 184 184 185 185 ret = ib_query_port(p->ibdev, p->port_num, &attr); ··· 187 187 return ret; 188 188 189 189 switch (attr.active_speed) { 190 - case IB_SPEED_SDR: 191 - rate = 25; 192 - break; 193 190 case IB_SPEED_DDR: 194 191 speed = " DDR"; 195 192 rate = 50; ··· 206 209 case IB_SPEED_EDR: 207 210 speed = " EDR"; 208 211 rate = 250; 212 + break; 213 + case IB_SPEED_SDR: 214 + default: /* default to SDR for invalid rates */ 215 + rate = 25; 209 216 break; 210 217 } 211 218
+5
drivers/infiniband/hw/mlx4/main.c
··· 253 253 if (out_mad->data[15] & 0x1) 254 254 props->active_speed = IB_SPEED_FDR10; 255 255 } 256 + 257 + /* Avoid wrong speed value returned by FW if the IB link is down. */ 258 + if (props->state == IB_PORT_DOWN) 259 + props->active_speed = IB_SPEED_SDR; 260 + 256 261 out: 257 262 kfree(in_mad); 258 263 kfree(out_mad);
+1
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 3232 3232 srq_attr.attr.max_wr = sdev->srq_size; 3233 3233 srq_attr.attr.max_sge = 1; 3234 3234 srq_attr.attr.srq_limit = 0; 3235 + srq_attr.srq_type = IB_SRQT_BASIC; 3235 3236 3236 3237 sdev->srq = ib_create_srq(sdev->pd, &srq_attr); 3237 3238 if (IS_ERR(sdev->srq))
+2 -1
drivers/input/misc/da9052_onkey.c
··· 95 95 input_dev = input_allocate_device(); 96 96 if (!onkey || !input_dev) { 97 97 dev_err(&pdev->dev, "Failed to allocate memory\n"); 98 - return -ENOMEM; 98 + error = -ENOMEM; 99 + goto err_free_mem; 99 100 } 100 101 101 102 onkey->input = input_dev;
+8 -2
drivers/input/mouse/elantech.c
··· 486 486 unsigned char *packet = psmouse->packet; 487 487 488 488 input_report_key(dev, BTN_LEFT, packet[0] & 0x01); 489 - input_report_key(dev, BTN_RIGHT, packet[0] & 0x02); 490 489 input_mt_report_pointer_emulation(dev, true); 491 490 input_sync(dev); 492 491 } ··· 966 967 if (elantech_set_range(psmouse, &x_min, &y_min, &x_max, &y_max, &width)) 967 968 return -1; 968 969 970 + __set_bit(INPUT_PROP_POINTER, dev->propbit); 969 971 __set_bit(EV_KEY, dev->evbit); 970 972 __set_bit(EV_ABS, dev->evbit); 971 973 __clear_bit(EV_REL, dev->evbit); ··· 1017 1017 */ 1018 1018 psmouse_warn(psmouse, "couldn't query resolution data.\n"); 1019 1019 } 1020 - 1020 + /* v4 is clickpad, with only one button. */ 1021 + __set_bit(INPUT_PROP_BUTTONPAD, dev->propbit); 1022 + __clear_bit(BTN_RIGHT, dev->keybit); 1021 1023 __set_bit(BTN_TOOL_QUADTAP, dev->keybit); 1022 1024 /* For X to recognize me as touchpad. */ 1023 1025 input_set_abs_params(dev, ABS_X, x_min, x_max, 0, 0); ··· 1247 1245 */ 1248 1246 static int elantech_reconnect(struct psmouse *psmouse) 1249 1247 { 1248 + psmouse_reset(psmouse); 1249 + 1250 1250 if (elantech_detect(psmouse, 0)) 1251 1251 return -1; 1252 1252 ··· 1327 1323 psmouse->private = etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL); 1328 1324 if (!etd) 1329 1325 return -ENOMEM; 1326 + 1327 + psmouse_reset(psmouse); 1330 1328 1331 1329 etd->parity[0] = 1; 1332 1330 for (i = 1; i < 256; i++)
+1 -1
drivers/input/mouse/gpio_mouse.c
··· 12 12 #include <linux/module.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/input-polldev.h> 15 + #include <linux/gpio.h> 15 16 #include <linux/gpio_mouse.h> 16 17 17 - #include <asm/gpio.h> 18 18 19 19 /* 20 20 * Timer function which is run every scan_ms ms when the device is opened.
+8
drivers/input/mouse/sentelic.c
··· 741 741 } 742 742 } else { 743 743 /* SFAC packet */ 744 + if ((packet[0] & (FSP_PB0_LBTN|FSP_PB0_PHY_BTN)) == 745 + FSP_PB0_LBTN) { 746 + /* On-pad click in SFAC mode should be handled 747 + * by userspace. On-pad clicks in MFMC mode 748 + * are real clickpad clicks, and not ignored. 749 + */ 750 + packet[0] &= ~FSP_PB0_LBTN; 751 + } 744 752 745 753 /* no multi-finger information */ 746 754 ad->last_mt_fgr = 0;
+8 -6
drivers/input/mouse/trackpoint.c
··· 304 304 return 0; 305 305 306 306 if (trackpoint_read(&psmouse->ps2dev, TP_EXT_BTN, &button_info)) { 307 - printk(KERN_WARNING "trackpoint.c: failed to get extended button data\n"); 307 + psmouse_warn(psmouse, "failed to get extended button data\n"); 308 308 button_info = 0; 309 309 } 310 310 ··· 326 326 327 327 error = sysfs_create_group(&ps2dev->serio->dev.kobj, &trackpoint_attr_group); 328 328 if (error) { 329 - printk(KERN_ERR 330 - "trackpoint.c: failed to create sysfs attributes, error: %d\n", 331 - error); 329 + psmouse_err(psmouse, 330 + "failed to create sysfs attributes, error: %d\n", 331 + error); 332 332 kfree(psmouse->private); 333 333 psmouse->private = NULL; 334 334 return -1; 335 335 } 336 336 337 - printk(KERN_INFO "IBM TrackPoint firmware: 0x%02x, buttons: %d/%d\n", 338 - firmware_id, (button_info & 0xf0) >> 4, button_info & 0x0f); 337 + psmouse_info(psmouse, 338 + "IBM TrackPoint firmware: 0x%02x, buttons: %d/%d\n", 339 + firmware_id, 340 + (button_info & 0xf0) >> 4, button_info & 0x0f); 339 341 340 342 return 0; 341 343 }
+1 -3
drivers/input/touchscreen/tps6507x-ts.c
··· 1 1 /* 2 - * drivers/input/touchscreen/tps6507x_ts.c 3 - * 4 2 * Touchscreen driver for the tps6507x chip. 5 3 * 6 4 * Copyright (c) 2009 RidgeRun (todd.fischer@ridgerun.com) ··· 374 376 MODULE_AUTHOR("Todd Fischer <todd.fischer@ridgerun.com>"); 375 377 MODULE_DESCRIPTION("TPS6507x - TouchScreen driver"); 376 378 MODULE_LICENSE("GPL v2"); 377 - MODULE_ALIAS("platform:tps6507x-tsc"); 379 + MODULE_ALIAS("platform:tps6507x-ts");
+1 -1
drivers/isdn/gigaset/interface.c
··· 176 176 struct cardstate *cs = tty->driver_data; 177 177 178 178 if (!cs) { /* happens if we didn't find cs in open */ 179 - printk(KERN_DEBUG "%s: no cardstate\n", __func__); 179 + gig_dbg(DEBUG_IF, "%s: no cardstate", __func__); 180 180 return; 181 181 } 182 182
+2 -3
drivers/md/bitmap.c
··· 539 539 bitmap->events_cleared = bitmap->mddev->events; 540 540 sb->events_cleared = cpu_to_le64(bitmap->mddev->events); 541 541 542 - bitmap->flags |= BITMAP_HOSTENDIAN; 543 - sb->version = cpu_to_le32(BITMAP_MAJOR_HOSTENDIAN); 544 - 545 542 kunmap_atomic(sb); 546 543 547 544 return 0; ··· 1785 1788 * re-add of a missing device */ 1786 1789 start = mddev->recovery_cp; 1787 1790 1791 + mutex_lock(&mddev->bitmap_info.mutex); 1788 1792 err = bitmap_init_from_disk(bitmap, start); 1793 + mutex_unlock(&mddev->bitmap_info.mutex); 1789 1794 1790 1795 if (err) 1791 1796 goto out;
+2 -1
drivers/md/raid1.c
··· 1712 1712 struct r1conf *conf = mddev->private; 1713 1713 int primary; 1714 1714 int i; 1715 + int vcnt; 1715 1716 1716 1717 for (primary = 0; primary < conf->raid_disks * 2; primary++) 1717 1718 if (r1_bio->bios[primary]->bi_end_io == end_sync_read && ··· 1722 1721 break; 1723 1722 } 1724 1723 r1_bio->read_disk = primary; 1724 + vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9); 1725 1725 for (i = 0; i < conf->raid_disks * 2; i++) { 1726 1726 int j; 1727 - int vcnt = r1_bio->sectors >> (PAGE_SHIFT- 9); 1728 1727 struct bio *pbio = r1_bio->bios[primary]; 1729 1728 struct bio *sbio = r1_bio->bios[i]; 1730 1729 int size;
+2 -2
drivers/md/raid10.c
··· 1788 1788 struct r10conf *conf = mddev->private; 1789 1789 int i, first; 1790 1790 struct bio *tbio, *fbio; 1791 + int vcnt; 1791 1792 1792 1793 atomic_set(&r10_bio->remaining, 1); 1793 1794 ··· 1803 1802 first = i; 1804 1803 fbio = r10_bio->devs[i].bio; 1805 1804 1805 + vcnt = (r10_bio->sectors + (PAGE_SIZE >> 9) - 1) >> (PAGE_SHIFT - 9); 1806 1806 /* now find blocks with errors */ 1807 1807 for (i=0 ; i < conf->copies ; i++) { 1808 1808 int j, d; 1809 - int vcnt = r10_bio->sectors >> (PAGE_SHIFT-9); 1810 1809 1811 1810 tbio = r10_bio->devs[i].bio; 1812 1811 ··· 1872 1871 */ 1873 1872 for (i = 0; i < conf->copies; i++) { 1874 1873 int j, d; 1875 - int vcnt = r10_bio->sectors >> (PAGE_SHIFT-9); 1876 1874 1877 1875 tbio = r10_bio->devs[i].repl_bio; 1878 1876 if (!tbio || !tbio->bi_end_io)
+11 -1
drivers/media/dvb/dvb-core/dvb_frontend.c
··· 143 143 static void dvb_frontend_wakeup(struct dvb_frontend *fe); 144 144 static int dtv_get_frontend(struct dvb_frontend *fe, 145 145 struct dvb_frontend_parameters *p_out); 146 + static int dtv_property_legacy_params_sync(struct dvb_frontend *fe, 147 + struct dvb_frontend_parameters *p); 146 148 147 149 static bool has_get_frontend(struct dvb_frontend *fe) 148 150 { 149 - return fe->ops.get_frontend; 151 + return fe->ops.get_frontend != NULL; 150 152 } 151 153 152 154 /* ··· 699 697 fepriv->algo_status |= DVBFE_ALGO_SEARCH_AGAIN; 700 698 fepriv->delay = HZ / 2; 701 699 } 700 + dtv_property_legacy_params_sync(fe, &fepriv->parameters_out); 702 701 fe->ops.read_status(fe, &s); 703 702 if (s != fepriv->status) { 704 703 dvb_frontend_add_event(fe, s); /* update event list */ ··· 1834 1831 1835 1832 if (dvb_frontend_check_parameters(fe) < 0) 1836 1833 return -EINVAL; 1834 + 1835 + /* 1836 + * Initialize output parameters to match the values given by 1837 + * the user. FE_SET_FRONTEND triggers an initial frontend event 1838 + * with status = 0, which copies output parameters to userspace. 1839 + */ 1840 + dtv_property_legacy_params_sync(fe, &fepriv->parameters_out); 1837 1841 1838 1842 /* 1839 1843 * Be sure that the bandwidth will be filled for all
+40 -14
drivers/media/dvb/dvb-usb/it913x.c
··· 238 238 239 239 static u32 it913x_query(struct usb_device *udev, u8 pro) 240 240 { 241 - int ret; 241 + int ret, i; 242 242 u8 data[4]; 243 - ret = it913x_io(udev, READ_LONG, pro, CMD_DEMOD_READ, 244 - 0x1222, 0, &data[0], 3); 243 + u8 ver; 245 244 246 - it913x_config.chip_ver = data[0]; 245 + for (i = 0; i < 5; i++) { 246 + ret = it913x_io(udev, READ_LONG, pro, CMD_DEMOD_READ, 247 + 0x1222, 0, &data[0], 3); 248 + ver = data[0]; 249 + if (ver > 0 && ver < 3) 250 + break; 251 + msleep(100); 252 + } 253 + 254 + if (ver < 1 || ver > 2) { 255 + info("Failed to identify chip version applying 1"); 256 + it913x_config.chip_ver = 0x1; 257 + it913x_config.chip_type = 0x9135; 258 + return 0; 259 + } 260 + 261 + it913x_config.chip_ver = ver; 247 262 it913x_config.chip_type = (u16)(data[2] << 8) + data[1]; 248 263 249 264 info("Chip Version=%02x Chip Type=%04x", it913x_config.chip_ver, ··· 675 660 if ((packet_size > min_pkt) || (i == fw->size)) { 676 661 fw_data = (u8 *)(fw->data + pos); 677 662 pos += packet_size; 678 - if (packet_size > 0) 679 - ret |= it913x_io(udev, WRITE_DATA, 663 + if (packet_size > 0) { 664 + ret = it913x_io(udev, WRITE_DATA, 680 665 DEV_0, CMD_SCATTER_WRITE, 0, 681 666 0, fw_data, packet_size); 667 + if (ret < 0) 668 + break; 669 + } 682 670 udelay(1000); 683 671 } 684 672 } 685 673 i++; 686 674 } 687 675 688 - ret |= it913x_io(udev, WRITE_CMD, DEV_0, CMD_BOOT, 0, 0, NULL, 0); 689 - 690 - msleep(100); 691 - 692 676 if (ret < 0) 693 - info("FRM Firmware Download Failed (%04x)" , ret); 677 + info("FRM Firmware Download Failed (%d)" , ret); 694 678 else 695 679 info("FRM Firmware Download Completed - Resetting Device"); 696 680 697 - ret |= it913x_return_status(udev); 681 + msleep(30); 682 + 683 + ret = it913x_io(udev, WRITE_CMD, DEV_0, CMD_BOOT, 0, 0, NULL, 0); 684 + if (ret < 0) 685 + info("FRM Device not responding to reboot"); 686 + 687 + ret = it913x_return_status(udev); 688 + if (ret == 0) { 689 + info("FRM Failed to reboot device"); 690 + return -ENODEV; 691 + } 698 692 699 693 msleep(30); 700 694 701 - ret |= it913x_wr_reg(udev, DEV_0, I2C_CLK, I2C_CLK_400); 695 + ret = it913x_wr_reg(udev, DEV_0, I2C_CLK, I2C_CLK_400); 696 + 697 + msleep(30); 702 698 703 699 /* Tuner function */ 704 700 if (it913x_config.dual_mode) ··· 927 901 928 902 MODULE_AUTHOR("Malcolm Priestley <tvboxspy@gmail.com>"); 929 903 MODULE_DESCRIPTION("it913x USB 2 Driver"); 930 - MODULE_VERSION("1.27"); 904 + MODULE_VERSION("1.28"); 931 905 MODULE_LICENSE("GPL");
+2 -2
drivers/media/video/ivtv/ivtv-ioctl.c
··· 1763 1763 IVTV_DEBUG_IOCTL("AUDIO_CHANNEL_SELECT\n"); 1764 1764 if (iarg > AUDIO_STEREO_SWAPPED) 1765 1765 return -EINVAL; 1766 - return v4l2_ctrl_s_ctrl(itv->ctrl_audio_playback, iarg); 1766 + return v4l2_ctrl_s_ctrl(itv->ctrl_audio_playback, iarg + 1); 1767 1767 1768 1768 case AUDIO_BILINGUAL_CHANNEL_SELECT: 1769 1769 IVTV_DEBUG_IOCTL("AUDIO_BILINGUAL_CHANNEL_SELECT\n"); 1770 1770 if (iarg > AUDIO_STEREO_SWAPPED) 1771 1771 return -EINVAL; 1772 - return v4l2_ctrl_s_ctrl(itv->ctrl_audio_multilingual_playback, iarg); 1772 + return v4l2_ctrl_s_ctrl(itv->ctrl_audio_multilingual_playback, iarg + 1); 1773 1773 1774 1774 default: 1775 1775 return -EINVAL;
+33 -19
drivers/media/video/uvc/uvc_video.c
··· 468 468 spin_unlock_irqrestore(&stream->clock.lock, flags); 469 469 } 470 470 471 + static void uvc_video_clock_reset(struct uvc_streaming *stream) 472 + { 473 + struct uvc_clock *clock = &stream->clock; 474 + 475 + clock->head = 0; 476 + clock->count = 0; 477 + clock->last_sof = -1; 478 + clock->sof_offset = -1; 479 + } 480 + 471 481 static int uvc_video_clock_init(struct uvc_streaming *stream) 472 482 { 473 483 struct uvc_clock *clock = &stream->clock; 474 484 475 485 spin_lock_init(&clock->lock); 476 - clock->head = 0; 477 - clock->count = 0; 478 486 clock->size = 32; 479 - clock->last_sof = -1; 480 - clock->sof_offset = -1; 481 487 482 488 clock->samples = kmalloc(clock->size * sizeof(*clock->samples), 483 489 GFP_KERNEL); 484 490 if (clock->samples == NULL) 485 491 return -ENOMEM; 492 + 493 + uvc_video_clock_reset(stream); 486 494 487 495 return 0; 488 496 } ··· 1432 1424 1433 1425 if (free_buffers) 1434 1426 uvc_free_urb_buffers(stream); 1435 - 1436 - uvc_video_clock_cleanup(stream); 1437 1427 } 1438 1428 1439 1429 /* ··· 1561 1555 1562 1556 uvc_video_stats_start(stream); 1563 1557 1564 - ret = uvc_video_clock_init(stream); 1565 - if (ret < 0) 1566 - return ret; 1567 - 1568 1558 if (intf->num_altsetting > 1) { 1569 1559 struct usb_host_endpoint *best_ep = NULL; 1570 1560 unsigned int best_psize = 3 * 1024; ··· 1684 1682 usb_set_interface(stream->dev->udev, stream->intfnum, 0); 1685 1683 1686 1684 stream->frozen = 0; 1685 + 1686 + uvc_video_clock_reset(stream); 1687 1687 1688 1688 ret = uvc_commit_video(stream, &stream->ctrl); 1689 1689 if (ret < 0) { ··· 1823 1819 uvc_uninit_video(stream, 1); 1824 1820 usb_set_interface(stream->dev->udev, stream->intfnum, 0); 1825 1821 uvc_queue_enable(&stream->queue, 0); 1822 + uvc_video_clock_cleanup(stream); 1826 1823 return 0; 1827 1824 } 1828 1825 1829 - ret = uvc_queue_enable(&stream->queue, 1); 1826 + ret = uvc_video_clock_init(stream); 1830 1827 if (ret < 0) 1831 1828 return ret; 1832 1829 1830 + ret = uvc_queue_enable(&stream->queue, 1); 1831 + if (ret < 0) 1832 + goto error_queue; 1833 + 1833 1834 /* Commit the streaming parameters. */ 1834 1835 ret = uvc_commit_video(stream, &stream->ctrl); 1835 - if (ret < 0) { 1836 - uvc_queue_enable(&stream->queue, 0); 1837 - return ret; 1838 - } 1836 + if (ret < 0) 1837 + goto error_commit; 1839 1838 1840 1839 ret = uvc_init_video(stream, GFP_KERNEL); 1841 - if (ret < 0) { 1842 - usb_set_interface(stream->dev->udev, stream->intfnum, 0); 1843 - uvc_queue_enable(&stream->queue, 0); 1844 - } 1840 + if (ret < 0) 1841 + goto error_video; 1842 + 1843 + return 0; 1844 + 1845 + error_video: 1846 + usb_set_interface(stream->dev->udev, stream->intfnum, 0); 1847 + error_commit: 1848 + uvc_queue_enable(&stream->queue, 0); 1849 + error_queue: 1850 + uvc_video_clock_cleanup(stream); 1845 1851 1846 1852 return ret; 1847 1853 }
+1
drivers/mfd/db8500-prcmu.c
··· 2788 2788 .constraints = { 2789 2789 .name = "db8500-vape", 2790 2790 .valid_ops_mask = REGULATOR_CHANGE_STATUS, 2791 + .always_on = true, 2791 2792 }, 2792 2793 .consumer_supplies = db8500_vape_consumers, 2793 2794 .num_consumer_supplies = ARRAY_SIZE(db8500_vape_consumers),
+10 -10
drivers/mtd/mtdchar.c
··· 106 106 } 107 107 108 108 if (mtd->type == MTD_ABSENT) { 109 - put_mtd_device(mtd); 110 109 ret = -ENODEV; 111 - goto out; 110 + goto out1; 112 111 } 113 112 114 113 mtd_ino = iget_locked(mnt->mnt_sb, devnum); 115 114 if (!mtd_ino) { 116 - put_mtd_device(mtd); 117 115 ret = -ENOMEM; 118 - goto out; 116 + goto out1; 119 117 } 120 118 if (mtd_ino->i_state & I_NEW) { 121 119 mtd_ino->i_private = mtd; ··· 125 127 126 128 /* You can't open it RW if it's not a writeable device */ 127 129 if ((file->f_mode & FMODE_WRITE) && !(mtd->flags & MTD_WRITEABLE)) { 128 - iput(mtd_ino); 129 - put_mtd_device(mtd); 130 130 ret = -EACCES; 131 - goto out; 131 + goto out2; 132 132 } 133 133 134 134 mfi = kzalloc(sizeof(*mfi), GFP_KERNEL); 135 135 if (!mfi) { 136 - iput(mtd_ino); 137 - put_mtd_device(mtd); 138 136 ret = -ENOMEM; 139 - goto out; 137 + goto out2; 140 138 } 141 139 mfi->ino = mtd_ino; 142 140 mfi->mtd = mtd; 143 141 file->private_data = mfi; 142 + mutex_unlock(&mtd_mutex); 143 + return 0; 144 144 145 + out2: 146 + iput(mtd_ino); 147 + out1: 148 + put_mtd_device(mtd); 145 149 out: 146 150 mutex_unlock(&mtd_mutex); 147 151 simple_release_fs(&mnt, &count);
+3 -5
drivers/net/wireless/ath/ath9k/main.c
··· 118 118 if (--sc->ps_usecount != 0) 119 119 goto unlock; 120 120 121 - if (sc->ps_flags & PS_WAIT_FOR_TX_ACK) 122 - goto unlock; 123 - 124 - if (sc->ps_idle) 121 + if (sc->ps_idle && (sc->ps_flags & PS_WAIT_FOR_TX_ACK)) 125 122 mode = ATH9K_PM_FULL_SLEEP; 126 123 else if (sc->ps_enabled && 127 124 !(sc->ps_flags & (PS_WAIT_FOR_BEACON | 128 125 PS_WAIT_FOR_CAB | 129 - PS_WAIT_FOR_PSPOLL_DATA))) 126 + PS_WAIT_FOR_PSPOLL_DATA | 127 + PS_WAIT_FOR_TX_ACK))) 130 128 mode = ATH9K_PM_NETWORK_SLEEP; 131 129 else 132 130 goto unlock;
+1 -5
drivers/net/wireless/rt2x00/rt2x00dev.c
··· 1062 1062 1063 1063 set_bit(DEVICE_STATE_INITIALIZED, &rt2x00dev->flags); 1064 1064 1065 - /* 1066 - * Register the extra components. 1067 - */ 1068 - rt2x00rfkill_register(rt2x00dev); 1069 - 1070 1065 return 0; 1071 1066 } 1072 1067 ··· 1205 1210 rt2x00link_register(rt2x00dev); 1206 1211 rt2x00leds_register(rt2x00dev); 1207 1212 rt2x00debug_register(rt2x00dev); 1213 + rt2x00rfkill_register(rt2x00dev); 1208 1214 1209 1215 return 0; 1210 1216
+4 -1
drivers/net/wireless/rtlwifi/base.c
··· 838 838 __le16 fc = hdr->frame_control; 839 839 840 840 txrate = ieee80211_get_tx_rate(hw, info); 841 - tcb_desc->hw_rate = txrate->hw_value; 841 + if (txrate) 842 + tcb_desc->hw_rate = txrate->hw_value; 843 + else 844 + tcb_desc->hw_rate = 0; 842 845 843 846 if (ieee80211_is_data(fc)) { 844 847 /*
+6 -1
drivers/net/wireless/rtlwifi/pci.c
··· 912 912 memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc)); 913 913 ring = &rtlpci->tx_ring[BEACON_QUEUE]; 914 914 pskb = __skb_dequeue(&ring->queue); 915 - if (pskb) 915 + if (pskb) { 916 + struct rtl_tx_desc *entry = &ring->desc[ring->idx]; 917 + pci_unmap_single(rtlpci->pdev, rtlpriv->cfg->ops->get_desc( 918 + (u8 *) entry, true, HW_DESC_TXBUFF_ADDR), 919 + pskb->len, PCI_DMA_TODEVICE); 916 920 kfree_skb(pskb); 921 + } 917 922 918 923 /*NB: the beacon data buffer must be 32-bit aligned. */ 919 924 pskb = ieee80211_beacon_get(hw, mac->vif);
-6
drivers/net/wireless/rtlwifi/rtl8192de/sw.c
··· 91 91 u8 tid; 92 92 struct rtl_priv *rtlpriv = rtl_priv(hw); 93 93 struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 94 - static int header_print; 95 94 96 95 rtlpriv->dm.dm_initialgain_enable = true; 97 96 rtlpriv->dm.dm_flag = 0; ··· 170 171 for (tid = 0; tid < 8; tid++) 171 172 skb_queue_head_init(&rtlpriv->mac80211.skb_waitq[tid]); 172 173 173 - /* Only load firmware for first MAC */ 174 - if (header_print) 175 - return 0; 176 - 177 174 /* for firmware buf */ 178 175 rtlpriv->rtlhal.pfirmware = vzalloc(0x8000); 179 176 if (!rtlpriv->rtlhal.pfirmware) { ··· 181 186 rtlpriv->max_fw_size = 0x8000; 182 187 pr_info("Driver for Realtek RTL8192DE WLAN interface\n"); 183 188 pr_info("Loading firmware file %s\n", rtlpriv->cfg->fw_name); 184 - header_print++; 185 189 186 190 /* request fw */ 187 191 err = request_firmware_nowait(THIS_MODULE, 1, rtlpriv->cfg->fw_name,
+16 -18
drivers/net/wireless/rtlwifi/usb.c
··· 124 124 return status; 125 125 } 126 126 127 - static u32 _usb_read_sync(struct usb_device *udev, u32 addr, u16 len) 127 + static u32 _usb_read_sync(struct rtl_priv *rtlpriv, u32 addr, u16 len) 128 128 { 129 + struct device *dev = rtlpriv->io.dev; 130 + struct usb_device *udev = to_usb_device(dev); 129 131 u8 request; 130 132 u16 wvalue; 131 133 u16 index; 132 - u32 *data; 133 - u32 ret; 134 + __le32 *data = &rtlpriv->usb_data[rtlpriv->usb_data_index]; 134 135 135 - data = kmalloc(sizeof(u32), GFP_KERNEL); 136 - if (!data) 137 - return -ENOMEM; 138 136 request = REALTEK_USB_VENQT_CMD_REQ; 139 137 index = REALTEK_USB_VENQT_CMD_IDX; /* n/a */ 140 138 141 139 wvalue = (u16)addr; 142 140 _usbctrl_vendorreq_sync_read(udev, request, wvalue, index, data, len); 143 - ret = le32_to_cpu(*data); 144 - kfree(data); 145 - return ret; 141 + if (++rtlpriv->usb_data_index >= RTL_USB_MAX_RX_COUNT) 142 + rtlpriv->usb_data_index = 0; 143 + return le32_to_cpu(*data); 146 144 } 147 145 148 146 static u8 _usb_read8_sync(struct rtl_priv *rtlpriv, u32 addr) 149 147 { 150 - struct device *dev = rtlpriv->io.dev; 151 - 152 - return (u8)_usb_read_sync(to_usb_device(dev), addr, 1); 148 + return (u8)_usb_read_sync(rtlpriv, addr, 1); 153 149 } 154 150 155 151 static u16 _usb_read16_sync(struct rtl_priv *rtlpriv, u32 addr) 156 152 { 157 - struct device *dev = rtlpriv->io.dev; 158 - 159 - return (u16)_usb_read_sync(to_usb_device(dev), addr, 2); 153 + return (u16)_usb_read_sync(rtlpriv, addr, 2); 160 154 } 161 155 162 156 static u32 _usb_read32_sync(struct rtl_priv *rtlpriv, u32 addr) 163 157 { 164 - struct device *dev = rtlpriv->io.dev; 165 - 166 - return _usb_read_sync(to_usb_device(dev), addr, 4); 158 + return _usb_read_sync(rtlpriv, addr, 4); 167 159 } 168 160 169 161 static void _usb_write_async(struct usb_device *udev, u32 addr, u32 val, ··· 947 955 return -ENOMEM; 948 956 } 949 957 rtlpriv = hw->priv; 958 + rtlpriv->usb_data = kzalloc(RTL_USB_MAX_RX_COUNT * sizeof(u32), 959 + GFP_KERNEL); 960 + if (!rtlpriv->usb_data) 961 + return -ENOMEM; 962 + rtlpriv->usb_data_index = 0; 950 963 init_completion(&rtlpriv->firmware_loading_complete); 951 964 SET_IEEE80211_DEV(hw, &intf->dev); 952 965 udev = interface_to_usbdev(intf); ··· 1022 1025 /* rtl_deinit_rfkill(hw); */ 1023 1026 rtl_usb_deinit(hw); 1024 1027 rtl_deinit_core(hw); 1028 + kfree(rtlpriv->usb_data); 1025 1029 rtlpriv->cfg->ops->deinit_sw_leds(hw); 1026 1030 rtlpriv->cfg->ops->deinit_sw_vars(hw); 1027 1031 _rtl_usb_io_handler_release(hw);
+5 -1
drivers/net/wireless/rtlwifi/wifi.h
··· 67 67 #define QOS_QUEUE_NUM 4 68 68 #define RTL_MAC80211_NUM_QUEUE 5 69 69 #define REALTEK_USB_VENQT_MAX_BUF_SIZE 254 70 - 70 + #define RTL_USB_MAX_RX_COUNT 100 71 71 #define QBSS_LOAD_SIZE 5 72 72 #define MAX_WMMELE_LENGTH 64 73 73 ··· 1628 1628 and was used to indicate status of 1629 1629 interface or hardware */ 1630 1630 unsigned long status; 1631 + 1632 + /* data buffer pointer for USB reads */ 1633 + __le32 *usb_data; 1634 + int usb_data_index; 1631 1635 1632 1636 /*This must be the last item so 1633 1637 that it points to the data allocated
+1 -1
drivers/of/gpio.c
··· 140 140 if (WARN_ON(gpiospec->args_count < gc->of_gpio_n_cells)) 141 141 return -EINVAL; 142 142 143 - if (gpiospec->args[0] > gc->ngpio) 143 + if (gpiospec->args[0] >= gc->ngpio) 144 144 return -EINVAL; 145 145 146 146 if (flags)
+39 -18
drivers/pci/pci.c
··· 967 967 return 0; 968 968 } 969 969 970 + static void pci_restore_config_dword(struct pci_dev *pdev, int offset, 971 + u32 saved_val, int retry) 972 + { 973 + u32 val; 974 + 975 + pci_read_config_dword(pdev, offset, &val); 976 + if (val == saved_val) 977 + return; 978 + 979 + for (;;) { 980 + dev_dbg(&pdev->dev, "restoring config space at offset " 981 + "%#x (was %#x, writing %#x)\n", offset, val, saved_val); 982 + pci_write_config_dword(pdev, offset, saved_val); 983 + if (retry-- <= 0) 984 + return; 985 + 986 + pci_read_config_dword(pdev, offset, &val); 987 + if (val == saved_val) 988 + return; 989 + 990 + mdelay(1); 991 + } 992 + } 993 + 994 + static void pci_restore_config_space(struct pci_dev *pdev, int start, int end, 995 + int retry) 996 + { 997 + int index; 998 + 999 + for (index = end; index >= start; index--) 1000 + pci_restore_config_dword(pdev, 4 * index, 1001 + pdev->saved_config_space[index], 1002 + retry); 1003 + } 1004 + 970 1005 /** 971 1006 * pci_restore_state - Restore the saved state of a PCI device 972 1007 * @dev: - PCI device that we're dealing with 973 1008 */ 974 1009 void pci_restore_state(struct pci_dev *dev) 975 1010 { 976 - int i; 977 - u32 val; 978 - int tries; 979 - 980 1011 if (!dev->state_saved) 981 1012 return; 982 1013 ··· 1015 984 pci_restore_pcie_state(dev); 1016 985 pci_restore_ats_state(dev); 1017 986 987 + pci_restore_config_space(dev, 10, 15, 0); 1018 988 /* 1019 989 * The Base Address register should be programmed before the command 1020 990 * register(s) 1021 991 */ 1022 - for (i = 15; i >= 0; i--) { 1023 - pci_read_config_dword(dev, i * 4, &val); 1024 - tries = 10; 1025 - while (tries && val != dev->saved_config_space[i]) { 1026 - dev_dbg(&dev->dev, "restoring config " 1027 - "space at offset %#x (was %#x, writing %#x)\n", 1028 - i, val, (int)dev->saved_config_space[i]); 1029 - pci_write_config_dword(dev,i * 4, 1030 - dev->saved_config_space[i]); 1031 - pci_read_config_dword(dev, i * 4, &val); 1032 - mdelay(10); 1033 - tries--; 1034 - } 1035 - } 992 + pci_restore_config_space(dev, 4, 9, 10); 993 + pci_restore_config_space(dev, 0, 3, 0); 994 + 1036 995 pci_restore_pcix_state(dev); 1037 996 pci_restore_msi_state(dev); 1038 997 pci_restore_iov_state(dev);
+3 -3
drivers/regulator/anatop-regulator.c
··· 214 214 { /* end */ } 215 215 }; 216 216 217 - static struct platform_driver anatop_regulator = { 217 + static struct platform_driver anatop_regulator_driver = { 218 218 .driver = { 219 219 .name = "anatop_regulator", 220 220 .owner = THIS_MODULE, ··· 226 226 227 227 static int __init anatop_regulator_init(void) 228 228 { 229 - return platform_driver_register(&anatop_regulator); 229 + return platform_driver_register(&anatop_regulator_driver); 230 230 } 231 231 postcore_initcall(anatop_regulator_init); 232 232 233 233 static void __exit anatop_regulator_exit(void) 234 234 { 235 - platform_driver_unregister(&anatop_regulator); 235 + platform_driver_unregister(&anatop_regulator_driver); 236 236 } 237 237 module_exit(anatop_regulator_exit); 238 238
-1
drivers/rtc/rtc-efi.c
··· 213 213 .name = "rtc-efi", 214 214 .owner = THIS_MODULE, 215 215 }, 216 - .probe = efi_rtc_probe, 217 216 .remove = __exit_p(efi_rtc_remove), 218 217 }; 219 218
+1 -2
drivers/rtc/rtc-pl031.c
··· 339 339 dev_dbg(&adev->dev, "revision = 0x%01x\n", ldata->hw_revision); 340 340 341 341 /* Enable the clockwatch on ST Variants */ 342 - if ((ldata->hw_designer == AMBA_VENDOR_ST) && 343 - (ldata->hw_revision > 1)) 342 + if (ldata->hw_designer == AMBA_VENDOR_ST) 344 343 writel(readl(ldata->base + RTC_CR) | RTC_CR_CWEN, 345 344 ldata->base + RTC_CR); 346 345
+22
drivers/rtc/rtc-r9701.c
··· 122 122 static int __devinit r9701_probe(struct spi_device *spi) 123 123 { 124 124 struct rtc_device *rtc; 125 + struct rtc_time dt; 125 126 unsigned char tmp; 126 127 int res; 127 128 ··· 131 130 if (res || tmp != 0x20) { 132 131 dev_err(&spi->dev, "cannot read RTC register\n"); 133 132 return -ENODEV; 133 + } 134 + 135 + /* 136 + * The device seems to be present. Now check if the registers 137 + * contain invalid values. If so, try to write a default date: 138 + * 2000/1/1 00:00:00 139 + */ 140 + r9701_get_datetime(&spi->dev, &dt); 141 + if (rtc_valid_tm(&dt)) { 142 + dev_info(&spi->dev, "trying to repair invalid date/time\n"); 143 + dt.tm_sec = 0; 144 + dt.tm_min = 0; 145 + dt.tm_hour = 0; 146 + dt.tm_mday = 1; 147 + dt.tm_mon = 0; 148 + dt.tm_year = 100; 149 + 150 + if (r9701_set_datetime(&spi->dev, &dt)) { 151 + dev_err(&spi->dev, "cannot repair RTC register\n"); 152 + return -ENODEV; 153 + } 134 154 } 135 155 136 156 rtc = rtc_device_register("r9701",
+22 -9
drivers/rtc/rtc-s3c.c
··· 40 40 TYPE_S3C64XX, 41 41 }; 42 42 43 + struct s3c_rtc_drv_data { 44 + int cpu_type; 45 + }; 46 + 43 47 /* I have yet to find an S3C implementation with more than one 44 48 * of these rtc blocks in */ 45 49 ··· 450 446 static inline int s3c_rtc_get_driver_data(struct platform_device *pdev) 451 447 { 452 448 #ifdef CONFIG_OF 449 + struct s3c_rtc_drv_data *data; 453 450 if (pdev->dev.of_node) { 454 451 const struct of_device_id *match; 455 452 match = of_match_node(s3c_rtc_dt_match, pdev->dev.of_node); 456 - return match->data; 453 + data = (struct s3c_rtc_drv_data *) match->data; 454 + return data->cpu_type; 457 455 } 458 456 #endif 459 457 return platform_get_device_id(pdev)->driver_data; ··· 670 664 #define s3c_rtc_resume NULL 671 665 #endif 672 666 667 + static struct s3c_rtc_drv_data s3c_rtc_drv_data_array[] = { 668 + [TYPE_S3C2410] = { TYPE_S3C2410 }, 669 + [TYPE_S3C2416] = { TYPE_S3C2416 }, 670 + [TYPE_S3C2443] = { TYPE_S3C2443 }, 671 + [TYPE_S3C64XX] = { TYPE_S3C64XX }, 672 + }; 673 + 673 674 #ifdef CONFIG_OF 674 675 static const struct of_device_id s3c_rtc_dt_match[] = { 675 676 { 676 - .compatible = "samsung,s3c2410-rtc" 677 - .data = TYPE_S3C2410, 677 + .compatible = "samsung,s3c2410-rtc", 678 + .data = &s3c_rtc_drv_data_array[TYPE_S3C2410], 678 679 }, { 679 - .compatible = "samsung,s3c2416-rtc" 680 - .data = TYPE_S3C2416, 680 + .compatible = "samsung,s3c2416-rtc", 681 + .data = &s3c_rtc_drv_data_array[TYPE_S3C2416], 681 682 }, { 682 - .compatible = "samsung,s3c2443-rtc" 683 - .data = TYPE_S3C2443, 683 + .compatible = "samsung,s3c2443-rtc", 684 + .data = &s3c_rtc_drv_data_array[TYPE_S3C2443], 684 685 }, { 685 - .compatible = "samsung,s3c6410-rtc" 686 - .data = TYPE_S3C64XX, 686 + .compatible = "samsung,s3c6410-rtc", 687 + .data = &s3c_rtc_drv_data_array[TYPE_S3C64XX], 687 688 }, 688 689 {}, 689 690 };
+38 -5
drivers/rtc/rtc-twl.c
··· 112 112 #define BIT_RTC_CTRL_REG_TEST_MODE_M 0x10 113 113 #define BIT_RTC_CTRL_REG_SET_32_COUNTER_M 0x20 114 114 #define BIT_RTC_CTRL_REG_GET_TIME_M 0x40 115 + #define BIT_RTC_CTRL_REG_RTC_V_OPT 0x80 115 116 116 117 /* RTC_STATUS_REG bitfields */ 117 118 #define BIT_RTC_STATUS_REG_RUN_M 0x02 ··· 236 235 unsigned char rtc_data[ALL_TIME_REGS + 1]; 237 236 int ret; 238 237 u8 save_control; 238 + u8 rtc_control; 239 239 240 240 ret = twl_rtc_read_u8(&save_control, REG_RTC_CTRL_REG); 241 - if (ret < 0) 241 + if (ret < 0) { 242 + dev_err(dev, "%s: reading CTRL_REG, error %d\n", __func__, ret); 242 243 return ret; 244 + } 245 + /* for twl6030/32 make sure BIT_RTC_CTRL_REG_GET_TIME_M is clear */ 246 + if (twl_class_is_6030()) { 247 + if (save_control & BIT_RTC_CTRL_REG_GET_TIME_M) { 248 + save_control &= ~BIT_RTC_CTRL_REG_GET_TIME_M; 249 + ret = twl_rtc_write_u8(save_control, REG_RTC_CTRL_REG); 250 + if (ret < 0) { 251 + dev_err(dev, "%s clr GET_TIME, error %d\n", 252 + __func__, ret); 253 + return ret; 254 + } 255 + } 256 + } 243 257 244 - save_control |= BIT_RTC_CTRL_REG_GET_TIME_M; 258 + /* Copy RTC counting registers to static registers or latches */ 259 + rtc_control = save_control | BIT_RTC_CTRL_REG_GET_TIME_M; 245 260 246 - ret = twl_rtc_write_u8(save_control, REG_RTC_CTRL_REG); 247 - if (ret < 0) 261 + /* for twl6030/32 enable read access to static shadowed registers */ 262 + if (twl_class_is_6030()) 263 + rtc_control |= BIT_RTC_CTRL_REG_RTC_V_OPT; 264 + 265 + ret = twl_rtc_write_u8(rtc_control, REG_RTC_CTRL_REG); 266 + if (ret < 0) { 267 + dev_err(dev, "%s: writing CTRL_REG, error %d\n", __func__, ret); 248 268 return ret; 269 + } 249 270 250 271 ret = twl_i2c_read(TWL_MODULE_RTC, rtc_data, 251 272 (rtc_reg_map[REG_SECONDS_REG]), ALL_TIME_REGS); 252 273 253 274 if (ret < 0) { 254 - dev_err(dev, "rtc_read_time error %d\n", ret); 275 + dev_err(dev, "%s: reading data, error %d\n", __func__, ret); 255 276 return ret; 277 + } 278 + 279 + /* for twl6030 restore original state of rtc control register */ 280 + if (twl_class_is_6030()) { 281 + ret = twl_rtc_write_u8(save_control, REG_RTC_CTRL_REG); 282 + if (ret < 0) { 283 + dev_err(dev, "%s: restore CTRL_REG, error %d\n", 284 + __func__, ret); 285 + return ret; 286 + } 256 287 } 257 288 258 289 tm->tm_sec = bcd2bin(rtc_data[0]);
+1 -1
drivers/scsi/scsi_error.c
··· 835 835 836 836 scsi_eh_restore_cmnd(scmd, &ses); 837 837 838 - if (sdrv->eh_action) 838 + if (sdrv && sdrv->eh_action) 839 839 rtn = sdrv->eh_action(scmd, cmnd, cmnd_size, rtn); 840 840 841 841 return rtn;
+3 -3
drivers/spi/spi-davinci.c
··· 653 653 dev_dbg(sdev, "Couldn't DMA map a %d bytes RX buffer\n", 654 654 rx_buf_count); 655 655 if (t->tx_buf) 656 - dma_unmap_single(NULL, t->tx_dma, t->len, 656 + dma_unmap_single(&spi->dev, t->tx_dma, t->len, 657 657 DMA_TO_DEVICE); 658 658 return -ENOMEM; 659 659 } ··· 692 692 if (spicfg->io_type == SPI_IO_TYPE_DMA) { 693 693 694 694 if (t->tx_buf) 695 - dma_unmap_single(NULL, t->tx_dma, t->len, 695 + dma_unmap_single(&spi->dev, t->tx_dma, t->len, 696 696 DMA_TO_DEVICE); 697 697 698 - dma_unmap_single(NULL, t->rx_dma, rx_buf_count, 698 + dma_unmap_single(&spi->dev, t->rx_dma, rx_buf_count, 699 699 DMA_FROM_DEVICE); 700 700 701 701 clear_io_bits(dspi->base + SPIINT, SPIINT_DMA_REQ_EN);
+3 -1
drivers/spi/spi-fsl-spi.c
··· 139 139 static void fsl_spi_chipselect(struct spi_device *spi, int value) 140 140 { 141 141 struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master); 142 - struct fsl_spi_platform_data *pdata = spi->dev.parent->platform_data; 142 + struct fsl_spi_platform_data *pdata; 143 143 bool pol = spi->mode & SPI_CS_HIGH; 144 144 struct spi_mpc8xxx_cs *cs = spi->controller_state; 145 + 146 + pdata = spi->dev.parent->parent->platform_data; 145 147 146 148 if (value == BITBANG_CS_INACTIVE) { 147 149 if (pdata->cs_control)
+8 -4
drivers/spi/spi-imx.c
··· 83 83 struct spi_bitbang bitbang; 84 84 85 85 struct completion xfer_done; 86 - void *base; 86 + void __iomem *base; 87 87 int irq; 88 88 struct clk *clk; 89 89 unsigned long spi_clk; ··· 766 766 } 767 767 768 768 ret = of_property_read_u32(np, "fsl,spi-num-chipselects", &num_cs); 769 - if (ret < 0) 770 - num_cs = mxc_platform_info->num_chipselect; 769 + if (ret < 0) { 770 + if (mxc_platform_info) 771 + num_cs = mxc_platform_info->num_chipselect; 772 + else 773 + return ret; 774 + } 771 775 772 776 master = spi_alloc_master(&pdev->dev, 773 777 sizeof(struct spi_imx_data) + sizeof(int) * num_cs); ··· 788 784 789 785 for (i = 0; i < master->num_chipselect; i++) { 790 786 int cs_gpio = of_get_named_gpio(np, "cs-gpios", i); 791 - if (cs_gpio < 0) 787 + if (cs_gpio < 0 && mxc_platform_info) 792 788 cs_gpio = mxc_platform_info->chipselect[i]; 793 789 794 790 spi_imx->chipselect[i] = cs_gpio;
-2
drivers/spi/spi-pl022.c
··· 2195 2195 struct pl022 *pl022 = dev_get_drvdata(dev); 2196 2196 2197 2197 clk_disable(pl022->clk); 2198 - amba_vcore_disable(pl022->adev); 2199 2198 2200 2199 return 0; 2201 2200 } ··· 2203 2204 { 2204 2205 struct pl022 *pl022 = dev_get_drvdata(dev); 2205 2206 2206 - amba_vcore_enable(pl022->adev); 2207 2207 clk_enable(pl022->clk); 2208 2208 2209 2209 return 0;
+2 -1
drivers/staging/android/Kconfig
··· 27 27 28 28 config ANDROID_PERSISTENT_RAM 29 29 bool 30 + depends on HAVE_MEMBLOCK 30 31 select REED_SOLOMON 31 32 select REED_SOLOMON_ENC8 32 33 select REED_SOLOMON_DEC8 33 34 34 35 config ANDROID_RAM_CONSOLE 35 36 bool "Android RAM buffer console" 36 - depends on !S390 && !UML 37 + depends on !S390 && !UML && HAVE_MEMBLOCK 37 38 select ANDROID_PERSISTENT_RAM 38 39 default n 39 40
+7 -41
drivers/staging/android/lowmemorykiller.c
··· 55 55 }; 56 56 static int lowmem_minfree_size = 4; 57 57 58 - static struct task_struct *lowmem_deathpending; 59 58 static unsigned long lowmem_deathpending_timeout; 60 59 61 60 #define lowmem_print(level, x...) \ ··· 62 63 if (lowmem_debug_level >= (level)) \ 63 64 printk(x); \ 64 65 } while (0) 65 - 66 - static int 67 - task_notify_func(struct notifier_block *self, unsigned long val, void *data); 68 - 69 - static struct notifier_block task_nb = { 70 - .notifier_call = task_notify_func, 71 - }; 72 - 73 - static int 74 - task_notify_func(struct notifier_block *self, unsigned long val, void *data) 75 - { 76 - struct task_struct *task = data; 77 - 78 - if (task == lowmem_deathpending) 79 - lowmem_deathpending = NULL; 80 - 81 - return NOTIFY_OK; 82 - } 83 66 84 67 static int lowmem_shrink(struct shrinker *s, struct shrink_control *sc) 85 68 { ··· 77 96 int other_free = global_page_state(NR_FREE_PAGES); 78 97 int other_file = global_page_state(NR_FILE_PAGES) - 79 98 global_page_state(NR_SHMEM); 80 - 81 - /* 82 - * If we already have a death outstanding, then 83 - * bail out right away; indicating to vmscan 84 - * that we have nothing further to offer on 85 - * this pass. 86 - * 87 - * Note: Currently you need CONFIG_PROFILING 88 - * for this to work correctly. 89 - */ 90 - if (lowmem_deathpending && 91 - time_before_eq(jiffies, lowmem_deathpending_timeout)) 92 - return 0; 93 99 94 100 if (lowmem_adj_size < array_size) 95 101 array_size = lowmem_adj_size; ··· 116 148 if (!p) 117 149 continue; 118 150 151 + if (test_tsk_thread_flag(p, TIF_MEMDIE) && 152 + time_before_eq(jiffies, lowmem_deathpending_timeout)) { 153 + task_unlock(p); 154 + rcu_read_unlock(); 155 + return 0; 156 + } 119 157 oom_score_adj = p->signal->oom_score_adj; 120 158 if (oom_score_adj < min_score_adj) { 121 159 task_unlock(p); ··· 148 174 lowmem_print(1, "send sigkill to %d (%s), adj %d, size %d\n", 149 175 selected->pid, selected->comm, 150 176 selected_oom_score_adj, selected_tasksize); 151 - /* 152 - * If CONFIG_PROFILING is off, then we don't want to stall 153 - * the killer by setting lowmem_deathpending. 154 - */ 155 - #ifdef CONFIG_PROFILING 156 - lowmem_deathpending = selected; 157 177 lowmem_deathpending_timeout = jiffies + HZ; 158 - #endif 159 178 send_sig(SIGKILL, selected, 0); 179 + set_tsk_thread_flag(selected, TIF_MEMDIE); 160 180 rem -= selected_tasksize; 161 181 } 162 182 lowmem_print(4, "lowmem_shrink %lu, %x, return %d\n", ··· 166 198 167 199 static int __init lowmem_init(void) 168 200 { 169 - task_handoff_register(&task_nb); 170 201 register_shrinker(&lowmem_shrinker); 171 202 return 0; 172 203 } ··· 173 206 static void __exit lowmem_exit(void) 174 207 { 175 208 unregister_shrinker(&lowmem_shrinker); 176 - task_handoff_unregister(&task_nb); 177 209 } 178 210 179 211 module_param_named(cost, lowmem_shrinker.seeks, int, S_IRUGO | S_IWUSR);
+7 -4
drivers/staging/android/persistent_ram.c
··· 399 399 struct persistent_ram_zone *__persistent_ram_init(struct device *dev, bool ecc) 400 400 { 401 401 struct persistent_ram_zone *prz; 402 - int ret; 402 + int ret = -ENOMEM; 403 403 404 404 prz = kzalloc(sizeof(struct persistent_ram_zone), GFP_KERNEL); 405 405 if (!prz) { 406 406 pr_err("persistent_ram: failed to allocate persistent ram zone\n"); 407 - return ERR_PTR(-ENOMEM); 407 + goto err; 408 408 } 409 409 410 410 INIT_LIST_HEAD(&prz->node); ··· 412 412 ret = persistent_ram_buffer_init(dev_name(dev), prz); 413 413 if (ret) { 414 414 pr_err("persistent_ram: failed to initialize buffer\n"); 415 - return ERR_PTR(ret); 415 + goto err; 416 416 } 417 417 418 418 prz->ecc = ecc; 419 419 ret = persistent_ram_init_ecc(prz, prz->buffer_size); 420 420 if (ret) 421 - return ERR_PTR(ret); 421 + goto err; 422 422 423 423 if (prz->buffer->sig == PERSISTENT_RAM_SIG) { 424 424 if (buffer_size(prz) > prz->buffer_size || ··· 442 442 atomic_set(&prz->buffer->size, 0); 443 443 444 444 return prz; 445 + err: 446 + kfree(prz); 447 + return ERR_PTR(ret); 445 448 } 446 449 447 450 struct persistent_ram_zone * __init
+15 -12
drivers/staging/android/timed_gpio.c
··· 85 85 struct timed_gpio_platform_data *pdata = pdev->dev.platform_data; 86 86 struct timed_gpio *cur_gpio; 87 87 struct timed_gpio_data *gpio_data, *gpio_dat; 88 - int i, j, ret = 0; 88 + int i, ret; 89 89 90 90 if (!pdata) 91 91 return -EBUSY; ··· 108 108 gpio_dat->dev.get_time = gpio_get_time; 109 109 gpio_dat->dev.enable = gpio_enable; 110 110 ret = gpio_request(cur_gpio->gpio, cur_gpio->name); 111 - if (ret >= 0) { 112 - ret = timed_output_dev_register(&gpio_dat->dev); 113 - if (ret < 0) 114 - gpio_free(cur_gpio->gpio); 115 - } 111 + if (ret < 0) 112 + goto err_out; 113 + ret = timed_output_dev_register(&gpio_dat->dev); 116 114 if (ret < 0) { 117 - for (j = 0; j < i; j++) { 118 - timed_output_dev_unregister(&gpio_data[i].dev); 119 - gpio_free(gpio_data[i].gpio); 120 - } 121 - kfree(gpio_data); 122 - return ret; 115 + gpio_free(cur_gpio->gpio); 116 + goto err_out; 123 117 } 124 118 125 119 gpio_dat->gpio = cur_gpio->gpio; ··· 125 131 platform_set_drvdata(pdev, gpio_data); 126 132 127 133 return 0; 134 + 135 + err_out: 136 + while (--i >= 0) { 137 + timed_output_dev_unregister(&gpio_data[i].dev); 138 + gpio_free(gpio_data[i].gpio); 139 + } 140 + kfree(gpio_data); 141 + 142 + return ret; 128 143 } 129 144 130 145 static int timed_gpio_remove(struct platform_device *pdev)
+1
drivers/staging/iio/inkern.c
··· 82 82 ret = -ENODEV; 83 83 goto error_ret; 84 84 } 85 + i++; 85 86 } 86 87 error_ret: 87 88 mutex_unlock(&iio_map_list_lock);
+5 -3
drivers/staging/iio/magnetometer/ak8975.c
··· 108 108 static int ak8975_write_data(struct i2c_client *client, 109 109 u8 reg, u8 val, u8 mask, u8 shift) 110 110 { 111 - struct ak8975_data *data = i2c_get_clientdata(client); 111 + struct iio_dev *indio_dev = i2c_get_clientdata(client); 112 + struct ak8975_data *data = iio_priv(indio_dev); 112 113 u8 regval; 113 114 int ret; 114 115 ··· 160 159 */ 161 160 static int ak8975_setup(struct i2c_client *client) 162 161 { 163 - struct ak8975_data *data = i2c_get_clientdata(client); 162 + struct iio_dev *indio_dev = i2c_get_clientdata(client); 163 + struct ak8975_data *data = iio_priv(indio_dev); 164 164 u8 device_id; 165 165 int ret; 166 166 ··· 511 509 goto exit_gpio; 512 510 } 513 511 data = iio_priv(indio_dev); 512 + i2c_set_clientdata(client, indio_dev); 514 513 /* Perform some basic start-of-day setup of the device. */ 515 514 err = ak8975_setup(client); 516 515 if (err < 0) { ··· 519 516 goto exit_free_iio; 520 517 } 521 518 522 - i2c_set_clientdata(client, indio_dev); 523 519 data->client = client; 524 520 mutex_init(&data->lock); 525 521 data->eoc_irq = client->irq;
+3 -1
drivers/staging/iio/magnetometer/hmc5843.c
··· 521 521 /* Called when we have found a new HMC5843. */ 522 522 static void hmc5843_init_client(struct i2c_client *client) 523 523 { 524 - struct hmc5843_data *data = i2c_get_clientdata(client); 524 + struct iio_dev *indio_dev = i2c_get_clientdata(client); 525 + struct hmc5843_data *data = iio_priv(indio_dev); 526 + 525 527 hmc5843_set_meas_conf(client, data->meas_conf); 526 528 hmc5843_set_rate(client, data->rate); 527 529 hmc5843_configure(client, data->operating_mode);
+1 -1
drivers/staging/media/as102/as102_fw.c
··· 165 165 int as102_fw_upload(struct as10x_bus_adapter_t *bus_adap) 166 166 { 167 167 int errno = -EFAULT; 168 - const struct firmware *firmware; 168 + const struct firmware *firmware = NULL; 169 169 unsigned char *cmd_buf = NULL; 170 170 char *fw1, *fw2; 171 171 struct usb_device *dev = bus_adap->usb_dev;
+4 -3
drivers/staging/omapdrm/omap_drv.c
··· 803 803 static int pdev_probe(struct platform_device *device) 804 804 { 805 805 DBG("%s", device->name); 806 - if (platform_driver_register(&omap_dmm_driver)) 807 - dev_err(&device->dev, "DMM registration failed\n"); 808 - 809 806 return drm_platform_init(&omap_drm_driver, device); 810 807 } 811 808 ··· 830 833 static int __init omap_drm_init(void) 831 834 { 832 835 DBG("init"); 836 + if (platform_driver_register(&omap_dmm_driver)) { 837 + /* we can continue on without DMM.. so not fatal */ 838 + dev_err(NULL, "DMM registration failed\n"); 839 + } 833 840 return platform_driver_register(&pdev); 834 841 } 835 842
+3 -1
drivers/staging/ozwpan/TODO
··· 8 8 - code review by USB developer community. 9 9 - testing with as many devices as possible. 10 10 11 - Please send any patches for this driver to Chris Kelly <ckelly@ozmodevices.com> 11 + Please send any patches for this driver to 12 + Rupesh Gujare <rgujare@ozmodevices.com> 13 + Chris Kelly <ckelly@ozmodevices.com> 12 14 and Greg Kroah-Hartman <gregkh@linuxfoundation.org>.
+1 -5
drivers/staging/ramster/Kconfig
··· 1 - # Dependency on CONFIG_BROKEN is because there is a commit dependency 2 - # on a cleancache naming change to be submitted by Konrad Wilk 3 - # a39c00ded70339603ffe1b0ffdf3ade85bcf009a "Merge branch 'stable/cleancache.v13' 4 - # into linux-next. Once this commit is present, BROKEN can be removed 5 1 config RAMSTER 6 2 bool "Cross-machine RAM capacity sharing, aka peer-to-peer tmem" 7 - depends on (CLEANCACHE || FRONTSWAP) && CONFIGFS_FS=y && !ZCACHE && !XVMALLOC && !HIGHMEM && BROKEN 3 + depends on (CLEANCACHE || FRONTSWAP) && CONFIGFS_FS=y && !ZCACHE && !XVMALLOC && !HIGHMEM 8 4 select LZO_COMPRESS 9 5 select LZO_DECOMPRESS 10 6 default n
+2 -1
drivers/staging/rts_pstor/ms.c
··· 3498 3498 3499 3499 log_blk++; 3500 3500 3501 - for (seg_no = 0; seg_no < sizeof(ms_start_idx)/2; seg_no++) { 3501 + for (seg_no = 0; seg_no < ARRAY_SIZE(ms_start_idx) - 1; 3502 + seg_no++) { 3502 3503 if (log_blk < ms_start_idx[seg_no+1]) 3503 3504 break; 3504 3505 }
+5
drivers/staging/rts_pstor/rtsx.c
··· 1000 1000 1001 1001 rtsx_init_chip(dev->chip); 1002 1002 1003 + /* set the supported max_lun and max_id for the scsi host 1004 + * NOTE: the minimal value of max_id is 1 */ 1005 + host->max_id = 1; 1006 + host->max_lun = dev->chip->max_lun; 1007 + 1003 1008 /* Start up our control thread */ 1004 1009 th = kthread_run(rtsx_control_thread, dev, CR_DRIVER_NAME); 1005 1010 if (IS_ERR(th)) {
+6 -5
drivers/staging/rts_pstor/rtsx_transport.c
··· 335 335 int sg_cnt, i, resid; 336 336 int err = 0; 337 337 long timeleft; 338 + struct scatterlist *sg_ptr; 338 339 u32 val = TRIG_DMA; 339 340 340 341 if ((sg == NULL) || (num_sg <= 0) || !offset || !index) ··· 372 371 sg_cnt = dma_map_sg(&(rtsx->pci->dev), sg, num_sg, dma_dir); 373 372 374 373 resid = size; 375 - 374 + sg_ptr = sg; 376 375 chip->sgi = 0; 377 376 /* Usually the next entry will be @sg@ + 1, but if this sg element 378 377 * is part of a chained scatterlist, it could jump to the start of ··· 380 379 * the proper sg 381 380 */ 382 381 for (i = 0; i < *index; i++) 383 - sg = sg_next(sg); 382 + sg_ptr = sg_next(sg_ptr); 384 383 for (i = *index; i < sg_cnt; i++) { 385 384 dma_addr_t addr; 386 385 unsigned int len; 387 386 u8 option; 388 387 389 - addr = sg_dma_address(sg); 390 - len = sg_dma_len(sg); 388 + addr = sg_dma_address(sg_ptr); 389 + len = sg_dma_len(sg_ptr); 391 390 392 391 RTSX_DEBUGP("DMA addr: 0x%x, Len: 0x%x\n", 393 392 (unsigned int)addr, len); ··· 416 415 if (!resid) 417 416 break; 418 417 419 - sg = sg_next(sg); 418 + sg_ptr = sg_next(sg_ptr); 420 419 } 421 420 422 421 RTSX_DEBUGP("SG table count = %d\n", chip->sgi);
+3 -3
drivers/staging/sep/sep_main.c
··· 3114 3114 current->pid); 3115 3115 if (1 == test_bit(SEP_LEGACY_SENDMSG_DONE_OFFSET, 3116 3116 &call_status->status)) { 3117 - dev_warn(&sep->pdev->dev, 3117 + dev_dbg(&sep->pdev->dev, 3118 3118 "[PID%d] dcb prep needed before send msg\n", 3119 3119 current->pid); 3120 3120 error = -EPROTO; ··· 3122 3122 } 3123 3123 3124 3124 if (!arg) { 3125 - dev_warn(&sep->pdev->dev, 3125 + dev_dbg(&sep->pdev->dev, 3126 3126 "[PID%d] dcb null arg\n", current->pid); 3127 - error = EINVAL; 3127 + error = -EINVAL; 3128 3128 goto end_function; 3129 3129 } 3130 3130
+2 -2
drivers/staging/vme/devices/vme_pio2_core.c
··· 35 35 static int vector_num; 36 36 static int level[PIO2_CARDS_MAX]; 37 37 static int level_num; 38 - static const char *variant[PIO2_CARDS_MAX]; 38 + static char *variant[PIO2_CARDS_MAX]; 39 39 static int variant_num; 40 40 41 - static int loopback; 41 + static bool loopback; 42 42 43 43 static int pio2_match(struct vme_dev *); 44 44 static int __devinit pio2_probe(struct vme_dev *);
+3
drivers/staging/vt6655/key.c
··· 655 655 return (false); 656 656 } 657 657 658 + if (uKeyLength > MAX_KEY_LEN) 659 + return false; 660 + 658 661 pTable->KeyTable[MAX_KEY_TABLE-1].bInUse = true; 659 662 for(ii=0;ii<ETH_ALEN;ii++) 660 663 pTable->KeyTable[MAX_KEY_TABLE-1].abyBSSID[ii] = 0xFF;
+2 -1
drivers/staging/vt6656/ioctl.c
··· 565 565 result = -ENOMEM; 566 566 break; 567 567 } 568 - pNodeList = (PSNodeList)kmalloc(sizeof(SNodeList) + (sNodeList.uItem * sizeof(SNodeItem)), (int)GFP_ATOMIC); 568 + pNodeList = kmalloc(sizeof(SNodeList) + (sNodeList.uItem * sizeof(SNodeItem)), (int)GFP_ATOMIC); 569 569 if (pNodeList == NULL) { 570 570 result = -ENOMEM; 571 571 break; ··· 601 601 } 602 602 } 603 603 if (copy_to_user(pReq->data, pNodeList, sizeof(SNodeList) + (sNodeList.uItem * sizeof(SNodeItem)))) { 604 + kfree(pNodeList); 604 605 result = -EFAULT; 605 606 break; 606 607 }
+3
drivers/staging/vt6656/key.c
··· 684 684 return (FALSE); 685 685 } 686 686 687 + if (uKeyLength > MAX_KEY_LEN) 688 + return false; 689 + 687 690 pTable->KeyTable[MAX_KEY_TABLE-1].bInUse = TRUE; 688 691 for (ii = 0; ii < ETH_ALEN; ii++) 689 692 pTable->KeyTable[MAX_KEY_TABLE-1].abyBSSID[ii] = 0xFF;
+1 -1
drivers/staging/xgifb/vb_init.c
··· 61 61 } 62 62 temp = xgifb_reg_get(pVBInfo->P3c4, 0x3B); 63 63 /* SR3B[7][3]MAA15 MAA11 (Power on Trapping) */ 64 - if ((temp & 0x88) == 0x80) 64 + if (((temp & 0x88) == 0x80) || ((temp & 0x88) == 0x08)) 65 65 data = 0; /* DDR */ 66 66 else 67 67 data = 1; /* DDRII */
+7
drivers/staging/xgifb/vb_setmode.c
··· 152 152 pVBInfo->pXGINew_CR97 = &XG20_CR97; 153 153 154 154 if (ChipType == XG27) { 155 + unsigned char temp; 155 156 pVBInfo->MCLKData 156 157 = (struct SiS_MCLKData *) XGI27New_MCLKData; 157 158 pVBInfo->CR40 = XGI27_cr41; ··· 163 162 pVBInfo->pCRDE = XG27_CRDE; 164 163 pVBInfo->pSR40 = &XG27_SR40; 165 164 pVBInfo->pSR41 = &XG27_SR41; 165 + pVBInfo->SR15 = XG27_SR13; 166 166 167 + /*Z11m DDR*/ 168 + temp = xgifb_reg_get(pVBInfo->P3c4, 0x3B); 169 + /* SR3B[7][3]MAA15 MAA11 (Power on Trapping) */ 170 + if (((temp & 0x88) == 0x80) || ((temp & 0x88) == 0x08)) 171 + pVBInfo->pXGINew_CR97 = &Z11m_CR97; 167 172 } 168 173 169 174 if (ChipType >= XG20) {
+10 -1
drivers/staging/xgifb/vb_table.h
··· 33 33 {0x5c, 0x23, 0x01, 166} 34 34 }; 35 35 36 + static unsigned char XG27_SR13[4][8] = { 37 + {0x35, 0x45, 0xb1, 0x00, 0x00, 0x00, 0x00, 0x00}, /* SR13 */ 38 + {0x41, 0x51, 0x5c, 0x00, 0x00, 0x00, 0x00, 0x00}, /* SR14 */ 39 + {0x32, 0x32, 0x42, 0x00, 0x00, 0x00, 0x00, 0x00}, /* SR18 */ 40 + {0x03, 0x03, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00} /* SR1B */ 41 + }; 42 + 36 43 static unsigned char XGI340_SR13[4][8] = { 37 44 {0x35, 0x45, 0xb1, 0x00, 0x00, 0x00, 0x00, 0x00}, /* SR13 */ 38 45 {0x41, 0x51, 0x5c, 0x00, 0x00, 0x00, 0x00, 0x00}, /* SR14 */ ··· 78 71 {0x20, 0x40, 0x60, 0x00, 0x00, 0x00, 0x00, 0x00}, /* 0 CR41 */ 79 72 {0xC4, 0x40, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00}, /* 1 CR8A */ 80 73 {0xC4, 0x40, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00}, /* 2 CR8B */ 81 - {0xB5, 0x13, 0xa4, 0x00, 0x00, 0x00, 0x00, 0x00}, /* 3 CR40[7], 74 + {0xB3, 0x13, 0xa4, 0x00, 0x00, 0x00, 0x00, 0x00}, /* 3 CR40[7], 82 75 CR99[2:0], 83 76 CR45[3:0]*/ 84 77 {0xf0, 0xf5, 0xf0, 0x00, 0x00, 0x00, 0x00, 0x00}, /* 4 CR59 */ ··· 2809 2802 static unsigned char XG27_CRDE[2]; 2810 2803 static unsigned char XG27_SR40 = 0x04 ; 2811 2804 static unsigned char XG27_SR41 = 0x00 ; 2805 + 2806 + static unsigned char Z11m_CR97 = 0x80 ; 2812 2807 2813 2808 static struct XGI330_VCLKDataStruct XGI_VCLKData[] = { 2814 2809 /* SR2B,SR2C,SR2D */
+18 -12
drivers/staging/zsmalloc/zsmalloc-main.c
··· 267 267 return off + obj_idx * class_size; 268 268 } 269 269 270 + static void reset_page(struct page *page) 271 + { 272 + clear_bit(PG_private, &page->flags); 273 + clear_bit(PG_private_2, &page->flags); 274 + set_page_private(page, 0); 275 + page->mapping = NULL; 276 + page->freelist = NULL; 277 + reset_page_mapcount(page); 278 + } 279 + 270 280 static void free_zspage(struct page *first_page) 271 281 { 272 - struct page *nextp, *tmp; 282 + struct page *nextp, *tmp, *head_extra; 273 283 274 284 BUG_ON(!is_first_page(first_page)); 275 285 BUG_ON(first_page->inuse); 276 286 277 - nextp = (struct page *)page_private(first_page); 287 + head_extra = (struct page *)page_private(first_page); 278 288 279 - clear_bit(PG_private, &first_page->flags); 280 - clear_bit(PG_private_2, &first_page->flags); 281 - set_page_private(first_page, 0); 282 - first_page->mapping = NULL; 283 - first_page->freelist = NULL; 284 - reset_page_mapcount(first_page); 289 + reset_page(first_page); 285 290 __free_page(first_page); 286 291 287 292 /* zspage with only 1 system page */ 288 - if (!nextp) 293 + if (!head_extra) 289 294 return; 290 295 291 - list_for_each_entry_safe(nextp, tmp, &nextp->lru, lru) { 296 + list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) { 292 297 list_del(&nextp->lru); 293 - clear_bit(PG_private_2, &nextp->flags); 294 - nextp->index = 0; 298 + reset_page(nextp); 295 299 __free_page(nextp); 296 300 } 301 + reset_page(head_extra); 302 + __free_page(head_extra); 297 303 } 298 304 299 305 /* Initialize a newly allocated zspage */
+6 -6
drivers/tty/serial/8250/8250.c
··· 1572 1572 do { 1573 1573 struct uart_8250_port *up; 1574 1574 struct uart_port *port; 1575 - bool skip; 1576 1575 1577 1576 up = list_entry(l, struct uart_8250_port, list); 1578 1577 port = &up->port; 1579 - skip = pass_counter && up->port.flags & UPF_IIR_ONCE; 1580 1578 1581 - if (!skip && port->handle_irq(port)) { 1579 + if (port->handle_irq(port)) { 1582 1580 handled = 1; 1583 1581 end = NULL; 1584 1582 } else if (end == NULL) ··· 2035 2037 spin_unlock_irqrestore(&port->lock, flags); 2036 2038 2037 2039 /* 2038 - * If the interrupt is not reasserted, setup a timer to 2039 - * kick the UART on a regular basis. 2040 + * If the interrupt is not reasserted, or we otherwise 2041 + * don't trust the iir, setup a timer to kick the UART 2042 + * on a regular basis. 2040 2043 */ 2041 - if (!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) { 2044 + if ((!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) || 2045 + up->port.flags & UPF_BUG_THRE) { 2042 2046 up->bugs |= UART_BUG_THRE; 2043 2047 pr_debug("ttyS%d - using backup timer\n", 2044 2048 serial_index(port));
+1 -15
drivers/tty/serial/8250/8250_pci.c
··· 1096 1096 const struct pciserial_board *board, 1097 1097 struct uart_port *port, int idx) 1098 1098 { 1099 - port->flags |= UPF_IIR_ONCE; 1099 + port->flags |= UPF_BUG_THRE; 1100 1100 return skip_tx_en_setup(priv, board, port, idx); 1101 1101 } 1102 1102 ··· 1116 1116 { 1117 1117 port->flags |= UPF_EXAR_EFR; 1118 1118 return pci_default_setup(priv, board, port, idx); 1119 - } 1120 - 1121 - static int try_enable_msi(struct pci_dev *dev) 1122 - { 1123 - /* use msi if available, but fallback to legacy otherwise */ 1124 - pci_enable_msi(dev); 1125 - return 0; 1126 - } 1127 - 1128 - static void disable_msi(struct pci_dev *dev) 1129 - { 1130 - pci_disable_msi(dev); 1131 1119 } 1132 1120 1133 1121 #define PCI_VENDOR_ID_SBSMODULARIO 0x124B ··· 1237 1249 .device = PCI_DEVICE_ID_INTEL_PATSBURG_KT, 1238 1250 .subvendor = PCI_ANY_ID, 1239 1251 .subdevice = PCI_ANY_ID, 1240 - .init = try_enable_msi, 1241 1252 .setup = kt_serial_setup, 1242 - .exit = disable_msi, 1243 1253 }, 1244 1254 /* 1245 1255 * ITE
+1 -1
drivers/tty/serial/Kconfig
··· 1041 1041 1042 1042 config SERIAL_OMAP_CONSOLE 1043 1043 bool "Console on OMAP serial port" 1044 - depends on SERIAL_OMAP 1044 + depends on SERIAL_OMAP=y 1045 1045 select SERIAL_CORE_CONSOLE 1046 1046 help 1047 1047 Select this option if you would like to use omap serial port as
+2 -2
drivers/tty/serial/altera_uart.c
··· 556 556 res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 557 557 if (res_mem) 558 558 port->mapbase = res_mem->start; 559 - else if (platp->mapbase) 559 + else if (platp) 560 560 port->mapbase = platp->mapbase; 561 561 else 562 562 return -EINVAL; ··· 564 564 res_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 565 565 if (res_irq) 566 566 port->irq = res_irq->start; 567 - else if (platp->irq) 567 + else if (platp) 568 568 port->irq = platp->irq; 569 569 570 570 /* Check platform data first so we can override device node data */
+4 -4
drivers/tty/serial/amba-pl011.c
··· 1946 1946 goto unmap; 1947 1947 } 1948 1948 1949 - /* Ensure interrupts from this UART are masked and cleared */ 1950 - writew(0, uap->port.membase + UART011_IMSC); 1951 - writew(0xffff, uap->port.membase + UART011_ICR); 1952 - 1953 1949 uap->vendor = vendor; 1954 1950 uap->lcrh_rx = vendor->lcrh_rx; 1955 1951 uap->lcrh_tx = vendor->lcrh_tx; ··· 1962 1966 uap->port.flags = UPF_BOOT_AUTOCONF; 1963 1967 uap->port.line = i; 1964 1968 pl011_dma_probe(uap); 1969 + 1970 + /* Ensure interrupts from this UART are masked and cleared */ 1971 + writew(0, uap->port.membase + UART011_IMSC); 1972 + writew(0xffff, uap->port.membase + UART011_ICR); 1965 1973 1966 1974 snprintf(uap->type, sizeof(uap->type), "PL011 rev%u", amba_rev(dev)); 1967 1975
+4
drivers/tty/serial/atmel_serial.c
··· 389 389 { 390 390 UART_PUT_CR(port, ATMEL_US_RSTSTA); /* reset status and receiver */ 391 391 392 + UART_PUT_CR(port, ATMEL_US_RXEN); 393 + 392 394 if (atmel_use_dma_rx(port)) { 393 395 /* enable PDC controller */ 394 396 UART_PUT_IER(port, ATMEL_US_ENDRX | ATMEL_US_TIMEOUT | ··· 406 404 */ 407 405 static void atmel_stop_rx(struct uart_port *port) 408 406 { 407 + UART_PUT_CR(port, ATMEL_US_RXDIS); 408 + 409 409 if (atmel_use_dma_rx(port)) { 410 410 /* disable PDC receive */ 411 411 UART_PUT_PTCR(port, ATMEL_PDC_RXTDIS);
+20 -23
drivers/tty/serial/omap-serial.c
··· 1381 1381 return -ENODEV; 1382 1382 } 1383 1383 1384 - if (!request_mem_region(mem->start, resource_size(mem), 1384 + if (!devm_request_mem_region(&pdev->dev, mem->start, resource_size(mem), 1385 1385 pdev->dev.driver->name)) { 1386 1386 dev_err(&pdev->dev, "memory region already claimed\n"); 1387 1387 return -EBUSY; 1388 1388 } 1389 1389 1390 1390 dma_rx = platform_get_resource_byname(pdev, IORESOURCE_DMA, "rx"); 1391 - if (!dma_rx) { 1392 - ret = -EINVAL; 1393 - goto err; 1394 - } 1391 + if (!dma_rx) 1392 + return -ENXIO; 1395 1393 1396 1394 dma_tx = platform_get_resource_byname(pdev, IORESOURCE_DMA, "tx"); 1397 - if (!dma_tx) { 1398 - ret = -EINVAL; 1399 - goto err; 1400 - } 1395 + if (!dma_tx) 1396 + return -ENXIO; 1401 1397 1402 - up = kzalloc(sizeof(*up), GFP_KERNEL); 1403 - if (up == NULL) { 1404 - ret = -ENOMEM; 1405 - goto do_release_region; 1406 - } 1398 + up = devm_kzalloc(&pdev->dev, sizeof(*up), GFP_KERNEL); 1399 + if (!up) 1400 + return -ENOMEM; 1401 + 1407 1402 up->pdev = pdev; 1408 1403 up->port.dev = &pdev->dev; 1409 1404 up->port.type = PORT_OMAP; ··· 1418 1423 dev_err(&pdev->dev, "failed to get alias/pdev id, errno %d\n", 1419 1424 up->port.line); 1420 1425 ret = -ENODEV; 1421 - goto err; 1426 + goto err_port_line; 1422 1427 } 1423 1428 1424 1429 sprintf(up->name, "OMAP UART%d", up->port.line); 1425 1430 up->port.mapbase = mem->start; 1426 - up->port.membase = ioremap(mem->start, resource_size(mem)); 1431 + up->port.membase = devm_ioremap(&pdev->dev, mem->start, 1432 + resource_size(mem)); 1427 1433 if (!up->port.membase) { 1428 1434 dev_err(&pdev->dev, "can't ioremap UART\n"); 1429 1435 ret = -ENOMEM; 1430 - goto err; 1436 + goto err_ioremap; 1431 1437 } 1432 1438 1433 1439 up->port.flags = omap_up_info->flags; ··· 1474 1478 1475 1479 ret = uart_add_one_port(&serial_omap_reg, &up->port); 1476 1480 if (ret != 0) 1477 - goto do_release_region; 1481 + goto err_add_port; 1478 1482 1479 1483 pm_runtime_put(&pdev->dev); 1480 1484
platform_set_drvdata(pdev, up); 1481 1485 return 0; 1482 - err: 1486 + 1487 + err_add_port: 1488 + pm_runtime_put(&pdev->dev); 1489 + pm_runtime_disable(&pdev->dev); 1490 + err_ioremap: 1491 + err_port_line: 1483 1492 dev_err(&pdev->dev, "[UART%d]: failure [%s]: %d\n", 1484 1493 pdev->id, __func__, ret); 1485 - do_release_region: 1486 - release_mem_region(mem->start, resource_size(mem)); 1487 1494 return ret; 1488 1495 } 1489 1496 ··· 1498 1499 pm_runtime_disable(&up->pdev->dev); 1499 1500 uart_remove_one_port(&serial_omap_reg, &up->port); 1500 1501 pm_qos_remove_request(&up->pm_qos_request); 1501 - 1502 - kfree(up); 1503 1502 } 1504 1503 1505 1504 platform_set_drvdata(dev, NULL);
+8
drivers/tty/serial/pch_uart.c
··· 210 210 #define CMITC_UARTCLK 192000000 /* 192.0000 MHz */ 211 211 #define FRI2_64_UARTCLK 64000000 /* 64.0000 MHz */ 212 212 #define FRI2_48_UARTCLK 48000000 /* 48.0000 MHz */ 213 + #define NTC1_UARTCLK 64000000 /* 64.0000 MHz */ 213 214 214 215 struct pch_uart_buffer { 215 216 unsigned char *buf; ··· 384 383 cmp = dmi_get_system_info(DMI_PRODUCT_NAME); 385 384 if (cmp && strstr(cmp, "Fish River Island II")) 386 385 return FRI2_48_UARTCLK; 386 + 387 + /* Kontron COMe-mTT10 (nanoETXexpress-TT) */ 388 + cmp = dmi_get_system_info(DMI_BOARD_NAME); 389 + if (cmp && (strstr(cmp, "COMe-mTT") || 390 + strstr(cmp, "nanoETXexpress-TT"))) 391 + return NTC1_UARTCLK; 387 392 388 393 return DEFAULT_UARTCLK; 389 394 } ··· 1658 1651 } 1659 1652 1660 1653 pci_enable_msi(pdev); 1654 + pci_set_master(pdev); 1661 1655 1662 1656 iobase = pci_resource_start(pdev, 0); 1663 1657 mapbase = pci_resource_start(pdev, 1);
+1
drivers/tty/serial/samsung.c
··· 982 982 983 983 ucon &= ucon_mask; 984 984 wr_regl(port, S3C2410_UCON, ucon | cfg->ucon); 985 + wr_regl(port, S3C2410_ULCON, cfg->ulcon); 985 986 986 987 /* reset both fifos */ 987 988 wr_regl(port, S3C2410_UFCON, cfg->ufcon | S3C2410_UFCON_RESETBOTH);
+1 -2
drivers/tty/vt/vt.c
··· 2932 2932 gotoxy(vc, vc->vc_x, vc->vc_y); 2933 2933 csi_J(vc, 0); 2934 2934 update_screen(vc); 2935 - pr_info("Console: %s %s %dx%d", 2935 + pr_info("Console: %s %s %dx%d\n", 2936 2936 vc->vc_can_do_color ? "colour" : "mono", 2937 2937 display_desc, vc->vc_cols, vc->vc_rows); 2938 2938 printable = 1; 2939 - printk("\n"); 2940 2939 2941 2940 console_unlock(); 2942 2941
+8 -8
drivers/usb/Kconfig
··· 2 2 # USB device configuration 3 3 # 4 4 5 - menuconfig USB_SUPPORT 6 - bool "USB support" 7 - depends on HAS_IOMEM 8 - default y 9 - ---help--- 10 - This option adds core support for Universal Serial Bus (USB). 11 - You will also need drivers from the following menu to make use of it. 12 - 13 5 # many non-PCI SOC chips embed OHCI 14 6 config USB_ARCH_HAS_OHCI 15 7 boolean ··· 54 62 config USB_ARCH_HAS_XHCI 55 63 boolean 56 64 default PCI 65 + 66 + menuconfig USB_SUPPORT 67 + bool "USB support" 68 + depends on HAS_IOMEM 69 + default y 70 + ---help--- 71 + This option adds core support for Universal Serial Bus (USB). 72 + You will also need drivers from the following menu to make use of it. 57 73 58 74 if USB_SUPPORT 59 75
+7 -2
drivers/usb/core/driver.c
··· 1189 1189 if (status == 0) { 1190 1190 status = usb_suspend_device(udev, msg); 1191 1191 1192 - /* Again, ignore errors during system sleep transitions */ 1193 - if (!PMSG_IS_AUTO(msg)) 1192 + /* 1193 + * Ignore errors from non-root-hub devices during 1194 + * system sleep transitions. For the most part, 1195 + * these devices should go to low power anyway when 1196 + * the entire bus is suspended. 1197 + */ 1198 + if (udev->parent && !PMSG_IS_AUTO(msg)) 1194 1199 status = 0; 1195 1200 } 1196 1201
+12
drivers/usb/core/hcd.c
··· 1978 1978 if (status == 0) { 1979 1979 usb_set_device_state(rhdev, USB_STATE_SUSPENDED); 1980 1980 hcd->state = HC_STATE_SUSPENDED; 1981 + 1982 + /* Did we race with a root-hub wakeup event? */ 1983 + if (rhdev->do_remote_wakeup) { 1984 + char buffer[6]; 1985 + 1986 + status = hcd->driver->hub_status_data(hcd, buffer); 1987 + if (status != 0) { 1988 + dev_dbg(&rhdev->dev, "suspend raced with wakeup event\n"); 1989 + hcd_bus_resume(rhdev, PMSG_AUTO_RESUME); 1990 + status = -EBUSY; 1991 + } 1992 + } 1981 1993 } else { 1982 1994 spin_lock_irq(&hcd_root_hub_lock); 1983 1995 if (!HCD_DEAD(hcd)) {
+16
drivers/usb/core/hub.c
··· 3163 3163 if (retval) 3164 3164 goto fail; 3165 3165 3166 + /* 3167 + * Some superspeed devices have finished the link training process 3168 + * and attached to a superspeed hub port, but the device descriptor 3169 + * got from those devices show they aren't superspeed devices. Warm 3170 + * reset the port attached by the devices can fix them. 3171 + */ 3172 + if ((udev->speed == USB_SPEED_SUPER) && 3173 + (le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) { 3174 + dev_err(&udev->dev, "got a wrong device descriptor, " 3175 + "warm reset device\n"); 3176 + hub_port_reset(hub, port1, udev, 3177 + HUB_BH_RESET_TIME, true); 3178 + retval = -EINVAL; 3179 + goto fail; 3180 + } 3181 + 3166 3182 if (udev->descriptor.bMaxPacketSize0 == 0xff || 3167 3183 udev->speed == USB_SPEED_SUPER) 3168 3184 i = 512;
+6 -5
drivers/usb/core/message.c
··· 308 308 retval = usb_unlink_urb(io->urbs [i]); 309 309 if (retval != -EINPROGRESS && 310 310 retval != -ENODEV && 311 - retval != -EBUSY) 311 + retval != -EBUSY && 312 + retval != -EIDRM) 312 313 dev_err(&io->dev->dev, 313 314 "%s, unlink --> %d\n", 314 315 __func__, retval); ··· 318 317 } 319 318 spin_lock(&io->lock); 320 319 } 321 - urb->dev = NULL; 322 320 323 321 /* on the last completion, signal usb_sg_wait() */ 324 322 io->bytes += urb->actual_length; ··· 524 524 case -ENXIO: /* hc didn't queue this one */ 525 525 case -EAGAIN: 526 526 case -ENOMEM: 527 - io->urbs[i]->dev = NULL; 528 527 retval = 0; 529 528 yield(); 530 529 break; ··· 541 542 542 543 /* fail any uncompleted urbs */ 543 544 default: 544 - io->urbs[i]->dev = NULL; 545 545 io->urbs[i]->status = retval; 546 546 dev_dbg(&io->dev->dev, "%s, submit --> %d\n", 547 547 __func__, retval); ··· 591 593 if (!io->urbs [i]->dev) 592 594 continue; 593 595 retval = usb_unlink_urb(io->urbs [i]); 594 - if (retval != -EINPROGRESS && retval != -EBUSY) 596 + if (retval != -EINPROGRESS 597 + && retval != -ENODEV 598 + && retval != -EBUSY 599 + && retval != -EIDRM) 595 600 dev_warn(&io->dev->dev, "%s, unlink --> %d\n", 596 601 __func__, retval); 597 602 }
+12
drivers/usb/core/urb.c
··· 539 539 * never submitted, or it was unlinked before, or the hardware is already 540 540 * finished with it), even if the completion handler has not yet run. 541 541 * 542 + * The URB must not be deallocated while this routine is running. In 543 + * particular, when a driver calls this routine, it must insure that the 544 + * completion handler cannot deallocate the URB. 545 + * 542 546 * Unlinking and Endpoint Queues: 543 547 * 544 548 * [The behaviors and guarantees described below do not apply to virtual ··· 607 603 * with error -EPERM. Thus even if the URB's completion handler always 608 604 * tries to resubmit, it will not succeed and the URB will become idle. 609 605 * 606 + * The URB must not be deallocated while this routine is running. In 607 + * particular, when a driver calls this routine, it must insure that the 608 + * completion handler cannot deallocate the URB. 609 + * 610 610 * This routine may not be used in an interrupt context (such as a bottom 611 611 * half or a completion handler), or when holding a spinlock, or in other 612 612 * situations where the caller can't schedule(). ··· 647 639 * After and while the routine runs, attempts to resubmit the URB will fail 648 640 * with error -EPERM. Thus even if the URB's completion handler always 649 641 * tries to resubmit, it will not succeed and the URB will become idle. 642 + * 643 + * The URB must not be deallocated while this routine is running. In 644 + * particular, when a driver calls this routine, it must insure that the 645 + * completion handler cannot deallocate the URB. 650 646 * 651 647 * This routine may not be used in an interrupt context (such as a bottom 652 648 * half or a completion handler), or when holding a spinlock, or in other
-1
drivers/usb/gadget/inode.c
··· 1574 1574 DBG (dev, "%s %d\n", __func__, dev->state); 1575 1575 1576 1576 /* dev->state must prevent interference */ 1577 - restart: 1578 1577 spin_lock_irq (&dev->lock); 1579 1578 while (!list_empty(&dev->epfiles)) { 1580 1579 struct ep_data *ep;
+3
drivers/usb/host/ehci-hcd.c
··· 347 347 if (ehci->debug) 348 348 dbgp_external_startup(); 349 349 350 + ehci->port_c_suspend = ehci->suspended_ports = 351 + ehci->resuming_ports = 0; 350 352 return retval; 351 353 } 352 354 ··· 941 939 * like usb_port_resume() does. 942 940 */ 943 941 ehci->reset_done[i] = jiffies + msecs_to_jiffies(25); 942 + set_bit(i, &ehci->resuming_ports); 944 943 ehci_dbg (ehci, "port %d remote wakeup\n", i + 1); 945 944 mod_timer(&hcd->rh_timer, ehci->reset_done[i]); 946 945 }
+16 -15
drivers/usb/host/ehci-hub.c
··· 223 223 * remote wakeup, we must fail the suspend. 224 224 */ 225 225 if (hcd->self.root_hub->do_remote_wakeup) { 226 - port = HCS_N_PORTS(ehci->hcs_params); 227 - while (port--) { 228 - if (ehci->reset_done[port] != 0) { 229 - spin_unlock_irq(&ehci->lock); 230 - ehci_dbg(ehci, "suspend failed because " 231 - "port %d is resuming\n", 232 - port + 1); 233 - return -EBUSY; 234 - } 226 + if (ehci->resuming_ports) { 227 + spin_unlock_irq(&ehci->lock); 228 + ehci_dbg(ehci, "suspend failed because a port is resuming\n"); 229 + return -EBUSY; 235 230 } 236 231 } 237 232 ··· 549 554 ehci_hub_status_data (struct usb_hcd *hcd, char *buf) 550 555 { 551 556 struct ehci_hcd *ehci = hcd_to_ehci (hcd); 552 - u32 temp, status = 0; 557 + u32 temp, status; 553 558 u32 mask; 554 559 int ports, i, retval = 1; 555 560 unsigned long flags; 556 561 u32 ppcd = 0; 557 - 558 - /* if !USB_SUSPEND, root hub timers won't get shut down ... */ 559 - if (ehci->rh_state != EHCI_RH_RUNNING) 560 - return 0; 561 562 562 563 /* init status to no-changes */ 563 564 buf [0] = 0; ··· 562 571 buf [1] = 0; 563 572 retval++; 564 573 } 574 + 575 + /* Inform the core about resumes-in-progress by returning 576 + * a non-zero value even if there are no status changes. 577 + */ 578 + status = ehci->resuming_ports; 565 579 566 580 /* Some boards (mostly VIA?) report bogus overcurrent indications, 567 581 * causing massive log spam unless we completely ignore them.
It ··· 842 846 ehci_writel(ehci, 843 847 temp & ~(PORT_RWC_BITS | PORT_RESUME), 844 848 status_reg); 849 + clear_bit(wIndex, &ehci->resuming_ports); 845 850 retval = handshake(ehci, status_reg, 846 851 PORT_RESUME, 0, 2000 /* 2msec */); 847 852 if (retval != 0) { ··· 861 864 ehci->reset_done[wIndex])) { 862 865 status |= USB_PORT_STAT_C_RESET << 16; 863 866 ehci->reset_done [wIndex] = 0; 867 + clear_bit(wIndex, &ehci->resuming_ports); 864 868 865 869 /* force reset to complete */ 866 870 ehci_writel(ehci, temp & ~(PORT_RWC_BITS | PORT_RESET), ··· 882 884 ehci_readl(ehci, status_reg)); 883 885 } 884 886 885 - if (!(temp & (PORT_RESUME|PORT_RESET))) 887 + if (!(temp & (PORT_RESUME|PORT_RESET))) { 886 888 ehci->reset_done[wIndex] = 0; 889 + clear_bit(wIndex, &ehci->resuming_ports); 890 + } 887 891 888 892 /* transfer dedicated ports to the companion hc */ 889 893 if ((temp & PORT_CONNECT) && ··· 920 920 status |= USB_PORT_STAT_SUSPEND; 921 921 } else if (test_bit(wIndex, &ehci->suspended_ports)) { 922 922 clear_bit(wIndex, &ehci->suspended_ports); 923 + clear_bit(wIndex, &ehci->resuming_ports); 923 924 ehci->reset_done[wIndex] = 0; 924 925 if (temp & PORT_PE) 925 926 set_bit(wIndex, &ehci->port_c_suspend);
+2
drivers/usb/host/ehci-tegra.c
··· 224 224 temp &= ~(PORT_RWC_BITS | PORT_WAKE_BITS); 225 225 /* start resume signalling */ 226 226 ehci_writel(ehci, temp | PORT_RESUME, status_reg); 227 + set_bit(wIndex-1, &ehci->resuming_ports); 227 228 228 229 spin_unlock_irqrestore(&ehci->lock, flags); 229 230 msleep(20); ··· 237 236 pr_err("%s: timeout waiting for SUSPEND\n", __func__); 238 237 239 238 ehci->reset_done[wIndex-1] = 0; 239 + clear_bit(wIndex-1, &ehci->resuming_ports); 240 240 241 241 tegra->port_resuming = 1; 242 242 goto done;
+2
drivers/usb/host/ehci.h
··· 117 117 the change-suspend feature turned on */ 118 118 unsigned long suspended_ports; /* which ports are 119 119 suspended */ 120 + unsigned long resuming_ports; /* which ports have 121 + started to resume */ 120 122 121 123 /* per-HC memory pools (could be per-bus, but ...) */ 122 124 struct dma_pool *qh_pool; /* qh per active urb */
+7 -3
drivers/usb/host/pci-quirks.c
··· 825 825 } 826 826 } 827 827 828 - /* Disable any BIOS SMIs */ 829 - writel(XHCI_LEGACY_DISABLE_SMI, 830 - base + ext_cap_offset + XHCI_LEGACY_CONTROL_OFFSET); 828 + val = readl(base + ext_cap_offset + XHCI_LEGACY_CONTROL_OFFSET); 829 + /* Mask off (turn off) any enabled SMIs */ 830 + val &= XHCI_LEGACY_DISABLE_SMI; 831 + /* Mask all SMI events bits, RW1C */ 832 + val |= XHCI_LEGACY_SMI_EVENTS; 833 + /* Disable any BIOS SMIs and clear all SMI events*/ 834 + writel(val, base + ext_cap_offset + XHCI_LEGACY_CONTROL_OFFSET); 831 835 832 836 if (usb_is_intel_switchable_xhci(pdev)) 833 837 usb_enable_xhci_ports(pdev);
+3 -2
drivers/usb/host/uhci-hub.c
··· 196 196 status = get_hub_status_data(uhci, buf); 197 197 198 198 switch (uhci->rh_state) { 199 - case UHCI_RH_SUSPENDING: 200 199 case UHCI_RH_SUSPENDED: 201 200 /* if port change, ask to be resumed */ 202 - if (status || uhci->resuming_ports) 201 + if (status || uhci->resuming_ports) { 202 + status = 1; 203 203 usb_hcd_resume_root_hub(hcd); 204 + } 204 205 break; 205 206 206 207 case UHCI_RH_AUTO_STOPPED:
+1 -1
drivers/usb/host/xhci-dbg.c
··· 119 119 xhci_dbg(xhci, " Event Interrupts %s\n", 120 120 (temp & CMD_EIE) ? "enabled " : "disabled"); 121 121 xhci_dbg(xhci, " Host System Error Interrupts %s\n", 122 - (temp & CMD_EIE) ? "enabled " : "disabled"); 122 + (temp & CMD_HSEIE) ? "enabled " : "disabled"); 123 123 xhci_dbg(xhci, " HC has %sfinished light reset\n", 124 124 (temp & CMD_LRESET) ? "not " : ""); 125 125 }
+3 -2
drivers/usb/host/xhci-ext-caps.h
··· 62 62 /* USB Legacy Support Control and Status Register - section 7.1.2 */ 63 63 /* Add this offset, plus the value of xECP in HCCPARAMS to the base address */ 64 64 #define XHCI_LEGACY_CONTROL_OFFSET (0x04) 65 - /* bits 1:2, 5:12, and 17:19 need to be preserved; bits 21:28 should be zero */ 66 - #define XHCI_LEGACY_DISABLE_SMI ((0x3 << 1) + (0xff << 5) + (0x7 << 17)) 65 + /* bits 1:3, 5:12, and 17:19 need to be preserved; bits 21:28 should be zero */ 66 + #define XHCI_LEGACY_DISABLE_SMI ((0x7 << 1) + (0xff << 5) + (0x7 << 17)) 67 + #define XHCI_LEGACY_SMI_EVENTS (0x7 << 29) 67 68 68 69 /* USB 2.0 xHCI 0.96 L1C capability - section 7.2.2.1.3.2 */ 69 70 #define XHCI_L1C (1 << 16)
+2 -7
drivers/usb/host/xhci-mem.c
··· 1796 1796 int i; 1797 1797 1798 1798 /* Free the Event Ring Segment Table and the actual Event Ring */ 1799 - if (xhci->ir_set) { 1800 - xhci_writel(xhci, 0, &xhci->ir_set->erst_size); 1801 - xhci_write_64(xhci, 0, &xhci->ir_set->erst_base); 1802 - xhci_write_64(xhci, 0, &xhci->ir_set->erst_dequeue); 1803 - } 1804 1799 size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries); 1805 1800 if (xhci->erst.entries) 1806 1801 dma_free_coherent(&pdev->dev, size, ··· 1807 1812 xhci->event_ring = NULL; 1808 1813 xhci_dbg(xhci, "Freed event ring\n"); 1809 1814 1810 - xhci_write_64(xhci, 0, &xhci->op_regs->cmd_ring); 1811 1815 if (xhci->cmd_ring) 1812 1816 xhci_ring_free(xhci, xhci->cmd_ring); 1813 1817 xhci->cmd_ring = NULL; ··· 1835 1841 xhci->medium_streams_pool = NULL; 1836 1842 xhci_dbg(xhci, "Freed medium stream array pool\n"); 1837 1843 1838 - xhci_write_64(xhci, 0, &xhci->op_regs->dcbaa_ptr); 1839 1844 if (xhci->dcbaa) 1840 1845 dma_free_coherent(&pdev->dev, sizeof(*xhci->dcbaa), 1841 1846 xhci->dcbaa, xhci->dcbaa->dma); ··· 2452 2459 2453 2460 fail: 2454 2461 xhci_warn(xhci, "Couldn't initialize memory\n"); 2462 + xhci_halt(xhci); 2463 + xhci_reset(xhci); 2455 2464 xhci_mem_cleanup(xhci); 2456 2465 return -ENOMEM; 2457 2466 }
+3 -1
drivers/usb/host/xhci-pci.c
··· 95 95 xhci->quirks |= XHCI_RESET_ON_RESUME; 96 96 xhci_dbg(xhci, "QUIRK: Resetting on resume\n"); 97 97 } 98 + if (pdev->vendor == PCI_VENDOR_ID_VIA) 99 + xhci->quirks |= XHCI_RESET_ON_RESUME; 98 100 } 99 101 100 102 /* called during probe() after chip reset completes */ ··· 328 326 return pci_register_driver(&xhci_pci_driver); 329 327 } 330 328 331 - void __exit xhci_unregister_pci(void) 329 + void xhci_unregister_pci(void) 332 330 { 333 331 pci_unregister_driver(&xhci_pci_driver); 334 332 }
+3 -3
drivers/usb/host/xhci-ring.c
··· 2417 2417 u32 irq_pending; 2418 2418 /* Acknowledge the PCI interrupt */ 2419 2419 irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending); 2420 - irq_pending |= 0x3; 2420 + irq_pending |= IMAN_IP; 2421 2421 xhci_writel(xhci, irq_pending, &xhci->ir_set->irq_pending); 2422 2422 } 2423 2423 ··· 2734 2734 urb->dev->speed == USB_SPEED_FULL) 2735 2735 urb->interval /= 8; 2736 2736 } 2737 - return xhci_queue_bulk_tx(xhci, GFP_ATOMIC, urb, slot_id, ep_index); 2737 + return xhci_queue_bulk_tx(xhci, mem_flags, urb, slot_id, ep_index); 2738 2738 } 2739 2739 2740 2740 /* ··· 3514 3514 } 3515 3515 ep_ring->num_trbs_free_temp = ep_ring->num_trbs_free; 3516 3516 3517 - return xhci_queue_isoc_tx(xhci, GFP_ATOMIC, urb, slot_id, ep_index); 3517 + return xhci_queue_isoc_tx(xhci, mem_flags, urb, slot_id, ep_index); 3518 3518 } 3519 3519 3520 3520 /**** Command Ring Operations ****/
+8 -4
drivers/usb/host/xhci.c
··· 106 106 STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC); 107 107 if (!ret) 108 108 xhci->xhc_state |= XHCI_STATE_HALTED; 109 + else 110 + xhci_warn(xhci, "Host not halted after %u microseconds.\n", 111 + XHCI_MAX_HALT_USEC); 109 112 return ret; 110 113 } 111 114 ··· 667 664 xhci->s3.dev_nt = xhci_readl(xhci, &xhci->op_regs->dev_notification); 668 665 xhci->s3.dcbaa_ptr = xhci_read_64(xhci, &xhci->op_regs->dcbaa_ptr); 669 666 xhci->s3.config_reg = xhci_readl(xhci, &xhci->op_regs->config_reg); 670 - xhci->s3.irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending); 671 - xhci->s3.irq_control = xhci_readl(xhci, &xhci->ir_set->irq_control); 672 667 xhci->s3.erst_size = xhci_readl(xhci, &xhci->ir_set->erst_size); 673 668 xhci->s3.erst_base = xhci_read_64(xhci, &xhci->ir_set->erst_base); 674 669 xhci->s3.erst_dequeue = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue); 670 + xhci->s3.irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending); 671 + xhci->s3.irq_control = xhci_readl(xhci, &xhci->ir_set->irq_control); 675 672 } 676 673 677 674 static void xhci_restore_registers(struct xhci_hcd *xhci) ··· 680 677 xhci_writel(xhci, xhci->s3.dev_nt, &xhci->op_regs->dev_notification); 681 678 xhci_write_64(xhci, xhci->s3.dcbaa_ptr, &xhci->op_regs->dcbaa_ptr); 682 679 xhci_writel(xhci, xhci->s3.config_reg, &xhci->op_regs->config_reg); 683 - xhci_writel(xhci, xhci->s3.irq_pending, &xhci->ir_set->irq_pending); 684 - xhci_writel(xhci, xhci->s3.irq_control, &xhci->ir_set->irq_control); 685 680 xhci_writel(xhci, xhci->s3.erst_size, &xhci->ir_set->erst_size); 686 681 xhci_write_64(xhci, xhci->s3.erst_base, &xhci->ir_set->erst_base); 682 + xhci_write_64(xhci, xhci->s3.erst_dequeue, &xhci->ir_set->erst_dequeue); 683 + xhci_writel(xhci, xhci->s3.irq_pending, &xhci->ir_set->irq_pending); 684 + xhci_writel(xhci, xhci->s3.irq_control, &xhci->ir_set->irq_control); 687 685 } 688 686 689 687 static void xhci_set_cmd_ring_deq(struct xhci_hcd *xhci)
+4
drivers/usb/host/xhci.h
··· 205 205 #define CMD_PM_INDEX (1 << 11) 206 206 /* bits 12:31 are reserved (and should be preserved on writes). */ 207 207 208 + /* IMAN - Interrupt Management Register */ 209 + #define IMAN_IP (1 << 1) 210 + #define IMAN_IE (1 << 0) 211 + 208 212 /* USBSTS - USB status - status bitmasks */ 209 213 /* HC not running - set to 1 when run/stop bit is cleared. */ 210 214 #define STS_HALT XHCI_STS_HALT
-5
drivers/usb/serial/bus.c
··· 60 60 retval = -ENODEV; 61 61 goto exit; 62 62 } 63 - if (port->dev_state != PORT_REGISTERING) 64 - goto exit; 65 63 66 64 driver = port->serial->type; 67 65 if (driver->port_probe) { ··· 95 97 port = to_usb_serial_port(dev); 96 98 if (!port) 97 99 return -ENODEV; 98 - 99 - if (port->dev_state != PORT_UNREGISTERING) 100 - return retval; 101 100 102 101 device_remove_file(&port->dev, &dev_attr_port_number); 103 102
+20 -16
drivers/usb/serial/ftdi_sio.c
··· 75 75 unsigned long last_dtr_rts; /* saved modem control outputs */ 76 76 struct async_icount icount; 77 77 wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */ 78 - char prev_status, diff_status; /* Used for TIOCMIWAIT */ 78 + char prev_status; /* Used for TIOCMIWAIT */ 79 + bool dev_gone; /* Used to abort TIOCMIWAIT */ 79 80 char transmit_empty; /* If transmitter is empty or not */ 80 81 struct usb_serial_port *port; 81 82 __u16 interface; /* FT2232C, FT2232H or FT4232H port interface ··· 1682 1681 init_waitqueue_head(&priv->delta_msr_wait); 1683 1682 1684 1683 priv->flags = ASYNC_LOW_LATENCY; 1684 + priv->dev_gone = false; 1685 1685 1686 1686 if (quirk && quirk->port_probe) 1687 1687 quirk->port_probe(priv); ··· 1841 1839 1842 1840 dbg("%s", __func__); 1843 1841 1842 + priv->dev_gone = true; 1843 + wake_up_interruptible_all(&priv->delta_msr_wait); 1844 + 1844 1845 remove_sysfs_attrs(port); 1845 1846 1846 1847 kref_put(&priv->kref, ftdi_sio_priv_release); ··· 1987 1982 N.B. packet may be processed more than once, but differences 1988 1983 are only processed once. 
*/ 1989 1984 status = packet[0] & FTDI_STATUS_B0_MASK; 1990 - if (status & FTDI_RS0_CTS) 1991 - priv->icount.cts++; 1992 - if (status & FTDI_RS0_DSR) 1993 - priv->icount.dsr++; 1994 - if (status & FTDI_RS0_RI) 1995 - priv->icount.rng++; 1996 - if (status & FTDI_RS0_RLSD) 1997 - priv->icount.dcd++; 1998 1985 if (status != priv->prev_status) { 1999 - priv->diff_status |= status ^ priv->prev_status; 2000 - wake_up_interruptible(&priv->delta_msr_wait); 1986 + char diff_status = status ^ priv->prev_status; 1987 + 1988 + if (diff_status & FTDI_RS0_CTS) 1989 + priv->icount.cts++; 1990 + if (diff_status & FTDI_RS0_DSR) 1991 + priv->icount.dsr++; 1992 + if (diff_status & FTDI_RS0_RI) 1993 + priv->icount.rng++; 1994 + if (diff_status & FTDI_RS0_RLSD) 1995 + priv->icount.dcd++; 1996 + 1997 + wake_up_interruptible_all(&priv->delta_msr_wait); 2001 1998 priv->prev_status = status; 2002 1999 } 2003 2000 ··· 2402 2395 */ 2403 2396 case TIOCMIWAIT: 2404 2397 cprev = priv->icount; 2405 - while (1) { 2398 + while (!priv->dev_gone) { 2406 2399 interruptible_sleep_on(&priv->delta_msr_wait); 2407 2400 /* see if a signal did it */ 2408 2401 if (signal_pending(current)) 2409 2402 return -ERESTARTSYS; 2410 2403 cnow = priv->icount; 2411 - if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr && 2412 - cnow.dcd == cprev.dcd && cnow.cts == cprev.cts) 2413 - return -EIO; /* no change => error */ 2414 2404 if (((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) || 2415 2405 ((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) || 2416 2406 ((arg & TIOCM_CD) && (cnow.dcd != cprev.dcd)) || ··· 2416 2412 } 2417 2413 cprev = cnow; 2418 2414 } 2419 - /* not reached */ 2415 + return -EIO; 2420 2416 break; 2421 2417 case TIOCSERGETLSR: 2422 2418 return get_lsr_info(port, (struct serial_struct __user *)arg);
+3 -3
drivers/usb/serial/metro-usb.c
··· 27 27 28 28 /* Product information. */ 29 29 #define FOCUS_VENDOR_ID 0x0C2E 30 - #define FOCUS_PRODUCT_ID 0x0720 31 - #define FOCUS_PRODUCT_ID_UNI 0x0710 30 + #define FOCUS_PRODUCT_ID_BI 0x0720 31 + #define FOCUS_PRODUCT_ID_UNI 0x0700 32 32 33 33 #define METROUSB_SET_REQUEST_TYPE 0x40 34 34 #define METROUSB_SET_MODEM_CTRL_REQUEST 10 ··· 47 47 48 48 /* Device table list. */ 49 49 static struct usb_device_id id_table[] = { 50 - { USB_DEVICE(FOCUS_VENDOR_ID, FOCUS_PRODUCT_ID) }, 50 + { USB_DEVICE(FOCUS_VENDOR_ID, FOCUS_PRODUCT_ID_BI) }, 51 51 { USB_DEVICE(FOCUS_VENDOR_ID, FOCUS_PRODUCT_ID_UNI) }, 52 52 { }, /* Terminating entry. */ 53 53 };
+1
drivers/usb/serial/option.c
··· 708 708 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_EVDO_EMBEDDED_FULLSPEED) }, 709 709 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_HSPA_EMBEDDED_FULLSPEED) }, 710 710 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_EVDO_HIGHSPEED) }, 711 + { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_HSPA_HIGHSPEED) }, 711 712 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_HSPA_HIGHSPEED3) }, 712 713 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_HSPA_HIGHSPEED4) }, 713 714 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_HSPA_HIGHSPEED5) },
+1 -1
drivers/usb/serial/pl2303.c
··· 420 420 control = priv->line_control; 421 421 if ((cflag & CBAUD) == B0) 422 422 priv->line_control &= ~(CONTROL_DTR | CONTROL_RTS); 423 - else 423 + else if ((old_termios->c_cflag & CBAUD) == B0) 424 424 priv->line_control |= (CONTROL_DTR | CONTROL_RTS); 425 425 if (control != priv->line_control) { 426 426 control = priv->line_control;
+1
drivers/usb/serial/sierra.c
··· 289 289 { USB_DEVICE(0x1199, 0x6856) }, /* Sierra Wireless AirCard 881 U */ 290 290 { USB_DEVICE(0x1199, 0x6859) }, /* Sierra Wireless AirCard 885 E */ 291 291 { USB_DEVICE(0x1199, 0x685A) }, /* Sierra Wireless AirCard 885 E */ 292 + { USB_DEVICE(0x1199, 0x68A2) }, /* Sierra Wireless MC7710 */ 292 293 /* Sierra Wireless C885 */ 293 294 { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x6880, 0xFF, 0xFF, 0xFF)}, 294 295 /* Sierra Wireless C888, Air Card 501, USB 303, USB 304 */
+10 -21
drivers/usb/serial/usb-serial.c
··· 1059 1059 serial->attached = 1; 1060 1060 } 1061 1061 1062 + /* Avoid race with tty_open and serial_install by setting the 1063 + * disconnected flag and not clearing it until all ports have been 1064 + * registered. 1065 + */ 1066 + serial->disconnected = 1; 1067 + 1062 1068 if (get_free_serial(serial, num_ports, &minor) == NULL) { 1063 1069 dev_err(&interface->dev, "No more free serial devices\n"); 1064 1070 goto probe_error; ··· 1076 1070 port = serial->port[i]; 1077 1071 dev_set_name(&port->dev, "ttyUSB%d", port->number); 1078 1072 dbg ("%s - registering %s", __func__, dev_name(&port->dev)); 1079 - port->dev_state = PORT_REGISTERING; 1080 1073 device_enable_async_suspend(&port->dev); 1081 1074 1082 1075 retval = device_add(&port->dev); 1083 - if (retval) { 1076 + if (retval) 1084 1077 dev_err(&port->dev, "Error registering port device, " 1085 1078 "continuing\n"); 1086 - port->dev_state = PORT_UNREGISTERED; 1087 - } else { 1088 - port->dev_state = PORT_REGISTERED; 1089 - } 1090 1079 } 1080 + 1081 + serial->disconnected = 0; 1091 1082 1092 1083 usb_serial_console_init(debug, minor); 1093 1084 ··· 1127 1124 } 1128 1125 kill_traffic(port); 1129 1126 cancel_work_sync(&port->work); 1130 - if (port->dev_state == PORT_REGISTERED) { 1131 - 1132 - /* Make sure the port is bound so that the 1133 - * driver's port_remove method is called. 1134 - */ 1135 - if (!port->dev.driver) { 1136 - int rc; 1137 - 1138 - port->dev.driver = 1139 - &serial->type->driver; 1140 - rc = device_bind_driver(&port->dev); 1141 - } 1142 - port->dev_state = PORT_UNREGISTERING; 1127 + if (device_is_registered(&port->dev)) 1143 1128 device_del(&port->dev); 1144 - port->dev_state = PORT_UNREGISTERED; 1145 - } 1146 1129 } 1147 1130 } 1148 1131 serial->type->disconnect(serial);
+30
drivers/usb/storage/usb.c
··· 132 132 #undef COMPLIANT_DEV 133 133 #undef USUAL_DEV 134 134 135 + #ifdef CONFIG_LOCKDEP 136 + 137 + static struct lock_class_key us_interface_key[USB_MAXINTERFACES]; 138 + 139 + static void us_set_lock_class(struct mutex *mutex, 140 + struct usb_interface *intf) 141 + { 142 + struct usb_device *udev = interface_to_usbdev(intf); 143 + struct usb_host_config *config = udev->actconfig; 144 + int i; 145 + 146 + for (i = 0; i < config->desc.bNumInterfaces; i++) { 147 + if (config->interface[i] == intf) 148 + break; 149 + } 150 + 151 + BUG_ON(i == config->desc.bNumInterfaces); 152 + 153 + lockdep_set_class(mutex, &us_interface_key[i]); 154 + } 155 + 156 + #else 157 + 158 + static void us_set_lock_class(struct mutex *mutex, 159 + struct usb_interface *intf) 160 + { 161 + } 162 + 163 + #endif 135 164 136 165 #ifdef CONFIG_PM /* Minimal support for suspend and resume */ 137 166 ··· 924 895 *pus = us = host_to_us(host); 925 896 memset(us, 0, sizeof(struct us_data)); 926 897 mutex_init(&(us->dev_mutex)); 898 + us_set_lock_class(&us->dev_mutex, intf); 927 899 init_completion(&us->cmnd_ready); 928 900 init_completion(&(us->notify)); 929 901 init_waitqueue_head(&us->delay_wait);
+3 -2
drivers/video/au1100fb.c
··· 499 499 au1100fb_fix.mmio_start = regs_res->start; 500 500 au1100fb_fix.mmio_len = resource_size(regs_res); 501 501 502 - if (!devm_request_mem_region(au1100fb_fix.mmio_start, 502 + if (!devm_request_mem_region(&dev->dev, 503 + au1100fb_fix.mmio_start, 503 504 au1100fb_fix.mmio_len, 504 505 DRIVER_NAME)) { 505 506 print_err("fail to lock memory region at 0x%08lx", ··· 517 516 fbdev->fb_len = fbdev->panel->xres * fbdev->panel->yres * 518 517 (fbdev->panel->bpp >> 3) * AU1100FB_NBR_VIDEO_BUFFERS; 519 518 520 - fbdev->fb_mem = dmam_alloc_coherent(&dev->dev, &dev->dev, 519 + fbdev->fb_mem = dmam_alloc_coherent(&dev->dev, 521 520 PAGE_ALIGN(fbdev->fb_len), 522 521 &fbdev->fb_phys, GFP_KERNEL); 523 522 if (!fbdev->fb_mem) {
+1 -1
drivers/video/au1200fb.c
··· 1724 1724 /* Allocate the framebuffer to the maximum screen size */ 1725 1725 fbdev->fb_len = (win->w[plane].xres * win->w[plane].yres * bpp) / 8; 1726 1726 1727 - fbdev->fb_mem = dmam_alloc_noncoherent(&dev->dev, &dev->dev, 1727 + fbdev->fb_mem = dmam_alloc_noncoherent(&dev->dev, 1728 1728 PAGE_ALIGN(fbdev->fb_len), 1729 1729 &fbdev->fb_phys, GFP_KERNEL); 1730 1730 if (!fbdev->fb_mem) {
+174 -174
drivers/video/kyro/STG4000Reg.h
··· 73 73 /* Register Table */ 74 74 typedef struct { 75 75 /* 0h */ 76 - volatile unsigned long Thread0Enable; /* 0x0000 */ 77 - volatile unsigned long Thread1Enable; /* 0x0004 */ 78 - volatile unsigned long Thread0Recover; /* 0x0008 */ 79 - volatile unsigned long Thread1Recover; /* 0x000C */ 80 - volatile unsigned long Thread0Step; /* 0x0010 */ 81 - volatile unsigned long Thread1Step; /* 0x0014 */ 82 - volatile unsigned long VideoInStatus; /* 0x0018 */ 83 - volatile unsigned long Core2InSignStart; /* 0x001C */ 84 - volatile unsigned long Core1ResetVector; /* 0x0020 */ 85 - volatile unsigned long Core1ROMOffset; /* 0x0024 */ 86 - volatile unsigned long Core1ArbiterPriority; /* 0x0028 */ 87 - volatile unsigned long VideoInControl; /* 0x002C */ 88 - volatile unsigned long VideoInReg0CtrlA; /* 0x0030 */ 89 - volatile unsigned long VideoInReg0CtrlB; /* 0x0034 */ 90 - volatile unsigned long VideoInReg1CtrlA; /* 0x0038 */ 91 - volatile unsigned long VideoInReg1CtrlB; /* 0x003C */ 92 - volatile unsigned long Thread0Kicker; /* 0x0040 */ 93 - volatile unsigned long Core2InputSign; /* 0x0044 */ 94 - volatile unsigned long Thread0ProgCtr; /* 0x0048 */ 95 - volatile unsigned long Thread1ProgCtr; /* 0x004C */ 96 - volatile unsigned long Thread1Kicker; /* 0x0050 */ 97 - volatile unsigned long GPRegister1; /* 0x0054 */ 98 - volatile unsigned long GPRegister2; /* 0x0058 */ 99 - volatile unsigned long GPRegister3; /* 0x005C */ 100 - volatile unsigned long GPRegister4; /* 0x0060 */ 101 - volatile unsigned long SerialIntA; /* 0x0064 */ 76 + volatile u32 Thread0Enable; /* 0x0000 */ 77 + volatile u32 Thread1Enable; /* 0x0004 */ 78 + volatile u32 Thread0Recover; /* 0x0008 */ 79 + volatile u32 Thread1Recover; /* 0x000C */ 80 + volatile u32 Thread0Step; /* 0x0010 */ 81 + volatile u32 Thread1Step; /* 0x0014 */ 82 + volatile u32 VideoInStatus; /* 0x0018 */ 83 + volatile u32 Core2InSignStart; /* 0x001C */ 84 + volatile u32 Core1ResetVector; /* 0x0020 */ 85 + volatile u32 Core1ROMOffset; /* 
0x0024 */ 86 + volatile u32 Core1ArbiterPriority; /* 0x0028 */ 87 + volatile u32 VideoInControl; /* 0x002C */ 88 + volatile u32 VideoInReg0CtrlA; /* 0x0030 */ 89 + volatile u32 VideoInReg0CtrlB; /* 0x0034 */ 90 + volatile u32 VideoInReg1CtrlA; /* 0x0038 */ 91 + volatile u32 VideoInReg1CtrlB; /* 0x003C */ 92 + volatile u32 Thread0Kicker; /* 0x0040 */ 93 + volatile u32 Core2InputSign; /* 0x0044 */ 94 + volatile u32 Thread0ProgCtr; /* 0x0048 */ 95 + volatile u32 Thread1ProgCtr; /* 0x004C */ 96 + volatile u32 Thread1Kicker; /* 0x0050 */ 97 + volatile u32 GPRegister1; /* 0x0054 */ 98 + volatile u32 GPRegister2; /* 0x0058 */ 99 + volatile u32 GPRegister3; /* 0x005C */ 100 + volatile u32 GPRegister4; /* 0x0060 */ 101 + volatile u32 SerialIntA; /* 0x0064 */ 102 102 103 - volatile unsigned long Fill0[6]; /* GAP 0x0068 - 0x007C */ 103 + volatile u32 Fill0[6]; /* GAP 0x0068 - 0x007C */ 104 104 105 - volatile unsigned long SoftwareReset; /* 0x0080 */ 106 - volatile unsigned long SerialIntB; /* 0x0084 */ 105 + volatile u32 SoftwareReset; /* 0x0080 */ 106 + volatile u32 SerialIntB; /* 0x0084 */ 107 107 108 - volatile unsigned long Fill1[37]; /* GAP 0x0088 - 0x011C */ 108 + volatile u32 Fill1[37]; /* GAP 0x0088 - 0x011C */ 109 109 110 - volatile unsigned long ROMELQV; /* 0x011C */ 111 - volatile unsigned long WLWH; /* 0x0120 */ 112 - volatile unsigned long ROMELWL; /* 0x0124 */ 110 + volatile u32 ROMELQV; /* 0x011C */ 111 + volatile u32 WLWH; /* 0x0120 */ 112 + volatile u32 ROMELWL; /* 0x0124 */ 113 113 114 - volatile unsigned long dwFill_1; /* GAP 0x0128 */ 114 + volatile u32 dwFill_1; /* GAP 0x0128 */ 115 115 116 - volatile unsigned long IntStatus; /* 0x012C */ 117 - volatile unsigned long IntMask; /* 0x0130 */ 118 - volatile unsigned long IntClear; /* 0x0134 */ 116 + volatile u32 IntStatus; /* 0x012C */ 117 + volatile u32 IntMask; /* 0x0130 */ 118 + volatile u32 IntClear; /* 0x0134 */ 119 119 120 - volatile unsigned long Fill2[6]; /* GAP 0x0138 - 0x014C */ 120 + volatile u32 
Fill2[6]; /* GAP 0x0138 - 0x014C */ 121 121 122 - volatile unsigned long ROMGPIOA; /* 0x0150 */ 123 - volatile unsigned long ROMGPIOB; /* 0x0154 */ 124 - volatile unsigned long ROMGPIOC; /* 0x0158 */ 125 - volatile unsigned long ROMGPIOD; /* 0x015C */ 122 + volatile u32 ROMGPIOA; /* 0x0150 */ 123 + volatile u32 ROMGPIOB; /* 0x0154 */ 124 + volatile u32 ROMGPIOC; /* 0x0158 */ 125 + volatile u32 ROMGPIOD; /* 0x015C */ 126 126 127 - volatile unsigned long Fill3[2]; /* GAP 0x0160 - 0x0168 */ 127 + volatile u32 Fill3[2]; /* GAP 0x0160 - 0x0168 */ 128 128 129 - volatile unsigned long AGPIntID; /* 0x0168 */ 130 - volatile unsigned long AGPIntClassCode; /* 0x016C */ 131 - volatile unsigned long AGPIntBIST; /* 0x0170 */ 132 - volatile unsigned long AGPIntSSID; /* 0x0174 */ 133 - volatile unsigned long AGPIntPMCSR; /* 0x0178 */ 134 - volatile unsigned long VGAFrameBufBase; /* 0x017C */ 135 - volatile unsigned long VGANotify; /* 0x0180 */ 136 - volatile unsigned long DACPLLMode; /* 0x0184 */ 137 - volatile unsigned long Core1VideoClockDiv; /* 0x0188 */ 138 - volatile unsigned long AGPIntStat; /* 0x018C */ 139 - 140 - /* 141 - volatile unsigned long Fill4[0x0400/4 - 0x0190/4]; //GAP 0x0190 - 0x0400 142 - volatile unsigned long Fill5[0x05FC/4 - 0x0400/4]; //GAP 0x0400 - 0x05FC Fog Table 143 - volatile unsigned long Fill6[0x0604/4 - 0x0600/4]; //GAP 0x0600 - 0x0604 144 - volatile unsigned long Fill7[0x0680/4 - 0x0608/4]; //GAP 0x0608 - 0x0680 145 - volatile unsigned long Fill8[0x07FC/4 - 0x0684/4]; //GAP 0x0684 - 0x07FC 146 - */ 147 - volatile unsigned long Fill4[412]; /* 0x0190 - 0x07FC */ 148 - 149 - volatile unsigned long TACtrlStreamBase; /* 0x0800 */ 150 - volatile unsigned long TAObjDataBase; /* 0x0804 */ 151 - volatile unsigned long TAPtrDataBase; /* 0x0808 */ 152 - volatile unsigned long TARegionDataBase; /* 0x080C */ 153 - volatile unsigned long TATailPtrBase; /* 0x0810 */ 154 - volatile unsigned long TAPtrRegionSize; /* 0x0814 */ 155 - volatile unsigned long 
TAConfiguration; /* 0x0818 */ 156 - volatile unsigned long TAObjDataStartAddr; /* 0x081C */ 157 - volatile unsigned long TAObjDataEndAddr; /* 0x0820 */ 158 - volatile unsigned long TAXScreenClip; /* 0x0824 */ 159 - volatile unsigned long TAYScreenClip; /* 0x0828 */ 160 - volatile unsigned long TARHWClamp; /* 0x082C */ 161 - volatile unsigned long TARHWCompare; /* 0x0830 */ 162 - volatile unsigned long TAStart; /* 0x0834 */ 163 - volatile unsigned long TAObjReStart; /* 0x0838 */ 164 - volatile unsigned long TAPtrReStart; /* 0x083C */ 165 - volatile unsigned long TAStatus1; /* 0x0840 */ 166 - volatile unsigned long TAStatus2; /* 0x0844 */ 167 - volatile unsigned long TAIntStatus; /* 0x0848 */ 168 - volatile unsigned long TAIntMask; /* 0x084C */ 169 - 170 - volatile unsigned long Fill5[235]; /* GAP 0x0850 - 0x0BF8 */ 171 - 172 - volatile unsigned long TextureAddrThresh; /* 0x0BFC */ 173 - volatile unsigned long Core1Translation; /* 0x0C00 */ 174 - volatile unsigned long TextureAddrReMap; /* 0x0C04 */ 175 - volatile unsigned long RenderOutAGPRemap; /* 0x0C08 */ 176 - volatile unsigned long _3DRegionReadTrans; /* 0x0C0C */ 177 - volatile unsigned long _3DPtrReadTrans; /* 0x0C10 */ 178 - volatile unsigned long _3DParamReadTrans; /* 0x0C14 */ 179 - volatile unsigned long _3DRegionReadThresh; /* 0x0C18 */ 180 - volatile unsigned long _3DPtrReadThresh; /* 0x0C1C */ 181 - volatile unsigned long _3DParamReadThresh; /* 0x0C20 */ 182 - volatile unsigned long _3DRegionReadAGPRemap; /* 0x0C24 */ 183 - volatile unsigned long _3DPtrReadAGPRemap; /* 0x0C28 */ 184 - volatile unsigned long _3DParamReadAGPRemap; /* 0x0C2C */ 185 - volatile unsigned long ZBufferAGPRemap; /* 0x0C30 */ 186 - volatile unsigned long TAIndexAGPRemap; /* 0x0C34 */ 187 - volatile unsigned long TAVertexAGPRemap; /* 0x0C38 */ 188 - volatile unsigned long TAUVAddrTrans; /* 0x0C3C */ 189 - volatile unsigned long TATailPtrCacheTrans; /* 0x0C40 */ 190 - volatile unsigned long TAParamWriteTrans; /* 0x0C44 */ 191 - 
volatile unsigned long TAPtrWriteTrans; /* 0x0C48 */ 192 - volatile unsigned long TAParamWriteThresh; /* 0x0C4C */ 193 - volatile unsigned long TAPtrWriteThresh; /* 0x0C50 */ 194 - volatile unsigned long TATailPtrCacheAGPRe; /* 0x0C54 */ 195 - volatile unsigned long TAParamWriteAGPRe; /* 0x0C58 */ 196 - volatile unsigned long TAPtrWriteAGPRe; /* 0x0C5C */ 197 - volatile unsigned long SDRAMArbiterConf; /* 0x0C60 */ 198 - volatile unsigned long SDRAMConf0; /* 0x0C64 */ 199 - volatile unsigned long SDRAMConf1; /* 0x0C68 */ 200 - volatile unsigned long SDRAMConf2; /* 0x0C6C */ 201 - volatile unsigned long SDRAMRefresh; /* 0x0C70 */ 202 - volatile unsigned long SDRAMPowerStat; /* 0x0C74 */ 203 - 204 - volatile unsigned long Fill6[2]; /* GAP 0x0C78 - 0x0C7C */ 205 - 206 - volatile unsigned long RAMBistData; /* 0x0C80 */ 207 - volatile unsigned long RAMBistCtrl; /* 0x0C84 */ 208 - volatile unsigned long FIFOBistKey; /* 0x0C88 */ 209 - volatile unsigned long RAMBistResult; /* 0x0C8C */ 210 - volatile unsigned long FIFOBistResult; /* 0x0C90 */ 129 + volatile u32 AGPIntID; /* 0x0168 */ 130 + volatile u32 AGPIntClassCode; /* 0x016C */ 131 + volatile u32 AGPIntBIST; /* 0x0170 */ 132 + volatile u32 AGPIntSSID; /* 0x0174 */ 133 + volatile u32 AGPIntPMCSR; /* 0x0178 */ 134 + volatile u32 VGAFrameBufBase; /* 0x017C */ 135 + volatile u32 VGANotify; /* 0x0180 */ 136 + volatile u32 DACPLLMode; /* 0x0184 */ 137 + volatile u32 Core1VideoClockDiv; /* 0x0188 */ 138 + volatile u32 AGPIntStat; /* 0x018C */ 211 139 212 140 /* 213 - volatile unsigned long Fill11[0x0CBC/4 - 0x0C94/4]; //GAP 0x0C94 - 0x0CBC 214 - volatile unsigned long Fill12[0x0CD0/4 - 0x0CC0/4]; //GAP 0x0CC0 - 0x0CD0 3DRegisters 141 + volatile u32 Fill4[0x0400/4 - 0x0190/4]; //GAP 0x0190 - 0x0400 142 + volatile u32 Fill5[0x05FC/4 - 0x0400/4]; //GAP 0x0400 - 0x05FC Fog Table 143 + volatile u32 Fill6[0x0604/4 - 0x0600/4]; //GAP 0x0600 - 0x0604 144 + volatile u32 Fill7[0x0680/4 - 0x0608/4]; //GAP 0x0608 - 0x0680 145 + volatile 
u32 Fill8[0x07FC/4 - 0x0684/4]; //GAP 0x0684 - 0x07FC 146 + */ 147 + volatile u32 Fill4[412]; /* 0x0190 - 0x07FC */ 148 + 149 + volatile u32 TACtrlStreamBase; /* 0x0800 */ 150 + volatile u32 TAObjDataBase; /* 0x0804 */ 151 + volatile u32 TAPtrDataBase; /* 0x0808 */ 152 + volatile u32 TARegionDataBase; /* 0x080C */ 153 + volatile u32 TATailPtrBase; /* 0x0810 */ 154 + volatile u32 TAPtrRegionSize; /* 0x0814 */ 155 + volatile u32 TAConfiguration; /* 0x0818 */ 156 + volatile u32 TAObjDataStartAddr; /* 0x081C */ 157 + volatile u32 TAObjDataEndAddr; /* 0x0820 */ 158 + volatile u32 TAXScreenClip; /* 0x0824 */ 159 + volatile u32 TAYScreenClip; /* 0x0828 */ 160 + volatile u32 TARHWClamp; /* 0x082C */ 161 + volatile u32 TARHWCompare; /* 0x0830 */ 162 + volatile u32 TAStart; /* 0x0834 */ 163 + volatile u32 TAObjReStart; /* 0x0838 */ 164 + volatile u32 TAPtrReStart; /* 0x083C */ 165 + volatile u32 TAStatus1; /* 0x0840 */ 166 + volatile u32 TAStatus2; /* 0x0844 */ 167 + volatile u32 TAIntStatus; /* 0x0848 */ 168 + volatile u32 TAIntMask; /* 0x084C */ 169 + 170 + volatile u32 Fill5[235]; /* GAP 0x0850 - 0x0BF8 */ 171 + 172 + volatile u32 TextureAddrThresh; /* 0x0BFC */ 173 + volatile u32 Core1Translation; /* 0x0C00 */ 174 + volatile u32 TextureAddrReMap; /* 0x0C04 */ 175 + volatile u32 RenderOutAGPRemap; /* 0x0C08 */ 176 + volatile u32 _3DRegionReadTrans; /* 0x0C0C */ 177 + volatile u32 _3DPtrReadTrans; /* 0x0C10 */ 178 + volatile u32 _3DParamReadTrans; /* 0x0C14 */ 179 + volatile u32 _3DRegionReadThresh; /* 0x0C18 */ 180 + volatile u32 _3DPtrReadThresh; /* 0x0C1C */ 181 + volatile u32 _3DParamReadThresh; /* 0x0C20 */ 182 + volatile u32 _3DRegionReadAGPRemap; /* 0x0C24 */ 183 + volatile u32 _3DPtrReadAGPRemap; /* 0x0C28 */ 184 + volatile u32 _3DParamReadAGPRemap; /* 0x0C2C */ 185 + volatile u32 ZBufferAGPRemap; /* 0x0C30 */ 186 + volatile u32 TAIndexAGPRemap; /* 0x0C34 */ 187 + volatile u32 TAVertexAGPRemap; /* 0x0C38 */ 188 + volatile u32 TAUVAddrTrans; /* 0x0C3C */ 189 + 
volatile u32 TATailPtrCacheTrans; /* 0x0C40 */ 190 + volatile u32 TAParamWriteTrans; /* 0x0C44 */ 191 + volatile u32 TAPtrWriteTrans; /* 0x0C48 */ 192 + volatile u32 TAParamWriteThresh; /* 0x0C4C */ 193 + volatile u32 TAPtrWriteThresh; /* 0x0C50 */ 194 + volatile u32 TATailPtrCacheAGPRe; /* 0x0C54 */ 195 + volatile u32 TAParamWriteAGPRe; /* 0x0C58 */ 196 + volatile u32 TAPtrWriteAGPRe; /* 0x0C5C */ 197 + volatile u32 SDRAMArbiterConf; /* 0x0C60 */ 198 + volatile u32 SDRAMConf0; /* 0x0C64 */ 199 + volatile u32 SDRAMConf1; /* 0x0C68 */ 200 + volatile u32 SDRAMConf2; /* 0x0C6C */ 201 + volatile u32 SDRAMRefresh; /* 0x0C70 */ 202 + volatile u32 SDRAMPowerStat; /* 0x0C74 */ 203 + 204 + volatile u32 Fill6[2]; /* GAP 0x0C78 - 0x0C7C */ 205 + 206 + volatile u32 RAMBistData; /* 0x0C80 */ 207 + volatile u32 RAMBistCtrl; /* 0x0C84 */ 208 + volatile u32 FIFOBistKey; /* 0x0C88 */ 209 + volatile u32 RAMBistResult; /* 0x0C8C */ 210 + volatile u32 FIFOBistResult; /* 0x0C90 */ 211 + 212 + /* 213 + volatile u32 Fill11[0x0CBC/4 - 0x0C94/4]; //GAP 0x0C94 - 0x0CBC 214 + volatile u32 Fill12[0x0CD0/4 - 0x0CC0/4]; //GAP 0x0CC0 - 0x0CD0 3DRegisters 215 215 */ 216 216 217 - volatile unsigned long Fill7[16]; /* 0x0c94 - 0x0cd0 */ 217 + volatile u32 Fill7[16]; /* 0x0c94 - 0x0cd0 */ 218 218 219 - volatile unsigned long SDRAMAddrSign; /* 0x0CD4 */ 220 - volatile unsigned long SDRAMDataSign; /* 0x0CD8 */ 221 - volatile unsigned long SDRAMSignConf; /* 0x0CDC */ 219 + volatile u32 SDRAMAddrSign; /* 0x0CD4 */ 220 + volatile u32 SDRAMDataSign; /* 0x0CD8 */ 221 + volatile u32 SDRAMSignConf; /* 0x0CDC */ 222 222 223 223 /* DWFILL; //GAP 0x0CE0 */ 224 - volatile unsigned long dwFill_2; 224 + volatile u32 dwFill_2; 225 225 226 - volatile unsigned long ISPSignature; /* 0x0CE4 */ 226 + volatile u32 ISPSignature; /* 0x0CE4 */ 227 227 228 - volatile unsigned long Fill8[454]; /*GAP 0x0CE8 - 0x13FC */ 228 + volatile u32 Fill8[454]; /*GAP 0x0CE8 - 0x13FC */ 229 229 230 - volatile unsigned long DACPrimAddress; 
/* 0x1400 */ 231 - volatile unsigned long DACPrimSize; /* 0x1404 */ 232 - volatile unsigned long DACCursorAddr; /* 0x1408 */ 233 - volatile unsigned long DACCursorCtrl; /* 0x140C */ 234 - volatile unsigned long DACOverlayAddr; /* 0x1410 */ 235 - volatile unsigned long DACOverlayUAddr; /* 0x1414 */ 236 - volatile unsigned long DACOverlayVAddr; /* 0x1418 */ 237 - volatile unsigned long DACOverlaySize; /* 0x141C */ 238 - volatile unsigned long DACOverlayVtDec; /* 0x1420 */ 230 + volatile u32 DACPrimAddress; /* 0x1400 */ 231 + volatile u32 DACPrimSize; /* 0x1404 */ 232 + volatile u32 DACCursorAddr; /* 0x1408 */ 233 + volatile u32 DACCursorCtrl; /* 0x140C */ 234 + volatile u32 DACOverlayAddr; /* 0x1410 */ 235 + volatile u32 DACOverlayUAddr; /* 0x1414 */ 236 + volatile u32 DACOverlayVAddr; /* 0x1418 */ 237 + volatile u32 DACOverlaySize; /* 0x141C */ 238 + volatile u32 DACOverlayVtDec; /* 0x1420 */ 239 239 240 - volatile unsigned long Fill9[9]; /* GAP 0x1424 - 0x1444 */ 240 + volatile u32 Fill9[9]; /* GAP 0x1424 - 0x1444 */ 241 241 242 - volatile unsigned long DACVerticalScal; /* 0x1448 */ 243 - volatile unsigned long DACPixelFormat; /* 0x144C */ 244 - volatile unsigned long DACHorizontalScal; /* 0x1450 */ 245 - volatile unsigned long DACVidWinStart; /* 0x1454 */ 246 - volatile unsigned long DACVidWinEnd; /* 0x1458 */ 247 - volatile unsigned long DACBlendCtrl; /* 0x145C */ 248 - volatile unsigned long DACHorTim1; /* 0x1460 */ 249 - volatile unsigned long DACHorTim2; /* 0x1464 */ 250 - volatile unsigned long DACHorTim3; /* 0x1468 */ 251 - volatile unsigned long DACVerTim1; /* 0x146C */ 252 - volatile unsigned long DACVerTim2; /* 0x1470 */ 253 - volatile unsigned long DACVerTim3; /* 0x1474 */ 254 - volatile unsigned long DACBorderColor; /* 0x1478 */ 255 - volatile unsigned long DACSyncCtrl; /* 0x147C */ 256 - volatile unsigned long DACStreamCtrl; /* 0x1480 */ 257 - volatile unsigned long DACLUTAddress; /* 0x1484 */ 258 - volatile unsigned long DACLUTData; /* 0x1488 */ 259 - 
volatile unsigned long DACBurstCtrl; /* 0x148C */ 260 - volatile unsigned long DACCrcTrigger; /* 0x1490 */ 261 - volatile unsigned long DACCrcDone; /* 0x1494 */ 262 - volatile unsigned long DACCrcResult1; /* 0x1498 */ 263 - volatile unsigned long DACCrcResult2; /* 0x149C */ 264 - volatile unsigned long DACLinecount; /* 0x14A0 */ 242 + volatile u32 DACVerticalScal; /* 0x1448 */ 243 + volatile u32 DACPixelFormat; /* 0x144C */ 244 + volatile u32 DACHorizontalScal; /* 0x1450 */ 245 + volatile u32 DACVidWinStart; /* 0x1454 */ 246 + volatile u32 DACVidWinEnd; /* 0x1458 */ 247 + volatile u32 DACBlendCtrl; /* 0x145C */ 248 + volatile u32 DACHorTim1; /* 0x1460 */ 249 + volatile u32 DACHorTim2; /* 0x1464 */ 250 + volatile u32 DACHorTim3; /* 0x1468 */ 251 + volatile u32 DACVerTim1; /* 0x146C */ 252 + volatile u32 DACVerTim2; /* 0x1470 */ 253 + volatile u32 DACVerTim3; /* 0x1474 */ 254 + volatile u32 DACBorderColor; /* 0x1478 */ 255 + volatile u32 DACSyncCtrl; /* 0x147C */ 256 + volatile u32 DACStreamCtrl; /* 0x1480 */ 257 + volatile u32 DACLUTAddress; /* 0x1484 */ 258 + volatile u32 DACLUTData; /* 0x1488 */ 259 + volatile u32 DACBurstCtrl; /* 0x148C */ 260 + volatile u32 DACCrcTrigger; /* 0x1490 */ 261 + volatile u32 DACCrcDone; /* 0x1494 */ 262 + volatile u32 DACCrcResult1; /* 0x1498 */ 263 + volatile u32 DACCrcResult2; /* 0x149C */ 264 + volatile u32 DACLinecount; /* 0x14A0 */ 265 265 266 - volatile unsigned long Fill10[151]; /*GAP 0x14A4 - 0x16FC */ 266 + volatile u32 Fill10[151]; /*GAP 0x14A4 - 0x16FC */ 267 267 268 - volatile unsigned long DigVidPortCtrl; /* 0x1700 */ 269 - volatile unsigned long DigVidPortStat; /* 0x1704 */ 268 + volatile u32 DigVidPortCtrl; /* 0x1700 */ 269 + volatile u32 DigVidPortStat; /* 0x1704 */ 270 270 271 271 /* 272 - volatile unsigned long Fill11[0x1FFC/4 - 0x1708/4]; //GAP 0x1708 - 0x1FFC 273 - volatile unsigned long Fill17[0x3000/4 - 0x2FFC/4]; //GAP 0x2000 - 0x2FFC ALUT 272 + volatile u32 Fill11[0x1FFC/4 - 0x1708/4]; //GAP 0x1708 - 0x1FFC 
273 + volatile u32 Fill17[0x3000/4 - 0x2FFC/4]; //GAP 0x2000 - 0x2FFC ALUT 274 274 */ 275 275 276 - volatile unsigned long Fill11[1598]; 276 + volatile u32 Fill11[1598]; 277 277 278 278 /* DWFILL; //GAP 0x3000 ALUT 256MB offset */ 279 - volatile unsigned long Fill_3; 279 + volatile u32 Fill_3; 280 280 281 281 } STG4000REG; 282 282
+4 -4
drivers/video/msm/mddi.c
··· 420 420 mddi_set_auto_hibernate(&mddi->client_data, 1); 421 421 } 422 422 423 - static int __init mddi_get_client_caps(struct mddi_info *mddi) 423 + static int __devinit mddi_get_client_caps(struct mddi_info *mddi) 424 424 { 425 425 int i, j; 426 426 ··· 622 622 623 623 static struct mddi_info mddi_info[2]; 624 624 625 - static int __init mddi_clk_setup(struct platform_device *pdev, 626 - struct mddi_info *mddi, 627 - unsigned long clk_rate) 625 + static int __devinit mddi_clk_setup(struct platform_device *pdev, 626 + struct mddi_info *mddi, 627 + unsigned long clk_rate) 628 628 { 629 629 int ret; 630 630
+9 -2
drivers/video/uvesafb.c
··· 815 815 par->pmi_setpal = pmi_setpal; 816 816 par->ypan = ypan; 817 817 818 - if (par->pmi_setpal || par->ypan) 819 - uvesafb_vbe_getpmi(task, par); 818 + if (par->pmi_setpal || par->ypan) { 819 + if (__supported_pte_mask & _PAGE_NX) { 820 + par->pmi_setpal = par->ypan = 0; 821 + printk(KERN_WARNING "uvesafb: NX protection is active, " 822 + "better not use the PMI.\n"); 823 + } else { 824 + uvesafb_vbe_getpmi(task, par); 825 + } 826 + } 820 827 #else 821 828 /* The protected mode interface is not available on non-x86. */ 822 829 par->pmi_setpal = par->ypan = 0;
+2
fs/btrfs/compression.c
··· 405 405 bio_put(bio); 406 406 407 407 bio = compressed_bio_alloc(bdev, first_byte, GFP_NOFS); 408 + BUG_ON(!bio); 408 409 bio->bi_private = cb; 409 410 bio->bi_end_io = end_compressed_bio_write; 410 411 bio_add_page(bio, page, PAGE_CACHE_SIZE, 0); ··· 688 687 689 688 comp_bio = compressed_bio_alloc(bdev, cur_disk_byte, 690 689 GFP_NOFS); 690 + BUG_ON(!comp_bio); 691 691 comp_bio->bi_private = cb; 692 692 comp_bio->bi_end_io = end_compressed_bio_read; 693 693
+4 -7
fs/btrfs/extent-tree.c
··· 529 529 * allocate blocks for the tree root we can't do the fast caching since 530 530 * we likely hold important locks. 531 531 */ 532 - if (trans && (!trans->transaction->in_commit) && 533 - (root && root != root->fs_info->tree_root) && 534 - btrfs_test_opt(root, SPACE_CACHE)) { 532 + if (fs_info->mount_opt & BTRFS_MOUNT_SPACE_CACHE) { 535 533 ret = load_free_space_cache(fs_info, cache); 536 534 537 535 spin_lock(&cache->lock); ··· 3150 3152 /* 3151 3153 * returns target flags in extended format or 0 if restripe for this 3152 3154 * chunk_type is not in progress 3155 + * 3156 + * should be called with either volume_mutex or balance_lock held 3153 3157 */ 3154 3158 static u64 get_restripe_target(struct btrfs_fs_info *fs_info, u64 flags) 3155 3159 { 3156 3160 struct btrfs_balance_control *bctl = fs_info->balance_ctl; 3157 3161 u64 target = 0; 3158 - 3159 - BUG_ON(!mutex_is_locked(&fs_info->volume_mutex) && 3160 - !spin_is_locked(&fs_info->balance_lock)); 3161 3162 3162 3163 if (!bctl) 3163 3164 return 0; ··· 4202 4205 num_bytes += div64_u64(data_used + meta_used, 50); 4203 4206 4204 4207 if (num_bytes * 3 > meta_used) 4205 - num_bytes = div64_u64(meta_used, 3) * 2; 4208 + num_bytes = div64_u64(meta_used, 3); 4206 4209 4207 4210 return ALIGN(num_bytes, fs_info->extent_root->leafsize << 10); 4208 4211 }
+5 -1
fs/btrfs/extent_io.c
··· 1937 1937 struct btrfs_mapping_tree *map_tree = &root->fs_info->mapping_tree; 1938 1938 u64 start = eb->start; 1939 1939 unsigned long i, num_pages = num_extent_pages(eb->start, eb->len); 1940 - int ret; 1940 + int ret = 0; 1941 1941 1942 1942 for (i = 0; i < num_pages; i++) { 1943 1943 struct page *p = extent_buffer_page(eb, i); ··· 2180 2180 } 2181 2181 2182 2182 bio = bio_alloc(GFP_NOFS, 1); 2183 + if (!bio) { 2184 + free_io_failure(inode, failrec, 0); 2185 + return -EIO; 2186 + } 2183 2187 bio->bi_private = state; 2184 2188 bio->bi_end_io = failed_bio->bi_end_io; 2185 2189 bio->bi_sector = failrec->logical >> 9;
+2 -7
fs/btrfs/free-space-cache.c
··· 748 748 u64 used = btrfs_block_group_used(&block_group->item); 749 749 750 750 /* 751 - * If we're unmounting then just return, since this does a search on the 752 - * normal root and not the commit root and we could deadlock. 753 - */ 754 - if (btrfs_fs_closing(fs_info)) 755 - return 0; 756 - 757 - /* 758 751 * If this block group has been marked to be cleared for one reason or 759 752 * another then we can't trust the on disk cache, so just return. 760 753 */ ··· 761 768 path = btrfs_alloc_path(); 762 769 if (!path) 763 770 return 0; 771 + path->search_commit_root = 1; 772 + path->skip_locking = 1; 764 773 765 774 inode = lookup_free_space_inode(root, block_group, path); 766 775 if (IS_ERR(inode)) {
+4
fs/btrfs/scrub.c
··· 1044 1044 1045 1045 BUG_ON(!page->page); 1046 1046 bio = bio_alloc(GFP_NOFS, 1); 1047 + if (!bio) 1048 + return -EIO; 1047 1049 bio->bi_bdev = page->bdev; 1048 1050 bio->bi_sector = page->physical >> 9; 1049 1051 bio->bi_end_io = scrub_complete_bio_end_io; ··· 1173 1171 DECLARE_COMPLETION_ONSTACK(complete); 1174 1172 1175 1173 bio = bio_alloc(GFP_NOFS, 1); 1174 + if (!bio) 1175 + return -EIO; 1176 1176 bio->bi_bdev = page_bad->bdev; 1177 1177 bio->bi_sector = page_bad->physical >> 9; 1178 1178 bio->bi_end_io = scrub_complete_bio_end_io;
+5 -4
fs/btrfs/transaction.c
··· 480 480 struct btrfs_transaction *cur_trans = trans->transaction; 481 481 struct btrfs_fs_info *info = root->fs_info; 482 482 int count = 0; 483 + int err = 0; 483 484 484 485 if (--trans->use_count) { 485 486 trans->block_rsv = trans->orig_rsv; ··· 533 532 534 533 if (current->journal_info == trans) 535 534 current->journal_info = NULL; 536 - memset(trans, 0, sizeof(*trans)); 537 - kmem_cache_free(btrfs_trans_handle_cachep, trans); 538 535 539 536 if (throttle) 540 537 btrfs_run_delayed_iputs(root); 541 538 542 539 if (trans->aborted || 543 540 root->fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) { 544 - return -EIO; 541 + err = -EIO; 545 542 } 546 543 547 - return 0; 544 + memset(trans, 0, sizeof(*trans)); 545 + kmem_cache_free(btrfs_trans_handle_cachep, trans); 546 + return err; 548 547 } 549 548 550 549 int btrfs_end_transaction(struct btrfs_trans_handle *trans,
+18 -2
fs/btrfs/volumes.c
··· 3833 3833 int sub_stripes = 0; 3834 3834 u64 stripes_per_dev = 0; 3835 3835 u32 remaining_stripes = 0; 3836 + u32 last_stripe = 0; 3836 3837 3837 3838 if (map->type & 3838 3839 (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID10)) { ··· 3847 3846 stripe_nr_orig, 3848 3847 factor, 3849 3848 &remaining_stripes); 3849 + div_u64_rem(stripe_nr_end - 1, factor, &last_stripe); 3850 + last_stripe *= sub_stripes; 3850 3851 } 3851 3852 3852 3853 for (i = 0; i < num_stripes; i++) { ··· 3861 3858 BTRFS_BLOCK_GROUP_RAID10)) { 3862 3859 bbio->stripes[i].length = stripes_per_dev * 3863 3860 map->stripe_len; 3861 + 3864 3862 if (i / sub_stripes < remaining_stripes) 3865 3863 bbio->stripes[i].length += 3866 3864 map->stripe_len; 3865 + 3866 + /* 3867 + * Special for the first stripe and 3868 + * the last stripe: 3869 + * 3870 + * |-------|...|-------| 3871 + * |----------| 3872 + * off end_off 3873 + */ 3867 3874 if (i < sub_stripes) 3868 3875 bbio->stripes[i].length -= 3869 3876 stripe_offset; 3870 - if ((i / sub_stripes + 1) % 3871 - sub_stripes == remaining_stripes) 3877 + 3878 + if (stripe_index >= last_stripe && 3879 + stripe_index <= (last_stripe + 3880 + sub_stripes - 1)) 3872 3881 bbio->stripes[i].length -= 3873 3882 stripe_end_offset; 3883 + 3874 3884 if (i == sub_stripes - 1) 3875 3885 stripe_offset = 0; 3876 3886 } else
+2 -5
fs/gfs2/Kconfig
··· 1 1 config GFS2_FS 2 2 tristate "GFS2 file system support" 3 3 depends on (64BIT || LBDAF) 4 - select DLM if GFS2_FS_LOCKING_DLM 5 - select CONFIGFS_FS if GFS2_FS_LOCKING_DLM 6 - select SYSFS if GFS2_FS_LOCKING_DLM 7 - select IP_SCTP if DLM_SCTP 8 4 select FS_POSIX_ACL 9 5 select CRC32 10 6 select QUOTACTL ··· 25 29 26 30 config GFS2_FS_LOCKING_DLM 27 31 bool "GFS2 DLM locking" 28 - depends on (GFS2_FS!=n) && NET && INET && (IPV6 || IPV6=n) && HOTPLUG 32 + depends on (GFS2_FS!=n) && NET && INET && (IPV6 || IPV6=n) && \ 33 + HOTPLUG && DLM && CONFIGFS_FS && SYSFS 29 34 help 30 35 Multiple node locking module for GFS2 31 36
+2 -2
fs/gfs2/aops.c
··· 807 807 808 808 if (inode == sdp->sd_rindex) { 809 809 adjust_fs_space(inode); 810 - ip->i_gh.gh_flags |= GL_NOCACHE; 810 + sdp->sd_rindex_uptodate = 0; 811 811 } 812 812 813 813 brelse(dibh); ··· 873 873 874 874 if (inode == sdp->sd_rindex) { 875 875 adjust_fs_space(inode); 876 - ip->i_gh.gh_flags |= GL_NOCACHE; 876 + sdp->sd_rindex_uptodate = 0; 877 877 } 878 878 879 879 brelse(dibh);
+5 -1
fs/gfs2/bmap.c
··· 724 724 int metadata; 725 725 unsigned int revokes = 0; 726 726 int x; 727 - int error = 0; 727 + int error; 728 + 729 + error = gfs2_rindex_update(sdp); 730 + if (error) 731 + return error; 728 732 729 733 if (!*top) 730 734 sm->sm_first = 0;
+4
fs/gfs2/dir.c
··· 1844 1844 unsigned int x, size = len * sizeof(u64); 1845 1845 int error; 1846 1846 1847 + error = gfs2_rindex_update(sdp); 1848 + if (error) 1849 + return error; 1850 + 1847 1851 memset(&rlist, 0, sizeof(struct gfs2_rgrp_list)); 1848 1852 1849 1853 ht = kzalloc(size, GFP_NOFS);
+11 -2
fs/gfs2/inode.c
··· 1031 1031 struct buffer_head *bh; 1032 1032 struct gfs2_holder ghs[3]; 1033 1033 struct gfs2_rgrpd *rgd; 1034 - int error = -EROFS; 1034 + int error; 1035 + 1036 + error = gfs2_rindex_update(sdp); 1037 + if (error) 1038 + return error; 1039 + 1040 + error = -EROFS; 1035 1041 1036 1042 gfs2_holder_init(dip->i_gl, LM_ST_EXCLUSIVE, 0, ghs); 1037 1043 gfs2_holder_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, ghs + 1); ··· 1230 1224 return 0; 1231 1225 } 1232 1226 1227 + error = gfs2_rindex_update(sdp); 1228 + if (error) 1229 + return error; 1230 + 1233 1231 if (odip != ndip) { 1234 1232 error = gfs2_glock_nq_init(sdp->sd_rename_gl, LM_ST_EXCLUSIVE, 1235 1233 0, &r_gh); ··· 1355 1345 error = alloc_required; 1356 1346 if (error < 0) 1357 1347 goto out_gunlock; 1358 - error = 0; 1359 1348 1360 1349 if (alloc_required) { 1361 1350 struct gfs2_qadata *qa = gfs2_qadata_get(ndip);
+5 -3
fs/gfs2/rgrp.c
··· 332 332 struct rb_node *n, *next; 333 333 struct gfs2_rgrpd *cur; 334 334 335 - if (gfs2_rindex_update(sdp)) 336 - return NULL; 337 - 338 335 spin_lock(&sdp->sd_rindex_spin); 339 336 n = sdp->sd_rindex_tree.rb_node; 340 337 while (n) { ··· 637 640 return 0; 638 641 639 642 error = 0; /* someone else read in the rgrp; free it and ignore it */ 643 + gfs2_glock_put(rgd->rd_gl); 640 644 641 645 fail: 642 646 kfree(rgd->rd_bits); ··· 924 926 r.minlen = 0; 925 927 } else if (copy_from_user(&r, argp, sizeof(r))) 926 928 return -EFAULT; 929 + 930 + ret = gfs2_rindex_update(sdp); 931 + if (ret) 932 + return ret; 927 933 928 934 rgd = gfs2_blk2rgrpd(sdp, r.start, 0); 929 935 rgd_end = gfs2_blk2rgrpd(sdp, r.start + r.len, 0);
+12
fs/gfs2/xattr.c
··· 238 238 unsigned int x; 239 239 int error; 240 240 241 + error = gfs2_rindex_update(sdp); 242 + if (error) 243 + return error; 244 + 241 245 if (GFS2_EA_IS_STUFFED(ea)) 242 246 return 0; 243 247 ··· 1334 1330 unsigned int x; 1335 1331 int error; 1336 1332 1333 + error = gfs2_rindex_update(sdp); 1334 + if (error) 1335 + return error; 1336 + 1337 1337 memset(&rlist, 0, sizeof(struct gfs2_rgrp_list)); 1338 1338 1339 1339 error = gfs2_meta_read(ip->i_gl, ip->i_eattr, DIO_WAIT, &indbh); ··· 1446 1438 struct buffer_head *dibh; 1447 1439 struct gfs2_holder gh; 1448 1440 int error; 1441 + 1442 + error = gfs2_rindex_update(sdp); 1443 + if (error) 1444 + return error; 1449 1445 1450 1446 rgd = gfs2_blk2rgrpd(sdp, ip->i_eattr, 1); 1451 1447 if (!rgd) {
+1
fs/libfs.c
··· 529 529 return 0; 530 530 out: 531 531 d_genocide(root); 532 + shrink_dcache_parent(root); 532 533 dput(root); 533 534 return -ENOMEM; 534 535 }
+28 -6
fs/proc/stat.c
··· 18 18 #ifndef arch_irq_stat 19 19 #define arch_irq_stat() 0 20 20 #endif 21 - #ifndef arch_idle_time 22 - #define arch_idle_time(cpu) 0 23 - #endif 21 + 22 + #ifdef arch_idle_time 23 + 24 + static cputime64_t get_idle_time(int cpu) 25 + { 26 + cputime64_t idle; 27 + 28 + idle = kcpustat_cpu(cpu).cpustat[CPUTIME_IDLE]; 29 + if (cpu_online(cpu) && !nr_iowait_cpu(cpu)) 30 + idle += arch_idle_time(cpu); 31 + return idle; 32 + } 33 + 34 + static cputime64_t get_iowait_time(int cpu) 35 + { 36 + cputime64_t iowait; 37 + 38 + iowait = kcpustat_cpu(cpu).cpustat[CPUTIME_IOWAIT]; 39 + if (cpu_online(cpu) && nr_iowait_cpu(cpu)) 40 + iowait += arch_idle_time(cpu); 41 + return iowait; 42 + } 43 + 44 + #else 24 45 25 46 static u64 get_idle_time(int cpu) 26 47 { 27 48 u64 idle, idle_time = get_cpu_idle_time_us(cpu, NULL); 28 49 29 - if (idle_time == -1ULL) { 50 + if (idle_time == -1ULL) 30 51 /* !NO_HZ so we can rely on cpustat.idle */ 31 52 idle = kcpustat_cpu(cpu).cpustat[CPUTIME_IDLE]; 32 - idle += arch_idle_time(cpu); 33 - } else 53 + else 34 54 idle = usecs_to_cputime64(idle_time); 35 55 36 56 return idle; ··· 68 48 69 49 return iowait; 70 50 } 51 + 52 + #endif 71 53 72 54 static int show_stat(struct seq_file *p, void *v) 73 55 {
+4 -1
fs/sysfs/dir.c
··· 729 729 else 730 730 parent_sd = &sysfs_root; 731 731 732 + if (!parent_sd) 733 + return -ENOENT; 734 + 732 735 if (sysfs_ns_type(parent_sd)) 733 736 ns = kobj->ktype->namespace(kobj); 734 737 type = sysfs_read_ns_type(kobj); ··· 881 878 882 879 dup_name = sd->s_name; 883 880 sd->s_name = new_name; 884 - sd->s_hash = sysfs_name_hash(sd->s_ns, sd->s_name); 885 881 } 886 882 887 883 /* Move to the appropriate place in the appropriate directories rbtree. */ ··· 888 886 sysfs_get(new_parent_sd); 889 887 sysfs_put(sd->s_parent); 890 888 sd->s_ns = new_ns; 889 + sd->s_hash = sysfs_name_hash(sd->s_ns, sd->s_name); 891 890 sd->s_parent = new_parent_sd; 892 891 sysfs_link_sibling(sd); 893 892
+5 -1
fs/sysfs/group.c
··· 67 67 /* Updates may happen before the object has been instantiated */ 68 68 if (unlikely(update && !kobj->sd)) 69 69 return -EINVAL; 70 - 70 + if (!grp->attrs) { 71 + WARN(1, "sysfs: attrs not set by subsystem for group: %s/%s\n", 72 + kobj->name, grp->name ? grp->name : ""); 73 + return -EINVAL; 74 + } 71 75 if (grp->name) { 72 76 error = sysfs_create_subdir(kobj, grp->name, &sd); 73 77 if (error)
+3 -2
include/drm/exynos_drm.h
··· 85 85 struct drm_exynos_vidi_connection { 86 86 unsigned int connection; 87 87 unsigned int extensions; 88 - uint64_t *edid; 88 + uint64_t edid; 89 89 }; 90 90 91 91 struct drm_exynos_plane_set_zpos { ··· 96 96 /* memory type definitions. */ 97 97 enum e_drm_exynos_gem_mem_type { 98 98 /* Physically Non-Continuous memory. */ 99 - EXYNOS_BO_NONCONTIG = 1 << 0 99 + EXYNOS_BO_NONCONTIG = 1 << 0, 100 + EXYNOS_BO_MASK = EXYNOS_BO_NONCONTIG 100 101 }; 101 102 102 103 #define DRM_EXYNOS_GEM_CREATE 0x00
-7
include/linux/amba/bus.h
··· 30 30 struct device dev; 31 31 struct resource res; 32 32 struct clk *pclk; 33 - struct regulator *vcore; 34 33 u64 dma_mask; 35 34 unsigned int periphid; 36 35 unsigned int irq[AMBA_NR_IRQS]; ··· 73 74 74 75 #define amba_pclk_disable(d) \ 75 76 do { if (!IS_ERR((d)->pclk)) clk_disable((d)->pclk); } while (0) 76 - 77 - #define amba_vcore_enable(d) \ 78 - (IS_ERR((d)->vcore) ? 0 : regulator_enable((d)->vcore)) 79 - 80 - #define amba_vcore_disable(d) \ 81 - do { if (!IS_ERR((d)->vcore)) regulator_disable((d)->vcore); } while (0) 82 77 83 78 /* Some drivers don't use the struct amba_device */ 84 79 #define AMBA_CONFIG_BITS(a) (((a) >> 24) & 0xff)
+2
include/linux/amba/pl022.h
··· 25 25 #ifndef _SSP_PL022_H 26 26 #define _SSP_PL022_H 27 27 28 + #include <linux/types.h> 29 + 28 30 /** 29 31 * whether SSP is in loopback mode or not 30 32 */
+7 -11
include/linux/blkdev.h
··· 426 426 (1 << QUEUE_FLAG_SAME_COMP) | \ 427 427 (1 << QUEUE_FLAG_ADD_RANDOM)) 428 428 429 - static inline int queue_is_locked(struct request_queue *q) 429 + static inline void queue_lockdep_assert_held(struct request_queue *q) 430 430 { 431 - #ifdef CONFIG_SMP 432 - spinlock_t *lock = q->queue_lock; 433 - return lock && spin_is_locked(lock); 434 - #else 435 - return 1; 436 - #endif 431 + if (q->queue_lock) 432 + lockdep_assert_held(q->queue_lock); 437 433 } 438 434 439 435 static inline void queue_flag_set_unlocked(unsigned int flag, ··· 441 445 static inline int queue_flag_test_and_clear(unsigned int flag, 442 446 struct request_queue *q) 443 447 { 444 - WARN_ON_ONCE(!queue_is_locked(q)); 448 + queue_lockdep_assert_held(q); 445 449 446 450 if (test_bit(flag, &q->queue_flags)) { 447 451 __clear_bit(flag, &q->queue_flags); ··· 454 458 static inline int queue_flag_test_and_set(unsigned int flag, 455 459 struct request_queue *q) 456 460 { 457 - WARN_ON_ONCE(!queue_is_locked(q)); 461 + queue_lockdep_assert_held(q); 458 462 459 463 if (!test_bit(flag, &q->queue_flags)) { 460 464 __set_bit(flag, &q->queue_flags); ··· 466 470 467 471 static inline void queue_flag_set(unsigned int flag, struct request_queue *q) 468 472 { 469 - WARN_ON_ONCE(!queue_is_locked(q)); 473 + queue_lockdep_assert_held(q); 470 474 __set_bit(flag, &q->queue_flags); 471 475 } 472 476 ··· 483 487 484 488 static inline void queue_flag_clear(unsigned int flag, struct request_queue *q) 485 489 { 486 - WARN_ON_ONCE(!queue_is_locked(q)); 490 + queue_lockdep_assert_held(q); 487 491 __clear_bit(flag, &q->queue_flags); 488 492 } 489 493
+1
include/linux/dmaengine.h
··· 974 974 void dma_async_device_unregister(struct dma_device *device); 975 975 void dma_run_dependencies(struct dma_async_tx_descriptor *tx); 976 976 struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type); 977 + struct dma_chan *net_dma_find_channel(void); 977 978 #define dma_request_channel(mask, x, y) __dma_request_channel(&(mask), x, y) 978 979 979 980 /* --- Helper iov-locking functions --- */
+5
include/linux/irq.h
··· 263 263 d->state_use_accessors &= ~IRQD_IRQ_INPROGRESS; 264 264 } 265 265 266 + static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d) 267 + { 268 + return d->hwirq; 269 + } 270 + 266 271 /** 267 272 * struct irq_chip - hardware interrupt chip descriptor 268 273 *
+4 -8
include/linux/irqdomain.h
··· 42 42 /* Number of irqs reserved for a legacy isa controller */ 43 43 #define NUM_ISA_INTERRUPTS 16 44 44 45 - /* This type is the placeholder for a hardware interrupt number. It has to 46 - * be big enough to enclose whatever representation is used by a given 47 - * platform. 48 - */ 49 - typedef unsigned long irq_hw_number_t; 50 - 51 45 /** 52 46 * struct irq_domain_ops - Methods for irq_domain objects 53 47 * @match: Match an interrupt controller device node to a host, returns ··· 98 104 unsigned int size; 99 105 unsigned int *revmap; 100 106 } linear; 107 + struct { 108 + unsigned int max_irq; 109 + } nomap; 101 110 struct radix_tree_root tree; 102 111 } revmap_data; 103 112 const struct irq_domain_ops *ops; ··· 123 126 const struct irq_domain_ops *ops, 124 127 void *host_data); 125 128 struct irq_domain *irq_domain_add_nomap(struct device_node *of_node, 129 + unsigned int max_irq, 126 130 const struct irq_domain_ops *ops, 127 131 void *host_data); 128 132 struct irq_domain *irq_domain_add_tree(struct device_node *of_node, ··· 132 134 133 135 extern struct irq_domain *irq_find_host(struct device_node *node); 134 136 extern void irq_set_default_host(struct irq_domain *host); 135 - extern void irq_set_virq_count(unsigned int count); 136 137 137 138 static inline struct irq_domain *irq_domain_add_legacy_isa( 138 139 struct device_node *of_node, ··· 143 146 } 144 147 extern struct irq_domain *irq_find_host(struct device_node *node); 145 148 extern void irq_set_default_host(struct irq_domain *host); 146 - extern void irq_set_virq_count(unsigned int count); 147 149 148 150 149 151 extern unsigned int irq_create_mapping(struct irq_domain *host,
+18 -4
include/linux/kconfig.h
··· 4 4 #include <generated/autoconf.h> 5 5 6 6 /* 7 - * Helper macros to use CONFIG_ options in C expressions. Note that 7 + * Helper macros to use CONFIG_ options in C/CPP expressions. Note that 8 8 * these only work with boolean and tristate options. 9 9 */ 10 + 11 + /* 12 + * Getting something that works in C and CPP for an arg that may or may 13 + * not be defined is tricky. Here, if we have "#define CONFIG_BOOGER 1" 14 + * we match on the placeholder define, insert the "0," for arg1 and generate 15 + * the triplet (0, 1, 0). Then the last step cherry picks the 2nd arg (a one). 16 + * When CONFIG_BOOGER is not defined, we generate a (... 1, 0) pair, and when 17 + * the last step cherry picks the 2nd arg, we get a zero. 18 + */ 19 + #define __ARG_PLACEHOLDER_1 0, 20 + #define config_enabled(cfg) _config_enabled(cfg) 21 + #define _config_enabled(value) __config_enabled(__ARG_PLACEHOLDER_##value) 22 + #define __config_enabled(arg1_or_junk) ___config_enabled(arg1_or_junk 1, 0) 23 + #define ___config_enabled(__ignored, val, ...) val 10 24 11 25 /* 12 26 * IS_ENABLED(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y' or 'm', ··· 28 14 * 29 15 */ 30 16 #define IS_ENABLED(option) \ 31 - (__enabled_ ## option || __enabled_ ## option ## _MODULE) 17 + (config_enabled(option) || config_enabled(option##_MODULE)) 32 18 33 19 /* 34 20 * IS_BUILTIN(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y', 0 35 21 * otherwise. For boolean options, this is equivalent to 36 22 * IS_ENABLED(CONFIG_FOO). 37 23 */ 38 - #define IS_BUILTIN(option) __enabled_ ## option 24 + #define IS_BUILTIN(option) config_enabled(option) 39 25 40 26 /* 41 27 * IS_MODULE(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'm', 0 42 28 * otherwise. 43 29 */ 44 - #define IS_MODULE(option) __enabled_ ## option ## _MODULE 30 + #define IS_MODULE(option) config_enabled(option##_MODULE) 45 31 46 32 #endif /* __LINUX_KCONFIG_H */
+11 -1
include/linux/netfilter_ipv6/ip6_tables.h
··· 287 287 struct xt_table *table); 288 288 289 289 /* Check for an extension */ 290 - extern int ip6t_ext_hdr(u8 nexthdr); 290 + static inline int 291 + ip6t_ext_hdr(u8 nexthdr) 292 + { return (nexthdr == IPPROTO_HOPOPTS) || 293 + (nexthdr == IPPROTO_ROUTING) || 294 + (nexthdr == IPPROTO_FRAGMENT) || 295 + (nexthdr == IPPROTO_ESP) || 296 + (nexthdr == IPPROTO_AH) || 297 + (nexthdr == IPPROTO_NONE) || 298 + (nexthdr == IPPROTO_DSTOPTS); 299 + } 300 + 291 301 /* find specified header and get offset to it */ 292 302 extern int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, 293 303 int target, unsigned short *fragoff);
+1 -1
include/linux/serial_core.h
··· 357 357 #define UPF_CONS_FLOW ((__force upf_t) (1 << 23)) 358 358 #define UPF_SHARE_IRQ ((__force upf_t) (1 << 24)) 359 359 #define UPF_EXAR_EFR ((__force upf_t) (1 << 25)) 360 - #define UPF_IIR_ONCE ((__force upf_t) (1 << 26)) 360 + #define UPF_BUG_THRE ((__force upf_t) (1 << 26)) 361 361 /* The exact UART type is known and should not be probed. */ 362 362 #define UPF_FIXED_TYPE ((__force upf_t) (1 << 27)) 363 363 #define UPF_BOOT_AUTOCONF ((__force upf_t) (1 << 28))
+13
include/linux/skbuff.h
··· 481 481 union { 482 482 __u32 mark; 483 483 __u32 dropcount; 484 + __u32 avail_size; 484 485 }; 485 486 486 487 sk_buff_data_t transport_header; ··· 1364 1363 static inline int skb_tailroom(const struct sk_buff *skb) 1365 1364 { 1366 1365 return skb_is_nonlinear(skb) ? 0 : skb->end - skb->tail; 1366 + } 1367 + 1368 + /** 1369 + * skb_availroom - bytes at buffer end 1370 + * @skb: buffer to check 1371 + * 1372 + * Return the number of bytes of free space at the tail of an sk_buff 1373 + * allocated by sk_stream_alloc() 1374 + */ 1375 + static inline int skb_availroom(const struct sk_buff *skb) 1376 + { 1377 + return skb_is_nonlinear(skb) ? 0 : skb->avail_size - skb->len; 1367 1378 } 1368 1379 1369 1380 /**
+3 -7
include/linux/stddef.h
··· 3 3 4 4 #include <linux/compiler.h> 5 5 6 - #undef NULL 7 - #if defined(__cplusplus) 8 - #define NULL 0 9 - #else 10 - #define NULL ((void *)0) 11 - #endif 12 - 13 6 #ifdef __KERNEL__ 7 + 8 + #undef NULL 9 + #define NULL ((void *)0) 14 10 15 11 enum { 16 12 false = 0,
+6
include/linux/types.h
··· 210 210 211 211 typedef phys_addr_t resource_size_t; 212 212 213 + /* 214 + * This type is the placeholder for a hardware interrupt number. It has to be 215 + * big enough to enclose whatever representation is used by a given platform. 216 + */ 217 + typedef unsigned long irq_hw_number_t; 218 + 213 219 typedef struct { 214 220 int counter; 215 221 } atomic_t;
-8
include/linux/usb/serial.h
··· 28 28 /* parity check flag */ 29 29 #define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK)) 30 30 31 - enum port_dev_state { 32 - PORT_UNREGISTERED, 33 - PORT_REGISTERING, 34 - PORT_REGISTERED, 35 - PORT_UNREGISTERING, 36 - }; 37 - 38 31 /* USB serial flags */ 39 32 #define USB_SERIAL_WRITE_BUSY 0 40 33 ··· 117 124 char throttle_req; 118 125 unsigned long sysrq; /* sysrq timeout */ 119 126 struct device dev; 120 - enum port_dev_state dev_state; 121 127 }; 122 128 #define to_usb_serial_port(d) container_of(d, struct usb_serial_port, dev) 123 129
+2
include/linux/vgaarb.h
··· 47 47 */ 48 48 #define VGA_DEFAULT_DEVICE (NULL) 49 49 50 + struct pci_dev; 51 + 50 52 /* For use by clients */ 51 53 52 54 /**
+2 -1
include/net/bluetooth/hci.h
··· 92 92 HCI_SERVICE_CACHE, 93 93 HCI_LINK_KEYS, 94 94 HCI_DEBUG_KEYS, 95 + HCI_UNREGISTER, 95 96 96 97 HCI_LE_SCAN, 97 98 HCI_SSP_ENABLED, ··· 1328 1327 #define HCI_DEV_NONE 0xffff 1329 1328 1330 1329 #define HCI_CHANNEL_RAW 0 1331 - #define HCI_CHANNEL_CONTROL 1 1332 1330 #define HCI_CHANNEL_MONITOR 2 1331 + #define HCI_CHANNEL_CONTROL 3 1333 1332 1334 1333 struct hci_filter { 1335 1334 unsigned long type_mask;
+7 -5
include/net/bluetooth/hci_core.h
··· 427 427 static inline bool hci_conn_ssp_enabled(struct hci_conn *conn) 428 428 { 429 429 struct hci_dev *hdev = conn->hdev; 430 - return (test_bit(HCI_SSP_ENABLED, &hdev->flags) && 430 + return (test_bit(HCI_SSP_ENABLED, &hdev->dev_flags) && 431 431 test_bit(HCI_CONN_SSP_ENABLED, &conn->flags)); 432 432 } 433 433 ··· 907 907 908 908 static inline bool eir_has_data_type(u8 *data, size_t data_len, u8 type) 909 909 { 910 - u8 field_len; 911 - size_t parsed; 910 + size_t parsed = 0; 912 911 913 - for (parsed = 0; parsed < data_len - 1; parsed += field_len) { 914 - field_len = data[0]; 912 + if (data_len < 2) 913 + return false; 914 + 915 + while (parsed < data_len - 1) { 916 + u8 field_len = data[0]; 915 917 916 918 if (field_len == 0) 917 919 break;
+1 -1
include/net/bluetooth/mgmt.h
··· 117 117 #define MGMT_OP_SET_DISCOVERABLE 0x0006 118 118 struct mgmt_cp_set_discoverable { 119 119 __u8 val; 120 - __u16 timeout; 120 + __le16 timeout; 121 121 } __packed; 122 122 #define MGMT_SET_DISCOVERABLE_SIZE 3 123 123
+1 -1
include/net/mac80211.h
··· 1327 1327 ieee80211_get_tx_rate(const struct ieee80211_hw *hw, 1328 1328 const struct ieee80211_tx_info *c) 1329 1329 { 1330 - if (WARN_ON(c->control.rates[0].idx < 0)) 1330 + if (WARN_ON_ONCE(c->control.rates[0].idx < 0)) 1331 1331 return NULL; 1332 1332 return &hw->wiphy->bands[c->band]->bitrates[c->control.rates[0].idx]; 1333 1333 }
+3
include/scsi/scsi_cmnd.h
··· 134 134 135 135 static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd) 136 136 { 137 + if (!cmd->request->rq_disk) 138 + return NULL; 139 + 137 140 return *(struct scsi_driver **)cmd->request->rq_disk->private_data; 138 141 } 139 142
+10
include/sound/core.h
··· 325 325 326 326 /* --- */ 327 327 328 + /* sound printk debug levels */ 329 + enum { 330 + SND_PR_ALWAYS, 331 + SND_PR_DEBUG, 332 + SND_PR_VERBOSE, 333 + }; 334 + 328 335 #if defined(CONFIG_SND_DEBUG) || defined(CONFIG_SND_VERBOSE_PRINTK) 329 336 __printf(4, 5) 330 337 void __snd_printk(unsigned int level, const char *file, int line, ··· 361 354 */ 362 355 #define snd_printd(fmt, args...) \ 363 356 __snd_printk(1, __FILE__, __LINE__, fmt, ##args) 357 + #define _snd_printd(level, fmt, args...) \ 358 + __snd_printk(level, __FILE__, __LINE__, fmt, ##args) 364 359 365 360 /** 366 361 * snd_BUG - give a BUG warning message and stack trace ··· 392 383 #else /* !CONFIG_SND_DEBUG */ 393 384 394 385 #define snd_printd(fmt, args...) do { } while (0) 386 + #define _snd_printd(level, fmt, args...) do { } while (0) 395 387 #define snd_BUG() do { } while (0) 396 388 static inline int __snd_bug_on(int cond) 397 389 {
+2
kernel/cred.c
··· 386 386 struct cred *new; 387 387 int ret; 388 388 389 + p->replacement_session_keyring = NULL; 390 + 389 391 if ( 390 392 #ifdef CONFIG_KEYS 391 393 !p->cred->thread_keyring &&
+1 -1
kernel/irq/Kconfig
··· 62 62 help 63 63 This option will show the mapping relationship between hardware irq 64 64 numbers and Linux irq numbers. The mapping is exposed via debugfs 65 - in the file "virq_mapping". 65 + in the file "irq_domain_mapping". 66 66 67 67 If you don't know what this means you don't need it. 68 68
+17 -30
kernel/irq/irqdomain.c
··· 23 23 static DEFINE_MUTEX(irq_domain_mutex); 24 24 25 25 static DEFINE_MUTEX(revmap_trees_mutex); 26 - static unsigned int irq_virq_count = NR_IRQS; 27 26 static struct irq_domain *irq_default_domain; 28 27 29 28 /** ··· 183 184 } 184 185 185 186 struct irq_domain *irq_domain_add_nomap(struct device_node *of_node, 187 + unsigned int max_irq, 186 188 const struct irq_domain_ops *ops, 187 189 void *host_data) 188 190 { 189 191 struct irq_domain *domain = irq_domain_alloc(of_node, 190 192 IRQ_DOMAIN_MAP_NOMAP, ops, host_data); 191 - if (domain) 193 + if (domain) { 194 + domain->revmap_data.nomap.max_irq = max_irq ? max_irq : ~0; 192 195 irq_domain_add(domain); 196 + } 193 197 return domain; 194 198 } 195 199 ··· 264 262 irq_default_domain = domain; 265 263 } 266 264 267 - /** 268 - * irq_set_virq_count() - Set the maximum number of linux irqs 269 - * @count: number of linux irqs, capped with NR_IRQS 270 - * 271 - * This is mainly for use by platforms like iSeries who want to program 272 - * the virtual irq number in the controller to avoid the reverse mapping 273 - */ 274 - void irq_set_virq_count(unsigned int count) 275 - { 276 - pr_debug("irq: Trying to set virq count to %d\n", count); 277 - 278 - BUG_ON(count < NUM_ISA_INTERRUPTS); 279 - if (count < NR_IRQS) 280 - irq_virq_count = count; 281 - } 282 - 283 265 static int irq_setup_virq(struct irq_domain *domain, unsigned int virq, 284 266 irq_hw_number_t hwirq) 285 267 { ··· 306 320 pr_debug("irq: create_direct virq allocation failed\n"); 307 321 return 0; 308 322 } 309 - if (virq >= irq_virq_count) { 323 + if (virq >= domain->revmap_data.nomap.max_irq) { 310 324 pr_err("ERROR: no free irqs available below %i maximum\n", 311 - irq_virq_count); 325 + domain->revmap_data.nomap.max_irq); 312 326 irq_free_desc(virq); 313 327 return 0; 314 328 } 315 - 316 329 pr_debug("irq: create_direct obtained virq %d\n", virq); 317 330 318 331 if (irq_setup_virq(domain, virq, virq)) { ··· 335 350 unsigned int 
irq_create_mapping(struct irq_domain *domain, 336 351 irq_hw_number_t hwirq) 337 352 { 338 - unsigned int virq, hint; 353 + unsigned int hint; 354 + int virq; 339 355 340 356 pr_debug("irq: irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq); 341 357 ··· 363 377 return irq_domain_legacy_revmap(domain, hwirq); 364 378 365 379 /* Allocate a virtual interrupt number */ 366 - hint = hwirq % irq_virq_count; 380 + hint = hwirq % nr_irqs; 367 381 if (hint == 0) 368 382 hint++; 369 383 virq = irq_alloc_desc_from(hint, 0); 370 - if (!virq) 384 + if (virq <= 0) 371 385 virq = irq_alloc_desc_from(1, 0); 372 - if (!virq) { 386 + if (virq <= 0) { 373 387 pr_debug("irq: -> virq allocation failed\n"); 374 388 return 0; 375 389 } ··· 501 515 irq_hw_number_t hwirq) 502 516 { 503 517 unsigned int i; 504 - unsigned int hint = hwirq % irq_virq_count; 518 + unsigned int hint = hwirq % nr_irqs; 505 519 506 520 /* Look for default domain if nececssary */ 507 521 if (domain == NULL) ··· 522 536 if (data && (data->domain == domain) && (data->hwirq == hwirq)) 523 537 return i; 524 538 i++; 525 - if (i >= irq_virq_count) 539 + if (i >= nr_irqs) 526 540 i = 1; 527 541 } while(i != hint); 528 542 return 0; ··· 628 642 void *data; 629 643 int i; 630 644 631 - seq_printf(m, "%-5s %-7s %-15s %-18s %s\n", "virq", "hwirq", 632 - "chip name", "chip data", "domain name"); 645 + seq_printf(m, "%-5s %-7s %-15s %-*s %s\n", "irq", "hwirq", 646 + "chip name", (int)(2 * sizeof(void *) + 2), "chip data", 647 + "domain name"); 633 648 634 649 for (i = 1; i < nr_irqs; i++) { 635 650 desc = irq_to_desc(i); ··· 653 666 seq_printf(m, "%-15s ", p); 654 667 655 668 data = irq_desc_get_chip_data(desc); 656 - seq_printf(m, "0x%16p ", data); 669 + seq_printf(m, data ? "0x%p " : " %p ", data); 657 670 658 671 if (desc->irq_data.domain && desc->irq_data.domain->of_node) 659 672 p = desc->irq_data.domain->of_node->full_name;
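Editor's note: besides moving the NOMAP limit into the domain, the hunk above changes `virq` from `unsigned int` to `int` so that negative errno returns from `irq_alloc_desc_from()` are detected; tested as `!virq`, an unsigned -ENOMEM looks like a valid descriptor. A minimal sketch of that signedness trap (`alloc_desc_from()` is a stand-in, not the kernel API):

```c
#include <assert.h>

/* Hypothetical allocator mirroring irq_alloc_desc_from(): positive
 * descriptor on success, negative errno on failure. Here it always
 * fails with -12 (-ENOMEM) to exercise the error path. */
static int alloc_desc_from(int from)
{
    (void)from;
    return -12; /* simulated -ENOMEM */
}

static int create_mapping_buggy(void)
{
    unsigned int virq = alloc_desc_from(5); /* -12 wraps to 4294967284 */
    if (!virq)
        return 0;       /* failure path is never taken */
    return 1;           /* bogus "success" with a garbage virq */
}

static int create_mapping_fixed(void)
{
    int virq = alloc_desc_from(5);
    if (virq <= 0)
        return 0;       /* negative errno correctly detected */
    return 1;
}
```

The buggy variant reports success on an allocation failure; the fixed one does not.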
+1
kernel/irq_work.c
··· 11 11 #include <linux/irq_work.h> 12 12 #include <linux/percpu.h> 13 13 #include <linux/hardirq.h> 14 + #include <linux/irqflags.h> 14 15 #include <asm/processor.h> 15 16 16 17 /*
+6 -2
kernel/itimer.c
··· 284 284 if (value) { 285 285 if(copy_from_user(&set_buffer, value, sizeof(set_buffer))) 286 286 return -EFAULT; 287 - } else 288 - memset((char *) &set_buffer, 0, sizeof(set_buffer)); 287 + } else { 288 + memset(&set_buffer, 0, sizeof(set_buffer)); 289 + printk_once(KERN_WARNING "%s calls setitimer() with new_value NULL pointer." 290 + " Misfeature support will be removed\n", 291 + current->comm); 292 + } 289 293 290 294 error = do_setitimer(which, &set_buffer, ovalue ? &get_buffer : NULL); 291 295 if (error || !ovalue)
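Editor's note: the warning added above targets userspace callers that pass a NULL `new_value` to `setitimer()`. A portable caller should pass an explicitly zeroed `struct itimerval` to disarm a timer instead of relying on that misfeature; a minimal userspace sketch:

```c
#include <string.h>
#include <sys/time.h>

/* Disarm the real-time interval timer the portable way: a zeroed
 * it_value stops the timer, rather than passing new_value = NULL.
 * Returns setitimer()'s result (0 on success). */
static int disarm_real_timer(void)
{
    struct itimerval disarm;

    memset(&disarm, 0, sizeof(disarm)); /* it_value == 0 disarms */
    return setitimer(ITIMER_REAL, &disarm, NULL);
}
```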
+1 -1
kernel/panic.c
··· 97 97 /* 98 98 * Avoid nested stack-dumping if a panic occurs during oops processing 99 99 */ 100 - if (!oops_in_progress) 100 + if (!test_taint(TAINT_DIE) && oops_in_progress <= 1) 101 101 dump_stack(); 102 102 #endif 103 103
+4
kernel/time/Kconfig
··· 1 1 # 2 2 # Timer subsystem related configuration options 3 3 # 4 + 5 + # Core internal switch. Selected by NO_HZ / HIGH_RES_TIMERS. This is 6 + # only related to the tick functionality. Oneshot clockevent devices 7 + # are supported independ of this. 4 8 config TICK_ONESHOT 5 9 bool 6 10
+3 -1
kernel/time/tick-broadcast.c
··· 575 575 unsigned long flags; 576 576 577 577 raw_spin_lock_irqsave(&tick_broadcast_lock, flags); 578 + 579 + tick_broadcast_device.mode = TICKDEV_MODE_ONESHOT; 580 + 578 581 if (cpumask_empty(tick_get_broadcast_mask())) 579 582 goto end; 580 583 581 - tick_broadcast_device.mode = TICKDEV_MODE_ONESHOT; 582 584 bc = tick_broadcast_device.evtdev; 583 585 if (bc) 584 586 tick_broadcast_setup_oneshot(bc);
+2 -2
kernel/time/tick-sched.c
··· 534 534 hrtimer_get_expires(&ts->sched_timer), 0)) 535 535 break; 536 536 } 537 - /* Update jiffies and reread time */ 538 - tick_do_update_jiffies64(now); 537 + /* Reread time and update jiffies */ 539 538 now = ktime_get(); 539 + tick_do_update_jiffies64(now); 540 540 } 541 541 } 542 542
+7 -7
lib/kobject.c
··· 192 192 193 193 /* be noisy on error issues */ 194 194 if (error == -EEXIST) 195 - printk(KERN_ERR "%s failed for %s with " 196 - "-EEXIST, don't try to register things with " 197 - "the same name in the same directory.\n", 198 - __func__, kobject_name(kobj)); 195 + WARN(1, "%s failed for %s with " 196 + "-EEXIST, don't try to register things with " 197 + "the same name in the same directory.\n", 198 + __func__, kobject_name(kobj)); 199 199 else 200 - printk(KERN_ERR "%s failed for %s (%d)\n", 201 - __func__, kobject_name(kobj), error); 202 - dump_stack(); 200 + WARN(1, "%s failed for %s (error: %d parent: %s)\n", 201 + __func__, kobject_name(kobj), error, 202 + parent ? kobject_name(parent) : "'none'"); 203 203 } else 204 204 kobj->state_in_sysfs = 1; 205 205
+2
mm/hugetlb.c
··· 2791 2791 * so no worry about deadlock. 2792 2792 */ 2793 2793 page = pte_page(entry); 2794 + get_page(page); 2794 2795 if (page != pagecache_page) 2795 2796 lock_page(page); 2796 2797 ··· 2823 2822 } 2824 2823 if (page != pagecache_page) 2825 2824 unlock_page(page); 2825 + put_page(page); 2826 2826 2827 2827 out_mutex: 2828 2828 mutex_unlock(&hugetlb_instantiation_mutex);
+3 -3
mm/memcontrol.c
··· 2165 2165 if (action == CPU_ONLINE) 2166 2166 return NOTIFY_OK; 2167 2167 2168 - if ((action != CPU_DEAD) || action != CPU_DEAD_FROZEN) 2168 + if (action != CPU_DEAD && action != CPU_DEAD_FROZEN) 2169 2169 return NOTIFY_OK; 2170 2170 2171 2171 for_each_mem_cgroup(iter) ··· 3763 3763 goto try_to_free; 3764 3764 cond_resched(); 3765 3765 /* "ret" should also be checked to ensure all lists are empty. */ 3766 - } while (memcg->res.usage > 0 || ret); 3766 + } while (res_counter_read_u64(&memcg->res, RES_USAGE) > 0 || ret); 3767 3767 out: 3768 3768 css_put(&memcg->css); 3769 3769 return ret; ··· 3778 3778 lru_add_drain_all(); 3779 3779 /* try to free all pages in this cgroup */ 3780 3780 shrink = 1; 3781 - while (nr_retries && memcg->res.usage > 0) { 3781 + while (nr_retries && res_counter_read_u64(&memcg->res, RES_USAGE) > 0) { 3782 3782 int progress; 3783 3783 3784 3784 if (signal_pending(current)) {
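Editor's note: the first memcontrol.c hunk is a classic De Morgan bug. `(action != CPU_DEAD) || action != CPU_DEAD_FROZEN` is true for every value (nothing equals both constants at once), so the per-CPU drain below it never ran. A minimal sketch of the predicate (constants are illustrative, not the kernel's values):

```c
#include <assert.h>
#include <stdbool.h>

enum { CPU_ONLINE = 1, CPU_DEAD = 2, CPU_DEAD_FROZEN = 3 };

/* Old form: always true, since no action equals both constants. */
static bool should_ignore_buggy(int action)
{
    return (action != CPU_DEAD) || action != CPU_DEAD_FROZEN;
}

/* Fixed form: ignore only actions that are neither DEAD variant. */
static bool should_ignore_fixed(int action)
{
    return action != CPU_DEAD && action != CPU_DEAD_FROZEN;
}
```

With the fix, CPU_DEAD and CPU_DEAD_FROZEN fall through to the drain; everything else still returns early.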
+1 -6
mm/vmscan.c
··· 2107 2107 * with multiple processes reclaiming pages, the total 2108 2108 * freeing target can get unreasonably large. 2109 2109 */ 2110 - if (nr_reclaimed >= nr_to_reclaim) 2111 - nr_to_reclaim = 0; 2112 - else 2113 - nr_to_reclaim -= nr_reclaimed; 2114 - 2115 - if (!nr_to_reclaim && priority < DEF_PRIORITY) 2110 + if (nr_reclaimed >= nr_to_reclaim && priority < DEF_PRIORITY) 2116 2111 break; 2117 2112 } 2118 2113 blk_finish_plug(&plug);
+7
net/bluetooth/hci_core.c
··· 665 665 666 666 hci_req_lock(hdev); 667 667 668 + if (test_bit(HCI_UNREGISTER, &hdev->dev_flags)) { 669 + ret = -ENODEV; 670 + goto done; 671 + } 672 + 668 673 if (hdev->rfkill && rfkill_blocked(hdev->rfkill)) { 669 674 ret = -ERFKILL; 670 675 goto done; ··· 1853 1848 int i; 1854 1849 1855 1850 BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); 1851 + 1852 + set_bit(HCI_UNREGISTER, &hdev->dev_flags); 1856 1853 1857 1854 write_lock(&hci_dev_list_lock); 1858 1855 list_del(&hdev->list);
+3
net/bluetooth/l2cap_core.c
··· 1308 1308 if (chan->retry_count >= chan->remote_max_tx) { 1309 1309 l2cap_send_disconn_req(chan->conn, chan, ECONNABORTED); 1310 1310 l2cap_chan_unlock(chan); 1311 + l2cap_chan_put(chan); 1311 1312 return; 1312 1313 } 1313 1314 ··· 1317 1316 1318 1317 l2cap_send_rr_or_rnr(chan, L2CAP_CTRL_POLL); 1319 1318 l2cap_chan_unlock(chan); 1319 + l2cap_chan_put(chan); 1320 1320 } 1321 1321 1322 1322 static void l2cap_retrans_timeout(struct work_struct *work) ··· 1337 1335 l2cap_send_rr_or_rnr(chan, L2CAP_CTRL_POLL); 1338 1336 1339 1337 l2cap_chan_unlock(chan); 1338 + l2cap_chan_put(chan); 1340 1339 } 1341 1340 1342 1341 static void l2cap_drop_acked_frames(struct l2cap_chan *chan)
+3 -2
net/bluetooth/l2cap_sock.c
··· 82 82 } 83 83 84 84 if (la.l2_cid) 85 - err = l2cap_add_scid(chan, la.l2_cid); 85 + err = l2cap_add_scid(chan, __le16_to_cpu(la.l2_cid)); 86 86 else 87 87 err = l2cap_add_psm(chan, &la.l2_bdaddr, la.l2_psm); 88 88 ··· 123 123 if (la.l2_cid && la.l2_psm) 124 124 return -EINVAL; 125 125 126 - err = l2cap_chan_connect(chan, la.l2_psm, la.l2_cid, &la.l2_bdaddr); 126 + err = l2cap_chan_connect(chan, la.l2_psm, __le16_to_cpu(la.l2_cid), 127 + &la.l2_bdaddr); 127 128 if (err) 128 129 return err; 129 130
+9 -4
net/bluetooth/mgmt.c
··· 2523 2523 2524 2524 if (cp->val) { 2525 2525 type = PAGE_SCAN_TYPE_INTERLACED; 2526 - acp.interval = 0x0024; /* 22.5 msec page scan interval */ 2526 + 2527 + /* 22.5 msec page scan interval */ 2528 + acp.interval = __constant_cpu_to_le16(0x0024); 2527 2529 } else { 2528 2530 type = PAGE_SCAN_TYPE_STANDARD; /* default */ 2529 - acp.interval = 0x0800; /* default 1.28 sec page scan */ 2531 + 2532 + /* default 1.28 sec page scan */ 2533 + acp.interval = __constant_cpu_to_le16(0x0800); 2530 2534 } 2531 2535 2532 - acp.window = 0x0012; /* default 11.25 msec page scan window */ 2536 + /* default 11.25 msec page scan window */ 2537 + acp.window = __constant_cpu_to_le16(0x0012); 2533 2538 2534 2539 err = hci_send_cmd(hdev, HCI_OP_WRITE_PAGE_SCAN_ACTIVITY, sizeof(acp), 2535 2540 &acp); ··· 2941 2936 name, name_len); 2942 2937 2943 2938 if (dev_class && memcmp(dev_class, "\0\0\0", 3) != 0) 2944 - eir_len = eir_append_data(&ev->eir[eir_len], eir_len, 2939 + eir_len = eir_append_data(ev->eir, eir_len, 2945 2940 EIR_CLASS_OF_DEV, dev_class, 3); 2946 2941 2947 2942 put_unaligned_le16(eir_len, &ev->eir_len);
-81
net/bridge/br_multicast.c
··· 241 241 hlist_del_rcu(&mp->hlist[mdb->ver]); 242 242 mdb->size--; 243 243 244 - del_timer(&mp->query_timer); 245 244 call_rcu_bh(&mp->rcu, br_multicast_free_group); 246 245 247 246 out: ··· 270 271 rcu_assign_pointer(*pp, p->next); 271 272 hlist_del_init(&p->mglist); 272 273 del_timer(&p->timer); 273 - del_timer(&p->query_timer); 274 274 call_rcu_bh(&p->rcu, br_multicast_free_pg); 275 275 276 276 if (!mp->ports && !mp->mglist && ··· 505 507 return NULL; 506 508 } 507 509 508 - static void br_multicast_send_group_query(struct net_bridge_mdb_entry *mp) 509 - { 510 - struct net_bridge *br = mp->br; 511 - struct sk_buff *skb; 512 - 513 - skb = br_multicast_alloc_query(br, &mp->addr); 514 - if (!skb) 515 - goto timer; 516 - 517 - netif_rx(skb); 518 - 519 - timer: 520 - if (++mp->queries_sent < br->multicast_last_member_count) 521 - mod_timer(&mp->query_timer, 522 - jiffies + br->multicast_last_member_interval); 523 - } 524 - 525 - static void br_multicast_group_query_expired(unsigned long data) 526 - { 527 - struct net_bridge_mdb_entry *mp = (void *)data; 528 - struct net_bridge *br = mp->br; 529 - 530 - spin_lock(&br->multicast_lock); 531 - if (!netif_running(br->dev) || !mp->mglist || 532 - mp->queries_sent >= br->multicast_last_member_count) 533 - goto out; 534 - 535 - br_multicast_send_group_query(mp); 536 - 537 - out: 538 - spin_unlock(&br->multicast_lock); 539 - } 540 - 541 - static void br_multicast_send_port_group_query(struct net_bridge_port_group *pg) 542 - { 543 - struct net_bridge_port *port = pg->port; 544 - struct net_bridge *br = port->br; 545 - struct sk_buff *skb; 546 - 547 - skb = br_multicast_alloc_query(br, &pg->addr); 548 - if (!skb) 549 - goto timer; 550 - 551 - br_deliver(port, skb); 552 - 553 - timer: 554 - if (++pg->queries_sent < br->multicast_last_member_count) 555 - mod_timer(&pg->query_timer, 556 - jiffies + br->multicast_last_member_interval); 557 - } 558 - 559 - static void br_multicast_port_group_query_expired(unsigned long data) 560 
- { 561 - struct net_bridge_port_group *pg = (void *)data; 562 - struct net_bridge_port *port = pg->port; 563 - struct net_bridge *br = port->br; 564 - 565 - spin_lock(&br->multicast_lock); 566 - if (!netif_running(br->dev) || hlist_unhashed(&pg->mglist) || 567 - pg->queries_sent >= br->multicast_last_member_count) 568 - goto out; 569 - 570 - br_multicast_send_port_group_query(pg); 571 - 572 - out: 573 - spin_unlock(&br->multicast_lock); 574 - } 575 - 576 510 static struct net_bridge_mdb_entry *br_multicast_get_group( 577 511 struct net_bridge *br, struct net_bridge_port *port, 578 512 struct br_ip *group, int hash) ··· 620 690 mp->addr = *group; 621 691 setup_timer(&mp->timer, br_multicast_group_expired, 622 692 (unsigned long)mp); 623 - setup_timer(&mp->query_timer, br_multicast_group_query_expired, 624 - (unsigned long)mp); 625 693 626 694 hlist_add_head_rcu(&mp->hlist[mdb->ver], &mdb->mhash[hash]); 627 695 mdb->size++; ··· 673 745 p->next = *pp; 674 746 hlist_add_head(&p->mglist, &port->mglist); 675 747 setup_timer(&p->timer, br_multicast_port_group_expired, 676 - (unsigned long)p); 677 - setup_timer(&p->query_timer, br_multicast_port_group_query_expired, 678 748 (unsigned long)p); 679 749 680 750 rcu_assign_pointer(*pp, p); ··· 1217 1291 time_after(mp->timer.expires, time) : 1218 1292 try_to_del_timer_sync(&mp->timer) >= 0)) { 1219 1293 mod_timer(&mp->timer, time); 1220 - 1221 - mp->queries_sent = 0; 1222 - mod_timer(&mp->query_timer, now); 1223 1294 } 1224 1295 1225 1296 goto out; ··· 1233 1310 time_after(p->timer.expires, time) : 1234 1311 try_to_del_timer_sync(&p->timer) >= 0)) { 1235 1312 mod_timer(&p->timer, time); 1236 - 1237 - p->queries_sent = 0; 1238 - mod_timer(&p->query_timer, now); 1239 1313 } 1240 1314 1241 1315 break; ··· 1601 1681 hlist_for_each_entry_safe(mp, p, n, &mdb->mhash[i], 1602 1682 hlist[ver]) { 1603 1683 del_timer(&mp->timer); 1604 - del_timer(&mp->query_timer); 1605 1684 call_rcu_bh(&mp->rcu, br_multicast_free_group); 1606 1685 } 
1607 1686 }
-4
net/bridge/br_private.h
··· 82 82 struct hlist_node mglist; 83 83 struct rcu_head rcu; 84 84 struct timer_list timer; 85 - struct timer_list query_timer; 86 85 struct br_ip addr; 87 - u32 queries_sent; 88 86 }; 89 87 90 88 struct net_bridge_mdb_entry ··· 92 94 struct net_bridge_port_group __rcu *ports; 93 95 struct rcu_head rcu; 94 96 struct timer_list timer; 95 - struct timer_list query_timer; 96 97 struct br_ip addr; 97 98 bool mglist; 98 - u32 queries_sent; 99 99 }; 100 100 101 101 struct net_bridge_mdb_htable
+3 -1
net/core/skbuff.c
··· 952 952 goto adjust_others; 953 953 } 954 954 955 - data = kmalloc(size + sizeof(struct skb_shared_info), gfp_mask); 955 + data = kmalloc(size + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), 956 + gfp_mask); 956 957 if (!data) 957 958 goto nodata; 959 + size = SKB_WITH_OVERHEAD(ksize(data)); 958 960 959 961 /* Copy only real data... and, alas, header. This should be 960 962 * optimized for the cases when header is void.
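Editor's note: the skbuff.c hunk rounds the shared-info size up to a cache-line boundary before `kmalloc()`, so `skb_shared_info` lands where `skb_end_pointer()` expects it. A sketch of the alignment arithmetic, mirroring the kernel's `SKB_DATA_ALIGN` under the assumption `SMP_CACHE_BYTES` is 64:

```c
#include <stddef.h>

#define SMP_CACHE_BYTES 64 /* assumed cache-line size for illustration */

/* Round x up to the next multiple of SMP_CACHE_BYTES (power of two). */
#define SKB_DATA_ALIGN(x) \
    (((x) + (SMP_CACHE_BYTES - 1)) & ~(size_t)(SMP_CACHE_BYTES - 1))

/* Total allocation: data area plus the aligned shared-info footprint. */
static size_t skb_alloc_size(size_t size, size_t shinfo_size)
{
    return size + SKB_DATA_ALIGN(shinfo_size);
}
```

Without the alignment, the shared info could start mid cache line and the later `SKB_WITH_OVERHEAD(ksize(data))` accounting would disagree with it.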
+10 -2
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
··· 74 74 75 75 iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph); 76 76 if (iph == NULL) 77 - return -NF_DROP; 77 + return -NF_ACCEPT; 78 78 79 79 /* Conntrack defragments packets, we might still see fragments 80 80 * inside ICMP packets though. */ 81 81 if (iph->frag_off & htons(IP_OFFSET)) 82 - return -NF_DROP; 82 + return -NF_ACCEPT; 83 83 84 84 *dataoff = nhoff + (iph->ihl << 2); 85 85 *protonum = iph->protocol; 86 + 87 + /* Check bogus IP headers */ 88 + if (*dataoff > skb->len) { 89 + pr_debug("nf_conntrack_ipv4: bogus IPv4 packet: " 90 + "nhoff %u, ihl %u, skblen %u\n", 91 + nhoff, iph->ihl << 2, skb->len); 92 + return -NF_ACCEPT; 93 + } 86 94 87 95 return NF_ACCEPT; 88 96 }
+7 -8
net/ipv4/tcp.c
··· 701 701 skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp); 702 702 if (skb) { 703 703 if (sk_wmem_schedule(sk, skb->truesize)) { 704 + skb_reserve(skb, sk->sk_prot->max_header); 704 705 /* 705 706 * Make sure that we have exactly size bytes 706 707 * available to the caller, no more, no less. 707 708 */ 708 - skb_reserve(skb, skb_tailroom(skb) - size); 709 + skb->avail_size = size; 709 710 return skb; 710 711 } 711 712 __kfree_skb(skb); ··· 996 995 copy = seglen; 997 996 998 997 /* Where to copy to? */ 999 - if (skb_tailroom(skb) > 0) { 998 + if (skb_availroom(skb) > 0) { 1000 999 /* We have some space in skb head. Superb! */ 1001 - if (copy > skb_tailroom(skb)) 1002 - copy = skb_tailroom(skb); 1000 + copy = min_t(int, copy, skb_availroom(skb)); 1003 1001 err = skb_add_data_nocache(sk, skb, from, copy); 1004 1002 if (err) 1005 1003 goto do_fault; ··· 1452 1452 if ((available < target) && 1453 1453 (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) && 1454 1454 !sysctl_tcp_low_latency && 1455 - dma_find_channel(DMA_MEMCPY)) { 1455 + net_dma_find_channel()) { 1456 1456 preempt_enable_no_resched(); 1457 1457 tp->ucopy.pinned_list = 1458 1458 dma_pin_iovec_pages(msg->msg_iov, len); ··· 1667 1667 if (!(flags & MSG_TRUNC)) { 1668 1668 #ifdef CONFIG_NET_DMA 1669 1669 if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) 1670 - tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); 1670 + tp->ucopy.dma_chan = net_dma_find_channel(); 1671 1671 1672 1672 if (tp->ucopy.dma_chan) { 1673 1673 tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec( ··· 3302 3302 3303 3303 tcp_init_mem(&init_net); 3304 3304 /* Set per-socket limits to no more than 1/128 the pressure threshold */ 3305 - limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10); 3306 - limit = max(limit, 128UL); 3305 + limit = nr_free_buffer_pages() << (PAGE_SHIFT - 7); 3307 3306 max_share = min(4UL*1024*1024, limit); 3308 3307 3309 3308 sysctl_tcp_wmem[0] = SK_MEM_QUANTUM;
+6 -3
net/ipv4/tcp_input.c
··· 474 474 if (!win_dep) { 475 475 m -= (new_sample >> 3); 476 476 new_sample += m; 477 - } else if (m < new_sample) 478 - new_sample = m << 3; 477 + } else { 478 + m <<= 3; 479 + if (m < new_sample) 480 + new_sample = m; 481 + } 479 482 } else { 480 483 /* No previous measure. */ 481 484 new_sample = m << 3; ··· 5228 5225 return 0; 5229 5226 5230 5227 if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) 5231 - tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); 5228 + tp->ucopy.dma_chan = net_dma_find_channel(); 5232 5229 5233 5230 if (tp->ucopy.dma_chan && skb_csum_unnecessary(skb)) { 5234 5231
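Editor's note: the tcp_input.c hunk fixes a fixed-point units mismatch. `new_sample` stores the receiver RTT estimate scaled by 8 (`value << 3`), so a raw measurement `m` must be shifted into the same units before the minimum comparison. A sketch with illustrative values:

```c
#include <stdint.h>

/* Old form: compares raw m against the x8-scaled estimate, so a
 * measurement can "win" the minimum even when its scaled value is
 * larger than the current estimate. */
static uint32_t rtt_min_buggy(uint32_t new_sample, uint32_t m)
{
    if (m < new_sample)
        new_sample = m << 3;
    return new_sample;
}

/* Fixed form: scale first, then compare like with like. */
static uint32_t rtt_min_fixed(uint32_t new_sample, uint32_t m)
{
    m <<= 3;
    if (m < new_sample)
        new_sample = m;
    return new_sample;
}
```

With an estimate of 80 (10 in eighths), a measurement of 20 (160 scaled) wrongly replaces the estimate in the old form but correctly leaves it alone in the fixed one.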
+1 -1
net/ipv4/tcp_ipv4.c
··· 1730 1730 #ifdef CONFIG_NET_DMA 1731 1731 struct tcp_sock *tp = tcp_sk(sk); 1732 1732 if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) 1733 - tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); 1733 + tp->ucopy.dma_chan = net_dma_find_channel(); 1734 1734 if (tp->ucopy.dma_chan) 1735 1735 ret = tcp_v4_do_rcv(sk, skb); 1736 1736 else
+1 -1
net/ipv4/tcp_output.c
··· 2060 2060 /* Punt if not enough space exists in the first SKB for 2061 2061 * the data in the second 2062 2062 */ 2063 - if (skb->len > skb_tailroom(to)) 2063 + if (skb->len > skb_availroom(to)) 2064 2064 break; 2065 2065 2066 2066 if (after(TCP_SKB_CB(skb)->end_seq, tcp_wnd_end(tp)))
-14
net/ipv6/netfilter/ip6_tables.c
··· 78 78 79 79 Hence the start of any table is given by get_table() below. */ 80 80 81 - /* Check for an extension */ 82 - int 83 - ip6t_ext_hdr(u8 nexthdr) 84 - { 85 - return (nexthdr == IPPROTO_HOPOPTS) || 86 - (nexthdr == IPPROTO_ROUTING) || 87 - (nexthdr == IPPROTO_FRAGMENT) || 88 - (nexthdr == IPPROTO_ESP) || 89 - (nexthdr == IPPROTO_AH) || 90 - (nexthdr == IPPROTO_NONE) || 91 - (nexthdr == IPPROTO_DSTOPTS); 92 - } 93 - 94 81 /* Returns whether matches rule or not. */ 95 82 /* Performance critical - called for every packet */ 96 83 static inline bool ··· 2353 2366 EXPORT_SYMBOL(ip6t_register_table); 2354 2367 EXPORT_SYMBOL(ip6t_unregister_table); 2355 2368 EXPORT_SYMBOL(ip6t_do_table); 2356 - EXPORT_SYMBOL(ip6t_ext_hdr); 2357 2369 EXPORT_SYMBOL(ipv6_find_hdr); 2358 2370 2359 2371 module_init(ip6_tables_init);
+1 -1
net/ipv6/tcp_ipv6.c
··· 1645 1645 #ifdef CONFIG_NET_DMA 1646 1646 struct tcp_sock *tp = tcp_sk(sk); 1647 1647 if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) 1648 - tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); 1648 + tp->ucopy.dma_chan = net_dma_find_channel(); 1649 1649 if (tp->ucopy.dma_chan) 1650 1650 ret = tcp_v6_do_rcv(sk, skb); 1651 1651 else
+1 -2
net/mac80211/mlme.c
··· 3387 3387 */ 3388 3388 printk(KERN_DEBUG "%s: waiting for beacon from %pM\n", 3389 3389 sdata->name, ifmgd->bssid); 3390 - assoc_data->timeout = jiffies + 3391 - TU_TO_EXP_TIME(req->bss->beacon_interval); 3390 + assoc_data->timeout = TU_TO_EXP_TIME(req->bss->beacon_interval); 3392 3391 } else { 3393 3392 assoc_data->have_beacon = true; 3394 3393 assoc_data->sent_assoc = false;
+1 -1
net/netfilter/nf_conntrack_core.c
··· 1592 1592 return 0; 1593 1593 1594 1594 err_timeout: 1595 - nf_conntrack_timeout_fini(net); 1595 + nf_conntrack_ecache_fini(net); 1596 1596 err_ecache: 1597 1597 nf_conntrack_tstamp_fini(net); 1598 1598 err_tstamp:
+2 -2
net/netfilter/nf_conntrack_proto_tcp.c
··· 584 584 * Let's try to use the data from the packet. 585 585 */ 586 586 sender->td_end = end; 587 - win <<= sender->td_scale; 588 - sender->td_maxwin = (win == 0 ? 1 : win); 587 + swin = win << sender->td_scale; 588 + sender->td_maxwin = (swin == 0 ? 1 : swin); 589 589 sender->td_maxend = end + sender->td_maxwin; 590 590 /* 591 591 * We haven't seen traffic in the other direction yet
+2 -2
net/nfc/llcp/commands.c
··· 474 474 475 475 while (remaining_len > 0) { 476 476 477 - frag_len = min_t(u16, local->remote_miu, remaining_len); 477 + frag_len = min_t(size_t, local->remote_miu, remaining_len); 478 478 479 479 pr_debug("Fragment %zd bytes remaining %zd", 480 480 frag_len, remaining_len); ··· 497 497 release_sock(sk); 498 498 499 499 remaining_len -= frag_len; 500 - msg_ptr += len; 500 + msg_ptr += frag_len; 501 501 } 502 502 503 503 kfree(msg_data);
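Editor's note: the LLCP hunk fixes two fragmentation bugs: `min_t(u16, ...)` truncated a `size_t` remaining length to 16 bits before comparing, and the data pointer advanced by the total length instead of the fragment length. A sketch of the truncation half (`MIN_T` reproduces the kernel macro in spirit):

```c
#include <stddef.h>
#include <stdint.h>

/* Cast both operands to the given type before taking the minimum,
 * as the kernel's min_t does. */
#define MIN_T(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Old form: remaining is truncated to 16 bits first, e.g. 65537
 * becomes 1, yielding a bogus 1-byte fragment. */
static size_t frag_len_buggy(uint16_t miu, size_t remaining)
{
    return MIN_T(uint16_t, miu, remaining);
}

/* Fixed form: widen to size_t instead, preserving remaining. */
static size_t frag_len_fixed(uint16_t miu, size_t remaining)
{
    return MIN_T(size_t, miu, remaining);
}
```

With `miu = 2048` and 65537 bytes remaining, the old form returns 1; the fixed form returns 2048.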
+18 -13
net/wireless/nl80211.c
··· 1294 1294 goto bad_res; 1295 1295 } 1296 1296 1297 + if (!netif_running(netdev)) { 1298 + result = -ENETDOWN; 1299 + goto bad_res; 1300 + } 1301 + 1297 1302 nla_for_each_nested(nl_txq_params, 1298 1303 info->attrs[NL80211_ATTR_WIPHY_TXQ_PARAMS], 1299 1304 rem_txq_params) { ··· 6389 6384 .doit = nl80211_get_key, 6390 6385 .policy = nl80211_policy, 6391 6386 .flags = GENL_ADMIN_PERM, 6392 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6387 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6393 6388 NL80211_FLAG_NEED_RTNL, 6394 6389 }, 6395 6390 { ··· 6421 6416 .policy = nl80211_policy, 6422 6417 .flags = GENL_ADMIN_PERM, 6423 6418 .doit = nl80211_set_beacon, 6424 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6419 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6425 6420 NL80211_FLAG_NEED_RTNL, 6426 6421 }, 6427 6422 { ··· 6429 6424 .policy = nl80211_policy, 6430 6425 .flags = GENL_ADMIN_PERM, 6431 6426 .doit = nl80211_start_ap, 6432 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6427 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6433 6428 NL80211_FLAG_NEED_RTNL, 6434 6429 }, 6435 6430 { ··· 6437 6432 .policy = nl80211_policy, 6438 6433 .flags = GENL_ADMIN_PERM, 6439 6434 .doit = nl80211_stop_ap, 6440 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6435 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6441 6436 NL80211_FLAG_NEED_RTNL, 6442 6437 }, 6443 6438 { ··· 6453 6448 .doit = nl80211_set_station, 6454 6449 .policy = nl80211_policy, 6455 6450 .flags = GENL_ADMIN_PERM, 6456 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6451 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6457 6452 NL80211_FLAG_NEED_RTNL, 6458 6453 }, 6459 6454 { ··· 6469 6464 .doit = nl80211_del_station, 6470 6465 .policy = nl80211_policy, 6471 6466 .flags = GENL_ADMIN_PERM, 6472 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6467 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6473 6468 NL80211_FLAG_NEED_RTNL, 6474 6469 }, 6475 6470 { ··· 6502 6497 .doit = nl80211_del_mpath, 6503 
6498 .policy = nl80211_policy, 6504 6499 .flags = GENL_ADMIN_PERM, 6505 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6500 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6506 6501 NL80211_FLAG_NEED_RTNL, 6507 6502 }, 6508 6503 { ··· 6510 6505 .doit = nl80211_set_bss, 6511 6506 .policy = nl80211_policy, 6512 6507 .flags = GENL_ADMIN_PERM, 6513 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6508 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6514 6509 NL80211_FLAG_NEED_RTNL, 6515 6510 }, 6516 6511 { ··· 6536 6531 .doit = nl80211_get_mesh_config, 6537 6532 .policy = nl80211_policy, 6538 6533 /* can be retrieved by unprivileged users */ 6539 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6534 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6540 6535 NL80211_FLAG_NEED_RTNL, 6541 6536 }, 6542 6537 { ··· 6669 6664 .doit = nl80211_setdel_pmksa, 6670 6665 .policy = nl80211_policy, 6671 6666 .flags = GENL_ADMIN_PERM, 6672 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6667 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6673 6668 NL80211_FLAG_NEED_RTNL, 6674 6669 }, 6675 6670 { ··· 6677 6672 .doit = nl80211_setdel_pmksa, 6678 6673 .policy = nl80211_policy, 6679 6674 .flags = GENL_ADMIN_PERM, 6680 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6675 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6681 6676 NL80211_FLAG_NEED_RTNL, 6682 6677 }, 6683 6678 { ··· 6685 6680 .doit = nl80211_flush_pmksa, 6686 6681 .policy = nl80211_policy, 6687 6682 .flags = GENL_ADMIN_PERM, 6688 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6683 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6689 6684 NL80211_FLAG_NEED_RTNL, 6690 6685 }, 6691 6686 { ··· 6845 6840 .doit = nl80211_probe_client, 6846 6841 .policy = nl80211_policy, 6847 6842 .flags = GENL_ADMIN_PERM, 6848 - .internal_flags = NL80211_FLAG_NEED_NETDEV | 6843 + .internal_flags = NL80211_FLAG_NEED_NETDEV_UP | 6849 6844 NL80211_FLAG_NEED_RTNL, 6850 6845 }, 6851 6846 {
+4 -2
net/wireless/wext-core.c
··· 780 780 if (cmd == SIOCSIWENCODEEXT) { 781 781 struct iw_encode_ext *ee = (void *) extra; 782 782 783 - if (iwp->length < sizeof(*ee) + ee->key_len) 784 - return -EFAULT; 783 + if (iwp->length < sizeof(*ee) + ee->key_len) { 784 + err = -EFAULT; 785 + goto out; 786 + } 785 787 } 786 788 } 787 789
+2 -36
scripts/kconfig/confdata.c
··· 540 540 }; 541 541 542 542 /* 543 - * Generate the __enabled_CONFIG_* and __enabled_CONFIG_*_MODULE macros for 544 - * use by the IS_{ENABLED,BUILTIN,MODULE} macros. The _MODULE variant is 545 - * generated even for booleans so that the IS_ENABLED() macro works. 546 - */ 547 - static void 548 - header_print__enabled_symbol(FILE *fp, struct symbol *sym, const char *value, void *arg) 549 - { 550 - 551 - switch (sym->type) { 552 - case S_BOOLEAN: 553 - case S_TRISTATE: { 554 - fprintf(fp, "#define __enabled_" CONFIG_ "%s %d\n", 555 - sym->name, (*value == 'y')); 556 - fprintf(fp, "#define __enabled_" CONFIG_ "%s_MODULE %d\n", 557 - sym->name, (*value == 'm')); 558 - break; 559 - } 560 - default: 561 - break; 562 - } 563 - } 564 - 565 - static struct conf_printer header__enabled_printer_cb = 566 - { 567 - .print_symbol = header_print__enabled_symbol, 568 - .print_comment = header_print_comment, 569 - }; 570 - 571 - /* 572 543 * Tristate printer 573 544 * 574 545 * This printer is used when generating the `include/config/tristate.conf' file. ··· 920 949 conf_write_heading(out_h, &header_printer_cb, NULL); 921 950 922 951 for_all_symbols(i, sym) { 923 - if (!sym->name) 924 - continue; 925 - 926 952 sym_calc_value(sym); 927 - 928 - conf_write_symbol(out_h, sym, &header__enabled_printer_cb, NULL); 929 - 930 - if (!(sym->flags & SYMBOL_WRITE)) 953 + if (!(sym->flags & SYMBOL_WRITE) || !sym->name) 931 954 continue; 932 955 956 + /* write symbol to auto.conf, tristate and header files */ 933 957 conf_write_symbol(out, sym, &kconfig_printer_cb, (void *)1); 934 958 935 959 conf_write_symbol(tristate, sym, &tristate_printer_cb, (void *)1);
+5 -2
scripts/mod/modpost.c
··· 132 132 /* strip trailing .o */ 133 133 s = strrchr(p, '.'); 134 134 if (s != NULL) 135 - if (strcmp(s, ".o") == 0) 135 + if (strcmp(s, ".o") == 0) { 136 136 *s = '\0'; 137 + mod->is_dot_o = 1; 138 + } 137 139 138 140 /* add to list */ 139 141 mod->name = p; ··· 589 587 unsigned int crc; 590 588 enum export export; 591 589 592 - if (!is_vmlinux(mod->name) && strncmp(symname, "__ksymtab", 9) == 0) 590 + if ((!is_vmlinux(mod->name) || mod->is_dot_o) && 591 + strncmp(symname, "__ksymtab", 9) == 0) 593 592 export = export_from_secname(info, get_secindex(info, sym)); 594 593 else 595 594 export = export_from_sec(info, get_secindex(info, sym));
+1
scripts/mod/modpost.h
··· 113 113 int has_cleanup; 114 114 struct buffer dev_table_buf; 115 115 char srcversion[25]; 116 + int is_dot_o; 116 117 }; 117 118 118 119 struct elf_info {
+15 -4
security/smack/smack_lsm.c
··· 1939 1939 char *hostsp; 1940 1940 struct socket_smack *ssp = sk->sk_security; 1941 1941 struct smk_audit_info ad; 1942 - struct lsm_network_audit net; 1943 1942 1944 1943 rcu_read_lock(); 1945 1944 hostsp = smack_host_label(sap); 1946 1945 if (hostsp != NULL) { 1947 - sk_lbl = SMACK_UNLABELED_SOCKET; 1948 1946 #ifdef CONFIG_AUDIT 1947 + struct lsm_network_audit net; 1948 + 1949 1949 smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net); 1950 1950 ad.a.u.net->family = sap->sin_family; 1951 1951 ad.a.u.net->dport = sap->sin_port; 1952 1952 ad.a.u.net->v4info.daddr = sap->sin_addr.s_addr; 1953 1953 #endif 1954 + sk_lbl = SMACK_UNLABELED_SOCKET; 1954 1955 rc = smk_access(ssp->smk_out, hostsp, MAY_WRITE, &ad); 1955 1956 } else { 1956 1957 sk_lbl = SMACK_CIPSO_SOCKET; ··· 2810 2809 struct socket_smack *osp = other->sk_security; 2811 2810 struct socket_smack *nsp = newsk->sk_security; 2812 2811 struct smk_audit_info ad; 2813 - struct lsm_network_audit net; 2814 2812 int rc = 0; 2813 + 2814 + #ifdef CONFIG_AUDIT 2815 + struct lsm_network_audit net; 2815 2816 2816 2817 smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net); 2817 2818 smk_ad_setfield_u_net_sk(&ad, other); 2819 + #endif 2818 2820 2819 2821 if (!capable(CAP_MAC_OVERRIDE)) 2820 2822 rc = smk_access(ssp->smk_out, osp->smk_in, MAY_WRITE, &ad); ··· 2846 2842 struct socket_smack *ssp = sock->sk->sk_security; 2847 2843 struct socket_smack *osp = other->sk->sk_security; 2848 2844 struct smk_audit_info ad; 2849 - struct lsm_network_audit net; 2850 2845 int rc = 0; 2846 + 2847 + #ifdef CONFIG_AUDIT 2848 + struct lsm_network_audit net; 2851 2849 2852 2850 smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net); 2853 2851 smk_ad_setfield_u_net_sk(&ad, other->sk); 2852 + #endif 2854 2853 2855 2854 if (!capable(CAP_MAC_OVERRIDE)) 2856 2855 rc = smk_access(ssp->smk_out, osp->smk_in, MAY_WRITE, &ad); ··· 3000 2993 char *csp; 3001 2994 int rc; 3002 2995 struct smk_audit_info ad; 2996 + #ifdef CONFIG_AUDIT 3003 2997 
struct lsm_network_audit net; 2998 + #endif 3004 2999 if (sk->sk_family != PF_INET && sk->sk_family != PF_INET6) 3005 3000 return 0; 3006 3001 ··· 3165 3156 char *sp; 3166 3157 int rc; 3167 3158 struct smk_audit_info ad; 3159 + #ifdef CONFIG_AUDIT 3168 3160 struct lsm_network_audit net; 3161 + #endif 3169 3162 3170 3163 /* handle mapped IPv4 packets arriving via IPv6 sockets */ 3171 3164 if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
+4 -2
sound/isa/sscape.c
··· 1019 1019 irq_cfg = get_irq_config(sscape->type, irq[dev]); 1020 1020 if (irq_cfg == INVALID_IRQ) { 1021 1021 snd_printk(KERN_ERR "sscape: Invalid IRQ %d\n", irq[dev]); 1022 - return -ENXIO; 1022 + err = -ENXIO; 1023 + goto _release_dma; 1023 1024 } 1024 1025 1025 1026 mpu_irq_cfg = get_irq_config(sscape->type, mpu_irq[dev]); 1026 1027 if (mpu_irq_cfg == INVALID_IRQ) { 1027 1028 snd_printk(KERN_ERR "sscape: Invalid IRQ %d\n", mpu_irq[dev]); 1028 - return -ENXIO; 1029 + err = -ENXIO; 1030 + goto _release_dma; 1029 1031 } 1030 1032 1031 1033 /*
+6 -2
sound/oss/msnd_pinnacle.c
··· 1294 1294 1295 1295 static int upload_dsp_code(void) 1296 1296 { 1297 + int ret = 0; 1298 + 1297 1299 msnd_outb(HPBLKSEL_0, dev.io + HP_BLKS); 1298 1300 #ifndef HAVE_DSPCODEH 1299 1301 INITCODESIZE = mod_firmware_load(INITCODEFILE, &INITCODE); ··· 1314 1312 memcpy_toio(dev.base, PERMCODE, PERMCODESIZE); 1315 1313 if (msnd_upload_host(&dev, INITCODE, INITCODESIZE) < 0) { 1316 1314 printk(KERN_WARNING LOGNAME ": Error uploading to DSP\n"); 1317 - return -ENODEV; 1315 + ret = -ENODEV; 1316 + goto out; 1318 1317 } 1319 1318 #ifdef HAVE_DSPCODEH 1320 1319 printk(KERN_INFO LOGNAME ": DSP firmware uploaded (resident)\n"); ··· 1323 1320 printk(KERN_INFO LOGNAME ": DSP firmware uploaded\n"); 1324 1321 #endif 1325 1322 1323 + out: 1326 1324 #ifndef HAVE_DSPCODEH 1327 1325 vfree(INITCODE); 1328 1326 vfree(PERMCODE); 1329 1327 #endif 1330 1328 1331 - return 0; 1329 + return ret; 1332 1330 } 1333 1331 1334 1332 #ifdef MSND_CLASSIC
+2 -2
sound/pci/Kconfig
··· 2 2 3 3 config SND_TEA575X 4 4 tristate 5 - depends on SND_FM801_TEA575X_BOOL || SND_ES1968_RADIO || RADIO_SF16FMR2 6 - default SND_FM801 || SND_ES1968 || RADIO_SF16FMR2 5 + depends on SND_FM801_TEA575X_BOOL || SND_ES1968_RADIO || RADIO_SF16FMR2 || RADIO_MAXIRADIO 6 + default SND_FM801 || SND_ES1968 || RADIO_SF16FMR2 || RADIO_MAXIRADIO 7 7 8 8 menuconfig SND_PCI 9 9 bool "PCI sound devices"
+2 -2
sound/pci/asihpi/hpi_internal.h
··· 1 1 /****************************************************************************** 2 2 3 3 AudioScience HPI driver 4 - Copyright (C) 1997-2011 AudioScience Inc. <support@audioscience.com> 4 + Copyright (C) 1997-2012 AudioScience Inc. <support@audioscience.com> 5 5 6 6 This program is free software; you can redistribute it and/or modify 7 7 it under the terms of version 2 of the GNU General Public License as ··· 42 42 If this function succeeds, then HpiOs_LockedMem_GetVirtAddr() and 43 43 HpiOs_LockedMem_GetPyhsAddr() will always succed on the returned handle. 44 44 */ 45 - int hpios_locked_mem_alloc(struct consistent_dma_area *p_locked_mem_handle, 45 + u16 hpios_locked_mem_alloc(struct consistent_dma_area *p_locked_mem_handle, 46 46 /**< memory handle */ 47 47 u32 size, /**< Size in bytes to allocate */ 48 48 struct pci_dev *p_os_reference
+5 -5
sound/pci/asihpi/hpios.c
··· 1 1 /****************************************************************************** 2 2 3 3 AudioScience HPI driver 4 - Copyright (C) 1997-2011 AudioScience Inc. <support@audioscience.com> 4 + Copyright (C) 1997-2012 AudioScience Inc. <support@audioscience.com> 5 5 6 6 This program is free software; you can redistribute it and/or modify 7 7 it under the terms of version 2 of the GNU General Public License as ··· 39 39 40 40 } 41 41 42 - /** Allocated an area of locked memory for bus master DMA operations. 42 + /** Allocate an area of locked memory for bus master DMA operations. 43 43 44 - On error, return -ENOMEM, and *pMemArea.size = 0 44 + If allocation fails, return 1, and *pMemArea.size = 0 45 45 */ 46 - int hpios_locked_mem_alloc(struct consistent_dma_area *p_mem_area, u32 size, 46 + u16 hpios_locked_mem_alloc(struct consistent_dma_area *p_mem_area, u32 size, 47 47 struct pci_dev *pdev) 48 48 { 49 49 /*?? any benefit in using managed dmam_alloc_coherent? */ ··· 62 62 HPI_DEBUG_LOG(WARNING, 63 63 "failed to allocate %d bytes locked memory\n", size); 64 64 p_mem_area->size = 0; 65 - return -ENOMEM; 65 + return 1; 66 66 } 67 67 } 68 68
+3
sound/pci/hda/hda_codec.h
··· 851 851 unsigned int pin_amp_workaround:1; /* pin out-amp takes index 852 852 * (e.g. Conexant codecs) 853 853 */ 854 + unsigned int single_adc_amp:1; /* adc in-amp takes no index 855 + * (e.g. CX20549 codec) 856 + */ 854 857 unsigned int no_sticky_stream:1; /* no sticky-PCM stream assignment */ 855 858 unsigned int pins_shutup:1; /* pins are shut up */ 856 859 unsigned int no_trigger_sense:1; /* don't trigger at pin-sensing */
+3 -3
sound/pci/hda/hda_eld.c
··· 418 418 else 419 419 buf2[0] = '\0'; 420 420 421 - printk(KERN_INFO "HDMI: supports coding type %s:" 421 + _snd_printd(SND_PR_VERBOSE, "HDMI: supports coding type %s:" 422 422 " channels = %d, rates =%s%s\n", 423 423 cea_audio_coding_type_names[a->format], 424 424 a->channels, ··· 442 442 { 443 443 int i; 444 444 445 - printk(KERN_INFO "HDMI: detected monitor %s at connection type %s\n", 445 + _snd_printd(SND_PR_VERBOSE, "HDMI: detected monitor %s at connection type %s\n", 446 446 e->monitor_name, 447 447 eld_connection_type_names[e->conn_type]); 448 448 449 449 if (e->spk_alloc) { 450 450 char buf[SND_PRINT_CHANNEL_ALLOCATION_ADVISED_BUFSIZE]; 451 451 snd_print_channel_allocation(e->spk_alloc, buf, sizeof(buf)); 452 - printk(KERN_INFO "HDMI: available speakers:%s\n", buf); 452 + _snd_printd(SND_PR_VERBOSE, "HDMI: available speakers:%s\n", buf); 453 453 } 454 454 455 455 for (i = 0; i < e->sad_count; i++)
+10 -3
sound/pci/hda/hda_proc.c
··· 651 651 snd_iprintf(buffer, " Amp-In caps: "); 652 652 print_amp_caps(buffer, codec, nid, HDA_INPUT); 653 653 snd_iprintf(buffer, " Amp-In vals: "); 654 - print_amp_vals(buffer, codec, nid, HDA_INPUT, 655 - wid_caps & AC_WCAP_STEREO, 656 - wid_type == AC_WID_PIN ? 1 : conn_len); 654 + if (wid_type == AC_WID_PIN || 655 + (codec->single_adc_amp && 656 + wid_type == AC_WID_AUD_IN)) 657 + print_amp_vals(buffer, codec, nid, HDA_INPUT, 658 + wid_caps & AC_WCAP_STEREO, 659 + 1); 660 + else 661 + print_amp_vals(buffer, codec, nid, HDA_INPUT, 662 + wid_caps & AC_WCAP_STEREO, 663 + conn_len); 657 664 } 658 665 if (wid_caps & AC_WCAP_OUT_AMP) { 659 666 snd_iprintf(buffer, " Amp-Out caps: ");
+41 -67
sound/pci/hda/patch_conexant.c
··· 141 141 unsigned int hp_laptop:1; 142 142 unsigned int asus:1; 143 143 unsigned int pin_eapd_ctrls:1; 144 - unsigned int single_adc_amp:1; 145 144 146 145 unsigned int adc_switching:1; 147 146 ··· 686 687 static const struct hda_input_mux cxt5045_capture_source = { 687 688 .num_items = 2, 688 689 .items = { 689 - { "IntMic", 0x1 }, 690 - { "ExtMic", 0x2 }, 690 + { "Internal Mic", 0x1 }, 691 + { "Mic", 0x2 }, 691 692 } 692 693 }; 693 694 694 695 static const struct hda_input_mux cxt5045_capture_source_benq = { 695 - .num_items = 5, 696 + .num_items = 4, 696 697 .items = { 697 - { "IntMic", 0x1 }, 698 - { "ExtMic", 0x2 }, 699 - { "LineIn", 0x3 }, 700 - { "CD", 0x4 }, 701 - { "Mixer", 0x0 }, 698 + { "Internal Mic", 0x1 }, 699 + { "Mic", 0x2 }, 700 + { "Line", 0x3 }, 701 + { "Mixer", 0x0 }, 702 702 } 703 703 }; 704 704 705 705 static const struct hda_input_mux cxt5045_capture_source_hp530 = { 706 706 .num_items = 2, 707 707 .items = { 708 - { "ExtMic", 0x1 }, 709 - { "IntMic", 0x2 }, 708 + { "Mic", 0x1 }, 709 + { "Internal Mic", 0x2 }, 710 710 } 711 711 }; 712 712 ··· 796 798 } 797 799 798 800 static const struct snd_kcontrol_new cxt5045_mixers[] = { 799 - HDA_CODEC_VOLUME("Internal Mic Capture Volume", 0x1a, 0x01, HDA_INPUT), 800 - HDA_CODEC_MUTE("Internal Mic Capture Switch", 0x1a, 0x01, HDA_INPUT), 801 - HDA_CODEC_VOLUME("Mic Capture Volume", 0x1a, 0x02, HDA_INPUT), 802 - HDA_CODEC_MUTE("Mic Capture Switch", 0x1a, 0x02, HDA_INPUT), 801 + HDA_CODEC_VOLUME("Capture Volume", 0x1a, 0x00, HDA_INPUT), 802 + HDA_CODEC_MUTE("Capture Switch", 0x1a, 0x0, HDA_INPUT), 803 803 HDA_CODEC_VOLUME("PCM Playback Volume", 0x17, 0x0, HDA_INPUT), 804 804 HDA_CODEC_MUTE("PCM Playback Switch", 0x17, 0x0, HDA_INPUT), 805 805 HDA_CODEC_VOLUME("Internal Mic Playback Volume", 0x17, 0x1, HDA_INPUT), ··· 818 822 }; 819 823 820 824 static const struct snd_kcontrol_new cxt5045_benq_mixers[] = { 821 - HDA_CODEC_VOLUME("CD Capture Volume", 0x1a, 0x04, HDA_INPUT), 822 - HDA_CODEC_MUTE("CD 
Capture Switch", 0x1a, 0x04, HDA_INPUT), 823 - HDA_CODEC_VOLUME("CD Playback Volume", 0x17, 0x4, HDA_INPUT), 824 - HDA_CODEC_MUTE("CD Playback Switch", 0x17, 0x4, HDA_INPUT), 825 - 826 - HDA_CODEC_VOLUME("Line In Capture Volume", 0x1a, 0x03, HDA_INPUT), 827 - HDA_CODEC_MUTE("Line In Capture Switch", 0x1a, 0x03, HDA_INPUT), 828 - HDA_CODEC_VOLUME("Line In Playback Volume", 0x17, 0x3, HDA_INPUT), 829 - HDA_CODEC_MUTE("Line In Playback Switch", 0x17, 0x3, HDA_INPUT), 830 - 831 - HDA_CODEC_VOLUME("Mixer Capture Volume", 0x1a, 0x0, HDA_INPUT), 832 - HDA_CODEC_MUTE("Mixer Capture Switch", 0x1a, 0x0, HDA_INPUT), 825 + HDA_CODEC_VOLUME("Line Playback Volume", 0x17, 0x3, HDA_INPUT), 826 + HDA_CODEC_MUTE("Line Playback Switch", 0x17, 0x3, HDA_INPUT), 833 827 834 828 {} 835 829 }; 836 830 837 831 static const struct snd_kcontrol_new cxt5045_mixers_hp530[] = { 838 - HDA_CODEC_VOLUME("Internal Mic Capture Volume", 0x1a, 0x02, HDA_INPUT), 839 - HDA_CODEC_MUTE("Internal Mic Capture Switch", 0x1a, 0x02, HDA_INPUT), 840 - HDA_CODEC_VOLUME("Mic Capture Volume", 0x1a, 0x01, HDA_INPUT), 841 - HDA_CODEC_MUTE("Mic Capture Switch", 0x1a, 0x01, HDA_INPUT), 832 + HDA_CODEC_VOLUME("Capture Volume", 0x1a, 0x00, HDA_INPUT), 833 + HDA_CODEC_MUTE("Capture Switch", 0x1a, 0x0, HDA_INPUT), 842 834 HDA_CODEC_VOLUME("PCM Playback Volume", 0x17, 0x0, HDA_INPUT), 843 835 HDA_CODEC_MUTE("PCM Playback Switch", 0x17, 0x0, HDA_INPUT), 844 836 HDA_CODEC_VOLUME("Internal Mic Playback Volume", 0x17, 0x2, HDA_INPUT), ··· 930 946 /* Output controls */ 931 947 HDA_CODEC_VOLUME("Speaker Playback Volume", 0x10, 0x0, HDA_OUTPUT), 932 948 HDA_CODEC_MUTE("Speaker Playback Switch", 0x10, 0x0, HDA_OUTPUT), 933 - HDA_CODEC_VOLUME("Node 11 Playback Volume", 0x11, 0x0, HDA_OUTPUT), 934 - HDA_CODEC_MUTE("Node 11 Playback Switch", 0x11, 0x0, HDA_OUTPUT), 935 - HDA_CODEC_VOLUME("Node 12 Playback Volume", 0x12, 0x0, HDA_OUTPUT), 936 - HDA_CODEC_MUTE("Node 12 Playback Switch", 0x12, 0x0, HDA_OUTPUT), 949 + 
HDA_CODEC_VOLUME("HP-OUT Playback Volume", 0x11, 0x0, HDA_OUTPUT), 950 + HDA_CODEC_MUTE("HP-OUT Playback Switch", 0x11, 0x0, HDA_OUTPUT), 951 + HDA_CODEC_VOLUME("LINE1 Playback Volume", 0x12, 0x0, HDA_OUTPUT), 952 + HDA_CODEC_MUTE("LINE1 Playback Switch", 0x12, 0x0, HDA_OUTPUT), 937 953 938 954 /* Modes for retasking pin widgets */ 939 955 CXT_PIN_MODE("HP-OUT pin mode", 0x11, CXT_PIN_DIR_INOUT), ··· 944 960 945 961 /* Loopback mixer controls */ 946 962 947 - HDA_CODEC_VOLUME("Mixer-1 Volume", 0x17, 0x0, HDA_INPUT), 948 - HDA_CODEC_MUTE("Mixer-1 Switch", 0x17, 0x0, HDA_INPUT), 949 - HDA_CODEC_VOLUME("Mixer-2 Volume", 0x17, 0x1, HDA_INPUT), 950 - HDA_CODEC_MUTE("Mixer-2 Switch", 0x17, 0x1, HDA_INPUT), 951 - HDA_CODEC_VOLUME("Mixer-3 Volume", 0x17, 0x2, HDA_INPUT), 952 - HDA_CODEC_MUTE("Mixer-3 Switch", 0x17, 0x2, HDA_INPUT), 953 - HDA_CODEC_VOLUME("Mixer-4 Volume", 0x17, 0x3, HDA_INPUT), 954 - HDA_CODEC_MUTE("Mixer-4 Switch", 0x17, 0x3, HDA_INPUT), 955 - HDA_CODEC_VOLUME("Mixer-5 Volume", 0x17, 0x4, HDA_INPUT), 956 - HDA_CODEC_MUTE("Mixer-5 Switch", 0x17, 0x4, HDA_INPUT), 963 + HDA_CODEC_VOLUME("PCM Volume", 0x17, 0x0, HDA_INPUT), 964 + HDA_CODEC_MUTE("PCM Switch", 0x17, 0x0, HDA_INPUT), 965 + HDA_CODEC_VOLUME("MIC1 pin Volume", 0x17, 0x1, HDA_INPUT), 966 + HDA_CODEC_MUTE("MIC1 pin Switch", 0x17, 0x1, HDA_INPUT), 967 + HDA_CODEC_VOLUME("LINE1 pin Volume", 0x17, 0x2, HDA_INPUT), 968 + HDA_CODEC_MUTE("LINE1 pin Switch", 0x17, 0x2, HDA_INPUT), 969 + HDA_CODEC_VOLUME("HP-OUT pin Volume", 0x17, 0x3, HDA_INPUT), 970 + HDA_CODEC_MUTE("HP-OUT pin Switch", 0x17, 0x3, HDA_INPUT), 971 + HDA_CODEC_VOLUME("CD pin Volume", 0x17, 0x4, HDA_INPUT), 972 + HDA_CODEC_MUTE("CD pin Switch", 0x17, 0x4, HDA_INPUT), 957 973 { 958 974 .iface = SNDRV_CTL_ELEM_IFACE_MIXER, 959 975 .name = "Input Source", ··· 962 978 .put = conexant_mux_enum_put, 963 979 }, 964 980 /* Audio input controls */ 965 - HDA_CODEC_VOLUME("Input-1 Volume", 0x1a, 0x0, HDA_INPUT), 966 - HDA_CODEC_MUTE("Input-1 Switch", 
0x1a, 0x0, HDA_INPUT), 967 - HDA_CODEC_VOLUME("Input-2 Volume", 0x1a, 0x1, HDA_INPUT), 968 - HDA_CODEC_MUTE("Input-2 Switch", 0x1a, 0x1, HDA_INPUT), 969 - HDA_CODEC_VOLUME("Input-3 Volume", 0x1a, 0x2, HDA_INPUT), 970 - HDA_CODEC_MUTE("Input-3 Switch", 0x1a, 0x2, HDA_INPUT), 971 - HDA_CODEC_VOLUME("Input-4 Volume", 0x1a, 0x3, HDA_INPUT), 972 - HDA_CODEC_MUTE("Input-4 Switch", 0x1a, 0x3, HDA_INPUT), 973 - HDA_CODEC_VOLUME("Input-5 Volume", 0x1a, 0x4, HDA_INPUT), 974 - HDA_CODEC_MUTE("Input-5 Switch", 0x1a, 0x4, HDA_INPUT), 981 + HDA_CODEC_VOLUME("Capture Volume", 0x1a, 0x0, HDA_INPUT), 982 + HDA_CODEC_MUTE("Capture Switch", 0x1a, 0x0, HDA_INPUT), 975 983 { } /* end */ 976 984 }; 977 985 ··· 985 1009 {0x13, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT}, 986 1010 {0x18, AC_VERB_SET_DIGI_CONVERT_1, 0}, 987 1011 988 - /* Start with output sum widgets muted and their output gains at min */ 989 - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, 990 - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(1)}, 991 - 992 1012 /* Unmute retasking pin widget output buffers since the default 993 1013 * state appears to be output. 
As the pin mode is changed by the 994 1014 * user the pin mode control will take care of enabling the pin's ··· 999 1027 /* Set ADC connection select to match default mixer setting (mic1 1000 1028 * pin) 1001 1029 */ 1002 - {0x1a, AC_VERB_SET_CONNECT_SEL, 0x00}, 1003 - {0x17, AC_VERB_SET_CONNECT_SEL, 0x00}, 1030 + {0x1a, AC_VERB_SET_CONNECT_SEL, 0x01}, 1031 + {0x17, AC_VERB_SET_CONNECT_SEL, 0x01}, 1004 1032 1005 1033 /* Mute all inputs to mixer widget (even unconnected ones) */ 1006 - {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, /* Mixer pin */ 1034 + {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, /* Mixer */ 1007 1035 {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(1)}, /* Mic1 pin */ 1008 1036 {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(2)}, /* Line pin */ 1009 1037 {0x17, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(3)}, /* HP pin */ ··· 1082 1110 if (!spec) 1083 1111 return -ENOMEM; 1084 1112 codec->spec = spec; 1085 - codec->pin_amp_workaround = 1; 1113 + codec->single_adc_amp = 1; 1086 1114 1087 1115 spec->multiout.max_channels = 2; 1088 1116 spec->multiout.num_dacs = ARRAY_SIZE(cxt5045_dac_nids); ··· 4192 4220 int idx = get_input_connection(codec, adc_nid, nid); 4193 4221 if (idx < 0) 4194 4222 continue; 4195 - if (spec->single_adc_amp) 4223 + if (codec->single_adc_amp) 4196 4224 idx = 0; 4197 4225 return cx_auto_add_volume_idx(codec, label, pfx, 4198 4226 cidx, adc_nid, HDA_INPUT, idx); ··· 4247 4275 if (cidx < 0) 4248 4276 continue; 4249 4277 input_conn[i] = spec->imux_info[i].adc; 4250 - if (!spec->single_adc_amp) 4278 + if (!codec->single_adc_amp) 4251 4279 input_conn[i] |= cidx << 8; 4252 4280 if (i > 0 && input_conn[i] != input_conn[0]) 4253 4281 multi_connection = 1; ··· 4438 4466 if (!spec) 4439 4467 return -ENOMEM; 4440 4468 codec->spec = spec; 4441 - codec->pin_amp_workaround = 1; 4442 4469 4443 4470 switch (codec->vendor_id) { 4444 4471 case 0x14f15045: 4445 - spec->single_adc_amp = 1; 4472 + codec->single_adc_amp = 1; 4446 4473 break; 4447 
4474 case 0x14f15051: 4448 4475 add_cx5051_fake_mutes(codec); 4476 + codec->pin_amp_workaround = 1; 4449 4477 break; 4478 + default: 4479 + codec->pin_amp_workaround = 1; 4450 4480 } 4451 4481 4452 4482 apply_pin_fixup(codec, cxt_fixups, cxt_pincfg_tbl);
+4 -5
sound/pci/hda/patch_hdmi.c
··· 757 757 struct hdmi_spec *spec = codec->spec; 758 758 int tag = res >> AC_UNSOL_RES_TAG_SHIFT; 759 759 int pin_nid; 760 - int pd = !!(res & AC_UNSOL_RES_PD); 761 - int eldv = !!(res & AC_UNSOL_RES_ELDV); 762 760 int pin_idx; 763 761 struct hda_jack_tbl *jack; 764 762 ··· 766 768 pin_nid = jack->nid; 767 769 jack->jack_dirty = 1; 768 770 769 - printk(KERN_INFO 771 + _snd_printd(SND_PR_VERBOSE, 770 772 "HDMI hot plug event: Codec=%d Pin=%d Presence_Detect=%d ELD_Valid=%d\n", 771 - codec->addr, pin_nid, pd, eldv); 773 + codec->addr, pin_nid, 774 + !!(res & AC_UNSOL_RES_PD), !!(res & AC_UNSOL_RES_ELDV)); 772 775 773 776 pin_idx = pin_nid_to_pin_index(spec, pin_nid); 774 777 if (pin_idx < 0) ··· 991 992 if (eld->monitor_present) 992 993 eld_valid = !!(present & AC_PINSENSE_ELDV); 993 994 994 - printk(KERN_INFO 995 + _snd_printd(SND_PR_VERBOSE, 995 996 "HDMI status: Codec=%d Pin=%d Presence_Detect=%d ELD_Valid=%d\n", 996 997 codec->addr, pin_nid, eld->monitor_present, eld_valid); 997 998
+26 -10
sound/pci/hda/patch_realtek.c
··· 3398 3398 for (;;) { 3399 3399 badness = fill_and_eval_dacs(codec, fill_hardwired, 3400 3400 fill_mio_first); 3401 - if (badness < 0) 3401 + if (badness < 0) { 3402 + kfree(best_cfg); 3402 3403 return badness; 3404 + } 3403 3405 debug_badness("==> lo_type=%d, wired=%d, mio=%d, badness=0x%x\n", 3404 3406 cfg->line_out_type, fill_hardwired, fill_mio_first, 3405 3407 badness); ··· 3436 3434 cfg->line_out_type = AUTO_PIN_SPEAKER_OUT; 3437 3435 fill_hardwired = true; 3438 3436 continue; 3439 - } 3437 + } 3440 3438 if (cfg->hp_outs > 0 && 3441 3439 cfg->line_out_type == AUTO_PIN_SPEAKER_OUT) { 3442 3440 cfg->speaker_outs = cfg->line_outs; ··· 3450 3448 cfg->line_out_type = AUTO_PIN_HP_OUT; 3451 3449 fill_hardwired = true; 3452 3450 continue; 3453 - } 3451 + } 3454 3452 break; 3455 3453 } 3456 3454 ··· 4425 4423 static int alc880_parse_auto_config(struct hda_codec *codec) 4426 4424 { 4427 4425 static const hda_nid_t alc880_ignore[] = { 0x1d, 0 }; 4428 - static const hda_nid_t alc880_ssids[] = { 0x15, 0x1b, 0x14, 0 }; 4426 + static const hda_nid_t alc880_ssids[] = { 0x15, 0x1b, 0x14, 0 }; 4429 4427 return alc_parse_auto_config(codec, alc880_ignore, alc880_ssids); 4430 4428 } 4431 4429 ··· 5271 5269 { 0x16, 0x99130111 }, /* CLFE speaker */ 5272 5270 { 0x17, 0x99130112 }, /* surround speaker */ 5273 5271 { } 5274 - } 5272 + }, 5273 + .chained = true, 5274 + .chain_id = ALC882_FIXUP_GPIO1, 5275 5275 }, 5276 5276 [ALC882_FIXUP_ACER_ASPIRE_8930G] = { 5277 5277 .type = ALC_FIXUP_PINS, ··· 5316 5312 { 0x20, AC_VERB_SET_COEF_INDEX, 0x07 }, 5317 5313 { 0x20, AC_VERB_SET_PROC_COEF, 0x3050 }, 5318 5314 { } 5319 - } 5315 + }, 5316 + .chained = true, 5317 + .chain_id = ALC882_FIXUP_GPIO1, 5320 5318 }, 5321 5319 [ALC885_FIXUP_MACPRO_GPIO] = { 5322 5320 .type = ALC_FIXUP_FUNC, ··· 5365 5359 ALC882_FIXUP_ACER_ASPIRE_4930G), 5366 5360 SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210), 5367 5361 SND_PCI_QUIRK(0x1025, 0x0259, "Acer Aspire 5935", 
ALC889_FIXUP_DAC_ROUTE), 5362 + SND_PCI_QUIRK(0x1025, 0x026b, "Acer Aspire 8940G", ALC882_FIXUP_ACER_ASPIRE_8930G), 5368 5363 SND_PCI_QUIRK(0x1025, 0x0296, "Acer Aspire 7736z", ALC882_FIXUP_ACER_ASPIRE_7736), 5369 5364 SND_PCI_QUIRK(0x1043, 0x13c2, "Asus A7M", ALC882_FIXUP_EAPD), 5370 5365 SND_PCI_QUIRK(0x1043, 0x1873, "ASUS W90V", ALC882_FIXUP_ASUS_W90V), ··· 5391 5384 SND_PCI_QUIRK(0x106b, 0x3f00, "Macbook 5,1", ALC889_FIXUP_IMAC91_VREF), 5392 5385 SND_PCI_QUIRK(0x106b, 0x4000, "MacbookPro 5,1", ALC889_FIXUP_IMAC91_VREF), 5393 5386 SND_PCI_QUIRK(0x106b, 0x4100, "Macmini 3,1", ALC889_FIXUP_IMAC91_VREF), 5387 + SND_PCI_QUIRK(0x106b, 0x4200, "Mac Pro 5,1", ALC885_FIXUP_MACPRO_GPIO), 5394 5388 SND_PCI_QUIRK(0x106b, 0x4600, "MacbookPro 5,2", ALC889_FIXUP_IMAC91_VREF), 5395 5389 SND_PCI_QUIRK(0x106b, 0x4900, "iMac 9,1 Aluminum", ALC889_FIXUP_IMAC91_VREF), 5396 5390 SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC889_FIXUP_IMAC91_VREF), ··· 5404 5396 SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD), 5405 5397 SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530), 5406 5398 SND_PCI_QUIRK(0x8086, 0x0022, "DX58SO", ALC889_FIXUP_COEF), 5399 + {} 5400 + }; 5401 + 5402 + static const struct alc_model_fixup alc882_fixup_models[] = { 5403 + {.id = ALC882_FIXUP_ACER_ASPIRE_4930G, .name = "acer-aspire-4930g"}, 5404 + {.id = ALC882_FIXUP_ACER_ASPIRE_8930G, .name = "acer-aspire-8930g"}, 5405 + {.id = ALC883_FIXUP_ACER_EAPD, .name = "acer-aspire"}, 5407 5406 {} 5408 5407 }; 5409 5408 ··· 5454 5439 if (err < 0) 5455 5440 goto error; 5456 5441 5457 - alc_pick_fixup(codec, NULL, alc882_fixup_tbl, alc882_fixups); 5442 + alc_pick_fixup(codec, alc882_fixup_models, alc882_fixup_tbl, 5443 + alc882_fixups); 5458 5444 alc_apply_fixup(codec, ALC_FIXUP_ACT_PRE_PROBE); 5459 5445 5460 5446 alc_auto_parse_customize_define(codec); ··· 6095 6079 * Basically the device should work as is without the fixup table. 
6096 6080 * If BIOS doesn't give a proper info, enable the corresponding 6097 6081 * fixup entry. 6098 - */ 6082 + */ 6099 6083 SND_PCI_QUIRK(0x1043, 0x8330, "ASUS Eeepc P703 P900A", 6100 6084 ALC269_FIXUP_AMIC), 6101 6085 SND_PCI_QUIRK(0x1043, 0x1013, "ASUS N61Da", ALC269_FIXUP_AMIC), ··· 6312 6296 { 6313 6297 if (action == ALC_FIXUP_ACT_PRE_PROBE) 6314 6298 codec->no_jack_detect = 1; 6315 - } 6299 + } 6316 6300 6317 6301 static const struct alc_fixup alc861_fixups[] = { 6318 6302 [ALC861_FIXUP_FSC_AMILO_PI1505] = { ··· 6730 6714 * Basically the device should work as is without the fixup table. 6731 6715 * If BIOS doesn't give a proper info, enable the corresponding 6732 6716 * fixup entry. 6733 - */ 6717 + */ 6734 6718 SND_PCI_QUIRK(0x1043, 0x1000, "ASUS N50Vm", ALC662_FIXUP_ASUS_MODE1), 6735 6719 SND_PCI_QUIRK(0x1043, 0x1092, "ASUS NB", ALC662_FIXUP_ASUS_MODE3), 6736 6720 SND_PCI_QUIRK(0x1043, 0x1173, "ASUS K73Jn", ALC662_FIXUP_ASUS_MODE1),
+1 -1
sound/soc/codecs/ak4642.c
··· 140 140 * min : 0xFE : -115.0 dB 141 141 * mute: 0xFF 142 142 */ 143 - static const DECLARE_TLV_DB_SCALE(out_tlv, -11500, 50, 1); 143 + static const DECLARE_TLV_DB_SCALE(out_tlv, -11550, 50, 1); 144 144 145 145 static const struct snd_kcontrol_new ak4642_snd_controls[] = { 146 146
+13 -12
sound/soc/codecs/sgtl5000.c
··· 143 143 } 144 144 145 145 /* 146 - * using codec assist to small pop, hp_powerup or lineout_powerup 147 - * should stay setting until vag_powerup is fully ramped down, 148 - * vag fully ramped down require 400ms. 146 + * As manual described, ADC/DAC only works when VAG powerup, 147 + * So enabled VAG before ADC/DAC up. 148 + * In power down case, we need wait 400ms when vag fully ramped down. 149 149 */ 150 - static int small_pop_event(struct snd_soc_dapm_widget *w, 150 + static int power_vag_event(struct snd_soc_dapm_widget *w, 151 151 struct snd_kcontrol *kcontrol, int event) 152 152 { 153 153 switch (event) { ··· 156 156 SGTL5000_VAG_POWERUP, SGTL5000_VAG_POWERUP); 157 157 break; 158 158 159 - case SND_SOC_DAPM_PRE_PMD: 159 + case SND_SOC_DAPM_POST_PMD: 160 160 snd_soc_update_bits(w->codec, SGTL5000_CHIP_ANA_POWER, 161 161 SGTL5000_VAG_POWERUP, 0); 162 162 msleep(400); ··· 201 201 mic_bias_event, 202 202 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), 203 203 204 - SND_SOC_DAPM_PGA_E("HP", SGTL5000_CHIP_ANA_POWER, 4, 0, NULL, 0, 205 - small_pop_event, 206 - SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD), 207 - SND_SOC_DAPM_PGA_E("LO", SGTL5000_CHIP_ANA_POWER, 0, 0, NULL, 0, 208 - small_pop_event, 209 - SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD), 204 + SND_SOC_DAPM_PGA("HP", SGTL5000_CHIP_ANA_POWER, 4, 0, NULL, 0), 205 + SND_SOC_DAPM_PGA("LO", SGTL5000_CHIP_ANA_POWER, 0, 0, NULL, 0), 210 206 211 207 SND_SOC_DAPM_MUX("Capture Mux", SND_SOC_NOPM, 0, 0, &adc_mux), 212 208 SND_SOC_DAPM_MUX("Headphone Mux", SND_SOC_NOPM, 0, 0, &dac_mux), ··· 217 221 0, SGTL5000_CHIP_DIG_POWER, 218 222 1, 0), 219 223 220 - SND_SOC_DAPM_ADC("ADC", "Capture", SGTL5000_CHIP_ANA_POWER, 1, 0), 224 + SND_SOC_DAPM_SUPPLY("VAG_POWER", SGTL5000_CHIP_ANA_POWER, 7, 0, 225 + power_vag_event, 226 + SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD), 221 227 228 + SND_SOC_DAPM_ADC("ADC", "Capture", SGTL5000_CHIP_ANA_POWER, 1, 0), 222 229 SND_SOC_DAPM_DAC("DAC", "Playback", SGTL5000_CHIP_ANA_POWER, 3, 
0), 223 230 }; 224 231 ··· 230 231 {"Capture Mux", "LINE_IN", "LINE_IN"}, /* line_in --> adc_mux */ 231 232 {"Capture Mux", "MIC_IN", "MIC_IN"}, /* mic_in --> adc_mux */ 232 233 234 + {"ADC", NULL, "VAG_POWER"}, 233 235 {"ADC", NULL, "Capture Mux"}, /* adc_mux --> adc */ 234 236 {"AIFOUT", NULL, "ADC"}, /* adc --> i2s_out */ 235 237 238 + {"DAC", NULL, "VAG_POWER"}, 236 239 {"DAC", NULL, "AIFIN"}, /* i2s-->dac,skip audio mux */ 237 240 {"Headphone Mux", "DAC", "DAC"}, /* dac --> hp_mux */ 238 241 {"LO", NULL, "DAC"}, /* dac --> line_out */
+4 -1
sound/soc/fsl/imx-audmux.c
··· 73 73 if (!buf) 74 74 return -ENOMEM; 75 75 76 + if (!audmux_base) 77 + return -ENOSYS; 78 + 76 79 if (audmux_clk) 77 80 clk_prepare_enable(audmux_clk); 78 81 ··· 155 152 return; 156 153 } 157 154 158 - for (i = 1; i < 8; i++) { 155 + for (i = 0; i < MX31_AUDMUX_PORT6_SSI_PINS_6 + 1; i++) { 159 156 snprintf(buf, sizeof(buf), "ssi%d", i); 160 157 if (!debugfs_create_file(buf, 0444, audmux_debugfs_root, 161 158 (void *)i, &audmux_debugfs_fops))
+1
sound/soc/pxa/pxa2xx-i2s.c
··· 17 17 #include <linux/delay.h> 18 18 #include <linux/clk.h> 19 19 #include <linux/platform_device.h> 20 + #include <linux/io.h> 20 21 #include <sound/core.h> 21 22 #include <sound/pcm.h> 22 23 #include <sound/initval.h>
+2
sound/soc/soc-core.c
··· 1072 1072 snd_soc_dapm_new_dai_widgets(&platform->dapm, dai); 1073 1073 } 1074 1074 1075 + platform->dapm.idle_bias_off = 1; 1076 + 1075 1077 if (driver->probe) { 1076 1078 ret = driver->probe(platform); 1077 1079 if (ret < 0) {
+1
tools/perf/builtin-sched.c
··· 17 17 #include "util/debug.h" 18 18 19 19 #include <sys/prctl.h> 20 + #include <sys/resource.h> 20 21 21 22 #include <semaphore.h> 22 23 #include <pthread.h>
+35 -1
tools/perf/builtin-top.c
··· 42 42 #include "util/debug.h" 43 43 44 44 #include <assert.h> 45 + #include <elf.h> 45 46 #include <fcntl.h> 46 47 47 48 #include <stdio.h> ··· 60 59 #include <sys/prctl.h> 61 60 #include <sys/wait.h> 62 61 #include <sys/uio.h> 62 + #include <sys/utsname.h> 63 63 #include <sys/mman.h> 64 64 65 65 #include <linux/unistd.h> ··· 164 162 symbol__annotate_zero_histograms(sym); 165 163 } 166 164 165 + static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip) 166 + { 167 + struct utsname uts; 168 + int err = uname(&uts); 169 + 170 + ui__warning("Out of bounds address found:\n\n" 171 + "Addr: %" PRIx64 "\n" 172 + "DSO: %s %c\n" 173 + "Map: %" PRIx64 "-%" PRIx64 "\n" 174 + "Symbol: %" PRIx64 "-%" PRIx64 " %c %s\n" 175 + "Arch: %s\n" 176 + "Kernel: %s\n" 177 + "Tools: %s\n\n" 178 + "Not all samples will be on the annotation output.\n\n" 179 + "Please report to linux-kernel@vger.kernel.org\n", 180 + ip, map->dso->long_name, dso__symtab_origin(map->dso), 181 + map->start, map->end, sym->start, sym->end, 182 + sym->binding == STB_GLOBAL ? 'g' : 183 + sym->binding == STB_LOCAL ? 'l' : 'w', sym->name, 184 + err ? "[unknown]" : uts.machine, 185 + err ? 
"[unknown]" : uts.release, perf_version_string); 186 + if (use_browser <= 0) 187 + sleep(5); 188 + 189 + map->erange_warned = true; 190 + } 191 + 167 192 static void perf_top__record_precise_ip(struct perf_top *top, 168 193 struct hist_entry *he, 169 194 int counter, u64 ip) 170 195 { 171 196 struct annotation *notes; 172 197 struct symbol *sym; 198 + int err; 173 199 174 200 if (he == NULL || he->ms.sym == NULL || 175 201 ((top->sym_filter_entry == NULL || ··· 219 189 } 220 190 221 191 ip = he->ms.map->map_ip(he->ms.map, ip); 222 - symbol__inc_addr_samples(sym, he->ms.map, counter, ip); 192 + err = symbol__inc_addr_samples(sym, he->ms.map, counter, ip); 223 193 224 194 pthread_mutex_unlock(&notes->lock); 195 + 196 + if (err == -ERANGE && !he->ms.map->erange_warned) 197 + ui__warn_map_erange(he->ms.map, sym, ip); 225 198 } 226 199 227 200 static void perf_top__show_details(struct perf_top *top) ··· 648 615 649 616 /* Tag samples to be skipped. */ 650 617 static const char *skip_symbols[] = { 618 + "intel_idle", 651 619 "default_idle", 652 620 "native_safe_halt", 653 621 "cpu_idle",
+6 -10
tools/perf/util/annotate.c
··· 64 64 65 65 pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map->unmap_ip(map, addr)); 66 66 67 - if (addr > sym->end) 68 - return 0; 67 + if (addr < sym->start || addr > sym->end) 68 + return -ERANGE; 69 69 70 70 offset = addr - sym->start; 71 71 h = annotation__histogram(notes, evidx); ··· 561 561 { 562 562 struct annotation *notes = symbol__annotation(sym); 563 563 struct sym_hist *h = annotation__histogram(notes, evidx); 564 - struct objdump_line *pos; 565 - int len = sym->end - sym->start; 564 + int len = sym->end - sym->start, offset; 566 565 567 566 h->sum = 0; 568 - 569 - list_for_each_entry(pos, &notes->src->source, node) { 570 - if (pos->offset != -1 && pos->offset < len) { 571 - h->addr[pos->offset] = h->addr[pos->offset] * 7 / 8; 572 - h->sum += h->addr[pos->offset]; 573 - } 567 + for (offset = 0; offset < len; ++offset) { 568 + h->addr[offset] = h->addr[offset] * 7 / 8; 569 + h->sum += h->addr[offset]; 574 570 } 575 571 } 576 572
+12
tools/perf/util/hist.c
··· 256 256 if (!cmp) { 257 257 he->period += period; 258 258 ++he->nr_events; 259 + 260 + /* If the map of an existing hist_entry has 261 + * become out-of-date due to an exec() or 262 + * similar, update it. Otherwise we will 263 + * mis-adjust symbol addresses when computing 264 + * the history counter to increment. 265 + */ 266 + if (he->ms.map != entry->ms.map) { 267 + he->ms.map = entry->ms.map; 268 + if (he->ms.map) 269 + he->ms.map->referenced = true; 270 + } 259 271 goto out; 260 272 } 261 273
+1
tools/perf/util/map.c
··· 38 38 RB_CLEAR_NODE(&self->rb_node); 39 39 self->groups = NULL; 40 40 self->referenced = false; 41 + self->erange_warned = false; 41 42 } 42 43 43 44 struct map *map__new(struct list_head *dsos__list, u64 start, u64 len,
+1
tools/perf/util/map.h
··· 33 33 u64 end; 34 34 u8 /* enum map_type */ type; 35 35 bool referenced; 36 + bool erange_warned; 36 37 u32 priv; 37 38 u64 pgoff; 38 39
+10 -2
tools/perf/util/session.c
··· 826 826 { 827 827 const u8 cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK; 828 828 829 - if (cpumode == PERF_RECORD_MISC_GUEST_KERNEL && perf_guest) 830 - return perf_session__find_machine(session, event->ip.pid); 829 + if (cpumode == PERF_RECORD_MISC_GUEST_KERNEL && perf_guest) { 830 + u32 pid; 831 + 832 + if (event->header.type == PERF_RECORD_MMAP) 833 + pid = event->mmap.pid; 834 + else 835 + pid = event->ip.pid; 836 + 837 + return perf_session__find_machine(session, pid); 838 + } 831 839 832 840 return perf_session__find_host_machine(session); 833 841 }
+3
tools/perf/util/ui/browsers/hists.c
··· 125 125 126 126 static bool map_symbol__toggle_fold(struct map_symbol *self) 127 127 { 128 + if (!self) 129 + return false; 130 + 128 131 if (!self->has_children) 129 132 return false; 130 133