Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v3.3-rc3' as we've got several bugfixes in there which are colliding annoyingly with development.

Linux 3.3-rc3

.. the number of the half-beast?

Conflicts:
sound/soc/codecs/wm5100.c
sound/soc/codecs/wm8994.c

+1966 -2272
+10 -2
Documentation/DocBook/device-drivers.tmpl
···
102 102  !Iinclude/linux/device.h
103 103  </sect1>
104 104  <sect1><title>Device Drivers Base</title>
    105 +!Idrivers/base/init.c
105 106  !Edrivers/base/driver.c
106 107  !Edrivers/base/core.c
    108 +!Edrivers/base/syscore.c
107 109  !Edrivers/base/class.c
    110 +!Idrivers/base/node.c
108 111  !Edrivers/base/firmware_class.c
109 112  !Edrivers/base/transport_class.c
110 113  <!-- Cannot be included, because
···
116 113  exceed allowed 44 characters maximum
117 114  X!Edrivers/base/attribute_container.c
118 115  -->
119     -!Edrivers/base/sys.c
    116 +!Edrivers/base/dd.c
120 117  <!--
121 118  X!Edrivers/base/interface.c
122 119  -->
123 120  !Iinclude/linux/platform_device.h
124 121  !Edrivers/base/platform.c
125 122  !Edrivers/base/bus.c
    123 +</sect1>
    124 +<sect1><title>Device Drivers DMA Management</title>
    125 +!Edrivers/base/dma-buf.c
    126 +!Edrivers/base/dma-coherent.c
    127 +!Edrivers/base/dma-mapping.c
126 128  </sect1>
127 129  <sect1><title>Device Drivers Power Management</title>
128 130  !Edrivers/base/power/main.c
···
227 219  <chapter id="uart16x50">
228 220  <title>16x50 UART Driver</title>
229 221  !Edrivers/tty/serial/serial_core.c
230     -!Edrivers/tty/serial/8250.c
    222 +!Edrivers/tty/serial/8250/8250.c
231 223  </chapter>
232 224 
233 225  <chapter id="fbdev">
+64 -8
Documentation/input/event-codes.txt
···
 17  17  class/input/event*/device/capabilities/, and the properties of a device are
 18  18  provided in class/input/event*/device/properties.
 19  19 
 20     -Types:
 21     -==========
 22     -Types are groupings of codes under a logical input construct. Each type has a
 23     -set of applicable codes to be used in generating events. See the Codes section
 24     -for details on valid codes for each type.
     20 +Event types:
     21 +===========
     22 +Event types are groupings of codes under a logical input construct. Each
     23 +type has a set of applicable codes to be used in generating events. See the
     24 +Codes section for details on valid codes for each type.
 25  25 
 26  26  * EV_SYN:
 27  27    - Used as markers to separate events. Events may be separated in time or in
···
 63  63  * EV_FF_STATUS:
 64  64    - Used to receive force feedback device status.
 65  65 
 66     -Codes:
 67     -==========
 68     -Codes define the precise type of event.
     66 +Event codes:
     67 +===========
     68 +Event codes define the precise type of event.
 69  69 
 70  70  EV_SYN:
 71  71  ----------
···
220 220  EV_PWR events are a special type of event used specifically for power
221 221  mangement. Its usage is not well defined. To be addressed later.
222 222 
    223 +Device properties:
    224 +=================
    225 +Normally, userspace sets up an input device based on the data it emits,
    226 +i.e., the event types. In the case of two devices emitting the same event
    227 +types, additional information can be provided in the form of device
    228 +properties.
    229 +
    230 +INPUT_PROP_DIRECT + INPUT_PROP_POINTER:
    231 +--------------------------------------
    232 +The INPUT_PROP_DIRECT property indicates that device coordinates should be
    233 +directly mapped to screen coordinates (not taking into account trivial
    234 +transformations, such as scaling, flipping and rotating). Non-direct input
    235 +devices require non-trivial transformation, such as absolute to relative
    236 +transformation for touchpads. Typical direct input devices: touchscreens,
    237 +drawing tablets; non-direct devices: touchpads, mice.
    238 +
    239 +The INPUT_PROP_POINTER property indicates that the device is not transposed
    240 +on the screen and thus requires use of an on-screen pointer to trace user's
    241 +movements. Typical pointer devices: touchpads, tablets, mice; non-pointer
    242 +device: touchscreen.
    243 +
    244 +If neither INPUT_PROP_DIRECT or INPUT_PROP_POINTER are set, the property is
    245 +considered undefined and the device type should be deduced in the
    246 +traditional way, using emitted event types.
    247 +
    248 +INPUT_PROP_BUTTONPAD:
    249 +--------------------
    250 +For touchpads where the button is placed beneath the surface, such that
    251 +pressing down on the pad causes a button click, this property should be
    252 +set. Common in clickpad notebooks and macbooks from 2009 and onwards.
    253 +
    254 +Originally, the buttonpad property was coded into the bcm5974 driver
    255 +version field under the name integrated button. For backwards
    256 +compatibility, both methods need to be checked in userspace.
    257 +
    258 +INPUT_PROP_SEMI_MT:
    259 +------------------
    260 +Some touchpads, most common between 2008 and 2011, can detect the presence
    261 +of multiple contacts without resolving the individual positions; only the
    262 +number of contacts and a rectangular shape is known. For such
    263 +touchpads, the semi-mt property should be set.
    264 +
    265 +Depending on the device, the rectangle may enclose all touches, like a
    266 +bounding box, or just some of them, for instance the two most recent
    267 +touches. The diversity makes the rectangle of limited use, but some
    268 +gestures can normally be extracted from it.
    269 +
    270 +If INPUT_PROP_SEMI_MT is not set, the device is assumed to be a true MT
    271 +device.
    272 +
223 273  Guidelines:
224 274  ==========
225 275  The guidelines below ensure proper single-touch and multi-finger functionality.
···
290 240  BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch
291 241  contact. BTN_TOOL_<name> events should be reported where possible.
292 242 
    243 +For new hardware, INPUT_PROP_DIRECT should be set.
    244 +
293 245  Trackpads:
294 246  ----------
295 247  Legacy trackpads that only provide relative position information must report
···
301 249  location of the touch. BTN_TOUCH should be used to report when a touch is active
302 250  on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should
303 251  be used to report the number of touches active on the trackpad.
    252 +
    253 +For new hardware, INPUT_PROP_POINTER should be set.
304 254 
305 255  Tablets:
306 256  ----------
···
314 260  BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use
315 261  meaningful buttons, like BTN_FORWARD, unless the button is labeled for that
316 262  purpose on the device.
    263 +
    264 +For new hardware, both INPUT_PROP_DIRECT and INPUT_PROP_POINTER should be set.
+2
Documentation/sysctl/kernel.txt
···
601 601  instead of using the one provided by the hardware.
602 602    512 - A kernel warning has occurred.
603 603   1024 - A module from drivers/staging was loaded.
    604 + 2048 - The system is working around a severe firmware bug.
    605 + 4096 - An out-of-tree module has been loaded.
604 606 
605 607  ==============================================================
606 608 
+19 -31
MAINTAINERS
···
 159  159  F:	drivers/net/ethernet/realtek/r8169.c
 160  160 
 161  161  8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
 162      -M:	Greg Kroah-Hartman <gregkh@suse.de>
      162 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 163  163  L:	linux-serial@vger.kernel.org
 164  164  W:	http://serial.sourceforge.net
 165  165  S:	Maintained
···
 788  788  F:	arch/arm/mach-mx*/
 789  789  F:	arch/arm/mach-imx/
 790  790  F:	arch/arm/plat-mxc/
 791      -
 792      -ARM/FREESCALE IMX51
 793      -M:	Amit Kucheria <amit.kucheria@canonical.com>
 794      -L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 795      -S:	Maintained
 796      -F:	arch/arm/mach-mx5/
 797  791 
 798  792  ARM/FREESCALE IMX6
 799  793  M:	Shawn Guo <shawn.guo@linaro.org>
···
1777 1783 
1778 1784  CHAR and MISC DRIVERS
1779 1785  M:	Arnd Bergmann <arnd@arndb.de>
1780      -M:	Greg Kroah-Hartman <greg@kroah.com>
     1786 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1781 1787  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
1782      -S:	Maintained
     1788 +S:	Supported
1783 1789  F:	drivers/char/*
1784 1790  F:	drivers/misc/*
···
2281 2287 
2282 2288  DOCUMENTATION
2283 2289  M:	Randy Dunlap <rdunlap@xenotime.net>
2284 2290  L:	linux-doc@vger.kernel.org
2284      -T:	quilt http://userweb.kernel.org/~rdunlap/kernel-doc-patches/current/
     2290 +T:	quilt http://xenotime.net/kernel-doc-patches/current/
2285 2291  S:	Maintained
2286 2292  F:	Documentation/
···
2314 2320  F:	Documentation/blockdev/drbd/
2315 2321 
2316 2322  DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS
2317      -M:	Greg Kroah-Hartman <gregkh@suse.de>
     2323 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2318 2324  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6.git
2319 2325  S:	Supported
2320 2326  F:	Documentation/kobject.txt
···
3986 3992  L:	lguest@lists.ozlabs.org
3987 3993  W:	http://lguest.ozlabs.org/
3988 3994  S:	Odd Fixes
3989      -F:	Documentation/virtual/lguest/
     3995 +F:	arch/x86/include/asm/lguest*.h
3990 3996  F:	arch/x86/lguest/
3991 3997  F:	drivers/lguest/
3992 3998  F:	include/linux/lguest*.h
3993      -F:	arch/x86/include/asm/lguest*.h
     3999 +F:	tools/lguest/
3994 4000 
3995 4001  LINUX FOR IBM pSERIES (RS/6000)
3996 4002  M:	Paul Mackerras <paulus@au.ibm.com>
···
4130 4136  W:	http://www.linux-ntfs.org/content/view/19/37/
4131 4137  S:	Maintained
4132 4138  F:	Documentation/ldm.txt
4133      -F:	fs/partitions/ldm.*
     4139 +F:	block/partitions/ldm.*
4134 4140 
4135 4141  LogFS
4136 4142  M:	Joern Engel <joern@logfs.org>
···
5627 5633  S:	Supported
5628 5634  F:	arch/s390/
5629 5635  F:	drivers/s390/
5630      -F:	fs/partitions/ibm.c
     5636 +F:	block/partitions/ibm.c
5631 5637  F:	Documentation/s390/
5632 5638  F:	Documentation/DocBook/s390*
···
6270 6276  F:	arch/alpha/kernel/srm_env.c
6271 6277 
6272 6278  STABLE BRANCH
6273      -M:	Greg Kroah-Hartman <greg@kroah.com>
     6279 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
6274 6280  L:	stable@vger.kernel.org
6275      -S:	Maintained
     6281 +S:	Supported
6276 6282 
6277 6283  STAGING SUBSYSTEM
6278      -M:	Greg Kroah-Hartman <gregkh@suse.de>
     6284 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
6279 6285  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
6280 6286  L:	devel@driverdev.osuosl.org
6281      -S:	Maintained
     6287 +S:	Supported
6282 6288  F:	drivers/staging/
6283 6289 
6284 6290  STAGING - AGERE HERMES II and II.5 WIRELESS DRIVERS
···
6389 6395  M:	Omar Ramirez Luna <omar.ramirez@ti.com>
6390 6396  S:	Odd Fixes
6391 6397  F:	drivers/staging/tidspbridge/
6392      -
6393      -STAGING - TRIDENT TVMASTER TMxxxx USB VIDEO CAPTURE DRIVERS
6394      -L:	linux-media@vger.kernel.org
6395      -S:	Odd Fixes
6396      -F:	drivers/staging/tm6000/
6397 6398 
6398 6399  STAGING - USB ENE SM/MS CARD READER DRIVER
6399 6400  M:	Al Cho <acho@novell.com>
···
6658 6669  K:	^Subject:.*(?i)trivial
6659 6670 
6660 6671  TTY LAYER
6661      -M:	Greg Kroah-Hartman <gregkh@suse.de>
6662      -S:	Maintained
     6672 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
     6673 +S:	Supported
6663 6674  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6.git
6664 6675  F:	drivers/tty/
6665 6676  F:	drivers/tty/serial/serial_core.c
···
6947 6958  F:	drivers/usb/serial/digi_acceleport.c
6948 6959 
6949 6960  USB SERIAL DRIVER
6950      -M:	Greg Kroah-Hartman <gregkh@suse.de>
     6961 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
6951 6962  L:	linux-usb@vger.kernel.org
6952 6963  S:	Supported
6953 6964  F:	Documentation/usb/usb-serial.txt
···
6962 6973  F:	drivers/usb/serial/empeg.c
6963 6974 
6964 6975  USB SERIAL KEYSPAN DRIVER
6965      -M:	Greg Kroah-Hartman <greg@kroah.com>
     6976 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
6966 6977  L:	linux-usb@vger.kernel.org
6967      -W:	http://www.kroah.com/linux/
6968 6978  S:	Maintained
6969 6979  F:	drivers/usb/serial/*keyspan*
···
6991 7003  F:	drivers/media/video/sn9c102/
6992 7004 
6993 7005  USB SUBSYSTEM
6994      -M:	Greg Kroah-Hartman <gregkh@suse.de>
     7006 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
6995 7007  L:	linux-usb@vger.kernel.org
6996 7008  W:	http://www.linux-usb.org
6997 7009  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb-2.6.git
···
7078 7090 
7079 7091  USERSPACE I/O (UIO)
7080 7092  M:	"Hans J. Koch" <hjk@hansjkoch.de>
7081      -M:	Greg Kroah-Hartman <gregkh@suse.de>
     7093 +M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
7082 7094  S:	Maintained
7083 7095  F:	Documentation/DocBook/uio-howto.tmpl
7084 7096  F:	drivers/uio/
+1 -1
Makefile
···
1 1  VERSION = 3
2 2  PATCHLEVEL = 3
3 3  SUBLEVEL = 0
4   -EXTRAVERSION = -rc2
  4 +EXTRAVERSION = -rc3
5 5  NAME = Saber-toothed Squirrel
6 6 
7 7  # *DOCUMENTATION*
+9 -1
arch/arm/include/asm/tlb.h
···
198 198  				  unsigned long addr)
199 199  {
200 200  	pgtable_page_dtor(pte);
201     -	tlb_add_flush(tlb, addr);
    201 +
    202 +	/*
    203 +	 * With the classic ARM MMU, a pte page has two corresponding pmd
    204 +	 * entries, each covering 1MB.
    205 +	 */
    206 +	addr &= PMD_MASK;
    207 +	tlb_add_flush(tlb, addr + SZ_1M - PAGE_SIZE);
    208 +	tlb_add_flush(tlb, addr + SZ_1M);
    209 +
202 210  	tlb_remove_page(tlb, pte);
203 211  }
204 212 
+1 -1
arch/arm/kernel/entry-armv.S
···
790 790  	smp_dmb	arm
791 791  	rsbs	r0, r3, #0	@ set returned val and C flag
792 792  	ldmfd	sp!, {r4, r5, r6, r7}
793     -	bx	lr
    793 +	usr_ret	lr
794 794 
795 795  #elif !defined(CONFIG_SMP)
796 796 
+28
arch/arm/kernel/perf_event_v7.c
···
469 469  		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
470 470  	},
471 471  },
    472 +[C(NODE)] = {
    473 +	[C(OP_READ)] = {
    474 +		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
    475 +		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
    476 +	},
    477 +	[C(OP_WRITE)] = {
    478 +		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
    479 +		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
    480 +	},
    481 +	[C(OP_PREFETCH)] = {
    482 +		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
    483 +		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
    484 +	},
    485 +},
472 486  };
473 487 
474 488  /*
···
587 573  	[C(OP_WRITE)] = {
588 574  		[C(RESULT_ACCESS)]	= ARMV7_PERFCTR_PC_BRANCH_PRED,
589 575  		[C(RESULT_MISS)]	= ARMV7_PERFCTR_PC_BRANCH_MIS_PRED,
    576 +	},
    577 +	[C(OP_PREFETCH)] = {
    578 +		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
    579 +		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
    580 +	},
    581 +},
    582 +[C(NODE)] = {
    583 +	[C(OP_READ)] = {
    584 +		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
    585 +		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
    586 +	},
    587 +	[C(OP_WRITE)] = {
    588 +		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
    589 +		[C(RESULT_MISS)]	= CACHE_OP_UNSUPPORTED,
590 590  	},
591 591  	[C(OP_PREFETCH)] = {
592 592  		[C(RESULT_ACCESS)]	= CACHE_OP_UNSUPPORTED,
+5 -3
arch/arm/kernel/ptrace.c
···
699 699  {
700 700  	int ret;
701 701  	struct thread_info *thread = task_thread_info(target);
702     -	struct vfp_hard_struct new_vfp = thread->vfpstate.hard;
    702 +	struct vfp_hard_struct new_vfp;
703 703  	const size_t user_fpregs_offset = offsetof(struct user_vfp, fpregs);
704 704  	const size_t user_fpscr_offset = offsetof(struct user_vfp, fpscr);
    705 +
    706 +	vfp_sync_hwstate(thread);
    707 +	new_vfp = thread->vfpstate.hard;
705 708 
706 709  	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
707 710  				 &new_vfp.fpregs,
···
726 723  	if (ret)
727 724  		return ret;
728 725 
729     -	vfp_sync_hwstate(thread);
730     -	thread->vfpstate.hard = new_vfp;
731 726  	vfp_flush_hwstate(thread);
    727 +	thread->vfpstate.hard = new_vfp;
732 728 
733 729  	return 0;
734 730  }
+2 -3
arch/arm/kernel/signal.c
···
227 227  	if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
228 228  		return -EINVAL;
229 229 
    230 +	vfp_flush_hwstate(thread);
    231 +
230 232  	/*
231 233  	 * Copy the floating point registers. There can be unused
232 234  	 * registers see asm/hwcap.h for details.
···
252 250 
253 251  	__get_user_error(h->fpinst, &frame->ufp_exc.fpinst, err);
254 252  	__get_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err);
255     -
256     -	if (!err)
257     -		vfp_flush_hwstate(thread);
258 253 
259 254  	return err ? -EFAULT : 0;
260 255  }
+1 -1
arch/arm/mach-bcmring/arch.c
···
194 194  	.init_early = bcmring_init_early,
195 195  	.init_irq = bcmring_init_irq,
196 196  	.timer = &bcmring_timer,
197     -	.init_machine = bcmring_init_machine
    197 +	.init_machine = bcmring_init_machine,
198 198  	.restart = bcmring_restart,
199 199  MACHINE_END
-812
arch/arm/mach-bcmring/dma.c
··· 33 33 34 34 #include <mach/timer.h> 35 35 36 - #include <linux/mm.h> 37 36 #include <linux/pfn.h> 38 37 #include <linux/atomic.h> 39 38 #include <linux/sched.h> 40 39 #include <mach/dma.h> 41 - 42 - /* I don't quite understand why dc4 fails when this is set to 1 and DMA is enabled */ 43 - /* especially since dc4 doesn't use kmalloc'd memory. */ 44 - 45 - #define ALLOW_MAP_OF_KMALLOC_MEMORY 0 46 40 47 41 /* ---- Public Variables ------------------------------------------------- */ 48 42 ··· 47 53 #define CONTROLLER_FROM_HANDLE(handle) (((handle) >> 4) & 0x0f) 48 54 #define CHANNEL_FROM_HANDLE(handle) ((handle) & 0x0f) 49 55 50 - #define DMA_MAP_DEBUG 0 51 - 52 - #if DMA_MAP_DEBUG 53 - # define DMA_MAP_PRINT(fmt, args...) printk("%s: " fmt, __func__, ## args) 54 - #else 55 - # define DMA_MAP_PRINT(fmt, args...) 56 - #endif 57 56 58 57 /* ---- Private Variables ------------------------------------------------ */ 59 58 60 59 static DMA_Global_t gDMA; 61 60 static struct proc_dir_entry *gDmaDir; 62 61 63 - static atomic_t gDmaStatMemTypeKmalloc = ATOMIC_INIT(0); 64 - static atomic_t gDmaStatMemTypeVmalloc = ATOMIC_INIT(0); 65 - static atomic_t gDmaStatMemTypeUser = ATOMIC_INIT(0); 66 - static atomic_t gDmaStatMemTypeCoherent = ATOMIC_INIT(0); 67 - 68 62 #include "dma_device.c" 69 63 70 64 /* ---- Private Function Prototypes -------------------------------------- */ 71 65 72 66 /* ---- Functions ------------------------------------------------------- */ 73 - 74 - /****************************************************************************/ 75 - /** 76 - * Displays information for /proc/dma/mem-type 77 - */ 78 - /****************************************************************************/ 79 - 80 - static int dma_proc_read_mem_type(char *buf, char **start, off_t offset, 81 - int count, int *eof, void *data) 82 - { 83 - int len = 0; 84 - 85 - len += sprintf(buf + len, "dma_map_mem statistics\n"); 86 - len += 87 - sprintf(buf + len, "coherent: %d\n", 88 - 
atomic_read(&gDmaStatMemTypeCoherent)); 89 - len += 90 - sprintf(buf + len, "kmalloc: %d\n", 91 - atomic_read(&gDmaStatMemTypeKmalloc)); 92 - len += 93 - sprintf(buf + len, "vmalloc: %d\n", 94 - atomic_read(&gDmaStatMemTypeVmalloc)); 95 - len += 96 - sprintf(buf + len, "user: %d\n", 97 - atomic_read(&gDmaStatMemTypeUser)); 98 - 99 - return len; 100 - } 101 67 102 68 /****************************************************************************/ 103 69 /** ··· 800 846 dma_proc_read_channels, NULL); 801 847 create_proc_read_entry("devices", 0, gDmaDir, 802 848 dma_proc_read_devices, NULL); 803 - create_proc_read_entry("mem-type", 0, gDmaDir, 804 - dma_proc_read_mem_type, NULL); 805 849 } 806 850 807 851 out: ··· 1517 1565 } 1518 1566 1519 1567 EXPORT_SYMBOL(dma_set_device_handler); 1520 - 1521 - /****************************************************************************/ 1522 - /** 1523 - * Initializes a memory mapping structure 1524 - */ 1525 - /****************************************************************************/ 1526 - 1527 - int dma_init_mem_map(DMA_MemMap_t *memMap) 1528 - { 1529 - memset(memMap, 0, sizeof(*memMap)); 1530 - 1531 - sema_init(&memMap->lock, 1); 1532 - 1533 - return 0; 1534 - } 1535 - 1536 - EXPORT_SYMBOL(dma_init_mem_map); 1537 - 1538 - /****************************************************************************/ 1539 - /** 1540 - * Releases any memory currently being held by a memory mapping structure. 
1541 - */ 1542 - /****************************************************************************/ 1543 - 1544 - int dma_term_mem_map(DMA_MemMap_t *memMap) 1545 - { 1546 - down(&memMap->lock); /* Just being paranoid */ 1547 - 1548 - /* Free up any allocated memory */ 1549 - 1550 - up(&memMap->lock); 1551 - memset(memMap, 0, sizeof(*memMap)); 1552 - 1553 - return 0; 1554 - } 1555 - 1556 - EXPORT_SYMBOL(dma_term_mem_map); 1557 - 1558 - /****************************************************************************/ 1559 - /** 1560 - * Looks at a memory address and categorizes it. 1561 - * 1562 - * @return One of the values from the DMA_MemType_t enumeration. 1563 - */ 1564 - /****************************************************************************/ 1565 - 1566 - DMA_MemType_t dma_mem_type(void *addr) 1567 - { 1568 - unsigned long addrVal = (unsigned long)addr; 1569 - 1570 - if (addrVal >= CONSISTENT_BASE) { 1571 - /* NOTE: DMA virtual memory space starts at 0xFFxxxxxx */ 1572 - 1573 - /* dma_alloc_xxx pages are physically and virtually contiguous */ 1574 - 1575 - return DMA_MEM_TYPE_DMA; 1576 - } 1577 - 1578 - /* Technically, we could add one more classification. Addresses between VMALLOC_END */ 1579 - /* and the beginning of the DMA virtual address could be considered to be I/O space. */ 1580 - /* Right now, nobody cares about this particular classification, so we ignore it. */ 1581 - 1582 - if (is_vmalloc_addr(addr)) { 1583 - /* Address comes from the vmalloc'd region. 
Pages are virtually */ 1584 - /* contiguous but NOT physically contiguous */ 1585 - 1586 - return DMA_MEM_TYPE_VMALLOC; 1587 - } 1588 - 1589 - if (addrVal >= PAGE_OFFSET) { 1590 - /* PAGE_OFFSET is typically 0xC0000000 */ 1591 - 1592 - /* kmalloc'd pages are physically contiguous */ 1593 - 1594 - return DMA_MEM_TYPE_KMALLOC; 1595 - } 1596 - 1597 - return DMA_MEM_TYPE_USER; 1598 - } 1599 - 1600 - EXPORT_SYMBOL(dma_mem_type); 1601 - 1602 - /****************************************************************************/ 1603 - /** 1604 - * Looks at a memory address and determines if we support DMA'ing to/from 1605 - * that type of memory. 1606 - * 1607 - * @return boolean - 1608 - * return value != 0 means dma supported 1609 - * return value == 0 means dma not supported 1610 - */ 1611 - /****************************************************************************/ 1612 - 1613 - int dma_mem_supports_dma(void *addr) 1614 - { 1615 - DMA_MemType_t memType = dma_mem_type(addr); 1616 - 1617 - return (memType == DMA_MEM_TYPE_DMA) 1618 - #if ALLOW_MAP_OF_KMALLOC_MEMORY 1619 - || (memType == DMA_MEM_TYPE_KMALLOC) 1620 - #endif 1621 - || (memType == DMA_MEM_TYPE_USER); 1622 - } 1623 - 1624 - EXPORT_SYMBOL(dma_mem_supports_dma); 1625 - 1626 - /****************************************************************************/ 1627 - /** 1628 - * Maps in a memory region such that it can be used for performing a DMA. 
1629 - * 1630 - * @return 1631 - */ 1632 - /****************************************************************************/ 1633 - 1634 - int dma_map_start(DMA_MemMap_t *memMap, /* Stores state information about the map */ 1635 - enum dma_data_direction dir /* Direction that the mapping will be going */ 1636 - ) { 1637 - int rc; 1638 - 1639 - down(&memMap->lock); 1640 - 1641 - DMA_MAP_PRINT("memMap: %p\n", memMap); 1642 - 1643 - if (memMap->inUse) { 1644 - printk(KERN_ERR "%s: memory map %p is already being used\n", 1645 - __func__, memMap); 1646 - rc = -EBUSY; 1647 - goto out; 1648 - } 1649 - 1650 - memMap->inUse = 1; 1651 - memMap->dir = dir; 1652 - memMap->numRegionsUsed = 0; 1653 - 1654 - rc = 0; 1655 - 1656 - out: 1657 - 1658 - DMA_MAP_PRINT("returning %d", rc); 1659 - 1660 - up(&memMap->lock); 1661 - 1662 - return rc; 1663 - } 1664 - 1665 - EXPORT_SYMBOL(dma_map_start); 1666 - 1667 - /****************************************************************************/ 1668 - /** 1669 - * Adds a segment of memory to a memory map. Each segment is both 1670 - * physically and virtually contiguous. 1671 - * 1672 - * @return 0 on success, error code otherwise. 
1673 - */ 1674 - /****************************************************************************/ 1675 - 1676 - static int dma_map_add_segment(DMA_MemMap_t *memMap, /* Stores state information about the map */ 1677 - DMA_Region_t *region, /* Region that the segment belongs to */ 1678 - void *virtAddr, /* Virtual address of the segment being added */ 1679 - dma_addr_t physAddr, /* Physical address of the segment being added */ 1680 - size_t numBytes /* Number of bytes of the segment being added */ 1681 - ) { 1682 - DMA_Segment_t *segment; 1683 - 1684 - DMA_MAP_PRINT("memMap:%p va:%p pa:0x%x #:%d\n", memMap, virtAddr, 1685 - physAddr, numBytes); 1686 - 1687 - /* Sanity check */ 1688 - 1689 - if (((unsigned long)virtAddr < (unsigned long)region->virtAddr) 1690 - || (((unsigned long)virtAddr + numBytes)) > 1691 - ((unsigned long)region->virtAddr + region->numBytes)) { 1692 - printk(KERN_ERR 1693 - "%s: virtAddr %p is outside region @ %p len: %d\n", 1694 - __func__, virtAddr, region->virtAddr, region->numBytes); 1695 - return -EINVAL; 1696 - } 1697 - 1698 - if (region->numSegmentsUsed > 0) { 1699 - /* Check to see if this segment is physically contiguous with the previous one */ 1700 - 1701 - segment = &region->segment[region->numSegmentsUsed - 1]; 1702 - 1703 - if ((segment->physAddr + segment->numBytes) == physAddr) { 1704 - /* It is - just add on to the end */ 1705 - 1706 - DMA_MAP_PRINT("appending %d bytes to last segment\n", 1707 - numBytes); 1708 - 1709 - segment->numBytes += numBytes; 1710 - 1711 - return 0; 1712 - } 1713 - } 1714 - 1715 - /* Reallocate to hold more segments, if required. 
*/ 1716 - 1717 - if (region->numSegmentsUsed >= region->numSegmentsAllocated) { 1718 - DMA_Segment_t *newSegment; 1719 - size_t oldSize = 1720 - region->numSegmentsAllocated * sizeof(*newSegment); 1721 - int newAlloc = region->numSegmentsAllocated + 4; 1722 - size_t newSize = newAlloc * sizeof(*newSegment); 1723 - 1724 - newSegment = kmalloc(newSize, GFP_KERNEL); 1725 - if (newSegment == NULL) { 1726 - return -ENOMEM; 1727 - } 1728 - memcpy(newSegment, region->segment, oldSize); 1729 - memset(&((uint8_t *) newSegment)[oldSize], 0, 1730 - newSize - oldSize); 1731 - kfree(region->segment); 1732 - 1733 - region->numSegmentsAllocated = newAlloc; 1734 - region->segment = newSegment; 1735 - } 1736 - 1737 - segment = &region->segment[region->numSegmentsUsed]; 1738 - region->numSegmentsUsed++; 1739 - 1740 - segment->virtAddr = virtAddr; 1741 - segment->physAddr = physAddr; 1742 - segment->numBytes = numBytes; 1743 - 1744 - DMA_MAP_PRINT("returning success\n"); 1745 - 1746 - return 0; 1747 - } 1748 - 1749 - /****************************************************************************/ 1750 - /** 1751 - * Adds a region of memory to a memory map. Each region is virtually 1752 - * contiguous, but not necessarily physically contiguous. 1753 - * 1754 - * @return 0 on success, error code otherwise. 
1755 - */ 1756 - /****************************************************************************/ 1757 - 1758 - int dma_map_add_region(DMA_MemMap_t *memMap, /* Stores state information about the map */ 1759 - void *mem, /* Virtual address that we want to get a map of */ 1760 - size_t numBytes /* Number of bytes being mapped */ 1761 - ) { 1762 - unsigned long addr = (unsigned long)mem; 1763 - unsigned int offset; 1764 - int rc = 0; 1765 - DMA_Region_t *region; 1766 - dma_addr_t physAddr; 1767 - 1768 - down(&memMap->lock); 1769 - 1770 - DMA_MAP_PRINT("memMap:%p va:%p #:%d\n", memMap, mem, numBytes); 1771 - 1772 - if (!memMap->inUse) { 1773 - printk(KERN_ERR "%s: Make sure you call dma_map_start first\n", 1774 - __func__); 1775 - rc = -EINVAL; 1776 - goto out; 1777 - } 1778 - 1779 - /* Reallocate to hold more regions. */ 1780 - 1781 - if (memMap->numRegionsUsed >= memMap->numRegionsAllocated) { 1782 - DMA_Region_t *newRegion; 1783 - size_t oldSize = 1784 - memMap->numRegionsAllocated * sizeof(*newRegion); 1785 - int newAlloc = memMap->numRegionsAllocated + 4; 1786 - size_t newSize = newAlloc * sizeof(*newRegion); 1787 - 1788 - newRegion = kmalloc(newSize, GFP_KERNEL); 1789 - if (newRegion == NULL) { 1790 - rc = -ENOMEM; 1791 - goto out; 1792 - } 1793 - memcpy(newRegion, memMap->region, oldSize); 1794 - memset(&((uint8_t *) newRegion)[oldSize], 0, newSize - oldSize); 1795 - 1796 - kfree(memMap->region); 1797 - 1798 - memMap->numRegionsAllocated = newAlloc; 1799 - memMap->region = newRegion; 1800 - } 1801 - 1802 - region = &memMap->region[memMap->numRegionsUsed]; 1803 - memMap->numRegionsUsed++; 1804 - 1805 - offset = addr & ~PAGE_MASK; 1806 - 1807 - region->memType = dma_mem_type(mem); 1808 - region->virtAddr = mem; 1809 - region->numBytes = numBytes; 1810 - region->numSegmentsUsed = 0; 1811 - region->numLockedPages = 0; 1812 - region->lockedPages = NULL; 1813 - 1814 - switch (region->memType) { 1815 - case DMA_MEM_TYPE_VMALLOC: 1816 - { 1817 - 
atomic_inc(&gDmaStatMemTypeVmalloc);
1818 -
1819 - /* printk(KERN_ERR "%s: vmalloc'd pages are not supported\n", __func__); */
1820 -
1821 - /* vmalloc'd pages are not physically contiguous */
1822 -
1823 - rc = -EINVAL;
1824 - break;
1825 - }
1826 -
1827 - case DMA_MEM_TYPE_KMALLOC:
1828 - {
1829 - atomic_inc(&gDmaStatMemTypeKmalloc);
1830 -
1831 - /* kmalloc'd pages are physically contiguous, so they'll have exactly */
1832 - /* one segment */
1833 -
1834 - #if ALLOW_MAP_OF_KMALLOC_MEMORY
1835 - physAddr =
1836 - dma_map_single(NULL, mem, numBytes, memMap->dir);
1837 - rc = dma_map_add_segment(memMap, region, mem, physAddr,
1838 - numBytes);
1839 - #else
1840 - rc = -EINVAL;
1841 - #endif
1842 - break;
1843 - }
1844 -
1845 - case DMA_MEM_TYPE_DMA:
1846 - {
1847 - /* dma_alloc_xxx pages are physically contiguous */
1848 -
1849 - atomic_inc(&gDmaStatMemTypeCoherent);
1850 -
1851 - physAddr = (vmalloc_to_pfn(mem) << PAGE_SHIFT) + offset;
1852 -
1853 - dma_sync_single_for_cpu(NULL, physAddr, numBytes,
1854 - memMap->dir);
1855 - rc = dma_map_add_segment(memMap, region, mem, physAddr,
1856 - numBytes);
1857 - break;
1858 - }
1859 -
1860 - case DMA_MEM_TYPE_USER:
1861 - {
1862 - size_t firstPageOffset;
1863 - size_t firstPageSize;
1864 - struct page **pages;
1865 - struct task_struct *userTask;
1866 -
1867 - atomic_inc(&gDmaStatMemTypeUser);
1868 -
1869 - #if 1
1870 - /* If the pages are user pages, then the dma_mem_map_set_user_task function */
1871 - /* must have been previously called. */
1872 -
1873 - if (memMap->userTask == NULL) {
1874 - printk(KERN_ERR
1875 - "%s: must call dma_mem_map_set_user_task when using user-mode memory\n",
1876 - __func__);
1877 - return -EINVAL;
1878 - }
1879 -
1880 - /* User pages need to be locked. */
1881 -
1882 - firstPageOffset =
1883 - (unsigned long)region->virtAddr & (PAGE_SIZE - 1);
1884 - firstPageSize = PAGE_SIZE - firstPageOffset;
1885 -
1886 - region->numLockedPages = (firstPageOffset
1887 - + region->numBytes +
1888 - PAGE_SIZE - 1) / PAGE_SIZE;
1889 - pages =
1890 - kmalloc(region->numLockedPages *
1891 - sizeof(struct page *), GFP_KERNEL);
1892 -
1893 - if (pages == NULL) {
1894 - region->numLockedPages = 0;
1895 - return -ENOMEM;
1896 - }
1897 -
1898 - userTask = memMap->userTask;
1899 -
1900 - down_read(&userTask->mm->mmap_sem);
1901 - rc = get_user_pages(userTask, /* task */
1902 - userTask->mm, /* mm */
1903 - (unsigned long)region->virtAddr, /* start */
1904 - region->numLockedPages, /* len */
1905 - memMap->dir == DMA_FROM_DEVICE, /* write */
1906 - 0, /* force */
1907 - pages, /* pages (array of pointers to page) */
1908 - NULL); /* vmas */
1909 - up_read(&userTask->mm->mmap_sem);
1910 -
1911 - if (rc != region->numLockedPages) {
1912 - kfree(pages);
1913 - region->numLockedPages = 0;
1914 -
1915 - if (rc >= 0) {
1916 - rc = -EINVAL;
1917 - }
1918 - } else {
1919 - uint8_t *virtAddr = region->virtAddr;
1920 - size_t bytesRemaining;
1921 - int pageIdx;
1922 -
1923 - rc = 0; /* Since get_user_pages returns +ve number */
1924 -
1925 - region->lockedPages = pages;
1926 -
1927 - /* We've locked the user pages. Now we need to walk them and figure */
1928 - /* out the physical addresses. */
1929 -
1930 - /* The first page may be partial */
1931 -
1932 - dma_map_add_segment(memMap,
1933 - region,
1934 - virtAddr,
1935 - PFN_PHYS(page_to_pfn
1936 - (pages[0])) +
1937 - firstPageOffset,
1938 - firstPageSize);
1939 -
1940 - virtAddr += firstPageSize;
1941 - bytesRemaining =
1942 - region->numBytes - firstPageSize;
1943 -
1944 - for (pageIdx = 1;
1945 - pageIdx < region->numLockedPages;
1946 - pageIdx++) {
1947 - size_t bytesThisPage =
1948 - (bytesRemaining >
1949 - PAGE_SIZE ? PAGE_SIZE :
1950 - bytesRemaining);
1951 -
1952 - DMA_MAP_PRINT
1953 - ("pageIdx:%d pages[pageIdx]=%p pfn=%u phys=%u\n",
1954 - pageIdx, pages[pageIdx],
1955 - page_to_pfn(pages[pageIdx]),
1956 - PFN_PHYS(page_to_pfn
1957 - (pages[pageIdx])));
1958 -
1959 - dma_map_add_segment(memMap,
1960 - region,
1961 - virtAddr,
1962 - PFN_PHYS(page_to_pfn
1963 - (pages
1964 - [pageIdx])),
1965 - bytesThisPage);
1966 -
1967 - virtAddr += bytesThisPage;
1968 - bytesRemaining -= bytesThisPage;
1969 - }
1970 - }
1971 - #else
1972 - printk(KERN_ERR
1973 - "%s: User mode pages are not yet supported\n",
1974 - __func__);
1975 -
1976 - /* user pages are not physically contiguous */
1977 -
1978 - rc = -EINVAL;
1979 - #endif
1980 - break;
1981 - }
1982 -
1983 - default:
1984 - {
1985 - printk(KERN_ERR "%s: Unsupported memory type: %d\n",
1986 - __func__, region->memType);
1987 -
1988 - rc = -EINVAL;
1989 - break;
1990 - }
1991 - }
1992 -
1993 - if (rc != 0) {
1994 - memMap->numRegionsUsed--;
1995 - }
1996 -
1997 - out:
1998 -
1999 - DMA_MAP_PRINT("returning %d\n", rc);
2000 -
2001 - up(&memMap->lock);
2002 -
2003 - return rc;
2004 - }
2005 -
2006 - EXPORT_SYMBOL(dma_map_add_segment);
2007 -
2008 - /****************************************************************************/
2009 - /**
2010 - * Maps in a memory region such that it can be used for performing a DMA.
2011 - *
2012 - * @return 0 on success, error code otherwise.
2013 - */
2014 - /****************************************************************************/
2015 -
2016 - int dma_map_mem(DMA_MemMap_t *memMap, /* Stores state information about the map */
2017 - void *mem, /* Virtual address that we want to get a map of */
2018 - size_t numBytes, /* Number of bytes being mapped */
2019 - enum dma_data_direction dir /* Direction that the mapping will be going */
2020 - ) {
2021 - int rc;
2022 -
2023 - rc = dma_map_start(memMap, dir);
2024 - if (rc == 0) {
2025 - rc = dma_map_add_region(memMap, mem, numBytes);
2026 - if (rc < 0) {
2027 - /* Since the add fails, this function will fail, and the caller won't */
2028 - /* call unmap, so we need to do it here. */
2029 -
2030 - dma_unmap(memMap, 0);
2031 - }
2032 - }
2033 -
2034 - return rc;
2035 - }
2036 -
2037 - EXPORT_SYMBOL(dma_map_mem);
2038 -
2039 - /****************************************************************************/
2040 - /**
2041 - * Setup a descriptor ring for a given memory map.
2042 - *
2043 - * It is assumed that the descriptor ring has already been initialized, and
2044 - * this routine will only reallocate a new descriptor ring if the existing
2045 - * one is too small.
2046 - *
2047 - * @return 0 on success, error code otherwise.
2048 - */
2049 - /****************************************************************************/
2050 -
2051 - int dma_map_create_descriptor_ring(DMA_Device_t dev, /* DMA device (where the ring is stored) */
2052 - DMA_MemMap_t *memMap, /* Memory map that will be used */
2053 - dma_addr_t devPhysAddr /* Physical address of device */
2054 - ) {
2055 - int rc;
2056 - int numDescriptors;
2057 - DMA_DeviceAttribute_t *devAttr;
2058 - DMA_Region_t *region;
2059 - DMA_Segment_t *segment;
2060 - dma_addr_t srcPhysAddr;
2061 - dma_addr_t dstPhysAddr;
2062 - int regionIdx;
2063 - int segmentIdx;
2064 -
2065 - devAttr = &DMA_gDeviceAttribute[dev];
2066 -
2067 - down(&memMap->lock);
2068 -
2069 - /* Figure out how many descriptors we need */
2070 -
2071 - numDescriptors = 0;
2072 - for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
2073 - region = &memMap->region[regionIdx];
2074 -
2075 - for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
2076 - segmentIdx++) {
2077 - segment = &region->segment[segmentIdx];
2078 -
2079 - if (memMap->dir == DMA_TO_DEVICE) {
2080 - srcPhysAddr = segment->physAddr;
2081 - dstPhysAddr = devPhysAddr;
2082 - } else {
2083 - srcPhysAddr = devPhysAddr;
2084 - dstPhysAddr = segment->physAddr;
2085 - }
2086 -
2087 - rc =
2088 - dma_calculate_descriptor_count(dev, srcPhysAddr,
2089 - dstPhysAddr,
2090 - segment->
2091 - numBytes);
2092 - if (rc < 0) {
2093 - printk(KERN_ERR
2094 - "%s: dma_calculate_descriptor_count failed: %d\n",
2095 - __func__, rc);
2096 - goto out;
2097 - }
2098 - numDescriptors += rc;
2099 - }
2100 - }
2101 -
2102 - /* Adjust the size of the ring, if it isn't big enough */
2103 -
2104 - if (numDescriptors > devAttr->ring.descriptorsAllocated) {
2105 - dma_free_descriptor_ring(&devAttr->ring);
2106 - rc =
2107 - dma_alloc_descriptor_ring(&devAttr->ring,
2108 - numDescriptors);
2109 - if (rc < 0) {
2110 - printk(KERN_ERR
2111 - "%s: dma_alloc_descriptor_ring failed: %d\n",
2112 - __func__, rc);
2113 - goto out;
2114 - }
2115 - } else {
2116 - rc =
2117 - dma_init_descriptor_ring(&devAttr->ring,
2118 - numDescriptors);
2119 - if (rc < 0) {
2120 - printk(KERN_ERR
2121 - "%s: dma_init_descriptor_ring failed: %d\n",
2122 - __func__, rc);
2123 - goto out;
2124 - }
2125 - }
2126 -
2127 - /* Populate the descriptors */
2128 -
2129 - for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
2130 - region = &memMap->region[regionIdx];
2131 -
2132 - for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
2133 - segmentIdx++) {
2134 - segment = &region->segment[segmentIdx];
2135 -
2136 - if (memMap->dir == DMA_TO_DEVICE) {
2137 - srcPhysAddr = segment->physAddr;
2138 - dstPhysAddr = devPhysAddr;
2139 - } else {
2140 - srcPhysAddr = devPhysAddr;
2141 - dstPhysAddr = segment->physAddr;
2142 - }
2143 -
2144 - rc =
2145 - dma_add_descriptors(&devAttr->ring, dev,
2146 - srcPhysAddr, dstPhysAddr,
2147 - segment->numBytes);
2148 - if (rc < 0) {
2149 - printk(KERN_ERR
2150 - "%s: dma_add_descriptors failed: %d\n",
2151 - __func__, rc);
2152 - goto out;
2153 - }
2154 - }
2155 - }
2156 -
2157 - rc = 0;
2158 -
2159 - out:
2160 -
2161 - up(&memMap->lock);
2162 - return rc;
2163 - }
2164 -
2165 - EXPORT_SYMBOL(dma_map_create_descriptor_ring);
2166 -
2167 - /****************************************************************************/
2168 - /**
2169 - * Maps in a memory region such that it can be used for performing a DMA.
2170 - *
2171 - * @return
2172 - */
2173 - /****************************************************************************/
2174 -
2175 - int dma_unmap(DMA_MemMap_t *memMap, /* Stores state information about the map */
2176 - int dirtied /* non-zero if any of the pages were modified */
2177 - ) {
2178 -
2179 - int rc = 0;
2180 - int regionIdx;
2181 - int segmentIdx;
2182 - DMA_Region_t *region;
2183 - DMA_Segment_t *segment;
2184 -
2185 - down(&memMap->lock);
2186 -
2187 - for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
2188 - region = &memMap->region[regionIdx];
2189 -
2190 - for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
2191 - segmentIdx++) {
2192 - segment = &region->segment[segmentIdx];
2193 -
2194 - switch (region->memType) {
2195 - case DMA_MEM_TYPE_VMALLOC:
2196 - {
2197 - printk(KERN_ERR
2198 - "%s: vmalloc'd pages are not yet supported\n",
2199 - __func__);
2200 - rc = -EINVAL;
2201 - goto out;
2202 - }
2203 -
2204 - case DMA_MEM_TYPE_KMALLOC:
2205 - {
2206 - #if ALLOW_MAP_OF_KMALLOC_MEMORY
2207 - dma_unmap_single(NULL,
2208 - segment->physAddr,
2209 - segment->numBytes,
2210 - memMap->dir);
2211 - #endif
2212 - break;
2213 - }
2214 -
2215 - case DMA_MEM_TYPE_DMA:
2216 - {
2217 - dma_sync_single_for_cpu(NULL,
2218 - segment->
2219 - physAddr,
2220 - segment->
2221 - numBytes,
2222 - memMap->dir);
2223 - break;
2224 - }
2225 -
2226 - case DMA_MEM_TYPE_USER:
2227 - {
2228 - /* Nothing to do here. */
2229 -
2230 - break;
2231 - }
2232 -
2233 - default:
2234 - {
2235 - printk(KERN_ERR
2236 - "%s: Unsupported memory type: %d\n",
2237 - __func__, region->memType);
2238 - rc = -EINVAL;
2239 - goto out;
2240 - }
2241 - }
2242 -
2243 - segment->virtAddr = NULL;
2244 - segment->physAddr = 0;
2245 - segment->numBytes = 0;
2246 - }
2247 -
2248 - if (region->numLockedPages > 0) {
2249 - int pageIdx;
2250 -
2251 - /* Some user pages were locked. We need to go and unlock them now. */
2252 -
2253 - for (pageIdx = 0; pageIdx < region->numLockedPages;
2254 - pageIdx++) {
2255 - struct page *page =
2256 - region->lockedPages[pageIdx];
2257 -
2258 - if (memMap->dir == DMA_FROM_DEVICE) {
2259 - SetPageDirty(page);
2260 - }
2261 - page_cache_release(page);
2262 - }
2263 - kfree(region->lockedPages);
2264 - region->numLockedPages = 0;
2265 - region->lockedPages = NULL;
2266 - }
2267 -
2268 - region->memType = DMA_MEM_TYPE_NONE;
2269 - region->virtAddr = NULL;
2270 - region->numBytes = 0;
2271 - region->numSegmentsUsed = 0;
2272 - }
2273 - memMap->userTask = NULL;
2274 - memMap->numRegionsUsed = 0;
2275 - memMap->inUse = 0;
2276 -
2277 - out:
2278 - up(&memMap->lock);
2279 -
2280 - return rc;
2281 - }
2282 -
2283 - EXPORT_SYMBOL(dma_unmap);
-196
arch/arm/mach-bcmring/include/mach/dma.h
··· 26 26 /* ---- Include Files ---------------------------------------------------- */
27 27
28 28 #include <linux/kernel.h>
29 - #include <linux/wait.h>
30 29 #include <linux/semaphore.h>
31 30 #include <csp/dmacHw.h>
32 31 #include <mach/timer.h>
33 - #include <linux/scatterlist.h>
34 - #include <linux/dma-mapping.h>
35 - #include <linux/mm.h>
36 - #include <linux/vmalloc.h>
37 - #include <linux/pagemap.h>
38 32
39 33 /* ---- Constants and Types ---------------------------------------------- */
40 34
··· 104 110 size_t bytesAllocated; /* Number of bytes allocated in the descriptor ring */
105 111
106 112 } DMA_DescriptorRing_t;
107 -
108 - /****************************************************************************
109 - *
110 - * The DMA_MemType_t and DMA_MemMap_t are helper structures used to setup
111 - * DMA chains from a variety of memory sources.
112 - *
113 - *****************************************************************************/
114 -
115 - #define DMA_MEM_MAP_MIN_SIZE 4096 /* Pages less than this size are better */
116 - /* off not being DMA'd. */
117 -
118 - typedef enum {
119 - DMA_MEM_TYPE_NONE, /* Not a valid setting */
120 - DMA_MEM_TYPE_VMALLOC, /* Memory came from vmalloc call */
121 - DMA_MEM_TYPE_KMALLOC, /* Memory came from kmalloc call */
122 - DMA_MEM_TYPE_DMA, /* Memory came from dma_alloc_xxx call */
123 - DMA_MEM_TYPE_USER, /* Memory came from user space. */
124 -
125 - } DMA_MemType_t;
126 -
127 - /* A segment represents a physically and virtually contiguous chunk of memory. */
128 - /* i.e. each segment can be DMA'd */
129 - /* A user of the DMA code will add memory regions. Each region may need to be */
130 - /* represented by one or more segments. */
131 -
132 - typedef struct {
133 - void *virtAddr; /* Virtual address used for this segment */
134 - dma_addr_t physAddr; /* Physical address this segment maps to */
135 - size_t numBytes; /* Size of the segment, in bytes */
136 -
137 - } DMA_Segment_t;
138 -
139 - /* A region represents a virtually contiguous chunk of memory, which may be */
140 - /* made up of multiple segments. */
141 -
142 - typedef struct {
143 - DMA_MemType_t memType;
144 - void *virtAddr;
145 - size_t numBytes;
146 -
147 - /* Each region (virtually contiguous) consists of one or more segments. Each */
148 - /* segment is virtually and physically contiguous. */
149 -
150 - int numSegmentsUsed;
151 - int numSegmentsAllocated;
152 - DMA_Segment_t *segment;
153 -
154 - /* When a region corresponds to user memory, we need to lock all of the pages */
155 - /* down before we can figure out the physical addresses. The lockedPage array contains */
156 - /* the pages that were locked, and which subsequently need to be unlocked once the */
157 - /* memory is unmapped. */
158 -
159 - unsigned numLockedPages;
160 - struct page **lockedPages;
161 -
162 - } DMA_Region_t;
163 -
164 - typedef struct {
165 - int inUse; /* Is this mapping currently being used? */
166 - struct semaphore lock; /* Acquired when using this structure */
167 - enum dma_data_direction dir; /* Direction this transfer is intended for */
168 -
169 - /* In the event that we're mapping user memory, we need to know which task */
170 - /* the memory is for, so that we can obtain the correct mm locks. */
171 -
172 - struct task_struct *userTask;
173 -
174 - int numRegionsUsed;
175 - int numRegionsAllocated;
176 - DMA_Region_t *region;
177 -
178 - } DMA_MemMap_t;
179 113
180 114 /****************************************************************************
181 115 *
··· 488 566 dma_addr_t dstData1, /* Physical address of first destination buffer */
489 567 dma_addr_t dstData2, /* Physical address of second destination buffer */
490 568 size_t numBytes /* Number of bytes in each destination buffer */
491 - );
492 -
493 - /****************************************************************************/
494 - /**
495 - * Initializes a DMA_MemMap_t data structure
496 - */
497 - /****************************************************************************/
498 -
499 - int dma_init_mem_map(DMA_MemMap_t *memMap /* Stores state information about the map */
500 - );
501 -
502 - /****************************************************************************/
503 - /**
504 - * Releases any memory currently being held by a memory mapping structure.
505 - */
506 - /****************************************************************************/
507 -
508 - int dma_term_mem_map(DMA_MemMap_t *memMap /* Stores state information about the map */
509 - );
510 -
511 - /****************************************************************************/
512 - /**
513 - * Looks at a memory address and categorizes it.
514 - *
515 - * @return One of the values from the DMA_MemType_t enumeration.
516 - */
517 - /****************************************************************************/
518 -
519 - DMA_MemType_t dma_mem_type(void *addr);
520 -
521 - /****************************************************************************/
522 - /**
523 - * Sets the process (aka userTask) associated with a mem map. This is
524 - * required if user-mode segments will be added to the mapping.
525 - */
526 - /****************************************************************************/
527 -
528 - static inline void dma_mem_map_set_user_task(DMA_MemMap_t *memMap,
529 - struct task_struct *task)
530 - {
531 - memMap->userTask = task;
532 - }
533 -
534 - /****************************************************************************/
535 - /**
536 - * Looks at a memory address and determines if we support DMA'ing to/from
537 - * that type of memory.
538 - *
539 - * @return boolean -
540 - * return value != 0 means dma supported
541 - * return value == 0 means dma not supported
542 - */
543 - /****************************************************************************/
544 -
545 - int dma_mem_supports_dma(void *addr);
546 -
547 - /****************************************************************************/
548 - /**
549 - * Initializes a memory map for use. Since this function acquires a
550 - * sempaphore within the memory map, it is VERY important that dma_unmap
551 - * be called when you're finished using the map.
552 - */
553 - /****************************************************************************/
554 -
555 - int dma_map_start(DMA_MemMap_t *memMap, /* Stores state information about the map */
556 - enum dma_data_direction dir /* Direction that the mapping will be going */
557 - );
558 -
559 - /****************************************************************************/
560 - /**
561 - * Adds a segment of memory to a memory map.
562 - *
563 - * @return 0 on success, error code otherwise.
564 - */
565 - /****************************************************************************/
566 -
567 - int dma_map_add_region(DMA_MemMap_t *memMap, /* Stores state information about the map */
568 - void *mem, /* Virtual address that we want to get a map of */
569 - size_t numBytes /* Number of bytes being mapped */
570 - );
571 -
572 - /****************************************************************************/
573 - /**
574 - * Creates a descriptor ring from a memory mapping.
575 - *
576 - * @return 0 on success, error code otherwise.
577 - */
578 - /****************************************************************************/
579 -
580 - int dma_map_create_descriptor_ring(DMA_Device_t dev, /* DMA device (where the ring is stored) */
581 - DMA_MemMap_t *memMap, /* Memory map that will be used */
582 - dma_addr_t devPhysAddr /* Physical address of device */
583 - );
584 -
585 - /****************************************************************************/
586 - /**
587 - * Maps in a memory region such that it can be used for performing a DMA.
588 - *
589 - * @return
590 - */
591 - /****************************************************************************/
592 -
593 - int dma_map_mem(DMA_MemMap_t *memMap, /* Stores state information about the map */
594 - void *addr, /* Virtual address that we want to get a map of */
595 - size_t count, /* Number of bytes being mapped */
596 - enum dma_data_direction dir /* Direction that the mapping will be going */
597 - );
598 -
599 - /****************************************************************************/
600 - /**
601 - * Maps in a memory region such that it can be used for performing a DMA.
602 - *
603 - * @return
604 - */
605 - /****************************************************************************/
606 -
607 - int dma_unmap(DMA_MemMap_t *memMap, /* Stores state information about the map */
608 - int dirtied /* non-zero if any of the pages were modified */
609 569 );
610 570
611 571 /****************************************************************************/
+1 -1
arch/arm/mach-davinci/board-da850-evm.c
··· 44 44 #include <mach/aemif.h>
45 45 #include <mach/spi.h>
46 46
47 - #define DA850_EVM_PHY_ID "0:00"
47 + #define DA850_EVM_PHY_ID "davinci_mdio-0:00"
48 48 #define DA850_LCD_PWR_PIN GPIO_TO_PIN(2, 8)
49 49 #define DA850_LCD_BL_PIN GPIO_TO_PIN(2, 15)
50 50
+1 -1
arch/arm/mach-davinci/board-dm365-evm.c
··· 54 54 return 0;
55 55 }
56 56
57 - #define DM365_EVM_PHY_ID "0:01"
57 + #define DM365_EVM_PHY_ID "davinci_mdio-0:01"
58 58 /*
59 59 * A MAX-II CPLD is used for various board control functions.
60 60 */
+1 -1
arch/arm/mach-davinci/board-dm644x-evm.c
··· 40 40 #include <mach/usb.h>
41 41 #include <mach/aemif.h>
42 42
43 - #define DM644X_EVM_PHY_ID "0:01"
43 + #define DM644X_EVM_PHY_ID "davinci_mdio-0:01"
44 44 #define LXT971_PHY_ID (0x001378e2)
45 45 #define LXT971_PHY_MASK (0xfffffff0)
46 46
+1 -1
arch/arm/mach-davinci/board-dm646x-evm.c
··· 736 736 .enabled_uarts = (1 << 0),
737 737 };
738 738
739 - #define DM646X_EVM_PHY_ID "0:01"
739 + #define DM646X_EVM_PHY_ID "davinci_mdio-0:01"
740 740 /*
741 741 * The following EDMA channels/slots are not being used by drivers (for
742 742 * example: Timer, GPIO, UART events etc) on dm646x, hence they are being
+1 -1
arch/arm/mach-davinci/board-neuros-osd2.c
··· 39 39 #include <mach/mmc.h>
40 40 #include <mach/usb.h>
41 41
42 - #define NEUROS_OSD2_PHY_ID "0:01"
42 + #define NEUROS_OSD2_PHY_ID "davinci_mdio-0:01"
43 43 #define LXT971_PHY_ID 0x001378e2
44 44 #define LXT971_PHY_MASK 0xfffffff0
45 45
+1 -1
arch/arm/mach-davinci/board-omapl138-hawk.c
··· 21 21 #include <mach/da8xx.h>
22 22 #include <mach/mux.h>
23 23
24 - #define HAWKBOARD_PHY_ID "0:07"
24 + #define HAWKBOARD_PHY_ID "davinci_mdio-0:07"
25 25 #define DA850_HAWK_MMCSD_CD_PIN GPIO_TO_PIN(3, 12)
26 26 #define DA850_HAWK_MMCSD_WP_PIN GPIO_TO_PIN(3, 13)
27 27
+1 -1
arch/arm/mach-davinci/board-sffsdr.c
··· 42 42 #include <mach/mux.h>
43 43 #include <mach/usb.h>
44 44
45 - #define SFFSDR_PHY_ID "0:01"
45 + #define SFFSDR_PHY_ID "davinci_mdio-0:01"
46 46 static struct mtd_partition davinci_sffsdr_nandflash_partition[] = {
47 47 /* U-Boot Environment: Block 0
48 48 * UBL: Block 1
-32
arch/arm/mach-davinci/da850.c
··· 153 153 .div_reg = PLLDIV3,
154 154 };
155 155
156 - static struct clk pll1_sysclk4 = {
157 - .name = "pll1_sysclk4",
158 - .parent = &pll1_clk,
159 - .flags = CLK_PLL,
160 - .div_reg = PLLDIV4,
161 - };
162 -
163 - static struct clk pll1_sysclk5 = {
164 - .name = "pll1_sysclk5",
165 - .parent = &pll1_clk,
166 - .flags = CLK_PLL,
167 - .div_reg = PLLDIV5,
168 - };
169 -
170 - static struct clk pll1_sysclk6 = {
171 - .name = "pll0_sysclk6",
172 - .parent = &pll0_clk,
173 - .flags = CLK_PLL,
174 - .div_reg = PLLDIV6,
175 - };
176 -
177 - static struct clk pll1_sysclk7 = {
178 - .name = "pll1_sysclk7",
179 - .parent = &pll1_clk,
180 - .flags = CLK_PLL,
181 - .div_reg = PLLDIV7,
182 - };
183 -
184 156 static struct clk i2c0_clk = {
185 157 .name = "i2c0",
186 158 .parent = &pll0_aux_clk,
··· 369 397 CLK(NULL, "pll1_aux", &pll1_aux_clk),
370 398 CLK(NULL, "pll1_sysclk2", &pll1_sysclk2),
371 399 CLK(NULL, "pll1_sysclk3", &pll1_sysclk3),
372 - CLK(NULL, "pll1_sysclk4", &pll1_sysclk4),
373 - CLK(NULL, "pll1_sysclk5", &pll1_sysclk5),
374 - CLK(NULL, "pll1_sysclk6", &pll1_sysclk6),
375 - CLK(NULL, "pll1_sysclk7", &pll1_sysclk7),
376 400 CLK("i2c_davinci.1", NULL, &i2c0_clk),
377 401 CLK(NULL, "timer0", &timerp64_0_clk),
378 402 CLK("watchdog", NULL, &timerp64_1_clk),
+5 -6
arch/arm/mach-omap2/Kconfig
··· 213 213 depends on ARCH_OMAP3
214 214 default y
215 215 select OMAP_PACKAGE_CBB
216 - select REGULATOR_FIXED_VOLTAGE
216 + select REGULATOR_FIXED_VOLTAGE if REGULATOR
217 217
218 218 config MACH_OMAP3_TOUCHBOOK
219 219 bool "OMAP3 Touch Book"
220 220 depends on ARCH_OMAP3
221 221 default y
222 - select BACKLIGHT_CLASS_DEVICE
223 222
224 223 config MACH_OMAP_3430SDP
225 224 bool "OMAP 3430 SDP board"
··· 264 265 select SERIAL_8250
265 266 select SERIAL_CORE_CONSOLE
266 267 select SERIAL_8250_CONSOLE
267 - select REGULATOR_FIXED_VOLTAGE
268 + select REGULATOR_FIXED_VOLTAGE if REGULATOR
268 269
269 270 config MACH_OMAP_ZOOM3
270 271 bool "OMAP3630 Zoom3 board"
··· 274 275 select SERIAL_8250
275 276 select SERIAL_CORE_CONSOLE
276 277 select SERIAL_8250_CONSOLE
277 - select REGULATOR_FIXED_VOLTAGE
278 + select REGULATOR_FIXED_VOLTAGE if REGULATOR
278 279
279 280 config MACH_CM_T35
280 281 bool "CompuLab CM-T35/CM-T3730 modules"
··· 333 334 depends on ARCH_OMAP4
334 335 select OMAP_PACKAGE_CBL
335 336 select OMAP_PACKAGE_CBS
336 - select REGULATOR_FIXED_VOLTAGE
337 + select REGULATOR_FIXED_VOLTAGE if REGULATOR
337 338
338 339 config MACH_OMAP4_PANDA
339 340 bool "OMAP4 Panda Board"
··· 341 342 depends on ARCH_OMAP4
342 343 select OMAP_PACKAGE_CBL
343 344 select OMAP_PACKAGE_CBS
344 - select REGULATOR_FIXED_VOLTAGE
345 + select REGULATOR_FIXED_VOLTAGE if REGULATOR
345 346
346 347 config OMAP3_EMU
347 348 bool "OMAP3 debugging peripherals"
+14 -4
arch/arm/mach-omap2/board-4430sdp.c
··· 52 52 #define ETH_KS8851_QUART 138
53 53 #define OMAP4_SFH7741_SENSOR_OUTPUT_GPIO 184
54 54 #define OMAP4_SFH7741_ENABLE_GPIO 188
55 - #define HDMI_GPIO_HPD 60 /* Hot plug pin for HDMI */
55 + #define HDMI_GPIO_CT_CP_HPD 60 /* HPD mode enable/disable */
56 56 #define HDMI_GPIO_LS_OE 41 /* Level shifter for HDMI */
57 + #define HDMI_GPIO_HPD 63 /* Hotplug detect */
57 58 #define DISPLAY_SEL_GPIO 59 /* LCD2/PicoDLP switch */
58 59 #define DLP_POWER_ON_GPIO 40
59 60
··· 604 603 }
605 604
606 605 static struct gpio sdp4430_hdmi_gpios[] = {
607 - { HDMI_GPIO_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_hpd" },
606 + { HDMI_GPIO_CT_CP_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ct_cp_hpd" },
608 607 { HDMI_GPIO_LS_OE, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ls_oe" },
608 + { HDMI_GPIO_HPD, GPIOF_DIR_IN, "hdmi_gpio_hpd" },
609 609 };
610 610
611 611 static int sdp4430_panel_enable_hdmi(struct omap_dss_device *dssdev)
··· 623 621
624 622 static void sdp4430_panel_disable_hdmi(struct omap_dss_device *dssdev)
625 623 {
626 - gpio_free(HDMI_GPIO_LS_OE);
627 - gpio_free(HDMI_GPIO_HPD);
624 + gpio_free_array(sdp4430_hdmi_gpios, ARRAY_SIZE(sdp4430_hdmi_gpios));
628 625 }
629 626
630 627 static struct nokia_dsi_panel_data dsi1_panel = {
··· 739 738 pr_err("%s: Could not get lcd2_reset_gpio\n", __func__);
740 739 }
741 740
741 + static struct omap_dss_hdmi_data sdp4430_hdmi_data = {
742 + .hpd_gpio = HDMI_GPIO_HPD,
743 + };
744 +
742 745 static struct omap_dss_device sdp4430_hdmi_device = {
743 746 .name = "hdmi",
744 747 .driver_name = "hdmi_panel",
··· 750 745 .platform_enable = sdp4430_panel_enable_hdmi,
751 746 .platform_disable = sdp4430_panel_disable_hdmi,
752 747 .channel = OMAP_DSS_CHANNEL_DIGIT,
748 + .data = &sdp4430_hdmi_data,
753 749 };
754 750
755 751 static struct picodlp_panel_data sdp4430_picodlp_pdata = {
··· 835 829 omap_hdmi_init(OMAP_HDMI_SDA_SCL_EXTERNAL_PULLUP);
836 830 else
837 831 omap_hdmi_init(0);
832 +
833 + omap_mux_init_gpio(HDMI_GPIO_LS_OE, OMAP_PIN_OUTPUT);
834 + omap_mux_init_gpio(HDMI_GPIO_CT_CP_HPD, OMAP_PIN_OUTPUT);
835 + omap_mux_init_gpio(HDMI_GPIO_HPD, OMAP_PIN_INPUT_PULLDOWN);
838 836 }
839 837
840 838 #ifdef CONFIG_OMAP_MUX
+14 -4
arch/arm/mach-omap2/board-omap4panda.c
··· 51 51 #define GPIO_HUB_NRESET 62
52 52 #define GPIO_WIFI_PMENA 43
53 53 #define GPIO_WIFI_IRQ 53
54 - #define HDMI_GPIO_HPD 60 /* Hot plug pin for HDMI */
54 + #define HDMI_GPIO_CT_CP_HPD 60 /* HPD mode enable/disable */
55 55 #define HDMI_GPIO_LS_OE 41 /* Level shifter for HDMI */
56 + #define HDMI_GPIO_HPD 63 /* Hotplug detect */
56 57
57 58 /* wl127x BT, FM, GPS connectivity chip */
58 59 static int wl1271_gpios[] = {46, -1, -1};
··· 414 413 }
415 414
416 415 static struct gpio panda_hdmi_gpios[] = {
417 - { HDMI_GPIO_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_hpd" },
416 + { HDMI_GPIO_CT_CP_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ct_cp_hpd" },
418 417 { HDMI_GPIO_LS_OE, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ls_oe" },
418 + { HDMI_GPIO_HPD, GPIOF_DIR_IN, "hdmi_gpio_hpd" },
419 419 };
420 420
421 421 static int omap4_panda_panel_enable_hdmi(struct omap_dss_device *dssdev)
··· 433 431
434 432 static void omap4_panda_panel_disable_hdmi(struct omap_dss_device *dssdev)
435 433 {
436 - gpio_free(HDMI_GPIO_LS_OE);
437 - gpio_free(HDMI_GPIO_HPD);
434 + gpio_free_array(panda_hdmi_gpios, ARRAY_SIZE(panda_hdmi_gpios));
438 435 }
436 +
437 + static struct omap_dss_hdmi_data omap4_panda_hdmi_data = {
438 + .hpd_gpio = HDMI_GPIO_HPD,
439 + };
439 440
440 441 static struct omap_dss_device omap4_panda_hdmi_device = {
441 442 .name = "hdmi",
··· 447 442 .platform_enable = omap4_panda_panel_enable_hdmi,
448 443 .platform_disable = omap4_panda_panel_disable_hdmi,
449 444 .channel = OMAP_DSS_CHANNEL_DIGIT,
445 + .data = &omap4_panda_hdmi_data,
450 446 };
451 447
452 448 static struct omap_dss_device *omap4_panda_dss_devices[] = {
··· 479 473 omap_hdmi_init(OMAP_HDMI_SDA_SCL_EXTERNAL_PULLUP);
480 474 else
481 475 omap_hdmi_init(0);
476 +
477 + omap_mux_init_gpio(HDMI_GPIO_LS_OE, OMAP_PIN_OUTPUT);
478 + omap_mux_init_gpio(HDMI_GPIO_CT_CP_HPD, OMAP_PIN_OUTPUT);
479 + omap_mux_init_gpio(HDMI_GPIO_HPD, OMAP_PIN_INPUT_PULLDOWN);
482 480 }
483 481
484 482 static void __init omap4_panda_init(void)
+1
arch/arm/mach-omap2/devices.c
··· 405 405 break;
406 406 default:
407 407 pr_err("Invalid McSPI Revision value\n");
408 + kfree(pdata);
408 409 return -EINVAL;
409 410 }
410 411
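The one-line `kfree(pdata)` above plugs a memory leak: the invalid-revision branch used to return `-EINVAL` while still owning an earlier allocation. A minimal sketch of the rule it enforces, that every early-return path must release what the function has already allocated (the function name, sizes, and raw errno values here are illustrative, with `malloc`/`free` standing in for `kzalloc`/`kfree`):

```c
#include <assert.h>
#include <stdlib.h>

/* Every exit path owns the cleanup: the invalid-input branch frees the
 * buffer before returning, mirroring the kfree() added in the hunk above. */
static int init_device(int revision)
{
	char *pdata = malloc(64);
	if (!pdata)
		return -12;		/* modeled -ENOMEM */

	if (revision != 1 && revision != 2) {
		free(pdata);		/* the fix: no leak on the error path */
		return -22;		/* modeled -EINVAL */
	}

	/* ... a real driver would hand pdata off to the device here ... */
	free(pdata);			/* sketch only: nothing to hand off */
	return 0;
}
```

The same shape is why the smartreflex hunk further down prefers a `goto`-based unwind once more than one resource is in flight.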
-4
arch/arm/mach-omap2/display.c
··· 103 103 u32 reg;
104 104 u16 control_i2c_1;
105 105
106 - /* PAD0_HDMI_HPD_PAD1_HDMI_CEC */
107 - omap_mux_init_signal("hdmi_hpd",
108 - OMAP_PIN_INPUT_PULLUP);
109 106 omap_mux_init_signal("hdmi_cec",
110 107 OMAP_PIN_INPUT_PULLUP);
111 - /* PAD0_HDMI_DDC_SCL_PAD1_HDMI_DDC_SDA */
112 108 omap_mux_init_signal("hdmi_ddc_scl",
113 109 OMAP_PIN_INPUT_PULLUP);
114 110 omap_mux_init_signal("hdmi_ddc_sda",
+6
arch/arm/mach-omap2/gpmc.c
··· 528 528
529 529 case GPMC_CONFIG_DEV_SIZE:
530 530 regval = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
531 +
532 + /* clear 2 target bits */
533 + regval &= ~GPMC_CONFIG1_DEVICESIZE(3);
534 +
535 + /* set the proper value */
531 536 regval |= GPMC_CONFIG1_DEVICESIZE(wval);
537 +
532 538 gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, regval);
533 539 break;
534 540
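The gpmc.c hunk is a classic clear-before-set fix: OR-ing a new field value into a register cannot turn bits off, so any stale DEVICESIZE bits (e.g. left by the bootloader) would survive. The field must be masked out first. A standalone sketch of the pattern (the 2-bit field width matches `GPMC_CONFIG1_DEVICESIZE(3)`; the shift position is an assumption for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 2-bit field; the real GPMC macro encodes its own offset. */
#define DEVSIZE_SHIFT 12
#define DEVSIZE(v) (((uint32_t)(v) & 3u) << DEVSIZE_SHIFT)

/* Clear the whole field before OR-ing in the new value; a plain OR would
 * keep whatever bits were already set in the field. */
static uint32_t set_dev_size(uint32_t regval, uint32_t wval)
{
	regval &= ~DEVSIZE(3);		/* clear the 2 target bits */
	regval |= DEVSIZE(wval);	/* set the proper value */
	return regval;
}
```

Bits outside the field are untouched, which is the point of the read-modify-write: only the target field changes.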
+8 -8
arch/arm/mach-omap2/hsmmc.c
··· 175 175 {
176 176 u32 reg;
177 177
178 - if (mmc->slots[0].internal_clock) {
179 - reg = omap_ctrl_readl(control_devconf1_offset);
178 + reg = omap_ctrl_readl(control_devconf1_offset);
179 + if (mmc->slots[0].internal_clock)
180 180 reg |= OMAP2_MMCSDIO2ADPCLKISEL;
181 - omap_ctrl_writel(reg, control_devconf1_offset);
182 - }
181 + else
182 + reg &= ~OMAP2_MMCSDIO2ADPCLKISEL;
183 + omap_ctrl_writel(reg, control_devconf1_offset);
183 184 }
184 185
185 - static void hsmmc23_before_set_reg(struct device *dev, int slot,
186 + static void hsmmc2_before_set_reg(struct device *dev, int slot,
186 187 int power_on, int vdd)
187 188 {
188 189 struct omap_mmc_platform_data *mmc = dev->platform_data;
··· 408 407 c->caps &= ~MMC_CAP_8_BIT_DATA;
409 408 c->caps |= MMC_CAP_4_BIT_DATA;
410 409 }
411 - /* FALLTHROUGH */
412 - case 3:
413 410 if (mmc->slots[0].features & HSMMC_HAS_PBIAS) {
414 411 /* off-chip level shifting, or none */
415 - mmc->slots[0].before_set_reg = hsmmc23_before_set_reg;
412 + mmc->slots[0].before_set_reg = hsmmc2_before_set_reg;
416 413 mmc->slots[0].after_set_reg = NULL;
417 414 }
418 415 break;
416 + case 3:
419 417 case 4:
420 418 case 5:
421 419 mmc->slots[0].before_set_reg = NULL;
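The first hsmmc.c hunk makes the register update unconditional: the old code only ever *set* `OMAP2_MMCSDIO2ADPCLKISEL`, so switching away from the internal clock could leave the bit stale. The fixed shape reads the register once, then sets or clears the bit depending on the flag, and always writes back. A pure-function sketch of that shape (register I/O modeled as a plain value; the bit position is an assumption):

```c
#include <assert.h>
#include <stdint.h>

#define CLKSEL_BIT (1u << 6)	/* stands in for OMAP2_MMCSDIO2ADPCLKISEL */

/* Unconditional read-modify-write: the bit is cleared when the feature
 * is off, instead of leaving a stale value behind. */
static uint32_t select_input_clk(uint32_t reg, int internal_clock)
{
	if (internal_clock)
		reg |= CLKSEL_BIT;
	else
		reg &= ~CLKSEL_BIT;
	return reg;
}
```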
+3 -1
arch/arm/mach-omap2/io.c
··· 388 388 omap_pm_if_early_init();
389 389 }
390 390
391 - #ifdef CONFIG_ARCH_OMAP2
391 + #ifdef CONFIG_SOC_OMAP2420
392 392 void __init omap2420_init_early(void)
393 393 {
394 394 omap2_set_globals_242x();
··· 400 400 omap_hwmod_init_postsetup();
401 401 omap2420_clk_init();
402 402 }
403 + #endif
403 404
405 + #ifdef CONFIG_SOC_OMAP2430
404 406 void __init omap2430_init_early(void)
405 407 {
406 408 omap2_set_globals_243x();
-21
arch/arm/mach-omap2/omap_hwmod_2xxx_3xxx_ipblock_data.c
··· 56 56 };
57 57
58 58 /*
59 - * 'dispc' class
60 - * display controller
61 - */
62 -
63 - static struct omap_hwmod_class_sysconfig omap2_dispc_sysc = {
64 - .rev_offs = 0x0000,
65 - .sysc_offs = 0x0010,
66 - .syss_offs = 0x0014,
67 - .sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE |
68 - SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
69 - .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
70 - MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
71 - .sysc_fields = &omap_hwmod_sysc_type1,
72 - };
73 -
74 - struct omap_hwmod_class omap2_dispc_hwmod_class = {
75 - .name = "dispc",
76 - .sysc = &omap2_dispc_sysc,
77 - };
78 -
79 - /*
80 59 * 'rfbi' class
81 60 * remote frame buffer interface
82 61 */
+22
arch/arm/mach-omap2/omap_hwmod_2xxx_ipblock_data.c
··· 28 28 { .name = "dispc", .dma_req = 5 },
29 29 { .dma_req = -1 }
30 30 };
31 +
32 + /*
33 + * 'dispc' class
34 + * display controller
35 + */
36 +
37 + static struct omap_hwmod_class_sysconfig omap2_dispc_sysc = {
38 + .rev_offs = 0x0000,
39 + .sysc_offs = 0x0010,
40 + .syss_offs = 0x0014,
41 + .sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE |
42 + SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE),
43 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
44 + MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART),
45 + .sysc_fields = &omap_hwmod_sysc_type1,
46 + };
47 +
48 + struct omap_hwmod_class omap2_dispc_hwmod_class = {
49 + .name = "dispc",
50 + .sysc = &omap2_dispc_sysc,
51 + };
52 +
31 53 /* OMAP2xxx Timer Common */
32 54 static struct omap_hwmod_class_sysconfig omap2xxx_timer_sysc = {
33 55 .rev_offs = 0x0000,
+47 -7
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 1480 1480 .masters_cnt = ARRAY_SIZE(omap3xxx_dss_masters), 1481 1481 }; 1482 1482 1483 + /* 1484 + * 'dispc' class 1485 + * display controller 1486 + */ 1487 + 1488 + static struct omap_hwmod_class_sysconfig omap3_dispc_sysc = { 1489 + .rev_offs = 0x0000, 1490 + .sysc_offs = 0x0010, 1491 + .syss_offs = 0x0014, 1492 + .sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_MIDLEMODE | 1493 + SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE | 1494 + SYSC_HAS_ENAWAKEUP), 1495 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | 1496 + MSTANDBY_FORCE | MSTANDBY_NO | MSTANDBY_SMART), 1497 + .sysc_fields = &omap_hwmod_sysc_type1, 1498 + }; 1499 + 1500 + static struct omap_hwmod_class omap3_dispc_hwmod_class = { 1501 + .name = "dispc", 1502 + .sysc = &omap3_dispc_sysc, 1503 + }; 1504 + 1483 1505 /* l4_core -> dss_dispc */ 1484 1506 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_dispc = { 1485 1507 .master = &omap3xxx_l4_core_hwmod, ··· 1525 1503 1526 1504 static struct omap_hwmod omap3xxx_dss_dispc_hwmod = { 1527 1505 .name = "dss_dispc", 1528 - .class = &omap2_dispc_hwmod_class, 1506 + .class = &omap3_dispc_hwmod_class, 1529 1507 .mpu_irqs = omap2_dispc_irqs, 1530 1508 .main_clk = "dss1_alwon_fck", 1531 1509 .prcm = { ··· 3545 3523 &omap3xxx_uart2_hwmod, 3546 3524 &omap3xxx_uart3_hwmod, 3547 3525 3548 - /* dss class */ 3549 - &omap3xxx_dss_dispc_hwmod, 3550 - &omap3xxx_dss_dsi1_hwmod, 3551 - &omap3xxx_dss_rfbi_hwmod, 3552 - &omap3xxx_dss_venc_hwmod, 3553 - 3554 3526 /* i2c class */ 3555 3527 &omap3xxx_i2c1_hwmod, 3556 3528 &omap3xxx_i2c2_hwmod, ··· 3651 3635 NULL 3652 3636 }; 3653 3637 3638 + static __initdata struct omap_hwmod *omap3xxx_dss_hwmods[] = { 3639 + /* dss class */ 3640 + &omap3xxx_dss_dispc_hwmod, 3641 + &omap3xxx_dss_dsi1_hwmod, 3642 + &omap3xxx_dss_rfbi_hwmod, 3643 + &omap3xxx_dss_venc_hwmod, 3644 + NULL 3645 + }; 3646 + 3654 3647 int __init omap3xxx_hwmod_init(void) 3655 3648 { 3656 3649 int r; ··· 3733 3708 3734 3709 if (h) 3735 3710 r = omap_hwmod_register(h); 
3711 + if (r < 0) 3712 + return r; 3713 + 3714 + /* 3715 + * DSS code presumes that dss_core hwmod is handled first, 3716 + * _before_ any other DSS related hwmods so register common 3717 + * DSS hwmods last to ensure that dss_core is already registered. 3718 + * Otherwise some strange things may happen, for ex. if dispc 3719 + * is handled before dss_core and DSS is enabled in bootloader 3720 + * DISPC will be reset with outputs enabled which sometimes leads 3721 + * to unrecoverable L3 error. 3722 + * XXX The long-term fix to this is to ensure modules are set up 3723 + * in dependency order in the hwmod core code. 3724 + */ 3725 + r = omap_hwmod_register(omap3xxx_dss_hwmods); 3736 3726 3737 3727 return r; 3738 3728 }
+2
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
··· 1031 1031 1032 1032 static struct omap_hwmod_addr_space omap44xx_dmic_addrs[] = { 1033 1033 { 1034 + .name = "mpu", 1034 1035 .pa_start = 0x4012e000, 1035 1036 .pa_end = 0x4012e07f, 1036 1037 .flags = ADDR_TYPE_RT ··· 1050 1049 1051 1050 static struct omap_hwmod_addr_space omap44xx_dmic_dma_addrs[] = { 1052 1051 { 1052 + .name = "dma", 1053 1053 .pa_start = 0x4902e000, 1054 1054 .pa_end = 0x4902e07f, 1055 1055 .flags = ADDR_TYPE_RT
+1
arch/arm/mach-omap2/prm2xxx_3xxx.c
··· 19 19 #include "common.h" 20 20 #include <plat/cpu.h> 21 21 #include <plat/prcm.h> 22 + #include <plat/irqs.h> 22 23 23 24 #include "vp.h" 24 25
+1 -1
arch/arm/mach-omap2/smartreflex.c
··· 897 897 ret = sr_late_init(sr_info); 898 898 if (ret) { 899 899 pr_warning("%s: Error in SR late init\n", __func__); 900 - return ret; 900 + goto err_iounmap; 901 901 } 902 902 } 903 903
+1 -1
arch/arm/mach-omap2/timer.c
··· 270 270 static u32 notrace dmtimer_read_sched_clock(void) 271 271 { 272 272 if (clksrc.reserved) 273 - return __omap_dm_timer_read_counter(clksrc.io_base, 1); 273 + return __omap_dm_timer_read_counter(&clksrc, 1); 274 274 275 275 return 0; 276 276 }
+2
arch/arm/mach-shmobile/setup-sh7372.c
··· 662 662 .dmaor_is_32bit = 1, 663 663 .needs_tend_set = 1, 664 664 .no_dmars = 1, 665 + .slave_only = 1, 665 666 }; 666 667 667 668 static struct resource sh7372_usb_dmae0_resources[] = { ··· 724 723 .dmaor_is_32bit = 1, 725 724 .needs_tend_set = 1, 726 725 .no_dmars = 1, 726 + .slave_only = 1, 727 727 }; 728 728 729 729 static struct resource sh7372_usb_dmae1_resources[] = {
+1 -2
arch/arm/mm/ioremap.c
··· 225 225 if ((area->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype)) 226 226 continue; 227 227 if (__phys_to_pfn(area->phys_addr) > pfn || 228 - __pfn_to_phys(pfn) + offset + size-1 > 229 - area->phys_addr + area->size-1) 228 + __pfn_to_phys(pfn) + size-1 > area->phys_addr + area->size-1) 230 229 continue; 231 230 /* we can drop the lock here as we know *area is static */ 232 231 read_unlock(&vmlist_lock);
+1
arch/avr32/Kconfig
··· 8 8 select HAVE_KPROBES 9 9 select HAVE_GENERIC_HARDIRQS 10 10 select GENERIC_IRQ_PROBE 11 + select GENERIC_ATOMIC64 11 12 select HARDIRQS_SW_RESEND 12 13 select GENERIC_IRQ_SHOW 13 14 select ARCH_HAVE_NMI_SAFE_CMPXCHG
+1 -20
arch/microblaze/kernel/setup.c
··· 26 26 #include <linux/cache.h> 27 27 #include <linux/of_platform.h> 28 28 #include <linux/dma-mapping.h> 29 - #include <linux/cpu.h> 30 29 #include <asm/cacheflush.h> 31 30 #include <asm/entry.h> 32 31 #include <asm/cpuinfo.h> ··· 226 227 227 228 return 0; 228 229 } 230 + 229 231 arch_initcall(setup_bus_notifier); 230 - 231 - static DEFINE_PER_CPU(struct cpu, cpu_devices); 232 - 233 - static int __init topology_init(void) 234 - { 235 - int i, ret; 236 - 237 - for_each_present_cpu(i) { 238 - struct cpu *c = &per_cpu(cpu_devices, i); 239 - 240 - ret = register_cpu(c, i); 241 - if (ret) 242 - printk(KERN_WARNING "topology_init: register_cpu %d " 243 - "failed (%d)\n", i, ret); 244 - } 245 - 246 - return 0; 247 - } 248 - subsys_initcall(topology_init);
+1
arch/mips/Kconfig
··· 2356 2356 depends on HW_HAS_PCI 2357 2357 select PCI_DOMAINS 2358 2358 select GENERIC_PCI_IOMAP 2359 + select NO_GENERIC_PCI_IOPORT_MAP 2359 2360 help 2360 2361 Find out whether you have a PCI motherboard. PCI is the name of a 2361 2362 bus system, i.e. the way the CPU talks to the other stuff inside
+2 -2
arch/mips/lib/iomap-pci.c
··· 10 10 #include <linux/module.h> 11 11 #include <asm/io.h> 12 12 13 - static void __iomem *ioport_map_pci(struct pci_dev *dev, 14 - unsigned long port, unsigned int nr) 13 + void __iomem *__pci_ioport_map(struct pci_dev *dev, 14 + unsigned long port, unsigned int nr) 15 15 { 16 16 struct pci_controller *ctrl = dev->bus->sysdata; 17 17 unsigned long base = ctrl->io_map_base;
+1
arch/sh/Kconfig
··· 859 859 depends on SYS_SUPPORTS_PCI 860 860 select PCI_DOMAINS 861 861 select GENERIC_PCI_IOMAP 862 + select NO_GENERIC_PCI_IOPORT_MAP 862 863 help 863 864 Find out whether you have a PCI motherboard. PCI is the name of a 864 865 bus system, i.e. the way the CPU talks to the other stuff inside
+2 -2
arch/sh/drivers/pci/pci.c
··· 356 356 357 357 #ifndef CONFIG_GENERIC_IOMAP 358 358 359 - static void __iomem *ioport_map_pci(struct pci_dev *dev, 360 - unsigned long port, unsigned int nr) 359 + void __iomem *__pci_ioport_map(struct pci_dev *dev, 360 + unsigned long port, unsigned int nr) 361 361 { 362 362 struct pci_channel *chan = dev->sysdata; 363 363
+1
arch/sparc/Kconfig
··· 33 33 config SPARC32 34 34 def_bool !64BIT 35 35 select GENERIC_ATOMIC64 36 + select CLZ_TAB 36 37 37 38 config SPARC64 38 39 def_bool 64BIT
+1 -15
arch/sparc/lib/divdi3.S
··· 17 17 the Free Software Foundation, 59 Temple Place - Suite 330, 18 18 Boston, MA 02111-1307, USA. */ 19 19 20 - .data 21 - .align 8 22 - .globl __clz_tab 23 - __clz_tab: 24 - .byte 0,1,2,2,3,3,3,3,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5 25 - .byte 6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6 26 - .byte 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7 27 - .byte 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7 28 - .byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8 29 - .byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8 30 - .byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8 31 - .byte 8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8 32 - .size __clz_tab,256 33 - .global .udiv 34 - 35 20 .text 36 21 .align 4 22 + .global .udiv 37 23 .globl __divdi3 38 24 __divdi3: 39 25 save %sp,-104,%sp
+3 -3
arch/x86/include/asm/cmpxchg.h
··· 145 145 146 146 #ifdef __HAVE_ARCH_CMPXCHG 147 147 #define cmpxchg(ptr, old, new) \ 148 - __cmpxchg((ptr), (old), (new), sizeof(*ptr)) 148 + __cmpxchg(ptr, old, new, sizeof(*(ptr))) 149 149 150 150 #define sync_cmpxchg(ptr, old, new) \ 151 - __sync_cmpxchg((ptr), (old), (new), sizeof(*ptr)) 151 + __sync_cmpxchg(ptr, old, new, sizeof(*(ptr))) 152 152 153 153 #define cmpxchg_local(ptr, old, new) \ 154 - __cmpxchg_local((ptr), (old), (new), sizeof(*ptr)) 154 + __cmpxchg_local(ptr, old, new, sizeof(*(ptr))) 155 155 #endif 156 156 157 157 /*
+16
arch/x86/include/asm/kvm_emulate.h
··· 190 190 int (*intercept)(struct x86_emulate_ctxt *ctxt, 191 191 struct x86_instruction_info *info, 192 192 enum x86_intercept_stage stage); 193 + 194 + bool (*get_cpuid)(struct x86_emulate_ctxt *ctxt, 195 + u32 *eax, u32 *ebx, u32 *ecx, u32 *edx); 193 196 }; 194 197 195 198 typedef u32 __attribute__((vector_size(16))) sse128_t; ··· 300 297 /* any protected mode */ 301 298 #define X86EMUL_MODE_PROT (X86EMUL_MODE_PROT16|X86EMUL_MODE_PROT32| \ 302 299 X86EMUL_MODE_PROT64) 300 + 301 + /* CPUID vendors */ 302 + #define X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx 0x68747541 303 + #define X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx 0x444d4163 304 + #define X86EMUL_CPUID_VENDOR_AuthenticAMD_edx 0x69746e65 305 + 306 + #define X86EMUL_CPUID_VENDOR_AMDisbetterI_ebx 0x69444d41 307 + #define X86EMUL_CPUID_VENDOR_AMDisbetterI_ecx 0x21726574 308 + #define X86EMUL_CPUID_VENDOR_AMDisbetterI_edx 0x74656273 309 + 310 + #define X86EMUL_CPUID_VENDOR_GenuineIntel_ebx 0x756e6547 311 + #define X86EMUL_CPUID_VENDOR_GenuineIntel_ecx 0x6c65746e 312 + #define X86EMUL_CPUID_VENDOR_GenuineIntel_edx 0x49656e69 303 313 304 314 enum x86_intercept_stage { 305 315 X86_ICTP_NONE = 0, /* Allow zero-init to not match anything */
+2 -1
arch/x86/kernel/dumpstack.c
··· 252 252 unsigned short ss; 253 253 unsigned long sp; 254 254 #endif 255 - printk(KERN_EMERG "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter); 255 + printk(KERN_DEFAULT 256 + "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter); 256 257 #ifdef CONFIG_PREEMPT 257 258 printk("PREEMPT "); 258 259 #endif
+4 -4
arch/x86/kernel/dumpstack_64.c
··· 129 129 if (!stack) { 130 130 if (regs) 131 131 stack = (unsigned long *)regs->sp; 132 - else if (task && task != current) 132 + else if (task != current) 133 133 stack = (unsigned long *)task->thread.sp; 134 134 else 135 135 stack = &dummy; ··· 269 269 unsigned char c; 270 270 u8 *ip; 271 271 272 - printk(KERN_EMERG "Stack:\n"); 272 + printk(KERN_DEFAULT "Stack:\n"); 273 273 show_stack_log_lvl(NULL, regs, (unsigned long *)sp, 274 - 0, KERN_EMERG); 274 + 0, KERN_DEFAULT); 275 275 276 - printk(KERN_EMERG "Code: "); 276 + printk(KERN_DEFAULT "Code: "); 277 277 278 278 ip = (u8 *)regs->ip - code_prologue; 279 279 if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {
+26 -10
arch/x86/kernel/reboot.c
··· 39 39 enum reboot_type reboot_type = BOOT_ACPI; 40 40 int reboot_force; 41 41 42 + /* This variable is used privately to keep track of whether or not 43 + * reboot_type is still set to its default value (i.e., reboot= hasn't 44 + * been set on the command line). This is needed so that we can 45 + * suppress DMI scanning for reboot quirks. Without it, it's 46 + * impossible to override a faulty reboot quirk without recompiling. 47 + */ 48 + static int reboot_default = 1; 49 + 42 50 #if defined(CONFIG_X86_32) && defined(CONFIG_SMP) 43 51 static int reboot_cpu = -1; 44 52 #endif ··· 75 67 static int __init reboot_setup(char *str) 76 68 { 77 69 for (;;) { 70 + /* Having anything passed on the command line via 71 + * reboot= will cause us to disable DMI checking 72 + * below. 73 + */ 74 + reboot_default = 0; 75 + 78 76 switch (*str) { 79 77 case 'w': 80 78 reboot_mode = 0x1234; ··· 309 295 DMI_MATCH(DMI_BOARD_NAME, "P4S800"), 310 296 }, 311 297 }, 312 - { /* Handle problems with rebooting on VersaLogic Menlow boards */ 313 - .callback = set_bios_reboot, 314 - .ident = "VersaLogic Menlow based board", 315 - .matches = { 316 - DMI_MATCH(DMI_BOARD_VENDOR, "VersaLogic Corporation"), 317 - DMI_MATCH(DMI_BOARD_NAME, "VersaLogic Menlow board"), 318 - }, 319 - }, 320 298 { /* Handle reboot issue on Acer Aspire one */ 321 299 .callback = set_kbd_reboot, 322 300 .ident = "Acer Aspire One A110", ··· 322 316 323 317 static int __init reboot_init(void) 324 318 { 325 - dmi_check_system(reboot_dmi_table); 319 + /* Only do the DMI check if reboot_type hasn't been overridden 320 + * on the command line 321 + */ 322 + if (reboot_default) { 323 + dmi_check_system(reboot_dmi_table); 324 + } 326 325 return 0; 327 326 } 328 327 core_initcall(reboot_init); ··· 476 465 477 466 static int __init pci_reboot_init(void) 478 467 { 479 - dmi_check_system(pci_reboot_dmi_table); 468 + /* Only do the DMI check if reboot_type hasn't been overridden 469 + * on the command line 470 + */
471 + if (reboot_default) { 472 + dmi_check_system(pci_reboot_dmi_table); 473 + } 480 474 return 0; 481 475 } 482 476 core_initcall(pci_reboot_init);
+51
arch/x86/kvm/emulate.c
··· 1891 1891 ss->p = 1; 1892 1892 } 1893 1893 1894 + static bool em_syscall_is_enabled(struct x86_emulate_ctxt *ctxt) 1895 + { 1896 + struct x86_emulate_ops *ops = ctxt->ops; 1897 + u32 eax, ebx, ecx, edx; 1898 + 1899 + /* 1900 + * syscall should always be enabled in longmode - so only become 1901 + * vendor specific (cpuid) if other modes are active... 1902 + */ 1903 + if (ctxt->mode == X86EMUL_MODE_PROT64) 1904 + return true; 1905 + 1906 + eax = 0x00000000; 1907 + ecx = 0x00000000; 1908 + if (ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx)) { 1909 + /* 1910 + * Intel ("GenuineIntel") 1911 + * remark: Intel CPUs only support "syscall" in 64bit 1912 + * longmode. Also a 64bit guest with a 1913 + * 32bit compat-app running will #UD !! While this 1914 + * behaviour can be fixed (by emulating) into AMD 1915 + * response - CPUs of AMD can't behave like Intel. 1916 + */ 1917 + if (ebx == X86EMUL_CPUID_VENDOR_GenuineIntel_ebx && 1918 + ecx == X86EMUL_CPUID_VENDOR_GenuineIntel_ecx && 1919 + edx == X86EMUL_CPUID_VENDOR_GenuineIntel_edx) 1920 + return false; 1921 + 1922 + /* AMD ("AuthenticAMD") */ 1923 + if (ebx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx && 1924 + ecx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx && 1925 + edx == X86EMUL_CPUID_VENDOR_AuthenticAMD_edx) 1926 + return true; 1927 + 1928 + /* AMD ("AMDisbetter!") */ 1929 + if (ebx == X86EMUL_CPUID_VENDOR_AMDisbetterI_ebx && 1930 + ecx == X86EMUL_CPUID_VENDOR_AMDisbetterI_ecx && 1931 + edx == X86EMUL_CPUID_VENDOR_AMDisbetterI_edx) 1932 + return true; 1933 + 1934 + /* default: (not Intel, not AMD), apply Intel's stricter rules... */
1936 + return false; 1937 + } 1938 + 1894 1939 static int em_syscall(struct x86_emulate_ctxt *ctxt) 1895 1940 { 1896 1941 struct x86_emulate_ops *ops = ctxt->ops; ··· 1949 1904 ctxt->mode == X86EMUL_MODE_VM86) 1950 1905 return emulate_ud(ctxt); 1951 1906 1907 + if (!(em_syscall_is_enabled(ctxt))) 1908 + return emulate_ud(ctxt); 1909 + 1952 1910 ops->get_msr(ctxt, MSR_EFER, &efer); 1953 1911 setup_syscalls_segments(ctxt, &cs, &ss); 1912 + 1913 + if (!(efer & EFER_SCE)) 1914 + return emulate_ud(ctxt); 1954 1915 1955 1916 ops->get_msr(ctxt, MSR_STAR, &msr_data); 1956 1917 msr_data >>= 32;
+45
arch/x86/kvm/x86.c
··· 1495 1495 1496 1496 int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data) 1497 1497 { 1498 + bool pr = false; 1499 + 1498 1500 switch (msr) { 1499 1501 case MSR_EFER: 1500 1502 return set_efer(vcpu, data); ··· 1636 1634 case MSR_K7_PERFCTR3: 1637 1635 pr_unimpl(vcpu, "unimplemented perfctr wrmsr: " 1638 1636 "0x%x data 0x%llx\n", msr, data); 1637 + break; 1638 + case MSR_P6_PERFCTR0: 1639 + case MSR_P6_PERFCTR1: 1640 + pr = true; 1641 + case MSR_P6_EVNTSEL0: 1642 + case MSR_P6_EVNTSEL1: 1643 + if (kvm_pmu_msr(vcpu, msr)) 1644 + return kvm_pmu_set_msr(vcpu, msr, data); 1645 + 1646 + if (pr || data != 0) 1647 + pr_unimpl(vcpu, "disabled perfctr wrmsr: " 1648 + "0x%x data 0x%llx\n", msr, data); 1639 1649 break; 1640 1650 case MSR_K7_CLK_CTL: 1641 1651 /* ··· 1847 1833 case MSR_K8_INT_PENDING_MSG: 1848 1834 case MSR_AMD64_NB_CFG: 1849 1835 case MSR_FAM10H_MMIO_CONF_BASE: 1836 + data = 0; 1837 + break; 1838 + case MSR_P6_PERFCTR0: 1839 + case MSR_P6_PERFCTR1: 1840 + case MSR_P6_EVNTSEL0: 1841 + case MSR_P6_EVNTSEL1: 1842 + if (kvm_pmu_msr(vcpu, msr)) 1843 + return kvm_pmu_get_msr(vcpu, msr, pdata); 1850 1844 data = 0; 1851 1845 break; 1852 1846 case MSR_IA32_UCODE_REV: ··· 4202 4180 return kvm_x86_ops->check_intercept(emul_to_vcpu(ctxt), info, stage); 4203 4181 } 4204 4182 4183 + static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt, 4184 + u32 *eax, u32 *ebx, u32 *ecx, u32 *edx) 4185 + { 4186 + struct kvm_cpuid_entry2 *cpuid = NULL; 4187 + 4188 + if (eax && ecx) 4189 + cpuid = kvm_find_cpuid_entry(emul_to_vcpu(ctxt), 4190 + *eax, *ecx); 4191 + 4192 + if (cpuid) { 4193 + *eax = cpuid->eax; 4194 + *ecx = cpuid->ecx; 4195 + if (ebx) 4196 + *ebx = cpuid->ebx; 4197 + if (edx) 4198 + *edx = cpuid->edx; 4199 + return true; 4200 + } 4201 + 4202 + return false; 4203 + } 4204 + 4205 4205 static struct x86_emulate_ops emulate_ops = { 4206 4206 .read_std = kvm_read_guest_virt_system, 4207 4207 .write_std = kvm_write_guest_virt_system, ··· 4255 4211
.get_fpu = emulator_get_fpu, 4256 4212 .put_fpu = emulator_put_fpu, 4257 4213 .intercept = emulator_intercept, 4214 + .get_cpuid = emulator_get_cpuid, 4258 4215 }; 4259 4216 4260 4217 static void cache_all_regs(struct kvm_vcpu *vcpu)
+2 -2
arch/x86/mm/fault.c
··· 673 673 674 674 stackend = end_of_stack(tsk); 675 675 if (tsk != &init_task && *stackend != STACK_END_MAGIC) 676 - printk(KERN_ALERT "Thread overran stack, or stack corrupted\n"); 676 + printk(KERN_EMERG "Thread overran stack, or stack corrupted\n"); 677 677 678 678 tsk->thread.cr2 = address; 679 679 tsk->thread.trap_no = 14; ··· 684 684 sig = 0; 685 685 686 686 /* Executive summary in case the body of the oops scrolled away */ 687 - printk(KERN_EMERG "CR2: %016lx\n", address); 687 + printk(KERN_DEFAULT "CR2: %016lx\n", address); 688 688 689 689 oops_end(flags, regs, sig); 690 690 }
-3
arch/xtensa/include/asm/string.h
··· 118 118 /* Don't build bcopy at all ... */ 119 119 #define __HAVE_ARCH_BCOPY 120 120 121 - #define __HAVE_ARCH_MEMSCAN 122 - #define memscan memchr 123 - 124 121 #endif /* _XTENSA_STRING_H */
-7
drivers/acpi/processor_driver.c
··· 586 586 if (pr->flags.need_hotplug_init) 587 587 return 0; 588 588 589 - /* 590 - * Do not start hotplugged CPUs now, but when they 591 - * are onlined the first time 592 - */ 593 - if (pr->flags.need_hotplug_init) 594 - return 0; 595 - 596 589 result = acpi_processor_start(pr); 597 590 if (result) 598 591 goto err_remove_sysfs;
+5 -2
drivers/block/rbd.c
··· 380 380 rbdc = __rbd_client_find(opt); 381 381 if (rbdc) { 382 382 ceph_destroy_options(opt); 383 + kfree(rbd_opts); 383 384 384 385 /* using an existing client */ 385 386 kref_get(&rbdc->kref); ··· 407 406 408 407 /* 409 408 * Destroy ceph client 409 + * 410 + * Caller must hold node_lock. 410 411 */ 411 412 static void rbd_client_release(struct kref *kref) 412 413 { 413 414 struct rbd_client *rbdc = container_of(kref, struct rbd_client, kref); 414 415 415 416 dout("rbd_release_client %p\n", rbdc); 416 - spin_lock(&node_lock); 417 417 list_del(&rbdc->node); 418 - spin_unlock(&node_lock); 419 418 420 419 ceph_destroy_client(rbdc->client); 421 420 kfree(rbdc->rbd_opts); ··· 428 427 */ 429 428 static void rbd_put_client(struct rbd_device *rbd_dev) 430 429 { 430 + spin_lock(&node_lock); 431 431 kref_put(&rbd_dev->rbd_client->kref, rbd_client_release); 432 + spin_unlock(&node_lock); 432 433 rbd_dev->rbd_client = NULL; 433 434 rbd_dev->client = NULL; 434 435 }
+2 -2
drivers/dma/at_hdmac.c
··· 1343 1343 1344 1344 tasklet_init(&atchan->tasklet, atc_tasklet, 1345 1345 (unsigned long)atchan); 1346 - atc_enable_irq(atchan); 1346 + atc_enable_chan_irq(atdma, i); 1347 1347 } 1348 1348 1349 1349 /* set base routines */ ··· 1410 1410 struct at_dma_chan *atchan = to_at_dma_chan(chan); 1411 1411 1412 1412 /* Disable interrupts */ 1413 - atc_disable_irq(atchan); 1413 + atc_disable_chan_irq(atdma, chan->chan_id); 1414 1414 tasklet_disable(&atchan->tasklet); 1415 1415 1416 1416 tasklet_kill(&atchan->tasklet);
+8 -9
drivers/dma/at_hdmac_regs.h
··· 327 327 } 328 328 329 329 330 - static void atc_setup_irq(struct at_dma_chan *atchan, int on) 330 + static void atc_setup_irq(struct at_dma *atdma, int chan_id, int on) 331 331 { 332 - struct at_dma *atdma = to_at_dma(atchan->chan_common.device); 333 - u32 ebci; 332 + u32 ebci; 334 333 335 334 /* enable interrupts on buffer transfer completion & error */ 336 - ebci = AT_DMA_BTC(atchan->chan_common.chan_id) 337 - | AT_DMA_ERR(atchan->chan_common.chan_id); 335 + ebci = AT_DMA_BTC(chan_id) 336 + | AT_DMA_ERR(chan_id); 338 337 if (on) 339 338 dma_writel(atdma, EBCIER, ebci); 340 339 else 341 340 dma_writel(atdma, EBCIDR, ebci); 342 341 } 343 342 344 - static inline void atc_enable_irq(struct at_dma_chan *atchan) 343 + static void atc_enable_chan_irq(struct at_dma *atdma, int chan_id) 345 344 { 346 - atc_setup_irq(atchan, 1); 345 + atc_setup_irq(atdma, chan_id, 1); 347 346 } 348 347 349 - static inline void atc_disable_irq(struct at_dma_chan *atchan) 348 + static void atc_disable_chan_irq(struct at_dma *atdma, int chan_id) 350 349 { 351 - atc_setup_irq(atchan, 0); 350 + atc_setup_irq(atdma, chan_id, 0); 352 351 } 353 352 354 353
+1 -1
drivers/dma/dmatest.c
··· 599 599 } 600 600 if (dma_has_cap(DMA_PQ, dma_dev->cap_mask)) { 601 601 cnt = dmatest_add_threads(dtc, DMA_PQ); 602 - thread_count += cnt > 0 ?: 0; 602 + thread_count += cnt > 0 ? cnt : 0; 603 603 } 604 604 605 605 pr_info("dmatest: Started %u threads using %s\n",
+4 -2
drivers/dma/imx-sdma.c
··· 1102 1102 case DMA_SLAVE_CONFIG: 1103 1103 if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) { 1104 1104 sdmac->per_address = dmaengine_cfg->src_addr; 1105 - sdmac->watermark_level = dmaengine_cfg->src_maxburst; 1105 + sdmac->watermark_level = dmaengine_cfg->src_maxburst * 1106 + dmaengine_cfg->src_addr_width; 1106 1107 sdmac->word_size = dmaengine_cfg->src_addr_width; 1107 1108 } else { 1108 1109 sdmac->per_address = dmaengine_cfg->dst_addr; 1109 - sdmac->watermark_level = dmaengine_cfg->dst_maxburst; 1110 + sdmac->watermark_level = dmaengine_cfg->dst_maxburst * 1111 + dmaengine_cfg->dst_addr_width; 1110 1112 sdmac->word_size = dmaengine_cfg->dst_addr_width; 1111 1113 } 1112 1114 sdmac->direction = dmaengine_cfg->direction;
+2 -1
drivers/dma/shdma.c
··· 1262 1262 1263 1263 INIT_LIST_HEAD(&shdev->common.channels); 1264 1264 1265 - dma_cap_set(DMA_MEMCPY, shdev->common.cap_mask); 1265 + if (!pdata->slave_only) 1266 + dma_cap_set(DMA_MEMCPY, shdev->common.cap_mask); 1266 1267 if (pdata->slave && pdata->slave_num) 1267 1268 dma_cap_set(DMA_SLAVE, shdev->common.cap_mask); 1268 1269
+5 -1
drivers/firewire/ohci.c
··· 263 263 static char ohci_driver_name[] = KBUILD_MODNAME; 264 264 265 265 #define PCI_DEVICE_ID_AGERE_FW643 0x5901 266 + #define PCI_DEVICE_ID_CREATIVE_SB1394 0x4001 266 267 #define PCI_DEVICE_ID_JMICRON_JMB38X_FW 0x2380 267 268 #define PCI_DEVICE_ID_TI_TSB12LV22 0x8009 268 269 #define PCI_DEVICE_ID_TI_TSB12LV26 0x8020 ··· 290 289 {PCI_VENDOR_ID_ATT, PCI_DEVICE_ID_AGERE_FW643, 6, 291 290 QUIRK_NO_MSI}, 292 291 292 + {PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_SB1394, PCI_ANY_ID, 293 + QUIRK_RESET_PACKET}, 294 + 293 295 {PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB38X_FW, PCI_ANY_ID, 294 296 QUIRK_NO_MSI}, 295 297 ··· 303 299 QUIRK_NO_MSI}, 304 300 305 301 {PCI_VENDOR_ID_RICOH, PCI_ANY_ID, PCI_ANY_ID, 306 - QUIRK_CYCLE_TIMER}, 302 + QUIRK_CYCLE_TIMER | QUIRK_NO_MSI}, 307 303 308 304 {PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_TSB12LV22, PCI_ANY_ID, 309 305 QUIRK_CYCLE_TIMER | QUIRK_RESET_PACKET | QUIRK_NO_1394A},
+1 -1
drivers/gpio/gpio-lpc32xx.c
··· 96 96 }; 97 97 98 98 static const char *gpio_p3_names[LPC32XX_GPIO_P3_MAX] = { 99 - "gpi000", "gpio01", "gpio02", "gpio03", 99 + "gpio00", "gpio01", "gpio02", "gpio03", 100 100 "gpio04", "gpio05" 101 101 }; 102 102
+1
drivers/gpio/gpio-ml-ioh.c
··· 448 448 chip->reg = chip->base; 449 449 chip->ch = i; 450 450 mutex_init(&chip->lock); 451 + spin_lock_init(&chip->spinlock); 451 452 ioh_gpio_setup(chip, num_ports[i]); 452 453 ret = gpiochip_add(&chip->gpio); 453 454 if (ret) {
+1
drivers/gpio/gpio-pch.c
··· 392 392 chip->reg = chip->base; 393 393 pci_set_drvdata(pdev, chip); 394 394 mutex_init(&chip->lock); 395 + spin_lock_init(&chip->spinlock); 395 396 pch_gpio_setup(chip); 396 397 ret = gpiochip_add(&chip->gpio); 397 398 if (ret) {
+13 -10
drivers/gpio/gpio-samsung.c
··· 2387 2387 }; 2388 2388 2389 2389 #if defined(CONFIG_ARCH_EXYNOS4) && defined(CONFIG_OF) 2390 - static int exynos4_gpio_xlate(struct gpio_chip *gc, struct device_node *np, 2391 - const void *gpio_spec, u32 *flags) 2390 + static int exynos4_gpio_xlate(struct gpio_chip *gc, 2391 + const struct of_phandle_args *gpiospec, u32 *flags) 2392 2392 { 2393 - const __be32 *gpio = gpio_spec; 2394 - const u32 n = be32_to_cpup(gpio); 2395 - unsigned int pin = gc->base + be32_to_cpu(gpio[0]); 2393 + unsigned int pin; 2396 2394 2397 2395 if (WARN_ON(gc->of_gpio_n_cells < 4)) 2398 2396 return -EINVAL; 2399 2397 2400 - if (n > gc->ngpio) 2398 + if (WARN_ON(gpiospec->args_count < gc->of_gpio_n_cells)) 2401 2399 return -EINVAL; 2402 2400 2403 - if (s3c_gpio_cfgpin(pin, S3C_GPIO_SFN(be32_to_cpu(gpio[1])))) 2401 + if (gpiospec->args[0] > gc->ngpio) 2402 + return -EINVAL; 2403 + 2404 + pin = gc->base + gpiospec->args[0]; 2405 + 2406 + if (s3c_gpio_cfgpin(pin, S3C_GPIO_SFN(gpiospec->args[1]))) 2404 2407 pr_warn("gpio_xlate: failed to set pin function\n"); 2405 - if (s3c_gpio_setpull(pin, be32_to_cpu(gpio[2]))) 2408 + if (s3c_gpio_setpull(pin, gpiospec->args[2])) 2406 2409 pr_warn("gpio_xlate: failed to set pin pull up/down\n"); 2407 - if (s5p_gpio_set_drvstr(pin, be32_to_cpu(gpio[3]))) 2410 + if (s5p_gpio_set_drvstr(pin, gpiospec->args[3])) 2408 2411 pr_warn("gpio_xlate: failed to set pin drive strength\n"); 2409 2412 2410 - return n; 2413 + return gpiospec->args[0]; 2411 2414 } 2412 2415 2413 2416 static const struct of_device_id exynos4_gpio_dt_match[] __initdata = {
+3 -2
drivers/gpu/drm/nouveau/nouveau_bios.h
··· 54 54 int bit_table(struct drm_device *, u8 id, struct bit_entry *); 55 55 56 56 enum dcb_gpio_tag { 57 - DCB_GPIO_TVDAC0 = 0xc, 57 + DCB_GPIO_PANEL_POWER = 0x01, 58 + DCB_GPIO_TVDAC0 = 0x0c, 58 59 DCB_GPIO_TVDAC1 = 0x2d, 59 - DCB_GPIO_PWM_FAN = 0x9, 60 + DCB_GPIO_PWM_FAN = 0x09, 60 61 DCB_GPIO_FAN_SENSE = 0x3d, 61 62 DCB_GPIO_UNUSED = 0xff 62 63 };
+10
drivers/gpu/drm/nouveau/nouveau_display.c
··· 219 219 if (ret) 220 220 return ret; 221 221 222 + /* power on internal panel if it's not already. the init tables of 223 + * some vbios default this to off for some reason, causing the 224 + * panel to not work after resume 225 + */ 226 + if (nouveau_gpio_func_get(dev, DCB_GPIO_PANEL_POWER) == 0) { 227 + nouveau_gpio_func_set(dev, DCB_GPIO_PANEL_POWER, true); 228 + msleep(300); 229 + } 230 + 231 + /* enable polling for external displays */ 222 232 drm_kms_helper_poll_enable(dev); 223 233 224 234 /* enable hotplug interrupts */
+1 -1
drivers/gpu/drm/nouveau/nouveau_drv.c
··· 124 124 int nouveau_ctxfw; 125 125 module_param_named(ctxfw, nouveau_ctxfw, int, 0400); 126 126 127 - MODULE_PARM_DESC(ctxfw, "Sanitise DCB table according to MXM-SIS\n"); 127 + MODULE_PARM_DESC(mxmdcb, "Sanitise DCB table according to MXM-SIS\n"); 128 128 int nouveau_mxmdcb = 1; 129 129 module_param_named(mxmdcb, nouveau_mxmdcb, int, 0400);
+21 -2
drivers/gpu/drm/nouveau/nouveau_gem.c
··· 380 380 } 381 381 382 382 static int 383 + validate_sync(struct nouveau_channel *chan, struct nouveau_bo *nvbo) 384 + { 385 + struct nouveau_fence *fence = NULL; 386 + int ret = 0; 387 + 388 + spin_lock(&nvbo->bo.bdev->fence_lock); 389 + if (nvbo->bo.sync_obj) 390 + fence = nouveau_fence_ref(nvbo->bo.sync_obj); 391 + spin_unlock(&nvbo->bo.bdev->fence_lock); 392 + 393 + if (fence) { 394 + ret = nouveau_fence_sync(fence, chan); 395 + nouveau_fence_unref(&fence); 396 + } 397 + 398 + return ret; 399 + } 400 + 401 + static int 383 402 validate_list(struct nouveau_channel *chan, struct list_head *list, 384 403 struct drm_nouveau_gem_pushbuf_bo *pbbo, uint64_t user_pbbo_ptr) 385 404 { ··· 412 393 list_for_each_entry(nvbo, list, entry) { 413 394 struct drm_nouveau_gem_pushbuf_bo *b = &pbbo[nvbo->pbbo_index]; 414 395 415 - ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan); 396 + ret = validate_sync(chan, nvbo); 416 397 if (unlikely(ret)) { 417 398 NV_ERROR(dev, "fail pre-validate sync\n"); 418 399 return ret; ··· 435 416 return ret; 436 417 } 437 418 438 - ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan); 419 + ret = validate_sync(chan, nvbo); 439 420 if (unlikely(ret)) { 440 421 NV_ERROR(dev, "fail post-validate sync\n"); 441 422 return ret;
+9
drivers/gpu/drm/nouveau/nouveau_mxm.c
··· 656 656 657 657 if (mxm_shadow(dev, mxm[0])) { 658 658 MXM_MSG(dev, "failed to locate valid SIS\n"); 659 + #if 0 660 + /* we should, perhaps, fall back to some kind of limited 661 + * mode here if the x86 vbios hasn't already done the 662 + * work for us (so we prevent loading with completely 663 + * whacked vbios tables). 664 + */ 659 665 return -EINVAL; 666 + #else 667 + return 0; 668 + #endif 660 669 } 661 670 662 671 MXM_MSG(dev, "MXMS Version %d.%d\n",
+2 -2
drivers/gpu/drm/nouveau/nv50_pm.c
··· 495 495 struct drm_nouveau_private *dev_priv = dev->dev_private; 496 496 struct nv50_pm_state *info; 497 497 struct pll_lims pll; 498 - int ret = -EINVAL; 498 + int clk, ret = -EINVAL; 499 499 int N, M, P1, P2; 500 - u32 clk, out; 500 + u32 out; 501 501 502 502 if (dev_priv->chipset == 0xaa || 503 503 dev_priv->chipset == 0xac)
+2 -2
drivers/gpu/drm/radeon/atombios_crtc.c
··· 1184 1184 WREG32(EVERGREEN_GRPH_ENABLE + radeon_crtc->crtc_offset, 1); 1185 1185 1186 1186 WREG32(EVERGREEN_DESKTOP_HEIGHT + radeon_crtc->crtc_offset, 1187 - crtc->mode.vdisplay); 1187 + target_fb->height); 1188 1188 x &= ~3; 1189 1189 y &= ~1; 1190 1190 WREG32(EVERGREEN_VIEWPORT_START + radeon_crtc->crtc_offset, ··· 1353 1353 WREG32(AVIVO_D1GRPH_ENABLE + radeon_crtc->crtc_offset, 1); 1354 1354 1355 1355 WREG32(AVIVO_D1MODE_DESKTOP_HEIGHT + radeon_crtc->crtc_offset, 1356 - crtc->mode.vdisplay); 1356 + target_fb->height); 1357 1357 x &= ~3; 1358 1358 y &= ~1; 1359 1359 WREG32(AVIVO_D1MODE_VIEWPORT_START + radeon_crtc->crtc_offset,
+15 -3
drivers/gpu/drm/radeon/atombios_dp.c
··· 564 564 ENCODER_OBJECT_ID_NUTMEG) 565 565 panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; 566 566 else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == 567 - ENCODER_OBJECT_ID_TRAVIS) 568 - panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 569 - else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 567 + ENCODER_OBJECT_ID_TRAVIS) { 568 + u8 id[6]; 569 + int i; 570 + for (i = 0; i < 6; i++) 571 + id[i] = radeon_read_dpcd_reg(radeon_connector, 0x503 + i); 572 + if (id[0] == 0x73 && 573 + id[1] == 0x69 && 574 + id[2] == 0x76 && 575 + id[3] == 0x61 && 576 + id[4] == 0x72 && 577 + id[5] == 0x54) 578 + panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; 579 + else 580 + panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 581 + } else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 570 582 u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 571 583 if (tmp & 1) 572 584 panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
+25 -10
drivers/gpu/drm/radeon/r600_blit_kms.c
··· 468 468 radeon_ring_write(ring, sq_stack_resource_mgmt_2); 469 469 } 470 470 471 + #define I2F_MAX_BITS 15 472 + #define I2F_MAX_INPUT ((1 << I2F_MAX_BITS) - 1) 473 + #define I2F_SHIFT (24 - I2F_MAX_BITS) 474 + 475 + /* 476 + * Converts unsigned integer into 32-bit IEEE floating point representation. 477 + * Conversion is not universal and only works for the range from 0 478 + * to 2^I2F_MAX_BITS-1. Currently we only use it with inputs between 479 + * 0 and 16384 (inclusive), so I2F_MAX_BITS=15 is enough. If necessary, 480 + * I2F_MAX_BITS can be increased, but that will add to the loop iterations 481 + * and slow us down. Conversion is done by shifting the input and counting 482 + * down until the first 1 reaches bit position 23. The resulting counter 483 + * and the shifted input are, respectively, the exponent and the fraction. 484 + * The sign is always zero. 485 + */ 471 486 static uint32_t i2f(uint32_t input) 472 487 { 473 488 u32 result, i, exponent, fraction; 474 489 475 - if ((input & 0x3fff) == 0) 476 - result = 0; /* 0 is a special case */ 490 + WARN_ON_ONCE(input > I2F_MAX_INPUT); 491 + 492 + if ((input & I2F_MAX_INPUT) == 0) 493 + result = 0; 477 494 else { 478 - exponent = 140; /* exponent biased by 127; */ 479 - fraction = (input & 0x3fff) << 10; /* cheat and only 480 - handle numbers below 2^^15 */ 481 - for (i = 0; i < 14; i++) { 495 + exponent = 126 + I2F_MAX_BITS; 496 + fraction = (input & I2F_MAX_INPUT) << I2F_SHIFT; 497 + 498 + for (i = 0; i < I2F_MAX_BITS; i++) { 482 499 if (fraction & 0x800000) 483 500 break; 484 501 else { 485 - fraction = fraction << 1; /* keep 486 - shifting left until top bit = 1 */ 502 + fraction = fraction << 1; 487 503 exponent = exponent - 1; 488 504 } 489 505 } 490 - result = exponent << 23 | (fraction & 0x7fffff); /* mask 491 - off top bit; assumed 1 */ 506 + result = exponent << 23 | (fraction & 0x7fffff); 492 507 } 493 508 return result; 494 509 }
+2 -1
drivers/gpu/drm/radeon/radeon_atpx_handler.c
··· 59 59 60 60 obj = (union acpi_object *)buffer.pointer; 61 61 memcpy(bios+offset, obj->buffer.pointer, obj->buffer.length); 62 + len = obj->buffer.length; 62 63 kfree(buffer.pointer); 63 - return obj->buffer.length; 64 + return len; 64 65 } 65 66 66 67 bool radeon_atrm_supported(struct pci_dev *pdev)
+4
drivers/gpu/drm/radeon/radeon_device.c
··· 883 883 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) 884 884 return 0; 885 885 886 + drm_kms_helper_poll_disable(dev); 887 + 886 888 /* turn off display hw */ 887 889 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 888 890 drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF); ··· 974 972 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 975 973 drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON); 976 974 } 975 + 976 + drm_kms_helper_poll_enable(dev); 977 977 return 0; 978 978 } 979 979
+1
drivers/gpu/drm/radeon/radeon_i2c.c
··· 958 958 i2c->rec = *rec; 959 959 i2c->adapter.owner = THIS_MODULE; 960 960 i2c->adapter.class = I2C_CLASS_DDC; 961 + i2c->adapter.dev.parent = &dev->pdev->dev; 961 962 i2c->dev = dev; 962 963 snprintf(i2c->adapter.name, sizeof(i2c->adapter.name), 963 964 "Radeon aux bus %s", name);
+1
drivers/hid/hid-hyperv.c
··· 548 548 struct mousevsc_dev *input_dev = hv_get_drvdata(dev); 549 549 550 550 vmbus_close(dev->channel); 551 + hid_hw_stop(input_dev->hid_device); 551 552 hid_destroy_device(input_dev->hid_device); 552 553 mousevsc_free_device(input_dev); 553 554
+4 -3
drivers/hid/hid-wacom.c
··· 531 531 wdata->battery.type = POWER_SUPPLY_TYPE_BATTERY; 532 532 wdata->battery.use_for_apm = 0; 533 533 534 - power_supply_powers(&wdata->battery, &hdev->dev); 535 534 536 535 ret = power_supply_register(&hdev->dev, &wdata->battery); 537 536 if (ret) { ··· 539 540 goto err_battery; 540 541 } 541 542 543 + power_supply_powers(&wdata->battery, &hdev->dev); 544 + 542 545 wdata->ac.properties = wacom_ac_props; 543 546 wdata->ac.num_properties = ARRAY_SIZE(wacom_ac_props); 544 547 wdata->ac.get_property = wacom_ac_get_property; ··· 548 547 wdata->ac.type = POWER_SUPPLY_TYPE_MAINS; 549 548 wdata->ac.use_for_apm = 0; 550 549 551 - power_supply_powers(&wdata->battery, &hdev->dev); 552 - 553 550 ret = power_supply_register(&hdev->dev, &wdata->ac); 554 551 if (ret) { 555 552 hid_warn(hdev, 556 553 "can't create ac battery attribute, err: %d\n", ret); 557 554 goto err_ac; 558 555 } 556 + 557 + power_supply_powers(&wdata->ac, &hdev->dev); 559 558 #endif 560 559 return 0; 561 560
+2 -2
drivers/hid/hid-wiimote-core.c
··· 1226 1226 wdata->battery.type = POWER_SUPPLY_TYPE_BATTERY; 1227 1227 wdata->battery.use_for_apm = 0; 1228 1228 1229 - power_supply_powers(&wdata->battery, &hdev->dev); 1230 - 1231 1229 ret = power_supply_register(&wdata->hdev->dev, &wdata->battery); 1232 1230 if (ret) { 1233 1231 hid_err(hdev, "Cannot register battery device\n"); 1234 1232 goto err_battery; 1235 1233 } 1234 + 1235 + power_supply_powers(&wdata->battery, &hdev->dev); 1236 1236 1237 1237 ret = wiimote_leds_create(wdata); 1238 1238 if (ret)
+2 -2
drivers/hid/usbhid/hiddev.c
··· 922 922 struct hiddev *hiddev = hid->hiddev; 923 923 struct usbhid_device *usbhid = hid->driver_data; 924 924 925 + usb_deregister_dev(usbhid->intf, &hiddev_class); 926 + 925 927 mutex_lock(&hiddev->existancelock); 926 928 hiddev->exist = 0; 927 - 928 - usb_deregister_dev(usbhid->intf, &hiddev_class); 929 929 930 930 if (hiddev->open) { 931 931 mutex_unlock(&hiddev->existancelock);
+20 -3
drivers/hwmon/w83627ehf.c
··· 1920 1920 fan4min = 0; 1921 1921 fan5pin = 0; 1922 1922 } else if (sio_data->kind == nct6776) { 1923 - fan3pin = !(superio_inb(sio_data->sioreg, 0x24) & 0x40); 1924 - fan4pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x01); 1925 - fan5pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x02); 1923 + bool gpok = superio_inb(sio_data->sioreg, 0x27) & 0x80; 1924 + 1925 + superio_select(sio_data->sioreg, W83627EHF_LD_HWM); 1926 + regval = superio_inb(sio_data->sioreg, SIO_REG_ENABLE); 1927 + 1928 + if (regval & 0x80) 1929 + fan3pin = gpok; 1930 + else 1931 + fan3pin = !(superio_inb(sio_data->sioreg, 0x24) & 0x40); 1932 + 1933 + if (regval & 0x40) 1934 + fan4pin = gpok; 1935 + else 1936 + fan4pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x01); 1937 + 1938 + if (regval & 0x20) 1939 + fan5pin = gpok; 1940 + else 1941 + fan5pin = !!(superio_inb(sio_data->sioreg, 0x1C) & 0x02); 1942 + 1926 1943 fan4min = fan4pin; 1927 1944 } else if (sio_data->kind == w83667hg || sio_data->kind == w83667hg_b) { 1928 1945 fan3pin = 1;
+1 -1
drivers/i2c/busses/i2c-omap.c
··· 1018 1018 goto err_release_region; 1019 1019 } 1020 1020 1021 - match = of_match_device(omap_i2c_of_match, &pdev->dev); 1021 + match = of_match_device(of_match_ptr(omap_i2c_of_match), &pdev->dev); 1022 1022 if (match) { 1023 1023 u32 freq = 100000; /* default to 100000 Hz */ 1024 1024
+4 -1
drivers/infiniband/core/ucma.c
··· 808 808 return PTR_ERR(ctx); 809 809 810 810 if (cmd.conn_param.valid) { 811 - ctx->uid = cmd.uid; 812 811 ucma_copy_conn_param(&conn_param, &cmd.conn_param); 812 + mutex_lock(&file->mut); 813 813 ret = rdma_accept(ctx->cm_id, &conn_param); 814 + if (!ret) 815 + ctx->uid = cmd.uid; 816 + mutex_unlock(&file->mut); 814 817 } else 815 818 ret = rdma_accept(ctx->cm_id, NULL); 816 819
+1
drivers/infiniband/core/uverbs_cmd.c
··· 1485 1485 qp->event_handler = attr.event_handler; 1486 1486 qp->qp_context = attr.qp_context; 1487 1487 qp->qp_type = attr.qp_type; 1488 + atomic_set(&qp->usecnt, 0); 1488 1489 atomic_inc(&pd->usecnt); 1489 1490 atomic_inc(&attr.send_cq->usecnt); 1490 1491 if (attr.recv_cq)
+1 -1
drivers/infiniband/core/verbs.c
··· 421 421 qp->uobject = NULL; 422 422 qp->qp_type = qp_init_attr->qp_type; 423 423 424 + atomic_set(&qp->usecnt, 0); 424 425 if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) { 425 426 qp->event_handler = __ib_shared_qp_event_handler; 426 427 qp->qp_context = qp; ··· 431 430 qp->xrcd = qp_init_attr->xrcd; 432 431 atomic_inc(&qp_init_attr->xrcd->usecnt); 433 432 INIT_LIST_HEAD(&qp->open_list); 434 - atomic_set(&qp->usecnt, 0); 435 433 436 434 real_qp = qp; 437 435 qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,
+1 -1
drivers/infiniband/hw/ipath/ipath_fs.c
··· 89 89 error = ipathfs_mknod(parent->d_inode, *dentry, 90 90 mode, fops, data); 91 91 else 92 - error = PTR_ERR(dentry); 92 + error = PTR_ERR(*dentry); 93 93 mutex_unlock(&parent->d_inode->i_mutex); 94 94 95 95 return error;
+2 -5
drivers/infiniband/hw/mlx4/mad.c
··· 257 257 return IB_MAD_RESULT_SUCCESS; 258 258 259 259 /* 260 - * Don't process SMInfo queries or vendor-specific 261 - * MADs -- the SMA can't handle them. 260 + * Don't process SMInfo queries -- the SMA can't handle them. 262 261 */ 263 - if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO || 264 - ((in_mad->mad_hdr.attr_id & IB_SMP_ATTR_VENDOR_MASK) == 265 - IB_SMP_ATTR_VENDOR_MASK)) 262 + if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO) 266 263 return IB_MAD_RESULT_SUCCESS; 267 264 } else if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT || 268 265 in_mad->mad_hdr.mgmt_class == MLX4_IB_VENDOR_CLASS1 ||
+1 -1
drivers/infiniband/hw/nes/nes.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved. 4 4 * 5 5 * This software is available to you under a choice of one of two
+1 -1
drivers/infiniband/hw/nes/nes.h
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved. 4 4 * 5 5 * This software is available to you under a choice of one of two
+7 -3
drivers/infiniband/hw/nes/nes_cm.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU ··· 233 233 u8 *start_ptr = &start_addr; 234 234 u8 **start_buff = &start_ptr; 235 235 u16 buff_len = 0; 236 + struct ietf_mpa_v1 *mpa_frame; 236 237 237 238 skb = dev_alloc_skb(MAX_CM_BUFFER); 238 239 if (!skb) { ··· 243 242 244 243 /* send an MPA reject frame */ 245 244 cm_build_mpa_frame(cm_node, start_buff, &buff_len, NULL, MPA_KEY_REPLY); 245 + mpa_frame = (struct ietf_mpa_v1 *)*start_buff; 246 + mpa_frame->flags |= IETF_MPA_FLAGS_REJECT; 246 247 form_cm_frame(skb, cm_node, NULL, 0, *start_buff, buff_len, SET_ACK | SET_FIN); 247 248 248 249 cm_node->state = NES_CM_STATE_FIN_WAIT1; ··· 1363 1360 if (!memcmp(nesadapter->arp_table[arpindex].mac_addr, 1364 1361 neigh->ha, ETH_ALEN)) { 1365 1362 /* Mac address same as in nes_arp_table */ 1366 - ip_rt_put(rt); 1367 - return rc; 1363 + goto out; 1368 1364 } 1369 1365 1370 1366 nes_manage_arp_cache(nesvnic->netdev, ··· 1379 1377 neigh_event_send(neigh, NULL); 1380 1378 } 1381 1379 } 1380 + 1381 + out: 1382 1382 rcu_read_unlock(); 1383 1383 ip_rt_put(rt); 1384 1384 return rc;
+1 -1
drivers/infiniband/hw/nes/nes_cm.h
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_context.h
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_hw.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_hw.h
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_mgt.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel-NE, Inc. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel-NE, Inc. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_mgt.h
··· 1 1 /* 2 - * Copyright (c) 2010 Intel-NE, Inc. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel-NE, Inc. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_nic.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
drivers/infiniband/hw/nes/nes_user.h
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * Copyright (c) 2005 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005 Cisco Systems. All rights reserved. 5 5 * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
+1 -1
drivers/infiniband/hw/nes/nes_utils.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU
+4 -2
drivers/infiniband/hw/nes/nes_verbs.c
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU ··· 3428 3428 NES_IWARP_SQ_FMR_WQE_LENGTH_LOW_IDX, 3429 3429 ib_wr->wr.fast_reg.length); 3430 3430 set_wqe_32bit_value(wqe->wqe_words, 3431 + NES_IWARP_SQ_FMR_WQE_LENGTH_HIGH_IDX, 0); 3432 + set_wqe_32bit_value(wqe->wqe_words, 3431 3433 NES_IWARP_SQ_FMR_WQE_MR_STAG_IDX, 3432 3434 ib_wr->wr.fast_reg.rkey); 3433 3435 /* Set page size: */ ··· 3726 3724 entry->opcode = IB_WC_SEND; 3727 3725 break; 3728 3726 case NES_IWARP_SQ_OP_LOCINV: 3729 - entry->opcode = IB_WR_LOCAL_INV; 3727 + entry->opcode = IB_WC_LOCAL_INV; 3730 3728 break; 3731 3729 case NES_IWARP_SQ_OP_FAST_REG: 3732 3730 entry->opcode = IB_WC_FAST_REG_MR;
+1 -1
drivers/infiniband/hw/nes/nes_verbs.h
··· 1 1 /* 2 - * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved. 2 + * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 3 3 * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved. 4 4 * 5 5 * This software is available to you under a choice of one of two
+1 -1
drivers/infiniband/hw/qib/qib_iba6120.c
··· 2105 2105 dd->cspec->dummy_hdrq = dma_alloc_coherent(&dd->pcidev->dev, 2106 2106 dd->rcd[0]->rcvhdrq_size, 2107 2107 &dd->cspec->dummy_hdrq_phys, 2108 - GFP_KERNEL | __GFP_COMP); 2108 + GFP_ATOMIC | __GFP_COMP); 2109 2109 if (!dd->cspec->dummy_hdrq) { 2110 2110 qib_devinfo(dd->pcidev, "Couldn't allocate dummy hdrq\n"); 2111 2111 /* fallback to just 0'ing */
+1 -1
drivers/infiniband/hw/qib/qib_pcie.c
··· 560 560 * BIOS may not set PCIe bus-utilization parameters for best performance. 561 561 * Check and optionally adjust them to maximize our throughput. 562 562 */ 563 - static int qib_pcie_caps = 0x51; 563 + static int qib_pcie_caps; 564 564 module_param_named(pcie_caps, qib_pcie_caps, int, S_IRUGO); 565 565 MODULE_PARM_DESC(pcie_caps, "Max PCIe tuning: Payload (0..3), ReadReq (4..7)"); 566 566
+7 -10
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 69 69 */ 70 70 71 71 static u64 srpt_service_guid; 72 - static spinlock_t srpt_dev_lock; /* Protects srpt_dev_list. */ 73 - static struct list_head srpt_dev_list; /* List of srpt_device structures. */ 72 + static DEFINE_SPINLOCK(srpt_dev_lock); /* Protects srpt_dev_list. */ 73 + static LIST_HEAD(srpt_dev_list); /* List of srpt_device structures. */ 74 74 75 75 static unsigned srp_max_req_size = DEFAULT_MAX_REQ_SIZE; 76 76 module_param(srp_max_req_size, int, 0444); ··· 687 687 while (--i >= 0) 688 688 srpt_free_ioctx(sdev, ring[i], dma_size, dir); 689 689 kfree(ring); 690 + ring = NULL; 690 691 out: 691 692 return ring; 692 693 } ··· 2596 2595 } 2597 2596 2598 2597 ch->sess = transport_init_session(); 2599 - if (!ch->sess) { 2598 + if (IS_ERR(ch->sess)) { 2600 2599 rej->reason = __constant_cpu_to_be32( 2601 2600 SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2602 2601 pr_debug("Failed to create session\n"); ··· 3265 3264 for (i = 0; i < sdev->srq_size; ++i) 3266 3265 srpt_post_recv(sdev, sdev->ioctx_ring[i]); 3267 3266 3268 - WARN_ON(sdev->device->phys_port_cnt 3269 - > sizeof(sdev->port)/sizeof(sdev->port[0])); 3267 + WARN_ON(sdev->device->phys_port_cnt > ARRAY_SIZE(sdev->port)); 3270 3268 3271 3269 for (i = 1; i <= sdev->device->phys_port_cnt; i++) { 3272 3270 sport = &sdev->port[i - 1]; ··· 4010 4010 goto out; 4011 4011 } 4012 4012 4013 - spin_lock_init(&srpt_dev_lock); 4014 - INIT_LIST_HEAD(&srpt_dev_list); 4015 - 4016 - ret = -ENODEV; 4017 4013 srpt_target = target_fabric_configfs_init(THIS_MODULE, "srpt"); 4018 - if (!srpt_target) { 4014 + if (IS_ERR(srpt_target)) { 4019 4015 printk(KERN_ERR "couldn't register\n"); 4016 + ret = PTR_ERR(srpt_target); 4020 4017 goto out; 4021 4018 } 4022 4019
-1
drivers/infiniband/ulp/srpt/ib_srpt.h
··· 35 35 #ifndef IB_SRPT_H 36 36 #define IB_SRPT_H 37 37 38 - #include <linux/version.h> 39 38 #include <linux/types.h> 40 39 #include <linux/list.h> 41 40 #include <linux/wait.h>
+1 -1
drivers/input/evdev.c
··· 386 386 struct evdev_client *client = file->private_data; 387 387 struct evdev *evdev = client->evdev; 388 388 struct input_event event; 389 - int retval; 389 + int retval = 0; 390 390 391 391 if (count < input_event_size()) 392 392 return -EINVAL;
+1 -3
drivers/input/keyboard/twl4030_keypad.c
··· 34 34 #include <linux/i2c/twl.h> 35 35 #include <linux/slab.h> 36 36 37 - 38 37 /* 39 38 * The TWL4030 family chips include a keypad controller that supports 40 39 * up to an 8x8 switch matrix. The controller can issue system wakeup ··· 301 302 if (twl4030_kpwrite_u8(kp, i, KEYP_DEB) < 0) 302 303 return -EIO; 303 304 304 - /* Set timeout period to 100 ms */ 305 + /* Set timeout period to 200 ms */ 305 306 i = KEYP_PERIOD_US(200000, PTV_PRESCALER); 306 307 if (twl4030_kpwrite_u8(kp, (i & 0xFF), KEYP_TIMEOUT_L) < 0) 307 308 return -EIO; ··· 465 466 MODULE_DESCRIPTION("TWL4030 Keypad Driver"); 466 467 MODULE_LICENSE("GPL"); 467 468 MODULE_ALIAS("platform:twl4030_keypad"); 468 -
+7
drivers/input/serio/i8042-x86ia64io.h
··· 512 512 DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 1720"), 513 513 }, 514 514 }, 515 + { 516 + /* Lenovo Ideapad U455 */ 517 + .matches = { 518 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 519 + DMI_MATCH(DMI_PRODUCT_NAME, "20046"), 520 + }, 521 + }, 515 522 { } 516 523 }; 517 524
+9 -6
drivers/input/serio/serio_raw.c
··· 164 164 struct serio_raw_client *client = file->private_data; 165 165 struct serio_raw *serio_raw = client->serio_raw; 166 166 char uninitialized_var(c); 167 - ssize_t retval = 0; 167 + ssize_t read = 0; 168 + int retval; 168 169 169 170 if (serio_raw->dead) 170 171 return -ENODEV; ··· 181 180 if (serio_raw->dead) 182 181 return -ENODEV; 183 182 184 - while (retval < count && serio_raw_fetch_byte(serio_raw, &c)) { 185 - if (put_user(c, buffer++)) 186 - return -EFAULT; 187 - retval++; 183 + while (read < count && serio_raw_fetch_byte(serio_raw, &c)) { 184 + if (put_user(c, buffer++)) { 185 + retval = -EFAULT; 186 + break; 187 + } 188 + read++; 188 189 } 189 190 190 - return retval; 191 + return read ?: retval; 191 192 } 192 193 193 194 static ssize_t serio_raw_write(struct file *file, const char __user *buffer,
+3
drivers/iommu/amd_iommu.c
··· 2863 2863 2864 2864 for_each_pci_dev(pdev) { 2865 2865 if (!check_device(&pdev->dev)) { 2866 + 2867 + iommu_ignore_device(&pdev->dev); 2868 + 2866 2869 unhandled += 1; 2867 2870 continue; 2868 2871 }
+1 -6
drivers/iommu/msm_iommu.c
··· 482 482 483 483 priv = domain->priv; 484 484 485 - if (!priv) { 486 - ret = -ENODEV; 485 + if (!priv) 487 486 goto fail; 488 - } 489 487 490 488 fl_table = priv->pgtable; 491 489 492 490 if (len != SZ_16M && len != SZ_1M && 493 491 len != SZ_64K && len != SZ_4K) { 494 492 pr_debug("Bad length: %d\n", len); 495 - ret = -EINVAL; 496 493 goto fail; 497 494 } 498 495 499 496 if (!fl_table) { 500 497 pr_debug("Null page table\n"); 501 - ret = -EINVAL; 502 498 goto fail; 503 499 } 504 500 ··· 503 507 504 508 if (*fl_pte == 0) { 505 509 pr_debug("First level PTE is 0\n"); 506 - ret = -ENODEV; 507 510 goto fail; 508 511 } 509 512
+2 -2
drivers/leds/leds-lm3530.c
··· 164 164 165 165 if (drvdata->mode == LM3530_BL_MODE_ALS) { 166 166 if (pltfm->als_vmax == 0) { 167 - pltfm->als_vmin = als_vmin = 0; 168 - pltfm->als_vmin = als_vmax = LM3530_ALS_WINDOW_mV; 167 + pltfm->als_vmin = 0; 168 + pltfm->als_vmax = LM3530_ALS_WINDOW_mV; 169 169 } 170 170 171 171 als_vmin = pltfm->als_vmin;
+9 -3
drivers/md/dm-raid.c
··· 56 56 struct raid_set { 57 57 struct dm_target *ti; 58 58 59 - uint64_t print_flags; 59 + uint32_t bitmap_loaded; 60 + uint32_t print_flags; 60 61 61 62 struct mddev md; 62 63 struct raid_type *raid_type; ··· 1086 1085 raid_param_cnt += 2; 1087 1086 } 1088 1087 1089 - raid_param_cnt += (hweight64(rs->print_flags & ~DMPF_REBUILD) * 2); 1088 + raid_param_cnt += (hweight32(rs->print_flags & ~DMPF_REBUILD) * 2); 1090 1089 if (rs->print_flags & (DMPF_SYNC | DMPF_NOSYNC)) 1091 1090 raid_param_cnt--; 1092 1091 ··· 1198 1197 { 1199 1198 struct raid_set *rs = ti->private; 1200 1199 1201 - bitmap_load(&rs->md); 1200 + if (!rs->bitmap_loaded) { 1201 + bitmap_load(&rs->md); 1202 + rs->bitmap_loaded = 1; 1203 + } else 1204 + md_wakeup_thread(rs->md.thread); 1205 + 1202 1206 mddev_resume(&rs->md); 1203 1207 } 1204 1208
+3 -2
drivers/md/md.c
··· 7333 7333 printk(KERN_INFO 7334 7334 "md: checkpointing %s of %s.\n", 7335 7335 desc, mdname(mddev)); 7336 - mddev->recovery_cp = mddev->curr_resync; 7336 + mddev->recovery_cp = 7337 + mddev->curr_resync_completed; 7337 7338 } 7338 7339 } else 7339 7340 mddev->recovery_cp = MaxSector; ··· 7352 7351 rcu_read_unlock(); 7353 7352 } 7354 7353 } 7354 + skip: 7355 7355 set_bit(MD_CHANGE_DEVS, &mddev->flags); 7356 7356 7357 - skip: 7358 7357 if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery)) { 7359 7358 /* We completed so min/max setting can be forgotten if used. */ 7360 7359 if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
+77 -51
drivers/mfd/twl6040-core.c
··· 282 282 /* Default PLL configuration after power up */ 283 283 twl6040->pll = TWL6040_SYSCLK_SEL_LPPLL; 284 284 twl6040->sysclk = 19200000; 285 + twl6040->mclk = 32768; 285 286 } else { 286 287 /* already powered-down */ 287 288 if (!twl6040->power_count) { ··· 306 305 twl6040_power_down(twl6040); 307 306 } 308 307 twl6040->sysclk = 0; 308 + twl6040->mclk = 0; 309 309 } 310 310 311 311 out: ··· 326 324 hppllctl = twl6040_reg_read(twl6040, TWL6040_REG_HPPLLCTL); 327 325 lppllctl = twl6040_reg_read(twl6040, TWL6040_REG_LPPLLCTL); 328 326 327 + /* Force full reconfiguration when switching between PLLs */ 328 + if (pll_id != twl6040->pll) { 329 + twl6040->sysclk = 0; 330 + twl6040->mclk = 0; 331 + } 332 + 329 333 switch (pll_id) { 330 334 case TWL6040_SYSCLK_SEL_LPPLL: 331 335 /* low-power PLL divider */ 332 - switch (freq_out) { 333 - case 17640000: 334 - lppllctl |= TWL6040_LPLLFIN; 335 - break; 336 - case 19200000: 337 - lppllctl &= ~TWL6040_LPLLFIN; 338 - break; 339 - default: 340 - dev_err(twl6040->dev, 341 - "freq_out %d not supported\n", freq_out); 342 - ret = -EINVAL; 343 - goto pll_out; 336 + /* Change the sysclk configuration only if it has been changed */ 337 + if (twl6040->sysclk != freq_out) { 338 + switch (freq_out) { 339 + case 17640000: 340 + lppllctl |= TWL6040_LPLLFIN; 341 + break; 342 + case 19200000: 343 + lppllctl &= ~TWL6040_LPLLFIN; 344 + break; 345 + default: 346 + dev_err(twl6040->dev, 347 + "freq_out %d not supported\n", 348 + freq_out); 349 + ret = -EINVAL; 350 + goto pll_out; 351 + } 352 + twl6040_reg_write(twl6040, TWL6040_REG_LPPLLCTL, 353 + lppllctl); 344 354 } 345 - twl6040_reg_write(twl6040, TWL6040_REG_LPPLLCTL, lppllctl); 355 + 356 + /* The PLL in use has not been changed, we can exit */ 357 + if (twl6040->pll == pll_id) 358 + break; 346 359 347 360 switch (freq_in) { 348 361 case 32768: ··· 388 371 goto pll_out; 389 372 } 390 373 391 - hppllctl &= ~TWL6040_MCLK_MSK; 374 + if (twl6040->mclk != freq_in) { 375 + hppllctl &= ~TWL6040_MCLK_MSK; 392 376 393 - switch (freq_in) { 394 - case 12000000: 395 - /* PLL enabled, active mode */ 396 - hppllctl |= TWL6040_MCLK_12000KHZ | 397 - TWL6040_HPLLENA; 398 - break; 399 - case 19200000: 377 + switch (freq_in) { 378 + case 12000000: 379 + /* PLL enabled, active mode */ 380 + hppllctl |= TWL6040_MCLK_12000KHZ | 381 + TWL6040_HPLLENA; 382 + break; 383 + case 19200000: 384 + /* 385 + * PLL disabled 386 + * (enable PLL if MCLK jitter quality 387 + * doesn't meet specification) 388 + */ 389 + hppllctl |= TWL6040_MCLK_19200KHZ; 390 + break; 391 + case 26000000: 392 + /* PLL enabled, active mode */ 393 + hppllctl |= TWL6040_MCLK_26000KHZ | 394 + TWL6040_HPLLENA; 395 + break; 396 + case 38400000: 397 + /* PLL enabled, active mode */ 398 + hppllctl |= TWL6040_MCLK_38400KHZ | 399 + TWL6040_HPLLENA; 400 + break; 401 + default: 402 + dev_err(twl6040->dev, 403 + "freq_in %d not supported\n", freq_in); 404 + ret = -EINVAL; 405 + goto pll_out; 406 + } 407 + 400 408 /* 401 - * PLL disabled 402 - * (enable PLL if MCLK jitter quality 403 - * doesn't meet specification) 409 + * enable clock slicer to ensure input waveform is 410 + * square 404 411 */ 405 - hppllctl |= TWL6040_MCLK_19200KHZ; 406 - break; 407 - case 26000000: 408 - /* PLL enabled, active mode */ 409 - hppllctl |= TWL6040_MCLK_26000KHZ | 410 - TWL6040_HPLLENA; 411 - break; 412 - case 38400000: 413 - /* PLL enabled, active mode */ 414 - hppllctl |= TWL6040_MCLK_38400KHZ | 415 - TWL6040_HPLLENA; 416 - break; 417 - default: 418 - dev_err(twl6040->dev, 419 - "freq_in %d not supported\n", freq_in); 420 - ret = -EINVAL; 421 - goto pll_out; 412 + hppllctl |= TWL6040_HPLLSQRENA; 413 + 414 + twl6040_reg_write(twl6040, TWL6040_REG_HPPLLCTL, 415 + hppllctl); 416 + usleep_range(500, 700); 417 + lppllctl |= TWL6040_HPLLSEL; 418 + twl6040_reg_write(twl6040, TWL6040_REG_LPPLLCTL, 419 + lppllctl); 420 + lppllctl &= ~TWL6040_LPLLENA; 421 + twl6040_reg_write(twl6040, TWL6040_REG_LPPLLCTL, 422 + lppllctl); 422 423 } 423 - 424 - /* enable clock slicer to ensure input waveform is square */ 425 - hppllctl |= TWL6040_HPLLSQRENA; 426 - 427 - twl6040_reg_write(twl6040, TWL6040_REG_HPPLLCTL, hppllctl); 428 - usleep_range(500, 700); 429 - lppllctl |= TWL6040_HPLLSEL; 430 - twl6040_reg_write(twl6040, TWL6040_REG_LPPLLCTL, lppllctl); 431 - lppllctl &= ~TWL6040_LPLLENA; 432 - twl6040_reg_write(twl6040, TWL6040_REG_LPPLLCTL, lppllctl); 433 424 break; 434 425 default: 435 426 dev_err(twl6040->dev, "unknown pll id %d\n", pll_id); ··· 446 421 } 447 422 448 423 twl6040->sysclk = freq_out; 424 + twl6040->mclk = freq_in; 449 425 twl6040->pll = pll_id; 450 426 451 427 pll_out:
+5 -1
drivers/misc/lkdtm.c
··· 354 354 static void lkdtm_handler(void) 355 355 { 356 356 unsigned long flags; 357 + bool do_it = false; 357 358 358 359 spin_lock_irqsave(&count_lock, flags); 359 360 count--; ··· 362 361 cp_name_to_str(cpoint), cp_type_to_str(cptype), count); 363 362 364 363 if (count == 0) { 365 - lkdtm_do_action(cptype); 364 + do_it = true; 366 365 count = cpoint_count; 367 366 } 368 367 spin_unlock_irqrestore(&count_lock, flags); 368 + 369 + if (do_it) 370 + lkdtm_do_action(cptype); 369 371 } 370 372 371 373 static int lkdtm_register_cpoint(enum cname which)
+1 -1
drivers/mtd/mtdcore.c
··· 119 119 { 120 120 struct mtd_info *mtd = dev_get_drvdata(dev); 121 121 122 - return mtd_suspend(mtd); 122 + return mtd ? mtd_suspend(mtd) : 0; 123 123 } 124 124 125 125 static int mtd_cls_resume(struct device *dev)
+41 -4
drivers/mtd/nand/atmel_nand.c
··· 161 161 !!host->board->rdy_pin_active_low; 162 162 } 163 163 164 + /* 165 + * Minimal-overhead PIO for data access. 166 + */ 167 + static void atmel_read_buf8(struct mtd_info *mtd, u8 *buf, int len) 168 + { 169 + struct nand_chip *nand_chip = mtd->priv; 170 + 171 + __raw_readsb(nand_chip->IO_ADDR_R, buf, len); 172 + } 173 + 174 + static void atmel_read_buf16(struct mtd_info *mtd, u8 *buf, int len) 175 + { 176 + struct nand_chip *nand_chip = mtd->priv; 177 + 178 + __raw_readsw(nand_chip->IO_ADDR_R, buf, len / 2); 179 + } 180 + 181 + static void atmel_write_buf8(struct mtd_info *mtd, const u8 *buf, int len) 182 + { 183 + struct nand_chip *nand_chip = mtd->priv; 184 + 185 + __raw_writesb(nand_chip->IO_ADDR_W, buf, len); 186 + } 187 + 188 + static void atmel_write_buf16(struct mtd_info *mtd, const u8 *buf, int len) 189 + { 190 + struct nand_chip *nand_chip = mtd->priv; 191 + 192 + __raw_writesw(nand_chip->IO_ADDR_W, buf, len / 2); 193 + } 194 + 164 195 static void dma_complete_func(void *completion) 165 196 { 166 197 complete(completion); ··· 266 235 static void atmel_read_buf(struct mtd_info *mtd, u8 *buf, int len) 267 236 { 268 237 struct nand_chip *chip = mtd->priv; 238 + struct atmel_nand_host *host = chip->priv; 269 239 270 240 if (use_dma && len > mtd->oobsize) 271 241 /* only use DMA for bigger than oob size: better performances */ 272 242 if (atmel_nand_dma_op(mtd, buf, len, 1) == 0) 273 243 return; 274 244 275 - /* if no DMA operation possible, use PIO */ 276 - memcpy_fromio(buf, chip->IO_ADDR_R, len); 245 + if (host->board->bus_width_16) 246 + atmel_read_buf16(mtd, buf, len); 247 + else 248 + atmel_read_buf8(mtd, buf, len); 277 249 } 278 250 279 251 static void atmel_write_buf(struct mtd_info *mtd, const u8 *buf, int len) 280 252 { 281 253 struct nand_chip *chip = mtd->priv; 254 + struct atmel_nand_host *host = chip->priv; 282 255 283 256 if (use_dma && len > mtd->oobsize) 284 257 /* only use DMA for bigger than oob size: better performances */ 285 258 if (atmel_nand_dma_op(mtd, (void *)buf, len, 0) == 0) 286 259 return; 287 260 288 - /* if no DMA operation possible, use PIO */ 289 - memcpy_toio(chip->IO_ADDR_W, buf, len); 261 + if (host->board->bus_width_16) 262 + atmel_write_buf16(mtd, buf, len); 263 + else 264 + atmel_write_buf8(mtd, buf, len); 290 265 } 291 266 292 267 /*
+14 -4
drivers/mtd/nand/gpmi-nand/gpmi-lib.c
··· 69 69 * [1] enable the module. 70 70 * [2] reset the module. 71 71 * 72 - * In most of the cases, it's ok. But there is a hardware bug in the BCH block. 72 + * In most of the cases, it's ok. 73 + * But in MX23, there is a hardware bug in the BCH block (see erratum #2847). 73 74 * If you try to soft reset the BCH block, it becomes unusable until 74 75 * the next hard reset. This case occurs in the NAND boot mode. When the board 75 76 * boots by NAND, the ROM of the chip will initialize the BCH blocks itself. 76 77 * So If the driver tries to reset the BCH again, the BCH will not work anymore. 77 - * You will see a DMA timeout in this case. 78 + * You will see a DMA timeout in this case. The bug has been fixed 79 + * in the following chips, such as MX28. 78 80 * 79 81 * To avoid this bug, just add a new parameter `just_enable` for 80 82 * the mxs_reset_block(), and rewrite it here. 81 83 */ 82 - int gpmi_reset_block(void __iomem *reset_addr, bool just_enable) 84 + static int gpmi_reset_block(void __iomem *reset_addr, bool just_enable) 83 85 { 84 86 int ret; 85 87 int timeout = 0x400; ··· 208 206 if (ret) 209 207 goto err_out; 210 208 211 - ret = gpmi_reset_block(r->bch_regs, true); 209 + /* 210 + * Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this 211 + * chip, otherwise it will lock up. So we skip resetting BCH on the MX23. 212 + * On the other hand, the MX28 needs the reset, because one case has been 213 + * seen where the BCH produced ECC errors constantly after 10000 214 + * consecutive reboots. The latter case has not been seen on the MX23 yet, 215 + * still we don't know if it could happen there as well. 216 + */ 217 + ret = gpmi_reset_block(r->bch_regs, GPMI_IS_MX23(this)); 212 218 if (ret) 213 219 goto err_out; 214 220
+1 -1
drivers/mtd/nand/nand_base.c
··· 2588 2588 instr->state = MTD_ERASING; 2589 2589 2590 2590 while (len) { 2591 - /* Heck if we have a bad block, we do not erase bad blocks! */ 2591 + /* Check if we have a bad block, we do not erase bad blocks! */ 2592 2592 if (nand_block_checkbad(mtd, ((loff_t) page) << 2593 2593 chip->page_shift, 0, allowbbt)) { 2594 2594 pr_warn("%s: attempt to erase a bad block at page 0x%08x\n",
+1 -3
drivers/pcmcia/ds.c
··· 1269 1269 1270 1270 static int pcmcia_bus_early_resume(struct pcmcia_socket *skt) 1271 1271 { 1272 - if (!verify_cis_cache(skt)) { 1273 - pcmcia_put_socket(skt); 1272 + if (!verify_cis_cache(skt)) 1274 1273 return 0; 1275 - } 1276 1274 1277 1275 dev_dbg(&skt->dev, "cis mismatch - different card\n"); 1278 1276
+1 -1
drivers/spi/Kconfig
··· 299 299 300 300 config SPI_S3C64XX 301 301 tristate "Samsung S3C64XX series type SPI" 302 - depends on (ARCH_S3C64XX || ARCH_S5P64X0) 302 + depends on (ARCH_S3C64XX || ARCH_S5P64X0 || ARCH_EXYNOS) 303 303 select S3C64XX_DMA if ARCH_S3C64XX 304 304 help 305 305 SPI driver for Samsung S3C64XX and newer SoCs.
+3 -3
drivers/spi/spi-topcliff-pch.c
··· 1720 1720 1721 1721 #endif 1722 1722 1723 - static struct pci_driver pch_spi_pcidev = { 1723 + static struct pci_driver pch_spi_pcidev_driver = { 1724 1724 .name = "pch_spi", 1725 1725 .id_table = pch_spi_pcidev_id, 1726 1726 .probe = pch_spi_probe, ··· 1736 1736 if (ret) 1737 1737 return ret; 1738 1738 1739 - ret = pci_register_driver(&pch_spi_pcidev); 1739 + ret = pci_register_driver(&pch_spi_pcidev_driver); 1740 1740 if (ret) 1741 1741 return ret; 1742 1742 ··· 1746 1746 1747 1747 static void __exit pch_spi_exit(void) 1748 1748 { 1749 - pci_unregister_driver(&pch_spi_pcidev); 1749 + pci_unregister_driver(&pch_spi_pcidev_driver); 1750 1750 platform_driver_unregister(&pch_spi_pd_driver); 1751 1751 } 1752 1752 module_exit(pch_spi_exit);
+1
drivers/staging/media/go7007/go7007-usb.c
··· 1279 1279 }; 1280 1280 1281 1281 module_usb_driver(go7007_usb_driver); 1282 + MODULE_LICENSE("GPL v2");
+34 -5
drivers/target/iscsi/iscsi_target.c
··· 1061 1061 if (ret < 0) 1062 1062 return iscsit_add_reject_from_cmd( 1063 1063 ISCSI_REASON_BOOKMARK_NO_RESOURCES, 1064 - 1, 1, buf, cmd); 1064 + 1, 0, buf, cmd); 1065 1065 /* 1066 1066 * Check the CmdSN against ExpCmdSN/MaxCmdSN here if 1067 1067 * the Immediate Bit is not set, and no Immediate ··· 3164 3164 return 0; 3165 3165 } 3166 3166 3167 + static bool iscsit_check_inaddr_any(struct iscsi_np *np) 3168 + { 3169 + bool ret = false; 3170 + 3171 + if (np->np_sockaddr.ss_family == AF_INET6) { 3172 + const struct sockaddr_in6 sin6 = { 3173 + .sin6_addr = IN6ADDR_ANY_INIT }; 3174 + struct sockaddr_in6 *sock_in6 = 3175 + (struct sockaddr_in6 *)&np->np_sockaddr; 3176 + 3177 + if (!memcmp(sock_in6->sin6_addr.s6_addr, 3178 + sin6.sin6_addr.s6_addr, 16)) 3179 + ret = true; 3180 + } else { 3181 + struct sockaddr_in * sock_in = 3182 + (struct sockaddr_in *)&np->np_sockaddr; 3183 + 3184 + if (sock_in->sin_addr.s_addr == INADDR_ANY) 3185 + ret = true; 3186 + } 3187 + 3188 + return ret; 3189 + } 3190 + 3167 3191 static int iscsit_build_sendtargets_response(struct iscsi_cmd *cmd) 3168 3192 { 3169 3193 char *payload = NULL; ··· 3237 3213 spin_lock(&tpg->tpg_np_lock); 3238 3214 list_for_each_entry(tpg_np, &tpg->tpg_gnp_list, 3239 3215 tpg_np_list) { 3216 + struct iscsi_np *np = tpg_np->tpg_np; 3217 + bool inaddr_any = iscsit_check_inaddr_any(np); 3218 + 3240 3219 len = sprintf(buf, "TargetAddress=" 3241 3220 "%s%s%s:%hu,%hu", 3242 - (tpg_np->tpg_np->np_sockaddr.ss_family == AF_INET6) ? 3243 - "[" : "", tpg_np->tpg_np->np_ip, 3244 - (tpg_np->tpg_np->np_sockaddr.ss_family == AF_INET6) ? 3245 - "]" : "", tpg_np->tpg_np->np_port, 3221 + (np->np_sockaddr.ss_family == AF_INET6) ? 3222 + "[" : "", (inaddr_any == false) ? 3223 + np->np_ip : conn->local_ip, 3224 + (np->np_sockaddr.ss_family == AF_INET6) ? 3225 + "]" : "", (inaddr_any == false) ? 3226 + np->np_port : conn->local_port, 3246 3227 tpg->tpgt); 3247 3228 len += 1; 3248 3229
+1
drivers/target/iscsi/iscsi_target_configfs.c
··· 21 21 22 22 #include <linux/configfs.h> 23 23 #include <linux/export.h> 24 + #include <linux/inet.h> 24 25 #include <target/target_core_base.h> 25 26 #include <target/target_core_fabric.h> 26 27 #include <target/target_core_fabric_configfs.h>
+4 -2
drivers/target/iscsi/iscsi_target_core.h
··· 508 508 u16 cid; 509 509 /* Remote TCP Port */ 510 510 u16 login_port; 511 + u16 local_port; 511 512 int net_size; 512 513 u32 auth_id; 513 514 #define CONNFLAG_SCTP_STRUCT_FILE 0x01 ··· 528 527 unsigned char bad_hdr[ISCSI_HDR_LEN]; 529 528 #define IPV6_ADDRESS_SPACE 48 530 529 unsigned char login_ip[IPV6_ADDRESS_SPACE]; 530 + unsigned char local_ip[IPV6_ADDRESS_SPACE]; 531 531 int conn_usage_count; 532 532 int conn_waiting_on_uc; 533 533 atomic_t check_immediate_queue; ··· 563 561 struct hash_desc conn_tx_hash; 564 562 /* Used for scheduling TX and RX connection kthreads */ 565 563 cpumask_var_t conn_cpumask; 566 - int conn_rx_reset_cpumask:1; 567 - int conn_tx_reset_cpumask:1; 564 + unsigned int conn_rx_reset_cpumask:1; 565 + unsigned int conn_tx_reset_cpumask:1; 568 566 /* list_head of struct iscsi_cmd for this connection */ 569 567 struct list_head conn_cmd_list; 570 568 struct list_head immed_queue_list;
+2 -2
drivers/target/iscsi/iscsi_target_erl1.c
··· 1238 1238 { 1239 1239 struct iscsi_conn *conn = cmd->conn; 1240 1240 struct iscsi_session *sess = conn->sess; 1241 - struct iscsi_node_attrib *na = na = iscsit_tpg_get_node_attrib(sess); 1241 + struct iscsi_node_attrib *na = iscsit_tpg_get_node_attrib(sess); 1242 1242 1243 1243 spin_lock_bh(&cmd->dataout_timeout_lock); 1244 1244 if (!(cmd->dataout_timer_flags & ISCSI_TF_RUNNING)) { ··· 1261 1261 struct iscsi_conn *conn) 1262 1262 { 1263 1263 struct iscsi_session *sess = conn->sess; 1264 - struct iscsi_node_attrib *na = na = iscsit_tpg_get_node_attrib(sess); 1264 + struct iscsi_node_attrib *na = iscsit_tpg_get_node_attrib(sess); 1265 1265 1266 1266 if (cmd->dataout_timer_flags & ISCSI_TF_RUNNING) 1267 1267 return;
+35 -4
drivers/target/iscsi/iscsi_target_login.c
··· 615 615 } 616 616 617 617 pr_debug("iSCSI Login successful on CID: %hu from %s to" 618 - " %s:%hu,%hu\n", conn->cid, conn->login_ip, np->np_ip, 619 - np->np_port, tpg->tpgt); 618 + " %s:%hu,%hu\n", conn->cid, conn->login_ip, 619 + conn->local_ip, conn->local_port, tpg->tpgt); 620 620 621 621 list_add_tail(&conn->conn_list, &sess->sess_conn_list); 622 622 atomic_inc(&sess->nconn); ··· 658 658 sess->session_state = TARG_SESS_STATE_LOGGED_IN; 659 659 660 660 pr_debug("iSCSI Login successful on CID: %hu from %s to %s:%hu,%hu\n", 661 - conn->cid, conn->login_ip, np->np_ip, np->np_port, tpg->tpgt); 661 + conn->cid, conn->login_ip, conn->local_ip, conn->local_port, 662 + tpg->tpgt); 662 663 663 664 spin_lock_bh(&sess->conn_lock); 664 665 list_add_tail(&conn->conn_list, &sess->sess_conn_list); ··· 838 837 (char *)&opt, sizeof(opt)); 839 838 if (ret < 0) { 840 839 pr_err("kernel_setsockopt() for SO_REUSEADDR" 840 + " failed\n"); 841 + goto fail; 842 + } 843 + 844 + ret = kernel_setsockopt(sock, IPPROTO_IP, IP_FREEBIND, 845 + (char *)&opt, sizeof(opt)); 846 + if (ret < 0) { 847 + pr_err("kernel_setsockopt() for IP_FREEBIND" 841 848 " failed\n"); 842 849 goto fail; 843 850 } ··· 1029 1020 snprintf(conn->login_ip, sizeof(conn->login_ip), "%pI6c", 1030 1021 &sock_in6.sin6_addr.in6_u); 1031 1022 conn->login_port = ntohs(sock_in6.sin6_port); 1023 + 1024 + if (conn->sock->ops->getname(conn->sock, 1025 + (struct sockaddr *)&sock_in6, &err, 0) < 0) { 1026 + pr_err("sock_ops->getname() failed.\n"); 1027 + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 1028 + ISCSI_LOGIN_STATUS_TARGET_ERROR); 1029 + goto new_sess_out; 1030 + } 1031 + snprintf(conn->local_ip, sizeof(conn->local_ip), "%pI6c", 1032 + &sock_in6.sin6_addr.in6_u); 1033 + conn->local_port = ntohs(sock_in6.sin6_port); 1034 + 1032 1035 } else { 1033 1036 memset(&sock_in, 0, sizeof(struct sockaddr_in)); 1034 1037 ··· 1053 1032 } 1054 1033 sprintf(conn->login_ip, "%pI4", &sock_in.sin_addr.s_addr); 1055 1034 
conn->login_port = ntohs(sock_in.sin_port); 1035 + 1036 + if (conn->sock->ops->getname(conn->sock, 1037 + (struct sockaddr *)&sock_in, &err, 0) < 0) { 1038 + pr_err("sock_ops->getname() failed.\n"); 1039 + iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 1040 + ISCSI_LOGIN_STATUS_TARGET_ERROR); 1041 + goto new_sess_out; 1042 + } 1043 + sprintf(conn->local_ip, "%pI4", &sock_in.sin_addr.s_addr); 1044 + conn->local_port = ntohs(sock_in.sin_port); 1056 1045 } 1057 1046 1058 1047 conn->network_transport = np->np_network_transport; ··· 1070 1039 pr_debug("Received iSCSI login request from %s on %s Network" 1071 1040 " Portal %s:%hu\n", conn->login_ip, 1072 1041 (conn->network_transport == ISCSI_TCP) ? "TCP" : "SCTP", 1073 - np->np_ip, np->np_port); 1042 + conn->local_ip, conn->local_port); 1074 1043 1075 1044 pr_debug("Moving to TARG_CONN_STATE_IN_LOGIN.\n"); 1076 1045 conn->conn_state = TARG_CONN_STATE_IN_LOGIN;
+11
drivers/target/iscsi/iscsi_target_util.c
··· 849 849 case ISCSI_OP_SCSI_TMFUNC: 850 850 transport_generic_free_cmd(&cmd->se_cmd, 1); 851 851 break; 852 + case ISCSI_OP_REJECT: 853 + /* 854 + * Handle special case for REJECT when iscsi_add_reject*() has 855 + * overwritten the original iscsi_opcode assignment, and the 856 + * associated cmd->se_cmd needs to be released. 857 + */ 858 + if (cmd->se_cmd.se_tfo != NULL) { 859 + transport_generic_free_cmd(&cmd->se_cmd, 1); 860 + break; 861 + } 862 + /* Fall-through */ 852 863 default: 853 864 iscsit_release_cmd(cmd); 854 865 break;
+4 -4
drivers/target/target_core_alua.c
··· 78 78 return -EINVAL; 79 79 } 80 80 81 - buf = transport_kmap_first_data_page(cmd); 81 + buf = transport_kmap_data_sg(cmd); 82 82 83 83 spin_lock(&su_dev->t10_alua.tg_pt_gps_lock); 84 84 list_for_each_entry(tg_pt_gp, &su_dev->t10_alua.tg_pt_gps_list, ··· 163 163 buf[2] = ((rd_len >> 8) & 0xff); 164 164 buf[3] = (rd_len & 0xff); 165 165 166 - transport_kunmap_first_data_page(cmd); 166 + transport_kunmap_data_sg(cmd); 167 167 168 168 task->task_scsi_status = GOOD; 169 169 transport_complete_task(task, 1); ··· 194 194 cmd->scsi_sense_reason = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 195 195 return -EINVAL; 196 196 } 197 - buf = transport_kmap_first_data_page(cmd); 197 + buf = transport_kmap_data_sg(cmd); 198 198 199 199 /* 200 200 * Determine if explict ALUA via SET_TARGET_PORT_GROUPS is allowed ··· 351 351 } 352 352 353 353 out: 354 - transport_kunmap_first_data_page(cmd); 354 + transport_kunmap_data_sg(cmd); 355 355 task->task_scsi_status = GOOD; 356 356 transport_complete_task(task, 1); 357 357 return 0;
+26 -25
drivers/target/target_core_cdb.c
··· 83 83 return -EINVAL; 84 84 } 85 85 86 - buf = transport_kmap_first_data_page(cmd); 86 + buf = transport_kmap_data_sg(cmd); 87 87 88 88 if (dev == tpg->tpg_virt_lun0.lun_se_dev) { 89 89 buf[0] = 0x3f; /* Not connected */ ··· 134 134 buf[4] = 31; /* Set additional length to 31 */ 135 135 136 136 out: 137 - transport_kunmap_first_data_page(cmd); 137 + transport_kunmap_data_sg(cmd); 138 138 return 0; 139 139 } 140 140 ··· 698 698 int p, ret; 699 699 700 700 if (!(cdb[1] & 0x1)) { 701 + if (cdb[2]) { 702 + pr_err("INQUIRY with EVPD==0 but PAGE CODE=%02x\n", 703 + cdb[2]); 704 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 705 + return -EINVAL; 706 + } 707 + 701 708 ret = target_emulate_inquiry_std(cmd); 702 709 goto out; 703 710 } ··· 723 716 return -EINVAL; 724 717 } 725 718 726 - buf = transport_kmap_first_data_page(cmd); 719 + buf = transport_kmap_data_sg(cmd); 727 720 728 721 buf[0] = dev->transport->get_device_type(dev); 729 722 ··· 736 729 } 737 730 738 731 pr_err("Unknown VPD Code: 0x%02x\n", cdb[2]); 739 - cmd->scsi_sense_reason = TCM_UNSUPPORTED_SCSI_OPCODE; 732 + cmd->scsi_sense_reason = TCM_INVALID_CDB_FIELD; 740 733 ret = -EINVAL; 741 734 742 735 out_unmap: 743 - transport_kunmap_first_data_page(cmd); 736 + transport_kunmap_data_sg(cmd); 744 737 out: 745 738 if (!ret) { 746 739 task->task_scsi_status = GOOD; ··· 762 755 else 763 756 blocks = (u32)blocks_long; 764 757 765 - buf = transport_kmap_first_data_page(cmd); 758 + buf = transport_kmap_data_sg(cmd); 766 759 767 760 buf[0] = (blocks >> 24) & 0xff; 768 761 buf[1] = (blocks >> 16) & 0xff; ··· 778 771 if (dev->se_sub_dev->se_dev_attrib.emulate_tpu || dev->se_sub_dev->se_dev_attrib.emulate_tpws) 779 772 put_unaligned_be32(0xFFFFFFFF, &buf[0]); 780 773 781 - transport_kunmap_first_data_page(cmd); 774 + transport_kunmap_data_sg(cmd); 782 775 783 776 task->task_scsi_status = GOOD; 784 777 transport_complete_task(task, 1); ··· 792 785 unsigned char *buf; 793 786 unsigned long long blocks = 
dev->transport->get_blocks(dev); 794 787 795 - buf = transport_kmap_first_data_page(cmd); 788 + buf = transport_kmap_data_sg(cmd); 796 789 797 790 buf[0] = (blocks >> 56) & 0xff; 798 791 buf[1] = (blocks >> 48) & 0xff; ··· 813 806 if (dev->se_sub_dev->se_dev_attrib.emulate_tpu || dev->se_sub_dev->se_dev_attrib.emulate_tpws) 814 807 buf[14] = 0x80; 815 808 816 - transport_kunmap_first_data_page(cmd); 809 + transport_kunmap_data_sg(cmd); 817 810 818 811 task->task_scsi_status = GOOD; 819 812 transport_complete_task(task, 1); ··· 1026 1019 offset = cmd->data_length; 1027 1020 } 1028 1021 1029 - rbuf = transport_kmap_first_data_page(cmd); 1022 + rbuf = transport_kmap_data_sg(cmd); 1030 1023 memcpy(rbuf, buf, offset); 1031 - transport_kunmap_first_data_page(cmd); 1024 + transport_kunmap_data_sg(cmd); 1032 1025 1033 1026 task->task_scsi_status = GOOD; 1034 1027 transport_complete_task(task, 1); ··· 1050 1043 return -ENOSYS; 1051 1044 } 1052 1045 1053 - buf = transport_kmap_first_data_page(cmd); 1046 + buf = transport_kmap_data_sg(cmd); 1054 1047 1055 1048 if (!core_scsi3_ua_clear_for_request_sense(cmd, &ua_asc, &ua_ascq)) { 1056 1049 /* ··· 1058 1051 */ 1059 1052 buf[0] = 0x70; 1060 1053 buf[SPC_SENSE_KEY_OFFSET] = UNIT_ATTENTION; 1061 - /* 1062 - * Make sure request data length is enough for additional 1063 - * sense data. 1064 - */ 1065 - if (cmd->data_length <= 18) { 1054 + 1055 + if (cmd->data_length < 18) { 1066 1056 buf[7] = 0x00; 1067 1057 err = -EINVAL; 1068 1058 goto end; ··· 1076 1072 */ 1077 1073 buf[0] = 0x70; 1078 1074 buf[SPC_SENSE_KEY_OFFSET] = NO_SENSE; 1079 - /* 1080 - * Make sure request data length is enough for additional 1081 - * sense data. 
1082 - */ 1083 - if (cmd->data_length <= 18) { 1075 + 1076 + if (cmd->data_length < 18) { 1084 1077 buf[7] = 0x00; 1085 1078 err = -EINVAL; 1086 1079 goto end; ··· 1090 1089 } 1091 1090 1092 1091 end: 1093 - transport_kunmap_first_data_page(cmd); 1092 + transport_kunmap_data_sg(cmd); 1094 1093 task->task_scsi_status = GOOD; 1095 1094 transport_complete_task(task, 1); 1096 1095 return 0; ··· 1124 1123 dl = get_unaligned_be16(&cdb[0]); 1125 1124 bd_dl = get_unaligned_be16(&cdb[2]); 1126 1125 1127 - buf = transport_kmap_first_data_page(cmd); 1126 + buf = transport_kmap_data_sg(cmd); 1128 1127 1129 1128 ptr = &buf[offset]; 1130 1129 pr_debug("UNMAP: Sub: %s Using dl: %hu bd_dl: %hu size: %hu" ··· 1148 1147 } 1149 1148 1150 1149 err: 1151 - transport_kunmap_first_data_page(cmd); 1150 + transport_kunmap_data_sg(cmd); 1152 1151 if (!ret) { 1153 1152 task->task_scsi_status = GOOD; 1154 1153 transport_complete_task(task, 1);
+8 -4
drivers/target/target_core_configfs.c
··· 1704 1704 return -EINVAL; 1705 1705 } 1706 1706 1707 - se_dev->su_dev_flags |= SDF_USING_ALIAS; 1708 1707 read_bytes = snprintf(&se_dev->se_dev_alias[0], SE_DEV_ALIAS_LEN, 1709 1708 "%s", page); 1710 - 1709 + if (!read_bytes) 1710 + return -EINVAL; 1711 1711 if (se_dev->se_dev_alias[read_bytes - 1] == '\n') 1712 1712 se_dev->se_dev_alias[read_bytes - 1] = '\0'; 1713 + 1714 + se_dev->su_dev_flags |= SDF_USING_ALIAS; 1713 1715 1714 1716 pr_debug("Target_Core_ConfigFS: %s/%s set alias: %s\n", 1715 1717 config_item_name(&hba->hba_group.cg_item), ··· 1755 1753 return -EINVAL; 1756 1754 } 1757 1755 1758 - se_dev->su_dev_flags |= SDF_USING_UDEV_PATH; 1759 1756 read_bytes = snprintf(&se_dev->se_dev_udev_path[0], SE_UDEV_PATH_LEN, 1760 1757 "%s", page); 1761 - 1758 + if (!read_bytes) 1759 + return -EINVAL; 1762 1760 if (se_dev->se_dev_udev_path[read_bytes - 1] == '\n') 1763 1761 se_dev->se_dev_udev_path[read_bytes - 1] = '\0'; 1762 + 1763 + se_dev->su_dev_flags |= SDF_USING_UDEV_PATH; 1764 1764 1765 1765 pr_debug("Target_Core_ConfigFS: %s/%s set udev_path: %s\n", 1766 1766 config_item_name(&hba->hba_group.cg_item),
+15 -13
drivers/target/target_core_device.c
··· 320 320 void core_dec_lacl_count(struct se_node_acl *se_nacl, struct se_cmd *se_cmd) 321 321 { 322 322 struct se_dev_entry *deve; 323 + unsigned long flags; 323 324 324 - spin_lock_irq(&se_nacl->device_list_lock); 325 + spin_lock_irqsave(&se_nacl->device_list_lock, flags); 325 326 deve = &se_nacl->device_list[se_cmd->orig_fe_lun]; 326 327 deve->deve_cmds--; 327 - spin_unlock_irq(&se_nacl->device_list_lock); 328 + spin_unlock_irqrestore(&se_nacl->device_list_lock, flags); 328 329 } 329 330 330 331 void core_update_device_list_access( ··· 657 656 unsigned char *buf; 658 657 u32 cdb_offset = 0, lun_count = 0, offset = 8, i; 659 658 660 - buf = transport_kmap_first_data_page(se_cmd); 659 + buf = (unsigned char *) transport_kmap_data_sg(se_cmd); 661 660 662 661 /* 663 662 * If no struct se_session pointer is present, this struct se_cmd is ··· 695 694 * See SPC3 r07, page 159. 696 695 */ 697 696 done: 698 - transport_kunmap_first_data_page(se_cmd); 697 + transport_kunmap_data_sg(se_cmd); 699 698 lun_count *= 8; 700 699 buf[0] = ((lun_count >> 24) & 0xff); 701 700 buf[1] = ((lun_count >> 16) & 0xff); ··· 1295 1294 { 1296 1295 struct se_lun *lun_p; 1297 1296 u32 lun_access = 0; 1297 + int rc; 1298 1298 1299 1299 if (atomic_read(&dev->dev_access_obj.obj_access_count) != 0) { 1300 1300 pr_err("Unable to export struct se_device while dev_access_obj: %d\n", 1301 1301 atomic_read(&dev->dev_access_obj.obj_access_count)); 1302 - return NULL; 1302 + return ERR_PTR(-EACCES); 1303 1303 } 1304 1304 1305 1305 lun_p = core_tpg_pre_addlun(tpg, lun); 1306 - if ((IS_ERR(lun_p)) || !lun_p) 1307 - return NULL; 1306 + if (IS_ERR(lun_p)) 1307 + return lun_p; 1308 1308 1309 1309 if (dev->dev_flags & DF_READ_ONLY) 1310 1310 lun_access = TRANSPORT_LUNFLAGS_READ_ONLY; 1311 1311 else 1312 1312 lun_access = TRANSPORT_LUNFLAGS_READ_WRITE; 1313 1313 1314 - if (core_tpg_post_addlun(tpg, lun_p, lun_access, dev) < 0) 1315 - return NULL; 1314 + rc = core_tpg_post_addlun(tpg, lun_p, lun_access, dev); 
1315 + if (rc < 0) 1316 + return ERR_PTR(rc); 1316 1317 1317 1318 pr_debug("%s_TPG[%u]_LUN[%u] - Activated %s Logical Unit from" 1318 1319 " CORE HBA: %u\n", tpg->se_tpg_tfo->get_fabric_name(), ··· 1351 1348 u32 unpacked_lun) 1352 1349 { 1353 1350 struct se_lun *lun; 1354 - int ret = 0; 1355 1351 1356 - lun = core_tpg_pre_dellun(tpg, unpacked_lun, &ret); 1357 - if (!lun) 1358 - return ret; 1352 + lun = core_tpg_pre_dellun(tpg, unpacked_lun); 1353 + if (IS_ERR(lun)) 1354 + return PTR_ERR(lun); 1359 1355 1360 1356 core_tpg_post_dellun(tpg, lun); 1361 1357
+2 -2
drivers/target/target_core_fabric_configfs.c
··· 766 766 767 767 lun_p = core_dev_add_lun(se_tpg, dev->se_hba, dev, 768 768 lun->unpacked_lun); 769 - if (IS_ERR(lun_p) || !lun_p) { 769 + if (IS_ERR(lun_p)) { 770 770 pr_err("core_dev_add_lun() failed\n"); 771 - ret = -EINVAL; 771 + ret = PTR_ERR(lun_p); 772 772 goto out; 773 773 } 774 774
+9 -2
drivers/target/target_core_iblock.c
··· 129 129 /* 130 130 * These settings need to be made tunable.. 131 131 */ 132 - ib_dev->ibd_bio_set = bioset_create(32, 64); 132 + ib_dev->ibd_bio_set = bioset_create(32, 0); 133 133 if (!ib_dev->ibd_bio_set) { 134 134 pr_err("IBLOCK: Unable to create bioset()\n"); 135 135 return ERR_PTR(-ENOMEM); ··· 181 181 */ 182 182 dev->se_sub_dev->se_dev_attrib.max_unmap_block_desc_count = 1; 183 183 dev->se_sub_dev->se_dev_attrib.unmap_granularity = 184 - q->limits.discard_granularity; 184 + q->limits.discard_granularity >> 9; 185 185 dev->se_sub_dev->se_dev_attrib.unmap_granularity_alignment = 186 186 q->limits.discard_alignment; 187 187 ··· 487 487 struct iblock_dev *ib_dev = task->task_se_cmd->se_dev->dev_ptr; 488 488 struct iblock_req *ib_req = IBLOCK_REQ(task); 489 489 struct bio *bio; 490 + 491 + /* 492 + * Only allocate as many vector entries as the bio code allows us to, 493 + * we'll loop later on until we have handled the whole request. 494 + */ 495 + if (sg_num > BIO_MAX_PAGES) 496 + sg_num = BIO_MAX_PAGES; 490 497 491 498 bio = bio_alloc_bioset(GFP_NOIO, sg_num, ib_dev->ibd_bio_set); 492 499 if (!bio) {
+1 -1
drivers/target/target_core_internal.h
··· 90 90 struct se_lun *core_tpg_pre_addlun(struct se_portal_group *, u32); 91 91 int core_tpg_post_addlun(struct se_portal_group *, struct se_lun *, 92 92 u32, void *); 93 - struct se_lun *core_tpg_pre_dellun(struct se_portal_group *, u32, int *); 93 + struct se_lun *core_tpg_pre_dellun(struct se_portal_group *, u32 unpacked_lun); 94 94 int core_tpg_post_dellun(struct se_portal_group *, struct se_lun *); 95 95 96 96 /* target_core_transport.c */
+22 -21
drivers/target/target_core_pr.c
··· 478 478 case READ_MEDIA_SERIAL_NUMBER: 479 479 case REPORT_LUNS: 480 480 case REQUEST_SENSE: 481 + case PERSISTENT_RESERVE_IN: 481 482 ret = 0; /*/ Allowed CDBs */ 482 483 break; 483 484 default: ··· 1535 1534 tidh_new->dest_local_nexus = 1; 1536 1535 list_add_tail(&tidh_new->dest_list, &tid_dest_list); 1537 1536 1538 - buf = transport_kmap_first_data_page(cmd); 1537 + buf = transport_kmap_data_sg(cmd); 1539 1538 /* 1540 1539 * For a PERSISTENT RESERVE OUT specify initiator ports payload, 1541 1540 * first extract TransportID Parameter Data Length, and make sure ··· 1786 1785 1787 1786 } 1788 1787 1789 - transport_kunmap_first_data_page(cmd); 1788 + transport_kunmap_data_sg(cmd); 1790 1789 1791 1790 /* 1792 1791 * Go ahead and create a registrations from tid_dest_list for the ··· 1834 1833 1835 1834 return 0; 1836 1835 out: 1837 - transport_kunmap_first_data_page(cmd); 1836 + transport_kunmap_data_sg(cmd); 1838 1837 /* 1839 1838 * For the failure case, release everything from tid_dest_list 1840 1839 * including *dest_pr_reg and the configfs dependances.. ··· 3121 3120 if (!calling_it_nexus) 3122 3121 core_scsi3_ua_allocate(pr_reg_nacl, 3123 3122 pr_res_mapped_lun, 0x2A, 3124 - ASCQ_2AH_RESERVATIONS_PREEMPTED); 3123 + ASCQ_2AH_REGISTRATIONS_PREEMPTED); 3125 3124 } 3126 3125 spin_unlock(&pr_tmpl->registration_lock); 3127 3126 /* ··· 3234 3233 * additional sense code set to REGISTRATIONS PREEMPTED; 3235 3234 */ 3236 3235 core_scsi3_ua_allocate(pr_reg_nacl, pr_res_mapped_lun, 0x2A, 3237 - ASCQ_2AH_RESERVATIONS_PREEMPTED); 3236 + ASCQ_2AH_REGISTRATIONS_PREEMPTED); 3238 3237 } 3239 3238 spin_unlock(&pr_tmpl->registration_lock); 3240 3239 /* ··· 3411 3410 * will be moved to for the TransportID containing SCSI initiator WWN 3412 3411 * information. 
3413 3412 */ 3414 - buf = transport_kmap_first_data_page(cmd); 3413 + buf = transport_kmap_data_sg(cmd); 3415 3414 rtpi = (buf[18] & 0xff) << 8; 3416 3415 rtpi |= buf[19] & 0xff; 3417 3416 tid_len = (buf[20] & 0xff) << 24; 3418 3417 tid_len |= (buf[21] & 0xff) << 16; 3419 3418 tid_len |= (buf[22] & 0xff) << 8; 3420 3419 tid_len |= buf[23] & 0xff; 3421 - transport_kunmap_first_data_page(cmd); 3420 + transport_kunmap_data_sg(cmd); 3422 3421 buf = NULL; 3423 3422 3424 3423 if ((tid_len + 24) != cmd->data_length) { ··· 3470 3469 return -EINVAL; 3471 3470 } 3472 3471 3473 - buf = transport_kmap_first_data_page(cmd); 3472 + buf = transport_kmap_data_sg(cmd); 3474 3473 proto_ident = (buf[24] & 0x0f); 3475 3474 #if 0 3476 3475 pr_debug("SPC-3 PR REGISTER_AND_MOVE: Extracted Protocol Identifier:" ··· 3504 3503 goto out; 3505 3504 } 3506 3505 3507 - transport_kunmap_first_data_page(cmd); 3506 + transport_kunmap_data_sg(cmd); 3508 3507 buf = NULL; 3509 3508 3510 3509 pr_debug("SPC-3 PR [%s] Extracted initiator %s identifier: %s" ··· 3769 3768 " REGISTER_AND_MOVE\n"); 3770 3769 } 3771 3770 3772 - transport_kunmap_first_data_page(cmd); 3771 + transport_kunmap_data_sg(cmd); 3773 3772 3774 3773 core_scsi3_put_pr_reg(dest_pr_reg); 3775 3774 return 0; 3776 3775 out: 3777 3776 if (buf) 3778 - transport_kunmap_first_data_page(cmd); 3777 + transport_kunmap_data_sg(cmd); 3779 3778 if (dest_se_deve) 3780 3779 core_scsi3_lunacl_undepend_item(dest_se_deve); 3781 3780 if (dest_node_acl) ··· 3849 3848 scope = (cdb[2] & 0xf0); 3850 3849 type = (cdb[2] & 0x0f); 3851 3850 3852 - buf = transport_kmap_first_data_page(cmd); 3851 + buf = transport_kmap_data_sg(cmd); 3853 3852 /* 3854 3853 * From PERSISTENT_RESERVE_OUT parameter list (payload) 3855 3854 */ ··· 3867 3866 aptpl = (buf[17] & 0x01); 3868 3867 unreg = (buf[17] & 0x02); 3869 3868 } 3870 - transport_kunmap_first_data_page(cmd); 3869 + transport_kunmap_data_sg(cmd); 3871 3870 buf = NULL; 3872 3871 3873 3872 /* ··· 3967 3966 return -EINVAL; 
3968 3967 } 3969 3968 3970 - buf = transport_kmap_first_data_page(cmd); 3969 + buf = transport_kmap_data_sg(cmd); 3971 3970 buf[0] = ((su_dev->t10_pr.pr_generation >> 24) & 0xff); 3972 3971 buf[1] = ((su_dev->t10_pr.pr_generation >> 16) & 0xff); 3973 3972 buf[2] = ((su_dev->t10_pr.pr_generation >> 8) & 0xff); ··· 4001 4000 buf[6] = ((add_len >> 8) & 0xff); 4002 4001 buf[7] = (add_len & 0xff); 4003 4002 4004 - transport_kunmap_first_data_page(cmd); 4003 + transport_kunmap_data_sg(cmd); 4005 4004 4006 4005 return 0; 4007 4006 } ··· 4027 4026 return -EINVAL; 4028 4027 } 4029 4028 4030 - buf = transport_kmap_first_data_page(cmd); 4029 + buf = transport_kmap_data_sg(cmd); 4031 4030 buf[0] = ((su_dev->t10_pr.pr_generation >> 24) & 0xff); 4032 4031 buf[1] = ((su_dev->t10_pr.pr_generation >> 16) & 0xff); 4033 4032 buf[2] = ((su_dev->t10_pr.pr_generation >> 8) & 0xff); ··· 4086 4085 4087 4086 err: 4088 4087 spin_unlock(&se_dev->dev_reservation_lock); 4089 - transport_kunmap_first_data_page(cmd); 4088 + transport_kunmap_data_sg(cmd); 4090 4089 4091 4090 return 0; 4092 4091 } ··· 4110 4109 return -EINVAL; 4111 4110 } 4112 4111 4113 - buf = transport_kmap_first_data_page(cmd); 4112 + buf = transport_kmap_data_sg(cmd); 4114 4113 4115 4114 buf[0] = ((add_len << 8) & 0xff); 4116 4115 buf[1] = (add_len & 0xff); ··· 4142 4141 buf[4] |= 0x02; /* PR_TYPE_WRITE_EXCLUSIVE */ 4143 4142 buf[5] |= 0x01; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */ 4144 4143 4145 - transport_kunmap_first_data_page(cmd); 4144 + transport_kunmap_data_sg(cmd); 4146 4145 4147 4146 return 0; 4148 4147 } ··· 4172 4171 return -EINVAL; 4173 4172 } 4174 4173 4175 - buf = transport_kmap_first_data_page(cmd); 4174 + buf = transport_kmap_data_sg(cmd); 4176 4175 4177 4176 buf[0] = ((su_dev->t10_pr.pr_generation >> 24) & 0xff); 4178 4177 buf[1] = ((su_dev->t10_pr.pr_generation >> 16) & 0xff); ··· 4293 4292 buf[6] = ((add_len >> 8) & 0xff); 4294 4293 buf[7] = (add_len & 0xff); 4295 4294 4296 - 
transport_kunmap_first_data_page(cmd); 4295 + transport_kunmap_data_sg(cmd); 4297 4296 4298 4297 return 0; 4299 4298 }
+2 -2
drivers/target/target_core_pscsi.c
··· 693 693 694 694 if (task->task_se_cmd->se_deve->lun_flags & 695 695 TRANSPORT_LUNFLAGS_READ_ONLY) { 696 - unsigned char *buf = transport_kmap_first_data_page(task->task_se_cmd); 696 + unsigned char *buf = transport_kmap_data_sg(task->task_se_cmd); 697 697 698 698 if (cdb[0] == MODE_SENSE_10) { 699 699 if (!(buf[3] & 0x80)) ··· 703 703 buf[2] |= 0x80; 704 704 } 705 705 706 - transport_kunmap_first_data_page(task->task_se_cmd); 706 + transport_kunmap_data_sg(task->task_se_cmd); 707 707 } 708 708 } 709 709 after_mode_sense:
+1 -2
drivers/target/target_core_tpg.c
··· 807 807 808 808 struct se_lun *core_tpg_pre_dellun( 809 809 struct se_portal_group *tpg, 810 - u32 unpacked_lun, 811 - int *ret) 810 + u32 unpacked_lun) 812 811 { 813 812 struct se_lun *lun; 814 813
+85 -43
drivers/target/target_core_transport.c
··· 1255 1255 static void scsi_dump_inquiry(struct se_device *dev) 1256 1256 { 1257 1257 struct t10_wwn *wwn = &dev->se_sub_dev->t10_wwn; 1258 + char buf[17]; 1258 1259 int i, device_type; 1259 1260 /* 1260 1261 * Print Linux/SCSI style INQUIRY formatting to the kernel ring buffer 1261 1262 */ 1262 - pr_debug(" Vendor: "); 1263 1263 for (i = 0; i < 8; i++) 1264 1264 if (wwn->vendor[i] >= 0x20) 1265 - pr_debug("%c", wwn->vendor[i]); 1265 + buf[i] = wwn->vendor[i]; 1266 1266 else 1267 - pr_debug(" "); 1267 + buf[i] = ' '; 1268 + buf[i] = '\0'; 1269 + pr_debug(" Vendor: %s\n", buf); 1268 1270 1269 - pr_debug(" Model: "); 1270 1271 for (i = 0; i < 16; i++) 1271 1272 if (wwn->model[i] >= 0x20) 1272 - pr_debug("%c", wwn->model[i]); 1273 + buf[i] = wwn->model[i]; 1273 1274 else 1274 - pr_debug(" "); 1275 + buf[i] = ' '; 1276 + buf[i] = '\0'; 1277 + pr_debug(" Model: %s\n", buf); 1275 1278 1276 - pr_debug(" Revision: "); 1277 1279 for (i = 0; i < 4; i++) 1278 1280 if (wwn->revision[i] >= 0x20) 1279 - pr_debug("%c", wwn->revision[i]); 1281 + buf[i] = wwn->revision[i]; 1280 1282 else 1281 - pr_debug(" "); 1282 - 1283 - pr_debug("\n"); 1283 + buf[i] = ' '; 1284 + buf[i] = '\0'; 1285 + pr_debug(" Revision: %s\n", buf); 1284 1286 1285 1287 device_type = dev->transport->get_device_type(dev); 1286 1288 pr_debug(" Type: %s ", scsi_device_type(device_type)); ··· 1657 1655 * This may only be called from process context, and also currently 1658 1656 * assumes internal allocation of fabric payload buffer by target-core. 
1659 1657 **/ 1660 - int target_submit_cmd(struct se_cmd *se_cmd, struct se_session *se_sess, 1658 + void target_submit_cmd(struct se_cmd *se_cmd, struct se_session *se_sess, 1661 1659 unsigned char *cdb, unsigned char *sense, u32 unpacked_lun, 1662 1660 u32 data_length, int task_attr, int data_dir, int flags) 1663 1661 { ··· 1690 1688 /* 1691 1689 * Locate se_lun pointer and attach it to struct se_cmd 1692 1690 */ 1693 - if (transport_lookup_cmd_lun(se_cmd, unpacked_lun) < 0) 1694 - goto out_check_cond; 1691 + if (transport_lookup_cmd_lun(se_cmd, unpacked_lun) < 0) { 1692 + transport_send_check_condition_and_sense(se_cmd, 1693 + se_cmd->scsi_sense_reason, 0); 1694 + target_put_sess_cmd(se_sess, se_cmd); 1695 + return; 1696 + } 1695 1697 /* 1696 1698 * Sanitize CDBs via transport_generic_cmd_sequencer() and 1697 1699 * allocate the necessary tasks to complete the received CDB+data 1698 1700 */ 1699 1701 rc = transport_generic_allocate_tasks(se_cmd, cdb); 1700 - if (rc != 0) 1701 - goto out_check_cond; 1702 + if (rc != 0) { 1703 + transport_generic_request_failure(se_cmd); 1704 + return; 1705 + } 1702 1706 /* 1703 1707 * Dispatch se_cmd descriptor to se_lun->lun_se_dev backend 1704 1708 * for immediate execution of READs, otherwise wait for ··· 1712 1704 * when fabric has filled the incoming buffer. 
1713 1705 */ 1714 1706 transport_handle_cdb_direct(se_cmd); 1715 - return 0; 1716 - 1717 - out_check_cond: 1718 - transport_send_check_condition_and_sense(se_cmd, 1719 - se_cmd->scsi_sense_reason, 0); 1720 - return 0; 1707 + return; 1721 1708 } 1722 1709 EXPORT_SYMBOL(target_submit_cmd); 1723 1710 ··· 2697 2694 cmd->se_cmd_flags |= SCF_SCSI_CONTROL_SG_IO_CDB; 2698 2695 2699 2696 if (target_check_write_same_discard(&cdb[10], dev) < 0) 2700 - goto out_invalid_cdb_field; 2697 + goto out_unsupported_cdb; 2701 2698 if (!passthrough) 2702 2699 cmd->execute_task = target_emulate_write_same; 2703 2700 break; ··· 2980 2977 cmd->se_cmd_flags |= SCF_SCSI_CONTROL_SG_IO_CDB; 2981 2978 2982 2979 if (target_check_write_same_discard(&cdb[1], dev) < 0) 2983 - goto out_invalid_cdb_field; 2980 + goto out_unsupported_cdb; 2984 2981 if (!passthrough) 2985 2982 cmd->execute_task = target_emulate_write_same; 2986 2983 break; ··· 3003 3000 * of byte 1 bit 3 UNMAP instead of original reserved field 3004 3001 */ 3005 3002 if (target_check_write_same_discard(&cdb[1], dev) < 0) 3006 - goto out_invalid_cdb_field; 3003 + goto out_unsupported_cdb; 3007 3004 if (!passthrough) 3008 3005 cmd->execute_task = target_emulate_write_same; 3009 3006 break; ··· 3084 3081 if (!(passthrough || cmd->execute_task || 3085 3082 (cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB))) 3086 3083 goto out_unsupported_cdb; 3087 - 3088 - /* Let's limit control cdbs to a page, for simplicity's sake. 
*/
3089 - if ((cmd->se_cmd_flags & SCF_SCSI_CONTROL_SG_IO_CDB) &&
3090 - size > PAGE_SIZE)
3091 - goto out_invalid_cdb_field;
3092 3084 
3093 3085 transport_set_supported_SAM_opcode(cmd);
3094 3086 return ret;
··· 3488 3490 }
3489 3491 EXPORT_SYMBOL(transport_generic_map_mem_to_cmd);
3490 3492 
3491 - void *transport_kmap_first_data_page(struct se_cmd *cmd)
3493 + void *transport_kmap_data_sg(struct se_cmd *cmd)
3492 3494 {
3493 3495 struct scatterlist *sg = cmd->t_data_sg;
3496 + struct page **pages;
3497 + int i;
3494 3498 
3495 3499 BUG_ON(!sg);
3496 3500 /*
··· 3500 3500 * tcm_loop who may be using a contig buffer from the SCSI midlayer for
3501 3501 * control CDBs passed as SGLs via transport_generic_map_mem_to_cmd()
3502 3502 */
3503 - return kmap(sg_page(sg)) + sg->offset;
3504 - }
3505 - EXPORT_SYMBOL(transport_kmap_first_data_page);
3503 + if (!cmd->t_data_nents)
3504 + return NULL;
3505 + else if (cmd->t_data_nents == 1)
3506 + return kmap(sg_page(sg)) + sg->offset;
3506 3507 
3507 - void transport_kunmap_first_data_page(struct se_cmd *cmd)
3508 - {
3509 - kunmap(sg_page(cmd->t_data_sg));
3508 + /* >1 page. use vmap */
3509 + pages = kmalloc(sizeof(*pages) * cmd->t_data_nents, GFP_KERNEL);
3510 + if (!pages)
3511 + return NULL;
3512 + 
3513 + /* convert sg[] to pages[] */
3514 + for_each_sg(cmd->t_data_sg, sg, cmd->t_data_nents, i) {
3515 + pages[i] = sg_page(sg);
3516 + }
3517 + 
3518 + cmd->t_data_vmap = vmap(pages, cmd->t_data_nents, VM_MAP, PAGE_KERNEL);
3519 + kfree(pages);
3520 + if (!cmd->t_data_vmap)
3521 + return NULL;
3522 + 
3523 + return cmd->t_data_vmap + cmd->t_data_sg[0].offset;
3510 3524 }
3511 - EXPORT_SYMBOL(transport_kunmap_first_data_page);
3525 + EXPORT_SYMBOL(transport_kmap_data_sg);
3526 + 
3527 + void transport_kunmap_data_sg(struct se_cmd *cmd)
3528 + {
3529 + if (!cmd->t_data_nents)
3530 + return;
3531 + else if (cmd->t_data_nents == 1)
3532 + kunmap(sg_page(cmd->t_data_sg));
3533 + 
3534 + vunmap(cmd->t_data_vmap);
3535 + cmd->t_data_vmap = NULL;
3536 + }
3537 + EXPORT_SYMBOL(transport_kunmap_data_sg);
3512 3538 
3513 3539 static int
3514 3540 transport_generic_get_mem(struct se_cmd *cmd)
··· 3542 3516 u32 length = cmd->data_length;
3543 3517 unsigned int nents;
3544 3518 struct page *page;
3519 + gfp_t zero_flag;
3545 3520 int i = 0;
3546 3521 
3547 3522 nents = DIV_ROUND_UP(length, PAGE_SIZE);
··· 3553 3526 cmd->t_data_nents = nents;
3554 3527 sg_init_table(cmd->t_data_sg, nents);
3555 3528 
3529 + zero_flag = cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB ? 0 : __GFP_ZERO;
3530 + 
3556 3531 while (length) {
3557 3532 u32 page_len = min_t(u32, length, PAGE_SIZE);
3558 - page = alloc_page(GFP_KERNEL | __GFP_ZERO);
3533 + page = alloc_page(GFP_KERNEL | zero_flag);
3559 3534 if (!page)
3560 3535 goto out;
3561 3536 
··· 3785 3756 struct se_task *task;
3786 3757 unsigned long flags;
3787 3758 
3759 + /* Workaround for handling zero-length control CDBs */
3760 + if ((cmd->se_cmd_flags & SCF_SCSI_CONTROL_SG_IO_CDB) &&
3761 + !cmd->data_length)
3762 + return 0;
3763 + 
3788 3764 task = transport_generic_get_task(cmd, cmd->data_direction);
3789 3765 if (!task)
3790 3766 return -ENOMEM;
··· 3861 3827 else if (!task_cdbs && (cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB)) {
3862 3828 cmd->t_state = TRANSPORT_COMPLETE;
3863 3829 atomic_set(&cmd->t_transport_active, 1);
3830 + 
3831 + if (cmd->t_task_cdb[0] == REQUEST_SENSE) {
3832 + u8 ua_asc = 0, ua_ascq = 0;
3833 + 
3834 + core_scsi3_ua_clear_for_request_sense(cmd,
3835 + &ua_asc, &ua_ascq);
3836 + }
3837 + 
3864 3838 INIT_WORK(&cmd->work, target_complete_ok_work);
3865 3839 queue_work(target_completion_wq, &cmd->work);
3866 3840 return 0;
··· 4490 4448 /* CURRENT ERROR */
4491 4449 buffer[offset] = 0x70;
4492 4450 buffer[offset+SPC_ADD_SENSE_LEN_OFFSET] = 10;
4493 - /* ABORTED COMMAND */
4494 - buffer[offset+SPC_SENSE_KEY_OFFSET] = ABORTED_COMMAND;
4451 + /* ILLEGAL REQUEST */
4452 + buffer[offset+SPC_SENSE_KEY_OFFSET] = ILLEGAL_REQUEST;
4495 4453 /* INVALID FIELD IN CDB */
4496 4454 buffer[offset+SPC_ASC_KEY_OFFSET] = 0x24;
4497 4455 break;
··· 4499 4457 /* CURRENT ERROR */
4500 4458 buffer[offset] = 0x70;
4501 4459 buffer[offset+SPC_ADD_SENSE_LEN_OFFSET] = 10;
4502 - /* ABORTED COMMAND */
4503 - buffer[offset+SPC_SENSE_KEY_OFFSET] = ABORTED_COMMAND;
4460 + /* ILLEGAL REQUEST */
4461 + buffer[offset+SPC_SENSE_KEY_OFFSET] = ILLEGAL_REQUEST;
4504 4462 /* INVALID FIELD IN PARAMETER LIST */
4505 4463 buffer[offset+SPC_ASC_KEY_OFFSET] = 0x26;
4506 4464 break;
+2 -7
drivers/target/tcm_fc/tfc_cmd.c
··· 540 540 int data_dir = 0;
541 541 u32 data_len;
542 542 int task_attr;
543 - int ret;
544 543 
545 544 fcp = fc_frame_payload_get(cmd->req_frame, sizeof(*fcp));
546 545 if (!fcp)
··· 602 603 * Use a single se_cmd->cmd_kref as we expect to release se_cmd
603 604 * directly from ft_check_stop_free callback in response path.
604 605 */
605 - ret = target_submit_cmd(&cmd->se_cmd, cmd->sess->se_sess, cmd->cdb,
606 + target_submit_cmd(&cmd->se_cmd, cmd->sess->se_sess, cmd->cdb,
606 607 &cmd->ft_sense_buffer[0], cmd->lun, data_len,
607 608 task_attr, data_dir, 0);
608 - pr_debug("r_ctl %x alloc target_submit_cmd %d\n", fh->fh_r_ctl, ret);
609 - if (ret < 0) {
610 - ft_dump_cmd(cmd, __func__);
611 - return;
612 - }
609 + pr_debug("r_ctl %x alloc target_submit_cmd\n", fh->fh_r_ctl);
613 610 return;
614 611 
615 612 err:
-1
drivers/tty/vt/vt_ioctl.c
··· 1463 1463 if (!perm && op->op != KD_FONT_OP_GET)
1464 1464 return -EPERM;
1465 1465 op->data = compat_ptr(((struct compat_console_font_op *)op)->data);
1466 - op->flags |= KD_FONT_FLAG_OLD;
1467 1466 i = con_font_op(vc, op);
1468 1467 if (i)
1469 1468 return i;
+1 -1
drivers/video/atmel_lcdfb.c
··· 1108 1108 */
1109 1109 lcdc_writel(sinfo, ATMEL_LCDC_IDR, ~0UL);
1110 1110 
1111 - sinfo->saved_lcdcon = lcdc_readl(sinfo, ATMEL_LCDC_CONTRAST_VAL);
1111 + sinfo->saved_lcdcon = lcdc_readl(sinfo, ATMEL_LCDC_CONTRAST_CTR);
1112 1112 lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_CTR, 0);
1113 1113 if (sinfo->atmel_lcdfb_power_control)
1114 1114 sinfo->atmel_lcdfb_power_control(0);
+2 -2
drivers/video/fsl-diu-fb.c
··· 1432 1432 struct fsl_diu_data *data;
1433 1433 
1434 1434 data = dev_get_drvdata(&ofdev->dev);
1435 - disable_lcdc(data->fsl_diu_info[0]);
1435 + disable_lcdc(data->fsl_diu_info);
1436 1436 
1437 1437 return 0;
1438 1438 }
··· 1442 1442 struct fsl_diu_data *data;
1443 1443 
1444 1444 data = dev_get_drvdata(&ofdev->dev);
1445 - enable_lcdc(data->fsl_diu_info[0]);
1445 + enable_lcdc(data->fsl_diu_info);
1446 1446 
1447 1447 return 0;
1448 1448 }
-1
drivers/video/intelfb/intelfbdrv.c
··· 529 529 if (fb_alloc_cmap(&info->cmap, 256, 1) < 0) {
530 530 ERR_MSG("Could not allocate cmap for intelfb_info.\n");
531 531 goto err_out_cmap;
532 - return -ENODEV;
533 532 }
534 533 
535 534 dinfo = info->par;
+1 -1
drivers/video/omap2/dss/dispc.c
··· 401 401 
402 402 DSSDBG("dispc_runtime_put\n");
403 403 
404 - r = pm_runtime_put(&dispc.pdev->dev);
404 + r = pm_runtime_put_sync(&dispc.pdev->dev);
405 405 WARN_ON(r < 0);
406 406 }
407 407 
+1 -1
drivers/video/omap2/dss/dsi.c
··· 1079 1079 
1080 1080 DSSDBG("dsi_runtime_put\n");
1081 1081 
1082 - r = pm_runtime_put(&dsi->pdev->dev);
1082 + r = pm_runtime_put_sync(&dsi->pdev->dev);
1083 1083 WARN_ON(r < 0);
1084 1084 }
1085 1085 
+1 -1
drivers/video/omap2/dss/dss.c
··· 720 720 
721 721 DSSDBG("dss_runtime_put\n");
722 722 
723 - r = pm_runtime_put(&dss.pdev->dev);
723 + r = pm_runtime_put_sync(&dss.pdev->dev);
724 724 WARN_ON(r < 0);
725 725 }
726 726 
+4 -1
drivers/video/omap2/dss/hdmi.c
··· 176 176 
177 177 DSSDBG("hdmi_runtime_put\n");
178 178 
179 - r = pm_runtime_put(&hdmi.pdev->dev);
179 + r = pm_runtime_put_sync(&hdmi.pdev->dev);
180 180 WARN_ON(r < 0);
181 181 }
182 182 
··· 497 497 
498 498 int omapdss_hdmi_display_enable(struct omap_dss_device *dssdev)
499 499 {
500 + struct omap_dss_hdmi_data *priv = dssdev->data;
500 501 int r = 0;
501 502 
502 503 DSSDBG("ENTER hdmi_display_enable\n");
··· 509 508 r = -ENODEV;
510 509 goto err0;
511 510 }
511 + 
512 + hdmi.ip_data.hpd_gpio = priv->hpd_gpio;
512 513 
513 514 r = omap_dss_start_device(dssdev);
514 515 if (r) {
+1 -1
drivers/video/omap2/dss/rfbi.c
··· 140 140 
141 141 DSSDBG("rfbi_runtime_put\n");
142 142 
143 - r = pm_runtime_put(&rfbi.pdev->dev);
143 + r = pm_runtime_put_sync(&rfbi.pdev->dev);
144 144 WARN_ON(r < 0);
145 145 }
146 146 
+4
drivers/video/omap2/dss/ti_hdmi.h
··· 126 126 const struct ti_hdmi_ip_ops *ops;
127 127 struct hdmi_config cfg;
128 128 struct hdmi_pll_info pll_data;
129 + 
130 + /* ti_hdmi_4xxx_ip private data. These should be in a separate struct */
131 + int hpd_gpio;
132 + bool phy_tx_enabled;
129 133 };
130 134 int ti_hdmi_4xxx_phy_enable(struct hdmi_ip_data *ip_data);
131 135 void ti_hdmi_4xxx_phy_disable(struct hdmi_ip_data *ip_data);
+64 -4
drivers/video/omap2/dss/ti_hdmi_4xxx_ip.c
··· 28 28 #include <linux/delay.h>
29 29 #include <linux/string.h>
30 30 #include <linux/seq_file.h>
31 + #include <linux/gpio.h>
31 32 
32 33 #include "ti_hdmi_4xxx_ip.h"
33 34 #include "dss.h"
··· 224 223 hdmi_set_pll_pwr(ip_data, HDMI_PLLPWRCMD_ALLOFF);
225 224 }
226 225 
226 + static int hdmi_check_hpd_state(struct hdmi_ip_data *ip_data)
227 + {
228 + unsigned long flags;
229 + bool hpd;
230 + int r;
231 + /* this should be in ti_hdmi_4xxx_ip private data */
232 + static DEFINE_SPINLOCK(phy_tx_lock);
233 + 
234 + spin_lock_irqsave(&phy_tx_lock, flags);
235 + 
236 + hpd = gpio_get_value(ip_data->hpd_gpio);
237 + 
238 + if (hpd == ip_data->phy_tx_enabled) {
239 + spin_unlock_irqrestore(&phy_tx_lock, flags);
240 + return 0;
241 + }
242 + 
243 + if (hpd)
244 + r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_TXON);
245 + else
246 + r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_LDOON);
247 + 
248 + if (r) {
249 + DSSERR("Failed to %s PHY TX power\n",
250 + hpd ? "enable" : "disable");
251 + goto err;
252 + }
253 + 
254 + ip_data->phy_tx_enabled = hpd;
255 + err:
256 + spin_unlock_irqrestore(&phy_tx_lock, flags);
257 + return r;
258 + }
259 + 
260 + static irqreturn_t hpd_irq_handler(int irq, void *data)
261 + {
262 + struct hdmi_ip_data *ip_data = data;
263 + 
264 + hdmi_check_hpd_state(ip_data);
265 + 
266 + return IRQ_HANDLED;
267 + }
268 + 
227 269 int ti_hdmi_4xxx_phy_enable(struct hdmi_ip_data *ip_data)
228 270 {
229 271 u16 r = 0;
230 272 void __iomem *phy_base = hdmi_phy_base(ip_data);
231 273 
232 274 r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_LDOON);
233 - if (r)
234 - return r;
235 - 
236 - r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_TXON);
237 275 if (r)
238 276 return r;
239 277 
··· 297 257 /* Write to phy address 3 to change the polarity control */
298 258 REG_FLD_MOD(phy_base, HDMI_TXPHY_PAD_CFG_CTRL, 0x1, 27, 27);
299 259 
260 + r = request_threaded_irq(gpio_to_irq(ip_data->hpd_gpio),
261 + NULL, hpd_irq_handler,
262 + IRQF_DISABLED | IRQF_TRIGGER_RISING |
263 + IRQF_TRIGGER_FALLING, "hpd", ip_data);
264 + if (r) {
265 + DSSERR("HPD IRQ request failed\n");
266 + hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_OFF);
267 + return r;
268 + }
269 + 
270 + r = hdmi_check_hpd_state(ip_data);
271 + if (r) {
272 + free_irq(gpio_to_irq(ip_data->hpd_gpio), ip_data);
273 + hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_OFF);
274 + return r;
275 + }
276 + 
300 277 return 0;
301 278 }
302 279 
303 280 void ti_hdmi_4xxx_phy_disable(struct hdmi_ip_data *ip_data)
304 281 {
282 + free_irq(gpio_to_irq(ip_data->hpd_gpio), ip_data);
283 + 
305 284 hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_OFF);
285 + ip_data->phy_tx_enabled = false;
306 286 }
307 287 
308 288 static int hdmi_core_ddc_init(struct hdmi_ip_data *ip_data)
+1 -1
drivers/video/omap2/dss/venc.c
··· 401 401 
402 402 DSSDBG("venc_runtime_put\n");
403 403 
404 - r = pm_runtime_put(&venc.pdev->dev);
404 + r = pm_runtime_put_sync(&venc.pdev->dev);
405 405 WARN_ON(r < 0);
406 406 }
407 407 
+2 -2
fs/ceph/caps.c
··· 641 641 unsigned long ttl;
642 642 u32 gen;
643 643 
644 - spin_lock(&cap->session->s_cap_lock);
644 + spin_lock(&cap->session->s_gen_ttl_lock);
645 645 gen = cap->session->s_cap_gen;
646 646 ttl = cap->session->s_cap_ttl;
647 - spin_unlock(&cap->session->s_cap_lock);
647 + spin_unlock(&cap->session->s_gen_ttl_lock);
648 648 
649 649 if (cap->cap_gen < gen || time_after_eq(jiffies, ttl)) {
650 650 dout("__cap_is_valid %p cap %p issued %s "
+2 -2
fs/ceph/dir.c
··· 975 975 di = ceph_dentry(dentry);
976 976 if (di->lease_session) {
977 977 s = di->lease_session;
978 - spin_lock(&s->s_cap_lock);
978 + spin_lock(&s->s_gen_ttl_lock);
979 979 gen = s->s_cap_gen;
980 980 ttl = s->s_cap_ttl;
981 - spin_unlock(&s->s_cap_lock);
981 + spin_unlock(&s->s_gen_ttl_lock);
982 982 
983 983 if (di->lease_gen == gen &&
984 984 time_before(jiffies, dentry->d_time) &&
+7 -3
fs/ceph/mds_client.c
··· 262 262 /* trace */
263 263 ceph_decode_32_safe(&p, end, len, bad);
264 264 if (len > 0) {
265 + ceph_decode_need(&p, end, len, bad);
265 266 err = parse_reply_info_trace(&p, p+len, info, features);
266 267 if (err < 0)
267 268 goto out_bad;
··· 271 270 /* extra */
272 271 ceph_decode_32_safe(&p, end, len, bad);
273 272 if (len > 0) {
273 + ceph_decode_need(&p, end, len, bad);
274 274 err = parse_reply_info_extra(&p, p+len, info, features);
275 275 if (err < 0)
276 276 goto out_bad;
··· 400 398 s->s_con.peer_name.type = CEPH_ENTITY_TYPE_MDS;
401 399 s->s_con.peer_name.num = cpu_to_le64(mds);
402 400 
403 - spin_lock_init(&s->s_cap_lock);
401 + spin_lock_init(&s->s_gen_ttl_lock);
404 402 s->s_cap_gen = 0;
405 403 s->s_cap_ttl = 0;
404 + 
405 + spin_lock_init(&s->s_cap_lock);
406 406 s->s_renew_requested = 0;
407 407 s->s_renew_seq = 0;
408 408 INIT_LIST_HEAD(&s->s_caps);
··· 2330 2326 case CEPH_SESSION_STALE:
2331 2327 pr_info("mds%d caps went stale, renewing\n",
2332 2328 session->s_mds);
2333 - spin_lock(&session->s_cap_lock);
2329 + spin_lock(&session->s_gen_ttl_lock);
2334 2330 session->s_cap_gen++;
2335 2331 session->s_cap_ttl = 0;
2336 - spin_unlock(&session->s_cap_lock);
2332 + spin_unlock(&session->s_gen_ttl_lock);
2337 2333 send_renew_caps(mdsc, session);
2338 2334 break;
2339 2335 
+5 -2
fs/ceph/mds_client.h
··· 117 117 void *s_authorizer_buf, *s_authorizer_reply_buf;
118 118 size_t s_authorizer_buf_len, s_authorizer_reply_buf_len;
119 119 
120 - /* protected by s_cap_lock */
121 - spinlock_t s_cap_lock;
120 + /* protected by s_gen_ttl_lock */
121 + spinlock_t s_gen_ttl_lock;
122 122 u32 s_cap_gen; /* inc each time we get mds stale msg */
123 123 unsigned long s_cap_ttl; /* when session caps expire */
124 + 
125 + /* protected by s_cap_lock */
126 + spinlock_t s_cap_lock;
124 127 struct list_head s_caps; /* all caps issued by this session */
125 128 int s_nr_caps, s_trim_caps;
126 129 int s_num_cap_releases;
+3 -1
fs/ceph/xattr.c
··· 111 111 }
112 112 
113 113 static struct ceph_vxattr_cb ceph_file_vxattrs[] = {
114 + { true, "ceph.file.layout", ceph_vxattrcb_layout},
115 + /* The following extended attribute name is deprecated */
114 116 { true, "ceph.layout", ceph_vxattrcb_layout},
115 - { NULL, NULL }
117 + { true, NULL, NULL }
116 118 };
117 119 
118 120 static struct ceph_vxattr_cb *ceph_inode_vxattrs(struct inode *inode)
+2 -2
fs/cifs/Kconfig
··· 139 139 points. If unsure, say N.
140 140 
141 141 config CIFS_FSCACHE
142 - bool "Provide CIFS client caching support (EXPERIMENTAL)"
142 + bool "Provide CIFS client caching support"
143 143 depends on CIFS=m && FSCACHE || CIFS=y && FSCACHE=y
144 144 help
145 145 Makes CIFS FS-Cache capable. Say Y here if you want your CIFS data
··· 147 147 manager. If unsure, say N.
148 148 
149 149 config CIFS_ACL
150 - bool "Provide CIFS ACL support (EXPERIMENTAL)"
150 + bool "Provide CIFS ACL support"
151 151 depends on CIFS_XATTR && KEYS
152 152 help
153 153 Allows to fetch CIFS/NTFS ACL from the server. The DACL blob
+6 -8
fs/cifs/connect.c
··· 2142 2142 
2143 2143 len = delim - payload;
2144 2144 if (len > MAX_USERNAME_SIZE || len <= 0) {
2145 - cFYI(1, "Bad value from username search (len=%ld)", len);
2145 + cFYI(1, "Bad value from username search (len=%zd)", len);
2146 2146 rc = -EINVAL;
2147 2147 goto out_key_put;
2148 2148 }
2149 2149 
2150 2150 vol->username = kstrndup(payload, len, GFP_KERNEL);
2151 2151 if (!vol->username) {
2152 - cFYI(1, "Unable to allocate %ld bytes for username", len);
2152 + cFYI(1, "Unable to allocate %zd bytes for username", len);
2153 2153 rc = -ENOMEM;
2154 2154 goto out_key_put;
2155 2155 }
··· 2157 2157 
2158 2158 len = key->datalen - (len + 1);
2159 2159 if (len > MAX_PASSWORD_SIZE || len <= 0) {
2160 - cFYI(1, "Bad len for password search (len=%ld)", len);
2160 + cFYI(1, "Bad len for password search (len=%zd)", len);
2161 2161 rc = -EINVAL;
2162 2162 kfree(vol->username);
2163 2163 vol->username = NULL;
··· 2167 2167 ++delim;
2168 2168 vol->password = kstrndup(delim, len, GFP_KERNEL);
2169 2169 if (!vol->password) {
2170 - cFYI(1, "Unable to allocate %ld bytes for password", len);
2170 + cFYI(1, "Unable to allocate %zd bytes for password", len);
2171 2171 rc = -ENOMEM;
2172 2172 kfree(vol->username);
2173 2173 vol->username = NULL;
··· 3857 3857 struct smb_vol *vol_info;
3858 3858 
3859 3859 vol_info = kzalloc(sizeof(*vol_info), GFP_KERNEL);
3860 - if (vol_info == NULL) {
3861 - tcon = ERR_PTR(-ENOMEM);
3862 - goto out;
3863 - }
3860 + if (vol_info == NULL)
3861 + return ERR_PTR(-ENOMEM);
3864 3862 
3865 3863 vol_info->local_nls = cifs_sb->local_nls;
3866 3864 vol_info->linux_uid = fsuid;
+7 -4
fs/cifs/sess.c
··· 246 246 /* copy user */
247 247 /* BB what about null user mounts - check that we do this BB */
248 248 /* copy user */
249 - if (ses->user_name != NULL)
249 + if (ses->user_name != NULL) {
250 250 strncpy(bcc_ptr, ses->user_name, MAX_USERNAME_SIZE);
251 + bcc_ptr += strnlen(ses->user_name, MAX_USERNAME_SIZE);
252 + }
251 253 /* else null user mount */
252 - 
253 - bcc_ptr += strnlen(ses->user_name, MAX_USERNAME_SIZE);
254 254 *bcc_ptr = 0;
255 255 bcc_ptr++; /* account for null termination */
256 256 
257 257 /* copy domain */
258 - 
259 258 if (ses->domainName != NULL) {
260 259 strncpy(bcc_ptr, ses->domainName, 256);
261 260 bcc_ptr += strnlen(ses->domainName, 256);
··· 394 395 ses->ntlmssp->server_flags = le32_to_cpu(pblob->NegotiateFlags);
395 396 tioffset = le32_to_cpu(pblob->TargetInfoArray.BufferOffset);
396 397 tilen = le16_to_cpu(pblob->TargetInfoArray.Length);
398 + if (tioffset > blob_len || tioffset + tilen > blob_len) {
399 + cERROR(1, "tioffset + tilen too high %u + %u", tioffset, tilen);
400 + return -EINVAL;
401 + }
397 402 if (tilen) {
398 403 ses->auth_key.response = kmalloc(tilen, GFP_KERNEL);
399 404 if (!ses->auth_key.response) {
+17 -16
fs/exec.c
··· 1071 1071 perf_event_comm(tsk);
1072 1072 }
1073 1073 
1074 + static void filename_to_taskname(char *tcomm, const char *fn, unsigned int len)
1075 + {
1076 + int i, ch;
1077 + 
1078 + /* Copies the binary name from after last slash */
1079 + for (i = 0; (ch = *(fn++)) != '\0';) {
1080 + if (ch == '/')
1081 + i = 0; /* overwrite what we wrote */
1082 + else
1083 + if (i < len - 1)
1084 + tcomm[i++] = ch;
1085 + }
1086 + tcomm[i] = '\0';
1087 + }
1088 + 
1074 1089 int flush_old_exec(struct linux_binprm * bprm)
1075 1090 {
1076 1091 int retval;
··· 1100 1085 
1101 1086 set_mm_exe_file(bprm->mm, bprm->file);
1102 1087 
1088 + filename_to_taskname(bprm->tcomm, bprm->filename, sizeof(bprm->tcomm));
1103 1089 /*
1104 1090 * Release all of the old mmap stuff
1105 1091 */
··· 1132 1116 
1133 1117 void setup_new_exec(struct linux_binprm * bprm)
1134 1118 {
1135 - int i, ch;
1136 - const char *name;
1137 - char tcomm[sizeof(current->comm)];
1138 - 
1139 1119 arch_pick_mmap_layout(current->mm);
1140 1120 
1141 1121 /* This is the point of no return */
··· 1142 1130 else
1143 1131 set_dumpable(current->mm, suid_dumpable);
1144 1132 
1145 - name = bprm->filename;
1146 - 
1147 - /* Copies the binary name from after last slash */
1148 - for (i=0; (ch = *(name++)) != '\0';) {
1149 - if (ch == '/')
1150 - i = 0; /* overwrite what we wrote */
1151 - else
1152 - if (i < (sizeof(tcomm) - 1))
1153 - tcomm[i++] = ch;
1154 - }
1155 - tcomm[i] = '\0';
1156 - set_task_comm(current, tcomm);
1133 + set_task_comm(current, bprm->tcomm);
1157 1134 
1158 1135 /* Set the new mm task size. We have to do that late because it may
1159 1136 * depend on TIF_32BIT which is only updated in flush_thread() on
+1 -1
fs/jffs2/erase.c
··· 335 335 void *ebuf;
336 336 uint32_t ofs;
337 337 size_t retlen;
338 - int ret = -EIO;
338 + int ret;
339 339 unsigned long *wordebuf;
340 340 
341 341 ret = mtd_point(c->mtd, jeb->offset, c->sector_size, &retlen,
-6
fs/logfs/dev_mtd.c
··· 152 152 filler_t *filler = logfs_mtd_readpage;
153 153 struct mtd_info *mtd = super->s_mtd;
154 154 
155 - if (!mtd_can_have_bb(mtd))
156 - return NULL;
157 - 
158 155 *ofs = 0;
159 156 while (mtd_block_isbad(mtd, *ofs)) {
160 157 *ofs += mtd->erasesize;
··· 168 171 struct address_space *mapping = super->s_mapping_inode->i_mapping;
169 172 filler_t *filler = logfs_mtd_readpage;
170 173 struct mtd_info *mtd = super->s_mtd;
171 - 
172 - if (!mtd_can_have_bb(mtd))
173 - return NULL;
174 174 
175 175 *ofs = mtd->size - mtd->erasesize;
176 176 while (mtd_block_isbad(mtd, *ofs)) {
+2
fs/nilfs2/ioctl.c
··· 603 603 nsegs = argv[4].v_nmembs;
604 604 if (argv[4].v_size != argsz[4])
605 605 goto out;
606 + if (nsegs > UINT_MAX / sizeof(__u64))
607 + goto out;
606 608 
607 609 /*
608 610 * argv[4] points to segment numbers this ioctl cleans. We
+48 -82
fs/proc/base.c
··· 198 198 return result;
199 199 }
200 200 
201 - static struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
202 - {
203 - struct mm_struct *mm;
204 - int err;
205 - 
206 - err = mutex_lock_killable(&task->signal->cred_guard_mutex);
207 - if (err)
208 - return ERR_PTR(err);
209 - 
210 - mm = get_task_mm(task);
211 - if (mm && mm != current->mm &&
212 - !ptrace_may_access(task, mode)) {
213 - mmput(mm);
214 - mm = ERR_PTR(-EACCES);
215 - }
216 - mutex_unlock(&task->signal->cred_guard_mutex);
217 - 
218 - return mm;
219 - }
220 - 
221 201 struct mm_struct *mm_for_maps(struct task_struct *task)
222 202 {
223 203 return mm_access(task, PTRACE_MODE_READ);
··· 691 711 if (IS_ERR(mm))
692 712 return PTR_ERR(mm);
693 713 
714 + if (mm) {
715 + /* ensure this mm_struct can't be freed */
716 + atomic_inc(&mm->mm_count);
717 + /* but do not pin its memory */
718 + mmput(mm);
719 + }
720 + 
694 721 /* OK to pass negative loff_t, we can catch out-of-range */
695 722 file->f_mode |= FMODE_UNSIGNED_OFFSET;
696 723 file->private_data = mm;
··· 705 718 return 0;
706 719 }
707 720 
708 - static ssize_t mem_read(struct file * file, char __user * buf,
709 - size_t count, loff_t *ppos)
721 + static ssize_t mem_rw(struct file *file, char __user *buf,
722 + size_t count, loff_t *ppos, int write)
710 723 {
711 - int ret;
712 - char *page;
713 - unsigned long src = *ppos;
714 724 struct mm_struct *mm = file->private_data;
715 - 
716 - if (!mm)
717 - return 0;
718 - 
719 - page = (char *)__get_free_page(GFP_TEMPORARY);
720 - if (!page)
721 - return -ENOMEM;
722 - 
723 - ret = 0;
724 - 
725 - while (count > 0) {
726 - int this_len, retval;
727 - 
728 - this_len = (count > PAGE_SIZE) ? PAGE_SIZE : count;
729 - retval = access_remote_vm(mm, src, page, this_len, 0);
730 - if (!retval) {
731 - if (!ret)
732 - ret = -EIO;
733 - break;
734 - }
735 - 
736 - if (copy_to_user(buf, page, retval)) {
737 - ret = -EFAULT;
738 - break;
739 - }
740 - 
741 - ret += retval;
742 - src += retval;
743 - buf += retval;
744 - count -= retval;
745 - }
746 - *ppos = src;
747 - 
748 - free_page((unsigned long) page);
749 - return ret;
750 - }
751 - 
752 - static ssize_t mem_write(struct file * file, const char __user *buf,
753 - size_t count, loff_t *ppos)
754 - {
755 - int copied;
725 + unsigned long addr = *ppos;
726 + ssize_t copied;
756 727 char *page;
757 - unsigned long dst = *ppos;
758 - struct mm_struct *mm = file->private_data;
759 728 
760 729 if (!mm)
761 730 return 0;
··· 721 778 return -ENOMEM;
722 779 
723 780 copied = 0;
724 - while (count > 0) {
725 - int this_len, retval;
781 + if (!atomic_inc_not_zero(&mm->mm_users))
782 + goto free;
726 783 
727 - this_len = (count > PAGE_SIZE) ? PAGE_SIZE : count;
728 - if (copy_from_user(page, buf, this_len)) {
784 + while (count > 0) {
785 + int this_len = min_t(int, count, PAGE_SIZE);
786 + 
787 + if (write && copy_from_user(page, buf, this_len)) {
729 788 copied = -EFAULT;
730 789 break;
731 790 }
732 - retval = access_remote_vm(mm, dst, page, this_len, 1);
733 - if (!retval) {
791 + 
792 + this_len = access_remote_vm(mm, addr, page, this_len, write);
793 + if (!this_len) {
734 794 if (!copied)
735 795 copied = -EIO;
736 796 break;
737 797 }
738 - copied += retval;
739 - buf += retval;
740 - dst += retval;
741 - count -= retval;
742 - }
743 - *ppos = dst;
744 798 
799 + if (!write && copy_to_user(buf, page, this_len)) {
800 + copied = -EFAULT;
801 + break;
802 + }
803 + 
804 + buf += this_len;
805 + addr += this_len;
806 + copied += this_len;
807 + count -= this_len;
808 + }
809 + *ppos = addr;
810 + 
811 + mmput(mm);
812 + free:
745 813 free_page((unsigned long) page);
746 814 return copied;
815 + }
816 + 
817 + static ssize_t mem_read(struct file *file, char __user *buf,
818 + size_t count, loff_t *ppos)
819 + {
820 + return mem_rw(file, buf, count, ppos, 0);
821 + }
822 + 
823 + static ssize_t mem_write(struct file *file, const char __user *buf,
824 + size_t count, loff_t *ppos)
825 + {
826 + return mem_rw(file, (char __user*)buf, count, ppos, 1);
747 827 }
748 828 
749 829 loff_t mem_lseek(struct file *file, loff_t offset, int orig)
··· 788 822 static int mem_release(struct inode *inode, struct file *file)
789 823 {
790 824 struct mm_struct *mm = file->private_data;
791 - 
792 - mmput(mm);
825 + if (mm)
826 + mmdrop(mm);
793 827 return 0;
794 828 }
795 829 
+10
include/asm-generic/pci_iomap.h
··· 15 15 #ifdef CONFIG_PCI
16 16 /* Create a virtual mapping cookie for a PCI BAR (memory or IO) */
17 17 extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max);
18 + /* Create a virtual mapping cookie for a port on a given PCI device.
19 + * Do not call this directly, it exists to make it easier for architectures
20 + * to override */
21 + #ifdef CONFIG_NO_GENERIC_PCI_IOPORT_MAP
22 + extern void __iomem *__pci_ioport_map(struct pci_dev *dev, unsigned long port,
23 + unsigned int nr);
24 + #else
25 + #define __pci_ioport_map(dev, port, nr) ioport_map((port), (nr))
26 + #endif
27 + 
18 28 #else
19 29 static inline void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max)
20 30 {
+2 -1
include/linux/binfmts.h
··· 18 18 #define BINPRM_BUF_SIZE 128
19 19 
20 20 #ifdef __KERNEL__
21 - #include <linux/list.h>
21 + #include <linux/sched.h>
22 22 
23 23 #define CORENAME_MAX_SIZE 128
24 24 
··· 58 58 unsigned interp_flags;
59 59 unsigned interp_data;
60 60 unsigned long loader, exec;
61 + char tcomm[TASK_COMM_LEN];
61 62 };
62 63 
63 64 #define BINPRM_FLAGS_ENFORCE_NONDUMP_BIT 0
+2
include/linux/gpio_keys.h
··· 1 1 #ifndef _GPIO_KEYS_H
2 2 #define _GPIO_KEYS_H
3 3 
4 + struct device;
5 + 
4 6 struct gpio_keys_button {
5 7 /* Configuration parameters */
6 8 unsigned int code; /* input event code (KEY_*, SW_*) */
include/linux/lp8727.h
+2
include/linux/mfd/twl6040.h
··· 187 187 int rev;
188 188 u8 vibra_ctrl_cache[2];
189 189 
190 + /* PLL configuration */
190 191 int pll;
191 192 unsigned int sysclk;
193 + unsigned int mclk;
192 194 
193 195 unsigned int irq;
194 196 unsigned int irq_base;
-2
include/linux/mpi.h
··· 57 57 
58 58 typedef struct gcry_mpi *MPI;
59 59 
60 - #define MPI_NULL NULL
61 - 
62 60 #define mpi_get_nlimbs(a) ((a)->nlimbs)
63 61 #define mpi_is_neg(a) ((a)->sign)
64 62 
+2 -4
include/linux/mtd/mtd.h
··· 427 427 
428 428 static inline int mtd_suspend(struct mtd_info *mtd)
429 429 {
430 - if (!mtd->suspend)
431 - return -EOPNOTSUPP;
432 - return mtd->suspend(mtd);
430 + return mtd->suspend ? mtd->suspend(mtd) : 0;
433 431 }
434 432 
435 433 static inline void mtd_resume(struct mtd_info *mtd)
··· 487 489 
488 490 static inline int mtd_can_have_bb(const struct mtd_info *mtd)
489 491 {
490 - return 0;
492 + return !!mtd->block_isbad;
491 493 }
492 494 
493 495 /* Kernel-side ioctl definitions */
+1
include/linux/perf_event.h
··· 587 587 u64 sample_period;
588 588 u64 last_period;
589 589 local64_t period_left;
590 + u64 interrupts_seq;
590 591 u64 interrupts;
591 592 
592 593 u64 freq_time_stamp;
+13 -1
include/linux/pm_qos.h
··· 110 110 { return; }
111 111 
112 112 static inline int pm_qos_request(int pm_qos_class)
113 - { return 0; }
113 + {
114 + switch (pm_qos_class) {
115 + case PM_QOS_CPU_DMA_LATENCY:
116 + return PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
117 + case PM_QOS_NETWORK_LATENCY:
118 + return PM_QOS_NETWORK_LAT_DEFAULT_VALUE;
119 + case PM_QOS_NETWORK_THROUGHPUT:
120 + return PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE;
121 + default:
122 + return PM_QOS_DEFAULT_VALUE;
123 + }
124 + }
125 + 
114 126 static inline int pm_qos_add_notifier(int pm_qos_class,
115 127 struct notifier_block *notifier)
116 128 { return 0; }
+6
include/linux/sched.h
··· 2259 2259 extern void mmput(struct mm_struct *);
2260 2260 /* Grab a reference to a task's mm, if it is not already going away */
2261 2261 extern struct mm_struct *get_task_mm(struct task_struct *task);
2262 + /*
2263 + * Grab a reference to a task's mm, if it is not already going away
2264 + * and ptrace_may_access with the mode parameter passed to it
2265 + * succeeds.
2266 + */
2267 + extern struct mm_struct *mm_access(struct task_struct *task, unsigned int mode);
2262 2268 /* Remove the current tasks stale references to the old mm_struct */
2263 2269 extern void mm_release(struct task_struct *, struct mm_struct *);
2264 2270 /* Allocate a new mm structure and copy contents from tsk->mm */
+1
include/linux/sh_dma.h
··· 70 70 unsigned int needs_tend_set:1;
71 71 unsigned int no_dmars:1;
72 72 unsigned int chclr_present:1;
73 + unsigned int slave_only:1;
73 74 };
74 75 
75 76 /* DMA register */
+2
include/sound/core.h
··· 417 417 #define gameport_get_port_data(gp) (gp)->port_data
418 418 #endif
419 419 
420 + #ifdef CONFIG_PCI
420 421 /* PCI quirk list helper */
421 422 struct snd_pci_quirk {
422 423 unsigned short subvendor; /* PCI subvendor ID */
··· 457 456 const struct snd_pci_quirk *
458 457 snd_pci_quirk_lookup_id(u16 vendor, u16 device,
459 458 const struct snd_pci_quirk *list);
459 + #endif
460 460 
461 461 #endif /* __SOUND_CORE_H */
+2 -2
include/target/target_core_backend.h
··· 59 59 int transport_set_vpd_ident(struct t10_vpd *, unsigned char *);
60 60 
61 61 /* core helpers also used by command snooping in pscsi */
62 - void *transport_kmap_first_data_page(struct se_cmd *);
63 - void transport_kunmap_first_data_page(struct se_cmd *);
62 + void *transport_kmap_data_sg(struct se_cmd *);
63 + void transport_kunmap_data_sg(struct se_cmd *);
64 64 
65 65 #endif /* TARGET_CORE_BACKEND_H */
+1
include/target/target_core_base.h
··· 582 582 
583 583 struct scatterlist *t_data_sg;
584 584 unsigned int t_data_nents;
585 + void *t_data_vmap;
585 586 struct scatterlist *t_bidi_data_sg;
586 587 unsigned int t_bidi_data_nents;
587 588 
+1 -1
include/target/target_core_fabric.h
··· 114 114 struct se_session *, u32, int, int, unsigned char *);
115 115 int transport_lookup_cmd_lun(struct se_cmd *, u32);
116 116 int transport_generic_allocate_tasks(struct se_cmd *, unsigned char *);
117 - int target_submit_cmd(struct se_cmd *, struct se_session *, unsigned char *,
117 + void target_submit_cmd(struct se_cmd *, struct se_session *, unsigned char *,
118 118 unsigned char *, u32, u32, int, int, int);
119 119 int transport_handle_cdb_direct(struct se_cmd *);
120 120 int transport_generic_handle_cdb_map(struct se_cmd *);
+5
include/video/omapdss.h
··· 590 590 int (*get_backlight)(struct omap_dss_device *dssdev);
591 591 };
592 592 
593 + struct omap_dss_hdmi_data
594 + {
595 + int hpd_gpio;
596 + };
597 + 
593 598 struct omap_dss_driver {
594 599 struct device_driver driver;
595 600 
+66 -38
kernel/events/core.c
··· 2300 2300 return div64_u64(dividend, divisor);
2301 2301 }
2302 2302 
2303 + static DEFINE_PER_CPU(int, perf_throttled_count);
2304 + static DEFINE_PER_CPU(u64, perf_throttled_seq);
2305 + 
2303 2306 static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
2304 2307 {
2305 2308 struct hw_perf_event *hwc = &event->hw;
··· 2328 2325 }
2329 2326 }
2330 2327 
2331 - static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
2328 + /*
2329 + * combine freq adjustment with unthrottling to avoid two passes over the
2330 + * events. At the same time, make sure, having freq events does not change
2331 + * the rate of unthrottling as that would introduce bias.
2332 + */
2333 + static void perf_adjust_freq_unthr_context(struct perf_event_context *ctx,
2334 + int needs_unthr)
2332 2335 {
2333 2336 struct perf_event *event;
2334 2337 struct hw_perf_event *hwc;
2335 - u64 interrupts, now;
2338 + u64 now, period = TICK_NSEC;
2336 2339 s64 delta;
2337 2340 
2338 - if (!ctx->nr_freq)
2341 + /*
2342 + * only need to iterate over all events iff:
2343 + * - context have events in frequency mode (needs freq adjust)
2344 + * - there are events to unthrottle on this cpu
2345 + */
2346 + if (!(ctx->nr_freq || needs_unthr))
2339 2347 return;
2348 + 
2349 + raw_spin_lock(&ctx->lock);
2340 2350 
2341 2351 list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
2342 2352 if (event->state != PERF_EVENT_STATE_ACTIVE)
··· 2360 2344 
2361 2345 hwc = &event->hw;
2362 2346 
2363 - interrupts = hwc->interrupts;
2364 - hwc->interrupts = 0;
2365 - 
2366 - /*
2367 - * unthrottle events on the tick
2368 - */
2369 - if (interrupts == MAX_INTERRUPTS) {
2347 + if (needs_unthr && hwc->interrupts == MAX_INTERRUPTS) {
2348 + hwc->interrupts = 0;
2370 2349 perf_log_throttle(event, 1);
2371 2350 event->pmu->start(event, 0);
2372 2351 }
··· 2369 2358 if (!event->attr.freq || !event->attr.sample_freq)
2370 2359 continue;
2371 2360 
2372 - event->pmu->read(event);
2361 + /*
2362 + * stop the event and update event->count
2363 + */
2364 + event->pmu->stop(event, PERF_EF_UPDATE);
2365 + 
2373 2366 now = local64_read(&event->count);
2374 2367 delta = now - hwc->freq_count_stamp;
2375 2368 hwc->freq_count_stamp = now;
2376 2369 
2370 + /*
2371 + * restart the event
2372 + * reload only if value has changed
2373 + */
2377 2374 if (delta > 0)
2378 2375 perf_adjust_period(event, period, delta);
2376 + 
2377 + event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
2379 2378 }
2379 + 
2380 + raw_spin_unlock(&ctx->lock);
2380 2381 }
2381 2382 
2382 2383 /*
··· 2411 2388 */
2412 2389 static void perf_rotate_context(struct perf_cpu_context *cpuctx)
2413 2390 {
2414 - u64 interval = (u64)cpuctx->jiffies_interval * TICK_NSEC;
2415 2391 struct perf_event_context *ctx = NULL;
2416 - int rotate = 0, remove = 1, freq = 0;
2392 + int rotate = 0, remove = 1;
2417 2393 
2418 2394 if (cpuctx->ctx.nr_events) {
2419 2395 remove = 0;
2420 2396 if (cpuctx->ctx.nr_events != cpuctx->ctx.nr_active)
2421 2397 rotate = 1;
2422 - if (cpuctx->ctx.nr_freq)
2423 - freq = 1;
2424 2398 }
2425 2399 
2426 2400 ctx = cpuctx->task_ctx;
··· 2425 2405 remove = 0;
2426 2406 if (ctx->nr_events != ctx->nr_active)
2427 2407 rotate = 1;
2428 - if (ctx->nr_freq)
2429 - freq = 1;
2430 2408 }
2431 2409 
2432 - if (!rotate && !freq)
2410 + if (!rotate)
2433 2411 goto done;
2434 2412 
2435 2413 perf_ctx_lock(cpuctx, cpuctx->task_ctx);
2436 2414 perf_pmu_disable(cpuctx->ctx.pmu);
2437 2415 
2438 - if (freq) {
2439 - perf_ctx_adjust_freq(&cpuctx->ctx, interval);
2440 - if (ctx)
2441 - perf_ctx_adjust_freq(ctx, interval);
2442 - }
2416 + cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
2417 + if (ctx)
2418 + ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
2443 2419 
2444 - if (rotate) {
2445 - cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
2446 - if (ctx)
2447 - ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
2420 + rotate_ctx(&cpuctx->ctx);
2421 + if (ctx)
2422 + rotate_ctx(ctx);
2448 2423 
2449 - rotate_ctx(&cpuctx->ctx);
2450 - if (ctx)
2451 - rotate_ctx(ctx);
2452 - 
2453 - perf_event_sched_in(cpuctx, ctx, current);
2454 - }
2424 + perf_event_sched_in(cpuctx, ctx, current);
2455 2425 
2456 2426 perf_pmu_enable(cpuctx->ctx.pmu);
2457 2427 perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
2458 - 
2459 2428 done:
2460 2429 if (remove)
2461 2430 list_del_init(&cpuctx->rotation_list);
··· 2454 2445 {
2455 2446 struct list_head *head = &__get_cpu_var(rotation_list);
2456 2447 struct perf_cpu_context *cpuctx, *tmp;
2448 + struct perf_event_context *ctx;
2449 + int throttled;
2457 2450 
2458 2451 WARN_ON(!irqs_disabled());
2459 2452 
2453 + __this_cpu_inc(perf_throttled_seq);
2454 + throttled = __this_cpu_xchg(perf_throttled_count, 0);
2455 + 
2460 2456 list_for_each_entry_safe(cpuctx, tmp, head, rotation_list) {
2457 + ctx = &cpuctx->ctx;
2458 + perf_adjust_freq_unthr_context(ctx, throttled);
2459 + 
2460 + ctx = cpuctx->task_ctx;
2461 + if (ctx)
2462 + perf_adjust_freq_unthr_context(ctx, throttled);
2463 + 
2461 2464 if (cpuctx->jiffies_interval == 1 ||
2462 2465 !(jiffies % cpuctx->jiffies_interval))
2463 2466 perf_rotate_context(cpuctx);
··· 4530 4509 {
4531 4510 int events = atomic_read(&event->event_limit);
4532 4511 struct hw_perf_event *hwc = &event->hw;
4512 + u64 seq;
4533 4513 int ret = 0;
4534 4514 
4535 4515 /*
··· 4540 4518 if (unlikely(!is_sampling_event(event)))
4541 4519 return 0;
4542 4520 
4543 - if (unlikely(hwc->interrupts >= max_samples_per_tick)) {
4544 - if (throttle) {
4521 + seq = __this_cpu_read(perf_throttled_seq);
4522 + if (seq != hwc->interrupts_seq) {
4523 + hwc->interrupts_seq = seq;
4524 + hwc->interrupts = 1;
4525 + } else {
4526 + hwc->interrupts++;
4527 + if (unlikely(throttle
4528 + && hwc->interrupts >= max_samples_per_tick)) {
4529 + __this_cpu_inc(perf_throttled_count);
4545 4530 hwc->interrupts = MAX_INTERRUPTS;
4546 4531 perf_log_throttle(event, 0);
4547 4532 ret = 1;
4548 4533 }
4549 - } else
4550 - hwc->interrupts++;
4551 4535 
4552 4536 if (event->attr.freq) { 4553 4537 u64 now = perf_clock();
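The throttling rework in the kernel/events/core.c hunk stops zeroing every event's interrupt counter on each tick; instead the tick bumps one per-CPU sequence number and each event lazily restarts its count when it notices the sequence moved on. A minimal single-CPU sketch of that lazy-reset idea (names and the sample limit here are illustrative, not the kernel's):

```c
#include <assert.h>

#define MAX_SAMPLES_PER_TICK 4	/* illustrative limit, not the kernel's */

/* The kernel keeps one of each per CPU; a single pair suffices here. */
unsigned long throttled_seq;
int throttled_count;

struct hw_counter {
	unsigned long seq;	/* tick sequence this counter belongs to */
	int interrupts;		/* samples taken during that tick */
};

/* The tick no longer walks every event to zero its counter: it just
 * advances the sequence and collects how many events got throttled. */
void throttle_tick(void)
{
	throttled_seq++;
	throttled_count = 0;
}

/* Overflow path: lazily restart the count when the sequence changed.
 * Returns 1 when the event must be throttled. */
int event_overflow(struct hw_counter *hwc)
{
	if (hwc->seq != throttled_seq) {
		hwc->seq = throttled_seq;
		hwc->interrupts = 1;
		return 0;
	}
	if (++hwc->interrupts >= MAX_SAMPLES_PER_TICK) {
		throttled_count++;	/* tells the next tick to unthrottle */
		return 1;
	}
	return 0;
}
```

The point of the sequence is that an idle event pays nothing: its stale counter is simply reinterpreted as zero the next time it fires.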
+16
kernel/exit.c
··· 1038 1038 if (tsk->nr_dirtied) 1039 1039 __this_cpu_add(dirty_throttle_leaks, tsk->nr_dirtied); 1040 1040 exit_rcu(); 1041 + 1042 + /* 1043 + * The setting of TASK_RUNNING by try_to_wake_up() may be delayed 1044 + * when the following two conditions become true. 1045 + * - There is a race condition on mmap_sem (it is acquired by 1046 + * exit_mm()), and 1047 + * - an SMI occurs before setting TASK_RUNNING 1048 + * (or the hypervisor of a virtual machine switches to another guest). 1049 + * As a result, we may become TASK_RUNNING after becoming TASK_DEAD. 1050 + * 1051 + * To avoid this, we have to wait for tsk->pi_lock, which is held 1052 + * by try_to_wake_up(), to be released. 1053 + */ 1054 + smp_mb(); 1055 + raw_spin_unlock_wait(&tsk->pi_lock); 1056 + 1041 1057 /* causes final put_task_struct in finish_task_switch(). */ 1042 1058 tsk->state = TASK_DEAD; 1043 1059 tsk->flags |= PF_NOFREEZE; /* tell freezer to ignore us */

+20
kernel/fork.c
··· 647 647 } 648 648 EXPORT_SYMBOL_GPL(get_task_mm); 649 649 650 + struct mm_struct *mm_access(struct task_struct *task, unsigned int mode) 651 + { 652 + struct mm_struct *mm; 653 + int err; 654 + 655 + err = mutex_lock_killable(&task->signal->cred_guard_mutex); 656 + if (err) 657 + return ERR_PTR(err); 658 + 659 + mm = get_task_mm(task); 660 + if (mm && mm != current->mm && 661 + !ptrace_may_access(task, mode)) { 662 + mmput(mm); 663 + mm = ERR_PTR(-EACCES); 664 + } 665 + mutex_unlock(&task->signal->cred_guard_mutex); 666 + 667 + return mm; 668 + } 669 + 650 670 /* Please note the differences between mmput and mm_release. 651 671 * mmput is called whenever we stop holding onto a mm_struct, 652 672 * error success whatever.
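The new mm_access() folds "task has no mm" and "caller lacks permission" into a single pointer return via the kernel's ERR_PTR convention, in which the top 4095 pointer values encode a negative errno. A minimal userspace rendering of that encoding (the kernel's real definitions live in include/linux/err.h; this is just an illustration):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095	/* errno values occupy the top page of the address space */

/* Encode a negative errno as a pointer value. */
void *ERR_PTR(long error)
{
	return (void *)error;
}

/* Decode it back. */
long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

/* A pointer in the top MAX_ERRNO values is an encoded error, not an
 * address -- no real mapping lives there. */
int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

process_vm_access.c in this same merge consumes mm_access() with exactly this check-then-decode pattern: `IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH`.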
+5 -1
kernel/kprobes.c
··· 1673 1673 ri->rp = rp; 1674 1674 ri->task = current; 1675 1675 1676 - if (rp->entry_handler && rp->entry_handler(ri, regs)) 1676 + if (rp->entry_handler && rp->entry_handler(ri, regs)) { 1677 + raw_spin_lock_irqsave(&rp->lock, flags); 1678 + hlist_add_head(&ri->hlist, &rp->free_instances); 1679 + raw_spin_unlock_irqrestore(&rp->lock, flags); 1677 1680 return 0; 1681 + } 1678 1682 1679 1683 arch_prepare_kretprobe(ri, regs); 1680 1684
+22 -2
kernel/power/power.h
··· 231 231 #ifdef CONFIG_SUSPEND_FREEZER 232 232 static inline int suspend_freeze_processes(void) 233 233 { 234 - int error = freeze_processes(); 235 - return error ? : freeze_kernel_threads(); 234 + int error; 235 + 236 + error = freeze_processes(); 237 + 238 + /* 239 + * freeze_processes() automatically thaws every task if freezing 240 + * fails. So we need not do anything extra upon error. 241 + */ 242 + if (error) 243 + goto Finish; 244 + 245 + error = freeze_kernel_threads(); 246 + 247 + /* 248 + * freeze_kernel_threads() thaws only kernel threads upon freezing 249 + * failure. So we have to thaw the userspace tasks ourselves. 250 + */ 251 + if (error) 252 + thaw_processes(); 253 + 254 + Finish: 255 + return error; 236 256 } 237 257 238 258 static inline void suspend_thaw_processes(void)
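The comments added to suspend_freeze_processes() document an asymmetric cleanup contract: phase one (freeze_processes) undoes itself completely on failure, while phase two (freeze_kernel_threads) undoes only its own half, leaving userspace for the caller to thaw. A sketch of that two-phase contract with stub phases (the stubs and the failure knob are illustrative, not the kernel implementation):

```c
#include <assert.h>

int user_frozen, kthreads_frozen;
int fail_kthreads;	/* test knob: force phase two to fail */

/* Phase one: thaws everything itself on failure. */
int freeze_processes(void)
{
	user_frozen = 1;
	return 0;
}

/* Phase two: on failure it thaws only the kernel threads it froze. */
int freeze_kernel_threads(void)
{
	if (fail_kthreads)
		return -1;
	kthreads_frozen = 1;
	return 0;
}

void thaw_processes(void)
{
	user_frozen = 0;
	kthreads_frozen = 0;
}

int suspend_freeze_processes(void)
{
	int error = freeze_processes();

	/* phase one cleaned up after itself: nothing extra to do */
	if (error)
		return error;

	error = freeze_kernel_threads();
	/* phase two thawed only kernel threads: thaw userspace here */
	if (error)
		thaw_processes();
	return error;
}
```

Keeping the cleanup asymmetric is deliberate: callers like hibernation want a window where userspace stays frozen while they retry or roll back kernel-thread state.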
+5 -2
kernel/power/process.c
··· 143 143 /** 144 144 * freeze_kernel_threads - Make freezable kernel threads go to the refrigerator. 145 145 * 146 - * On success, returns 0. On failure, -errno and system is fully thawed. 146 + * On success, returns 0. On failure, -errno and only the kernel threads are 147 + * thawed, so as to give a chance to the caller to do additional cleanups 148 + * (if any) before thawing the userspace tasks. So, it is the responsibility 149 + * of the caller to thaw the userspace tasks, when the time is right. 147 150 */ 148 151 int freeze_kernel_threads(void) 149 152 { ··· 162 159 BUG_ON(in_atomic()); 163 160 164 161 if (error) 165 - thaw_processes(); 162 + thaw_kernel_threads(); 166 163 return error; 167 164 } 168 165
+4 -2
kernel/power/user.c
··· 249 249 } 250 250 pm_restore_gfp_mask(); 251 251 error = hibernation_snapshot(data->platform_support); 252 - if (!error) { 252 + if (error) { 253 + thaw_kernel_threads(); 254 + } else { 253 255 error = put_user(in_suspend, (int __user *)arg); 254 256 if (!error && !freezer_test_done) 255 257 data->ready = 1; 256 258 if (freezer_test_done) { 257 259 freezer_test_done = false; 258 - thaw_processes(); 260 + thaw_kernel_threads(); 259 261 } 260 262 } 261 263 break;
+7 -12
kernel/sched/core.c
··· 74 74 75 75 #include <asm/tlb.h> 76 76 #include <asm/irq_regs.h> 77 + #include <asm/mutex.h> 77 78 #ifdef CONFIG_PARAVIRT 78 79 #include <asm/paravirt.h> 79 80 #endif ··· 724 723 p->sched_class->dequeue_task(rq, p, flags); 725 724 } 726 725 727 - /* 728 - * activate_task - move a task to the runqueue. 729 - */ 730 726 void activate_task(struct rq *rq, struct task_struct *p, int flags) 731 727 { 732 728 if (task_contributes_to_load(p)) ··· 732 734 enqueue_task(rq, p, flags); 733 735 } 734 736 735 - /* 736 - * deactivate_task - remove a task from the runqueue. 737 - */ 738 737 void deactivate_task(struct rq *rq, struct task_struct *p, int flags) 739 738 { 740 739 if (task_contributes_to_load(p)) ··· 4129 4134 on_rq = p->on_rq; 4130 4135 running = task_current(rq, p); 4131 4136 if (on_rq) 4132 - deactivate_task(rq, p, 0); 4137 + dequeue_task(rq, p, 0); 4133 4138 if (running) 4134 4139 p->sched_class->put_prev_task(rq, p); 4135 4140 ··· 4142 4147 if (running) 4143 4148 p->sched_class->set_curr_task(rq); 4144 4149 if (on_rq) 4145 - activate_task(rq, p, 0); 4150 + enqueue_task(rq, p, 0); 4146 4151 4147 4152 check_class_changed(rq, p, prev_class, oldprio); 4148 4153 task_rq_unlock(rq, p, &flags); ··· 4993 4998 * placed properly. 4994 4999 */ 4995 5000 if (p->on_rq) { 4996 - deactivate_task(rq_src, p, 0); 5001 + dequeue_task(rq_src, p, 0); 4997 5002 set_task_cpu(p, dest_cpu); 4998 - activate_task(rq_dest, p, 0); 5003 + enqueue_task(rq_dest, p, 0); 4999 5004 check_preempt_curr(rq_dest, p, 0); 5000 5005 } 5001 5006 done: ··· 7027 7032 7028 7033 on_rq = p->on_rq; 7029 7034 if (on_rq) 7030 - deactivate_task(rq, p, 0); 7035 + dequeue_task(rq, p, 0); 7031 7036 __setscheduler(rq, p, SCHED_NORMAL, 0); 7032 7037 if (on_rq) { 7033 - activate_task(rq, p, 0); 7038 + enqueue_task(rq, p, 0); 7034 7039 resched_task(rq->curr); 7035 7040 } 7036 7041
+29 -5
kernel/sched/fair.c
··· 4866 4866 return; 4867 4867 } 4868 4868 4869 + static inline void clear_nohz_tick_stopped(int cpu) 4870 + { 4871 + if (unlikely(test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))) { 4872 + cpumask_clear_cpu(cpu, nohz.idle_cpus_mask); 4873 + atomic_dec(&nohz.nr_cpus); 4874 + clear_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)); 4875 + } 4876 + } 4877 + 4869 4878 static inline void set_cpu_sd_state_busy(void) 4870 4879 { 4871 4880 struct sched_domain *sd; ··· 4913 4904 { 4914 4905 int cpu = smp_processor_id(); 4915 4906 4907 + /* 4908 + * If this cpu is going down, then nothing needs to be done. 4909 + */ 4910 + if (!cpu_active(cpu)) 4911 + return; 4912 + 4916 4913 if (stop_tick) { 4917 4914 if (test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu))) 4918 4915 return; ··· 4928 4913 set_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)); 4929 4914 } 4930 4915 return; 4916 + } 4917 + 4918 + static int __cpuinit sched_ilb_notifier(struct notifier_block *nfb, 4919 + unsigned long action, void *hcpu) 4920 + { 4921 + switch (action & ~CPU_TASKS_FROZEN) { 4922 + case CPU_DYING: 4923 + clear_nohz_tick_stopped(smp_processor_id()); 4924 + return NOTIFY_OK; 4925 + default: 4926 + return NOTIFY_DONE; 4927 + } 4931 4928 } 4932 4929 #endif 4933 4930 ··· 5097 5070 * busy tick after returning from idle, we will update the busy stats. 5098 5071 */ 5099 5072 set_cpu_sd_state_busy(); 5100 - if (unlikely(test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))) { 5101 - clear_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)); 5102 - cpumask_clear_cpu(cpu, nohz.idle_cpus_mask); 5103 - atomic_dec(&nohz.nr_cpus); 5104 - } 5073 + clear_nohz_tick_stopped(cpu); 5105 5074 5106 5075 /* 5107 5076 * None are in tickless mode and hence no need for NOHZ idle load ··· 5613 5590 5614 5591 #ifdef CONFIG_NO_HZ 5615 5592 zalloc_cpumask_var(&nohz.idle_cpus_mask, GFP_NOWAIT); 5593 + cpu_notifier(sched_ilb_notifier, 0); 5616 5594 #endif 5617 5595 #endif /* SMP */ 5618 5596
+5
kernel/sched/rt.c
··· 1587 1587 if (!next_task) 1588 1588 return 0; 1589 1589 1590 + #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW 1591 + if (unlikely(task_running(rq, next_task))) 1592 + return 0; 1593 + #endif 1594 + 1590 1595 retry: 1591 1596 if (unlikely(next_task == rq->curr)) { 1592 1597 WARN_ON(1);
+1 -1
kernel/watchdog.c
··· 296 296 if (__this_cpu_read(soft_watchdog_warn) == true) 297 297 return HRTIMER_RESTART; 298 298 299 - printk(KERN_ERR "BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n", 299 + printk(KERN_EMERG "BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n", 300 300 smp_processor_id(), duration, 301 301 current->comm, task_pid_nr(current)); 302 302 print_modules();
+7
lib/Kconfig
··· 19 19 config GENERIC_FIND_FIRST_BIT 20 20 bool 21 21 22 + config NO_GENERIC_PCI_IOPORT_MAP 23 + bool 24 + 22 25 config GENERIC_PCI_IOMAP 23 26 bool 24 27 ··· 282 279 283 280 If unsure, say N. 284 281 282 + config CLZ_TAB 283 + bool 284 + 285 285 config CORDIC 286 286 tristate "CORDIC algorithm" 287 287 help ··· 293 287 294 288 config MPILIB 295 289 tristate 290 + select CLZ_TAB 296 291 help 297 292 Multiprecision maths library from GnuPG. 298 293 It is used to implement RSA digital signature verification,
+2
lib/Makefile
··· 121 121 obj-$(CONFIG_MPILIB) += mpi/ 122 122 obj-$(CONFIG_SIGNATURE) += digsig.o 123 123 124 + obj-$(CONFIG_CLZ_TAB) += clz_tab.o 125 + 124 126 hostprogs-y := gen_crc32table 125 127 clean-files := crc32table.h 126 128
+1 -1
lib/bug.c
··· 169 169 return BUG_TRAP_TYPE_WARN; 170 170 } 171 171 172 - printk(KERN_EMERG "------------[ cut here ]------------\n"); 172 + printk(KERN_DEFAULT "------------[ cut here ]------------\n"); 173 173 174 174 if (file) 175 175 printk(KERN_CRIT "kernel BUG at %s:%u!\n",
+18
lib/clz_tab.c
··· 1 + const unsigned char __clz_tab[] = { 2 + 0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 3 + 5, 5, 5, 5, 5, 5, 5, 5, 4 + 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5 + 6, 6, 6, 6, 6, 6, 6, 6, 6 + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7 + 7, 7, 7, 7, 7, 7, 7, 7, 8 + 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 9 + 7, 7, 7, 7, 7, 7, 7, 7, 10 + 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 11 + 8, 8, 8, 8, 8, 8, 8, 8, 12 + 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 13 + 8, 8, 8, 8, 8, 8, 8, 8, 14 + 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 15 + 8, 8, 8, 8, 8, 8, 8, 8, 16 + 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 17 + 8, 8, 8, 8, 8, 8, 8, 8, 18 + };
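__clz_tab maps a byte to the number of significant bits it contains (0 for 0, up to 8 for 128..255); the C fallback of count_leading_zeros() in longlong.h consumes it by first locating the highest non-zero byte. A self-contained sketch of that lookup scheme (function names here are illustrative):

```c
#include <assert.h>

/* bits_tab[b] = significant bits in byte b -- same contents as __clz_tab. */
unsigned char bits_tab[256];

void build_bits_tab(void)
{
	int i, v, b;

	for (i = 1; i < 256; i++) {
		for (v = i, b = 0; v; v >>= 1)
			b++;
		bits_tab[i] = b;
	}
}

/* Count leading zeros of a 32-bit value: find the top non-zero byte,
 * then look up its bit length instead of looping bit by bit. */
int clz32(unsigned int x)
{
	int shift;

	if (!x)
		return 32;
	if (x >> 24)
		shift = 24;
	else if (x >> 16)
		shift = 16;
	else if (x >> 8)
		shift = 8;
	else
		shift = 0;
	return 32 - (shift + bits_tab[x >> shift]);
}
```

Moving the table into its own object (lib/clz_tab.c, selected by CONFIG_CLZ_TAB) lets mpi-bit.c and any other user share one copy instead of each defining it.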
+23 -29
lib/digsig.c
··· 34 34 unsigned long msglen, 35 35 unsigned long modulus_bitlen, 36 36 unsigned char *out, 37 - unsigned long *outlen, 38 - int *is_valid) 37 + unsigned long *outlen) 39 38 { 40 39 unsigned long modulus_len, ps_len, i; 41 - int result; 42 - 43 - /* default to invalid packet */ 44 - *is_valid = 0; 45 40 46 41 modulus_len = (modulus_bitlen >> 3) + (modulus_bitlen & 7 ? 1 : 0); 47 42 ··· 45 50 return -EINVAL; 46 51 47 52 /* separate encoded message */ 48 - if ((msg[0] != 0x00) || (msg[1] != (unsigned char)1)) { 49 - result = -EINVAL; 50 - goto bail; 51 - } 53 + if ((msg[0] != 0x00) || (msg[1] != (unsigned char)1)) 54 + return -EINVAL; 52 55 53 56 for (i = 2; i < modulus_len - 1; i++) 54 57 if (msg[i] != 0xFF) 55 58 break; 56 59 57 60 /* separator check */ 58 - if (msg[i] != 0) { 61 + if (msg[i] != 0) 59 62 /* There was no octet with hexadecimal value 0x00 60 63 to separate ps from m. */ 61 - result = -EINVAL; 62 - goto bail; 63 - } 64 + return -EINVAL; 64 65 65 66 ps_len = i - 2; 66 67 67 68 if (*outlen < (msglen - (2 + ps_len + 1))) { 68 69 *outlen = msglen - (2 + ps_len + 1); 69 - result = -EOVERFLOW; 70 - goto bail; 70 + return -EOVERFLOW; 71 71 } 72 72 73 73 *outlen = (msglen - (2 + ps_len + 1)); 74 74 memcpy(out, &msg[2 + ps_len + 1], *outlen); 75 75 76 - /* valid packet */ 77 - *is_valid = 1; 78 - result = 0; 79 - bail: 80 - return result; 76 + return 0; 81 77 } 82 78 83 79 /* ··· 82 96 unsigned long len; 83 97 unsigned long mlen, mblen; 84 98 unsigned nret, l; 85 - int valid, head, i; 99 + int head, i; 86 100 unsigned char *out1 = NULL, *out2 = NULL; 87 101 MPI in = NULL, res = NULL, pkey[2]; 88 102 uint8_t *p, *datap, *endp; ··· 91 105 92 106 down_read(&key->sem); 93 107 ukp = key->payload.data; 108 + 109 + if (ukp->datalen < sizeof(*pkh)) 110 + goto err1; 111 + 94 112 pkh = (struct pubkey_hdr *)ukp->data; 95 113 96 114 if (pkh->version != 1) ··· 107 117 goto err1; 108 118 109 119 datap = pkh->mpi; 110 - endp = datap + ukp->datalen; 120 + endp = ukp->data + 
ukp->datalen; 121 + 122 + err = -ENOMEM; 111 123 112 124 for (i = 0; i < pkh->nmpi; i++) { 113 125 unsigned int remaining = endp - datap; 114 126 pkey[i] = mpi_read_from_buffer(datap, &remaining); 127 + if (!pkey[i]) 128 + goto err; 115 129 datap += remaining; 116 130 } 117 131 118 132 mblen = mpi_get_nbits(pkey[0]); 119 133 mlen = (mblen + 7)/8; 120 134 121 - err = -ENOMEM; 135 + if (mlen == 0) 136 + goto err; 122 137 123 138 out1 = kzalloc(mlen, GFP_KERNEL); 124 139 if (!out1) ··· 162 167 memset(out1, 0, head); 163 168 memcpy(out1 + head, p, l); 164 169 165 - err = -EINVAL; 166 - pkcs_1_v1_5_decode_emsa(out1, len, mblen, out2, &len, &valid); 170 + err = pkcs_1_v1_5_decode_emsa(out1, len, mblen, out2, &len); 167 171 168 - if (valid && len == hlen) 172 + if (!err && len == hlen) 169 173 err = memcmp(out2, h, hlen); 170 174 171 175 err: ··· 172 178 mpi_free(res); 173 179 kfree(out1); 174 180 kfree(out2); 175 - mpi_free(pkey[0]); 176 - mpi_free(pkey[1]); 181 + while (--i >= 0) 182 + mpi_free(pkey[i]); 177 183 err1: 178 184 up_read(&key->sem); 179 185
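The reworked pkcs_1_v1_5_decode_emsa() reports validity through its return code instead of a separate is_valid out-parameter. The framing it strips is the fixed EMSA-PKCS1-v1_5 block `0x00 0x01 FF..FF 0x00 <payload>`; a compact sketch of the same parse (illustrative, without the kernel's modulus-length bookkeeping):

```c
#include <assert.h>

/* Returns the payload length and points *payload at it, or -1 when the
 * framing is malformed. */
int decode_emsa(const unsigned char *msg, int msglen,
		const unsigned char **payload)
{
	int i;

	/* header: leading 0x00 octet, then block type 0x01 */
	if (msglen < 11 || msg[0] != 0x00 || msg[1] != 0x01)
		return -1;

	/* skip the 0xFF padding string PS */
	for (i = 2; i < msglen - 1; i++)
		if (msg[i] != 0xFF)
			break;

	/* a 0x00 octet must separate PS from the payload */
	if (msg[i] != 0x00)
		return -1;

	*payload = &msg[i + 1];
	return msglen - (i + 1);
}
```

Collapsing "invalid padding" into a plain error return is what lets the digsig caller above reduce to `if (!err && len == hlen)`.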
+33 -11
lib/mpi/longlong.h
··· 1200 1200 "r" ((USItype)(v)) \ 1201 1201 : "%g1", "%g2" __AND_CLOBBER_CC) 1202 1202 #define UMUL_TIME 39 /* 39 instructions */ 1203 - #endif 1204 - #ifndef udiv_qrnnd 1205 - #ifndef LONGLONG_STANDALONE 1203 + /* It's quite necessary to add this much assembler for the sparc. 1204 + The default udiv_qrnnd (in C) is more than 10 times slower! */ 1206 1205 #define udiv_qrnnd(q, r, n1, n0, d) \ 1207 - do { USItype __r; \ 1208 - (q) = __udiv_qrnnd(&__r, (n1), (n0), (d)); \ 1209 - (r) = __r; \ 1210 - } while (0) 1211 - extern USItype __udiv_qrnnd(); 1212 - #define UDIV_TIME 140 1213 - #endif /* LONGLONG_STANDALONE */ 1214 - #endif /* udiv_qrnnd */ 1206 + __asm__ ("! Inlined udiv_qrnnd\n\t" \ 1207 + "mov 32,%%g1\n\t" \ 1208 + "subcc %1,%2,%%g0\n\t" \ 1209 + "1: bcs 5f\n\t" \ 1210 + "addxcc %0,%0,%0 ! shift n1n0 and a q-bit in lsb\n\t" \ 1211 + "sub %1,%2,%1 ! this kills msb of n\n\t" \ 1212 + "addx %1,%1,%1 ! so this can't give carry\n\t" \ 1213 + "subcc %%g1,1,%%g1\n\t" \ 1214 + "2: bne 1b\n\t" \ 1215 + "subcc %1,%2,%%g0\n\t" \ 1216 + "bcs 3f\n\t" \ 1217 + "addxcc %0,%0,%0 ! shift n1n0 and a q-bit in lsb\n\t" \ 1218 + "b 3f\n\t" \ 1219 + "sub %1,%2,%1 ! this kills msb of n\n\t" \ 1220 + "4: sub %1,%2,%1\n\t" \ 1221 + "5: addxcc %1,%1,%1\n\t" \ 1222 + "bcc 2b\n\t" \ 1223 + "subcc %%g1,1,%%g1\n\t" \ 1224 + "! Got carry from n. Subtract next step to cancel this carry.\n\t" \ 1225 + "bne 4b\n\t" \ 1226 + "addcc %0,%0,%0 ! shift n1n0 and a 0-bit in lsb\n\t" \ 1227 + "sub %1,%2,%1\n\t" \ 1228 + "3: xnor %0,0,%0\n\t" \ 1229 + "! End of inline udiv_qrnnd\n" \ 1230 + : "=&r" ((USItype)(q)), \ 1231 + "=&r" ((USItype)(r)) \ 1232 + : "r" ((USItype)(d)), \ 1233 + "1" ((USItype)(n1)), \ 1234 + "0" ((USItype)(n0)) : "%g1", "cc") 1235 + #define UDIV_TIME (3+7*32) /* 7 instructions/iteration. 32 iterations. */ 1236 + #endif 1215 1237 #endif /* __sparc__ */ 1216 1238 1217 1239 /***************************************
-19
lib/mpi/mpi-bit.c
··· 21 21 #include "mpi-internal.h" 22 22 #include "longlong.h" 23 23 24 - const unsigned char __clz_tab[] = { 25 - 0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 26 - 5, 5, 5, 5, 5, 5, 5, 5, 27 - 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 28 - 6, 6, 6, 6, 6, 6, 6, 6, 29 - 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 30 - 7, 7, 7, 7, 7, 7, 7, 7, 31 - 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 32 - 7, 7, 7, 7, 7, 7, 7, 7, 33 - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 34 - 8, 8, 8, 8, 8, 8, 8, 8, 35 - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 36 - 8, 8, 8, 8, 8, 8, 8, 8, 37 - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 38 - 8, 8, 8, 8, 8, 8, 8, 8, 39 - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 40 - 8, 8, 8, 8, 8, 8, 8, 8, 41 - }; 42 - 43 24 #define A_LIMB_1 ((mpi_limb_t) 1) 44 25 45 26 /****************
+5
lib/mpi/mpi-div.c
··· 149 149 mpi_ptr_t marker[5]; 150 150 int markidx = 0; 151 151 152 + if (!dsize) 153 + return -EINVAL; 154 + 152 155 memset(marker, 0, sizeof(marker)); 153 156 154 157 /* Ensure space is enough for quotient and remainder. ··· 210 207 * numerator would be gradually overwritten by the quotient limbs. */ 211 208 if (qp == np) { /* Copy NP object to temporary space. */ 212 209 np = marker[markidx++] = mpi_alloc_limb_space(nsize); 210 + if (!np) 211 + goto nomem; 213 212 MPN_COPY(np, qp, nsize); 214 213 } 215 214 } else /* Put quotient at top of remainder. */
+1 -1
lib/mpi/mpi-pow.c
··· 59 59 ep = exp->d; 60 60 61 61 if (!msize) 62 - msize = 1 / msize; /* provoke a signal */ 62 + return -EINVAL; 63 63 64 64 if (!esize) { 65 65 /* Exponent is zero, result is 1 mod MOD, i.e., 1 or 0
+2 -89
lib/mpi/mpicoder.c
··· 20 20 21 21 #include "mpi-internal.h" 22 22 23 - #define DIM(v) (sizeof(v)/sizeof((v)[0])) 24 23 #define MAX_EXTERN_MPI_BITS 16384 25 - 26 - static uint8_t asn[15] = /* Object ID is 1.3.14.3.2.26 */ 27 - { 0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e, 0x03, 28 - 0x02, 0x1a, 0x05, 0x00, 0x04, 0x14 29 - }; 30 - 31 - MPI do_encode_md(const void *sha_buffer, unsigned nbits) 32 - { 33 - int nframe = (nbits + 7) / 8; 34 - uint8_t *frame, *fr_pt; 35 - int i = 0, n; 36 - size_t asnlen = DIM(asn); 37 - MPI a = MPI_NULL; 38 - 39 - if (SHA1_DIGEST_LENGTH + asnlen + 4 > nframe) 40 - pr_info("MPI: can't encode a %d bit MD into a %d bits frame\n", 41 - (int)(SHA1_DIGEST_LENGTH * 8), (int)nbits); 42 - 43 - /* We encode the MD in this way: 44 - * 45 - * 0 A PAD(n bytes) 0 ASN(asnlen bytes) MD(len bytes) 46 - * 47 - * PAD consists of FF bytes. 48 - */ 49 - frame = kmalloc(nframe, GFP_KERNEL); 50 - if (!frame) 51 - return MPI_NULL; 52 - n = 0; 53 - frame[n++] = 0; 54 - frame[n++] = 1; /* block type */ 55 - i = nframe - SHA1_DIGEST_LENGTH - asnlen - 3; 56 - 57 - if (i <= 1) { 58 - pr_info("MPI: message digest encoding failed\n"); 59 - kfree(frame); 60 - return a; 61 - } 62 - 63 - memset(frame + n, 0xff, i); 64 - n += i; 65 - frame[n++] = 0; 66 - memcpy(frame + n, &asn, asnlen); 67 - n += asnlen; 68 - memcpy(frame + n, sha_buffer, SHA1_DIGEST_LENGTH); 69 - n += SHA1_DIGEST_LENGTH; 70 - 71 - i = nframe; 72 - fr_pt = frame; 73 - 74 - if (n != nframe) { 75 - printk 76 - ("MPI: message digest encoding failed, frame length is wrong\n"); 77 - kfree(frame); 78 - return a; 79 - } 80 - 81 - a = mpi_alloc((nframe + BYTES_PER_MPI_LIMB - 1) / BYTES_PER_MPI_LIMB); 82 - mpi_set_buffer(a, frame, nframe, 0); 83 - kfree(frame); 84 - 85 - return a; 86 - } 87 24 88 25 MPI mpi_read_from_buffer(const void *xbuffer, unsigned *ret_nread) 89 26 { ··· 28 91 int i, j; 29 92 unsigned nbits, nbytes, nlimbs, nread = 0; 30 93 mpi_limb_t a; 31 - MPI val = MPI_NULL; 94 + MPI val = NULL; 32 95 33 96 if 
(*ret_nread < 2) 34 97 goto leave; ··· 45 108 nlimbs = (nbytes + BYTES_PER_MPI_LIMB - 1) / BYTES_PER_MPI_LIMB; 46 109 val = mpi_alloc(nlimbs); 47 110 if (!val) 48 - return MPI_NULL; 111 + return NULL; 49 112 i = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB; 50 113 i %= BYTES_PER_MPI_LIMB; 51 114 val->nbits = nbits; ··· 147 210 return 0; 148 211 } 149 212 EXPORT_SYMBOL_GPL(mpi_fromstr); 150 - 151 - /**************** 152 - * Special function to get the low 8 bytes from an mpi. 153 - * This can be used as a keyid; KEYID is an 2 element array. 154 - * Return the low 4 bytes. 155 - */ 156 - u32 mpi_get_keyid(const MPI a, u32 *keyid) 157 - { 158 - #if BYTES_PER_MPI_LIMB == 4 159 - if (keyid) { 160 - keyid[0] = a->nlimbs >= 2 ? a->d[1] : 0; 161 - keyid[1] = a->nlimbs >= 1 ? a->d[0] : 0; 162 - } 163 - return a->nlimbs >= 1 ? a->d[0] : 0; 164 - #elif BYTES_PER_MPI_LIMB == 8 165 - if (keyid) { 166 - keyid[0] = a->nlimbs ? (u32) (a->d[0] >> 32) : 0; 167 - keyid[1] = a->nlimbs ? (u32) (a->d[0] & 0xffffffff) : 0; 168 - } 169 - return a->nlimbs ? (u32) (a->d[0] & 0xffffffff) : 0; 170 - #else 171 - #error Make this function work with other LIMB sizes 172 - #endif 173 - } 174 213 175 214 /**************** 176 215 * Return an allocated buffer with the MPI (msb first).
+4
lib/mpi/mpih-div.c
··· 217 217 case 0: 218 218 /* We are asked to divide by zero, so go ahead and do it! (To make 219 219 the compiler not remove this statement, return the value.) */ 220 + /* 221 + * existing clients of this function have been modified 222 + * not to call it with dsize == 0, so this should not happen 223 + */ 220 224 return 1 / dsize; 221 225 222 226 case 1:
+4 -1
lib/mpi/mpiutil.c
··· 58 58 { 59 59 size_t len = nlimbs * sizeof(mpi_limb_t); 60 60 61 + if (!len) 62 + return NULL; 63 + 61 64 return kmalloc(len, GFP_KERNEL); 62 65 } 63 66 ··· 138 135 size_t i; 139 136 MPI b; 140 137 141 - *copied = MPI_NULL; 138 + *copied = NULL; 142 139 143 140 if (a) { 144 141 b = mpi_alloc(a->nlimbs);
+1 -1
lib/pci_iomap.c
··· 34 34 if (maxlen && len > maxlen) 35 35 len = maxlen; 36 36 if (flags & IORESOURCE_IO) 37 - return ioport_map(start, len); 37 + return __pci_ioport_map(dev, start, len); 38 38 if (flags & IORESOURCE_MEM) { 39 39 if (flags & IORESOURCE_CACHEABLE) 40 40 return ioremap(start, len);
+23 -1
mm/compaction.c
··· 313 313 } else if (!locked) 314 314 spin_lock_irq(&zone->lru_lock); 315 315 316 + /* 317 + * migrate_pfn does not necessarily start aligned to a 318 + * pageblock. Ensure that pfn_valid is called when moving 319 + * into a new MAX_ORDER_NR_PAGES range in case of large 320 + * memory holes within the zone 321 + */ 322 + if ((low_pfn & (MAX_ORDER_NR_PAGES - 1)) == 0) { 323 + if (!pfn_valid(low_pfn)) { 324 + low_pfn += MAX_ORDER_NR_PAGES - 1; 325 + continue; 326 + } 327 + } 328 + 316 329 if (!pfn_valid_within(low_pfn)) 317 330 continue; 318 331 nr_scanned++; 319 332 320 - /* Get the page and skip if free */ 333 + /* 334 + * Get the page and ensure the page is within the same zone. 335 + * See the comment in isolate_freepages about overlapping 336 + * nodes. It is deliberate that the new zone lock is not taken 337 + * as memory compaction should not move pages between nodes. 338 + */ 321 339 page = pfn_to_page(low_pfn); 340 + if (page_zone(page) != zone) 341 + continue; 342 + 343 + /* Skip if free */ 322 344 if (PageBuddy(page)) 323 345 continue; 324 346
+4 -4
mm/filemap.c
··· 1400 1400 unsigned long seg = 0; 1401 1401 size_t count; 1402 1402 loff_t *ppos = &iocb->ki_pos; 1403 - struct blk_plug plug; 1404 1403 1405 1404 count = 0; 1406 1405 retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE); 1407 1406 if (retval) 1408 1407 return retval; 1409 - 1410 - blk_start_plug(&plug); 1411 1408 1412 1409 /* coalesce the iovecs and go direct-to-BIO for O_DIRECT */ 1413 1410 if (filp->f_flags & O_DIRECT) { ··· 1421 1424 retval = filemap_write_and_wait_range(mapping, pos, 1422 1425 pos + iov_length(iov, nr_segs) - 1); 1423 1426 if (!retval) { 1427 + struct blk_plug plug; 1428 + 1429 + blk_start_plug(&plug); 1424 1430 retval = mapping->a_ops->direct_IO(READ, iocb, 1425 1431 iov, pos, nr_segs); 1432 + blk_finish_plug(&plug); 1426 1433 } 1427 1434 if (retval > 0) { 1428 1435 *ppos = pos + retval; ··· 1482 1481 break; 1483 1482 } 1484 1483 out: 1485 - blk_finish_plug(&plug); 1486 1484 return retval; 1487 1485 } 1488 1486 EXPORT_SYMBOL(generic_file_aio_read);
+6 -1
mm/filemap_xip.c
··· 263 263 xip_pfn); 264 264 if (err == -ENOMEM) 265 265 return VM_FAULT_OOM; 266 - BUG_ON(err); 266 + /* 267 + * err == -EBUSY is fine, we've raced against another thread 268 + * that faulted-in the same page 269 + */ 270 + if (err != -EBUSY) 271 + BUG_ON(err); 267 272 return VM_FAULT_NOPAGE; 268 273 } else { 269 274 int err, ret = VM_FAULT_OOM;
+2 -2
mm/huge_memory.c
··· 2083 2083 { 2084 2084 struct mm_struct *mm = mm_slot->mm; 2085 2085 2086 - VM_BUG_ON(!spin_is_locked(&khugepaged_mm_lock)); 2086 + VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock)); 2087 2087 2088 2088 if (khugepaged_test_exit(mm)) { 2089 2089 /* free mm_slot */ ··· 2113 2113 int progress = 0; 2114 2114 2115 2115 VM_BUG_ON(!pages); 2116 - VM_BUG_ON(!spin_is_locked(&khugepaged_mm_lock)); 2116 + VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock)); 2117 2117 2118 2118 if (khugepaged_scan.mm_slot) 2119 2119 mm_slot = khugepaged_scan.mm_slot;
+2 -1
mm/kmemleak.c
··· 1036 1036 { 1037 1037 pr_debug("%s(0x%p)\n", __func__, ptr); 1038 1038 1039 - if (atomic_read(&kmemleak_enabled) && ptr && !IS_ERR(ptr)) 1039 + if (atomic_read(&kmemleak_enabled) && ptr && size && !IS_ERR(ptr)) 1040 1040 add_scan_area((unsigned long)ptr, size, gfp); 1041 1041 else if (atomic_read(&kmemleak_early_log)) 1042 1042 log_early(KMEMLEAK_SCAN_AREA, ptr, size, 0); ··· 1757 1757 1758 1758 #ifdef CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF 1759 1759 if (!kmemleak_skip_disable) { 1760 + atomic_set(&kmemleak_early_log, 0); 1760 1761 kmemleak_disable(); 1761 1762 return; 1762 1763 }
+2 -1
mm/memcontrol.c
··· 776 776 /* threshold event is triggered in finer grain than soft limit */ 777 777 if (unlikely(mem_cgroup_event_ratelimit(memcg, 778 778 MEM_CGROUP_TARGET_THRESH))) { 779 - bool do_softlimit, do_numainfo; 779 + bool do_softlimit; 780 + bool do_numainfo __maybe_unused; 780 781 781 782 do_softlimit = mem_cgroup_event_ratelimit(memcg, 782 783 MEM_CGROUP_TARGET_SOFTLIMIT);
+1 -1
mm/migrate.c
··· 445 445 ClearPageSwapCache(page); 446 446 ClearPagePrivate(page); 447 447 set_page_private(page, 0); 448 - page->mapping = NULL; 449 448 450 449 /* 451 450 * If any waiters have accumulated on the new page then ··· 666 667 } else { 667 668 if (remap_swapcache) 668 669 remove_migration_ptes(page, newpage); 670 + page->mapping = NULL; 669 671 } 670 672 671 673 unlock_page(newpage);
+9 -14
mm/process_vm_access.c
··· 298 298 goto free_proc_pages; 299 299 } 300 300 301 - task_lock(task); 302 - if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) { 303 - task_unlock(task); 304 - rc = -EPERM; 301 + mm = mm_access(task, PTRACE_MODE_ATTACH); 302 + if (!mm || IS_ERR(mm)) { 303 + rc = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH; 304 + /* 305 + * Explicitly map EACCES to EPERM as EPERM is the more 306 + * appropriate error code for process_vm_readv/writev 307 + */ 308 + if (rc == -EACCES) 309 + rc = -EPERM; 305 310 goto put_task_struct; 306 311 } 307 - mm = task->mm; 308 - 309 - if (!mm || (task->flags & PF_KTHREAD)) { 310 - task_unlock(task); 311 - rc = -EINVAL; 312 - goto put_task_struct; 313 - } 314 - 315 - atomic_inc(&mm->mm_users); 316 - task_unlock(task); 317 312 318 313 for (i = 0; i < riovcnt && iov_l_curr_idx < liovcnt; i++) { 319 314 rc = process_vm_rw_single_vec(
+1 -1
mm/swap.c
··· 659 659 VM_BUG_ON(!PageHead(page)); 660 660 VM_BUG_ON(PageCompound(page_tail)); 661 661 VM_BUG_ON(PageLRU(page_tail)); 662 - VM_BUG_ON(!spin_is_locked(&zone->lru_lock)); 662 + VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&zone->lru_lock)); 663 663 664 664 SetPageLRU(page_tail); 665 665
-2
net/ceph/ceph_common.c
··· 85 85 } else { 86 86 pr_info("client%lld fsid %pU\n", ceph_client_id(client), fsid); 87 87 memcpy(&client->fsid, fsid, sizeof(*fsid)); 88 - ceph_debugfs_client_init(client); 89 - client->have_fsid = true; 90 88 } 91 89 return 0; 92 90 }
+12 -1
net/ceph/mon_client.c
··· 8 8 9 9 #include <linux/ceph/mon_client.h> 10 10 #include <linux/ceph/libceph.h> 11 + #include <linux/ceph/debugfs.h> 11 12 #include <linux/ceph/decode.h> 12 - 13 13 #include <linux/ceph/auth.h> 14 14 15 15 /* ··· 340 340 client->monc.monmap = monmap; 341 341 kfree(old); 342 342 343 + if (!client->have_fsid) { 344 + client->have_fsid = true; 345 + mutex_unlock(&monc->mutex); 346 + /* 347 + * do debugfs initialization without mutex to avoid 348 + * creating a locking dependency 349 + */ 350 + ceph_debugfs_client_init(client); 351 + goto out_unlocked; 352 + } 343 353 out: 344 354 mutex_unlock(&monc->mutex); 355 + out_unlocked: 345 356 wake_up_all(&client->auth_wq); 346 357 } 347 358
+6
scripts/checkpatch.pl
··· 1924 1924 my $pre_ctx = "$1$2"; 1925 1925 1926 1926 my ($level, @ctx) = ctx_statement_level($linenr, $realcnt, 0); 1927 + 1928 + if ($line =~ /^\+\t{6,}/) { 1929 + WARN("DEEP_INDENTATION", 1930 + "Too many leading tabs - consider code refactoring\n" . $herecurr); 1931 + } 1932 + 1927 1933 my $ctx_cnt = $realcnt - $#ctx - 1; 1928 1934 my $ctx = join("\n", @ctx); 1929 1935
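The new checkpatch warning fires on added lines that begin with six or more tabs (`/^\+\t{6,}/`). The same predicate rendered in C, for a patch line that carries its leading '+' marker:

```c
#include <assert.h>

/* Returns 1 when an added patch line starts with >= 6 tabs. */
int too_deeply_indented(const char *line)
{
	int tabs = 0;

	if (*line++ != '+')	/* only added lines are checked */
		return 0;
	while (*line++ == '\t')
		tabs++;
	return tabs >= 6;
}
```

Six tabs of indentation in kernel style (8-column tabs) leaves under half the line for code, which is why the warning suggests refactoring rather than reflowing.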
-1
sound/isa/sb/emu8000_patch.c
··· 22 22 #include "emu8000_local.h" 23 23 #include <asm/uaccess.h> 24 24 #include <linux/moduleparam.h> 25 - #include <linux/moduleparam.h> 26 25 27 26 static int emu8000_reset_addr; 28 27 module_param(emu8000_reset_addr, int, 0444);
+1 -1
sound/pci/hda/hda_codec.c
··· 1447 1447 for (i = 0; i < c->cvt_setups.used; i++) { 1448 1448 p = snd_array_elem(&c->cvt_setups, i); 1449 1449 if (!p->active && p->stream_tag == stream_tag && 1450 - get_wcaps_type(get_wcaps(codec, p->nid)) == type) 1450 + get_wcaps_type(get_wcaps(c, p->nid)) == type) 1451 1451 p->dirty = 1; 1452 1452 } 1453 1453 }
+15 -9
sound/pci/hda/hda_jack.c
···
282 282 EXPORT_SYMBOL_HDA(snd_hda_jack_add_kctl);
283 283
284 284 static int add_jack_kctl(struct hda_codec *codec, hda_nid_t nid,
285 - const struct auto_pin_cfg *cfg)
285 + const struct auto_pin_cfg *cfg,
286 + char *lastname, int *lastidx)
286 287 {
287 288 unsigned int def_conf, conn;
288 289 char name[44];
···
299 298 return 0;
300 299
301 300 snd_hda_get_pin_label(codec, nid, cfg, name, sizeof(name), &idx);
301 + if (!strcmp(name, lastname) && idx == *lastidx)
302 + idx++;
303 + strncpy(lastname, name, 44);
304 + *lastidx = idx;
302 305 err = snd_hda_jack_add_kctl(codec, nid, name, idx);
303 306 if (err < 0)
304 307 return err;
···
316 311 const struct auto_pin_cfg *cfg)
317 312 {
318 313 const hda_nid_t *p;
319 - int i, err;
314 + int i, err, lastidx = 0;
315 + char lastname[44] = "";
320 316
321 317 for (i = 0, p = cfg->line_out_pins; i < cfg->line_outs; i++, p++) {
322 - err = add_jack_kctl(codec, *p, cfg);
318 + err = add_jack_kctl(codec, *p, cfg, lastname, &lastidx);
323 319 if (err < 0)
324 320 return err;
325 321 }
326 322 for (i = 0, p = cfg->hp_pins; i < cfg->hp_outs; i++, p++) {
327 323 if (*p == *cfg->line_out_pins) /* might be duplicated */
328 324 break;
329 - err = add_jack_kctl(codec, *p, cfg);
325 + err = add_jack_kctl(codec, *p, cfg, lastname, &lastidx);
330 326 if (err < 0)
331 327 return err;
332 328 }
333 329 for (i = 0, p = cfg->speaker_pins; i < cfg->speaker_outs; i++, p++) {
334 330 if (*p == *cfg->line_out_pins) /* might be duplicated */
335 331 break;
336 - err = add_jack_kctl(codec, *p, cfg);
332 + err = add_jack_kctl(codec, *p, cfg, lastname, &lastidx);
337 333 if (err < 0)
338 334 return err;
339 335 }
340 336 for (i = 0; i < cfg->num_inputs; i++) {
341 - err = add_jack_kctl(codec, cfg->inputs[i].pin, cfg);
337 + err = add_jack_kctl(codec, cfg->inputs[i].pin, cfg, lastname, &lastidx);
342 338 if (err < 0)
343 339 return err;
344 340 }
345 341 for (i = 0, p = cfg->dig_out_pins; i < cfg->dig_outs; i++, p++) {
346 - err = add_jack_kctl(codec, *p, cfg);
342 + err = add_jack_kctl(codec, *p, cfg, lastname, &lastidx);
347 343 if (err < 0)
348 344 return err;
349 345 }
350 - err = add_jack_kctl(codec, cfg->dig_in_pin, cfg);
346 + err = add_jack_kctl(codec, cfg->dig_in_pin, cfg, lastname, &lastidx);
351 347 if (err < 0)
352 348 return err;
353 - err = add_jack_kctl(codec, cfg->mono_out_pin, cfg);
349 + err = add_jack_kctl(codec, cfg->mono_out_pin, cfg, lastname, &lastidx);
354 350 if (err < 0)
355 351 return err;
356 352 return 0;
+19 -14
sound/pci/hda/patch_ca0132.c
···
728 728
729 729 err = chipio_read(codec, REG_CODEC_MUTE, &data);
730 730 if (err < 0)
731 - return err;
731 + goto exit;
732 732
733 733 /* *valp 0 is mute, 1 is unmute */
734 734 data = (data & 0x7f) | (*valp ? 0 : 0x80);
735 - chipio_write(codec, REG_CODEC_MUTE, data);
735 + err = chipio_write(codec, REG_CODEC_MUTE, data);
736 736 if (err < 0)
737 - return err;
737 + goto exit;
738 738
739 739 spec->curr_hp_switch = *valp;
740 740
741 + exit:
741 742 snd_hda_power_down(codec);
742 - return 1;
743 + return err < 0 ? err : 1;
743 744 }
744 745
745 746 static int ca0132_speaker_switch_get(struct snd_kcontrol *kcontrol,
···
771 770
772 771 err = chipio_read(codec, REG_CODEC_MUTE, &data);
773 772 if (err < 0)
774 - return err;
773 + goto exit;
775 774
776 775 /* *valp 0 is mute, 1 is unmute */
777 776 data = (data & 0xef) | (*valp ? 0 : 0x10);
778 - chipio_write(codec, REG_CODEC_MUTE, data);
777 + err = chipio_write(codec, REG_CODEC_MUTE, data);
779 778 if (err < 0)
780 - return err;
779 + goto exit;
781 780
782 781 spec->curr_speaker_switch = *valp;
783 782
783 + exit:
784 784 snd_hda_power_down(codec);
785 - return 1;
785 + return err < 0 ? err : 1;
786 786 }
787 787
788 788 static int ca0132_hp_volume_get(struct snd_kcontrol *kcontrol,
···
821 819
822 820 err = chipio_read(codec, REG_CODEC_HP_VOL_L, &data);
823 821 if (err < 0)
824 - return err;
822 + goto exit;
825 823
826 824 val = 31 - left_vol;
827 825 data = (data & 0xe0) | val;
828 - chipio_write(codec, REG_CODEC_HP_VOL_L, data);
826 + err = chipio_write(codec, REG_CODEC_HP_VOL_L, data);
829 827 if (err < 0)
830 - return err;
828 + goto exit;
831 829
832 830 val = 31 - right_vol;
833 831 data = (data & 0xe0) | val;
834 - chipio_write(codec, REG_CODEC_HP_VOL_R, data);
832 + err = chipio_write(codec, REG_CODEC_HP_VOL_R, data);
835 833 if (err < 0)
836 - return err;
834 + goto exit;
837 835
838 836 spec->curr_hp_volume[0] = left_vol;
839 837 spec->curr_hp_volume[1] = right_vol;
840 838
839 + exit:
841 840 snd_hda_power_down(codec);
842 - return 1;
841 + return err < 0 ? err : 1;
843 842 }
844 843
845 844 static int add_hp_switch(struct hda_codec *codec, hda_nid_t nid)
···
939 936 if (err < 0)
940 937 return err;
941 938 err = add_in_volume(codec, spec->dig_in, "IEC958");
939 + if (err < 0)
940 + return err;
942 941 }
943 942 return 0;
944 943 }
+4 -2
sound/pci/hda/patch_cirrus.c
···
988 988 change_cur_input(codec, !spec->automic_idx, 0);
989 989 } else {
990 990 if (present) {
991 - spec->last_input = spec->cur_input;
992 - spec->cur_input = spec->automic_idx;
991 + if (spec->cur_input != spec->automic_idx) {
992 + spec->last_input = spec->cur_input;
993 + spec->cur_input = spec->automic_idx;
994 + }
993 995 } else {
994 996 spec->cur_input = spec->last_input;
995 997 }
+41 -23
sound/pci/hda/patch_realtek.c
···
177 177 unsigned int detect_lo:1; /* Line-out detection enabled */
178 178 unsigned int automute_speaker_possible:1; /* there are speakers and either LO or HP */
179 179 unsigned int automute_lo_possible:1; /* there are line outs and HP */
180 + unsigned int keep_vref_in_automute:1; /* Don't clear VREF in automute */
180 181
181 182 /* other flags */
182 183 unsigned int no_analog :1; /* digital I/O only */
···
496 495
497 496 for (i = 0; i < num_pins; i++) {
498 497 hda_nid_t nid = pins[i];
498 + unsigned int val;
499 499 if (!nid)
500 500 break;
501 501 switch (spec->automute_mode) {
502 502 case ALC_AUTOMUTE_PIN:
503 + /* don't reset VREF value in case it's controlling
504 + * the amp (see alc861_fixup_asus_amp_vref_0f())
505 + */
506 + if (spec->keep_vref_in_automute) {
507 + val = snd_hda_codec_read(codec, nid, 0,
508 + AC_VERB_GET_PIN_WIDGET_CONTROL, 0);
509 + val &= ~PIN_HP;
510 + } else
511 + val = 0;
512 + val |= pin_bits;
503 513 snd_hda_codec_write(codec, nid, 0,
504 514 AC_VERB_SET_PIN_WIDGET_CONTROL,
505 - pin_bits);
515 + val);
506 516 break;
507 517 case ALC_AUTOMUTE_AMP:
508 518 snd_hda_codec_amp_stereo(codec, nid, HDA_OUTPUT, 0,
···
1855 1843 "Speaker Playback Volume",
1856 1844 "Mono Playback Volume",
1857 1845 "Line-Out Playback Volume",
1846 + "CLFE Playback Volume",
1847 + "Bass Speaker Playback Volume",
1858 1848 "PCM Playback Volume",
1859 1849 NULL,
1860 1850 };
···
1872 1858 "Mono Playback Switch",
1873 1859 "IEC958 Playback Switch",
1874 1860 "Line-Out Playback Switch",
1861 + "CLFE Playback Switch",
1862 + "Bass Speaker Playback Switch",
1875 1863 "PCM Playback Switch",
1876 1864 NULL,
1877 1865 };
···
2322 2306 "%s Analog", codec->chip_name);
2323 2307 info->name = spec->stream_name_analog;
2324 2308
2325 - if (spec->multiout.dac_nids > 0) {
2309 + if (spec->multiout.num_dacs > 0) {
2326 2310 p = spec->stream_analog_playback;
2327 2311 if (!p)
2328 2312 p = &alc_pcm_analog_playback;
···
4751 4735 ALC262_FIXUP_FSC_H270,
4752 4736 ALC262_FIXUP_HP_Z200,
4753 4737 ALC262_FIXUP_TYAN,
4754 - ALC262_FIXUP_TOSHIBA_RX1,
4755 4738 ALC262_FIXUP_LENOVO_3000,
4756 4739 ALC262_FIXUP_BENQ,
4757 4740 ALC262_FIXUP_BENQ_T31,
···
4778 4763 .v.pins = (const struct alc_pincfg[]) {
4779 4764 { 0x14, 0x1993e1f0 }, /* int AUX */
4780 4765 { }
4781 - }
4782 - },
4783 - [ALC262_FIXUP_TOSHIBA_RX1] = {
4784 - .type = ALC_FIXUP_PINS,
4785 - .v.pins = (const struct alc_pincfg[]) {
4786 - { 0x14, 0x90170110 }, /* speaker */
4787 - { 0x15, 0x0421101f }, /* HP */
4788 - { 0x1a, 0x40f000f0 }, /* N/A */
4789 - { 0x1b, 0x40f000f0 }, /* N/A */
4790 - { 0x1e, 0x40f000f0 }, /* N/A */
4791 4766 }
4792 4767 },
4793 4768 [ALC262_FIXUP_LENOVO_3000] = {
···
4812 4807 SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu", ALC262_FIXUP_BENQ),
4813 4808 SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FIXUP_BENQ),
4814 4809 SND_PCI_QUIRK(0x10f1, 0x2915, "Tyan Thunder n6650W", ALC262_FIXUP_TYAN),
4815 - SND_PCI_QUIRK(0x1179, 0x0001, "Toshiba dynabook SS RX1",
4816 - ALC262_FIXUP_TOSHIBA_RX1),
4817 4810 SND_PCI_QUIRK(0x1734, 0x1147, "FSC Celsius H270", ALC262_FIXUP_FSC_H270),
4818 4811 SND_PCI_QUIRK(0x17aa, 0x384e, "Lenovo 3000", ALC262_FIXUP_LENOVO_3000),
4819 4812 SND_PCI_QUIRK(0x17ff, 0x0560, "Benq ED8", ALC262_FIXUP_BENQ),
···
5380 5377 SND_PCI_QUIRK(0x1043, 0x8330, "ASUS Eeepc P703 P900A",
5381 5378 ALC269_FIXUP_AMIC),
5382 5379 SND_PCI_QUIRK(0x1043, 0x1013, "ASUS N61Da", ALC269_FIXUP_AMIC),
5383 - SND_PCI_QUIRK(0x1043, 0x1113, "ASUS N63Jn", ALC269_FIXUP_AMIC),
5384 5380 SND_PCI_QUIRK(0x1043, 0x1143, "ASUS B53f", ALC269_FIXUP_AMIC),
5385 5381 SND_PCI_QUIRK(0x1043, 0x1133, "ASUS UJ20ft", ALC269_FIXUP_AMIC),
5386 5382 SND_PCI_QUIRK(0x1043, 0x1183, "ASUS K72DR", ALC269_FIXUP_AMIC),
···
5591 5589 PINFIX_ASUS_A6RP,
5592 5590 };
5593 5591
5592 + /* On some laptops, VREF of pin 0x0f is abused for controlling the main amp */
5593 + static void alc861_fixup_asus_amp_vref_0f(struct hda_codec *codec,
5594 + const struct alc_fixup *fix, int action)
5595 + {
5596 + struct alc_spec *spec = codec->spec;
5597 + unsigned int val;
5598 +
5599 + if (action != ALC_FIXUP_ACT_INIT)
5600 + return;
5601 + val = snd_hda_codec_read(codec, 0x0f, 0,
5602 + AC_VERB_GET_PIN_WIDGET_CONTROL, 0);
5603 + if (!(val & (AC_PINCTL_IN_EN | AC_PINCTL_OUT_EN)))
5604 + val |= AC_PINCTL_IN_EN;
5605 + val |= AC_PINCTL_VREF_50;
5606 + snd_hda_codec_write(codec, 0x0f, 0,
5607 + AC_VERB_SET_PIN_WIDGET_CONTROL, val);
5608 + spec->keep_vref_in_automute = 1;
5609 + }
5610 +
5594 5611 static const struct alc_fixup alc861_fixups[] = {
5595 5612 [PINFIX_FSC_AMILO_PI1505] = {
5596 5613 .type = ALC_FIXUP_PINS,
···
5620 5599 }
5621 5600 },
5622 5601 [PINFIX_ASUS_A6RP] = {
5623 - .type = ALC_FIXUP_VERBS,
5624 - .v.verbs = (const struct hda_verb[]) {
5625 - /* node 0x0f VREF seems controlling the master output */
5626 - { 0x0f, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_VREF50 },
5627 - { }
5628 - },
5602 + .type = ALC_FIXUP_FUNC,
5603 + .v.func = alc861_fixup_asus_amp_vref_0f,
5629 5604 },
5630 5605 };
5631 5606
5632 5607 static const struct snd_pci_quirk alc861_fixup_tbl[] = {
5633 - SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", PINFIX_ASUS_A6RP),
5608 + SND_PCI_QUIRK_VENDOR(0x1043, "ASUS laptop", PINFIX_ASUS_A6RP),
5609 + SND_PCI_QUIRK(0x1584, 0x0000, "Uniwill ECS M31EI", PINFIX_ASUS_A6RP),
5634 5610 SND_PCI_QUIRK(0x1584, 0x2b01, "Haier W18", PINFIX_ASUS_A6RP),
5635 5611 SND_PCI_QUIRK(0x1734, 0x10c7, "FSC Amilo Pi1505", PINFIX_FSC_AMILO_PI1505),
5636 5612 {}
+129 -155
sound/pci/hda/patch_via.c
···
199 199 unsigned int no_pin_power_ctl;
200 200 enum VIA_HDA_CODEC codec_type;
201 201
202 + /* analog low-power control */
203 + bool alc_mode;
204 +
202 205 /* smart51 setup */
203 206 unsigned int smart51_nums;
204 207 hda_nid_t smart51_pins[2];
···
690 687 }
691 688 }
692 689
690 + static void update_power_state(struct hda_codec *codec, hda_nid_t nid,
691 + unsigned int parm)
692 + {
693 + if (snd_hda_codec_read(codec, nid, 0,
694 + AC_VERB_GET_POWER_STATE, 0) == parm)
695 + return;
696 + snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_POWER_STATE, parm);
697 + }
698 +
693 699 static void set_pin_power_state(struct hda_codec *codec, hda_nid_t nid,
694 700 unsigned int *affected_parm)
695 701 {
···
721 709 } else
722 710 parm = AC_PWRST_D3;
723 711
724 - snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_POWER_STATE, parm);
712 + update_power_state(codec, nid, parm);
725 713 }
726 714
727 715 static int via_pin_power_ctl_info(struct snd_kcontrol *kcontrol,
···
761 749 return 0;
762 750 spec->no_pin_power_ctl = val;
763 751 set_widgets_power_state(codec);
752 + analog_low_current_mode(codec);
764 753 return 1;
765 754 }
766 755
···
1049 1036 }
1050 1037
1051 1038 /* enter/exit analog low-current mode */
1052 - static void analog_low_current_mode(struct hda_codec *codec)
1039 + static void __analog_low_current_mode(struct hda_codec *codec, bool force)
1053 1040 {
1054 1041 struct via_spec *spec = codec->spec;
1055 1042 bool enable;
1056 1043 unsigned int verb, parm;
1057 1044
1058 - enable = is_aa_path_mute(codec) && (spec->opened_streams != 0);
1045 + if (spec->no_pin_power_ctl)
1046 + enable = false;
1047 + else
1048 + enable = is_aa_path_mute(codec) && !spec->opened_streams;
1049 + if (enable == spec->alc_mode && !force)
1050 + return;
1051 + spec->alc_mode = enable;
1059 1052
1060 1053 /* decide low current mode's verb & parameter */
1061 1054 switch (spec->codec_type) {
···
1091 1072 }
1092 1073 /* send verb */
1093 1074 snd_hda_codec_write(codec, codec->afg, 0, verb, parm);
1075 + }
1076 +
1077 + static void analog_low_current_mode(struct hda_codec *codec)
1078 + {
1079 + return __analog_low_current_mode(codec, false);
1094 1080 }
1095 1081
1096 1082 /*
···
1470 1446 struct snd_kcontrol *kctl;
1471 1447 int err, i;
1472 1448
1449 + spec->no_pin_power_ctl = 1;
1473 1450 if (spec->set_widgets_power_state)
1474 1451 if (!via_clone_control(spec, &via_pin_power_ctl_enum))
1475 1452 return -ENOMEM;
···
1523 1498 if (err < 0)
1524 1499 return err;
1525 1500 }
1526 -
1527 - /* init power states */
1528 - set_widgets_power_state(codec);
1529 - analog_low_current_mode(codec);
1530 1501
1531 1502 via_free_kctls(codec); /* no longer needed */
1532 1503
···
2316 2295
2317 2296 if (mux) {
2318 2297 /* switch to D0 beofre change index */
2319 - if (snd_hda_codec_read(codec, mux, 0,
2320 - AC_VERB_GET_POWER_STATE, 0x00) != AC_PWRST_D0)
2321 - snd_hda_codec_write(codec, mux, 0,
2322 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
2298 + update_power_state(codec, mux, AC_PWRST_D0);
2323 2299 snd_hda_codec_write(codec, mux, 0,
2324 2300 AC_VERB_SET_CONNECT_SEL,
2325 2301 spec->inputs[cur].mux_idx);
···
2794 2776 for (i = 0; i < spec->num_iverbs; i++)
2795 2777 snd_hda_sequence_write(codec, spec->init_verbs[i]);
2796 2778
2779 + /* init power states */
2780 + set_widgets_power_state(codec);
2781 + __analog_low_current_mode(codec, true);
2782 +
2797 2783 via_auto_init_multi_out(codec);
2798 2784 via_auto_init_hp_out(codec);
2799 2785 via_auto_init_speaker_out(codec);
···
2944 2922 if (imux_is_smixer)
2945 2923 parm = AC_PWRST_D0;
2946 2924 /* SW0 (17h), AIW 0/1 (13h/14h) */
2947 - snd_hda_codec_write(codec, 0x17, 0, AC_VERB_SET_POWER_STATE, parm);
2948 - snd_hda_codec_write(codec, 0x13, 0, AC_VERB_SET_POWER_STATE, parm);
2949 - snd_hda_codec_write(codec, 0x14, 0, AC_VERB_SET_POWER_STATE, parm);
2925 + update_power_state(codec, 0x17, parm);
2926 + update_power_state(codec, 0x13, parm);
2927 + update_power_state(codec, 0x14, parm);
2950 2928
2951 2929 /* outputs */
2952 2930 /* PW0 (19h), SW1 (18h), AOW1 (11h) */
···
2954 2932 set_pin_power_state(codec, 0x19, &parm);
2955 2933 if (spec->smart51_enabled)
2956 2934 set_pin_power_state(codec, 0x1b, &parm);
2957 - snd_hda_codec_write(codec, 0x18, 0, AC_VERB_SET_POWER_STATE, parm);
2958 - snd_hda_codec_write(codec, 0x11, 0, AC_VERB_SET_POWER_STATE, parm);
2935 + update_power_state(codec, 0x18, parm);
2936 + update_power_state(codec, 0x11, parm);
2959 2937
2960 2938 /* PW6 (22h), SW2 (26h), AOW2 (24h) */
2961 2939 if (is_8ch) {
···
2963 2941 set_pin_power_state(codec, 0x22, &parm);
2964 2942 if (spec->smart51_enabled)
2965 2943 set_pin_power_state(codec, 0x1a, &parm);
2966 - snd_hda_codec_write(codec, 0x26, 0,
2967 - AC_VERB_SET_POWER_STATE, parm);
2968 - snd_hda_codec_write(codec, 0x24, 0,
2969 - AC_VERB_SET_POWER_STATE, parm);
2944 + update_power_state(codec, 0x26, parm);
2945 + update_power_state(codec, 0x24, parm);
2970 2946 } else if (codec->vendor_id == 0x11064397) {
2971 2947 /* PW7(23h), SW2(27h), AOW2(25h) */
2972 2948 parm = AC_PWRST_D3;
2973 2949 set_pin_power_state(codec, 0x23, &parm);
2974 2950 if (spec->smart51_enabled)
2975 2951 set_pin_power_state(codec, 0x1a, &parm);
2976 - snd_hda_codec_write(codec, 0x27, 0,
2977 - AC_VERB_SET_POWER_STATE, parm);
2978 - snd_hda_codec_write(codec, 0x25, 0,
2979 - AC_VERB_SET_POWER_STATE, parm);
2952 + update_power_state(codec, 0x27, parm);
2953 + update_power_state(codec, 0x25, parm);
2980 2954 }
2981 2955
2982 2956 /* PW 3/4/7 (1ch/1dh/23h) */
···
2984 2966 set_pin_power_state(codec, 0x23, &parm);
2985 2967
2986 2968 /* MW0 (16h), Sw3 (27h), AOW 0/3 (10h/25h) */
2987 - snd_hda_codec_write(codec, 0x16, 0, AC_VERB_SET_POWER_STATE,
2988 - imux_is_smixer ? AC_PWRST_D0 : parm);
2989 - snd_hda_codec_write(codec, 0x10, 0, AC_VERB_SET_POWER_STATE, parm);
2969 + update_power_state(codec, 0x16, imux_is_smixer ? AC_PWRST_D0 : parm);
2970 + update_power_state(codec, 0x10, parm);
2990 2971 if (is_8ch) {
2991 - snd_hda_codec_write(codec, 0x25, 0,
2992 - AC_VERB_SET_POWER_STATE, parm);
2993 - snd_hda_codec_write(codec, 0x27, 0,
2994 - AC_VERB_SET_POWER_STATE, parm);
2972 + update_power_state(codec, 0x25, parm);
2973 + update_power_state(codec, 0x27, parm);
2995 2974 } else if (codec->vendor_id == 0x11064397 && spec->hp_independent_mode)
2996 - snd_hda_codec_write(codec, 0x25, 0,
2997 - AC_VERB_SET_POWER_STATE, parm);
2975 + update_power_state(codec, 0x25, parm);
2998 2976 }
2999 2977
3000 2978 static int patch_vt1708S(struct hda_codec *codec);
···
3163 3149 if (imux_is_smixer)
3164 3150 parm = AC_PWRST_D0; /* SW0 (13h) = stereo mixer (idx 3) */
3165 3151 /* SW0 (13h), AIW 0/1/2 (12h/1fh/20h) */
3166 - snd_hda_codec_write(codec, 0x13, 0, AC_VERB_SET_POWER_STATE, parm);
3167 - snd_hda_codec_write(codec, 0x12, 0, AC_VERB_SET_POWER_STATE, parm);
3168 - snd_hda_codec_write(codec, 0x1f, 0, AC_VERB_SET_POWER_STATE, parm);
3169 - snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_POWER_STATE, parm);
3152 + update_power_state(codec, 0x13, parm);
3153 + update_power_state(codec, 0x12, parm);
3154 + update_power_state(codec, 0x1f, parm);
3155 + update_power_state(codec, 0x20, parm);
3170 3156
3171 3157 /* outputs */
3172 3158 /* PW 3/4 (16h/17h) */
···
3174 3160 set_pin_power_state(codec, 0x17, &parm);
3175 3161 set_pin_power_state(codec, 0x16, &parm);
3176 3162 /* MW0 (1ah), AOW 0/1 (10h/1dh) */
3177 - snd_hda_codec_write(codec, 0x1a, 0, AC_VERB_SET_POWER_STATE,
3178 - imux_is_smixer ? AC_PWRST_D0 : parm);
3179 - snd_hda_codec_write(codec, 0x10, 0, AC_VERB_SET_POWER_STATE, parm);
3180 - snd_hda_codec_write(codec, 0x1d, 0, AC_VERB_SET_POWER_STATE, parm);
3163 + update_power_state(codec, 0x1a, imux_is_smixer ? AC_PWRST_D0 : parm);
3164 + update_power_state(codec, 0x10, parm);
3165 + update_power_state(codec, 0x1d, parm);
3181 3166 }
3182 3167
3183 3168 static int patch_vt1702(struct hda_codec *codec)
···
3241 3228 if (imux_is_smixer)
3242 3229 parm = AC_PWRST_D0;
3243 3230 /* MUX6/7 (1eh/1fh), AIW 0/1 (10h/11h) */
3244 - snd_hda_codec_write(codec, 0x1e, 0, AC_VERB_SET_POWER_STATE, parm);
3245 - snd_hda_codec_write(codec, 0x1f, 0, AC_VERB_SET_POWER_STATE, parm);
3246 - snd_hda_codec_write(codec, 0x10, 0, AC_VERB_SET_POWER_STATE, parm);
3247 - snd_hda_codec_write(codec, 0x11, 0, AC_VERB_SET_POWER_STATE, parm);
3231 + update_power_state(codec, 0x1e, parm);
3232 + update_power_state(codec, 0x1f, parm);
3233 + update_power_state(codec, 0x10, parm);
3234 + update_power_state(codec, 0x11, parm);
3248 3235
3249 3236 /* outputs */
3250 3237 /* PW3 (27h), MW2 (1ah), AOW3 (bh) */
3251 3238 parm = AC_PWRST_D3;
3252 3239 set_pin_power_state(codec, 0x27, &parm);
3253 - snd_hda_codec_write(codec, 0x1a, 0, AC_VERB_SET_POWER_STATE, parm);
3254 - snd_hda_codec_write(codec, 0xb, 0, AC_VERB_SET_POWER_STATE, parm);
3240 + update_power_state(codec, 0x1a, parm);
3241 + update_power_state(codec, 0xb, parm);
3255 3242
3256 3243 /* PW2 (26h), AOW2 (ah) */
3257 3244 parm = AC_PWRST_D3;
3258 3245 set_pin_power_state(codec, 0x26, &parm);
3259 3246 if (spec->smart51_enabled)
3260 3247 set_pin_power_state(codec, 0x2b, &parm);
3261 - snd_hda_codec_write(codec, 0xa, 0, AC_VERB_SET_POWER_STATE, parm);
3248 + update_power_state(codec, 0xa, parm);
3262 3249
3263 3250 /* PW0 (24h), AOW0 (8h) */
3264 3251 parm = AC_PWRST_D3;
3265 3252 set_pin_power_state(codec, 0x24, &parm);
3266 3253 if (!spec->hp_independent_mode) /* check for redirected HP */
3267 3254 set_pin_power_state(codec, 0x28, &parm);
3268 - snd_hda_codec_write(codec, 0x8, 0, AC_VERB_SET_POWER_STATE, parm);
3255 + update_power_state(codec, 0x8, parm);
3269 3256 /* MW9 (21h), Mw2 (1ah), AOW0 (8h) */
3270 - snd_hda_codec_write(codec, 0x21, 0, AC_VERB_SET_POWER_STATE,
3271 - imux_is_smixer ? AC_PWRST_D0 : parm);
3257 + update_power_state(codec, 0x21, imux_is_smixer ? AC_PWRST_D0 : parm);
3272 3258
3273 3259 /* PW1 (25h), AOW1 (9h) */
3274 3260 parm = AC_PWRST_D3;
3275 3261 set_pin_power_state(codec, 0x25, &parm);
3276 3262 if (spec->smart51_enabled)
3277 3263 set_pin_power_state(codec, 0x2a, &parm);
3278 - snd_hda_codec_write(codec, 0x9, 0, AC_VERB_SET_POWER_STATE, parm);
3264 + update_power_state(codec, 0x9, parm);
3279 3265
3280 3266 if (spec->hp_independent_mode) {
3281 3267 /* PW4 (28h), MW3 (1bh), MUX1(34h), AOW4 (ch) */
3282 3268 parm = AC_PWRST_D3;
3283 3269 set_pin_power_state(codec, 0x28, &parm);
3284 - snd_hda_codec_write(codec, 0x1b, 0,
3285 - AC_VERB_SET_POWER_STATE, parm);
3286 - snd_hda_codec_write(codec, 0x34, 0,
3287 - AC_VERB_SET_POWER_STATE, parm);
3288 - snd_hda_codec_write(codec, 0xc, 0,
3289 - AC_VERB_SET_POWER_STATE, parm);
3270 + update_power_state(codec, 0x1b, parm);
3271 + update_power_state(codec, 0x34, parm);
3272 + update_power_state(codec, 0xc, parm);
3290 3273 }
3291 3274 }
3292 3275
···
3442 3433 if (imux_is_smixer)
3443 3434 parm = AC_PWRST_D0;
3444 3435 /* SW0 (17h), AIW0(13h) */
3445 - snd_hda_codec_write(codec, 0x17, 0, AC_VERB_SET_POWER_STATE, parm);
3446 - snd_hda_codec_write(codec, 0x13, 0, AC_VERB_SET_POWER_STATE, parm);
3436 + update_power_state(codec, 0x17, parm);
3437 + update_power_state(codec, 0x13, parm);
3447 3438
3448 3439 parm = AC_PWRST_D3;
3449 3440 set_pin_power_state(codec, 0x1e, &parm);
···
3451 3442 if (spec->dmic_enabled)
3452 3443 set_pin_power_state(codec, 0x22, &parm);
3453 3444 else
3454 - snd_hda_codec_write(codec, 0x22, 0,
3455 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3445 + update_power_state(codec, 0x22, AC_PWRST_D3);
3456 3446
3457 3447 /* SW2(26h), AIW1(14h) */
3458 - snd_hda_codec_write(codec, 0x26, 0, AC_VERB_SET_POWER_STATE, parm);
3459 - snd_hda_codec_write(codec, 0x14, 0, AC_VERB_SET_POWER_STATE, parm);
3448 + update_power_state(codec, 0x26, parm);
3449 + update_power_state(codec, 0x14, parm);
3460 3450
3461 3451 /* outputs */
3462 3452 /* PW0 (19h), SW1 (18h), AOW1 (11h) */
···
3464 3456 /* Smart 5.1 PW2(1bh) */
3465 3457 if (spec->smart51_enabled)
3466 3458 set_pin_power_state(codec, 0x1b, &parm);
3467 - snd_hda_codec_write(codec, 0x18, 0, AC_VERB_SET_POWER_STATE, parm);
3468 - snd_hda_codec_write(codec, 0x11, 0, AC_VERB_SET_POWER_STATE, parm);
3459 + update_power_state(codec, 0x18, parm);
3460 + update_power_state(codec, 0x11, parm);
3469 3461
3470 3462 /* PW7 (23h), SW3 (27h), AOW3 (25h) */
3471 3463 parm = AC_PWRST_D3;
···
3473 3465 /* Smart 5.1 PW1(1ah) */
3474 3466 if (spec->smart51_enabled)
3475 3467 set_pin_power_state(codec, 0x1a, &parm);
3476 - snd_hda_codec_write(codec, 0x27, 0, AC_VERB_SET_POWER_STATE, parm);
3468 + update_power_state(codec, 0x27, parm);
3477 3469
3478 3470 /* Smart 5.1 PW5(1eh) */
3479 3471 if (spec->smart51_enabled)
3480 3472 set_pin_power_state(codec, 0x1e, &parm);
3481 - snd_hda_codec_write(codec, 0x25, 0, AC_VERB_SET_POWER_STATE, parm);
3473 + update_power_state(codec, 0x25, parm);
3482 3474
3483 3475 /* Mono out */
3484 3476 /* SW4(28h)->MW1(29h)-> PW12 (2ah)*/
···
3494 3486 mono_out = 1;
3495 3487 }
3496 3488 parm = mono_out ? AC_PWRST_D0 : AC_PWRST_D3;
3497 - snd_hda_codec_write(codec, 0x28, 0, AC_VERB_SET_POWER_STATE, parm);
3498 - snd_hda_codec_write(codec, 0x29, 0, AC_VERB_SET_POWER_STATE, parm);
3499 - snd_hda_codec_write(codec, 0x2a, 0, AC_VERB_SET_POWER_STATE, parm);
3489 + update_power_state(codec, 0x28, parm);
3490 + update_power_state(codec, 0x29, parm);
3491 + update_power_state(codec, 0x2a, parm);
3500 3492
3501 3493 /* PW 3/4 (1ch/1dh) */
3502 3494 parm = AC_PWRST_D3;
···
3504 3496 set_pin_power_state(codec, 0x1d, &parm);
3505 3497 /* HP Independent Mode, power on AOW3 */
3506 3498 if (spec->hp_independent_mode)
3507 - snd_hda_codec_write(codec, 0x25, 0,
3508 - AC_VERB_SET_POWER_STATE, parm);
3499 + update_power_state(codec, 0x25, parm);
3509 3500
3510 3501 /* force to D0 for internal Speaker */
3511 3502 /* MW0 (16h), AOW0 (10h) */
3512 - snd_hda_codec_write(codec, 0x16, 0, AC_VERB_SET_POWER_STATE,
3513 - imux_is_smixer ? AC_PWRST_D0 : parm);
3514 - snd_hda_codec_write(codec, 0x10, 0, AC_VERB_SET_POWER_STATE,
3515 - mono_out ? AC_PWRST_D0 : parm);
3503 + update_power_state(codec, 0x16, imux_is_smixer ? AC_PWRST_D0 : parm);
3504 + update_power_state(codec, 0x10, mono_out ? AC_PWRST_D0 : parm);
3516 3505 }
3517 3506
3518 3507 static int patch_vt1716S(struct hda_codec *codec)
···
3585 3580 set_pin_power_state(codec, 0x2b, &parm);
3586 3581 parm = AC_PWRST_D0;
3587 3582 /* MUX9/10 (1eh/1fh), AIW 0/1 (10h/11h) */
3588 - snd_hda_codec_write(codec, 0x1e, 0, AC_VERB_SET_POWER_STATE, parm);
3589 - snd_hda_codec_write(codec, 0x1f, 0, AC_VERB_SET_POWER_STATE, parm);
3590 - snd_hda_codec_write(codec, 0x10, 0, AC_VERB_SET_POWER_STATE, parm);
3591 - snd_hda_codec_write(codec, 0x11, 0, AC_VERB_SET_POWER_STATE, parm);
3583 + update_power_state(codec, 0x1e, parm);
3584 + update_power_state(codec, 0x1f, parm);
3585 + update_power_state(codec, 0x10, parm);
3586 + update_power_state(codec, 0x11, parm);
3592 3587
3593 3588 /* outputs */
3594 3589 /* AOW0 (8h)*/
3595 - snd_hda_codec_write(codec, 0x8, 0, AC_VERB_SET_POWER_STATE, parm);
3590 + update_power_state(codec, 0x8, parm);
3596 3591
3597 3592 if (spec->codec_type == VT1802) {
3598 3593 /* PW4 (28h), MW4 (18h), MUX4(38h) */
3599 3594 parm = AC_PWRST_D3;
3600 3595 set_pin_power_state(codec, 0x28, &parm);
3601 - snd_hda_codec_write(codec, 0x18, 0,
3602 - AC_VERB_SET_POWER_STATE, parm);
3603 - snd_hda_codec_write(codec, 0x38, 0,
3604 - AC_VERB_SET_POWER_STATE, parm);
3596 + update_power_state(codec, 0x18, parm);
3597 + update_power_state(codec, 0x38, parm);
3605 3598 } else {
3606 3599 /* PW4 (26h), MW4 (1ch), MUX4(37h) */
3607 3600 parm = AC_PWRST_D3;
3608 3601 set_pin_power_state(codec, 0x26, &parm);
3609 - snd_hda_codec_write(codec, 0x1c, 0,
3610 - AC_VERB_SET_POWER_STATE, parm);
3611 - snd_hda_codec_write(codec, 0x37, 0,
3612 - AC_VERB_SET_POWER_STATE, parm);
3602 + update_power_state(codec, 0x1c, parm);
3603 + update_power_state(codec, 0x37, parm);
3613 3604 }
3614 3605
3615 3606 if (spec->codec_type == VT1802) {
3616 3607 /* PW1 (25h), MW1 (15h), MUX1(35h), AOW1 (9h) */
3617 3608 parm = AC_PWRST_D3;
3618 3609 set_pin_power_state(codec, 0x25, &parm);
3619 - snd_hda_codec_write(codec, 0x15, 0,
3620 - AC_VERB_SET_POWER_STATE, parm);
3621 - snd_hda_codec_write(codec, 0x35, 0,
3622 - AC_VERB_SET_POWER_STATE, parm);
3610 + update_power_state(codec, 0x15, parm);
3611 + update_power_state(codec, 0x35, parm);
3623 3612 } else {
3624 3613 /* PW1 (25h), MW1 (19h), MUX1(35h), AOW1 (9h) */
3625 3614 parm = AC_PWRST_D3;
3626 3615 set_pin_power_state(codec, 0x25, &parm);
3627 - snd_hda_codec_write(codec, 0x19, 0,
3628 - AC_VERB_SET_POWER_STATE, parm);
3629 - snd_hda_codec_write(codec, 0x35, 0,
3630 - AC_VERB_SET_POWER_STATE, parm);
3616 + update_power_state(codec, 0x19, parm);
3617 + update_power_state(codec, 0x35, parm);
3631 3618 }
3632 3619
3633 3620 if (spec->hp_independent_mode)
3634 - snd_hda_codec_write(codec, 0x9, 0,
3635 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3621 + update_power_state(codec, 0x9, AC_PWRST_D0);
3636 3622
3637 3623 /* Class-D */
3638 3624 /* PW0 (24h), MW0(18h/14h), MUX0(34h) */
···
3633 3637 set_pin_power_state(codec, 0x24, &parm);
3634 3638 parm = present ? AC_PWRST_D3 : AC_PWRST_D0;
3635 3639 if (spec->codec_type == VT1802)
3636 - snd_hda_codec_write(codec, 0x14, 0,
3637 - AC_VERB_SET_POWER_STATE, parm);
3640 + update_power_state(codec, 0x14, parm);
3638 3641 else
3639 - snd_hda_codec_write(codec, 0x18, 0,
3640 - AC_VERB_SET_POWER_STATE, parm);
3641 - snd_hda_codec_write(codec, 0x34, 0, AC_VERB_SET_POWER_STATE, parm);
3642 + update_power_state(codec, 0x18, parm);
3643 + update_power_state(codec, 0x34, parm);
3642 3644
3643 3645 /* Mono Out */
3644 3646 present = snd_hda_jack_detect(codec, 0x26);
···
3644 3650 parm = present ? AC_PWRST_D3 : AC_PWRST_D0;
3645 3651 if (spec->codec_type == VT1802) {
3646 3652 /* PW15 (33h), MW8(1ch), MUX8(3ch) */
3647 - snd_hda_codec_write(codec, 0x33, 0,
3648 - AC_VERB_SET_POWER_STATE, parm);
3649 - snd_hda_codec_write(codec, 0x1c, 0,
3650 - AC_VERB_SET_POWER_STATE, parm);
3651 - snd_hda_codec_write(codec, 0x3c, 0,
3652 - AC_VERB_SET_POWER_STATE, parm);
3653 + update_power_state(codec, 0x33, parm);
3654 + update_power_state(codec, 0x1c, parm);
3655 + update_power_state(codec, 0x3c, parm);
3653 3656 } else {
3654 3657 /* PW15 (31h), MW8(17h), MUX8(3bh) */
3655 - snd_hda_codec_write(codec, 0x31, 0,
3656 - AC_VERB_SET_POWER_STATE, parm);
3657 - snd_hda_codec_write(codec, 0x17, 0,
3658 - AC_VERB_SET_POWER_STATE, parm);
3659 - snd_hda_codec_write(codec, 0x3b, 0,
3660 - AC_VERB_SET_POWER_STATE, parm);
3658 + update_power_state(codec, 0x31, parm);
3659 + update_power_state(codec, 0x17, parm);
3660 + update_power_state(codec, 0x3b, parm);
3661 3661 }
3662 3662 /* MW9 (21h) */
3663 3663 if (imux_is_smixer || !is_aa_path_mute(codec))
3664 - snd_hda_codec_write(codec, 0x21, 0,
3665 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3664 + update_power_state(codec, 0x21, AC_PWRST_D0);
3666 3665 else
3667 - snd_hda_codec_write(codec, 0x21, 0,
3668 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3666 + update_power_state(codec, 0x21, AC_PWRST_D3);
3669 3667 }
3670 3668
3671 3669 /* patch for vt2002P */
···
3717 3731 set_pin_power_state(codec, 0x2b, &parm);
3718 3732 parm = AC_PWRST_D0;
3719 3733 /* MUX10/11 (1eh/1fh), AIW 0/1 (10h/11h) */
3720 - snd_hda_codec_write(codec, 0x1e, 0, AC_VERB_SET_POWER_STATE, parm);
3721 - snd_hda_codec_write(codec, 0x1f, 0, AC_VERB_SET_POWER_STATE, parm);
3722 - snd_hda_codec_write(codec, 0x10, 0, AC_VERB_SET_POWER_STATE, parm);
3723 - snd_hda_codec_write(codec, 0x11, 0, AC_VERB_SET_POWER_STATE, parm);
3734 + update_power_state(codec, 0x1e, parm);
3735 + update_power_state(codec, 0x1f, parm);
3736 + update_power_state(codec, 0x10, parm);
3737 + update_power_state(codec, 0x11, parm);
3724 3738
3725 3739 /* outputs */
3726 3740 /* AOW0 (8h)*/
3727 - snd_hda_codec_write(codec, 0x8, 0,
3728 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3741 + update_power_state(codec, 0x8, AC_PWRST_D0);
3729 3742
3730 3743 /* PW4 (28h), MW4 (18h), MUX4(38h) */
3731 3744 parm = AC_PWRST_D3;
3732 3745 set_pin_power_state(codec, 0x28, &parm);
3733 - snd_hda_codec_write(codec, 0x18, 0, AC_VERB_SET_POWER_STATE, parm);
3734 - snd_hda_codec_write(codec, 0x38, 0, AC_VERB_SET_POWER_STATE, parm);
3746 + update_power_state(codec, 0x18, parm);
3747 + update_power_state(codec, 0x38, parm);
3735 3748
3736 3749 /* PW1 (25h), MW1 (15h), MUX1(35h), AOW1 (9h) */
3737 3750 parm = AC_PWRST_D3;
3738 3751 set_pin_power_state(codec, 0x25, &parm);
3739 - snd_hda_codec_write(codec, 0x15, 0, AC_VERB_SET_POWER_STATE, parm);
3740 - snd_hda_codec_write(codec, 0x35, 0, AC_VERB_SET_POWER_STATE, parm);
3752 + update_power_state(codec, 0x15, parm);
3753 + update_power_state(codec, 0x35, parm);
3741 3754 if (spec->hp_independent_mode)
3742 - snd_hda_codec_write(codec, 0x9, 0,
3743 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3755 + update_power_state(codec, 0x9, AC_PWRST_D0);
3744 3756
3745 3757 /* Internal Speaker */
3746 3758 /* PW0 (24h), MW0(14h), MUX0(34h) */
···
3747 3763 parm = AC_PWRST_D3;
3748 3764 set_pin_power_state(codec, 0x24, &parm);
3749 3765 if (present) {
3750 - snd_hda_codec_write(codec, 0x14, 0,
3751 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3752 - snd_hda_codec_write(codec, 0x34, 0,
3753 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3766 + update_power_state(codec, 0x14, AC_PWRST_D3);
3767 + update_power_state(codec, 0x34, AC_PWRST_D3);
3754 3768 } else {
3755 - snd_hda_codec_write(codec, 0x14, 0,
3756 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3757 - snd_hda_codec_write(codec, 0x34, 0,
3758 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3769 + update_power_state(codec, 0x14, AC_PWRST_D0);
3770 + update_power_state(codec, 0x34, AC_PWRST_D0);
3759 3771 }
3760 3772
3761 3773
···
3762 3782 parm = AC_PWRST_D3;
3763 3783 set_pin_power_state(codec, 0x31, &parm);
3764 3784 if (present) {
3765 - snd_hda_codec_write(codec, 0x1c, 0,
3766 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3767 - snd_hda_codec_write(codec, 0x3c, 0,
3768 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3769 - snd_hda_codec_write(codec, 0x3e, 0,
3770 - AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
3785 + update_power_state(codec, 0x1c, AC_PWRST_D3);
3786 + update_power_state(codec, 0x3c, AC_PWRST_D3);
3787 + update_power_state(codec, 0x3e, AC_PWRST_D3);
3771 3788 } else {
3772 - snd_hda_codec_write(codec, 0x1c, 0,
3773 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3774 - snd_hda_codec_write(codec, 0x3c, 0,
3775 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3776 - snd_hda_codec_write(codec, 0x3e, 0,
3777 - AC_VERB_SET_POWER_STATE, AC_PWRST_D0);
3789 + update_power_state(codec, 0x1c, AC_PWRST_D0);
3790 + update_power_state(codec, 0x3c, AC_PWRST_D0);
3791 + update_power_state(codec, 0x3e, AC_PWRST_D0);
3778 3792 }
3779 3793
3780 3794 /* PW15 (33h), MW15 (1dh), MUX15(3dh) */
3781 3795 parm = AC_PWRST_D3;
3782 3796 set_pin_power_state(codec, 0x33, &parm);
3783 - snd_hda_codec_write(codec, 0x1d, 0, AC_VERB_SET_POWER_STATE, parm);
3784 - snd_hda_codec_write(codec, 0x3d, 0, AC_VERB_SET_POWER_STATE, parm);
3797 + update_power_state(codec, 0x1d, parm);
3798 + update_power_state(codec, 0x3d, parm);
3785 3799
3786 3800 }
3787 3801
+14 -11
sound/pci/oxygen/oxygen_mixer.c
··· 618 618 mutex_lock(&chip->mutex);
619 619 reg = oxygen_read_ac97(chip, codec, index);
620 620 mutex_unlock(&chip->mutex);
621 - value->value.integer.value[0] = 31 - (reg & 0x1f);
622 - if (stereo)
623 - value->value.integer.value[1] = 31 - ((reg >> 8) & 0x1f);
621 + if (!stereo) {
622 + value->value.integer.value[0] = 31 - (reg & 0x1f);
623 + } else {
624 + value->value.integer.value[0] = 31 - ((reg >> 8) & 0x1f);
625 + value->value.integer.value[1] = 31 - (reg & 0x1f);
626 + }
624 627 return 0;
625 628 }
626 629
··· 639 636
640 637 mutex_lock(&chip->mutex);
641 638 oldreg = oxygen_read_ac97(chip, codec, index);
642 - newreg = oldreg;
643 - newreg = (newreg & ~0x1f) |
644 - (31 - (value->value.integer.value[0] & 0x1f));
645 - if (stereo)
646 - newreg = (newreg & ~0x1f00) |
647 - ((31 - (value->value.integer.value[1] & 0x1f)) << 8);
648 - else
649 - newreg = (newreg & ~0x1f00) | ((newreg & 0x1f) << 8);
639 + if (!stereo) {
640 + newreg = oldreg & ~0x1f;
641 + newreg |= 31 - (value->value.integer.value[0] & 0x1f);
642 + } else {
643 + newreg = oldreg & ~0x1f1f;
644 + newreg |= (31 - (value->value.integer.value[0] & 0x1f)) << 8;
645 + newreg |= 31 - (value->value.integer.value[1] & 0x1f);
646 + }
650 647 change = newreg != oldreg;
651 648 if (change)
652 649 oxygen_write_ac97(chip, codec, index, newreg);
+1 -1
sound/soc/codecs/cs42l73.c
··· 1113 1113 priv->config[id].mmcc &= 0xC0;
1114 1114 priv->config[id].mmcc |= cs42l73_mclk_coeffs[mclk_coeff].mmcc;
1115 1115 priv->config[id].spc &= 0xFC;
1116 - priv->config[id].spc &= MCK_SCLK_64FS;
1116 + priv->config[id].spc |= MCK_SCLK_MCLK;
1117 1117 } else {
1118 1118 /* CS42L73 Slave */
1119 1119 priv->config[id].spc &= 0xFC;
+9 -1
sound/soc/codecs/wm5100.c
··· 2053 2053 if (wm5100->jack_detecting) {
2054 2054 dev_dbg(wm5100->dev, "Microphone detected\n");
2055 2055 wm5100->jack_mic = true;
2056 + wm5100->jack_detecting = false;
2056 2057 snd_soc_jack_report(wm5100->jack,
2057 2058 SND_JACK_HEADSET,
2058 2059 SND_JACK_HEADSET | SND_JACK_BTN_0);
··· 2430 2429 .cache_type = REGCACHE_RBTREE,
2431 2430 };
2432 2431
2432 + static const unsigned int wm5100_mic_ctrl_reg[] = {
2433 + WM5100_IN1L_CONTROL,
2434 + WM5100_IN2L_CONTROL,
2435 + WM5100_IN3L_CONTROL,
2436 + WM5100_IN4L_CONTROL,
2437 + };
2438 +
2433 2439 static __devinit int wm5100_i2c_probe(struct i2c_client *i2c,
2434 2440 const struct i2c_device_id *id)
2435 2441 {
··· 2567 2559 }
2568 2560
2569 2561 for (i = 0; i < ARRAY_SIZE(wm5100->pdata.in_mode); i++) {
2570 - regmap_update_bits(wm5100->regmap, WM5100_IN1L_CONTROL,
2562 + regmap_update_bits(wm5100->regmap, wm5100_mic_ctrl_reg[i],
2571 2563 WM5100_IN1_MODE_MASK |
2572 2564 WM5100_IN1_DMIC_SUP_MASK,
2573 2565 (wm5100->pdata.in_mode[i] <<
+4 -4
sound/soc/codecs/wm8962.c
··· 96 96 struct wm8962_priv *wm8962 = container_of(nb, struct wm8962_priv, \
97 97 disable_nb[n]); \
98 98 if (event & REGULATOR_EVENT_DISABLE) { \
99 - regcache_cache_only(wm8962->regmap, true); \
99 + regcache_mark_dirty(wm8962->regmap); \
100 100 } \
101 101 return 0; \
102 102 }
··· 2657 2657 case SNDRV_PCM_FORMAT_S16_LE:
2658 2658 break;
2659 2659 case SNDRV_PCM_FORMAT_S20_3LE:
2660 - aif0 |= 0x40;
2660 + aif0 |= 0x4;
2661 2661 break;
2662 2662 case SNDRV_PCM_FORMAT_S24_LE:
2663 - aif0 |= 0x80;
2663 + aif0 |= 0x8;
2664 2664 break;
2665 2665 case SNDRV_PCM_FORMAT_S32_LE:
2666 - aif0 |= 0xc0;
2666 + aif0 |= 0xc;
2667 2667 break;
2668 2668 default:
2669 2669 return -EINVAL;
+10 -1
sound/soc/codecs/wm8994.c
··· 770 770 {
771 771 struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec);
772 772
773 + pm_runtime_get_sync(codec->dev);
774 +
773 775 wm8994->vmid_refcount++;
774 776
775 777 dev_dbg(codec->dev, "Referencing VMID, refcount is now %d\n",
··· 785 783 WM8994_VMID_RAMP_MASK,
786 784 WM8994_STARTUP_BIAS_ENA |
787 785 WM8994_VMID_BUF_ENA |
788 - (0x11 << WM8994_VMID_RAMP_SHIFT));
786 + (0x3 << WM8994_VMID_RAMP_SHIFT));
787 +
788 + /* Remove discharge for line out */
789 + snd_soc_update_bits(codec, WM8994_ANTIPOP_1,
790 + WM8994_LINEOUT1_DISCH |
791 + WM8994_LINEOUT2_DISCH, 0);
789 792
790 793 /* Main bias enable, VMID=2x40k */
791 794 snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_1,
··· 844 837 WM8994_VMID_BUF_ENA |
845 838 WM8994_VMID_RAMP_MASK, 0);
846 839 }
840 +
841 + pm_runtime_put(codec->dev);
847 842 }
848 843
849 844 static int vmid_event(struct snd_soc_dapm_widget *w,
+1 -1
sound/soc/codecs/wm8996.c
··· 108 108 struct wm8996_priv *wm8996 = container_of(nb, struct wm8996_priv, \
109 109 disable_nb[n]); \
110 110 if (event & REGULATOR_EVENT_DISABLE) { \
111 - regcache_cache_only(wm8996->regmap, true); \
111 + regcache_mark_dirty(wm8996->regmap); \
112 112 } \
113 113 return 0; \
114 114 }
+12 -6
sound/soc/codecs/wm_hubs.c
··· 586 586 };
587 587
588 588 static const struct snd_kcontrol_new line2_mix[] = {
589 - SOC_DAPM_SINGLE("IN2R Switch", WM8993_LINE_MIXER2, 2, 1, 0),
590 - SOC_DAPM_SINGLE("IN2L Switch", WM8993_LINE_MIXER2, 1, 1, 0),
589 + SOC_DAPM_SINGLE("IN1L Switch", WM8993_LINE_MIXER2, 2, 1, 0),
590 + SOC_DAPM_SINGLE("IN1R Switch", WM8993_LINE_MIXER2, 1, 1, 0),
591 591 SOC_DAPM_SINGLE("Output Switch", WM8993_LINE_MIXER2, 0, 1, 0),
592 592 };
593 593
594 594 static const struct snd_kcontrol_new line2n_mix[] = {
595 - SOC_DAPM_SINGLE("Left Output Switch", WM8993_LINE_MIXER2, 6, 1, 0),
596 - SOC_DAPM_SINGLE("Right Output Switch", WM8993_LINE_MIXER2, 5, 1, 0),
595 + SOC_DAPM_SINGLE("Left Output Switch", WM8993_LINE_MIXER2, 5, 1, 0),
596 + SOC_DAPM_SINGLE("Right Output Switch", WM8993_LINE_MIXER2, 6, 1, 0),
597 597 };
598 598
599 599 static const struct snd_kcontrol_new line2p_mix[] = {
··· 612 612
613 613 SND_SOC_DAPM_SUPPLY("MICBIAS2", WM8993_POWER_MANAGEMENT_1, 5, 0, NULL, 0),
614 614 SND_SOC_DAPM_SUPPLY("MICBIAS1", WM8993_POWER_MANAGEMENT_1, 4, 0, NULL, 0),
615 +
616 + SND_SOC_DAPM_SUPPLY("LINEOUT_VMID_BUF", WM8993_ANTIPOP1, 7, 0, NULL, 0),
615 617
616 618 SND_SOC_DAPM_MIXER("IN1L PGA", WM8993_POWER_MANAGEMENT_2, 6, 0,
617 619 in1l_pga, ARRAY_SIZE(in1l_pga)),
··· 836 834 };
837 835
838 836 static const struct snd_soc_dapm_route lineout1_se_routes[] = {
837 + { "LINEOUT1N Mixer", NULL, "LINEOUT_VMID_BUF" },
839 838 { "LINEOUT1N Mixer", "Left Output Switch", "Left Output PGA" },
840 839 { "LINEOUT1N Mixer", "Right Output Switch", "Right Output PGA" },
841 840
841 + { "LINEOUT1P Mixer", NULL, "LINEOUT_VMID_BUF" },
842 842 { "LINEOUT1P Mixer", "Left Output Switch", "Left Output PGA" },
843 843
844 844 { "LINEOUT1N Driver", NULL, "LINEOUT1N Mixer" },
··· 848 844 };
849 845
850 846 static const struct snd_soc_dapm_route lineout2_diff_routes[] = {
851 - { "LINEOUT2 Mixer", "IN2L Switch", "IN2L PGA" },
852 - { "LINEOUT2 Mixer", "IN2R Switch", "IN2R PGA" },
847 + { "LINEOUT2 Mixer", "IN1L Switch", "IN1L PGA" },
848 + { "LINEOUT2 Mixer", "IN1R Switch", "IN1R PGA" },
853 849 { "LINEOUT2 Mixer", "Output Switch", "Right Output PGA" },
854 850
855 851 { "LINEOUT2N Driver", NULL, "LINEOUT2 Mixer" },
··· 857 853 };
858 854
859 855 static const struct snd_soc_dapm_route lineout2_se_routes[] = {
856 + { "LINEOUT2N Mixer", NULL, "LINEOUT_VMID_BUF" },
860 857 { "LINEOUT2N Mixer", "Left Output Switch", "Left Output PGA" },
861 858 { "LINEOUT2N Mixer", "Right Output Switch", "Right Output PGA" },
862 859
860 + { "LINEOUT2P Mixer", NULL, "LINEOUT_VMID_BUF" },
863 861 { "LINEOUT2P Mixer", "Right Output Switch", "Right Output PGA" },
864 862
865 863 { "LINEOUT2N Driver", NULL, "LINEOUT2N Mixer" },
+1 -64
sound/soc/samsung/neo1973_wm8753.c
··· 230 230
231 231 /* GTA02 specific routes and controls */
232 232
233 - #ifdef CONFIG_MACH_NEO1973_GTA02
234 -
235 233 static int gta02_speaker_enabled;
236 234
237 235 static int lm4853_set_spk(struct snd_kcontrol *kcontrol,
··· 309 311 return 0;
310 312 }
311 313
312 - #else
313 - static int neo1973_gta02_wm8753_init(struct snd_soc_code *codec) { return 0; }
314 - #endif
315 -
316 314 static int neo1973_wm8753_init(struct snd_soc_pcm_runtime *rtd)
317 315 {
318 316 struct snd_soc_codec *codec = rtd->codec;
··· 316 322 int ret;
317 323
318 324 /* set up NC codec pins */
319 - if (machine_is_neo1973_gta01()) {
320 - snd_soc_dapm_nc_pin(dapm, "LOUT2");
321 - snd_soc_dapm_nc_pin(dapm, "ROUT2");
322 - }
323 325 snd_soc_dapm_nc_pin(dapm, "OUT3");
324 326 snd_soc_dapm_nc_pin(dapm, "OUT4");
325 327 snd_soc_dapm_nc_pin(dapm, "LINE1");
··· 360 370 return 0;
361 371 }
362 372
363 - /* GTA01 specific controls */
364 -
365 - #ifdef CONFIG_MACH_NEO1973_GTA01
366 -
367 - static const struct snd_soc_dapm_route neo1973_lm4857_routes[] = {
368 - {"Amp IN", NULL, "ROUT1"},
369 - {"Amp IN", NULL, "LOUT1"},
370 -
371 - {"Handset Spk", NULL, "Amp EP"},
372 - {"Stereo Out", NULL, "Amp LS"},
373 - {"Headphone", NULL, "Amp HP"},
374 - };
375 -
376 - static const struct snd_soc_dapm_widget neo1973_lm4857_dapm_widgets[] = {
377 - SND_SOC_DAPM_SPK("Handset Spk", NULL),
378 - SND_SOC_DAPM_SPK("Stereo Out", NULL),
379 - SND_SOC_DAPM_HP("Headphone", NULL),
380 - };
381 -
382 - static int neo1973_lm4857_init(struct snd_soc_dapm_context *dapm)
383 - {
384 - int ret;
385 -
386 - ret = snd_soc_dapm_new_controls(dapm, neo1973_lm4857_dapm_widgets,
387 - ARRAY_SIZE(neo1973_lm4857_dapm_widgets));
388 - if (ret)
389 - return ret;
390 -
391 - ret = snd_soc_dapm_add_routes(dapm, neo1973_lm4857_routes,
392 - ARRAY_SIZE(neo1973_lm4857_routes));
393 - if (ret)
394 - return ret;
395 -
396 - snd_soc_dapm_ignore_suspend(dapm, "Stereo Out");
397 - snd_soc_dapm_ignore_suspend(dapm, "Handset Spk");
398 - snd_soc_dapm_ignore_suspend(dapm, "Headphone");
399 -
400 - return 0;
401 - }
402 -
403 - #else
404 - static int neo1973_lm4857_init(struct snd_soc_dapm_context *dapm) { return 0; };
405 - #endif
406 -
407 373 static struct snd_soc_dai_link neo1973_dai[] = {
408 374 { /* Hifi Playback - for similatious use with voice below */
409 375 .name = "WM8753",
··· 386 440 .name = "dfbmcs320",
387 441 .codec_name = "dfbmcs320.0",
388 442 },
389 - {
390 - .name = "lm4857",
391 - .codec_name = "lm4857.0-007c",
392 - .init = neo1973_lm4857_init,
393 - },
394 443 };
395 444
396 445 static struct snd_soc_codec_conf neo1973_codec_conf[] = {
··· 395 454 },
396 455 };
397 456
398 - #ifdef CONFIG_MACH_NEO1973_GTA02
399 457 static const struct gpio neo1973_gta02_gpios[] = {
400 458 { GTA02_GPIO_HP_IN, GPIOF_OUT_INIT_HIGH, "GTA02_HP_IN" },
401 459 { GTA02_GPIO_AMP_SHUT, GPIOF_OUT_INIT_HIGH, "GTA02_AMP_SHUT" },
402 460 };
403 - #else
404 - static const struct gpio neo1973_gta02_gpios[] = {};
405 - #endif
406 461
407 462 static struct snd_soc_card neo1973 = {
408 463 .name = "neo1973",
··· 417 480 {
418 481 int ret;
419 482
420 - if (!machine_is_neo1973_gta01() && !machine_is_neo1973_gta02())
483 + if (!machine_is_neo1973_gta02())
421 484 return -ENODEV;
422 485
423 486 if (machine_is_neo1973_gta02()) {
+11
sound/soc/soc-core.c
··· 567 567 if (!codec->suspended && codec->driver->suspend) {
568 568 switch (codec->dapm.bias_level) {
569 569 case SND_SOC_BIAS_STANDBY:
570 + /*
571 + * If the CODEC is capable of idle
572 + * bias off then being in STANDBY
573 + * means it's doing something,
574 + * otherwise fall through.
575 + */
576 + if (codec->dapm.idle_bias_off) {
577 + dev_dbg(codec->dev,
578 + "idle_bias_off CODEC on over suspend\n");
579 + break;
580 + }
570 581 case SND_SOC_BIAS_OFF:
571 582 codec->driver->suspend(codec);
572 583 codec->suspended = 1;
+8
sound/usb/quirks-table.h
··· 1618 1618 }
1619 1619 },
1620 1620 {
1621 + /* Edirol UM-3G */
1622 + USB_DEVICE_VENDOR_SPEC(0x0582, 0x0108),
1623 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
1624 + .ifnum = 0,
1625 + .type = QUIRK_MIDI_STANDARD_INTERFACE
1626 + }
1627 + },
1628 + {
1621 1629 /* Boss JS-8 Jam Station */
1622 1630 USB_DEVICE(0x0582, 0x0109),
1623 1631 .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
+2 -5
tools/perf/Makefile
··· 104 104
105 105 CFLAGS = -fno-omit-frame-pointer -ggdb3 -Wall -Wextra -std=gnu99 $(CFLAGS_WERROR) $(CFLAGS_OPTIMIZE) -D_FORTIFY_SOURCE=2 $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
106 106 EXTLIBS = -lpthread -lrt -lelf -lm
107 - ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
107 + ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
108 108 ALL_LDFLAGS = $(LDFLAGS)
109 109 STRIP ?= strip
110 110
··· 168 168
169 169 ### --- END CONFIGURATION SECTION ---
170 170
171 - # Those must not be GNU-specific; they are shared with perl/ which may
172 - # be built by a different compiler. (Note that this is an artifact now
173 - # but it still might be nice to keep that distinction.)
174 - BASIC_CFLAGS = -Iutil/include -Iarch/$(ARCH)/include
171 + BASIC_CFLAGS = -Iutil/include -Iarch/$(ARCH)/include -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
175 172 BASIC_LDFLAGS =
176 173
177 174 # Guard against environment variables
-2
tools/perf/builtin-probe.c
··· 20 20 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
21 21 *
22 22 */
23 - #define _GNU_SOURCE
24 23 #include <sys/utsname.h>
25 24 #include <sys/types.h>
26 25 #include <sys/stat.h>
··· 30 31 #include <stdlib.h>
31 32 #include <string.h>
32 33
33 - #undef _GNU_SOURCE
34 34 #include "perf.h"
35 35 #include "builtin.h"
36 36 #include "util/util.h"
+10 -3
tools/perf/builtin-top.c
··· 89 89
90 90 static void perf_top__update_print_entries(struct perf_top *top)
91 91 {
92 - top->print_entries = top->winsize.ws_row;
93 -
94 92 if (top->print_entries > 9)
95 93 top->print_entries -= 9;
96 94 }
··· 98 100 struct perf_top *top = arg;
99 101
100 102 get_term_dimensions(&top->winsize);
103 + if (!top->print_entries
104 + || (top->print_entries+4) > top->winsize.ws_row) {
105 + top->print_entries = top->winsize.ws_row;
106 + } else {
107 + top->print_entries += 4;
108 + top->winsize.ws_row = top->print_entries;
109 + }
101 110 perf_top__update_print_entries(top);
102 111 }
··· 458 453 };
459 454 perf_top__sig_winch(SIGWINCH, NULL, top);
460 455 sigaction(SIGWINCH, &act, NULL);
461 - } else
456 + } else {
457 + perf_top__sig_winch(SIGWINCH, NULL, top);
462 458 signal(SIGWINCH, SIG_DFL);
459 + }
463 460 break;
464 461 case 'E':
465 462 if (top->evlist->nr_entries > 1) {
+1 -1
tools/perf/util/header.c
··· 2105 2105 strncpy(ev.event_type.event_type.name, name, MAX_EVENT_NAME - 1);
2106 2106
2107 2107 ev.event_type.header.type = PERF_RECORD_HEADER_EVENT_TYPE;
2108 - size = strlen(name);
2108 + size = strlen(ev.event_type.event_type.name);
2109 2109 size = ALIGN(size, sizeof(u64));
2110 2110 ev.event_type.header.size = sizeof(ev.event_type) -
2111 2111 (sizeof(ev.event_type.event_type.name) - size);
-2
tools/perf/util/probe-event.c
··· 19 19 *
20 20 */
21 21
22 - #define _GNU_SOURCE
23 22 #include <sys/utsname.h>
24 23 #include <sys/types.h>
25 24 #include <sys/stat.h>
··· 32 33 #include <limits.h>
33 34 #include <elf.h>
34 35
35 - #undef _GNU_SOURCE
36 36 #include "util.h"
37 37 #include "event.h"
38 38 #include "string.h"
-1
tools/perf/util/symbol.c
··· 1 - #define _GNU_SOURCE
2 1 #include <ctype.h>
3 2 #include <dirent.h>
4 3 #include <errno.h>
+1 -2
tools/perf/util/trace-event-parse.c
··· 21 21 * The parts for function graph printing was taken and modified from the
22 22 * Linux Kernel that were written by Frederic Weisbecker.
23 23 */
24 - #define _GNU_SOURCE
24 +
25 25 #include <stdio.h>
26 26 #include <stdlib.h>
27 27 #include <string.h>
28 28 #include <ctype.h>
29 29 #include <errno.h>
30 30
31 - #undef _GNU_SOURCE
32 31 #include "../perf.h"
33 32 #include "util.h"
34 33 #include "trace-event.h"
-2
tools/perf/util/ui/browsers/hists.c
··· 1 - #define _GNU_SOURCE
2 1 #include <stdio.h>
3 - #undef _GNU_SOURCE
4 2 #include "../libslang.h"
5 3 #include <stdlib.h>
6 4 #include <string.h>
-1
tools/perf/util/ui/helpline.c
··· 1 - #define _GNU_SOURCE
2 1 #include <stdio.h>
3 2 #include <stdlib.h>
4 3 #include <string.h>
-1
tools/perf/util/util.h
··· 40 40 #define decimal_length(x) ((int)(sizeof(x) * 2.56 + 0.5) + 1)
41 41
42 42 #define _ALL_SOURCE 1
43 - #define _GNU_SOURCE 1
44 43 #define _BSD_SOURCE 1
45 44 #define HAS_BOOL
46 45
+1 -1
virt/kvm/kvm_main.c
··· 1543 1543 if (memslot && memslot->dirty_bitmap) {
1544 1544 unsigned long rel_gfn = gfn - memslot->base_gfn;
1545 1545
1546 - if (!__test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
1546 + if (!test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
1547 1547 memslot->nr_dirty_pages++;
1548 1548 }
1549 1549 }