Merge branches 'release', 'bugzilla-12011', 'bugzilla-12632', 'misc' and 'suspend' into release

Len Brown 5acfac5a 5423a0cb

+5021 -1544
-1
CREDITS
··· 2166 2166 2167 2167 N: Pavel Machek 2168 2168 E: pavel@ucw.cz 2169 - E: pavel@suse.cz 2170 2169 D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd 2171 2170 D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB, 2172 2171 D: work on suspend-to-ram/disk, killing duplicates from ioctl32
+1 -1
Documentation/ABI/testing/sysfs-firmware-memmap
··· 1 1 What: /sys/firmware/memmap/ 2 2 Date: June 2008 3 - Contact: Bernhard Walle <bwalle@suse.de> 3 + Contact: Bernhard Walle <bernhard.walle@gmx.de> 4 4 Description: 5 5 On all platforms, the firmware provides a memory map which the 6 6 kernel reads. The resources from that memory map are registered
+1 -1
Documentation/PCI/PCIEBUS-HOWTO.txt
··· 93 93 94 94 int pcie_port_service_register(struct pcie_port_service_driver *new) 95 95 96 - This API replaces the Linux Driver Model's pci_module_init API. A 96 + This API replaces the Linux Driver Model's pci_register_driver API. A 97 97 service driver should always calls pcie_port_service_register at 98 98 module init. Note that after service driver being loaded, calls 99 99 such as pci_enable_device(dev) and pci_set_master(dev) are no longer
+2 -4
Documentation/cgroups/cgroups.txt
··· 252 252 When a task is moved from one cgroup to another, it gets a new 253 253 css_set pointer - if there's an already existing css_set with the 254 254 desired collection of cgroups then that group is reused, else a new 255 - css_set is allocated. Note that the current implementation uses a 256 - linear search to locate an appropriate existing css_set, so isn't 257 - very efficient. A future version will use a hash table for better 258 - performance. 255 + css_set is allocated. The appropriate existing css_set is located by 256 + looking into a hash table. 259 257 260 258 To allow access from a cgroup to the css_sets (and hence tasks) 261 259 that comprise it, a set of cg_cgroup_link objects form a lattice;
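As a minimal sketch of the find-or-allocate lookup described above (illustrative names only, not the kernel's actual css_set code):

    /* Look up an existing entry in a hash table keyed by the desired
     * combination; allocate only when no match exists.  The key stands
     * in for the set of cgroup pointers the kernel actually hashes. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NBUCKETS 64

    struct css_set {
            unsigned long key;
            struct css_set *next;           /* hash-chain link */
    };

    static struct css_set *buckets[NBUCKETS];

    static unsigned int hash(unsigned long key)
    {
            return key % NBUCKETS;          /* toy hash; anything uniform works */
    }

    static struct css_set *find_or_alloc_css_set(unsigned long key)
    {
            struct css_set *cs;

            for (cs = buckets[hash(key)]; cs; cs = cs->next)
                    if (cs->key == key)
                            return cs;      /* reuse existing css_set */

            cs = calloc(1, sizeof(*cs));    /* none found: allocate */
            if (!cs)
                    abort();
            cs->key = key;
            cs->next = buckets[hash(key)];
            buckets[hash(key)] = cs;
            return cs;
    }

    int main(void)
    {
            struct css_set *a = find_or_alloc_css_set(42);
            struct css_set *b = find_or_alloc_css_set(42);

            printf("reused: %s\n", a == b ? "yes" : "no");  /* "yes" */
            return 0;
    }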
+36 -27
Documentation/cgroups/cpusets.txt
··· 142 142 - in fork and exit, to attach and detach a task from its cpuset. 143 143 - in sched_setaffinity, to mask the requested CPUs by what's 144 144 allowed in that tasks cpuset. 145 - - in sched.c migrate_all_tasks(), to keep migrating tasks within 145 + - in sched.c migrate_live_tasks(), to keep migrating tasks within 146 146 the CPUs allowed by their cpuset, if possible. 147 147 - in the mbind and set_mempolicy system calls, to mask the requested 148 148 Memory Nodes by what's allowed in that tasks cpuset. ··· 175 175 - mem_exclusive flag: is memory placement exclusive? 176 176 - mem_hardwall flag: is memory allocation hardwalled 177 177 - memory_pressure: measure of how much paging pressure in cpuset 178 + - memory_spread_page flag: if set, spread page cache evenly on allowed nodes 179 + - memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes 180 + - sched_load_balance flag: if set, load balance within CPUs on that cpuset 181 + - sched_relax_domain_level: the searching range when migrating tasks 178 182 179 183 In addition, the root cpuset only has the following file: 180 184 - memory_pressure_enabled flag: compute memory_pressure? ··· 256 252 257 253 This is useful both on tightly managed systems running a wide mix of 258 254 submitted jobs, which may choose to terminate or re-prioritize jobs that 259 - are trying to use more memory than allowed on the nodes assigned them, 255 + are trying to use more memory than allowed on the nodes assigned to them, 260 256 and with tightly coupled, long running, massively parallel scientific 261 257 computing jobs that will dramatically fail to meet required performance 262 258 goals if they start to use more memory than allowed to them. ··· 382 378 The algorithmic cost of load balancing and its impact on key shared 383 379 kernel data structures such as the task list increases more than 384 380 linearly with the number of CPUs being balanced. So the scheduler 385 - has support to partition the systems CPUs into a number of sched 381 + has support to partition the systems CPUs into a number of sched 386 382 domains such that it only load balances within each sched domain. 387 383 Each sched domain covers some subset of the CPUs in the system; 388 384 no two sched domains overlap; some CPUs might not be in any sched ··· 489 485 The internal kernel cpuset to scheduler interface passes from the 490 486 cpuset code to the scheduler code a partition of the load balanced 491 487 CPUs in the system. This partition is a set of subsets (represented 492 - as an array of cpumask_t) of CPUs, pairwise disjoint, that cover all 493 - the CPUs that must be load balanced. 488 + as an array of struct cpumask) of CPUs, pairwise disjoint, that cover 489 + all the CPUs that must be load balanced. 494 490 495 - Whenever the 'sched_load_balance' flag changes, or CPUs come or go 496 - from a cpuset with this flag enabled, or a cpuset with this flag 497 - enabled is removed, the cpuset code builds a new such partition and 498 - passes it to the scheduler sched domain setup code, to have the sched 499 - domains rebuilt as necessary. 
491 + The cpuset code builds a new such partition and passes it to the 492 + scheduler sched domain setup code, to have the sched domains rebuilt 493 + as necessary, whenever: 494 + - the 'sched_load_balance' flag of a cpuset with non-empty CPUs changes, 495 + - or CPUs come or go from a cpuset with this flag enabled, 496 + - or 'sched_relax_domain_level' value of a cpuset with non-empty CPUs 497 + and with this flag enabled changes, 498 + - or a cpuset with non-empty CPUs and with this flag enabled is removed, 499 + - or a cpu is offlined/onlined. 500 500 501 501 This partition exactly defines what sched domains the scheduler should 502 - setup - one sched domain for each element (cpumask_t) in the partition. 502 + setup - one sched domain for each element (struct cpumask) in the 503 + partition. 503 504 504 505 The scheduler remembers the currently active sched domain partitions. 505 506 When the scheduler routine partition_sched_domains() is invoked from ··· 568 559 requests 0 and others are -1 then 0 is used. 569 560 570 561 Note that modifying this file will have both good and bad effects, 571 - and whether it is acceptable or not will be depend on your situation. 562 + and whether it is acceptable or not depends on your situation. 572 563 Don't modify this file if you are not sure. 573 564 574 565 If your situation is: ··· 609 600 610 601 If a cpuset has its 'cpus' modified, then each task in that cpuset 611 602 will have its allowed CPU placement changed immediately. Similarly, 612 - if a tasks pid is written to a cpusets 'tasks' file, in either its 613 - current cpuset or another cpuset, then its allowed CPU placement is 614 - changed immediately. If such a task had been bound to some subset 615 - of its cpuset using the sched_setaffinity() call, the task will be 616 - allowed to run on any CPU allowed in its new cpuset, negating the 617 - affect of the prior sched_setaffinity() call. 603 + if a tasks pid is written to another cpusets 'tasks' file, then its 604 + allowed CPU placement is changed immediately. If such a task had been 605 + bound to some subset of its cpuset using the sched_setaffinity() call, 606 + the task will be allowed to run on any CPU allowed in its new cpuset, 607 + negating the effect of the prior sched_setaffinity() call. 618 608 619 609 In summary, the memory placement of a task whose cpuset is changed is 620 610 updated by the kernel, on the next allocation of a page for that task, 621 - but the processor placement is not updated, until that tasks pid is 622 - rewritten to the 'tasks' file of its cpuset. This is done to avoid 623 - impacting the scheduler code in the kernel with a check for changes 624 - in a tasks processor placement. 611 + and the processor placement is updated immediately. 625 612 626 613 Normally, once a page is allocated (given a physical page 627 614 of main memory) then that page stays on whatever node it ··· 686 681 # The next line should display '/Charlie' 687 682 cat /proc/self/cpuset 688 683 689 - In the future, a C library interface to cpusets will likely be 690 - available. For now, the only way to query or modify cpusets is 691 - via the cpuset file system, using the various cd, mkdir, echo, cat, 692 - rmdir commands from the shell, or their equivalent from C. 684 + There are ways to query or modify cpusets: 685 + - via the cpuset file system directly, using the various cd, mkdir, echo, 686 + cat, rmdir commands from the shell, or their equivalent from C. 687 + - via the C library libcpuset. 
688 + - via the C library libcgroup. 689 + (http://sourceforge.net/projects/libcg/) 690 + - via the python application cset. 691 + (http://developer.novell.com/wiki/index.php/Cpuset) 693 692 694 693 The sched_setaffinity calls can also be done at the shell prompt using 695 694 SGI's runon or Robert Love's taskset. The mbind and set_mempolicy ··· 765 756 766 757 is equivalent to 767 758 768 - mount -t cgroup -ocpuset X /dev/cpuset 759 + mount -t cgroup -ocpuset,noprefix X /dev/cpuset 769 760 echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent 770 761 771 762 2.2 Adding/removing cpus
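The shell sequence above has a direct C equivalent; a minimal sketch, assuming the hierarchy is mounted at /dev/cpuset with the noprefix option as shown (error handling abbreviated):

    /* Create cpuset "Charlie", give it CPUs 2-3 and memory node 1, and
     * move the current task into it -- the C equivalent of the
     * mkdir/echo sequence shown earlier. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void write_file(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror(path);
                    return;
            }
            fputs(val, f);
            fclose(f);
    }

    int main(void)
    {
            char pid[16];

            if (mkdir("/dev/cpuset/Charlie", 0755) && errno != EEXIST)
                    perror("mkdir");
            write_file("/dev/cpuset/Charlie/cpus", "2-3");
            write_file("/dev/cpuset/Charlie/mems", "1");
            snprintf(pid, sizeof(pid), "%d", getpid());
            write_file("/dev/cpuset/Charlie/tasks", pid);   /* attach self */
            return 0;
    }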
+101
Documentation/hwmon/hpfall.c
··· 1 + /* Disk protection for HP machines. 2 + * 3 + * Copyright 2008 Eric Piel 4 + * Copyright 2009 Pavel Machek <pavel@suse.cz> 5 + * 6 + * GPLv2. 7 + */ 8 + 9 + #include <stdio.h> 10 + #include <stdlib.h> 11 + #include <unistd.h> 12 + #include <fcntl.h> 13 + #include <sys/stat.h> 14 + #include <sys/types.h> 15 + #include <string.h> 16 + #include <stdint.h> 17 + #include <errno.h> 18 + #include <signal.h> 19 + 20 + void write_int(char *path, int i) 21 + { 22 + char buf[1024]; 23 + int fd = open(path, O_RDWR); 24 + if (fd < 0) { 25 + perror("open"); 26 + exit(1); 27 + } 28 + sprintf(buf, "%d", i); 29 + if (write(fd, buf, strlen(buf)) != strlen(buf)) { 30 + perror("write"); 31 + exit(1); 32 + } 33 + close(fd); 34 + } 35 + 36 + void set_led(int on) 37 + { 38 + write_int("/sys/class/leds/hp::hddprotect/brightness", on); 39 + } 40 + 41 + void protect(int seconds) 42 + { 43 + write_int("/sys/block/sda/device/unload_heads", seconds*1000); 44 + } 45 + 46 + int on_ac(void) 47 + { 48 + return 1; /* TODO: read /sys/class/power_supply/AC0/online */ 49 + } 50 + 51 + int lid_open(void) 52 + { 53 + return 1; /* TODO: read /proc/acpi/button/lid/LID/state */ 54 + } 55 + 56 + void ignore_me(int sig) 57 + { 58 + protect(0); 59 + set_led(0); 60 + 61 + } 62 + 63 + int main(int argc, char* argv[]) 64 + { 65 + int fd, ret; 66 + 67 + fd = open("/dev/freefall", O_RDONLY); 68 + if (fd < 0) { 69 + perror("open"); 70 + return EXIT_FAILURE; 71 + } 72 + 73 + signal(SIGALRM, ignore_me); 74 + 75 + for (;;) { 76 + unsigned char count; 77 + 78 + ret = read(fd, &count, sizeof(count)); 79 + alarm(0); 80 + if ((ret == -1) && (errno == EINTR)) { 81 + /* Alarm expired, time to unpark the heads */ 82 + continue; 83 + } 84 + 85 + if (ret != sizeof(count)) { 86 + perror("read"); 87 + break; 88 + } 89 + 90 + protect(21); 91 + set_led(1); 92 + if (on_ac() || lid_open()) { 93 + alarm(2); 94 + } else { 95 + alarm(20); 96 + } 97 + } 98 + 99 + close(fd); 100 + return EXIT_SUCCESS; 101 + }
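Usage note: hpfall.c is a standalone userspace tool. Something like "cc -o hpfall hpfall.c" builds it, and it needs to run as root so it can write to the unload_heads and hp::hddprotect sysfs files it opens.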
+8
Documentation/hwmon/lis3lv02d
··· 33 33 This driver also provides an absolute input class device, allowing 34 34 the laptop to act as a pinball machine-esque joystick. 35 35 36 + Another feature of the driver is a misc device called "freefall" that 37 + acts similarly to /dev/rtc and reacts to free-fall interrupts received 38 + from the device. It supports blocking operations, poll/select and 39 + fasync operation modes. You must read 1 byte from the device. The 40 + result is the number of free-fall interrupts since the last successful 41 + read (or 255 if the number of interrupts would not fit). 42 + 43 + 36 44 Axes orientation 37 45 ---------------- 38 46
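For the poll/select mode mentioned above, a minimal sketch (the /dev/freefall node name comes from the text; the rest is illustrative):

    /* Wait for a free-fall event with poll(2), then read the one-byte
     * interrupt counter, as the text above describes. */
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            struct pollfd pfd;
            int fd = open("/dev/freefall", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            pfd.fd = fd;
            pfd.events = POLLIN;
            while (poll(&pfd, 1, -1) > 0) {
                    unsigned char count;

                    if (read(fd, &count, sizeof(count)) == sizeof(count))
                            printf("%u free-fall interrupt(s)\n", count);
            }
            close(fd);
            return 0;
    }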
+2 -4
Documentation/tracers/mmiotrace.txt
··· 78 78 events were lost, the trace is incomplete. You should enlarge the buffers and 79 79 try again. Buffers are enlarged by first seeing how large the current buffers 80 80 are: 81 - $ cat /debug/tracing/trace_entries 81 + $ cat /debug/tracing/buffer_size_kb 82 82 gives you a number. Approximately double this number and write it back, for 83 83 instance: 84 - $ echo 0 > /debug/tracing/tracing_enabled 85 - $ echo 128000 > /debug/tracing/trace_entries 86 - $ echo 1 > /debug/tracing/tracing_enabled 84 + $ echo 128000 > /debug/tracing/buffer_size_kb 87 85 Then start again from the top. 88 86 89 87 If you are doing a trace for a driver project, e.g. Nouveau, you should also
+18 -11
MAINTAINERS
··· 692 692 L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) 693 693 S: Maintained 694 694 695 + ARM/NUVOTON W90X900 ARM ARCHITECTURE 696 + P: Wan ZongShun 697 + M: mcuos.com@gmail.com 698 + L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) 699 + W: http://www.mcuos.com 700 + S: Maintained 701 + 695 702 ARPD SUPPORT 696 703 P: Jonathan Layes 697 704 L: netdev@vger.kernel.org ··· 1912 1905 S: Maintained 1913 1906 1914 1907 HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER 1915 - P: Robert Love 1916 - M: rlove@rlove.org 1917 - M: linux-kernel@vger.kernel.org 1918 - W: http://www.kernel.org/pub/linux/kernel/people/rml/hdaps/ 1908 + P: Frank Seidel 1909 + M: frank@f-seidel.de 1910 + L: lm-sensors@lm-sensors.org 1911 + W: http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/ 1919 1912 S: Maintained 1920 1913 1921 1914 GSPCA FINEPIX SUBDRIVER ··· 2008 2001 2009 2002 HIBERNATION (aka Software Suspend, aka swsusp) 2010 2003 P: Pavel Machek 2011 - M: pavel@suse.cz 2004 + M: pavel@ucw.cz 2012 2005 P: Rafael J. Wysocki 2013 2006 M: rjw@sisk.pl 2014 2007 L: linux-pm@lists.linux-foundation.org ··· 3334 3327 M: jeremy@xensource.com 3335 3328 P: Chris Wright 3336 3329 M: chrisw@sous-sol.org 3337 - P: Zachary Amsden 3338 - M: zach@vmware.com 3330 + P: Alok Kataria 3331 + M: akataria@vmware.com 3339 3332 P: Rusty Russell 3340 3333 M: rusty@rustcorp.com.au 3341 3334 L: virtualization@lists.osdl.org ··· 4179 4172 P: Len Brown 4180 4173 M: len.brown@intel.com 4181 4174 P: Pavel Machek 4182 - M: pavel@suse.cz 4175 + M: pavel@ucw.cz 4183 4176 P: Rafael J. Wysocki 4184 4177 M: rjw@sisk.pl 4185 4178 L: linux-pm@lists.linux-foundation.org ··· 4931 4924 S: Maintained 4932 4925 4933 4926 ZR36067 VIDEO FOR LINUX DRIVER 4934 - P: Ronald Bultje 4935 - M: rbultje@ronald.bitfreak.net 4936 4927 L: mjpeg-users@lists.sourceforge.net 4928 + L: linux-media@vger.kernel.org 4937 4929 W: http://mjpeg.sourceforge.net/driver-zoran/ 4938 - S: Maintained 4930 + T: Mercurial http://linuxtv.org/hg/v4l-dvb 4931 + S: Odd Fixes 4939 4932 4940 4933 ZS DECSTATION Z85C30 SERIAL DRIVER 4941 4934 P: Maciej W. Rozycki
+1 -1
Makefile
··· 389 389 # output directory. 390 390 outputmakefile: 391 391 ifneq ($(KBUILD_SRC),) 392 + $(Q)ln -fsn $(srctree) source 392 393 $(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile \ 393 394 $(srctree) $(objtree) $(VERSION) $(PATCHLEVEL) 394 395 endif ··· 947 946 mkdir -p include2; \ 948 947 ln -fsn $(srctree)/include/asm-$(SRCARCH) include2/asm; \ 949 948 fi 950 - ln -fsn $(srctree) source 951 949 endif 952 950 953 951 # prepare2 creates a makefile if using a separate output directory
+1 -1
README
··· 188 188 values to random values. 189 189 190 190 You can find more information on using the Linux kernel config tools 191 - in Documentation/kbuild/make-configs.txt. 191 + in Documentation/kbuild/kconfig.txt. 192 192 193 193 NOTES on "make config": 194 194 - having unnecessary drivers will make the kernel bigger, and can
+4 -4
arch/alpha/kernel/process.c
··· 93 93 if (cpuid != boot_cpuid) { 94 94 flags |= 0x00040000UL; /* "remain halted" */ 95 95 *pflags = flags; 96 - cpu_clear(cpuid, cpu_present_map); 97 - cpu_clear(cpuid, cpu_possible_map); 96 + set_cpu_present(cpuid, false); 97 + set_cpu_possible(cpuid, false); 98 98 halt(); 99 99 } 100 100 #endif ··· 120 120 121 121 #ifdef CONFIG_SMP 122 122 /* Wait for the secondaries to halt. */ 123 - cpu_clear(boot_cpuid, cpu_present_map); 124 - cpu_clear(boot_cpuid, cpu_possible_map); 123 + set_cpu_present(boot_cpuid, false); 124 + set_cpu_possible(boot_cpuid, false); 125 125 while (cpus_weight(cpu_present_map)) 126 126 barrier(); 127 127 #endif
+6 -6
arch/alpha/kernel/smp.c
··· 120 120 smp_callin(void) 121 121 { 122 122 int cpuid = hard_smp_processor_id(); 123 - cpumask_t mask = cpu_online_map; 124 123 125 - if (cpu_test_and_set(cpuid, mask)) { 124 + if (cpu_online(cpuid)) { 126 125 printk("??, cpu 0x%x already present??\n", cpuid); 127 126 BUG(); 128 127 } 128 + set_cpu_online(cpuid, true); 129 129 130 130 /* Turn on machine checks. */ 131 131 wrmces(7); ··· 436 436 ((char *)cpubase + i*hwrpb->processor_size); 437 437 if ((cpu->flags & 0x1cc) == 0x1cc) { 438 438 smp_num_probed++; 439 - cpu_set(i, cpu_possible_map); 440 - cpu_set(i, cpu_present_map); 439 + set_cpu_possible(i, true); 440 + set_cpu_present(i, true); 441 441 cpu->pal_revision = boot_cpu_palrev; 442 442 } 443 443 ··· 470 470 471 471 /* Nothing to do on a UP box, or when told not to. */ 472 472 if (smp_num_probed == 1 || max_cpus == 0) { 473 - cpu_possible_map = cpumask_of_cpu(boot_cpuid); 474 - cpu_present_map = cpumask_of_cpu(boot_cpuid); 473 + init_cpu_possible(cpumask_of(boot_cpuid)); 474 + init_cpu_present(cpumask_of(boot_cpuid)); 475 475 printk(KERN_INFO "SMP mode deactivated.\n"); 476 476 return; 477 477 }
+1 -1
arch/arm/configs/at91sam9260ek_defconfig
··· 608 608 # Watchdog Device Drivers 609 609 # 610 610 # CONFIG_SOFT_WATCHDOG is not set 611 - CONFIG_AT91SAM9_WATCHDOG=y 611 + CONFIG_AT91SAM9X_WATCHDOG=y 612 612 613 613 # 614 614 # USB-based Watchdog Cards
+1 -1
arch/arm/configs/at91sam9261ek_defconfig
··· 700 700 # Watchdog Device Drivers 701 701 # 702 702 # CONFIG_SOFT_WATCHDOG is not set 703 - CONFIG_AT91SAM9_WATCHDOG=y 703 + CONFIG_AT91SAM9X_WATCHDOG=y 704 704 705 705 # 706 706 # USB-based Watchdog Cards
+1 -1
arch/arm/configs/at91sam9263ek_defconfig
··· 710 710 # Watchdog Device Drivers 711 711 # 712 712 # CONFIG_SOFT_WATCHDOG is not set 713 - CONFIG_AT91SAM9_WATCHDOG=y 713 + CONFIG_AT91SAM9X_WATCHDOG=y 714 714 715 715 # 716 716 # USB-based Watchdog Cards
+1 -1
arch/arm/configs/at91sam9rlek_defconfig
··· 606 606 # Watchdog Device Drivers 607 607 # 608 608 # CONFIG_SOFT_WATCHDOG is not set 609 - CONFIG_AT91SAM9_WATCHDOG=y 609 + CONFIG_AT91SAM9X_WATCHDOG=y 610 610 611 611 # 612 612 # Sonics Silicon Backplane
+1 -1
arch/arm/configs/qil-a9260_defconfig
··· 727 727 # Watchdog Device Drivers 728 728 # 729 729 # CONFIG_SOFT_WATCHDOG is not set 730 - # CONFIG_AT91SAM9_WATCHDOG is not set 730 + # CONFIG_AT91SAM9X_WATCHDOG is not set 731 731 732 732 # 733 733 # USB-based Watchdog Cards
+2 -2
arch/arm/kernel/elf.c
··· 74 74 */ 75 75 int arm_elf_read_implies_exec(const struct elf32_hdr *x, int executable_stack) 76 76 { 77 - if (executable_stack != EXSTACK_ENABLE_X) 77 + if (executable_stack != EXSTACK_DISABLE_X) 78 78 return 1; 79 - if (cpu_architecture() <= CPU_ARCH_ARMv6) 79 + if (cpu_architecture() < CPU_ARCH_ARMv6) 80 80 return 1; 81 81 return 0; 82 82 }
+1 -1
arch/arm/mach-at91/at91cap9_devices.c
··· 697 697 * Watchdog 698 698 * -------------------------------------------------------------------- */ 699 699 700 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 700 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 701 701 static struct platform_device at91cap9_wdt_device = { 702 702 .name = "at91_wdt", 703 703 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9260_devices.c
··· 643 643 * Watchdog 644 644 * -------------------------------------------------------------------- */ 645 645 646 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 646 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 647 647 static struct platform_device at91sam9260_wdt_device = { 648 648 .name = "at91_wdt", 649 649 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9261_devices.c
··· 621 621 * Watchdog 622 622 * -------------------------------------------------------------------- */ 623 623 624 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 624 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 625 625 static struct platform_device at91sam9261_wdt_device = { 626 626 .name = "at91_wdt", 627 627 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9263_devices.c
··· 854 854 * Watchdog 855 855 * -------------------------------------------------------------------- */ 856 856 857 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 857 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 858 858 static struct platform_device at91sam9263_wdt_device = { 859 859 .name = "at91_wdt", 860 860 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9rl_devices.c
··· 609 609 * Watchdog 610 610 * -------------------------------------------------------------------- */ 611 611 612 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 612 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 613 613 static struct platform_device at91sam9rl_wdt_device = { 614 614 .name = "at91_wdt", 615 615 .id = -1,
+10 -5
arch/arm/mach-at91/gpio.c
··· 490 490 491 491 /*--------------------------------------------------------------------------*/ 492 492 493 - /* This lock class tells lockdep that GPIO irqs are in a different 493 + /* 494 + * This lock class tells lockdep that GPIO irqs are in a different 494 495 * category than their parents, so it won't report false recursion. 495 496 */ 496 497 static struct lock_class_key gpio_lock_class; ··· 509 508 prev = this, this++) { 510 509 unsigned id = this->id; 511 510 unsigned i; 512 - 513 - /* enable PIO controller's clock */ 514 - clk_enable(this->clock); 515 511 516 512 __raw_writel(~0, this->regbase + PIO_IDR); 517 513 ··· 554 556 data->chipbase = PIN_BASE + i * 32; 555 557 data->regbase = data->offset + (void __iomem *)AT91_VA_BASE_SYS; 556 558 557 - /* AT91SAM9263_ID_PIOCDE groups PIOC, PIOD, PIOE */ 559 + /* enable PIO controller's clock */ 560 + clk_enable(data->clock); 561 + 562 + /* 563 + * Some processors share peripheral ID between multiple GPIO banks. 564 + * SAM9263 (PIOC, PIOD, PIOE) 565 + * CAP9 (PIOA, PIOB, PIOC, PIOD) 566 + */ 558 567 if (last && last->id == data->id) 559 568 last->next = data; 560 569 }
+1
arch/arm/mach-at91/include/mach/board.h
··· 93 93 u8 enable_pin; /* chip enable */ 94 94 u8 det_pin; /* card detect */ 95 95 u8 rdy_pin; /* ready/busy */ 96 + u8 rdy_pin_active_low; /* rdy_pin value is inverted */ 96 97 u8 ale; /* address line number connected to ALE */ 97 98 u8 cle; /* address line number connected to CLE */ 98 99 u8 bus_width_16; /* buswidth is 16 bit */
-3
arch/arm/mach-ep93xx/include/mach/gesbc9312.h
··· 1 - /* 2 - * arch/arm/mach-ep93xx/include/mach/gesbc9312.h 3 - */
-1
arch/arm/mach-ep93xx/include/mach/hardware.h
··· 10 10 11 11 #include "platform.h" 12 12 13 - #include "gesbc9312.h" 14 13 #include "ts72xx.h" 15 14 16 15 #endif
+1 -1
arch/arm/mach-kirkwood/irq.c
··· 42 42 writel(0, GPIO_EDGE_CAUSE(32)); 43 43 44 44 for (i = IRQ_KIRKWOOD_GPIO_START; i < NR_IRQS; i++) { 45 - set_irq_chip(i, &orion_gpio_irq_level_chip); 45 + set_irq_chip(i, &orion_gpio_irq_chip); 46 46 set_irq_handler(i, handle_level_irq); 47 47 irq_desc[i].status |= IRQ_LEVEL; 48 48 set_irq_flags(i, IRQF_VALID);
+1 -1
arch/arm/mach-mv78xx0/irq.c
··· 40 40 writel(0, GPIO_EDGE_CAUSE(0)); 41 41 42 42 for (i = IRQ_MV78XX0_GPIO_START; i < NR_IRQS; i++) { 43 - set_irq_chip(i, &orion_gpio_irq_level_chip); 43 + set_irq_chip(i, &orion_gpio_irq_chip); 44 44 set_irq_handler(i, handle_level_irq); 45 45 irq_desc[i].status |= IRQ_LEVEL; 46 46 set_irq_flags(i, IRQF_VALID);
+8 -8
arch/arm/mach-omap2/clock.c
··· 565 565 * 566 566 * Given a struct clk of a rate-selectable clksel clock, and a clock divisor, 567 567 * find the corresponding register field value. The return register value is 568 - * the value before left-shifting. Returns 0xffffffff on error 568 + * the value before left-shifting. Returns ~0 on error 569 569 */ 570 570 u32 omap2_divisor_to_clksel(struct clk *clk, u32 div) 571 571 { ··· 577 577 578 578 clks = omap2_get_clksel_by_parent(clk, clk->parent); 579 579 if (clks == NULL) 580 - return 0; 580 + return ~0; 581 581 582 582 for (clkr = clks->rates; clkr->div; clkr++) { 583 583 if ((clkr->flags & cpu_mask) && (clkr->div == div)) ··· 588 588 printk(KERN_ERR "clock: Could not find divisor %d for " 589 589 "clock %s parent %s\n", div, clk->name, 590 590 clk->parent->name); 591 - return 0; 591 + return ~0; 592 592 } 593 593 594 594 return clkr->val; ··· 708 708 return 0; 709 709 710 710 for (clkr = clks->rates; clkr->div; clkr++) { 711 - if (clkr->flags & (cpu_mask | DEFAULT_RATE)) 711 + if (clkr->flags & cpu_mask && clkr->flags & DEFAULT_RATE) 712 712 break; /* Found the default rate for this platform */ 713 713 } 714 714 ··· 746 746 return -EINVAL; 747 747 748 748 if (clk->usecount > 0) 749 - _omap2_clk_disable(clk); 749 + omap2_clk_disable(clk); 750 750 751 751 /* Set new source value (previous dividers if any in effect) */ 752 752 reg_val = __raw_readl(src_addr) & ~field_mask; ··· 759 759 wmb(); 760 760 } 761 761 762 - if (clk->usecount > 0) 763 - _omap2_clk_enable(clk); 764 - 765 762 clk->parent = new_parent; 763 + 764 + if (clk->usecount > 0) 765 + omap2_clk_enable(clk); 766 766 767 767 /* CLKSEL clocks follow their parents' rates, divided by a divisor */ 768 768 clk->rate = new_parent->rate;
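The change of the error return from 0 to ~0 matters because 0 can itself be a valid clksel field value. A minimal standalone sketch of the sentinel check, with divisor_to_clksel() as a stand-in for the real function:

    /* Demo of the sentinel check: a field value of 0 can be legitimate,
     * so errors are now signalled by ~0 ("all ones") instead. */
    #include <stdint.h>
    #include <stdio.h>

    /* stand-in for omap2_divisor_to_clksel(): returns ~0 on error */
    static uint32_t divisor_to_clksel(uint32_t div)
    {
            return (div == 2) ? 0 : ~(uint32_t)0;   /* 0 is a valid value */
    }

    int main(void)
    {
            uint32_t v = divisor_to_clksel(3);

            if (v == ~(uint32_t)0)
                    printf("error: no such divisor\n");
            return 0;
    }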
+1 -1
arch/arm/mach-orion5x/irq.c
··· 44 44 * User can use set_type() if he wants to use edge types handlers. 45 45 */ 46 46 for (i = IRQ_ORION5X_GPIO_START; i < NR_IRQS; i++) { 47 - set_irq_chip(i, &orion_gpio_irq_level_chip); 47 + set_irq_chip(i, &orion_gpio_irq_chip); 48 48 set_irq_handler(i, handle_level_irq); 49 49 irq_desc[i].status |= IRQ_LEVEL; 50 50 set_irq_flags(i, IRQF_VALID);
+2 -1
arch/arm/mm/mmu.c
··· 693 693 * Check whether this memory bank would entirely overlap 694 694 * the vmalloc area. 695 695 */ 696 - if (__va(bank->start) >= VMALLOC_MIN) { 696 + if (__va(bank->start) >= VMALLOC_MIN || 697 + __va(bank->start) < PAGE_OFFSET) { 697 698 printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx " 698 699 "(vmalloc region overlap).\n", 699 700 bank->start, bank->start + bank->size - 1);
+26 -49
arch/arm/plat-orion/gpio.c
··· 265 265 * polarity LEVEL mask 266 266 * 267 267 ****************************************************************************/ 268 - static void gpio_irq_edge_ack(u32 irq) 269 - { 270 - int pin = irq_to_gpio(irq); 271 268 272 - writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin)); 269 + static void gpio_irq_ack(u32 irq) 270 + { 271 + int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK; 272 + if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) { 273 + int pin = irq_to_gpio(irq); 274 + writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin)); 275 + } 273 276 } 274 277 275 - static void gpio_irq_edge_mask(u32 irq) 278 + static void gpio_irq_mask(u32 irq) 276 279 { 277 280 int pin = irq_to_gpio(irq); 278 - u32 u; 279 - 280 - u = readl(GPIO_EDGE_MASK(pin)); 281 + int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK; 282 + u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ? 283 + GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin); 284 + u32 u = readl(reg); 281 285 u &= ~(1 << (pin & 31)); 282 - writel(u, GPIO_EDGE_MASK(pin)); 286 + writel(u, reg); 283 287 } 284 288 285 - static void gpio_irq_edge_unmask(u32 irq) 289 + static void gpio_irq_unmask(u32 irq) 286 290 { 287 291 int pin = irq_to_gpio(irq); 288 - u32 u; 289 - 290 - u = readl(GPIO_EDGE_MASK(pin)); 292 + int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK; 293 + u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ? 294 + GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin); 295 + u32 u = readl(reg); 291 296 u |= 1 << (pin & 31); 292 - writel(u, GPIO_EDGE_MASK(pin)); 293 - } 294 - 295 - static void gpio_irq_level_mask(u32 irq) 296 - { 297 - int pin = irq_to_gpio(irq); 298 - u32 u; 299 - 300 - u = readl(GPIO_LEVEL_MASK(pin)); 301 - u &= ~(1 << (pin & 31)); 302 - writel(u, GPIO_LEVEL_MASK(pin)); 303 - } 304 - 305 - static void gpio_irq_level_unmask(u32 irq) 306 - { 307 - int pin = irq_to_gpio(irq); 308 - u32 u; 309 - 310 - u = readl(GPIO_LEVEL_MASK(pin)); 311 - u |= 1 << (pin & 31); 312 - writel(u, GPIO_LEVEL_MASK(pin)); 297 + writel(u, reg); 313 298 } 314 299 315 300 static int gpio_irq_set_type(u32 irq, u32 type) ··· 316 331 * Set edge/level type. 317 332 */ 318 333 if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) { 319 - desc->chip = &orion_gpio_irq_edge_chip; 334 + desc->handle_irq = handle_edge_irq; 320 335 } else if (type & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) { 321 - desc->chip = &orion_gpio_irq_level_chip; 336 + desc->handle_irq = handle_level_irq; 322 337 } else { 323 338 printk(KERN_ERR "failed to set irq=%d (type=%d)\n", irq, type); 324 339 return -EINVAL; ··· 356 371 return 0; 357 372 } 358 373 359 - struct irq_chip orion_gpio_irq_edge_chip = { 360 - .name = "orion_gpio_irq_edge", 361 - .ack = gpio_irq_edge_ack, 362 - .mask = gpio_irq_edge_mask, 363 - .unmask = gpio_irq_edge_unmask, 364 - .set_type = gpio_irq_set_type, 365 - }; 366 - 367 - struct irq_chip orion_gpio_irq_level_chip = { 368 - .name = "orion_gpio_irq_level", 369 - .mask = gpio_irq_level_mask, 370 - .mask_ack = gpio_irq_level_mask, 371 - .unmask = gpio_irq_level_unmask, 374 + struct irq_chip orion_gpio_irq_chip = { 375 + .name = "orion_gpio", 376 + .ack = gpio_irq_ack, 377 + .mask = gpio_irq_mask, 378 + .unmask = gpio_irq_unmask, 372 379 .set_type = gpio_irq_set_type, 373 380 }; 374 381
+1 -2
arch/arm/plat-orion/include/plat/gpio.h
··· 31 31 /* 32 32 * GPIO interrupt handling. 33 33 */ 34 - extern struct irq_chip orion_gpio_irq_edge_chip; 35 - extern struct irq_chip orion_gpio_irq_level_chip; 34 + extern struct irq_chip orion_gpio_irq_chip; 36 35 void orion_gpio_irq_handler(int irqoff); 37 36 38 37
+1
arch/avr32/mach-at32ap/include/mach/board.h
··· 116 116 int enable_pin; /* chip enable */ 117 117 int det_pin; /* card detect */ 118 118 int rdy_pin; /* ready/busy */ 119 + u8 rdy_pin_active_low; /* rdy_pin value is inverted */ 119 120 u8 ale; /* address line number connected to ALE */ 120 121 u8 cle; /* address line number connected to CLE */ 121 122 u8 bus_width_16; /* buswidth is 16 bit */
+5 -2
arch/ia64/Kconfig
··· 221 221 222 222 config IA64_XEN_GUEST 223 223 bool "Xen guest" 224 + select SWIOTLB 224 225 depends on XEN 226 + help 227 + Build a kernel that runs in a Xen guest domain. At this moment only 228 + 16KB page size is supported. 225 229 226 230 endchoice ··· 483 479 default y if VIRTUAL_MEM_MAP 484 480 485 481 config HAVE_ARCH_EARLY_PFN_TO_NID 486 - def_bool y 487 - depends on NEED_MULTIPLE_NODES 482 + def_bool NUMA && SPARSEMEM 488 483 489 484 config HAVE_ARCH_NODEDATA_EXTENSION 490 485 def_bool y
+1601
arch/ia64/configs/xen_domu_defconfig
··· 1 + # 2 + # Automatically generated make config: don't edit 3 + # Linux kernel version: 2.6.29-rc1 4 + # Fri Jan 16 11:49:59 2009 5 + # 6 + CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 7 + 8 + # 9 + # General setup 10 + # 11 + CONFIG_EXPERIMENTAL=y 12 + CONFIG_LOCK_KERNEL=y 13 + CONFIG_INIT_ENV_ARG_LIMIT=32 14 + CONFIG_LOCALVERSION="" 15 + CONFIG_LOCALVERSION_AUTO=y 16 + CONFIG_SWAP=y 17 + CONFIG_SYSVIPC=y 18 + CONFIG_SYSVIPC_SYSCTL=y 19 + CONFIG_POSIX_MQUEUE=y 20 + # CONFIG_BSD_PROCESS_ACCT is not set 21 + # CONFIG_TASKSTATS is not set 22 + # CONFIG_AUDIT is not set 23 + CONFIG_IKCONFIG=y 24 + CONFIG_IKCONFIG_PROC=y 25 + CONFIG_LOG_BUF_SHIFT=20 26 + CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y 27 + # CONFIG_GROUP_SCHED is not set 28 + 29 + # 30 + # Control Group support 31 + # 32 + # CONFIG_CGROUPS is not set 33 + CONFIG_SYSFS_DEPRECATED=y 34 + CONFIG_SYSFS_DEPRECATED_V2=y 35 + # CONFIG_RELAY is not set 36 + CONFIG_NAMESPACES=y 37 + # CONFIG_UTS_NS is not set 38 + # CONFIG_IPC_NS is not set 39 + # CONFIG_USER_NS is not set 40 + # CONFIG_PID_NS is not set 41 + CONFIG_BLK_DEV_INITRD=y 42 + CONFIG_INITRAMFS_SOURCE="" 43 + CONFIG_CC_OPTIMIZE_FOR_SIZE=y 44 + CONFIG_SYSCTL=y 45 + # CONFIG_EMBEDDED is not set 46 + CONFIG_SYSCTL_SYSCALL=y 47 + CONFIG_KALLSYMS=y 48 + CONFIG_KALLSYMS_ALL=y 49 + CONFIG_KALLSYMS_STRIP_GENERATED=y 50 + # CONFIG_KALLSYMS_EXTRA_PASS is not set 51 + CONFIG_HOTPLUG=y 52 + CONFIG_PRINTK=y 53 + CONFIG_BUG=y 54 + CONFIG_ELF_CORE=y 55 + CONFIG_COMPAT_BRK=y 56 + CONFIG_BASE_FULL=y 57 + CONFIG_FUTEX=y 58 + CONFIG_ANON_INODES=y 59 + CONFIG_EPOLL=y 60 + CONFIG_SIGNALFD=y 61 + CONFIG_TIMERFD=y 62 + CONFIG_EVENTFD=y 63 + CONFIG_SHMEM=y 64 + CONFIG_AIO=y 65 + CONFIG_VM_EVENT_COUNTERS=y 66 + CONFIG_PCI_QUIRKS=y 67 + CONFIG_SLUB_DEBUG=y 68 + # CONFIG_SLAB is not set 69 + CONFIG_SLUB=y 70 + # CONFIG_SLOB is not set 71 + # CONFIG_PROFILING is not set 72 + CONFIG_HAVE_OPROFILE=y 73 + # CONFIG_KPROBES is not set 74 + CONFIG_HAVE_KPROBES=y 75 + CONFIG_HAVE_KRETPROBES=y 76 + CONFIG_HAVE_ARCH_TRACEHOOK=y 77 + CONFIG_HAVE_DMA_ATTRS=y 78 + CONFIG_USE_GENERIC_SMP_HELPERS=y 79 + # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 80 + CONFIG_SLABINFO=y 81 + CONFIG_RT_MUTEXES=y 82 + CONFIG_BASE_SMALL=0 83 + CONFIG_MODULES=y 84 + # CONFIG_MODULE_FORCE_LOAD is not set 85 + CONFIG_MODULE_UNLOAD=y 86 + # CONFIG_MODULE_FORCE_UNLOAD is not set 87 + CONFIG_MODVERSIONS=y 88 + CONFIG_MODULE_SRCVERSION_ALL=y 89 + CONFIG_STOP_MACHINE=y 90 + CONFIG_BLOCK=y 91 + # CONFIG_BLK_DEV_IO_TRACE is not set 92 + # CONFIG_BLK_DEV_BSG is not set 93 + # CONFIG_BLK_DEV_INTEGRITY is not set 94 + 95 + # 96 + # IO Schedulers 97 + # 98 + CONFIG_IOSCHED_NOOP=y 99 + CONFIG_IOSCHED_AS=y 100 + CONFIG_IOSCHED_DEADLINE=y 101 + CONFIG_IOSCHED_CFQ=y 102 + CONFIG_DEFAULT_AS=y 103 + # CONFIG_DEFAULT_DEADLINE is not set 104 + # CONFIG_DEFAULT_CFQ is not set 105 + # CONFIG_DEFAULT_NOOP is not set 106 + CONFIG_DEFAULT_IOSCHED="anticipatory" 107 + CONFIG_CLASSIC_RCU=y 108 + # CONFIG_TREE_RCU is not set 109 + # CONFIG_PREEMPT_RCU is not set 110 + # CONFIG_TREE_RCU_TRACE is not set 111 + # CONFIG_PREEMPT_RCU_TRACE is not set 112 + CONFIG_FREEZER=y 113 + 114 + # 115 + # Processor type and features 116 + # 117 + CONFIG_IA64=y 118 + CONFIG_64BIT=y 119 + CONFIG_ZONE_DMA=y 120 + CONFIG_QUICKLIST=y 121 + CONFIG_MMU=y 122 + CONFIG_SWIOTLB=y 123 + CONFIG_IOMMU_HELPER=y 124 + CONFIG_RWSEM_XCHGADD_ALGORITHM=y 125 + CONFIG_HUGETLB_PAGE_SIZE_VARIABLE=y 126 + CONFIG_GENERIC_FIND_NEXT_BIT=y 127 + CONFIG_GENERIC_CALIBRATE_DELAY=y 128 + 
CONFIG_GENERIC_TIME=y 129 + CONFIG_GENERIC_TIME_VSYSCALL=y 130 + CONFIG_HAVE_SETUP_PER_CPU_AREA=y 131 + CONFIG_DMI=y 132 + CONFIG_EFI=y 133 + CONFIG_GENERIC_IOMAP=y 134 + CONFIG_SCHED_OMIT_FRAME_POINTER=y 135 + CONFIG_AUDIT_ARCH=y 136 + CONFIG_PARAVIRT_GUEST=y 137 + CONFIG_PARAVIRT=y 138 + CONFIG_XEN=y 139 + CONFIG_XEN_XENCOMM=y 140 + CONFIG_NO_IDLE_HZ=y 141 + # CONFIG_IA64_GENERIC is not set 142 + # CONFIG_IA64_DIG is not set 143 + # CONFIG_IA64_DIG_VTD is not set 144 + # CONFIG_IA64_HP_ZX1 is not set 145 + # CONFIG_IA64_HP_ZX1_SWIOTLB is not set 146 + # CONFIG_IA64_SGI_SN2 is not set 147 + # CONFIG_IA64_SGI_UV is not set 148 + # CONFIG_IA64_HP_SIM is not set 149 + CONFIG_IA64_XEN_GUEST=y 150 + # CONFIG_ITANIUM is not set 151 + CONFIG_MCKINLEY=y 152 + # CONFIG_IA64_PAGE_SIZE_4KB is not set 153 + # CONFIG_IA64_PAGE_SIZE_8KB is not set 154 + CONFIG_IA64_PAGE_SIZE_16KB=y 155 + # CONFIG_IA64_PAGE_SIZE_64KB is not set 156 + CONFIG_PGTABLE_3=y 157 + # CONFIG_PGTABLE_4 is not set 158 + CONFIG_HZ=250 159 + # CONFIG_HZ_100 is not set 160 + CONFIG_HZ_250=y 161 + # CONFIG_HZ_300 is not set 162 + # CONFIG_HZ_1000 is not set 163 + # CONFIG_SCHED_HRTICK is not set 164 + CONFIG_IA64_L1_CACHE_SHIFT=7 165 + CONFIG_IA64_CYCLONE=y 166 + CONFIG_IOSAPIC=y 167 + CONFIG_FORCE_MAX_ZONEORDER=17 168 + # CONFIG_VIRT_CPU_ACCOUNTING is not set 169 + CONFIG_SMP=y 170 + CONFIG_NR_CPUS=16 171 + CONFIG_HOTPLUG_CPU=y 172 + CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y 173 + CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y 174 + # CONFIG_SCHED_SMT is not set 175 + CONFIG_PERMIT_BSP_REMOVE=y 176 + CONFIG_FORCE_CPEI_RETARGET=y 177 + CONFIG_PREEMPT_NONE=y 178 + # CONFIG_PREEMPT_VOLUNTARY is not set 179 + # CONFIG_PREEMPT is not set 180 + CONFIG_SELECT_MEMORY_MODEL=y 181 + CONFIG_FLATMEM_MANUAL=y 182 + # CONFIG_DISCONTIGMEM_MANUAL is not set 183 + # CONFIG_SPARSEMEM_MANUAL is not set 184 + CONFIG_FLATMEM=y 185 + CONFIG_FLAT_NODE_MEM_MAP=y 186 + CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y 187 + CONFIG_PAGEFLAGS_EXTENDED=y 188 + CONFIG_SPLIT_PTLOCK_CPUS=4 189 + CONFIG_MIGRATION=y 190 + CONFIG_PHYS_ADDR_T_64BIT=y 191 + CONFIG_ZONE_DMA_FLAG=1 192 + CONFIG_BOUNCE=y 193 + CONFIG_NR_QUICK=1 194 + CONFIG_VIRT_TO_BUS=y 195 + CONFIG_UNEVICTABLE_LRU=y 196 + CONFIG_ARCH_SELECT_MEMORY_MODEL=y 197 + CONFIG_ARCH_DISCONTIGMEM_ENABLE=y 198 + CONFIG_ARCH_FLATMEM_ENABLE=y 199 + CONFIG_ARCH_SPARSEMEM_ENABLE=y 200 + CONFIG_ARCH_POPULATES_NODE_MAP=y 201 + CONFIG_VIRTUAL_MEM_MAP=y 202 + CONFIG_HOLES_IN_ZONE=y 203 + # CONFIG_IA32_SUPPORT is not set 204 + # CONFIG_COMPAT_FOR_U64_ALIGNMENT is not set 205 + CONFIG_IA64_MCA_RECOVERY=y 206 + CONFIG_PERFMON=y 207 + CONFIG_IA64_PALINFO=y 208 + # CONFIG_IA64_MC_ERR_INJECT is not set 209 + # CONFIG_IA64_ESI is not set 210 + # CONFIG_IA64_HP_AML_NFW is not set 211 + CONFIG_KEXEC=y 212 + # CONFIG_CRASH_DUMP is not set 213 + 214 + # 215 + # Firmware Drivers 216 + # 217 + # CONFIG_FIRMWARE_MEMMAP is not set 218 + CONFIG_EFI_VARS=y 219 + CONFIG_EFI_PCDP=y 220 + CONFIG_DMIID=y 221 + CONFIG_BINFMT_ELF=y 222 + # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 223 + # CONFIG_HAVE_AOUT is not set 224 + CONFIG_BINFMT_MISC=m 225 + 226 + # 227 + # Power management and ACPI options 228 + # 229 + CONFIG_PM=y 230 + # CONFIG_PM_DEBUG is not set 231 + CONFIG_PM_SLEEP=y 232 + CONFIG_SUSPEND=y 233 + CONFIG_SUSPEND_FREEZER=y 234 + CONFIG_ACPI=y 235 + CONFIG_ACPI_SLEEP=y 236 + CONFIG_ACPI_PROCFS=y 237 + CONFIG_ACPI_PROCFS_POWER=y 238 + CONFIG_ACPI_SYSFS_POWER=y 239 + CONFIG_ACPI_PROC_EVENT=y 240 + CONFIG_ACPI_BUTTON=m 241 + CONFIG_ACPI_FAN=m 242 + # 
CONFIG_ACPI_DOCK is not set 243 + CONFIG_ACPI_PROCESSOR=m 244 + CONFIG_ACPI_HOTPLUG_CPU=y 245 + CONFIG_ACPI_THERMAL=m 246 + # CONFIG_ACPI_CUSTOM_DSDT is not set 247 + CONFIG_ACPI_BLACKLIST_YEAR=0 248 + # CONFIG_ACPI_DEBUG is not set 249 + # CONFIG_ACPI_PCI_SLOT is not set 250 + CONFIG_ACPI_SYSTEM=y 251 + CONFIG_ACPI_CONTAINER=m 252 + 253 + # 254 + # CPU Frequency scaling 255 + # 256 + # CONFIG_CPU_FREQ is not set 257 + 258 + # 259 + # Bus options (PCI, PCMCIA) 260 + # 261 + CONFIG_PCI=y 262 + CONFIG_PCI_DOMAINS=y 263 + CONFIG_PCI_SYSCALL=y 264 + # CONFIG_PCIEPORTBUS is not set 265 + CONFIG_ARCH_SUPPORTS_MSI=y 266 + # CONFIG_PCI_MSI is not set 267 + CONFIG_PCI_LEGACY=y 268 + # CONFIG_PCI_DEBUG is not set 269 + # CONFIG_PCI_STUB is not set 270 + CONFIG_HOTPLUG_PCI=m 271 + # CONFIG_HOTPLUG_PCI_FAKE is not set 272 + CONFIG_HOTPLUG_PCI_ACPI=m 273 + # CONFIG_HOTPLUG_PCI_ACPI_IBM is not set 274 + # CONFIG_HOTPLUG_PCI_CPCI is not set 275 + # CONFIG_HOTPLUG_PCI_SHPC is not set 276 + # CONFIG_PCCARD is not set 277 + CONFIG_NET=y 278 + 279 + # 280 + # Networking options 281 + # 282 + # CONFIG_NET_NS is not set 283 + CONFIG_COMPAT_NET_DEV_OPS=y 284 + CONFIG_PACKET=y 285 + # CONFIG_PACKET_MMAP is not set 286 + CONFIG_UNIX=y 287 + CONFIG_XFRM=y 288 + # CONFIG_XFRM_USER is not set 289 + # CONFIG_XFRM_SUB_POLICY is not set 290 + # CONFIG_XFRM_MIGRATE is not set 291 + # CONFIG_XFRM_STATISTICS is not set 292 + # CONFIG_NET_KEY is not set 293 + CONFIG_INET=y 294 + CONFIG_IP_MULTICAST=y 295 + # CONFIG_IP_ADVANCED_ROUTER is not set 296 + CONFIG_IP_FIB_HASH=y 297 + # CONFIG_IP_PNP is not set 298 + # CONFIG_NET_IPIP is not set 299 + # CONFIG_NET_IPGRE is not set 300 + # CONFIG_IP_MROUTE is not set 301 + CONFIG_ARPD=y 302 + CONFIG_SYN_COOKIES=y 303 + # CONFIG_INET_AH is not set 304 + # CONFIG_INET_ESP is not set 305 + # CONFIG_INET_IPCOMP is not set 306 + # CONFIG_INET_XFRM_TUNNEL is not set 307 + # CONFIG_INET_TUNNEL is not set 308 + CONFIG_INET_XFRM_MODE_TRANSPORT=y 309 + CONFIG_INET_XFRM_MODE_TUNNEL=y 310 + CONFIG_INET_XFRM_MODE_BEET=y 311 + # CONFIG_INET_LRO is not set 312 + CONFIG_INET_DIAG=y 313 + CONFIG_INET_TCP_DIAG=y 314 + # CONFIG_TCP_CONG_ADVANCED is not set 315 + CONFIG_TCP_CONG_CUBIC=y 316 + CONFIG_DEFAULT_TCP_CONG="cubic" 317 + # CONFIG_TCP_MD5SIG is not set 318 + # CONFIG_IPV6 is not set 319 + # CONFIG_NETWORK_SECMARK is not set 320 + # CONFIG_NETFILTER is not set 321 + # CONFIG_IP_DCCP is not set 322 + # CONFIG_IP_SCTP is not set 323 + # CONFIG_TIPC is not set 324 + # CONFIG_ATM is not set 325 + # CONFIG_BRIDGE is not set 326 + # CONFIG_NET_DSA is not set 327 + # CONFIG_VLAN_8021Q is not set 328 + # CONFIG_DECNET is not set 329 + # CONFIG_LLC2 is not set 330 + # CONFIG_IPX is not set 331 + # CONFIG_ATALK is not set 332 + # CONFIG_X25 is not set 333 + # CONFIG_LAPB is not set 334 + # CONFIG_ECONET is not set 335 + # CONFIG_WAN_ROUTER is not set 336 + # CONFIG_NET_SCHED is not set 337 + # CONFIG_DCB is not set 338 + 339 + # 340 + # Network testing 341 + # 342 + # CONFIG_NET_PKTGEN is not set 343 + # CONFIG_HAMRADIO is not set 344 + # CONFIG_CAN is not set 345 + # CONFIG_IRDA is not set 346 + # CONFIG_BT is not set 347 + # CONFIG_AF_RXRPC is not set 348 + # CONFIG_PHONET is not set 349 + # CONFIG_WIRELESS is not set 350 + # CONFIG_WIMAX is not set 351 + # CONFIG_RFKILL is not set 352 + # CONFIG_NET_9P is not set 353 + 354 + # 355 + # Device Drivers 356 + # 357 + 358 + # 359 + # Generic Driver Options 360 + # 361 + CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 362 + CONFIG_STANDALONE=y 363 + 
CONFIG_PREVENT_FIRMWARE_BUILD=y 364 + CONFIG_FW_LOADER=y 365 + CONFIG_FIRMWARE_IN_KERNEL=y 366 + CONFIG_EXTRA_FIRMWARE="" 367 + # CONFIG_DEBUG_DRIVER is not set 368 + # CONFIG_DEBUG_DEVRES is not set 369 + # CONFIG_SYS_HYPERVISOR is not set 370 + # CONFIG_CONNECTOR is not set 371 + # CONFIG_MTD is not set 372 + # CONFIG_PARPORT is not set 373 + CONFIG_PNP=y 374 + CONFIG_PNP_DEBUG_MESSAGES=y 375 + 376 + # 377 + # Protocols 378 + # 379 + CONFIG_PNPACPI=y 380 + CONFIG_BLK_DEV=y 381 + # CONFIG_BLK_CPQ_DA is not set 382 + # CONFIG_BLK_CPQ_CISS_DA is not set 383 + # CONFIG_BLK_DEV_DAC960 is not set 384 + # CONFIG_BLK_DEV_UMEM is not set 385 + # CONFIG_BLK_DEV_COW_COMMON is not set 386 + CONFIG_BLK_DEV_LOOP=m 387 + CONFIG_BLK_DEV_CRYPTOLOOP=m 388 + CONFIG_BLK_DEV_NBD=m 389 + # CONFIG_BLK_DEV_SX8 is not set 390 + # CONFIG_BLK_DEV_UB is not set 391 + CONFIG_BLK_DEV_RAM=y 392 + CONFIG_BLK_DEV_RAM_COUNT=16 393 + CONFIG_BLK_DEV_RAM_SIZE=4096 394 + # CONFIG_BLK_DEV_XIP is not set 395 + # CONFIG_CDROM_PKTCDVD is not set 396 + # CONFIG_ATA_OVER_ETH is not set 397 + CONFIG_XEN_BLKDEV_FRONTEND=y 398 + # CONFIG_BLK_DEV_HD is not set 399 + CONFIG_MISC_DEVICES=y 400 + # CONFIG_PHANTOM is not set 401 + # CONFIG_EEPROM_93CX6 is not set 402 + # CONFIG_SGI_IOC4 is not set 403 + # CONFIG_TIFM_CORE is not set 404 + # CONFIG_ICS932S401 is not set 405 + # CONFIG_ENCLOSURE_SERVICES is not set 406 + # CONFIG_HP_ILO is not set 407 + # CONFIG_C2PORT is not set 408 + CONFIG_HAVE_IDE=y 409 + CONFIG_IDE=y 410 + 411 + # 412 + # Please see Documentation/ide/ide.txt for help/info on IDE drives 413 + # 414 + CONFIG_IDE_TIMINGS=y 415 + CONFIG_IDE_ATAPI=y 416 + # CONFIG_BLK_DEV_IDE_SATA is not set 417 + CONFIG_IDE_GD=y 418 + CONFIG_IDE_GD_ATA=y 419 + # CONFIG_IDE_GD_ATAPI is not set 420 + CONFIG_BLK_DEV_IDECD=y 421 + CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y 422 + # CONFIG_BLK_DEV_IDETAPE is not set 423 + # CONFIG_BLK_DEV_IDEACPI is not set 424 + # CONFIG_IDE_TASK_IOCTL is not set 425 + CONFIG_IDE_PROC_FS=y 426 + 427 + # 428 + # IDE chipset support/bugfixes 429 + # 430 + # CONFIG_IDE_GENERIC is not set 431 + # CONFIG_BLK_DEV_PLATFORM is not set 432 + # CONFIG_BLK_DEV_IDEPNP is not set 433 + CONFIG_BLK_DEV_IDEDMA_SFF=y 434 + 435 + # 436 + # PCI IDE chipsets support 437 + # 438 + CONFIG_BLK_DEV_IDEPCI=y 439 + CONFIG_IDEPCI_PCIBUS_ORDER=y 440 + # CONFIG_BLK_DEV_OFFBOARD is not set 441 + CONFIG_BLK_DEV_GENERIC=y 442 + # CONFIG_BLK_DEV_OPTI621 is not set 443 + CONFIG_BLK_DEV_IDEDMA_PCI=y 444 + # CONFIG_BLK_DEV_AEC62XX is not set 445 + # CONFIG_BLK_DEV_ALI15X3 is not set 446 + # CONFIG_BLK_DEV_AMD74XX is not set 447 + CONFIG_BLK_DEV_CMD64X=y 448 + # CONFIG_BLK_DEV_TRIFLEX is not set 449 + # CONFIG_BLK_DEV_CS5520 is not set 450 + # CONFIG_BLK_DEV_CS5530 is not set 451 + # CONFIG_BLK_DEV_HPT366 is not set 452 + # CONFIG_BLK_DEV_JMICRON is not set 453 + # CONFIG_BLK_DEV_SC1200 is not set 454 + CONFIG_BLK_DEV_PIIX=y 455 + # CONFIG_BLK_DEV_IT8172 is not set 456 + # CONFIG_BLK_DEV_IT8213 is not set 457 + # CONFIG_BLK_DEV_IT821X is not set 458 + # CONFIG_BLK_DEV_NS87415 is not set 459 + # CONFIG_BLK_DEV_PDC202XX_OLD is not set 460 + # CONFIG_BLK_DEV_PDC202XX_NEW is not set 461 + # CONFIG_BLK_DEV_SVWKS is not set 462 + # CONFIG_BLK_DEV_SIIMAGE is not set 463 + # CONFIG_BLK_DEV_SLC90E66 is not set 464 + # CONFIG_BLK_DEV_TRM290 is not set 465 + # CONFIG_BLK_DEV_VIA82CXXX is not set 466 + # CONFIG_BLK_DEV_TC86C001 is not set 467 + CONFIG_BLK_DEV_IDEDMA=y 468 + 469 + # 470 + # SCSI device support 471 + # 472 + # CONFIG_RAID_ATTRS is not set 473 + 
CONFIG_SCSI=y 474 + CONFIG_SCSI_DMA=y 475 + # CONFIG_SCSI_TGT is not set 476 + CONFIG_SCSI_NETLINK=y 477 + CONFIG_SCSI_PROC_FS=y 478 + 479 + # 480 + # SCSI support type (disk, tape, CD-ROM) 481 + # 482 + CONFIG_BLK_DEV_SD=y 483 + CONFIG_CHR_DEV_ST=m 484 + # CONFIG_CHR_DEV_OSST is not set 485 + CONFIG_BLK_DEV_SR=m 486 + # CONFIG_BLK_DEV_SR_VENDOR is not set 487 + CONFIG_CHR_DEV_SG=m 488 + # CONFIG_CHR_DEV_SCH is not set 489 + 490 + # 491 + # Some SCSI devices (e.g. CD jukebox) support multiple LUNs 492 + # 493 + # CONFIG_SCSI_MULTI_LUN is not set 494 + # CONFIG_SCSI_CONSTANTS is not set 495 + # CONFIG_SCSI_LOGGING is not set 496 + # CONFIG_SCSI_SCAN_ASYNC is not set 497 + CONFIG_SCSI_WAIT_SCAN=m 498 + 499 + # 500 + # SCSI Transports 501 + # 502 + CONFIG_SCSI_SPI_ATTRS=y 503 + CONFIG_SCSI_FC_ATTRS=y 504 + # CONFIG_SCSI_ISCSI_ATTRS is not set 505 + # CONFIG_SCSI_SAS_LIBSAS is not set 506 + # CONFIG_SCSI_SRP_ATTRS is not set 507 + CONFIG_SCSI_LOWLEVEL=y 508 + # CONFIG_ISCSI_TCP is not set 509 + # CONFIG_SCSI_CXGB3_ISCSI is not set 510 + # CONFIG_BLK_DEV_3W_XXXX_RAID is not set 511 + # CONFIG_SCSI_3W_9XXX is not set 512 + # CONFIG_SCSI_ACARD is not set 513 + # CONFIG_SCSI_AACRAID is not set 514 + # CONFIG_SCSI_AIC7XXX is not set 515 + # CONFIG_SCSI_AIC7XXX_OLD is not set 516 + # CONFIG_SCSI_AIC79XX is not set 517 + # CONFIG_SCSI_AIC94XX is not set 518 + # CONFIG_SCSI_DPT_I2O is not set 519 + # CONFIG_SCSI_ADVANSYS is not set 520 + # CONFIG_SCSI_ARCMSR is not set 521 + # CONFIG_MEGARAID_NEWGEN is not set 522 + # CONFIG_MEGARAID_LEGACY is not set 523 + # CONFIG_MEGARAID_SAS is not set 524 + # CONFIG_SCSI_HPTIOP is not set 525 + # CONFIG_LIBFC is not set 526 + # CONFIG_FCOE is not set 527 + # CONFIG_SCSI_DMX3191D is not set 528 + # CONFIG_SCSI_FUTURE_DOMAIN is not set 529 + # CONFIG_SCSI_IPS is not set 530 + # CONFIG_SCSI_INITIO is not set 531 + # CONFIG_SCSI_INIA100 is not set 532 + # CONFIG_SCSI_MVSAS is not set 533 + # CONFIG_SCSI_STEX is not set 534 + CONFIG_SCSI_SYM53C8XX_2=y 535 + CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1 536 + CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16 537 + CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64 538 + CONFIG_SCSI_SYM53C8XX_MMIO=y 539 + CONFIG_SCSI_QLOGIC_1280=y 540 + # CONFIG_SCSI_QLA_FC is not set 541 + # CONFIG_SCSI_QLA_ISCSI is not set 542 + # CONFIG_SCSI_LPFC is not set 543 + # CONFIG_SCSI_DC395x is not set 544 + # CONFIG_SCSI_DC390T is not set 545 + # CONFIG_SCSI_DEBUG is not set 546 + # CONFIG_SCSI_SRP is not set 547 + # CONFIG_SCSI_DH is not set 548 + # CONFIG_ATA is not set 549 + CONFIG_MD=y 550 + CONFIG_BLK_DEV_MD=m 551 + CONFIG_MD_LINEAR=m 552 + CONFIG_MD_RAID0=m 553 + CONFIG_MD_RAID1=m 554 + # CONFIG_MD_RAID10 is not set 555 + # CONFIG_MD_RAID456 is not set 556 + CONFIG_MD_MULTIPATH=m 557 + # CONFIG_MD_FAULTY is not set 558 + CONFIG_BLK_DEV_DM=m 559 + # CONFIG_DM_DEBUG is not set 560 + CONFIG_DM_CRYPT=m 561 + CONFIG_DM_SNAPSHOT=m 562 + CONFIG_DM_MIRROR=m 563 + CONFIG_DM_ZERO=m 564 + # CONFIG_DM_MULTIPATH is not set 565 + # CONFIG_DM_DELAY is not set 566 + # CONFIG_DM_UEVENT is not set 567 + CONFIG_FUSION=y 568 + CONFIG_FUSION_SPI=y 569 + CONFIG_FUSION_FC=y 570 + # CONFIG_FUSION_SAS is not set 571 + CONFIG_FUSION_MAX_SGE=128 572 + CONFIG_FUSION_CTL=y 573 + # CONFIG_FUSION_LOGGING is not set 574 + 575 + # 576 + # IEEE 1394 (FireWire) support 577 + # 578 + 579 + # 580 + # Enable only one of the two stacks, unless you know what you are doing 581 + # 582 + # CONFIG_FIREWIRE is not set 583 + # CONFIG_IEEE1394 is not set 584 + # CONFIG_I2O is not set 585 + CONFIG_NETDEVICES=y 
586 + CONFIG_DUMMY=m 587 + # CONFIG_BONDING is not set 588 + # CONFIG_MACVLAN is not set 589 + # CONFIG_EQUALIZER is not set 590 + # CONFIG_TUN is not set 591 + # CONFIG_VETH is not set 592 + # CONFIG_NET_SB1000 is not set 593 + # CONFIG_ARCNET is not set 594 + CONFIG_PHYLIB=y 595 + 596 + # 597 + # MII PHY device drivers 598 + # 599 + # CONFIG_MARVELL_PHY is not set 600 + # CONFIG_DAVICOM_PHY is not set 601 + # CONFIG_QSEMI_PHY is not set 602 + # CONFIG_LXT_PHY is not set 603 + # CONFIG_CICADA_PHY is not set 604 + # CONFIG_VITESSE_PHY is not set 605 + # CONFIG_SMSC_PHY is not set 606 + # CONFIG_BROADCOM_PHY is not set 607 + # CONFIG_ICPLUS_PHY is not set 608 + # CONFIG_REALTEK_PHY is not set 609 + # CONFIG_NATIONAL_PHY is not set 610 + # CONFIG_STE10XP is not set 611 + # CONFIG_LSI_ET1011C_PHY is not set 612 + # CONFIG_FIXED_PHY is not set 613 + # CONFIG_MDIO_BITBANG is not set 614 + CONFIG_NET_ETHERNET=y 615 + CONFIG_MII=m 616 + # CONFIG_HAPPYMEAL is not set 617 + # CONFIG_SUNGEM is not set 618 + # CONFIG_CASSINI is not set 619 + # CONFIG_NET_VENDOR_3COM is not set 620 + CONFIG_NET_TULIP=y 621 + # CONFIG_DE2104X is not set 622 + CONFIG_TULIP=m 623 + # CONFIG_TULIP_MWI is not set 624 + # CONFIG_TULIP_MMIO is not set 625 + # CONFIG_TULIP_NAPI is not set 626 + # CONFIG_DE4X5 is not set 627 + # CONFIG_WINBOND_840 is not set 628 + # CONFIG_DM9102 is not set 629 + # CONFIG_ULI526X is not set 630 + # CONFIG_HP100 is not set 631 + # CONFIG_IBM_NEW_EMAC_ZMII is not set 632 + # CONFIG_IBM_NEW_EMAC_RGMII is not set 633 + # CONFIG_IBM_NEW_EMAC_TAH is not set 634 + # CONFIG_IBM_NEW_EMAC_EMAC4 is not set 635 + # CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set 636 + # CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set 637 + # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set 638 + CONFIG_NET_PCI=y 639 + # CONFIG_PCNET32 is not set 640 + # CONFIG_AMD8111_ETH is not set 641 + # CONFIG_ADAPTEC_STARFIRE is not set 642 + # CONFIG_B44 is not set 643 + # CONFIG_FORCEDETH is not set 644 + CONFIG_E100=m 645 + # CONFIG_FEALNX is not set 646 + # CONFIG_NATSEMI is not set 647 + # CONFIG_NE2K_PCI is not set 648 + # CONFIG_8139CP is not set 649 + # CONFIG_8139TOO is not set 650 + # CONFIG_R6040 is not set 651 + # CONFIG_SIS900 is not set 652 + # CONFIG_EPIC100 is not set 653 + # CONFIG_SMSC9420 is not set 654 + # CONFIG_SUNDANCE is not set 655 + # CONFIG_TLAN is not set 656 + # CONFIG_VIA_RHINE is not set 657 + # CONFIG_SC92031 is not set 658 + # CONFIG_ATL2 is not set 659 + CONFIG_NETDEV_1000=y 660 + # CONFIG_ACENIC is not set 661 + # CONFIG_DL2K is not set 662 + CONFIG_E1000=y 663 + # CONFIG_E1000E is not set 664 + # CONFIG_IP1000 is not set 665 + # CONFIG_IGB is not set 666 + # CONFIG_NS83820 is not set 667 + # CONFIG_HAMACHI is not set 668 + # CONFIG_YELLOWFIN is not set 669 + # CONFIG_R8169 is not set 670 + # CONFIG_SIS190 is not set 671 + # CONFIG_SKGE is not set 672 + # CONFIG_SKY2 is not set 673 + # CONFIG_VIA_VELOCITY is not set 674 + CONFIG_TIGON3=y 675 + # CONFIG_BNX2 is not set 676 + # CONFIG_QLA3XXX is not set 677 + # CONFIG_ATL1 is not set 678 + # CONFIG_ATL1E is not set 679 + # CONFIG_JME is not set 680 + CONFIG_NETDEV_10000=y 681 + # CONFIG_CHELSIO_T1 is not set 682 + CONFIG_CHELSIO_T3_DEPENDS=y 683 + # CONFIG_CHELSIO_T3 is not set 684 + # CONFIG_ENIC is not set 685 + # CONFIG_IXGBE is not set 686 + # CONFIG_IXGB is not set 687 + # CONFIG_S2IO is not set 688 + # CONFIG_MYRI10GE is not set 689 + # CONFIG_NETXEN_NIC is not set 690 + # CONFIG_NIU is not set 691 + # CONFIG_MLX4_EN is not set 692 + # 
CONFIG_MLX4_CORE is not set 693 + # CONFIG_TEHUTI is not set 694 + # CONFIG_BNX2X is not set 695 + # CONFIG_QLGE is not set 696 + # CONFIG_SFC is not set 697 + # CONFIG_TR is not set 698 + 699 + # 700 + # Wireless LAN 701 + # 702 + # CONFIG_WLAN_PRE80211 is not set 703 + # CONFIG_WLAN_80211 is not set 704 + # CONFIG_IWLWIFI_LEDS is not set 705 + 706 + # 707 + # Enable WiMAX (Networking options) to see the WiMAX drivers 708 + # 709 + 710 + # 711 + # USB Network Adapters 712 + # 713 + # CONFIG_USB_CATC is not set 714 + # CONFIG_USB_KAWETH is not set 715 + # CONFIG_USB_PEGASUS is not set 716 + # CONFIG_USB_RTL8150 is not set 717 + # CONFIG_USB_USBNET is not set 718 + # CONFIG_WAN is not set 719 + CONFIG_XEN_NETDEV_FRONTEND=y 720 + # CONFIG_FDDI is not set 721 + # CONFIG_HIPPI is not set 722 + # CONFIG_PPP is not set 723 + # CONFIG_SLIP is not set 724 + # CONFIG_NET_FC is not set 725 + CONFIG_NETCONSOLE=y 726 + # CONFIG_NETCONSOLE_DYNAMIC is not set 727 + CONFIG_NETPOLL=y 728 + # CONFIG_NETPOLL_TRAP is not set 729 + CONFIG_NET_POLL_CONTROLLER=y 730 + # CONFIG_ISDN is not set 731 + # CONFIG_PHONE is not set 732 + 733 + # 734 + # Input device support 735 + # 736 + CONFIG_INPUT=y 737 + # CONFIG_INPUT_FF_MEMLESS is not set 738 + # CONFIG_INPUT_POLLDEV is not set 739 + 740 + # 741 + # Userland interfaces 742 + # 743 + CONFIG_INPUT_MOUSEDEV=y 744 + CONFIG_INPUT_MOUSEDEV_PSAUX=y 745 + CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024 746 + CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768 747 + # CONFIG_INPUT_JOYDEV is not set 748 + # CONFIG_INPUT_EVDEV is not set 749 + # CONFIG_INPUT_EVBUG is not set 750 + 751 + # 752 + # Input Device Drivers 753 + # 754 + CONFIG_INPUT_KEYBOARD=y 755 + CONFIG_KEYBOARD_ATKBD=y 756 + # CONFIG_KEYBOARD_SUNKBD is not set 757 + # CONFIG_KEYBOARD_LKKBD is not set 758 + # CONFIG_KEYBOARD_XTKBD is not set 759 + # CONFIG_KEYBOARD_NEWTON is not set 760 + # CONFIG_KEYBOARD_STOWAWAY is not set 761 + CONFIG_INPUT_MOUSE=y 762 + CONFIG_MOUSE_PS2=y 763 + CONFIG_MOUSE_PS2_ALPS=y 764 + CONFIG_MOUSE_PS2_LOGIPS2PP=y 765 + CONFIG_MOUSE_PS2_SYNAPTICS=y 766 + CONFIG_MOUSE_PS2_LIFEBOOK=y 767 + CONFIG_MOUSE_PS2_TRACKPOINT=y 768 + # CONFIG_MOUSE_PS2_ELANTECH is not set 769 + # CONFIG_MOUSE_PS2_TOUCHKIT is not set 770 + # CONFIG_MOUSE_SERIAL is not set 771 + # CONFIG_MOUSE_APPLETOUCH is not set 772 + # CONFIG_MOUSE_BCM5974 is not set 773 + # CONFIG_MOUSE_VSXXXAA is not set 774 + # CONFIG_INPUT_JOYSTICK is not set 775 + # CONFIG_INPUT_TABLET is not set 776 + # CONFIG_INPUT_TOUCHSCREEN is not set 777 + # CONFIG_INPUT_MISC is not set 778 + 779 + # 780 + # Hardware I/O ports 781 + # 782 + CONFIG_SERIO=y 783 + CONFIG_SERIO_I8042=y 784 + # CONFIG_SERIO_SERPORT is not set 785 + # CONFIG_SERIO_PCIPS2 is not set 786 + CONFIG_SERIO_LIBPS2=y 787 + # CONFIG_SERIO_RAW is not set 788 + CONFIG_GAMEPORT=m 789 + # CONFIG_GAMEPORT_NS558 is not set 790 + # CONFIG_GAMEPORT_L4 is not set 791 + # CONFIG_GAMEPORT_EMU10K1 is not set 792 + # CONFIG_GAMEPORT_FM801 is not set 793 + 794 + # 795 + # Character devices 796 + # 797 + CONFIG_VT=y 798 + CONFIG_CONSOLE_TRANSLATIONS=y 799 + CONFIG_VT_CONSOLE=y 800 + CONFIG_HW_CONSOLE=y 801 + # CONFIG_VT_HW_CONSOLE_BINDING is not set 802 + CONFIG_DEVKMEM=y 803 + CONFIG_SERIAL_NONSTANDARD=y 804 + # CONFIG_COMPUTONE is not set 805 + # CONFIG_ROCKETPORT is not set 806 + # CONFIG_CYCLADES is not set 807 + # CONFIG_DIGIEPCA is not set 808 + # CONFIG_MOXA_INTELLIO is not set 809 + # CONFIG_MOXA_SMARTIO is not set 810 + # CONFIG_ISI is not set 811 + # CONFIG_SYNCLINKMP is not set 812 + # CONFIG_SYNCLINK_GT is not 
set 813 + # CONFIG_N_HDLC is not set 814 + # CONFIG_RISCOM8 is not set 815 + # CONFIG_SPECIALIX is not set 816 + # CONFIG_SX is not set 817 + # CONFIG_RIO is not set 818 + # CONFIG_STALDRV is not set 819 + # CONFIG_NOZOMI is not set 820 + 821 + # 822 + # Serial drivers 823 + # 824 + CONFIG_SERIAL_8250=y 825 + CONFIG_SERIAL_8250_CONSOLE=y 826 + CONFIG_SERIAL_8250_PCI=y 827 + CONFIG_SERIAL_8250_PNP=y 828 + CONFIG_SERIAL_8250_NR_UARTS=6 829 + CONFIG_SERIAL_8250_RUNTIME_UARTS=4 830 + CONFIG_SERIAL_8250_EXTENDED=y 831 + CONFIG_SERIAL_8250_SHARE_IRQ=y 832 + # CONFIG_SERIAL_8250_DETECT_IRQ is not set 833 + # CONFIG_SERIAL_8250_RSA is not set 834 + 835 + # 836 + # Non-8250 serial port support 837 + # 838 + CONFIG_SERIAL_CORE=y 839 + CONFIG_SERIAL_CORE_CONSOLE=y 840 + # CONFIG_SERIAL_JSM is not set 841 + CONFIG_UNIX98_PTYS=y 842 + # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set 843 + CONFIG_LEGACY_PTYS=y 844 + CONFIG_LEGACY_PTY_COUNT=256 845 + CONFIG_HVC_DRIVER=y 846 + CONFIG_HVC_IRQ=y 847 + CONFIG_HVC_XEN=y 848 + # CONFIG_IPMI_HANDLER is not set 849 + # CONFIG_HW_RANDOM is not set 850 + CONFIG_EFI_RTC=y 851 + # CONFIG_R3964 is not set 852 + # CONFIG_APPLICOM is not set 853 + CONFIG_RAW_DRIVER=m 854 + CONFIG_MAX_RAW_DEVS=256 855 + CONFIG_HPET=y 856 + CONFIG_HPET_MMAP=y 857 + # CONFIG_HANGCHECK_TIMER is not set 858 + # CONFIG_TCG_TPM is not set 859 + CONFIG_DEVPORT=y 860 + CONFIG_I2C=m 861 + CONFIG_I2C_BOARDINFO=y 862 + # CONFIG_I2C_CHARDEV is not set 863 + CONFIG_I2C_HELPER_AUTO=y 864 + CONFIG_I2C_ALGOBIT=m 865 + 866 + # 867 + # I2C Hardware Bus support 868 + # 869 + 870 + # 871 + # PC SMBus host controller drivers 872 + # 873 + # CONFIG_I2C_ALI1535 is not set 874 + # CONFIG_I2C_ALI1563 is not set 875 + # CONFIG_I2C_ALI15X3 is not set 876 + # CONFIG_I2C_AMD756 is not set 877 + # CONFIG_I2C_AMD8111 is not set 878 + # CONFIG_I2C_I801 is not set 879 + # CONFIG_I2C_ISCH is not set 880 + # CONFIG_I2C_PIIX4 is not set 881 + # CONFIG_I2C_NFORCE2 is not set 882 + # CONFIG_I2C_SIS5595 is not set 883 + # CONFIG_I2C_SIS630 is not set 884 + # CONFIG_I2C_SIS96X is not set 885 + # CONFIG_I2C_VIA is not set 886 + # CONFIG_I2C_VIAPRO is not set 887 + 888 + # 889 + # I2C system bus drivers (mostly embedded / system-on-chip) 890 + # 891 + # CONFIG_I2C_OCORES is not set 892 + # CONFIG_I2C_SIMTEC is not set 893 + 894 + # 895 + # External I2C/SMBus adapter drivers 896 + # 897 + # CONFIG_I2C_PARPORT_LIGHT is not set 898 + # CONFIG_I2C_TAOS_EVM is not set 899 + # CONFIG_I2C_TINY_USB is not set 900 + 901 + # 902 + # Graphics adapter I2C/DDC channel drivers 903 + # 904 + # CONFIG_I2C_VOODOO3 is not set 905 + 906 + # 907 + # Other I2C/SMBus bus drivers 908 + # 909 + # CONFIG_I2C_PCA_PLATFORM is not set 910 + # CONFIG_I2C_STUB is not set 911 + 912 + # 913 + # Miscellaneous I2C Chip support 914 + # 915 + # CONFIG_DS1682 is not set 916 + # CONFIG_AT24 is not set 917 + # CONFIG_SENSORS_EEPROM is not set 918 + # CONFIG_SENSORS_PCF8574 is not set 919 + # CONFIG_PCF8575 is not set 920 + # CONFIG_SENSORS_PCA9539 is not set 921 + # CONFIG_SENSORS_PCF8591 is not set 922 + # CONFIG_SENSORS_MAX6875 is not set 923 + # CONFIG_SENSORS_TSL2550 is not set 924 + # CONFIG_I2C_DEBUG_CORE is not set 925 + # CONFIG_I2C_DEBUG_ALGO is not set 926 + # CONFIG_I2C_DEBUG_BUS is not set 927 + # CONFIG_I2C_DEBUG_CHIP is not set 928 + # CONFIG_SPI is not set 929 + # CONFIG_W1 is not set 930 + CONFIG_POWER_SUPPLY=y 931 + # CONFIG_POWER_SUPPLY_DEBUG is not set 932 + # CONFIG_PDA_POWER is not set 933 + # CONFIG_BATTERY_DS2760 is not set 934 + # 
CONFIG_BATTERY_BQ27x00 is not set 935 + CONFIG_HWMON=y 936 + # CONFIG_HWMON_VID is not set 937 + # CONFIG_SENSORS_AD7414 is not set 938 + # CONFIG_SENSORS_AD7418 is not set 939 + # CONFIG_SENSORS_ADM1021 is not set 940 + # CONFIG_SENSORS_ADM1025 is not set 941 + # CONFIG_SENSORS_ADM1026 is not set 942 + # CONFIG_SENSORS_ADM1029 is not set 943 + # CONFIG_SENSORS_ADM1031 is not set 944 + # CONFIG_SENSORS_ADM9240 is not set 945 + # CONFIG_SENSORS_ADT7462 is not set 946 + # CONFIG_SENSORS_ADT7470 is not set 947 + # CONFIG_SENSORS_ADT7473 is not set 948 + # CONFIG_SENSORS_ATXP1 is not set 949 + # CONFIG_SENSORS_DS1621 is not set 950 + # CONFIG_SENSORS_I5K_AMB is not set 951 + # CONFIG_SENSORS_F71805F is not set 952 + # CONFIG_SENSORS_F71882FG is not set 953 + # CONFIG_SENSORS_F75375S is not set 954 + # CONFIG_SENSORS_GL518SM is not set 955 + # CONFIG_SENSORS_GL520SM is not set 956 + # CONFIG_SENSORS_IT87 is not set 957 + # CONFIG_SENSORS_LM63 is not set 958 + # CONFIG_SENSORS_LM75 is not set 959 + # CONFIG_SENSORS_LM77 is not set 960 + # CONFIG_SENSORS_LM78 is not set 961 + # CONFIG_SENSORS_LM80 is not set 962 + # CONFIG_SENSORS_LM83 is not set 963 + # CONFIG_SENSORS_LM85 is not set 964 + # CONFIG_SENSORS_LM87 is not set 965 + # CONFIG_SENSORS_LM90 is not set 966 + # CONFIG_SENSORS_LM92 is not set 967 + # CONFIG_SENSORS_LM93 is not set 968 + # CONFIG_SENSORS_LTC4245 is not set 969 + # CONFIG_SENSORS_MAX1619 is not set 970 + # CONFIG_SENSORS_MAX6650 is not set 971 + # CONFIG_SENSORS_PC87360 is not set 972 + # CONFIG_SENSORS_PC87427 is not set 973 + # CONFIG_SENSORS_SIS5595 is not set 974 + # CONFIG_SENSORS_DME1737 is not set 975 + # CONFIG_SENSORS_SMSC47M1 is not set 976 + # CONFIG_SENSORS_SMSC47M192 is not set 977 + # CONFIG_SENSORS_SMSC47B397 is not set 978 + # CONFIG_SENSORS_ADS7828 is not set 979 + # CONFIG_SENSORS_THMC50 is not set 980 + # CONFIG_SENSORS_VIA686A is not set 981 + # CONFIG_SENSORS_VT1211 is not set 982 + # CONFIG_SENSORS_VT8231 is not set 983 + # CONFIG_SENSORS_W83781D is not set 984 + # CONFIG_SENSORS_W83791D is not set 985 + # CONFIG_SENSORS_W83792D is not set 986 + # CONFIG_SENSORS_W83793 is not set 987 + # CONFIG_SENSORS_W83L785TS is not set 988 + # CONFIG_SENSORS_W83L786NG is not set 989 + # CONFIG_SENSORS_W83627HF is not set 990 + # CONFIG_SENSORS_W83627EHF is not set 991 + # CONFIG_SENSORS_LIS3LV02D is not set 992 + # CONFIG_HWMON_DEBUG_CHIP is not set 993 + CONFIG_THERMAL=m 994 + # CONFIG_THERMAL_HWMON is not set 995 + # CONFIG_WATCHDOG is not set 996 + CONFIG_SSB_POSSIBLE=y 997 + 998 + # 999 + # Sonics Silicon Backplane 1000 + # 1001 + # CONFIG_SSB is not set 1002 + 1003 + # 1004 + # Multifunction device drivers 1005 + # 1006 + # CONFIG_MFD_CORE is not set 1007 + # CONFIG_MFD_SM501 is not set 1008 + # CONFIG_HTC_PASIC3 is not set 1009 + # CONFIG_MFD_TMIO is not set 1010 + # CONFIG_MFD_WM8400 is not set 1011 + # CONFIG_MFD_WM8350_I2C is not set 1012 + # CONFIG_MFD_PCF50633 is not set 1013 + # CONFIG_REGULATOR is not set 1014 + 1015 + # 1016 + # Multimedia devices 1017 + # 1018 + 1019 + # 1020 + # Multimedia core support 1021 + # 1022 + # CONFIG_VIDEO_DEV is not set 1023 + # CONFIG_DVB_CORE is not set 1024 + # CONFIG_VIDEO_MEDIA is not set 1025 + 1026 + # 1027 + # Multimedia drivers 1028 + # 1029 + CONFIG_DAB=y 1030 + # CONFIG_USB_DABUSB is not set 1031 + 1032 + # 1033 + # Graphics support 1034 + # 1035 + CONFIG_AGP=m 1036 + CONFIG_DRM=m 1037 + CONFIG_DRM_TDFX=m 1038 + CONFIG_DRM_R128=m 1039 + CONFIG_DRM_RADEON=m 1040 + CONFIG_DRM_MGA=m 1041 + CONFIG_DRM_SIS=m 1042 + # 
CONFIG_DRM_VIA is not set 1043 + # CONFIG_DRM_SAVAGE is not set 1044 + # CONFIG_VGASTATE is not set 1045 + # CONFIG_VIDEO_OUTPUT_CONTROL is not set 1046 + # CONFIG_FB is not set 1047 + # CONFIG_BACKLIGHT_LCD_SUPPORT is not set 1048 + 1049 + # 1050 + # Display device support 1051 + # 1052 + # CONFIG_DISPLAY_SUPPORT is not set 1053 + 1054 + # 1055 + # Console display driver support 1056 + # 1057 + CONFIG_VGA_CONSOLE=y 1058 + # CONFIG_VGACON_SOFT_SCROLLBACK is not set 1059 + CONFIG_DUMMY_CONSOLE=y 1060 + # CONFIG_SOUND is not set 1061 + CONFIG_HID_SUPPORT=y 1062 + CONFIG_HID=y 1063 + # CONFIG_HID_DEBUG is not set 1064 + # CONFIG_HIDRAW is not set 1065 + 1066 + # 1067 + # USB Input Devices 1068 + # 1069 + CONFIG_USB_HID=y 1070 + # CONFIG_HID_PID is not set 1071 + # CONFIG_USB_HIDDEV is not set 1072 + 1073 + # 1074 + # Special HID drivers 1075 + # 1076 + CONFIG_HID_COMPAT=y 1077 + CONFIG_HID_A4TECH=y 1078 + CONFIG_HID_APPLE=y 1079 + CONFIG_HID_BELKIN=y 1080 + CONFIG_HID_CHERRY=y 1081 + CONFIG_HID_CHICONY=y 1082 + CONFIG_HID_CYPRESS=y 1083 + CONFIG_HID_EZKEY=y 1084 + CONFIG_HID_GYRATION=y 1085 + CONFIG_HID_LOGITECH=y 1086 + # CONFIG_LOGITECH_FF is not set 1087 + # CONFIG_LOGIRUMBLEPAD2_FF is not set 1088 + CONFIG_HID_MICROSOFT=y 1089 + CONFIG_HID_MONTEREY=y 1090 + CONFIG_HID_NTRIG=y 1091 + CONFIG_HID_PANTHERLORD=y 1092 + # CONFIG_PANTHERLORD_FF is not set 1093 + CONFIG_HID_PETALYNX=y 1094 + CONFIG_HID_SAMSUNG=y 1095 + CONFIG_HID_SONY=y 1096 + CONFIG_HID_SUNPLUS=y 1097 + # CONFIG_GREENASIA_FF is not set 1098 + CONFIG_HID_TOPSEED=y 1099 + # CONFIG_THRUSTMASTER_FF is not set 1100 + # CONFIG_ZEROPLUS_FF is not set 1101 + CONFIG_USB_SUPPORT=y 1102 + CONFIG_USB_ARCH_HAS_HCD=y 1103 + CONFIG_USB_ARCH_HAS_OHCI=y 1104 + CONFIG_USB_ARCH_HAS_EHCI=y 1105 + CONFIG_USB=y 1106 + # CONFIG_USB_DEBUG is not set 1107 + # CONFIG_USB_ANNOUNCE_NEW_DEVICES is not set 1108 + 1109 + # 1110 + # Miscellaneous USB options 1111 + # 1112 + CONFIG_USB_DEVICEFS=y 1113 + CONFIG_USB_DEVICE_CLASS=y 1114 + # CONFIG_USB_DYNAMIC_MINORS is not set 1115 + # CONFIG_USB_SUSPEND is not set 1116 + # CONFIG_USB_OTG is not set 1117 + # CONFIG_USB_MON is not set 1118 + # CONFIG_USB_WUSB is not set 1119 + # CONFIG_USB_WUSB_CBAF is not set 1120 + 1121 + # 1122 + # USB Host Controller Drivers 1123 + # 1124 + # CONFIG_USB_C67X00_HCD is not set 1125 + CONFIG_USB_EHCI_HCD=m 1126 + # CONFIG_USB_EHCI_ROOT_HUB_TT is not set 1127 + # CONFIG_USB_EHCI_TT_NEWSCHED is not set 1128 + # CONFIG_USB_OXU210HP_HCD is not set 1129 + # CONFIG_USB_ISP116X_HCD is not set 1130 + # CONFIG_USB_ISP1760_HCD is not set 1131 + CONFIG_USB_OHCI_HCD=m 1132 + # CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set 1133 + # CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set 1134 + CONFIG_USB_OHCI_LITTLE_ENDIAN=y 1135 + CONFIG_USB_UHCI_HCD=y 1136 + # CONFIG_USB_SL811_HCD is not set 1137 + # CONFIG_USB_R8A66597_HCD is not set 1138 + # CONFIG_USB_WHCI_HCD is not set 1139 + # CONFIG_USB_HWA_HCD is not set 1140 + 1141 + # 1142 + # USB Device Class drivers 1143 + # 1144 + # CONFIG_USB_ACM is not set 1145 + # CONFIG_USB_PRINTER is not set 1146 + # CONFIG_USB_WDM is not set 1147 + # CONFIG_USB_TMC is not set 1148 + 1149 + # 1150 + # NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may also be needed; 1151 + # 1152 + 1153 + # 1154 + # see USB_STORAGE Help for more information 1155 + # 1156 + CONFIG_USB_STORAGE=m 1157 + # CONFIG_USB_STORAGE_DEBUG is not set 1158 + # CONFIG_USB_STORAGE_DATAFAB is not set 1159 + # CONFIG_USB_STORAGE_FREECOM is not set 1160 + # CONFIG_USB_STORAGE_ISD200 is not set 1161 + # 
CONFIG_USB_STORAGE_USBAT is not set 1162 + # CONFIG_USB_STORAGE_SDDR09 is not set 1163 + # CONFIG_USB_STORAGE_SDDR55 is not set 1164 + # CONFIG_USB_STORAGE_JUMPSHOT is not set 1165 + # CONFIG_USB_STORAGE_ALAUDA is not set 1166 + # CONFIG_USB_STORAGE_ONETOUCH is not set 1167 + # CONFIG_USB_STORAGE_KARMA is not set 1168 + # CONFIG_USB_STORAGE_CYPRESS_ATACB is not set 1169 + # CONFIG_USB_LIBUSUAL is not set 1170 + 1171 + # 1172 + # USB Imaging devices 1173 + # 1174 + # CONFIG_USB_MDC800 is not set 1175 + # CONFIG_USB_MICROTEK is not set 1176 + 1177 + # 1178 + # USB port drivers 1179 + # 1180 + # CONFIG_USB_SERIAL is not set 1181 + 1182 + # 1183 + # USB Miscellaneous drivers 1184 + # 1185 + # CONFIG_USB_EMI62 is not set 1186 + # CONFIG_USB_EMI26 is not set 1187 + # CONFIG_USB_ADUTUX is not set 1188 + # CONFIG_USB_SEVSEG is not set 1189 + # CONFIG_USB_RIO500 is not set 1190 + # CONFIG_USB_LEGOTOWER is not set 1191 + # CONFIG_USB_LCD is not set 1192 + # CONFIG_USB_BERRY_CHARGE is not set 1193 + # CONFIG_USB_LED is not set 1194 + # CONFIG_USB_CYPRESS_CY7C63 is not set 1195 + # CONFIG_USB_CYTHERM is not set 1196 + # CONFIG_USB_PHIDGET is not set 1197 + # CONFIG_USB_IDMOUSE is not set 1198 + # CONFIG_USB_FTDI_ELAN is not set 1199 + # CONFIG_USB_APPLEDISPLAY is not set 1200 + # CONFIG_USB_SISUSBVGA is not set 1201 + # CONFIG_USB_LD is not set 1202 + # CONFIG_USB_TRANCEVIBRATOR is not set 1203 + # CONFIG_USB_IOWARRIOR is not set 1204 + # CONFIG_USB_TEST is not set 1205 + # CONFIG_USB_ISIGHTFW is not set 1206 + # CONFIG_USB_VST is not set 1207 + # CONFIG_USB_GADGET is not set 1208 + 1209 + # 1210 + # OTG and related infrastructure 1211 + # 1212 + # CONFIG_UWB is not set 1213 + # CONFIG_MMC is not set 1214 + # CONFIG_MEMSTICK is not set 1215 + # CONFIG_NEW_LEDS is not set 1216 + # CONFIG_ACCESSIBILITY is not set 1217 + # CONFIG_INFINIBAND is not set 1218 + # CONFIG_RTC_CLASS is not set 1219 + # CONFIG_DMADEVICES is not set 1220 + # CONFIG_UIO is not set 1221 + CONFIG_XEN_BALLOON=y 1222 + CONFIG_XEN_SCRUB_PAGES=y 1223 + CONFIG_XENFS=y 1224 + CONFIG_XEN_COMPAT_XENFS=y 1225 + # CONFIG_STAGING is not set 1226 + # CONFIG_MSPEC is not set 1227 + 1228 + # 1229 + # File systems 1230 + # 1231 + CONFIG_EXT2_FS=y 1232 + CONFIG_EXT2_FS_XATTR=y 1233 + CONFIG_EXT2_FS_POSIX_ACL=y 1234 + CONFIG_EXT2_FS_SECURITY=y 1235 + # CONFIG_EXT2_FS_XIP is not set 1236 + CONFIG_EXT3_FS=y 1237 + CONFIG_EXT3_FS_XATTR=y 1238 + CONFIG_EXT3_FS_POSIX_ACL=y 1239 + CONFIG_EXT3_FS_SECURITY=y 1240 + # CONFIG_EXT4_FS is not set 1241 + CONFIG_JBD=y 1242 + CONFIG_FS_MBCACHE=y 1243 + CONFIG_REISERFS_FS=y 1244 + # CONFIG_REISERFS_CHECK is not set 1245 + # CONFIG_REISERFS_PROC_INFO is not set 1246 + CONFIG_REISERFS_FS_XATTR=y 1247 + CONFIG_REISERFS_FS_POSIX_ACL=y 1248 + CONFIG_REISERFS_FS_SECURITY=y 1249 + # CONFIG_JFS_FS is not set 1250 + CONFIG_FS_POSIX_ACL=y 1251 + CONFIG_FILE_LOCKING=y 1252 + CONFIG_XFS_FS=y 1253 + # CONFIG_XFS_QUOTA is not set 1254 + # CONFIG_XFS_POSIX_ACL is not set 1255 + # CONFIG_XFS_RT is not set 1256 + # CONFIG_XFS_DEBUG is not set 1257 + # CONFIG_GFS2_FS is not set 1258 + # CONFIG_OCFS2_FS is not set 1259 + # CONFIG_BTRFS_FS is not set 1260 + CONFIG_DNOTIFY=y 1261 + CONFIG_INOTIFY=y 1262 + CONFIG_INOTIFY_USER=y 1263 + # CONFIG_QUOTA is not set 1264 + CONFIG_AUTOFS_FS=y 1265 + CONFIG_AUTOFS4_FS=y 1266 + # CONFIG_FUSE_FS is not set 1267 + 1268 + # 1269 + # CD-ROM/DVD Filesystems 1270 + # 1271 + CONFIG_ISO9660_FS=m 1272 + CONFIG_JOLIET=y 1273 + # CONFIG_ZISOFS is not set 1274 + CONFIG_UDF_FS=m 1275 + CONFIG_UDF_NLS=y 1276 
+ 1277 + # 1278 + # DOS/FAT/NT Filesystems 1279 + # 1280 + CONFIG_FAT_FS=y 1281 + # CONFIG_MSDOS_FS is not set 1282 + CONFIG_VFAT_FS=y 1283 + CONFIG_FAT_DEFAULT_CODEPAGE=437 1284 + CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1" 1285 + CONFIG_NTFS_FS=m 1286 + # CONFIG_NTFS_DEBUG is not set 1287 + # CONFIG_NTFS_RW is not set 1288 + 1289 + # 1290 + # Pseudo filesystems 1291 + # 1292 + CONFIG_PROC_FS=y 1293 + CONFIG_PROC_KCORE=y 1294 + CONFIG_PROC_SYSCTL=y 1295 + CONFIG_PROC_PAGE_MONITOR=y 1296 + CONFIG_SYSFS=y 1297 + CONFIG_TMPFS=y 1298 + # CONFIG_TMPFS_POSIX_ACL is not set 1299 + CONFIG_HUGETLBFS=y 1300 + CONFIG_HUGETLB_PAGE=y 1301 + # CONFIG_CONFIGFS_FS is not set 1302 + CONFIG_MISC_FILESYSTEMS=y 1303 + # CONFIG_ADFS_FS is not set 1304 + # CONFIG_AFFS_FS is not set 1305 + # CONFIG_HFS_FS is not set 1306 + # CONFIG_HFSPLUS_FS is not set 1307 + # CONFIG_BEFS_FS is not set 1308 + # CONFIG_BFS_FS is not set 1309 + # CONFIG_EFS_FS is not set 1310 + # CONFIG_CRAMFS is not set 1311 + # CONFIG_SQUASHFS is not set 1312 + # CONFIG_VXFS_FS is not set 1313 + # CONFIG_MINIX_FS is not set 1314 + # CONFIG_OMFS_FS is not set 1315 + # CONFIG_HPFS_FS is not set 1316 + # CONFIG_QNX4FS_FS is not set 1317 + # CONFIG_ROMFS_FS is not set 1318 + # CONFIG_SYSV_FS is not set 1319 + # CONFIG_UFS_FS is not set 1320 + CONFIG_NETWORK_FILESYSTEMS=y 1321 + CONFIG_NFS_FS=m 1322 + CONFIG_NFS_V3=y 1323 + # CONFIG_NFS_V3_ACL is not set 1324 + CONFIG_NFS_V4=y 1325 + CONFIG_NFSD=m 1326 + CONFIG_NFSD_V3=y 1327 + # CONFIG_NFSD_V3_ACL is not set 1328 + CONFIG_NFSD_V4=y 1329 + CONFIG_LOCKD=m 1330 + CONFIG_LOCKD_V4=y 1331 + CONFIG_EXPORTFS=m 1332 + CONFIG_NFS_COMMON=y 1333 + CONFIG_SUNRPC=m 1334 + CONFIG_SUNRPC_GSS=m 1335 + # CONFIG_SUNRPC_REGISTER_V4 is not set 1336 + CONFIG_RPCSEC_GSS_KRB5=m 1337 + # CONFIG_RPCSEC_GSS_SPKM3 is not set 1338 + CONFIG_SMB_FS=m 1339 + CONFIG_SMB_NLS_DEFAULT=y 1340 + CONFIG_SMB_NLS_REMOTE="cp437" 1341 + CONFIG_CIFS=m 1342 + # CONFIG_CIFS_STATS is not set 1343 + # CONFIG_CIFS_WEAK_PW_HASH is not set 1344 + # CONFIG_CIFS_XATTR is not set 1345 + # CONFIG_CIFS_DEBUG2 is not set 1346 + # CONFIG_CIFS_EXPERIMENTAL is not set 1347 + # CONFIG_NCP_FS is not set 1348 + # CONFIG_CODA_FS is not set 1349 + # CONFIG_AFS_FS is not set 1350 + 1351 + # 1352 + # Partition Types 1353 + # 1354 + CONFIG_PARTITION_ADVANCED=y 1355 + # CONFIG_ACORN_PARTITION is not set 1356 + # CONFIG_OSF_PARTITION is not set 1357 + # CONFIG_AMIGA_PARTITION is not set 1358 + # CONFIG_ATARI_PARTITION is not set 1359 + # CONFIG_MAC_PARTITION is not set 1360 + CONFIG_MSDOS_PARTITION=y 1361 + # CONFIG_BSD_DISKLABEL is not set 1362 + # CONFIG_MINIX_SUBPARTITION is not set 1363 + # CONFIG_SOLARIS_X86_PARTITION is not set 1364 + # CONFIG_UNIXWARE_DISKLABEL is not set 1365 + # CONFIG_LDM_PARTITION is not set 1366 + CONFIG_SGI_PARTITION=y 1367 + # CONFIG_ULTRIX_PARTITION is not set 1368 + # CONFIG_SUN_PARTITION is not set 1369 + # CONFIG_KARMA_PARTITION is not set 1370 + CONFIG_EFI_PARTITION=y 1371 + # CONFIG_SYSV68_PARTITION is not set 1372 + CONFIG_NLS=y 1373 + CONFIG_NLS_DEFAULT="iso8859-1" 1374 + CONFIG_NLS_CODEPAGE_437=y 1375 + CONFIG_NLS_CODEPAGE_737=m 1376 + CONFIG_NLS_CODEPAGE_775=m 1377 + CONFIG_NLS_CODEPAGE_850=m 1378 + CONFIG_NLS_CODEPAGE_852=m 1379 + CONFIG_NLS_CODEPAGE_855=m 1380 + CONFIG_NLS_CODEPAGE_857=m 1381 + CONFIG_NLS_CODEPAGE_860=m 1382 + CONFIG_NLS_CODEPAGE_861=m 1383 + CONFIG_NLS_CODEPAGE_862=m 1384 + CONFIG_NLS_CODEPAGE_863=m 1385 + CONFIG_NLS_CODEPAGE_864=m 1386 + CONFIG_NLS_CODEPAGE_865=m 1387 + CONFIG_NLS_CODEPAGE_866=m 1388 + 
CONFIG_NLS_CODEPAGE_869=m 1389 + CONFIG_NLS_CODEPAGE_936=m 1390 + CONFIG_NLS_CODEPAGE_950=m 1391 + CONFIG_NLS_CODEPAGE_932=m 1392 + CONFIG_NLS_CODEPAGE_949=m 1393 + CONFIG_NLS_CODEPAGE_874=m 1394 + CONFIG_NLS_ISO8859_8=m 1395 + CONFIG_NLS_CODEPAGE_1250=m 1396 + CONFIG_NLS_CODEPAGE_1251=m 1397 + # CONFIG_NLS_ASCII is not set 1398 + CONFIG_NLS_ISO8859_1=y 1399 + CONFIG_NLS_ISO8859_2=m 1400 + CONFIG_NLS_ISO8859_3=m 1401 + CONFIG_NLS_ISO8859_4=m 1402 + CONFIG_NLS_ISO8859_5=m 1403 + CONFIG_NLS_ISO8859_6=m 1404 + CONFIG_NLS_ISO8859_7=m 1405 + CONFIG_NLS_ISO8859_9=m 1406 + CONFIG_NLS_ISO8859_13=m 1407 + CONFIG_NLS_ISO8859_14=m 1408 + CONFIG_NLS_ISO8859_15=m 1409 + CONFIG_NLS_KOI8_R=m 1410 + CONFIG_NLS_KOI8_U=m 1411 + CONFIG_NLS_UTF8=m 1412 + # CONFIG_DLM is not set 1413 + 1414 + # 1415 + # Kernel hacking 1416 + # 1417 + # CONFIG_PRINTK_TIME is not set 1418 + CONFIG_ENABLE_WARN_DEPRECATED=y 1419 + CONFIG_ENABLE_MUST_CHECK=y 1420 + CONFIG_FRAME_WARN=2048 1421 + CONFIG_MAGIC_SYSRQ=y 1422 + # CONFIG_UNUSED_SYMBOLS is not set 1423 + # CONFIG_DEBUG_FS is not set 1424 + # CONFIG_HEADERS_CHECK is not set 1425 + CONFIG_DEBUG_KERNEL=y 1426 + # CONFIG_DEBUG_SHIRQ is not set 1427 + CONFIG_DETECT_SOFTLOCKUP=y 1428 + # CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set 1429 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0 1430 + CONFIG_SCHED_DEBUG=y 1431 + # CONFIG_SCHEDSTATS is not set 1432 + # CONFIG_TIMER_STATS is not set 1433 + # CONFIG_DEBUG_OBJECTS is not set 1434 + # CONFIG_SLUB_DEBUG_ON is not set 1435 + # CONFIG_SLUB_STATS is not set 1436 + # CONFIG_DEBUG_RT_MUTEXES is not set 1437 + # CONFIG_RT_MUTEX_TESTER is not set 1438 + # CONFIG_DEBUG_SPINLOCK is not set 1439 + CONFIG_DEBUG_MUTEXES=y 1440 + # CONFIG_DEBUG_SPINLOCK_SLEEP is not set 1441 + # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set 1442 + # CONFIG_DEBUG_KOBJECT is not set 1443 + # CONFIG_DEBUG_INFO is not set 1444 + # CONFIG_DEBUG_VM is not set 1445 + # CONFIG_DEBUG_WRITECOUNT is not set 1446 + CONFIG_DEBUG_MEMORY_INIT=y 1447 + # CONFIG_DEBUG_LIST is not set 1448 + # CONFIG_DEBUG_SG is not set 1449 + # CONFIG_DEBUG_NOTIFIERS is not set 1450 + # CONFIG_BOOT_PRINTK_DELAY is not set 1451 + # CONFIG_RCU_TORTURE_TEST is not set 1452 + # CONFIG_RCU_CPU_STALL_DETECTOR is not set 1453 + # CONFIG_BACKTRACE_SELF_TEST is not set 1454 + # CONFIG_DEBUG_BLOCK_EXT_DEVT is not set 1455 + # CONFIG_FAULT_INJECTION is not set 1456 + # CONFIG_SYSCTL_SYSCALL_CHECK is not set 1457 + 1458 + # 1459 + # Tracers 1460 + # 1461 + # CONFIG_SCHED_TRACER is not set 1462 + # CONFIG_CONTEXT_SWITCH_TRACER is not set 1463 + # CONFIG_BOOT_TRACER is not set 1464 + # CONFIG_TRACE_BRANCH_PROFILING is not set 1465 + # CONFIG_DYNAMIC_PRINTK_DEBUG is not set 1466 + # CONFIG_SAMPLES is not set 1467 + CONFIG_IA64_GRANULE_16MB=y 1468 + # CONFIG_IA64_GRANULE_64MB is not set 1469 + # CONFIG_IA64_PRINT_HAZARDS is not set 1470 + # CONFIG_DISABLE_VHPT is not set 1471 + # CONFIG_IA64_DEBUG_CMPXCHG is not set 1472 + # CONFIG_IA64_DEBUG_IRQ is not set 1473 + 1474 + # 1475 + # Security options 1476 + # 1477 + # CONFIG_KEYS is not set 1478 + # CONFIG_SECURITY is not set 1479 + # CONFIG_SECURITYFS is not set 1480 + # CONFIG_SECURITY_FILE_CAPABILITIES is not set 1481 + CONFIG_CRYPTO=y 1482 + 1483 + # 1484 + # Crypto core or helper 1485 + # 1486 + # CONFIG_CRYPTO_FIPS is not set 1487 + CONFIG_CRYPTO_ALGAPI=y 1488 + CONFIG_CRYPTO_ALGAPI2=y 1489 + CONFIG_CRYPTO_AEAD2=y 1490 + CONFIG_CRYPTO_BLKCIPHER=m 1491 + CONFIG_CRYPTO_BLKCIPHER2=y 1492 + CONFIG_CRYPTO_HASH=y 1493 + CONFIG_CRYPTO_HASH2=y 1494 + 
CONFIG_CRYPTO_RNG2=y 1495 + CONFIG_CRYPTO_MANAGER=m 1496 + CONFIG_CRYPTO_MANAGER2=y 1497 + # CONFIG_CRYPTO_GF128MUL is not set 1498 + # CONFIG_CRYPTO_NULL is not set 1499 + # CONFIG_CRYPTO_CRYPTD is not set 1500 + # CONFIG_CRYPTO_AUTHENC is not set 1501 + # CONFIG_CRYPTO_TEST is not set 1502 + 1503 + # 1504 + # Authenticated Encryption with Associated Data 1505 + # 1506 + # CONFIG_CRYPTO_CCM is not set 1507 + # CONFIG_CRYPTO_GCM is not set 1508 + # CONFIG_CRYPTO_SEQIV is not set 1509 + 1510 + # 1511 + # Block modes 1512 + # 1513 + CONFIG_CRYPTO_CBC=m 1514 + # CONFIG_CRYPTO_CTR is not set 1515 + # CONFIG_CRYPTO_CTS is not set 1516 + CONFIG_CRYPTO_ECB=m 1517 + # CONFIG_CRYPTO_LRW is not set 1518 + CONFIG_CRYPTO_PCBC=m 1519 + # CONFIG_CRYPTO_XTS is not set 1520 + 1521 + # 1522 + # Hash modes 1523 + # 1524 + # CONFIG_CRYPTO_HMAC is not set 1525 + # CONFIG_CRYPTO_XCBC is not set 1526 + 1527 + # 1528 + # Digest 1529 + # 1530 + # CONFIG_CRYPTO_CRC32C is not set 1531 + # CONFIG_CRYPTO_MD4 is not set 1532 + CONFIG_CRYPTO_MD5=y 1533 + # CONFIG_CRYPTO_MICHAEL_MIC is not set 1534 + # CONFIG_CRYPTO_RMD128 is not set 1535 + # CONFIG_CRYPTO_RMD160 is not set 1536 + # CONFIG_CRYPTO_RMD256 is not set 1537 + # CONFIG_CRYPTO_RMD320 is not set 1538 + # CONFIG_CRYPTO_SHA1 is not set 1539 + # CONFIG_CRYPTO_SHA256 is not set 1540 + # CONFIG_CRYPTO_SHA512 is not set 1541 + # CONFIG_CRYPTO_TGR192 is not set 1542 + # CONFIG_CRYPTO_WP512 is not set 1543 + 1544 + # 1545 + # Ciphers 1546 + # 1547 + # CONFIG_CRYPTO_AES is not set 1548 + # CONFIG_CRYPTO_ANUBIS is not set 1549 + # CONFIG_CRYPTO_ARC4 is not set 1550 + # CONFIG_CRYPTO_BLOWFISH is not set 1551 + # CONFIG_CRYPTO_CAMELLIA is not set 1552 + # CONFIG_CRYPTO_CAST5 is not set 1553 + # CONFIG_CRYPTO_CAST6 is not set 1554 + CONFIG_CRYPTO_DES=m 1555 + # CONFIG_CRYPTO_FCRYPT is not set 1556 + # CONFIG_CRYPTO_KHAZAD is not set 1557 + # CONFIG_CRYPTO_SALSA20 is not set 1558 + # CONFIG_CRYPTO_SEED is not set 1559 + # CONFIG_CRYPTO_SERPENT is not set 1560 + # CONFIG_CRYPTO_TEA is not set 1561 + # CONFIG_CRYPTO_TWOFISH is not set 1562 + 1563 + # 1564 + # Compression 1565 + # 1566 + # CONFIG_CRYPTO_DEFLATE is not set 1567 + # CONFIG_CRYPTO_LZO is not set 1568 + 1569 + # 1570 + # Random Number Generation 1571 + # 1572 + # CONFIG_CRYPTO_ANSI_CPRNG is not set 1573 + CONFIG_CRYPTO_HW=y 1574 + # CONFIG_CRYPTO_DEV_HIFN_795X is not set 1575 + CONFIG_HAVE_KVM=y 1576 + CONFIG_VIRTUALIZATION=y 1577 + # CONFIG_KVM is not set 1578 + # CONFIG_VIRTIO_PCI is not set 1579 + # CONFIG_VIRTIO_BALLOON is not set 1580 + 1581 + # 1582 + # Library routines 1583 + # 1584 + CONFIG_BITREVERSE=y 1585 + CONFIG_GENERIC_FIND_LAST_BIT=y 1586 + # CONFIG_CRC_CCITT is not set 1587 + # CONFIG_CRC16 is not set 1588 + # CONFIG_CRC_T10DIF is not set 1589 + CONFIG_CRC_ITU_T=m 1590 + CONFIG_CRC32=y 1591 + # CONFIG_CRC7 is not set 1592 + # CONFIG_LIBCRC32C is not set 1593 + CONFIG_PLIST=y 1594 + CONFIG_HAS_IOMEM=y 1595 + CONFIG_HAS_IOPORT=y 1596 + CONFIG_HAS_DMA=y 1597 + CONFIG_GENERIC_HARDIRQS=y 1598 + CONFIG_GENERIC_IRQ_PROBE=y 1599 + CONFIG_GENERIC_PENDING_IRQ=y 1600 + CONFIG_IRQ_PER_CPU=y 1601 + # CONFIG_IOMMU_API is not set
+4
arch/ia64/include/asm/kvm.h
··· 25 25 26 26 #include <linux/ioctl.h> 27 27 28 + /* Select x86 specific features in <linux/kvm.h> */ 29 + #define __KVM_HAVE_IOAPIC 30 + #define __KVM_HAVE_DEVICE_ASSIGNMENT 31 + 28 32 /* Architectural interrupt line count. */ 29 33 #define KVM_NR_INTERRUPTS 256 30 34
-4
arch/ia64/include/asm/mmzone.h
··· 31 31 #endif 32 32 } 33 33 34 - #ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID 35 - extern int early_pfn_to_nid(unsigned long pfn); 36 - #endif 37 - 38 34 #ifdef CONFIG_IA64_DIG /* DIG systems are small */ 39 35 # define MAX_PHYSNODE_ID 8 40 36 # define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
+1 -1
arch/ia64/include/asm/sn/bte.h
··· 39 39 /* BTE status register only supports 16 bits for length field */ 40 40 #define BTE_LEN_BITS (16) 41 41 #define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1) 42 - #define BTE_MAX_XFER ((1 << BTE_LEN_BITS) * L1_CACHE_BYTES) 42 + #define BTE_MAX_XFER (BTE_LEN_MASK << L1_CACHE_SHIFT) 43 43 44 44 45 45 /* Define hardware */
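The new definition also fixes an off-by-one: the 16-bit BTE length field can encode at most BTE_LEN_MASK (65535) cache lines, while the old macro advertised 65536. A quick worked check, assuming 128-byte L1 cache lines (L1_CACHE_SHIFT == 7, as on ia64):

        /* old: (1 << 16) * 128  == 8388608 bytes -> 65536 lines,
         *      one more than the length field can hold */
        /* new: 0xffff << 7      == 8388480 bytes -> 65535 lines,
         *      the true hardware maximum */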
+3 -2
arch/ia64/kernel/smpboot.c
··· 736 736 return -EBUSY; 737 737 } 738 738 739 + cpu_clear(cpu, cpu_online_map); 740 + 739 741 if (migrate_platform_irqs(cpu)) { 740 742 cpu_set(cpu, cpu_online_map); 741 - return (-EBUSY); 743 + return -EBUSY; 742 744 } 743 745 744 746 remove_siblinginfo(cpu); 745 747 fixup_irqs(); 746 - cpu_clear(cpu, cpu_online_map); 747 748 local_flush_tlb_all(); 748 749 cpu_clear(cpu, cpu_callin_map); 749 750 return 0;
+4
arch/ia64/kvm/kvm-ia64.c
··· 1337 1337 } 1338 1338 } 1339 1339 1340 + void kvm_arch_sync_events(struct kvm *kvm) 1341 + { 1342 + } 1343 + 1340 1344 void kvm_arch_destroy_vm(struct kvm *kvm) 1341 1345 { 1342 1346 kvm_iommu_unmap_guest(kvm);
+9 -8
arch/ia64/kvm/process.c
··· 455 455 if (!vmm_fpswa_interface) 456 456 return (fpswa_ret_t) {-1, 0, 0, 0}; 457 457 458 - /* 459 - * Just let fpswa driver to use hardware fp registers. 460 - * No fp register is valid in memory. 461 - */ 462 458 memset(&fp_state, 0, sizeof(fp_state_t)); 463 459 464 460 /* 461 + * compute fp_state. only FP registers f6 - f11 are used by the 462 + * vmm, so set those bits in the mask and set the low volatile 463 + * pointer to point to these registers. 464 + */ 465 + fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */ 466 + 467 + fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6; 468 + 469 + /* 465 470 * unsigned long (*EFI_FPSWA) ( 466 471 * unsigned long trap_type, 467 472 * void *Bundle, ··· 550 545 status = vmm_handle_fpu_swa(0, regs, isr); 551 546 if (!status) 552 547 return ; 553 - else if (-EAGAIN == status) { 554 - vcpu_decrement_iip(vcpu); 555 - return ; 556 - } 557 548 break; 558 549 } 559 550
+2 -2
arch/ia64/mm/numa.c
··· 58 58 * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where 59 59 * the section resides. 60 60 */ 61 - int early_pfn_to_nid(unsigned long pfn) 61 + int __meminit __early_pfn_to_nid(unsigned long pfn) 62 62 { 63 63 int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec; 64 64 ··· 70 70 return node_memblk[i].nid; 71 71 } 72 72 73 - return 0; 73 + return -1; 74 74 } 75 75 76 76 #ifdef CONFIG_MEMORY_HOTPLUG
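The rename plus the -1 return track a split between the arch hook and a generic wrapper: the hook now reports "no node found" instead of silently claiming node 0, and common code chooses the fallback. A minimal sketch of the assumed wrapper (the real one lives in generic mm code, outside this diff):

        int early_pfn_to_nid(unsigned long pfn)
        {
                int nid = __early_pfn_to_nid(pfn);

                if (nid < 0)            /* pfn is in no recorded memblk */
                        nid = 0;        /* fall back to node 0 */
                return nid;
        }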
+4 -3
arch/ia64/sn/kernel/bte.c
··· 97 97 return BTE_SUCCESS; 98 98 } 99 99 100 - BUG_ON((len & L1_CACHE_MASK) || 101 - (src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK)); 102 - BUG_ON(!(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT))); 100 + BUG_ON(len & L1_CACHE_MASK); 101 + BUG_ON(src & L1_CACHE_MASK); 102 + BUG_ON(dest & L1_CACHE_MASK); 103 + BUG_ON(len > BTE_MAX_XFER); 103 104 104 105 /* 105 106 * Start with interface corresponding to cpu number
+1 -2
arch/ia64/xen/Kconfig
··· 8 8 depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL 9 9 select XEN_XENCOMM 10 10 select NO_IDLE_HZ 11 - 12 - # those are required to save/restore. 11 + # followings are required to save/restore. 13 12 select ARCH_SUSPEND_POSSIBLE 14 13 select SUSPEND 15 14 select PM_SLEEP
+2 -2
arch/ia64/xen/xen_pv_ops.c
··· 153 153 xen_setup_vcpu_info_placement(); 154 154 } 155 155 156 - static const struct pv_init_ops xen_init_ops __initdata = { 156 + static const struct pv_init_ops xen_init_ops __initconst = { 157 157 .banner = xen_banner, 158 158 159 159 .reserve_memory = xen_reserve_memory, ··· 337 337 HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &apic_op); 338 338 } 339 339 340 - static const struct pv_iosapic_ops xen_iosapic_ops __initdata = { 340 + static const struct pv_iosapic_ops xen_iosapic_ops __initconst = { 341 341 .pcat_compat_init = xen_pcat_compat_init, 342 342 .__get_irq_chip = xen_iosapic_get_irq_chip, 343 343
+1
arch/mn10300/Kconfig
··· 7 7 8 8 config MN10300 9 9 def_bool y 10 + select HAVE_OPROFILE 10 11 11 12 config AM33 12 13 def_bool y
+1 -1
arch/mn10300/unit-asb2305/pci.c
··· 173 173 BRIDGEREGB(where) = value; 174 174 } else { 175 175 if (bus->number == 0 && 176 - (devfn == PCI_DEVFN(2, 0) && devfn == PCI_DEVFN(3, 0)) 176 + (devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0)) 177 177 ) 178 178 __pcidebug("<= %02x", bus, devfn, where, value); 179 179 CONFIG_ADDRESS = CONFIG_CMD(bus, devfn, where);
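The one-character change matters because no devfn can equal two distinct encodings at once, so the old condition was unsatisfiable and the debug print was dead code. A standalone check using the PCI_DEVFN() encoding from <linux/pci.h> (variable names here are illustrative):

        #define PCI_DEVFN(slot, func)   ((((slot) & 0x1f) << 3) | ((func) & 0x07))

        unsigned int devfn = PCI_DEVFN(3, 0);
        int hit_old = devfn == PCI_DEVFN(2, 0) && devfn == PCI_DEVFN(3, 0); /* always 0 */
        int hit_new = devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0); /* 1 for slot 2 or 3 */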
+1 -1
arch/powerpc/include/asm/pgtable-4k.h
··· 60 60 /* It should be preserving the high 48 bits and then specifically */ 61 61 /* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */ 62 62 #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \ 63 - _PAGE_HPTEFLAGS) 63 + _PAGE_HPTEFLAGS | _PAGE_SPECIAL) 64 64 65 65 /* Bits to mask out from a PMD to get to the PTE page */ 66 66 #define PMD_MASKED_BITS 0
+1 -1
arch/powerpc/include/asm/pgtable-64k.h
··· 114 114 * pgprot changes 115 115 */ 116 116 #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \ 117 - _PAGE_ACCESSED) 117 + _PAGE_ACCESSED | _PAGE_SPECIAL) 118 118 119 119 /* Bits to mask out from a PMD to get to the PTE page */ 120 120 #define PMD_MASKED_BITS 0x1ff
+2 -1
arch/powerpc/include/asm/pgtable-ppc32.h
··· 429 429 #define PMD_PAGE_SIZE(pmd) bad_call_to_PMD_PAGE_SIZE() 430 430 #endif 431 431 432 - #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY) 432 + #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \ 433 + _PAGE_SPECIAL) 433 434 434 435 435 436 #define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
+6 -1
arch/powerpc/kernel/align.c
··· 646 646 unsigned int areg, struct pt_regs *regs, 647 647 unsigned int flags, unsigned int length) 648 648 { 649 - char *ptr = (char *) &current->thread.TS_FPR(reg); 649 + char *ptr; 650 650 int ret = 0; 651 651 652 652 flush_vsx_to_thread(current); 653 + 654 + if (reg < 32) 655 + ptr = (char *) &current->thread.TS_FPR(reg); 656 + else 657 + ptr = (char *) &current->thread.vr[reg - 32]; 653 658 654 659 if (flags & ST) 655 660 ret = __copy_to_user(addr, ptr, length);
+4
arch/powerpc/kvm/powerpc.c
··· 125 125 } 126 126 } 127 127 128 + void kvm_arch_sync_events(struct kvm *kvm) 129 + { 130 + } 131 + 128 132 void kvm_arch_destroy_vm(struct kvm *kvm) 129 133 { 130 134 kvmppc_free_vcpus(kvm);
+3 -2
arch/powerpc/mm/numa.c
··· 19 19 #include <linux/notifier.h> 20 20 #include <linux/lmb.h> 21 21 #include <linux/of.h> 22 + #include <linux/pfn.h> 22 23 #include <asm/sparsemem.h> 23 24 #include <asm/prom.h> 24 25 #include <asm/system.h> ··· 883 882 unsigned long physbase = lmb.reserved.region[i].base; 884 883 unsigned long size = lmb.reserved.region[i].size; 885 884 unsigned long start_pfn = physbase >> PAGE_SHIFT; 886 - unsigned long end_pfn = ((physbase + size) >> PAGE_SHIFT); 885 + unsigned long end_pfn = PFN_UP(physbase + size); 887 886 struct node_active_region node_ar; 888 887 unsigned long node_end_pfn = node->node_start_pfn + 889 888 node->node_spanned_pages; ··· 909 908 */ 910 909 if (end_pfn > node_ar.end_pfn) 911 910 reserve_size = (node_ar.end_pfn << PAGE_SHIFT) 912 - - (start_pfn << PAGE_SHIFT); 911 + - physbase; 913 912 /* 914 913 * Only worry about *this* node, others may not 915 914 * yet have valid NODE_DATA().
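PFN_UP() rounds up to the next page frame where the old shift rounded down, so a reservation whose end was not page aligned could leave its tail page uncovered; the second hunk likewise subtracts the exact physbase rather than its rounded-down page address. A worked example, assuming 4K pages (PFN_UP(x) is ((x) + PAGE_SIZE - 1) >> PAGE_SHIFT, from <linux/pfn.h>):

        /* physbase = 0x3000, size = 0x1800: the reservation ends at 0x4800 */
        unsigned long end_old = (0x3000 + 0x1800) >> 12;  /* 4: misses the tail page */
        unsigned long end_new = PFN_UP(0x3000 + 0x1800);  /* 5: covers 0x4000-0x4fff */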
+1 -1
arch/powerpc/platforms/ps3/mm.c
··· 328 328 return result; 329 329 } 330 330 331 - core_initcall(ps3_mm_add_memory); 331 + device_initcall(ps3_mm_add_memory); 332 332 333 333 /*============================================================================*/ 334 334 /* dma routines */
+1 -1
arch/s390/include/asm/cputime.h
··· 145 145 value->tv_usec = rp.subreg.even / 4096; 146 146 value->tv_sec = rp.subreg.odd; 147 147 #else 148 - value->tv_usec = cputime % 4096000000ULL; 148 + value->tv_usec = (cputime % 4096000000ULL) / 4096; 149 149 value->tv_sec = cputime / 4096000000ULL; 150 150 #endif 151 151 }
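The missing division is easiest to see with units: the s390 CPU timer counts in 1/4096-microsecond steps, i.e. 4096000000 ticks per second, so the remainder of the tv_sec division is still in ticks and needs another /4096 to become microseconds. Worked through for one and a half seconds of cputime:

        unsigned long long cputime = 6144000000ULL;     /* 1.5 s in ticks */

        unsigned long long sec  = cputime / 4096000000ULL;             /* 1 */
        unsigned long long usec = (cputime % 4096000000ULL) / 4096;    /* 500000 */
        /* the old code yielded tv_usec = 2048000000, far outside a
         * valid timeval */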
+2
arch/s390/include/asm/setup.h
··· 43 43 44 44 extern struct mem_chunk memory_chunk[]; 45 45 extern unsigned long real_memory_size; 46 + extern int memory_end_set; 47 + extern unsigned long memory_end; 46 48 47 49 void detect_memory_layout(struct mem_chunk chunk[]); 48 50
+7 -2
arch/s390/kernel/setup.c
··· 82 82 83 83 struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS]; 84 84 volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */ 85 - static unsigned long __initdata memory_end; 85 + 86 + int __initdata memory_end_set; 87 + unsigned long __initdata memory_end; 86 88 87 89 /* 88 90 * This is set up by the setup-routine at boot-time ··· 283 281 static int __init early_parse_mem(char *p) 284 282 { 285 283 memory_end = memparse(p, &p); 284 + memory_end_set = 1; 286 285 return 0; 287 286 } 288 287 early_param("mem", early_parse_mem); ··· 511 508 int i; 512 509 513 510 #if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE) 514 - if (ipl_info.type == IPL_TYPE_FCP_DUMP) 511 + if (ipl_info.type == IPL_TYPE_FCP_DUMP) { 515 512 memory_end = ZFCPDUMP_HSA_SIZE; 513 + memory_end_set = 1; 514 + } 516 515 #endif 517 516 memory_size = 0; 518 517 memory_end &= PAGE_MASK;
+4
arch/s390/kvm/kvm-s390.c
··· 212 212 } 213 213 } 214 214 215 + void kvm_arch_sync_events(struct kvm *kvm) 216 + { 217 + } 218 + 215 219 void kvm_arch_destroy_vm(struct kvm *kvm) 216 220 { 217 221 kvm_free_vcpus(kvm);
+3 -3
arch/um/drivers/vde_user.c
··· 78 78 { 79 79 struct vde_open_args *args; 80 80 81 - vpri->args = kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL); 81 + vpri->args = uml_kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL); 82 82 if (vpri->args == NULL) { 83 83 printk(UM_KERN_ERR "vde_init_libstuff - vde_open_args " 84 84 "allocation failed"); ··· 91 91 args->group = init->group; 92 92 args->mode = init->mode ? init->mode : 0700; 93 93 94 - args->port ? printk(UM_KERN_INFO "port %d", args->port) : 95 - printk(UM_KERN_INFO "undefined port"); 94 + args->port ? printk("port %d", args->port) : 95 + printk("undefined port"); 96 96 } 97 97 98 98 int vde_user_read(void *conn, void *buf, int len)
+2 -22
arch/x86/Kconfig.debug
··· 174 174 Add a simple leak tracer to the IOMMU code. This is useful when you 175 175 are debugging a buggy device driver that leaks IOMMU mappings. 176 176 177 - config MMIOTRACE 178 - bool "Memory mapped IO tracing" 179 - depends on DEBUG_KERNEL && PCI 180 - select TRACING 181 - help 182 - Mmiotrace traces Memory Mapped I/O access and is meant for 183 - debugging and reverse engineering. It is called from the ioremap 184 - implementation and works via page faults. Tracing is disabled by 185 - default and can be enabled at run-time. 186 - 187 - See Documentation/tracers/mmiotrace.txt. 188 - If you are not helping to develop drivers, say N. 189 - 190 - config MMIOTRACE_TEST 191 - tristate "Test module for mmiotrace" 192 - depends on MMIOTRACE && m 193 - help 194 - This is a dumb module for testing mmiotrace. It is very dangerous 195 - as it will write garbage to IO memory starting at a given address. 196 - However, it should be safe to use on e.g. unused portion of VRAM. 197 - 198 - Say N, unless you absolutely know what you are doing. 177 + config HAVE_MMIOTRACE_SUPPORT 178 + def_bool y 199 179 200 180 # 201 181 # IO delay types:
+7
arch/x86/include/asm/kvm.h
··· 9 9 #include <linux/types.h> 10 10 #include <linux/ioctl.h> 11 11 12 + /* Select x86 specific features in <linux/kvm.h> */ 13 + #define __KVM_HAVE_PIT 14 + #define __KVM_HAVE_IOAPIC 15 + #define __KVM_HAVE_DEVICE_ASSIGNMENT 16 + #define __KVM_HAVE_MSI 17 + #define __KVM_HAVE_USER_NMI 18 + 12 19 /* Architectural interrupt line count. */ 13 20 #define KVM_NR_INTERRUPTS 256 14 21
-2
arch/x86/include/asm/mmzone_32.h
··· 32 32 get_memcfg_numa_flat(); 33 33 } 34 34 35 - extern int early_pfn_to_nid(unsigned long pfn); 36 - 37 35 extern void resume_map_numa_kva(pgd_t *pgd); 38 36 39 37 #else /* !CONFIG_NUMA */
-2
arch/x86/include/asm/mmzone_64.h
··· 40 40 #define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \ 41 41 NODE_DATA(nid)->node_spanned_pages) 42 42 43 - extern int early_pfn_to_nid(unsigned long pfn); 44 - 45 43 #ifdef CONFIG_NUMA_EMU 46 44 #define FAKE_NODE_MIN_SIZE (64 * 1024 * 1024) 47 45 #define FAKE_NODE_MIN_HASH_MASK (~(FAKE_NODE_MIN_SIZE - 1UL))
-1
arch/x86/include/asm/page.h
··· 57 57 typedef struct { pgprotval_t pgprot; } pgprot_t; 58 58 59 59 extern int page_is_ram(unsigned long pagenr); 60 - extern int pagerange_is_ram(unsigned long start, unsigned long end); 61 60 extern int devmem_is_allowed(unsigned long pagenr); 62 61 extern void map_devmem(unsigned long pfn, unsigned long size, 63 62 pgprot_t vma_prot);
+2 -15
arch/x86/include/asm/paravirt.h
··· 1352 1352 PVOP_VCALL0(pv_cpu_ops.lazy_mode.leave); 1353 1353 } 1354 1354 1355 - static inline void arch_flush_lazy_cpu_mode(void) 1356 - { 1357 - if (unlikely(paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU)) { 1358 - arch_leave_lazy_cpu_mode(); 1359 - arch_enter_lazy_cpu_mode(); 1360 - } 1361 - } 1362 - 1355 + void arch_flush_lazy_cpu_mode(void); 1363 1356 1364 1357 #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE 1365 1358 static inline void arch_enter_lazy_mmu_mode(void) ··· 1365 1372 PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave); 1366 1373 } 1367 1374 1368 - static inline void arch_flush_lazy_mmu_mode(void) 1369 - { 1370 - if (unlikely(paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU)) { 1371 - arch_leave_lazy_mmu_mode(); 1372 - arch_enter_lazy_mmu_mode(); 1373 - } 1374 - } 1375 + void arch_flush_lazy_mmu_mode(void); 1375 1376 1376 1377 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx, 1377 1378 unsigned long phys, pgprot_t flags)
+10 -20
arch/x86/kernel/acpi/wakeup_64.S
··· 13 13 * Hooray, we are in Long 64-bit mode (but still running in low memory) 14 14 */ 15 15 ENTRY(wakeup_long64) 16 - wakeup_long64: 17 16 movq saved_magic, %rax 18 17 movq $0x123456789abcdef0, %rdx 19 18 cmpq %rdx, %rax ··· 33 34 34 35 movq saved_rip, %rax 35 36 jmp *%rax 37 + ENDPROC(wakeup_long64) 36 38 37 39 bogus_64_magic: 38 40 jmp bogus_64_magic 39 41 40 - .align 2 41 - .p2align 4,,15 42 - .globl do_suspend_lowlevel 43 - .type do_suspend_lowlevel,@function 44 - do_suspend_lowlevel: 45 - .LFB5: 42 + ENTRY(do_suspend_lowlevel) 46 43 subq $8, %rsp 47 44 xorl %eax, %eax 48 45 call save_processor_state ··· 62 67 pushfq 63 68 popq pt_regs_flags(%rax) 64 69 65 - movq $.L97, saved_rip(%rip) 70 + movq $resume_point, saved_rip(%rip) 66 71 67 72 movq %rsp, saved_rsp 68 73 movq %rbp, saved_rbp ··· 73 78 addq $8, %rsp 74 79 movl $3, %edi 75 80 xorl %eax, %eax 76 - jmp acpi_enter_sleep_state 77 - .L97: 78 - .p2align 4,,7 79 - .L99: 80 - .align 4 81 - movl $24, %eax 82 - movw %ax, %ds 81 + call acpi_enter_sleep_state 82 + /* in case something went wrong, restore the machine status and go on */ 83 + jmp resume_point 83 84 85 + .align 4 86 + resume_point: 84 87 /* We don't restore %rax, it must be 0 anyway */ 85 88 movq $saved_context, %rax 86 89 movq saved_context_cr4(%rax), %rbx ··· 110 117 xorl %eax, %eax 111 118 addq $8, %rsp 112 119 jmp restore_processor_state 113 - .LFE5: 114 - .Lfe5: 115 - .size do_suspend_lowlevel, .Lfe5-do_suspend_lowlevel 116 - 120 + ENDPROC(do_suspend_lowlevel) 121 + 117 122 .data 118 - ALIGN 119 123 ENTRY(saved_rbp) .quad 0 120 124 ENTRY(saved_rsi) .quad 0 121 125 ENTRY(saved_rdi) .quad 0
+1 -1
arch/x86/kernel/apic.c
··· 862 862 } 863 863 864 864 /* lets not touch this if we didn't frob it */ 865 - #if defined(CONFIG_X86_MCE_P4THERMAL) || defined(X86_MCE_INTEL) 865 + #if defined(CONFIG_X86_MCE_P4THERMAL) || defined(CONFIG_X86_MCE_INTEL) 866 866 if (maxlvt >= 5) { 867 867 v = apic_read(APIC_LVTTHMR); 868 868 apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
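A one-character bug class worth flagging: Kconfig options reach the C preprocessor only with their CONFIG_ prefix, so the bare X86_MCE_INTEL test could never be true and this save/mask block was silently compiled out on kernels selecting only that option:

        #if defined(X86_MCE_INTEL)              /* never defined -> dead code */
        #endif
        #if defined(CONFIG_X86_MCE_INTEL)       /* 1 when the option is =y */
        #endif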
+7 -5
arch/x86/kernel/cpu/cpufreq/powernow-k8.c
··· 1157 1157 data->cpu = pol->cpu; 1158 1158 data->currpstate = HW_PSTATE_INVALID; 1159 1159 1160 - rc = powernow_k8_cpu_init_acpi(data); 1161 - if (rc) { 1160 + if (powernow_k8_cpu_init_acpi(data)) { 1162 1161 /* 1163 1162 * Use the PSB BIOS structure. This is only availabe on 1164 1163 * an UP version, and is deprecated by AMD. ··· 1175 1176 "ACPI maintainers and complain to your BIOS " 1176 1177 "vendor.\n"); 1177 1178 #endif 1178 - goto err_out; 1179 + kfree(data); 1180 + return -ENODEV; 1179 1181 } 1180 1182 if (pol->cpu != 0) { 1181 1183 printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for " 1182 1184 "CPU other than CPU0. Complain to your BIOS " 1183 1185 "vendor.\n"); 1184 - goto err_out; 1186 + kfree(data); 1187 + return -ENODEV; 1185 1188 } 1186 1189 rc = find_psb_table(data); 1187 1190 if (rc) { 1188 - goto err_out; 1191 + kfree(data); 1192 + return -ENODEV; 1189 1193 } 1190 1194 /* Take a crude guess here. 1191 1195 * That guess was in microseconds, so multiply with 1000 */
+4 -3
arch/x86/kernel/cpu/mcheck/mce_64.c
··· 295 295 * If we know that the error was in user space, send a 296 296 * SIGBUS. Otherwise, panic if tolerance is low. 297 297 * 298 - * do_exit() takes an awful lot of locks and has a slight 298 + * force_sig() takes an awful lot of locks and has a slight 299 299 * risk of deadlocking. 300 300 */ 301 301 if (user_space) { 302 - do_exit(SIGBUS); 302 + force_sig(SIGBUS, current); 303 303 } else if (panic_on_oops || tolerant < 2) { 304 304 mce_panic("Uncorrected machine check", 305 305 &panicm, mcestart); ··· 490 490 491 491 } 492 492 493 - static void __cpuinit mce_cpu_features(struct cpuinfo_x86 *c) 493 + static void mce_cpu_features(struct cpuinfo_x86 *c) 494 494 { 495 495 switch (c->x86_vendor) { 496 496 case X86_VENDOR_INTEL: ··· 734 734 static int mce_resume(struct sys_device *dev) 735 735 { 736 736 mce_init(NULL); 737 + mce_cpu_features(&current_cpu_data); 737 738 return 0; 738 739 } 739 740
+1 -1
arch/x86/kernel/cpu/mcheck/mce_amd_64.c
··· 121 121 } 122 122 123 123 /* cpu init entry point, called from mce.c with preempt off */ 124 - void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c) 124 + void mce_amd_feature_init(struct cpuinfo_x86 *c) 125 125 { 126 126 unsigned int bank, block; 127 127 unsigned int cpu = smp_processor_id();
+2 -2
arch/x86/kernel/cpu/mcheck/mce_intel_64.c
··· 30 30 irq_exit(); 31 31 } 32 32 33 - static void __cpuinit intel_init_thermal(struct cpuinfo_x86 *c) 33 + static void intel_init_thermal(struct cpuinfo_x86 *c) 34 34 { 35 35 u32 l, h; 36 36 int tm2 = 0; ··· 84 84 return; 85 85 } 86 86 87 - void __cpuinit mce_intel_feature_init(struct cpuinfo_x86 *c) 87 + void mce_intel_feature_init(struct cpuinfo_x86 *c) 88 88 { 89 89 intel_init_thermal(c); 90 90 }
+2
arch/x86/kernel/hpet.c
··· 269 269 now = hpet_readl(HPET_COUNTER); 270 270 cmp = now + (unsigned long) delta; 271 271 cfg = hpet_readl(HPET_Tn_CFG(timer)); 272 + /* Make sure we use edge triggered interrupts */ 273 + cfg &= ~HPET_TN_LEVEL; 272 274 cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC | 273 275 HPET_TN_SETVAL | HPET_TN_32BIT; 274 276 hpet_writel(cfg, HPET_Tn_CFG(timer));
+1 -1
arch/x86/kernel/olpc.c
··· 203 203 static void __init platform_detect(void) 204 204 { 205 205 /* stopgap until OFW support is added to the kernel */ 206 - olpc_platform_info.boardrev = 0xc2; 206 + olpc_platform_info.boardrev = olpc_board(0xc2); 207 207 } 208 208 #endif 209 209
+26
arch/x86/kernel/paravirt.c
··· 268 268 return __get_cpu_var(paravirt_lazy_mode); 269 269 } 270 270 271 + void arch_flush_lazy_mmu_mode(void) 272 + { 273 + preempt_disable(); 274 + 275 + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) { 276 + WARN_ON(preempt_count() == 1); 277 + arch_leave_lazy_mmu_mode(); 278 + arch_enter_lazy_mmu_mode(); 279 + } 280 + 281 + preempt_enable(); 282 + } 283 + 284 + void arch_flush_lazy_cpu_mode(void) 285 + { 286 + preempt_disable(); 287 + 288 + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) { 289 + WARN_ON(preempt_count() == 1); 290 + arch_leave_lazy_cpu_mode(); 291 + arch_enter_lazy_cpu_mode(); 292 + } 293 + 294 + preempt_enable(); 295 + } 296 + 271 297 struct pv_info pv_info = { 272 298 .name = "bare hardware", 273 299 .paravirt_enabled = 0,
-3
arch/x86/kernel/process_32.c
··· 104 104 check_pgt_cache(); 105 105 rmb(); 106 106 107 - if (rcu_pending(cpu)) 108 - rcu_check_callbacks(cpu, 0); 109 - 110 107 if (cpu_is_offline(cpu)) 111 108 play_dead(); 112 109
+10 -6
arch/x86/kernel/ptrace.c
··· 810 810 811 811 static void ptrace_bts_detach(struct task_struct *child) 812 812 { 813 - if (unlikely(child->bts)) { 814 - ds_release_bts(child->bts); 815 - child->bts = NULL; 816 - 817 - ptrace_bts_free_buffer(child); 818 - } 813 + /* 814 + * Ptrace_detach() races with ptrace_untrace() in case 815 + * the child dies and is reaped by another thread. 816 + * 817 + * We only do the memory accounting at this point and 818 + * leave the buffer deallocation and the bts tracer 819 + * release to ptrace_bts_untrace() which will be called 820 + * later on with tasklist_lock held. 821 + */ 822 + release_locked_buffer(child->bts_buffer, child->bts_size); 819 823 } 820 824 #else 821 825 static inline void ptrace_bts_fork(struct task_struct *tsk) {}
+9 -1
arch/x86/kernel/traps.c
··· 99 99 local_irq_enable(); 100 100 } 101 101 102 + static inline void conditional_cli(struct pt_regs *regs) 103 + { 104 + if (regs->flags & X86_EFLAGS_IF) 105 + local_irq_disable(); 106 + } 107 + 102 108 static inline void preempt_conditional_cli(struct pt_regs *regs) 103 109 { 104 110 if (regs->flags & X86_EFLAGS_IF) ··· 632 626 633 627 #ifdef CONFIG_X86_32 634 628 debug_vm86: 629 + /* reenable preemption: handle_vm86_trap() might sleep */ 630 + dec_preempt_count(); 635 631 handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1); 636 - preempt_conditional_cli(regs); 632 + conditional_cli(regs); 637 633 return; 638 634 #endif 639 635
+4 -1
arch/x86/kernel/vmiclock_32.c
··· 283 283 #endif 284 284 285 285 /** vmi clocksource */ 286 + static struct clocksource clocksource_vmi; 286 287 287 288 static cycle_t read_real_cycles(void) 288 289 { 289 - return vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL); 290 + cycle_t ret = (cycle_t)vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL); 291 + return ret >= clocksource_vmi.cycle_last ? 292 + ret : clocksource_vmi.cycle_last; 290 293 } 291 294 292 295 static struct clocksource clocksource_vmi = {
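The comparison against cycle_last keeps the clocksource monotonic when the paravirt cycle counter appears to jump backwards (e.g. across VCPU migrations): rather than reporting an earlier time, the last published value is returned again. The pattern, stripped of the VMI specifics (function name here is illustrative):

        typedef unsigned long long cycle_t;     /* as in the clocksource core */

        /* never hand out a timestamp behind the last one published */
        static cycle_t read_clamped(cycle_t hw_now, cycle_t cycle_last)
        {
                return hw_now >= cycle_last ? hw_now : cycle_last;
        }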
+1 -1
arch/x86/kvm/i8254.c
··· 207 207 hrtimer_add_expires_ns(&pt->timer, pt->period); 208 208 pt->scheduled = hrtimer_get_expires_ns(&pt->timer); 209 209 if (pt->period) 210 - ps->channels[0].count_load_time = hrtimer_get_expires(&pt->timer); 210 + ps->channels[0].count_load_time = ktime_get(); 211 211 212 212 return (pt->period == 0 ? 0 : 1); 213 213 }
-7
arch/x86/kvm/irq.c
··· 87 87 } 88 88 EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs); 89 89 90 - void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec) 91 - { 92 - kvm_apic_timer_intr_post(vcpu, vec); 93 - /* TODO: PIT, RTC etc. */ 94 - } 95 - EXPORT_SYMBOL_GPL(kvm_timer_intr_post); 96 - 97 90 void __kvm_migrate_timers(struct kvm_vcpu *vcpu) 98 91 { 99 92 __kvm_migrate_apic_timer(vcpu);
-1
arch/x86/kvm/irq.h
··· 89 89 90 90 void kvm_pic_reset(struct kvm_kpic_state *s); 91 91 92 - void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec); 93 92 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu); 94 93 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu); 95 94 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
+14 -50
arch/x86/kvm/lapic.c
··· 35 35 #include "kvm_cache_regs.h" 36 36 #include "irq.h" 37 37 38 + #ifndef CONFIG_X86_64 39 + #define mod_64(x, y) ((x) - (y) * div64_u64(x, y)) 40 + #else 41 + #define mod_64(x, y) ((x) % (y)) 42 + #endif 43 + 38 44 #define PRId64 "d" 39 45 #define PRIx64 "llx" 40 46 #define PRIu64 "u" ··· 517 511 518 512 static u32 apic_get_tmcct(struct kvm_lapic *apic) 519 513 { 520 - u64 counter_passed; 521 - ktime_t passed, now; 514 + ktime_t remaining; 515 + s64 ns; 522 516 u32 tmcct; 523 517 524 518 ASSERT(apic != NULL); 525 519 526 - now = apic->timer.dev.base->get_time(); 527 - tmcct = apic_get_reg(apic, APIC_TMICT); 528 - 529 520 /* if initial count is 0, current count should also be 0 */ 530 - if (tmcct == 0) 521 + if (apic_get_reg(apic, APIC_TMICT) == 0) 531 522 return 0; 532 523 533 - if (unlikely(ktime_to_ns(now) <= 534 - ktime_to_ns(apic->timer.last_update))) { 535 - /* Wrap around */ 536 - passed = ktime_add(( { 537 - (ktime_t) { 538 - .tv64 = KTIME_MAX - 539 - (apic->timer.last_update).tv64}; } 540 - ), now); 541 - apic_debug("time elapsed\n"); 542 - } else 543 - passed = ktime_sub(now, apic->timer.last_update); 524 + remaining = hrtimer_expires_remaining(&apic->timer.dev); 525 + if (ktime_to_ns(remaining) < 0) 526 + remaining = ktime_set(0, 0); 544 527 545 - counter_passed = div64_u64(ktime_to_ns(passed), 546 - (APIC_BUS_CYCLE_NS * apic->timer.divide_count)); 547 - 548 - if (counter_passed > tmcct) { 549 - if (unlikely(!apic_lvtt_period(apic))) { 550 - /* one-shot timers stick at 0 until reset */ 551 - tmcct = 0; 552 - } else { 553 - /* 554 - * periodic timers reset to APIC_TMICT when they 555 - * hit 0. The while loop simulates this happening N 556 - * times. (counter_passed %= tmcct) would also work, 557 - * but might be slower or not work on 32-bit?? 558 - */ 559 - while (counter_passed > tmcct) 560 - counter_passed -= tmcct; 561 - tmcct -= counter_passed; 562 - } 563 - } else { 564 - tmcct -= counter_passed; 565 - } 528 + ns = mod_64(ktime_to_ns(remaining), apic->timer.period); 529 + tmcct = div64_u64(ns, (APIC_BUS_CYCLE_NS * apic->timer.divide_count)); 566 530 567 531 return tmcct; 568 532 } ··· 628 652 static void start_apic_timer(struct kvm_lapic *apic) 629 653 { 630 654 ktime_t now = apic->timer.dev.base->get_time(); 631 - 632 - apic->timer.last_update = now; 633 655 634 656 apic->timer.period = apic_get_reg(apic, APIC_TMICT) * 635 657 APIC_BUS_CYCLE_NS * apic->timer.divide_count; ··· 1082 1108 if (kvm_apic_local_deliver(apic, APIC_LVTT)) 1083 1109 atomic_dec(&apic->timer.pending); 1084 1110 } 1085 - } 1086 - 1087 - void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec) 1088 - { 1089 - struct kvm_lapic *apic = vcpu->arch.apic; 1090 - 1091 - if (apic && apic_lvt_vector(apic, APIC_LVTT) == vec) 1092 - apic->timer.last_update = ktime_add_ns( 1093 - apic->timer.last_update, 1094 - apic->timer.period); 1095 1111 } 1096 1112 1097 1113 int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
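The mod_64() shim at the top exists because 32-bit x86 cannot use the 64-bit % operator directly (the kernel does not link libgcc's __umoddi3); expressing the remainder through div64_u64(), the kernel's 64-by-64 division helper, builds on both word sizes. What the 32-bit branch expands to, roughly:

        /* x % y as x - y * floor(x / y), via one division helper call */
        static u64 mod_64_sketch(u64 x, u64 y)
        {
                return x - y * div64_u64(x, y);
        }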
-2
arch/x86/kvm/lapic.h
··· 12 12 atomic_t pending; 13 13 s64 period; /* unit: ns */ 14 14 u32 divide_count; 15 - ktime_t last_update; 16 15 struct hrtimer dev; 17 16 } timer; 18 17 struct kvm_vcpu *vcpu; ··· 41 42 void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu); 42 43 int kvm_lapic_enabled(struct kvm_vcpu *vcpu); 43 44 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu); 44 - void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec); 45 45 46 46 void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr); 47 47 void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
+7 -2
arch/x86/kvm/mmu.c
··· 1698 1698 if (largepage) 1699 1699 spte |= PT_PAGE_SIZE_MASK; 1700 1700 if (mt_mask) { 1701 - mt_mask = get_memory_type(vcpu, gfn) << 1702 - kvm_x86_ops->get_mt_mask_shift(); 1701 + if (!kvm_is_mmio_pfn(pfn)) { 1702 + mt_mask = get_memory_type(vcpu, gfn) << 1703 + kvm_x86_ops->get_mt_mask_shift(); 1704 + mt_mask |= VMX_EPT_IGMT_BIT; 1705 + } else 1706 + mt_mask = MTRR_TYPE_UNCACHABLE << 1707 + kvm_x86_ops->get_mt_mask_shift(); 1703 1708 spte |= mt_mask; 1704 1709 } 1705 1710
-1
arch/x86/kvm/svm.c
··· 1600 1600 /* Okay, we can deliver the interrupt: grab it and update PIC state. */ 1601 1601 intr_vector = kvm_cpu_get_interrupt(vcpu); 1602 1602 svm_inject_irq(svm, intr_vector); 1603 - kvm_timer_intr_post(vcpu, intr_vector); 1604 1603 out: 1605 1604 update_cr8_intercept(vcpu); 1606 1605 }
+2 -3
arch/x86/kvm/vmx.c
··· 903 903 data = vmcs_readl(GUEST_SYSENTER_ESP); 904 904 break; 905 905 default: 906 + vmx_load_host_state(to_vmx(vcpu)); 906 907 msr = find_msr_entry(to_vmx(vcpu), msr_index); 907 908 if (msr) { 908 909 data = msr->data; ··· 3286 3285 } 3287 3286 if (vcpu->arch.interrupt.pending) { 3288 3287 vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr); 3289 - kvm_timer_intr_post(vcpu, vcpu->arch.interrupt.nr); 3290 3288 if (kvm_cpu_has_interrupt(vcpu)) 3291 3289 enable_irq_window(vcpu); 3292 3290 } ··· 3687 3687 if (vm_need_ept()) { 3688 3688 bypass_guest_pf = 0; 3689 3689 kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK | 3690 - VMX_EPT_WRITABLE_MASK | 3691 - VMX_EPT_IGMT_BIT); 3690 + VMX_EPT_WRITABLE_MASK); 3692 3691 kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull, 3693 3692 VMX_EPT_EXECUTABLE_MASK, 3694 3693 VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);
+8 -2
arch/x86/kvm/x86.c
··· 967 967 case KVM_CAP_MMU_SHADOW_CACHE_CONTROL: 968 968 case KVM_CAP_SET_TSS_ADDR: 969 969 case KVM_CAP_EXT_CPUID: 970 - case KVM_CAP_CLOCKSOURCE: 971 970 case KVM_CAP_PIT: 972 971 case KVM_CAP_NOP_IO_DELAY: 973 972 case KVM_CAP_MP_STATE: ··· 990 991 break; 991 992 case KVM_CAP_IOMMU: 992 993 r = iommu_found(); 994 + break; 995 + case KVM_CAP_CLOCKSOURCE: 996 + r = boot_cpu_has(X86_FEATURE_CONSTANT_TSC); 993 997 break; 994 998 default: 995 999 r = 0; ··· 4129 4127 4130 4128 } 4131 4129 4132 - void kvm_arch_destroy_vm(struct kvm *kvm) 4130 + void kvm_arch_sync_events(struct kvm *kvm) 4133 4131 { 4134 4132 kvm_free_all_assigned_devices(kvm); 4133 + } 4134 + 4135 + void kvm_arch_destroy_vm(struct kvm *kvm) 4136 + { 4135 4137 kvm_iommu_unmap_guest(kvm); 4136 4138 kvm_free_pit(kvm); 4137 4139 kfree(kvm->arch.vpic);
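The same empty kvm_arch_sync_events() stub appears in lock-step in the ia64, powerpc and s390 hunks above; only x86 gives it a body, moving kvm_free_all_assigned_devices() out of the destroy path. Presumably the generic teardown now invokes the hook before dismantling the VM, along these lines (the actual call site is in common KVM code, not shown in this diff):

        static void kvm_destroy_vm(struct kvm *kvm)
        {
                kvm_arch_sync_events(kvm);      /* quiesce arch-side work first */
                /* ... generic cleanup ... */
                kvm_arch_destroy_vm(kvm);       /* then free arch state */
        }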
-19
arch/x86/mm/ioremap.c
··· 134 134 return 0; 135 135 } 136 136 137 - int pagerange_is_ram(unsigned long start, unsigned long end) 138 - { 139 - int ram_page = 0, not_rampage = 0; 140 - unsigned long page_nr; 141 - 142 - for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT); 143 - ++page_nr) { 144 - if (page_is_ram(page_nr)) 145 - ram_page = 1; 146 - else 147 - not_rampage = 1; 148 - 149 - if (ram_page == not_rampage) 150 - return -1; 151 - } 152 - 153 - return ram_page; 154 - } 155 - 156 137 /* 157 138 * Fix up the linear direct mapping of the kernel to avoid cache attribute 158 139 * conflicts.
+1 -1
arch/x86/mm/numa_64.c
··· 145 145 return shift; 146 146 } 147 147 148 - int early_pfn_to_nid(unsigned long pfn) 148 + int __meminit __early_pfn_to_nid(unsigned long pfn) 149 149 { 150 150 return phys_to_nid(pfn << PAGE_SHIFT); 151 151 }
+19 -11
arch/x86/mm/pageattr.c
··· 508 508 #endif 509 509 510 510 /* 511 - * Install the new, split up pagetable. Important details here: 511 + * Install the new, split up pagetable. 512 512 * 513 - * On Intel the NX bit of all levels must be cleared to make a 514 - * page executable. See section 4.13.2 of Intel 64 and IA-32 515 - * Architectures Software Developer's Manual). 516 - * 517 - * Mark the entry present. The current mapping might be 518 - * set to not present, which we preserved above. 513 + * We use the standard kernel pagetable protections for the new 514 + * pagetable protections, the actual ptes set above control the 515 + * primary protection behavior: 519 516 */ 520 - ref_prot = pte_pgprot(pte_mkexec(pte_clrhuge(*kpte))); 521 - pgprot_val(ref_prot) |= _PAGE_PRESENT; 522 - __set_pmd_pte(kpte, address, mk_pte(base, ref_prot)); 517 + __set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE))); 523 518 base = NULL; 524 519 525 520 out_unlock: ··· 570 575 address = cpa->vaddr[cpa->curpage]; 571 576 else 572 577 address = *cpa->vaddr; 573 - 574 578 repeat: 575 579 kpte = lookup_address(address, &level); 576 580 if (!kpte) ··· 806 812 807 813 vm_unmap_aliases(); 808 814 815 + /* 816 + * If we're called with lazy mmu updates enabled, the 817 + * in-memory pte state may be stale. Flush pending updates to 818 + * bring them up to date. 819 + */ 820 + arch_flush_lazy_mmu_mode(); 821 + 809 822 cpa.vaddr = addr; 810 823 cpa.numpages = numpages; 811 824 cpa.mask_set = mask_set; ··· 854 853 cpa_flush_range(*addr, numpages, cache); 855 854 } else 856 855 cpa_flush_all(cache); 856 + 857 + /* 858 + * If we've been called with lazy mmu updates enabled, then 859 + * make sure that everything gets flushed out before we 860 + * return. 861 + */ 862 + arch_flush_lazy_mmu_mode(); 857 863 858 864 out: 859 865 return ret;
+45 -38
arch/x86/mm/pat.c
··· 211 211 static struct memtype *cached_entry; 212 212 static u64 cached_start; 213 213 214 + static int pat_pagerange_is_ram(unsigned long start, unsigned long end) 215 + { 216 + int ram_page = 0, not_rampage = 0; 217 + unsigned long page_nr; 218 + 219 + for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT); 220 + ++page_nr) { 221 + /* 222 + * For legacy reasons, physical address range in the legacy ISA 223 + * region is tracked as non-RAM. This will allow users of 224 + * /dev/mem to map portions of legacy ISA region, even when 225 + * some of those portions are listed(or not even listed) with 226 + * different e820 types(RAM/reserved/..) 227 + */ 228 + if (page_nr >= (ISA_END_ADDRESS >> PAGE_SHIFT) && 229 + page_is_ram(page_nr)) 230 + ram_page = 1; 231 + else 232 + not_rampage = 1; 233 + 234 + if (ram_page == not_rampage) 235 + return -1; 236 + } 237 + 238 + return ram_page; 239 + } 240 + 214 241 /* 215 242 * For RAM pages, mark the pages as non WB memory type using 216 243 * PageNonWB (PG_arch_1). We allow only one set_memory_uc() or ··· 363 336 if (new_type) 364 337 *new_type = actual_type; 365 338 366 - /* 367 - * For legacy reasons, some parts of the physical address range in the 368 - * legacy 1MB region is treated as non-RAM (even when listed as RAM in 369 - * the e820 tables). So we will track the memory attributes of this 370 - * legacy 1MB region using the linear memtype_list always. 371 - */ 372 - if (end >= ISA_END_ADDRESS) { 373 - is_range_ram = pagerange_is_ram(start, end); 374 - if (is_range_ram == 1) 375 - return reserve_ram_pages_type(start, end, req_type, 376 - new_type); 377 - else if (is_range_ram < 0) 378 - return -EINVAL; 379 - } 339 + is_range_ram = pat_pagerange_is_ram(start, end); 340 + if (is_range_ram == 1) 341 + return reserve_ram_pages_type(start, end, req_type, 342 + new_type); 343 + else if (is_range_ram < 0) 344 + return -EINVAL; 380 345 381 346 new = kmalloc(sizeof(struct memtype), GFP_KERNEL); 382 347 if (!new) ··· 465 446 if (is_ISA_range(start, end - 1)) 466 447 return 0; 467 448 468 - /* 469 - * For legacy reasons, some parts of the physical address range in the 470 - * legacy 1MB region is treated as non-RAM (even when listed as RAM in 471 - * the e820 tables). So we will track the memory attributes of this 472 - * legacy 1MB region using the linear memtype_list always. 473 - */ 474 - if (end >= ISA_END_ADDRESS) { 475 - is_range_ram = pagerange_is_ram(start, end); 476 - if (is_range_ram == 1) 477 - return free_ram_pages_type(start, end); 478 - else if (is_range_ram < 0) 479 - return -EINVAL; 480 - } 449 + is_range_ram = pat_pagerange_is_ram(start, end); 450 + if (is_range_ram == 1) 451 + return free_ram_pages_type(start, end); 452 + else if (is_range_ram < 0) 453 + return -EINVAL; 481 454 482 455 spin_lock(&memtype_lock); 483 456 list_for_each_entry(entry, &memtype_list, nd) { ··· 637 626 unsigned long flags; 638 627 unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK); 639 628 640 - is_ram = pagerange_is_ram(paddr, paddr + size); 629 + is_ram = pat_pagerange_is_ram(paddr, paddr + size); 641 630 642 - if (is_ram != 0) { 643 - /* 644 - * For mapping RAM pages, drivers need to call 645 - * set_memory_[uc|wc|wb] directly, for reserve and free, before 646 - * setting up the PTE. 647 - */ 648 - WARN_ON_ONCE(1); 649 - return 0; 650 - } 631 + /* 632 + * reserve_pfn_range() doesn't support RAM pages. 
633 + */ 634 + if (is_ram != 0) 635 + return -EINVAL; 651 636 652 637 ret = reserve_memtype(paddr, paddr + size, want_flags, &flags); 653 638 if (ret) ··· 700 693 { 701 694 int is_ram; 702 695 703 - is_ram = pagerange_is_ram(paddr, paddr + size); 696 + is_ram = pat_pagerange_is_ram(paddr, paddr + size); 704 697 if (is_ram == 0) 705 698 free_memtype(paddr, paddr + size); 706 699 }
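The new helper's return convention is tri-state: 1 when every page in the range is RAM, 0 when none is, and -1 when the range is mixed and must be rejected. A self-contained userspace sketch of the same scan (page_is_ram_stub() is a made-up stand-in for the e820-backed page_is_ram()):

    #include <stdio.h>

    /* Hypothetical stand-in: pfns below 256 (legacy ISA region) are
     * treated as non-RAM, everything above as RAM. */
    static int page_is_ram_stub(unsigned long pfn)
    {
        return pfn >= 256;
    }

    /* Same contract as pat_pagerange_is_ram() above:
     * 1 = all RAM, 0 = no RAM, -1 = mixed. */
    static int range_is_ram(unsigned long start_pfn, unsigned long end_pfn)
    {
        int ram = 0, not_ram = 0;
        unsigned long pfn;

        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
            if (page_is_ram_stub(pfn))
                ram = 1;
            else
                not_ram = 1;
            if (ram && not_ram)
                return -1;   /* mixed range: caller returns -EINVAL */
        }
        return ram;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               range_is_ram(0, 16),      /* 0: entirely below ISA end */
               range_is_ram(512, 1024),  /* 1: entirely RAM */
               range_is_ram(128, 512));  /* -1: straddles the boundary */
        return 0;
    }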
+8 -1
block/blk-timeout.c
··· 209 209 { 210 210 unsigned long flags; 211 211 struct request *rq, *tmp; 212 + LIST_HEAD(list); 212 213 213 214 spin_lock_irqsave(q->queue_lock, flags); 214 215 215 216 elv_abort_queue(q); 216 217 217 - list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list) 218 + /* 219 + * Splice entries to local list, to avoid deadlocking if entries 220 + * get readded to the timeout list by error handling 221 + */ 222 + list_splice_init(&q->timeout_list, &list); 223 + 224 + list_for_each_entry_safe(rq, tmp, &list, timeout_list) 218 225 blk_abort_request(rq); 219 226 220 227 spin_unlock_irqrestore(q->queue_lock, flags);
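The splice is the interesting part: iterating q->timeout_list directly is unsafe because blk_abort_request() can re-add entries to that same list from error handling, invalidating the walk. The general idiom, sketched with hypothetical names (shared_list, struct item, handle()):

    struct item { struct list_head node; /* ... */ };   /* hypothetical */

    LIST_HEAD(batch);
    struct item *it, *tmp;

    spin_lock_irqsave(lock, flags);
    list_splice_init(&shared_list, &batch);   /* detach all entries at once */
    list_for_each_entry_safe(it, tmp, &batch, node)
        handle(it);                           /* may safely re-add to shared_list */
    spin_unlock_irqrestore(lock, flags);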
+1 -1
block/blktrace.c
··· 142 142 143 143 what |= ddir_act[rw & WRITE]; 144 144 what |= MASK_TC_BIT(rw, BARRIER); 145 - what |= MASK_TC_BIT(rw, SYNC); 145 + what |= MASK_TC_BIT(rw, SYNCIO); 146 146 what |= MASK_TC_BIT(rw, AHEAD); 147 147 what |= MASK_TC_BIT(rw, META); 148 148 what |= MASK_TC_BIT(rw, DISCARD);
+10 -7
block/bsg.c
··· 244 244 * map sg_io_v4 to a request. 245 245 */ 246 246 static struct request * 247 - bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm) 247 + bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm, 248 + u8 *sense) 248 249 { 249 250 struct request_queue *q = bd->queue; 250 251 struct request *rq, *next_rq = NULL; ··· 307 306 if (ret) 308 307 goto out; 309 308 } 309 + 310 + rq->sense = sense; 311 + rq->sense_len = 0; 312 + 310 313 return rq; 311 314 out: 312 315 if (rq->cmd != rq->__cmd) ··· 353 348 static void bsg_add_command(struct bsg_device *bd, struct request_queue *q, 354 349 struct bsg_command *bc, struct request *rq) 355 350 { 356 - rq->sense = bc->sense; 357 - rq->sense_len = 0; 358 - 359 351 /* 360 352 * add bc command to busy queue and submit rq for io 361 353 */ ··· 421 419 { 422 420 int ret = 0; 423 421 424 - dprintk("rq %p bio %p %u\n", rq, bio, rq->errors); 422 + dprintk("rq %p bio %p 0x%x\n", rq, bio, rq->errors); 425 423 /* 426 424 * fill in all the output members 427 425 */ ··· 637 635 /* 638 636 * get a request, fill in the blanks, and add to request queue 639 637 */ 640 - rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm); 638 + rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm, bc->sense); 641 639 if (IS_ERR(rq)) { 642 640 ret = PTR_ERR(rq); 643 641 rq = NULL; ··· 924 922 struct request *rq; 925 923 struct bio *bio, *bidi_bio = NULL; 926 924 struct sg_io_v4 hdr; 925 + u8 sense[SCSI_SENSE_BUFFERSIZE]; 927 926 928 927 if (copy_from_user(&hdr, uarg, sizeof(hdr))) 929 928 return -EFAULT; 930 929 931 - rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE); 930 + rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE, sense); 932 931 if (IS_ERR(rq)) 933 932 return PTR_ERR(rq); 934 933
+8
block/genhd.c
··· 1087 1087 if (strcmp(dev_name(dev), name)) 1088 1088 continue; 1089 1089 1090 + if (partno < disk->minors) { 1091 + /* We need to return the right devno, even 1092 + * if the partition doesn't exist yet. 1093 + */ 1094 + devt = MKDEV(MAJOR(dev->devt), 1095 + MINOR(dev->devt) + partno); 1096 + break; 1097 + } 1090 1098 part = disk_get_part(disk, partno); 1091 1099 if (part) { 1092 1100 devt = part_devt(part);
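This works because a disk's partitions occupy consecutive minors starting at the whole-disk node, so the devno can be computed before the partition is actually scanned. A worked userspace example with illustrative numbers (disk at 8:16, minors == 16):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysmacros.h>

    int main(void)
    {
        dev_t disk = makedev(8, 16);    /* illustrative whole-disk devno */
        unsigned int partno = 3;

        /* partition 3 answers at 8:19 even before it exists on disk */
        printf("%u:%u\n", major(disk), minor(disk) + partno);
        return 0;
    }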
+7 -1
crypto/lrw.c
··· 45 45 46 46 static inline void setbit128_bbe(void *b, int bit) 47 47 { 48 - __set_bit(bit ^ 0x78, b); 48 + __set_bit(bit ^ (0x80 - 49 + #ifdef __BIG_ENDIAN 50 + BITS_PER_LONG 51 + #else 52 + BITS_PER_BYTE 53 + #endif 54 + ), b); 49 55 } 50 56 51 57 static int setkey(struct crypto_tfm *parent, const u8 *key,
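The arithmetic is worth unpacking: setbit128_bbe() sets bit N of a 128-bit value stored big-endian, but __set_bit() indexes native words. On a little-endian host 0x80 - BITS_PER_BYTE is 0x78 for any word size, matching the old constant; the old code was wrong only on big-endian, where the word width matters. For example, bit 0 (the LSB of the big-endian value) lives in byte 15: on a 64-bit LE host, 0 ^ 0x78 == 120 selects word 1, bit 56, which is exactly byte 15, bit 0. A runnable check of the index mapping:

    #include <stdio.h>
    #include <limits.h>

    /* Mirrors the fix above: translate a bit index in a big-endian
     * 128-bit value into a native word-oriented set_bit index. */
    static int bbe_index(int bit)
    {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        return bit ^ (0x80 - (int)(sizeof(unsigned long) * CHAR_BIT));
    #else
        return bit ^ (0x80 - CHAR_BIT);
    #endif
    }

    int main(void)
    {
        /* prints 120 on a 64-bit little-endian host */
        printf("bit 0 -> set_bit index %d\n", bbe_index(0));
        return 0;
    }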
-7
drivers/acpi/Kconfig
··· 254 254 help you correlate PCI bus addresses with the physical geography 255 255 of your slots. If you are unsure, say N. 256 256 257 - config ACPI_SYSTEM 258 - bool 259 - default y 260 - help 261 - This driver will enable your system to shut down using ACPI, and 262 - dump your ACPI DSDT table using /proc/acpi/dsdt. 263 - 264 257 config X86_PM_TIMER 265 258 bool "Power Management Timer Support" if EMBEDDED 266 259 depends on X86
+1 -1
drivers/acpi/Makefile
··· 52 52 obj-$(CONFIG_ACPI_CONTAINER) += container.o 53 53 obj-$(CONFIG_ACPI_THERMAL) += thermal.o 54 54 obj-y += power.o 55 - obj-$(CONFIG_ACPI_SYSTEM) += system.o event.o 55 + obj-y += system.o event.o 56 56 obj-$(CONFIG_ACPI_DEBUG) += debug.o 57 57 obj-$(CONFIG_ACPI_NUMA) += numa.o 58 58 obj-$(CONFIG_ACPI_HOTPLUG_MEMORY) += acpi_memhotplug.o
+24 -1
drivers/acpi/battery.c
··· 138 138 139 139 static int acpi_battery_get_state(struct acpi_battery *battery); 140 140 141 + static int acpi_battery_is_charged(struct acpi_battery *battery) 142 + { 143 + /* either charging or discharging */ 144 + if (battery->state != 0) 145 + return 0; 146 + 147 + /* battery not reporting charge */ 148 + if (battery->capacity_now == ACPI_BATTERY_VALUE_UNKNOWN || 149 + battery->capacity_now == 0) 150 + return 0; 151 + 152 + /* good batteries update full_charge as the batteries degrade */ 153 + if (battery->full_charge_capacity == battery->capacity_now) 154 + return 1; 155 + 156 + /* fallback to using design values for broken batteries */ 157 + if (battery->design_capacity == battery->capacity_now) 158 + return 1; 159 + 160 + /* we don't do any sort of metric based on percentages */ 161 + return 0; 162 + } 163 + 141 164 static int acpi_battery_get_property(struct power_supply *psy, 142 165 enum power_supply_property psp, 143 166 union power_supply_propval *val) ··· 178 155 val->intval = POWER_SUPPLY_STATUS_DISCHARGING; 179 156 else if (battery->state & 0x02) 180 157 val->intval = POWER_SUPPLY_STATUS_CHARGING; 181 - else if (battery->state == 0) 158 + else if (acpi_battery_is_charged(battery)) 182 159 val->intval = POWER_SUPPLY_STATUS_FULL; 183 160 else 184 161 val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+21 -7
drivers/ata/libata-sff.c
··· 773 773 else 774 774 iowrite32_rep(data_addr, buf, words); 775 775 776 + /* Transfer trailing bytes, if any */ 776 777 if (unlikely(slop)) { 777 - __le32 pad; 778 + unsigned char pad[4]; 779 + 780 + /* Point buf to the tail of buffer */ 781 + buf += buflen - slop; 782 + 783 + /* 784 + * Use io*_rep() accessors here as well to avoid pointlessly 785 + * swapping bytes to and fro on the big endian machines... 786 + */ 778 787 if (rw == READ) { 779 - pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr)); 780 - memcpy(buf + buflen - slop, &pad, slop); 788 + if (slop < 3) 789 + ioread16_rep(data_addr, pad, 1); 790 + else 791 + ioread32_rep(data_addr, pad, 1); 792 + memcpy(buf, pad, slop); 781 793 } else { 782 - memcpy(&pad, buf + buflen - slop, slop); 783 - iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr); 794 + memcpy(pad, buf, slop); 795 + if (slop < 3) 796 + iowrite16_rep(data_addr, pad, 1); 797 + else 798 + iowrite32_rep(data_addr, pad, 1); 784 799 } 785 - words++; 786 800 } 787 - return words << 2; 801 + return (buflen + 1) & ~1; 788 802 } 789 803 EXPORT_SYMBOL_GPL(ata_sff_data_xfer32); 790 804
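A worked size makes the tail handling concrete. Assuming, as in the surrounding function, words = buflen >> 2 and slop = buflen & 3:

    #include <stdio.h>

    int main(void)
    {
        unsigned int buflen = 7;
        unsigned int words  = buflen >> 2;   /* 1 full 32-bit word */
        unsigned int slop   = buflen & 3;    /* 3 trailing bytes   */

        /* slop of 1 or 2 uses a single 16-bit transfer; slop of 3
         * uses one 32-bit transfer through the 4-byte pad buffer. */
        printf("words=%u slop=%u returned=%u\n",
               words, slop, (buflen + 1) & ~1u);   /* rounds 7 up to 8 */
        return 0;
    }

So a 7-byte transfer does one 32-bit rep transfer, then one 32-bit transfer through pad[] with 3 bytes kept, and reports an even 8 bytes consumed.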
+3 -1
drivers/ata/pata_via.c
··· 110 110 { "vt8237s", PCI_DEVICE_ID_VIA_8237S, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 111 111 { "vt8251", PCI_DEVICE_ID_VIA_8251, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 112 112 { "cx700", PCI_DEVICE_ID_VIA_CX700, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_SATA_PATA }, 113 - { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES}, 113 + { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES }, 114 + { "vt6415", PCI_DEVICE_ID_VIA_6415, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES }, 114 115 { "vt8237a", PCI_DEVICE_ID_VIA_8237A, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 115 116 { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 116 117 { "vt8235", PCI_DEVICE_ID_VIA_8235, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, ··· 594 593 #endif 595 594 596 595 static const struct pci_device_id via[] = { 596 + { PCI_VDEVICE(VIA, 0x0415), }, 597 597 { PCI_VDEVICE(VIA, 0x0571), }, 598 598 { PCI_VDEVICE(VIA, 0x0581), }, 599 599 { PCI_VDEVICE(VIA, 0x1571), },
+8 -6
drivers/ata/sata_nv.c
··· 421 421 .hardreset = ATA_OP_NULL, 422 422 }; 423 423 424 - /* OSDL bz3352 reports that nf2/3 controllers can't determine device 425 - * signature reliably. Also, the following thread reports detection 426 - * failure on cold boot with the standard debouncing timing. 424 + /* nf2 is ripe with hardreset related problems. 425 + * 426 + * kernel bz#3352 reports nf2/3 controllers can't determine device 427 + * signature reliably. The following thread reports detection failure 428 + * on cold boot with the standard debouncing timing. 427 429 * 428 430 * http://thread.gmane.org/gmane.linux.ide/34098 429 431 * 430 - * Debounce with hotplug timing and request follow-up SRST. 432 + * And bz#12176 reports that hardreset simply doesn't work on nf2. 433 + * Give up on it and just don't do hardreset. 431 434 */ 432 435 static struct ata_port_operations nv_nf2_ops = { 433 - .inherits = &nv_common_ops, 436 + .inherits = &nv_generic_ops, 434 437 .freeze = nv_nf2_freeze, 435 438 .thaw = nv_nf2_thaw, 436 - .hardreset = nv_noclassify_hardreset, 437 439 }; 438 440 439 441 /* For initial probing after boot and hot plugging, hardreset mostly
+17
drivers/base/dd.c
··· 18 18 */ 19 19 20 20 #include <linux/device.h> 21 + #include <linux/delay.h> 21 22 #include <linux/module.h> 22 23 #include <linux/kthread.h> 23 24 #include <linux/wait.h> 25 + #include <linux/async.h> 24 26 25 27 #include "base.h" 26 28 #include "power/power.h" ··· 166 164 atomic_read(&probe_count)); 167 165 if (atomic_read(&probe_count)) 168 166 return -EBUSY; 167 + return 0; 168 + } 169 + 170 + /** 171 + * wait_for_device_probe 172 + * Wait for device probing to be completed. 173 + * 174 + * Note: this function polls at 100 msec intervals. 175 + */ 176 + int wait_for_device_probe(void) 177 + { 178 + /* wait for the known devices to complete their probing */ 179 + while (driver_probe_done() != 0) 180 + msleep(100); 181 + async_synchronize_full(); 169 182 return 0; 170 183 } 171 184
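A typical caller of the new helper is boot-time code that must not proceed until every known device has had its probe() run. A hypothetical sketch (my_scan_disks() is illustrative, not a real API):

    static int __init my_late_setup(void)
    {
        /* blocks, polling driver_probe_done() at 100 ms intervals,
         * then drains the async probe queue */
        wait_for_device_probe();
        my_scan_disks();   /* hypothetical follow-up work */
        return 0;
    }
    late_initcall(my_late_setup);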
+1
drivers/block/aoe/aoe.h
··· 18 18 enum { 19 19 AOECMD_ATA, 20 20 AOECMD_CFG, 21 + AOECMD_VEND_MIN = 0xf0, 21 22 22 23 AOEFL_RSP = (1<<3), 23 24 AOEFL_ERR = (1<<2),
+2
drivers/block/aoe/aoenet.c
··· 142 142 aoecmd_cfg_rsp(skb); 143 143 break; 144 144 default: 145 + if (h->cmd >= AOECMD_VEND_MIN) 146 + break; /* don't complain about vendor commands */ 145 147 printk(KERN_INFO "aoe: unknown cmd %d\n", h->cmd); 146 148 } 147 149 exit:
+215
drivers/block/cciss.c
··· 3390 3390 kfree(p); 3391 3391 } 3392 3392 3393 + /* Send a message CDB to the firmware. */ 3394 + static __devinit int cciss_message(struct pci_dev *pdev, unsigned char opcode, unsigned char type) 3395 + { 3396 + typedef struct { 3397 + CommandListHeader_struct CommandHeader; 3398 + RequestBlock_struct Request; 3399 + ErrDescriptor_struct ErrorDescriptor; 3400 + } Command; 3401 + static const size_t cmd_sz = sizeof(Command) + sizeof(ErrorInfo_struct); 3402 + Command *cmd; 3403 + dma_addr_t paddr64; 3404 + uint32_t paddr32, tag; 3405 + void __iomem *vaddr; 3406 + int i, err; 3407 + 3408 + vaddr = ioremap_nocache(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0)); 3409 + if (vaddr == NULL) 3410 + return -ENOMEM; 3411 + 3412 + /* The Inbound Post Queue only accepts 32-bit physical addresses for the 3413 + CCISS commands, so they must be allocated from the lower 4GiB of 3414 + memory. */ 3415 + err = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); 3416 + if (err) { 3417 + iounmap(vaddr); 3418 + return -ENOMEM; 3419 + } 3420 + 3421 + cmd = pci_alloc_consistent(pdev, cmd_sz, &paddr64); 3422 + if (cmd == NULL) { 3423 + iounmap(vaddr); 3424 + return -ENOMEM; 3425 + } 3426 + 3427 + /* This must fit, because of the 32-bit consistent DMA mask. Also, 3428 + although there's no guarantee, we assume that the address is at 3429 + least 4-byte aligned (most likely, it's page-aligned). */ 3430 + paddr32 = paddr64; 3431 + 3432 + cmd->CommandHeader.ReplyQueue = 0; 3433 + cmd->CommandHeader.SGList = 0; 3434 + cmd->CommandHeader.SGTotal = 0; 3435 + cmd->CommandHeader.Tag.lower = paddr32; 3436 + cmd->CommandHeader.Tag.upper = 0; 3437 + memset(&cmd->CommandHeader.LUN.LunAddrBytes, 0, 8); 3438 + 3439 + cmd->Request.CDBLen = 16; 3440 + cmd->Request.Type.Type = TYPE_MSG; 3441 + cmd->Request.Type.Attribute = ATTR_HEADOFQUEUE; 3442 + cmd->Request.Type.Direction = XFER_NONE; 3443 + cmd->Request.Timeout = 0; /* Don't time out */ 3444 + cmd->Request.CDB[0] = opcode; 3445 + cmd->Request.CDB[1] = type; 3446 + memset(&cmd->Request.CDB[2], 0, 14); /* the rest of the CDB is reserved */ 3447 + 3448 + cmd->ErrorDescriptor.Addr.lower = paddr32 + sizeof(Command); 3449 + cmd->ErrorDescriptor.Addr.upper = 0; 3450 + cmd->ErrorDescriptor.Len = sizeof(ErrorInfo_struct); 3451 + 3452 + writel(paddr32, vaddr + SA5_REQUEST_PORT_OFFSET); 3453 + 3454 + for (i = 0; i < 10; i++) { 3455 + tag = readl(vaddr + SA5_REPLY_PORT_OFFSET); 3456 + if ((tag & ~3) == paddr32) 3457 + break; 3458 + schedule_timeout_uninterruptible(HZ); 3459 + } 3460 + 3461 + iounmap(vaddr); 3462 + 3463 + /* we leak the DMA buffer here ... no choice since the controller could 3464 + still complete the command. */ 3465 + if (i == 10) { 3466 + printk(KERN_ERR "cciss: controller message %02x:%02x timed out\n", 3467 + opcode, type); 3468 + return -ETIMEDOUT; 3469 + } 3470 + 3471 + pci_free_consistent(pdev, cmd_sz, cmd, paddr64); 3472 + 3473 + if (tag & 2) { 3474 + printk(KERN_ERR "cciss: controller message %02x:%02x failed\n", 3475 + opcode, type); 3476 + return -EIO; 3477 + } 3478 + 3479 + printk(KERN_INFO "cciss: controller message %02x:%02x succeeded\n", 3480 + opcode, type); 3481 + return 0; 3482 + } 3483 + 3484 + #define cciss_soft_reset_controller(p) cciss_message(p, 1, 0) 3485 + #define cciss_noop(p) cciss_message(p, 3, 0) 3486 + 3487 + static __devinit int cciss_reset_msi(struct pci_dev *pdev) 3488 + { 3489 + /* the #defines are stolen from drivers/pci/msi.h. 
*/ 3490 + #define msi_control_reg(base) (base + PCI_MSI_FLAGS) 3491 + #define PCI_MSIX_FLAGS_ENABLE (1 << 15) 3492 + 3493 + int pos; 3494 + u16 control = 0; 3495 + 3496 + pos = pci_find_capability(pdev, PCI_CAP_ID_MSI); 3497 + if (pos) { 3498 + pci_read_config_word(pdev, msi_control_reg(pos), &control); 3499 + if (control & PCI_MSI_FLAGS_ENABLE) { 3500 + printk(KERN_INFO "cciss: resetting MSI\n"); 3501 + pci_write_config_word(pdev, msi_control_reg(pos), control & ~PCI_MSI_FLAGS_ENABLE); 3502 + } 3503 + } 3504 + 3505 + pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); 3506 + if (pos) { 3507 + pci_read_config_word(pdev, msi_control_reg(pos), &control); 3508 + if (control & PCI_MSIX_FLAGS_ENABLE) { 3509 + printk(KERN_INFO "cciss: resetting MSI-X\n"); 3510 + pci_write_config_word(pdev, msi_control_reg(pos), control & ~PCI_MSIX_FLAGS_ENABLE); 3511 + } 3512 + } 3513 + 3514 + return 0; 3515 + } 3516 + 3517 + /* This does a hard reset of the controller using PCI power management 3518 + * states. */ 3519 + static __devinit int cciss_hard_reset_controller(struct pci_dev *pdev) 3520 + { 3521 + u16 pmcsr, saved_config_space[32]; 3522 + int i, pos; 3523 + 3524 + printk(KERN_INFO "cciss: using PCI PM to reset controller\n"); 3525 + 3526 + /* This is very nearly the same thing as 3527 + 3528 + pci_save_state(pci_dev); 3529 + pci_set_power_state(pci_dev, PCI_D3hot); 3530 + pci_set_power_state(pci_dev, PCI_D0); 3531 + pci_restore_state(pci_dev); 3532 + 3533 + but we can't use these nice canned kernel routines on 3534 + kexec, because they also check the MSI/MSI-X state in PCI 3535 + configuration space and do the wrong thing when it is 3536 + set/cleared. Also, the pci_save/restore_state functions 3537 + violate the ordering requirements for restoring the 3538 + configuration space from the CCISS document (see the 3539 + comment below). So we roll our own .... */ 3540 + 3541 + for (i = 0; i < 32; i++) 3542 + pci_read_config_word(pdev, 2*i, &saved_config_space[i]); 3543 + 3544 + pos = pci_find_capability(pdev, PCI_CAP_ID_PM); 3545 + if (pos == 0) { 3546 + printk(KERN_ERR "cciss_reset_controller: PCI PM not supported\n"); 3547 + return -ENODEV; 3548 + } 3549 + 3550 + /* Quoting from the Open CISS Specification: "The Power 3551 + * Management Control/Status Register (CSR) controls the power 3552 + * state of the device. The normal operating state is D0, 3553 + * CSR=00h. The software off state is D3, CSR=03h. To reset 3554 + * the controller, place the interface device in D3 then to 3555 + * D0, this causes a secondary PCI reset which will reset the 3556 + * controller." */ 3557 + 3558 + /* enter the D3hot power management state */ 3559 + pci_read_config_word(pdev, pos + PCI_PM_CTRL, &pmcsr); 3560 + pmcsr &= ~PCI_PM_CTRL_STATE_MASK; 3561 + pmcsr |= PCI_D3hot; 3562 + pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr); 3563 + 3564 + schedule_timeout_uninterruptible(HZ >> 1); 3565 + 3566 + /* enter the D0 power management state */ 3567 + pmcsr &= ~PCI_PM_CTRL_STATE_MASK; 3568 + pmcsr |= PCI_D0; 3569 + pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr); 3570 + 3571 + schedule_timeout_uninterruptible(HZ >> 1); 3572 + 3573 + /* Restore the PCI configuration space. The Open CISS 3574 + * Specification says, "Restore the PCI Configuration 3575 + * Registers, offsets 00h through 60h. It is important to 3576 + * restore the command register, 16-bits at offset 04h, 3577 + * last. Do not restore the configuration status register, 3578 + * 16-bits at offset 06h." Note that the offset is 2*i. 
*/ 3579 + for (i = 0; i < 32; i++) { 3580 + if (i == 2 || i == 3) 3581 + continue; 3582 + pci_write_config_word(pdev, 2*i, saved_config_space[i]); 3583 + } 3584 + wmb(); 3585 + pci_write_config_word(pdev, 4, saved_config_space[2]); 3586 + 3587 + return 0; 3588 + } 3589 + 3393 3590 /* 3394 3591 * This is it. Find all the controllers and register them. I really hate 3395 3592 * stealing all these major device numbers. ··· 3600 3403 int rc; 3601 3404 int dac, return_code; 3602 3405 InquiryData_struct *inq_buff = NULL; 3406 + 3407 + if (reset_devices) { 3408 + /* Reset the controller with a PCI power-cycle */ 3409 + if (cciss_hard_reset_controller(pdev) || cciss_reset_msi(pdev)) 3410 + return -ENODEV; 3411 + 3412 + /* Some devices (notably the HP Smart Array 5i Controller) 3413 + need a little pause here */ 3414 + schedule_timeout_uninterruptible(30*HZ); 3415 + 3416 + /* Now try to get the controller to respond to a no-op */ 3417 + for (i=0; i<12; i++) { 3418 + if (cciss_noop(pdev) == 0) 3419 + break; 3420 + else 3421 + printk("cciss: no-op failed%s\n", (i < 11 ? "; re-trying" : "")); 3422 + } 3423 + } 3603 3424 3604 3425 i = alloc_cciss_hba(); 3605 3426 if (i < 0)
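One detail that is easy to misread in the restore loop:

    /* saved_config_space[] holds 16-bit words, and word i sits at
     * config-space offset 2*i, so:
     *     i == 2  ->  offset 0x04, the PCI command register
     *     i == 3  ->  offset 0x06, the PCI status register
     * The loop skips both; the command register is then written last
     * via pci_write_config_word(pdev, 4, saved_config_space[2]), in
     * the order the Open CISS spec requires, and the status register
     * is deliberately never restored. */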
+52 -27
drivers/block/floppy.c
··· 558 558 static void recalibrate_floppy(void); 559 559 static void floppy_shutdown(unsigned long); 560 560 561 + static int floppy_request_regions(int); 562 + static void floppy_release_regions(int); 561 563 static int floppy_grab_irq_and_dma(void); 562 564 static void floppy_release_irq_and_dma(void); 563 565 ··· 4276 4274 FDCS->rawcmd = 2; 4277 4275 if (user_reset_fdc(-1, FD_RESET_ALWAYS, 0)) { 4278 4276 /* free ioports reserved by floppy_grab_irq_and_dma() */ 4279 - release_region(FDCS->address + 2, 4); 4280 - release_region(FDCS->address + 7, 1); 4277 + floppy_release_regions(fdc); 4281 4278 FDCS->address = -1; 4282 4279 FDCS->version = FDC_NONE; 4283 4280 continue; ··· 4285 4284 FDCS->version = get_fdc_version(); 4286 4285 if (FDCS->version == FDC_NONE) { 4287 4286 /* free ioports reserved by floppy_grab_irq_and_dma() */ 4288 - release_region(FDCS->address + 2, 4); 4289 - release_region(FDCS->address + 7, 1); 4287 + floppy_release_regions(fdc); 4290 4288 FDCS->address = -1; 4291 4289 continue; 4292 4290 } ··· 4358 4358 4359 4359 static DEFINE_SPINLOCK(floppy_usage_lock); 4360 4360 4361 + static const struct io_region { 4362 + int offset; 4363 + int size; 4364 + } io_regions[] = { 4365 + { 2, 1 }, 4366 + /* address + 3 is sometimes reserved by pnp bios for motherboard */ 4367 + { 4, 2 }, 4368 + /* address + 6 is reserved, and may be taken by IDE. 4369 + * Unfortunately, Adaptec doesn't know this :-(, */ 4370 + { 7, 1 }, 4371 + }; 4372 + 4373 + static void floppy_release_allocated_regions(int fdc, const struct io_region *p) 4374 + { 4375 + while (p != io_regions) { 4376 + p--; 4377 + release_region(FDCS->address + p->offset, p->size); 4378 + } 4379 + } 4380 + 4381 + #define ARRAY_END(X) (&((X)[ARRAY_SIZE(X)])) 4382 + 4383 + static int floppy_request_regions(int fdc) 4384 + { 4385 + const struct io_region *p; 4386 + 4387 + for (p = io_regions; p < ARRAY_END(io_regions); p++) { 4388 + if (!request_region(FDCS->address + p->offset, p->size, "floppy")) { 4389 + DPRINT("Floppy io-port 0x%04lx in use\n", FDCS->address + p->offset); 4390 + floppy_release_allocated_regions(fdc, p); 4391 + return -EBUSY; 4392 + } 4393 + } 4394 + return 0; 4395 + } 4396 + 4397 + static void floppy_release_regions(int fdc) 4398 + { 4399 + floppy_release_allocated_regions(fdc, ARRAY_END(io_regions)); 4400 + } 4401 + 4361 4402 static int floppy_grab_irq_and_dma(void) 4362 4403 { 4363 4404 unsigned long flags; ··· 4440 4399 4441 4400 for (fdc = 0; fdc < N_FDC; fdc++) { 4442 4401 if (FDCS->address != -1) { 4443 - if (!request_region(FDCS->address + 2, 4, "floppy")) { 4444 - DPRINT("Floppy io-port 0x%04lx in use\n", 4445 - FDCS->address + 2); 4446 - goto cleanup1; 4447 - } 4448 - if (!request_region(FDCS->address + 7, 1, "floppy DIR")) { 4449 - DPRINT("Floppy io-port 0x%04lx in use\n", 4450 - FDCS->address + 7); 4451 - goto cleanup2; 4452 - } 4453 - /* address + 6 is reserved, and may be taken by IDE. 
4454 - * Unfortunately, Adaptec doesn't know this :-(, */ 4402 + if (floppy_request_regions(fdc)) 4403 + goto cleanup; 4455 4404 } 4456 4405 } 4457 4406 for (fdc = 0; fdc < N_FDC; fdc++) { ··· 4463 4432 fdc = 0; 4464 4433 irqdma_allocated = 1; 4465 4434 return 0; 4466 - cleanup2: 4467 - release_region(FDCS->address + 2, 4); 4468 - cleanup1: 4435 + cleanup: 4469 4436 fd_free_irq(); 4470 4437 fd_free_dma(); 4471 - while (--fdc >= 0) { 4472 - release_region(FDCS->address + 2, 4); 4473 - release_region(FDCS->address + 7, 1); 4474 - } 4438 + while (--fdc >= 0) 4439 + floppy_release_regions(fdc); 4475 4440 spin_lock_irqsave(&floppy_usage_lock, flags); 4476 4441 usage_count--; 4477 4442 spin_unlock_irqrestore(&floppy_usage_lock, flags); ··· 4528 4501 #endif 4529 4502 old_fdc = fdc; 4530 4503 for (fdc = 0; fdc < N_FDC; fdc++) 4531 - if (FDCS->address != -1) { 4532 - release_region(FDCS->address + 2, 4); 4533 - release_region(FDCS->address + 7, 1); 4534 - } 4504 + if (FDCS->address != -1) 4505 + floppy_release_regions(fdc); 4535 4506 fdc = old_fdc; 4536 4507 } 4537 4508
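The rollback in floppy_release_allocated_regions() walks the table backwards from the failure point, releasing only what was actually acquired. A self-contained userspace model of the same pattern (grab()/drop() are stand-ins for request_region()/release_region(), rigged so offset 7 is busy):

    #include <stdio.h>

    struct io_region { int offset; int size; };

    static const struct io_region regions[] = {
        { 2, 1 }, { 4, 2 }, { 7, 1 },
    };
    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
    #define ARRAY_END(x)  (&(x)[ARRAY_SIZE(x)])

    static int grab(int offset) { return offset != 7; }   /* +7 is busy */
    static void drop(int offset) { printf("released +%d\n", offset); }

    static int grab_all(void)
    {
        const struct io_region *p;

        for (p = regions; p < ARRAY_END(regions); p++) {
            if (!grab(p->offset)) {
                /* roll back only what we got: walk p backwards */
                while (p != regions) {
                    p--;
                    drop(p->offset);
                }
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        printf("grab_all: %d\n", grab_all());   /* releases +4 then +2 */
        return 0;
    }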
+1 -1
drivers/block/paride/pg.c
··· 422 422 423 423 for (k = 0; k < len; k++) { 424 424 char c = *buf++; 425 - if (c != ' ' || c != l) 425 + if (c != ' ' && c != l) 426 426 l = *targ++ = c; 427 427 } 428 428 if (l == ' ')
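The one-character fix deserves a second look: with ||, the condition is false only when c is a space that also equals the last copied character, so the intended de-duplication almost never triggered. A two-line check:

    #include <stdio.h>

    int main(void)
    {
        char c = 'A', l = 'A';   /* a repeated, non-space character */

        printf("buggy (||): %d\n", c != ' ' || c != l);  /* 1: dup kept    */
        printf("fixed (&&): %d\n", c != ' ' && c != l);  /* 0: dup dropped */
        return 0;
    }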
+3 -2
drivers/char/sx.c
··· 1746 1746 sx_dprintk(SX_DEBUG_FIRMWARE, "returning type= %ld\n", rc); 1747 1747 break; 1748 1748 case SXIO_DO_RAMTEST: 1749 - if (sx_initialized) /* Already initialized: better not ramtest the board. */ 1749 + if (sx_initialized) { /* Already initialized: better not ramtest the board. */ 1750 1750 rc = -EPERM; 1751 1751 break; 1752 + } 1752 1753 if (IS_SX_BOARD(board)) { 1753 1754 rc = do_memtest(board, 0, 0x7000); 1754 1755 if (!rc) ··· 1789 1788 nbytes - i : SX_CHUNK_SIZE)) { 1790 1789 kfree(tmp); 1791 1790 rc = -EFAULT; 1792 - break; 1791 + goto out; 1793 1792 } 1794 1793 memcpy_toio(board->base2 + offset + i, tmp, 1795 1794 (i + SX_CHUNK_SIZE > nbytes) ?
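The sx.c change fixes a classic indentation trap: without the braces, the break ran unconditionally, so the ramtest code below it was unreachable for every caller, initialized or not. Reduced to its essence:

    /* before: looks guarded, isn't; break always executes */
    if (sx_initialized)
            rc = -EPERM;
            break;          /* not part of the if! */

    /* after: both statements really are conditional */
    if (sx_initialized) {
            rc = -EPERM;
            break;
    }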
+2
drivers/dma/dmaengine.c
··· 518 518 dma_chan_name(chan), err); 519 519 else 520 520 break; 521 + chan->private = NULL; 521 522 chan = NULL; 522 523 } 523 524 } ··· 537 536 WARN_ONCE(chan->client_count != 1, 538 537 "chan reference count %d != 1\n", chan->client_count); 539 538 dma_chan_put(chan); 539 + chan->private = NULL; 540 540 mutex_unlock(&dma_list_mutex); 541 541 } 542 542 EXPORT_SYMBOL_GPL(dma_release_channel);
+2 -3
drivers/dma/dw_dmac.c
··· 560 560 unsigned long flags) 561 561 { 562 562 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 563 - struct dw_dma_slave *dws = dwc->dws; 563 + struct dw_dma_slave *dws = chan->private; 564 564 struct dw_desc *prev; 565 565 struct dw_desc *first; 566 566 u32 ctllo; ··· 790 790 cfghi = DWC_CFGH_FIFO_MODE; 791 791 cfglo = 0; 792 792 793 - dws = dwc->dws; 793 + dws = chan->private; 794 794 if (dws) { 795 795 /* 796 796 * We need controller-specific data to set up slave ··· 866 866 spin_lock_bh(&dwc->lock); 867 867 list_splice_init(&dwc->free_list, &list); 868 868 dwc->descs_allocated = 0; 869 - dwc->dws = NULL; 870 869 871 870 /* Disable interrupts */ 872 871 channel_clear_bit(dw, MASK.XFER, dwc->mask);
-2
drivers/dma/dw_dmac_regs.h
··· 139 139 struct list_head queue; 140 140 struct list_head free_list; 141 141 142 - struct dw_dma_slave *dws; 143 - 144 142 unsigned int descs_allocated; 145 143 }; 146 144
+1 -1
drivers/firmware/memmap.c
··· 1 1 /* 2 2 * linux/drivers/firmware/memmap.c 3 3 * Copyright (C) 2008 SUSE LINUX Products GmbH 4 - * by Bernhard Walle <bwalle@suse.de> 4 + * by Bernhard Walle <bernhard.walle@gmx.de> 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License v2.0 as published by
+6 -7
drivers/gpu/drm/Kconfig
··· 80 80 XFree86 4.4 and above. If unsure, build this and i830 as modules and 81 81 the X server will load the correct one. 82 82 83 - endchoice 84 - 85 83 config DRM_I915_KMS 86 84 bool "Enable modesetting on intel by default" 87 85 depends on DRM_I915 88 86 help 89 - Choose this option if you want kernel modesetting enabled by default, 90 - and you have a new enough userspace to support this. Running old 91 - userspaces with this enabled will cause pain. Note that this causes 92 - the driver to bind to PCI devices, which precludes loading things 93 - like intelfb. 87 + Choose this option if you want kernel modesetting enabled by default, 88 + and you have a new enough userspace to support this. Running old 89 + userspaces with this enabled will cause pain. Note that this causes 90 + the driver to bind to PCI devices, which precludes loading things 91 + like intelfb. 94 92 93 + endchoice 95 94 96 95 config DRM_MGA 97 96 tristate "Matrox g200/g400"
+1 -2
drivers/gpu/drm/drm_crtc.c
··· 1741 1741 * RETURNS: 1742 1742 * Zero on success, errno on failure. 1743 1743 */ 1744 - void drm_fb_release(struct file *filp) 1744 + void drm_fb_release(struct drm_file *priv) 1745 1745 { 1746 - struct drm_file *priv = filp->private_data; 1747 1746 struct drm_device *dev = priv->minor->dev; 1748 1747 struct drm_framebuffer *fb, *tfb; 1749 1748
+16 -5
drivers/gpu/drm/drm_crtc_helper.c
··· 512 512 if (drm_mode_equal(&saved_mode, &crtc->mode)) { 513 513 if (saved_x != crtc->x || saved_y != crtc->y || 514 514 depth_changed || bpp_changed) { 515 - crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, 516 - old_fb); 515 + ret = !crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, 516 + old_fb); 517 517 goto done; 518 518 } 519 519 } ··· 552 552 /* Set up the DPLL and any encoders state that needs to adjust or depend 553 553 * on the DPLL. 554 554 */ 555 - crtc_funcs->mode_set(crtc, mode, adjusted_mode, x, y, old_fb); 555 + ret = !crtc_funcs->mode_set(crtc, mode, adjusted_mode, x, y, old_fb); 556 + if (!ret) 557 + goto done; 556 558 557 559 list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 558 560 ··· 754 752 if (!drm_crtc_helper_set_mode(set->crtc, set->mode, 755 753 set->x, set->y, 756 754 old_fb)) { 755 + DRM_ERROR("failed to set mode on crtc %p\n", 756 + set->crtc); 757 757 ret = -EINVAL; 758 758 goto fail_set_mode; 759 759 } ··· 769 765 old_fb = set->crtc->fb; 770 766 if (set->crtc->fb != set->fb) 771 767 set->crtc->fb = set->fb; 772 - crtc_funcs->mode_set_base(set->crtc, set->x, set->y, old_fb); 768 + ret = crtc_funcs->mode_set_base(set->crtc, 769 + set->x, set->y, old_fb); 770 + if (ret != 0) 771 + goto fail_set_mode; 773 772 } 774 773 775 774 kfree(save_encoders); ··· 782 775 fail_set_mode: 783 776 set->crtc->enabled = save_enabled; 784 777 count = 0; 785 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) 778 + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 779 + if (!connector->encoder) 780 + continue; 781 + 786 782 connector->encoder->crtc = save_crtcs[count++]; 783 + } 787 784 fail_no_encoder: 788 785 kfree(save_crtcs); 789 786 count = 0;
+3
drivers/gpu/drm/drm_fops.c
··· 457 457 if (dev->driver->driver_features & DRIVER_GEM) 458 458 drm_gem_release(dev, file_priv); 459 459 460 + if (dev->driver->driver_features & DRIVER_MODESET) 461 + drm_fb_release(file_priv); 462 + 460 463 mutex_lock(&dev->ctxlist_mutex); 461 464 if (!list_empty(&dev->ctxlist)) { 462 465 struct drm_ctx_list *pos, *n;
+56 -25
drivers/gpu/drm/drm_gem.c
··· 104 104 105 105 if (drm_mm_init(&mm->offset_manager, DRM_FILE_PAGE_OFFSET_START, 106 106 DRM_FILE_PAGE_OFFSET_SIZE)) { 107 - drm_free(mm, sizeof(struct drm_gem_mm), DRM_MEM_MM); 108 107 drm_ht_remove(&mm->offset_hash); 108 + drm_free(mm, sizeof(struct drm_gem_mm), DRM_MEM_MM); 109 109 return -ENOMEM; 110 110 } 111 111 ··· 295 295 return -EBADF; 296 296 297 297 again: 298 - if (idr_pre_get(&dev->object_name_idr, GFP_KERNEL) == 0) 299 - return -ENOMEM; 298 + if (idr_pre_get(&dev->object_name_idr, GFP_KERNEL) == 0) { 299 + ret = -ENOMEM; 300 + goto err; 301 + } 300 302 301 303 spin_lock(&dev->object_name_lock); 302 - if (obj->name) { 303 - args->name = obj->name; 304 + if (!obj->name) { 305 + ret = idr_get_new_above(&dev->object_name_idr, obj, 1, 306 + &obj->name); 307 + args->name = (uint64_t) obj->name; 304 308 spin_unlock(&dev->object_name_lock); 305 - return 0; 309 + 310 + if (ret == -EAGAIN) 311 + goto again; 312 + 313 + if (ret != 0) 314 + goto err; 315 + 316 + /* Allocate a reference for the name table. */ 317 + drm_gem_object_reference(obj); 318 + } else { 319 + args->name = (uint64_t) obj->name; 320 + spin_unlock(&dev->object_name_lock); 321 + ret = 0; 306 322 } 307 - ret = idr_get_new_above(&dev->object_name_idr, obj, 1, 308 - &obj->name); 309 - spin_unlock(&dev->object_name_lock); 310 - if (ret == -EAGAIN) 311 - goto again; 312 323 313 - if (ret != 0) { 314 - mutex_lock(&dev->struct_mutex); 315 - drm_gem_object_unreference(obj); 316 - mutex_unlock(&dev->struct_mutex); 317 - return ret; 318 - } 319 - 320 - /* 321 - * Leave the reference from the lookup around as the 322 - * name table now holds one 323 - */ 324 - args->name = (uint64_t) obj->name; 325 - 326 - return 0; 324 + err: 325 + mutex_lock(&dev->struct_mutex); 326 + drm_gem_object_unreference(obj); 327 + mutex_unlock(&dev->struct_mutex); 328 + return ret; 327 329 } 328 330 329 331 /** ··· 450 448 spin_lock(&dev->object_name_lock); 451 449 if (obj->name) { 452 450 idr_remove(&dev->object_name_idr, obj->name); 451 + obj->name = 0; 453 452 spin_unlock(&dev->object_name_lock); 454 453 /* 455 454 * The object name held a reference to this object, drop ··· 462 459 463 460 } 464 461 EXPORT_SYMBOL(drm_gem_object_handle_free); 462 + 463 + void drm_gem_vm_open(struct vm_area_struct *vma) 464 + { 465 + struct drm_gem_object *obj = vma->vm_private_data; 466 + 467 + drm_gem_object_reference(obj); 468 + } 469 + EXPORT_SYMBOL(drm_gem_vm_open); 470 + 471 + void drm_gem_vm_close(struct vm_area_struct *vma) 472 + { 473 + struct drm_gem_object *obj = vma->vm_private_data; 474 + struct drm_device *dev = obj->dev; 475 + 476 + mutex_lock(&dev->struct_mutex); 477 + drm_gem_object_unreference(obj); 478 + mutex_unlock(&dev->struct_mutex); 479 + } 480 + EXPORT_SYMBOL(drm_gem_vm_close); 481 + 465 482 466 483 /** 467 484 * drm_gem_mmap - memory map routine for GEM objects ··· 543 520 prot |= _PAGE_CACHE_WC; 544 521 #endif 545 522 vma->vm_page_prot = __pgprot(prot); 523 + 524 + /* Take a ref for this mapping of the object, so that the fault 525 + * handler can dereference the mmap offset's pointer to the object. 526 + * This reference is cleaned up by the corresponding vm_close 527 + * (which should happen whether the vma was created by this call, or 528 + * by a vm_open due to mremap or partial unmap or whatever). 529 + */ 530 + drm_gem_object_reference(obj); 546 531 547 532 vma->vm_file = filp; /* Needed for drm_vm_open() */ 548 533 drm_vm_open_locked(vma);
+2
drivers/gpu/drm/i915/i915_drv.c
··· 94 94 95 95 static struct vm_operations_struct i915_gem_vm_ops = { 96 96 .fault = i915_gem_fault, 97 + .open = drm_gem_vm_open, 98 + .close = drm_gem_vm_close, 97 99 }; 98 100 99 101 static struct drm_driver driver = {
+2
drivers/gpu/drm/i915/i915_drv.h
··· 184 184 unsigned int lvds_dither:1; 185 185 unsigned int lvds_vbt:1; 186 186 unsigned int int_crt_support:1; 187 + unsigned int lvds_use_ssc:1; 188 + int lvds_ssc_freq; 187 189 188 190 struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */ 189 191 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
+75 -44
drivers/gpu/drm/i915/i915_gem.c
··· 607 607 case -EAGAIN: 608 608 return VM_FAULT_OOM; 609 609 case -EFAULT: 610 - case -EBUSY: 611 - DRM_ERROR("can't insert pfn?? fault or busy...\n"); 612 610 return VM_FAULT_SIGBUS; 613 611 default: 614 612 return VM_FAULT_NOPAGE; ··· 680 682 drm_free(list->map, sizeof(struct drm_map_list), DRM_MEM_DRIVER); 681 683 682 684 return ret; 685 + } 686 + 687 + static void 688 + i915_gem_free_mmap_offset(struct drm_gem_object *obj) 689 + { 690 + struct drm_device *dev = obj->dev; 691 + struct drm_i915_gem_object *obj_priv = obj->driver_private; 692 + struct drm_gem_mm *mm = dev->mm_private; 693 + struct drm_map_list *list; 694 + 695 + list = &obj->map_list; 696 + drm_ht_remove_item(&mm->offset_hash, &list->hash); 697 + 698 + if (list->file_offset_node) { 699 + drm_mm_put_block(list->file_offset_node); 700 + list->file_offset_node = NULL; 701 + } 702 + 703 + if (list->map) { 704 + drm_free(list->map, sizeof(struct drm_map), DRM_MEM_DRIVER); 705 + list->map = NULL; 706 + } 707 + 708 + obj_priv->mmap_offset = 0; 683 709 } 684 710 685 711 /** ··· 780 758 781 759 if (!obj_priv->mmap_offset) { 782 760 ret = i915_gem_create_mmap_offset(obj); 783 - if (ret) 761 + if (ret) { 762 + drm_gem_object_unreference(obj); 763 + mutex_unlock(&dev->struct_mutex); 784 764 return ret; 765 + } 785 766 } 786 767 787 768 args->offset = obj_priv->mmap_offset; ··· 2276 2251 (int) reloc.offset, 2277 2252 reloc.read_domains, 2278 2253 reloc.write_domain); 2254 + drm_gem_object_unreference(target_obj); 2255 + i915_gem_object_unpin(obj); 2279 2256 return -EINVAL; 2280 2257 } 2281 2258 ··· 2507 2480 if (dev_priv->mm.wedged) { 2508 2481 DRM_ERROR("Execbuf while wedged\n"); 2509 2482 mutex_unlock(&dev->struct_mutex); 2510 - return -EIO; 2483 + ret = -EIO; 2484 + goto pre_mutex_err; 2511 2485 } 2512 2486 2513 2487 if (dev_priv->mm.suspended) { 2514 2488 DRM_ERROR("Execbuf while VT-switched.\n"); 2515 2489 mutex_unlock(&dev->struct_mutex); 2516 - return -EBUSY; 2490 + ret = -EBUSY; 2491 + goto pre_mutex_err; 2517 2492 } 2518 2493 2519 2494 /* Look up object handles */ ··· 2661 2632 2662 2633 i915_verify_inactive(dev, __FILE__, __LINE__); 2663 2634 2664 - /* Copy the new buffer offsets back to the user's exec list. */ 2665 - ret = copy_to_user((struct drm_i915_relocation_entry __user *) 2666 - (uintptr_t) args->buffers_ptr, 2667 - exec_list, 2668 - sizeof(*exec_list) * args->buffer_count); 2669 - if (ret) 2670 - DRM_ERROR("failed to copy %d exec entries " 2671 - "back to user (%d)\n", 2672 - args->buffer_count, ret); 2673 2635 err: 2674 2636 for (i = 0; i < pinned; i++) 2675 2637 i915_gem_object_unpin(object_list[i]); ··· 2669 2649 drm_gem_object_unreference(object_list[i]); 2670 2650 2671 2651 mutex_unlock(&dev->struct_mutex); 2652 + 2653 + if (!ret) { 2654 + /* Copy the new buffer offsets back to the user's exec list. 
*/ 2655 + ret = copy_to_user((struct drm_i915_relocation_entry __user *) 2656 + (uintptr_t) args->buffers_ptr, 2657 + exec_list, 2658 + sizeof(*exec_list) * args->buffer_count); 2659 + if (ret) 2660 + DRM_ERROR("failed to copy %d exec entries " 2661 + "back to user (%d)\n", 2662 + args->buffer_count, ret); 2663 + } 2672 2664 2673 2665 pre_mutex_err: 2674 2666 drm_free(object_list, sizeof(*object_list) * args->buffer_count, ··· 2785 2753 if (obj_priv->pin_filp != NULL && obj_priv->pin_filp != file_priv) { 2786 2754 DRM_ERROR("Already pinned in i915_gem_pin_ioctl(): %d\n", 2787 2755 args->handle); 2756 + drm_gem_object_unreference(obj); 2788 2757 mutex_unlock(&dev->struct_mutex); 2789 2758 return -EINVAL; 2790 2759 } ··· 2918 2885 void i915_gem_free_object(struct drm_gem_object *obj) 2919 2886 { 2920 2887 struct drm_device *dev = obj->dev; 2921 - struct drm_gem_mm *mm = dev->mm_private; 2922 - struct drm_map_list *list; 2923 - struct drm_map *map; 2924 2888 struct drm_i915_gem_object *obj_priv = obj->driver_private; 2925 2889 2926 2890 while (obj_priv->pin_count > 0) ··· 2928 2898 2929 2899 i915_gem_object_unbind(obj); 2930 2900 2931 - list = &obj->map_list; 2932 - drm_ht_remove_item(&mm->offset_hash, &list->hash); 2933 - 2934 - if (list->file_offset_node) { 2935 - drm_mm_put_block(list->file_offset_node); 2936 - list->file_offset_node = NULL; 2937 - } 2938 - 2939 - map = list->map; 2940 - if (map) { 2941 - drm_free(map, sizeof(*map), DRM_MEM_DRIVER); 2942 - list->map = NULL; 2943 - } 2901 + i915_gem_free_mmap_offset(obj); 2944 2902 2945 2903 drm_free(obj_priv->page_cpu_valid, 1, DRM_MEM_DRIVER); 2946 2904 drm_free(obj->driver_private, 1, DRM_MEM_DRIVER); ··· 3113 3095 if (dev_priv->hw_status_page == NULL) { 3114 3096 DRM_ERROR("Failed to map status page.\n"); 3115 3097 memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3098 + i915_gem_object_unpin(obj); 3116 3099 drm_gem_object_unreference(obj); 3117 3100 return -EINVAL; 3118 3101 } ··· 3124 3105 DRM_DEBUG("hws offset: 0x%08x\n", dev_priv->status_gfx_addr); 3125 3106 3126 3107 return 0; 3108 + } 3109 + 3110 + static void 3111 + i915_gem_cleanup_hws(struct drm_device *dev) 3112 + { 3113 + drm_i915_private_t *dev_priv = dev->dev_private; 3114 + struct drm_gem_object *obj = dev_priv->hws_obj; 3115 + struct drm_i915_gem_object *obj_priv = obj->driver_private; 3116 + 3117 + if (dev_priv->hws_obj == NULL) 3118 + return; 3119 + 3120 + kunmap(obj_priv->page_list[0]); 3121 + i915_gem_object_unpin(obj); 3122 + drm_gem_object_unreference(obj); 3123 + dev_priv->hws_obj = NULL; 3124 + memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3125 + dev_priv->hw_status_page = NULL; 3126 + 3127 + /* Write high address into HWS_PGA when disabling. 
*/ 3128 + I915_WRITE(HWS_PGA, 0x1ffff000); 3127 3129 } 3128 3130 3129 3131 int ··· 3164 3124 obj = drm_gem_object_alloc(dev, 128 * 1024); 3165 3125 if (obj == NULL) { 3166 3126 DRM_ERROR("Failed to allocate ringbuffer\n"); 3127 + i915_gem_cleanup_hws(dev); 3167 3128 return -ENOMEM; 3168 3129 } 3169 3130 obj_priv = obj->driver_private; ··· 3172 3131 ret = i915_gem_object_pin(obj, 4096); 3173 3132 if (ret != 0) { 3174 3133 drm_gem_object_unreference(obj); 3134 + i915_gem_cleanup_hws(dev); 3175 3135 return ret; 3176 3136 } 3177 3137 ··· 3190 3148 if (ring->map.handle == NULL) { 3191 3149 DRM_ERROR("Failed to map ringbuffer.\n"); 3192 3150 memset(&dev_priv->ring, 0, sizeof(dev_priv->ring)); 3151 + i915_gem_object_unpin(obj); 3193 3152 drm_gem_object_unreference(obj); 3153 + i915_gem_cleanup_hws(dev); 3194 3154 return -EINVAL; 3195 3155 } 3196 3156 ring->ring_obj = obj; ··· 3272 3228 dev_priv->ring.ring_obj = NULL; 3273 3229 memset(&dev_priv->ring, 0, sizeof(dev_priv->ring)); 3274 3230 3275 - if (dev_priv->hws_obj != NULL) { 3276 - struct drm_gem_object *obj = dev_priv->hws_obj; 3277 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3278 - 3279 - kunmap(obj_priv->page_list[0]); 3280 - i915_gem_object_unpin(obj); 3281 - drm_gem_object_unreference(obj); 3282 - dev_priv->hws_obj = NULL; 3283 - memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3284 - dev_priv->hw_status_page = NULL; 3285 - 3286 - /* Write high address into HWS_PGA when disabling. */ 3287 - I915_WRITE(HWS_PGA, 0x1ffff000); 3288 - } 3231 + i915_gem_cleanup_hws(dev); 3289 3232 } 3290 3233 3291 3234 int
+2 -4
drivers/gpu/drm/i915/i915_gem_tiling.c
··· 299 299 } 300 300 obj_priv->stride = args->stride; 301 301 302 - mutex_unlock(&dev->struct_mutex); 303 - 304 302 drm_gem_object_unreference(obj); 303 + mutex_unlock(&dev->struct_mutex); 305 304 306 305 return 0; 307 306 } ··· 339 340 DRM_ERROR("unknown tiling mode\n"); 340 341 } 341 342 342 - mutex_unlock(&dev->struct_mutex); 343 - 344 343 drm_gem_object_unreference(obj); 344 + mutex_unlock(&dev->struct_mutex); 345 345 346 346 return 0; 347 347 }
+8
drivers/gpu/drm/i915/intel_bios.c
··· 135 135 if (general) { 136 136 dev_priv->int_tv_support = general->int_tv_support; 137 137 dev_priv->int_crt_support = general->int_crt_support; 138 + dev_priv->lvds_use_ssc = general->enable_ssc; 139 + 140 + if (dev_priv->lvds_use_ssc) { 141 + if (IS_I855(dev_priv->dev)) 142 + dev_priv->lvds_ssc_freq = general->ssc_freq ? 66 : 48; 143 + else 144 + dev_priv->lvds_ssc_freq = general->ssc_freq ? 100 : 96; 145 + } 138 146 } 139 147 } 140 148
+85 -75
drivers/gpu/drm/i915/intel_display.c
··· 90 90 #define I9XX_DOT_MAX 400000 91 91 #define I9XX_VCO_MIN 1400000 92 92 #define I9XX_VCO_MAX 2800000 93 - #define I9XX_N_MIN 3 94 - #define I9XX_N_MAX 8 93 + #define I9XX_N_MIN 1 94 + #define I9XX_N_MAX 6 95 95 #define I9XX_M_MIN 70 96 96 #define I9XX_M_MAX 120 97 97 #define I9XX_M1_MIN 10 98 - #define I9XX_M1_MAX 20 98 + #define I9XX_M1_MAX 22 99 99 #define I9XX_M2_MIN 5 100 100 #define I9XX_M2_MAX 9 101 101 #define I9XX_P_SDVO_DAC_MIN 5 ··· 189 189 return limit; 190 190 } 191 191 192 - /** Derive the pixel clock for the given refclk and divisors for 8xx chips. */ 193 - 194 - static void i8xx_clock(int refclk, intel_clock_t *clock) 192 + static void intel_clock(int refclk, intel_clock_t *clock) 195 193 { 196 194 clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2); 197 195 clock->p = clock->p1 * clock->p2; 198 196 clock->vco = refclk * clock->m / (clock->n + 2); 199 197 clock->dot = clock->vco / clock->p; 200 - } 201 - 202 - /** Derive the pixel clock for the given refclk and divisors for 9xx chips. */ 203 - 204 - static void i9xx_clock(int refclk, intel_clock_t *clock) 205 - { 206 - clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2); 207 - clock->p = clock->p1 * clock->p2; 208 - clock->vco = refclk * clock->m / (clock->n + 2); 209 - clock->dot = clock->vco / clock->p; 210 - } 211 - 212 - static void intel_clock(struct drm_device *dev, int refclk, 213 - intel_clock_t *clock) 214 - { 215 - if (IS_I9XX(dev)) 216 - i9xx_clock (refclk, clock); 217 - else 218 - i8xx_clock (refclk, clock); 219 198 } 220 199 221 200 /** ··· 217 238 return false; 218 239 } 219 240 220 - #define INTELPllInvalid(s) { /* ErrorF (s) */; return false; } 241 + #define INTELPllInvalid(s) do { DRM_DEBUG(s); return false; } while (0) 221 242 /** 222 243 * Returns whether the given set of divisors are valid for a given refclk with 223 244 * the given connectors. ··· 297 318 clock.p1 <= limit->p1.max; clock.p1++) { 298 319 int this_err; 299 320 300 - intel_clock(dev, refclk, &clock); 321 + intel_clock(refclk, &clock); 301 322 302 323 if (!intel_PLL_is_valid(crtc, &clock)) 303 324 continue; ··· 322 343 udelay(20000); 323 344 } 324 345 325 - static void 346 + static int 326 347 intel_pipe_set_base(struct drm_crtc *crtc, int x, int y, 327 348 struct drm_framebuffer *old_fb) 328 349 { ··· 340 361 int dspstride = (pipe == 0) ? DSPASTRIDE : DSPBSTRIDE; 341 362 int dspcntr_reg = (pipe == 0) ? DSPACNTR : DSPBCNTR; 342 363 u32 dspcntr, alignment; 364 + int ret; 343 365 344 366 /* no fb bound */ 345 367 if (!crtc->fb) { 346 368 DRM_DEBUG("No FB bound\n"); 347 - return; 369 + return 0; 370 + } 371 + 372 + switch (pipe) { 373 + case 0: 374 + case 1: 375 + break; 376 + default: 377 + DRM_ERROR("Can't update pipe %d in SAREA\n", pipe); 378 + return -EINVAL; 348 379 } 349 380 350 381 intel_fb = to_intel_framebuffer(crtc->fb); ··· 366 377 alignment = 64 * 1024; 367 378 break; 368 379 case I915_TILING_X: 369 - if (IS_I9XX(dev)) 370 - alignment = 1024 * 1024; 371 - else 372 - alignment = 512 * 1024; 380 + /* pin() will align the object as required by fence */ 381 + alignment = 0; 373 382 break; 374 383 case I915_TILING_Y: 375 384 /* FIXME: Is this true? 
*/ 376 385 DRM_ERROR("Y tiled not allowed for scan out buffers\n"); 377 - return; 386 + return -EINVAL; 378 387 default: 379 388 BUG(); 380 389 } 381 390 382 - if (i915_gem_object_pin(intel_fb->obj, alignment)) 383 - return; 391 + mutex_lock(&dev->struct_mutex); 392 + ret = i915_gem_object_pin(intel_fb->obj, alignment); 393 + if (ret != 0) { 394 + mutex_unlock(&dev->struct_mutex); 395 + return ret; 396 + } 384 397 385 - i915_gem_object_set_to_gtt_domain(intel_fb->obj, 1); 386 - 387 - Start = obj_priv->gtt_offset; 388 - Offset = y * crtc->fb->pitch + x * (crtc->fb->bits_per_pixel / 8); 389 - 390 - I915_WRITE(dspstride, crtc->fb->pitch); 398 + ret = i915_gem_object_set_to_gtt_domain(intel_fb->obj, 1); 399 + if (ret != 0) { 400 + i915_gem_object_unpin(intel_fb->obj); 401 + mutex_unlock(&dev->struct_mutex); 402 + return ret; 403 + } 391 404 392 405 dspcntr = I915_READ(dspcntr_reg); 393 406 /* Mask out pixel format bits in case we change it */ ··· 410 419 break; 411 420 default: 412 421 DRM_ERROR("Unknown color depth\n"); 413 - return; 422 + i915_gem_object_unpin(intel_fb->obj); 423 + mutex_unlock(&dev->struct_mutex); 424 + return -EINVAL; 414 425 } 415 426 I915_WRITE(dspcntr_reg, dspcntr); 416 427 428 + Start = obj_priv->gtt_offset; 429 + Offset = y * crtc->fb->pitch + x * (crtc->fb->bits_per_pixel / 8); 430 + 417 431 DRM_DEBUG("Writing base %08lX %08lX %d %d\n", Start, Offset, x, y); 432 + I915_WRITE(dspstride, crtc->fb->pitch); 418 433 if (IS_I965G(dev)) { 419 434 I915_WRITE(dspbase, Offset); 420 435 I915_READ(dspbase); ··· 437 440 intel_fb = to_intel_framebuffer(old_fb); 438 441 i915_gem_object_unpin(intel_fb->obj); 439 442 } 443 + mutex_unlock(&dev->struct_mutex); 440 444 441 445 if (!dev->primary->master) 442 - return; 446 + return 0; 443 447 444 448 master_priv = dev->primary->master->driver_priv; 445 449 if (!master_priv->sarea_priv) 446 - return; 450 + return 0; 447 451 448 - switch (pipe) { 449 - case 0: 450 - master_priv->sarea_priv->pipeA_x = x; 451 - master_priv->sarea_priv->pipeA_y = y; 452 - break; 453 - case 1: 452 + if (pipe) { 454 453 master_priv->sarea_priv->pipeB_x = x; 455 454 master_priv->sarea_priv->pipeB_y = y; 456 - break; 457 - default: 458 - DRM_ERROR("Can't update pipe %d in SAREA\n", pipe); 459 - break; 455 + } else { 456 + master_priv->sarea_priv->pipeA_x = x; 457 + master_priv->sarea_priv->pipeA_y = y; 460 458 } 459 + 460 + return 0; 461 461 } 462 462 463 463 ··· 702 708 return 1; 703 709 } 704 710 705 - static void intel_crtc_mode_set(struct drm_crtc *crtc, 706 - struct drm_display_mode *mode, 707 - struct drm_display_mode *adjusted_mode, 708 - int x, int y, 709 - struct drm_framebuffer *old_fb) 711 + static int intel_crtc_mode_set(struct drm_crtc *crtc, 712 + struct drm_display_mode *mode, 713 + struct drm_display_mode *adjusted_mode, 714 + int x, int y, 715 + struct drm_framebuffer *old_fb) 710 716 { 711 717 struct drm_device *dev = crtc->dev; 712 718 struct drm_i915_private *dev_priv = dev->dev_private; ··· 726 732 int dspsize_reg = (pipe == 0) ? DSPASIZE : DSPBSIZE; 727 733 int dsppos_reg = (pipe == 0) ? DSPAPOS : DSPBPOS; 728 734 int pipesrc_reg = (pipe == 0) ? 
PIPEASRC : PIPEBSRC; 729 - int refclk; 735 + int refclk, num_outputs = 0; 730 736 intel_clock_t clock; 731 737 u32 dpll = 0, fp = 0, dspcntr, pipeconf; 732 738 bool ok, is_sdvo = false, is_dvo = false; 733 739 bool is_crt = false, is_lvds = false, is_tv = false; 734 740 struct drm_mode_config *mode_config = &dev->mode_config; 735 741 struct drm_connector *connector; 742 + int ret; 736 743 737 744 drm_vblank_pre_modeset(dev, pipe); 738 745 ··· 763 768 is_crt = true; 764 769 break; 765 770 } 771 + 772 + num_outputs++; 766 773 } 767 774 768 - if (IS_I9XX(dev)) { 775 + if (is_lvds && dev_priv->lvds_use_ssc && num_outputs < 2) { 776 + refclk = dev_priv->lvds_ssc_freq * 1000; 777 + DRM_DEBUG("using SSC reference clock of %d MHz\n", refclk / 1000); 778 + } else if (IS_I9XX(dev)) { 769 779 refclk = 96000; 770 780 } else { 771 781 refclk = 48000; ··· 779 779 ok = intel_find_best_PLL(crtc, adjusted_mode->clock, refclk, &clock); 780 780 if (!ok) { 781 781 DRM_ERROR("Couldn't find PLL settings for mode!\n"); 782 - return; 782 + return -EINVAL; 783 783 } 784 784 785 785 fp = clock.n << 16 | clock.m1 << 8 | clock.m2; ··· 829 829 } 830 830 } 831 831 832 - if (is_tv) { 832 + if (is_sdvo && is_tv) 833 + dpll |= PLL_REF_INPUT_TVCLKINBC; 834 + else if (is_tv) 833 835 /* XXX: just matching BIOS for now */ 834 - /* dpll |= PLL_REF_INPUT_TVCLKINBC; */ 836 + /* dpll |= PLL_REF_INPUT_TVCLKINBC; */ 835 837 dpll |= 3; 836 - } 838 + else if (is_lvds && dev_priv->lvds_use_ssc && num_outputs < 2) 839 + dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; 837 840 else 838 841 dpll |= PLL_REF_INPUT_DREFCLK; 839 842 ··· 953 950 I915_WRITE(dspcntr_reg, dspcntr); 954 951 955 952 /* Flush the plane changes */ 956 - intel_pipe_set_base(crtc, x, y, old_fb); 953 + ret = intel_pipe_set_base(crtc, x, y, old_fb); 954 + if (ret != 0) 955 + return ret; 957 956 958 957 drm_vblank_post_modeset(dev, pipe); 958 + 959 + return 0; 959 960 } 960 961 961 962 /** Loads the palette/gamma unit for the CRTC with the prepared values */ ··· 1030 1023 } 1031 1024 1032 1025 /* we only need to pin inside GTT if cursor is non-phy */ 1026 + mutex_lock(&dev->struct_mutex); 1033 1027 if (!dev_priv->cursor_needs_physical) { 1034 1028 ret = i915_gem_object_pin(bo, PAGE_SIZE); 1035 1029 if (ret) { 1036 1030 DRM_ERROR("failed to pin cursor bo\n"); 1037 - goto fail; 1031 + goto fail_locked; 1038 1032 } 1039 1033 addr = obj_priv->gtt_offset; 1040 1034 } else { 1041 1035 ret = i915_gem_attach_phys_object(dev, bo, (pipe == 0) ? 
I915_GEM_PHYS_CURSOR_0 : I915_GEM_PHYS_CURSOR_1); 1042 1036 if (ret) { 1043 1037 DRM_ERROR("failed to attach phys object\n"); 1044 - goto fail; 1038 + goto fail_locked; 1045 1039 } 1046 1040 addr = obj_priv->phys_obj->handle->busaddr; 1047 1041 } ··· 1062 1054 i915_gem_detach_phys_object(dev, intel_crtc->cursor_bo); 1063 1055 } else 1064 1056 i915_gem_object_unpin(intel_crtc->cursor_bo); 1065 - mutex_lock(&dev->struct_mutex); 1066 1057 drm_gem_object_unreference(intel_crtc->cursor_bo); 1067 - mutex_unlock(&dev->struct_mutex); 1068 1058 } 1059 + mutex_unlock(&dev->struct_mutex); 1069 1060 1070 1061 intel_crtc->cursor_addr = addr; 1071 1062 intel_crtc->cursor_bo = bo; ··· 1072 1065 return 0; 1073 1066 fail: 1074 1067 mutex_lock(&dev->struct_mutex); 1068 + fail_locked: 1075 1069 drm_gem_object_unreference(bo); 1076 1070 mutex_unlock(&dev->struct_mutex); 1077 1071 return ret; ··· 1300 1292 } 1301 1293 1302 1294 /* XXX: Handle the 100Mhz refclk */ 1303 - i9xx_clock(96000, &clock); 1295 + intel_clock(96000, &clock); 1304 1296 } else { 1305 1297 bool is_lvds = (pipe == 1) && (I915_READ(LVDS) & LVDS_PORT_EN); 1306 1298 ··· 1312 1304 if ((dpll & PLL_REF_INPUT_MASK) == 1313 1305 PLLB_REF_INPUT_SPREADSPECTRUMIN) { 1314 1306 /* XXX: might not be 66MHz */ 1315 - i8xx_clock(66000, &clock); 1307 + intel_clock(66000, &clock); 1316 1308 } else 1317 - i8xx_clock(48000, &clock); 1309 + intel_clock(48000, &clock); 1318 1310 } else { 1319 1311 if (dpll & PLL_P1_DIVIDE_BY_TWO) 1320 1312 clock.p1 = 2; ··· 1327 1319 else 1328 1320 clock.p2 = 2; 1329 1321 1330 - i8xx_clock(48000, &clock); 1322 + intel_clock(48000, &clock); 1331 1323 } 1332 1324 } 1333 1325 ··· 1606 1598 1607 1599 ret = intel_framebuffer_create(dev, mode_cmd, &fb, obj); 1608 1600 if (ret) { 1601 + mutex_lock(&dev->struct_mutex); 1609 1602 drm_gem_object_unreference(obj); 1603 + mutex_unlock(&dev->struct_mutex); 1610 1604 return NULL; 1611 1605 } 1612 1606
+5 -3
drivers/gpu/drm/i915/intel_fb.c
··· 473 473 ret = intel_framebuffer_create(dev, &mode_cmd, &fb, fbo); 474 474 if (ret) { 475 475 DRM_ERROR("failed to allocate fb.\n"); 476 - goto out_unref; 476 + goto out_unpin; 477 477 } 478 478 479 479 list_add(&fb->filp_head, &dev->mode_config.fb_kernel_list); ··· 484 484 info = framebuffer_alloc(sizeof(struct intelfb_par), device); 485 485 if (!info) { 486 486 ret = -ENOMEM; 487 - goto out_unref; 487 + goto out_unpin; 488 488 } 489 489 490 490 par = info->par; ··· 513 513 size); 514 514 if (!info->screen_base) { 515 515 ret = -ENOSPC; 516 - goto out_unref; 516 + goto out_unpin; 517 517 } 518 518 info->screen_size = size; 519 519 ··· 608 608 mutex_unlock(&dev->struct_mutex); 609 609 return 0; 610 610 611 + out_unpin: 612 + i915_gem_object_unpin(fbo); 611 613 out_unref: 612 614 drm_gem_object_unreference(fbo); 613 615 mutex_unlock(&dev->struct_mutex);
-2
drivers/gpu/drm/i915/intel_lvds.c
··· 481 481 if (dev_priv->panel_fixed_mode) { 482 482 dev_priv->panel_fixed_mode->type |= 483 483 DRM_MODE_TYPE_PREFERRED; 484 - drm_mode_probed_add(connector, 485 - dev_priv->panel_fixed_mode); 486 484 goto out; 487 485 } 488 486 }
+1 -1
drivers/gpu/drm/i915/intel_sdvo.c
··· 193 193 194 194 #define SDVO_CMD_NAME_ENTRY(cmd) {cmd, #cmd} 195 195 /** Mapping of command numbers to names, for debug output */ 196 - const static struct _sdvo_cmd_name { 196 + static const struct _sdvo_cmd_name { 197 197 u8 cmd; 198 198 char *name; 199 199 } sdvo_cmd_names[] = {
+1 -1
drivers/gpu/drm/i915/intel_tv.c
··· 411 411 * These values account for -1s required. 412 412 */ 413 413 414 - const static struct tv_mode tv_modes[] = { 414 + static const struct tv_mode tv_modes[] = { 415 415 { 416 416 .name = "NTSC-M", 417 417 .clock = 107520,
+15 -6
drivers/gpu/drm/radeon/radeon_cp.c
··· 557 557 } 558 558 559 559 static void radeon_cp_init_ring_buffer(struct drm_device * dev, 560 - drm_radeon_private_t * dev_priv) 560 + drm_radeon_private_t *dev_priv, 561 + struct drm_file *file_priv) 561 562 { 563 + struct drm_radeon_master_private *master_priv; 562 564 u32 ring_start, cur_read_ptr; 563 565 u32 tmp; 564 566 ··· 678 676 679 677 dev_priv->scratch[2] = 0; 680 678 RADEON_WRITE(RADEON_LAST_CLEAR_REG, 0); 679 + 680 + /* reset sarea copies of these */ 681 + master_priv = file_priv->master->driver_priv; 682 + if (master_priv->sarea_priv) { 683 + master_priv->sarea_priv->last_frame = 0; 684 + master_priv->sarea_priv->last_dispatch = 0; 685 + master_priv->sarea_priv->last_clear = 0; 686 + } 681 687 682 688 radeon_do_wait_for_idle(dev_priv); 683 689 ··· 1225 1215 } 1226 1216 1227 1217 radeon_cp_load_microcode(dev_priv); 1228 - radeon_cp_init_ring_buffer(dev, dev_priv); 1218 + radeon_cp_init_ring_buffer(dev, dev_priv, file_priv); 1229 1219 1230 1220 dev_priv->last_buf = 0; 1231 1221 ··· 1291 1281 * 1292 1282 * Charl P. Botha <http://cpbotha.net> 1293 1283 */ 1294 - static int radeon_do_resume_cp(struct drm_device * dev) 1284 + static int radeon_do_resume_cp(struct drm_device *dev, struct drm_file *file_priv) 1295 1285 { 1296 1286 drm_radeon_private_t *dev_priv = dev->dev_private; 1297 1287 ··· 1314 1304 } 1315 1305 1316 1306 radeon_cp_load_microcode(dev_priv); 1317 - radeon_cp_init_ring_buffer(dev, dev_priv); 1307 + radeon_cp_init_ring_buffer(dev, dev_priv, file_priv); 1318 1308 1319 1309 radeon_do_engine_reset(dev); 1320 1310 radeon_irq_set_state(dev, RADEON_SW_INT_ENABLE, 1); ··· 1489 1479 */ 1490 1480 int radeon_cp_resume(struct drm_device *dev, void *data, struct drm_file *file_priv) 1491 1481 { 1492 - 1493 - return radeon_do_resume_cp(dev); 1482 + return radeon_do_resume_cp(dev, file_priv); 1494 1483 } 1495 1484 1496 1485 int radeon_engine_reset(struct drm_device *dev, void *data, struct drm_file *file_priv)
+7 -6
drivers/hid/hid-core.c
··· 1300 1300 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS3_CONTROLLER) }, 1301 1301 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_VAIO_VGX_MOUSE) }, 1302 1302 { HID_USB_DEVICE(USB_VENDOR_ID_SUNPLUS, USB_DEVICE_ID_SUNPLUS_WDESKTOP) }, 1303 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb300) }, 1304 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb304) }, 1305 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb651) }, 1306 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) }, 1303 1307 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) }, 1308 + { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0005) }, 1309 + { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0030) }, 1304 1310 1305 1311 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, 0x030c) }, 1306 1312 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) }, ··· 1611 1605 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0002) }, 1612 1606 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0003) }, 1613 1607 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) }, 1608 + { HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) }, 1614 1609 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD) }, 1615 1610 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD2) }, 1616 1611 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD3) }, ··· 1619 1612 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD5) }, 1620 1613 { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY1) }, 1621 1614 { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY2) }, 1622 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb300) }, 1623 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb304) }, 1624 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb651) }, 1625 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) }, 1626 1615 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_LABPRO) }, 1627 1616 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_GOTEMP) }, 1628 1617 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_SKIP) }, ··· 1629 1626 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_1_PHIDGETSERVO_20) }, 1630 1627 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_8_8_4_IF_KIT) }, 1631 1628 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 1632 - { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0005) }, 1633 - { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0030) }, 1634 1629 { } 1635 1630 }; 1636 1631
+3
drivers/hid/hid-ids.h
··· 348 348 #define USB_VENDOR_ID_PLAYDOTCOM 0x0b43 349 349 #define USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII 0x0003 350 350 351 + #define USB_VENDOR_ID_POWERCOM 0x0d9f 352 + #define USB_DEVICE_ID_POWERCOM_UPS 0x0002 353 + 351 354 #define USB_VENDOR_ID_SAITEK 0x06a3 352 355 #define USB_DEVICE_ID_SAITEK_RUMBLEPAD 0xff17 353 356
+9 -5
drivers/hid/hidraw.c
··· 267 267 default: 268 268 { 269 269 struct hid_device *hid = dev->hid; 270 - if (_IOC_TYPE(cmd) != 'H' || _IOC_DIR(cmd) != _IOC_READ) 271 - return -EINVAL; 270 + if (_IOC_TYPE(cmd) != 'H' || _IOC_DIR(cmd) != _IOC_READ) { 271 + ret = -EINVAL; 272 + break; 273 + } 272 274 273 275 if (_IOC_NR(cmd) == _IOC_NR(HIDIOCGRAWNAME(0))) { 274 276 int len; ··· 279 277 len = strlen(hid->name) + 1; 280 278 if (len > _IOC_SIZE(cmd)) 281 279 len = _IOC_SIZE(cmd); 282 - return copy_to_user(user_arg, hid->name, len) ? 280 + ret = copy_to_user(user_arg, hid->name, len) ? 283 281 -EFAULT : len; 282 + break; 284 283 } 285 284 286 285 if (_IOC_NR(cmd) == _IOC_NR(HIDIOCGRAWPHYS(0))) { ··· 291 288 len = strlen(hid->phys) + 1; 292 289 if (len > _IOC_SIZE(cmd)) 293 290 len = _IOC_SIZE(cmd); 294 - return copy_to_user(user_arg, hid->phys, len) ? 291 + ret = copy_to_user(user_arg, hid->phys, len) ? 295 292 -EFAULT : len; 293 + break; 296 294 } 297 295 } 298 296 299 - ret = -ENOTTY; 297 + ret = -ENOTTY; 300 298 } 301 299 unlock_kernel(); 302 300 return ret;
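The hidraw hunk above converts early returns inside the BKL-protected ioctl into "ret = ...; break;" so that every path reaches the function's single exit and its unlock_kernel() call. A standalone sketch of the same single-exit locking discipline, with a pthread mutex standing in for the BKL (handle_cmd, big_lock and the command numbers are illustrative, not hidraw's):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every path falls through to the single unlock at the bottom; error
 * cases assign 'ret' and break instead of returning with the lock held. */
static long handle_cmd(int cmd, char *buf, size_t len)
{
        long ret;

        pthread_mutex_lock(&big_lock);
        switch (cmd) {
        case 1:
                ret = snprintf(buf, len, "example");    /* stand-in payload */
                break;
        case 2:
                ret = -22;      /* -EINVAL, but the lock is still released */
                break;
        default:
                ret = -25;      /* -ENOTTY */
                break;
        }
        pthread_mutex_unlock(&big_lock);
        return ret;
}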
+2 -2
drivers/hwmon/f71882fg.c
··· 1872 1872 1873 1873 devid = superio_inw(sioaddr, SIO_REG_MANID); 1874 1874 if (devid != SIO_FINTEK_ID) { 1875 - printk(KERN_INFO DRVNAME ": Not a Fintek device\n"); 1875 + pr_debug(DRVNAME ": Not a Fintek device\n"); 1876 1876 goto exit; 1877 1877 } 1878 1878 ··· 1932 1932 res.name = f71882fg_pdev->name; 1933 1933 err = acpi_check_resource_conflict(&res); 1934 1934 if (err) 1935 - return err; 1935 + goto exit_device_put; 1936 1936 1937 1937 err = platform_device_add_resources(f71882fg_pdev, &res, 1); 1938 1938 if (err) {
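The second f71882fg hunk redirects the error path to a label that drops the platform device reference taken earlier; returning straight out of the function leaked it (the vt1211 and w83627ehf hunks below make the same fix). A minimal userspace model of that reference-counted error path, under invented names (object, claim_resources, setup):

#include <errno.h>
#include <stdlib.h>

struct object {
        int refcount;
};

static void object_put(struct object *o)
{
        if (--o->refcount == 0)
                free(o);
}

static int claim_resources(void)
{
        return -EBUSY;          /* force the error path for the example */
}

/* Once the allocation has succeeded, every error exit must drop the
 * reference; returning directly, as the old code did, leaks the object. */
static int setup(struct object **out)
{
        struct object *o = calloc(1, sizeof(*o));
        int err;

        if (!o)
                return -ENOMEM;
        o->refcount = 1;
        err = claim_resources();
        if (err)
                goto exit_object_put;
        *out = o;
        return 0;

exit_object_put:
        object_put(o);
        return err;
}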
+81 -4
drivers/hwmon/hp_accel.c
··· 166 166 }, \
 167 167 .driver_data = &lis3lv02d_axis_##_axis \
 168 168 }
 169 +
 170 + #define AXIS_DMI_MATCH2(_ident, _class1, _name1, \
 171 + _class2, _name2, \
 172 + _axis) { \
 173 + .ident = _ident, \
 174 + .callback = lis3lv02d_dmi_matched, \
 175 + .matches = { \
 176 + DMI_MATCH(DMI_##_class1, _name1), \
 177 + DMI_MATCH(DMI_##_class2, _name2), \
 178 + }, \
 179 + .driver_data = &lis3lv02d_axis_##_axis \
 180 + }
 169 181 static struct dmi_system_id lis3lv02d_dmi_ids[] = {
 170 182 /* product names are truncated to match all kinds of a same model */
 171 183 AXIS_DMI_MATCH("NC64x0", "HP Compaq nc64", x_inverted),
 ··· 191 179 AXIS_DMI_MATCH("NC673x", "HP Compaq 673", xy_rotated_left_usd),
 192 180 AXIS_DMI_MATCH("NC651xx", "HP Compaq 651", xy_rotated_right),
 193 181 AXIS_DMI_MATCH("NC671xx", "HP Compaq 671", xy_swap_yz_inverted),
 182 + /* Intel-based HP Pavilion dv5 */
 183 + AXIS_DMI_MATCH2("HPDV5_I",
 184 + PRODUCT_NAME, "HP Pavilion dv5",
 185 + BOARD_NAME, "3603",
 186 + x_inverted),
 187 + /* AMD-based HP Pavilion dv5 */
 188 + AXIS_DMI_MATCH2("HPDV5_A",
 189 + PRODUCT_NAME, "HP Pavilion dv5",
 190 + BOARD_NAME, "3600",
 191 + y_inverted),
 194 192 { NULL, }
 195 193 /* Laptop models without axis info (yet):
 196 194 * "NC6910" "HP Compaq 6910"
 ··· 235 213 .set_brightness = hpled_set,
 236 214 };
 237 215
 216 + static acpi_status
 217 + lis3lv02d_get_resource(struct acpi_resource *resource, void *context)
 218 + {
 219 + if (resource->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) {
 220 + struct acpi_resource_extended_irq *irq;
 221 + u32 *device_irq = context;
 222 +
 223 + irq = &resource->data.extended_irq;
 224 + *device_irq = irq->interrupts[0];
 225 + }
 226 +
 227 + return AE_OK;
 228 + }
 229 +
 230 + static void lis3lv02d_enum_resources(struct acpi_device *device)
 231 + {
 232 + acpi_status status;
 233 +
 234 + status = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
 235 + lis3lv02d_get_resource, &adev.irq);
 236 + if (ACPI_FAILURE(status))
 237 + printk(KERN_DEBUG DRIVER_NAME ": Error getting resources\n");
 238 + }
 239 +
 240 + static s16 lis3lv02d_read_16(acpi_handle handle, int reg)
 241 + {
 242 + u8 lo, hi;
 243 +
 244 + adev.read(handle, reg - 1, &lo);
 245 + adev.read(handle, reg, &hi);
 246 + /* In "12 bit right justified" mode, bit 6, bit 7, bit 8 = bit 5 */
 247 + return (s16)((hi << 8) | lo);
 248 + }
 249 +
 250 + static s16 lis3lv02d_read_8(acpi_handle handle, int reg)
 251 + {
 252 + s8 lo;
 253 + adev.read(handle, reg, &lo);
 254 + return lo;
 255 + }
 256 +
 238 257 static int lis3lv02d_add(struct acpi_device *device)
 239 258 {
 240 - u8 val;
 241 259 int ret;
 242 260
 243 261 if (!device)
 ··· 291 229 strcpy(acpi_device_class(device), ACPI_MDPS_CLASS);
 292 230 device->driver_data = &adev;
 293 231
 294 - lis3lv02d_acpi_read(device->handle, WHO_AM_I, &val);
 295 - if ((val != LIS3LV02DL_ID) && (val != LIS302DL_ID)) {
 232 + lis3lv02d_acpi_read(device->handle, WHO_AM_I, &adev.whoami);
 233 + switch (adev.whoami) {
 234 + case LIS_DOUBLE_ID:
 235 + printk(KERN_INFO DRIVER_NAME ": 2-byte sensor found\n");
 236 + adev.read_data = lis3lv02d_read_16;
 237 + adev.mdps_max_val = 2048;
 238 + break;
 239 + case LIS_SINGLE_ID:
 240 + printk(KERN_INFO DRIVER_NAME ": 1-byte sensor found\n");
 241 + adev.read_data = lis3lv02d_read_8;
 242 + adev.mdps_max_val = 128;
 243 + break;
 244 + default:
 296 245 printk(KERN_ERR DRIVER_NAME
 297 - ": Accelerometer chip not LIS3LV02D{L,Q}\n");
 246 + ": unknown sensor type 0x%X\n", adev.whoami);
 247 + return -EINVAL;
 298 248 }
 299 249
 300 250 /* If possible use a "standard" axes order */
 ··· 320 246 ret = led_classdev_register(NULL, &hpled_led.led_classdev);
 321 247 if (ret)
 322 248 return ret;
 249 +
 250 + /* obtain IRQ number of our device from ACPI */
 251 + lis3lv02d_enum_resources(adev.device);
 323 252
 324 253 ret = lis3lv02d_init_device(&adev);
 325 254 if (ret) {
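With the OUT* register names now pointing at the high byte (see the lis3lv02d.h hunk below), lis3lv02d_read_16() above fetches reg - 1 and then reg, and relies on a cast to make the sign bit of the high byte count. A compilable illustration of that assembly step (assemble16 is an invented name):

#include <stdint.h>
#include <stdio.h>

/* Assemble a signed 16-bit sample from two 8-bit register reads; the
 * cast sign-extends, exactly as lis3lv02d_read_16() does. */
static int16_t assemble16(uint8_t lo, uint8_t hi)
{
        return (int16_t)((hi << 8) | lo);
}

int main(void)
{
        printf("%d\n", assemble16(0x00, 0x08));        /* 2048 */
        printf("%d\n", assemble16(0xff, 0xff));        /* -1, sign-extended */
        return 0;
}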
+160 -35
drivers/hwmon/lis3lv02d.c
··· 3 3 * 4 4 * Copyright (C) 2007-2008 Yan Burman 5 5 * Copyright (C) 2008 Eric Piel 6 - * Copyright (C) 2008 Pavel Machek 6 + * Copyright (C) 2008-2009 Pavel Machek 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 9 * it under the terms of the GNU General Public License as published by ··· 35 35 #include <linux/poll.h> 36 36 #include <linux/freezer.h> 37 37 #include <linux/uaccess.h> 38 + #include <linux/miscdevice.h> 38 39 #include <acpi/acpi_drivers.h> 39 40 #include <asm/atomic.h> 40 41 #include "lis3lv02d.h" ··· 53 52 * joystick. 54 53 */ 55 54 56 - /* Maximum value our axis may get for the input device (signed 12 bits) */ 57 - #define MDPS_MAX_VAL 2048 55 + struct acpi_lis3lv02d adev = { 56 + .misc_wait = __WAIT_QUEUE_HEAD_INITIALIZER(adev.misc_wait), 57 + }; 58 58 59 - struct acpi_lis3lv02d adev; 60 59 EXPORT_SYMBOL_GPL(adev); 61 60 62 61 static int lis3lv02d_add_fs(struct acpi_device *device); 63 - 64 - static s16 lis3lv02d_read_16(acpi_handle handle, int reg) 65 - { 66 - u8 lo, hi; 67 - 68 - adev.read(handle, reg, &lo); 69 - adev.read(handle, reg + 1, &hi); 70 - /* In "12 bit right justified" mode, bit 6, bit 7, bit 8 = bit 5 */ 71 - return (s16)((hi << 8) | lo); 72 - } 73 62 74 63 /** 75 64 * lis3lv02d_get_axis - For the given axis, give the value converted ··· 89 98 { 90 99 int position[3]; 91 100 92 - position[0] = lis3lv02d_read_16(handle, OUTX_L); 93 - position[1] = lis3lv02d_read_16(handle, OUTY_L); 94 - position[2] = lis3lv02d_read_16(handle, OUTZ_L); 101 + position[0] = adev.read_data(handle, OUTX); 102 + position[1] = adev.read_data(handle, OUTY); 103 + position[2] = adev.read_data(handle, OUTZ); 95 104 96 105 *x = lis3lv02d_get_axis(adev.ac.x, position); 97 106 *y = lis3lv02d_get_axis(adev.ac.y, position); ··· 101 110 void lis3lv02d_poweroff(acpi_handle handle) 102 111 { 103 112 adev.is_on = 0; 104 - /* disable X,Y,Z axis and power down */ 105 - adev.write(handle, CTRL_REG1, 0x00); 106 113 } 107 114 EXPORT_SYMBOL_GPL(lis3lv02d_poweroff); 108 115 109 116 void lis3lv02d_poweron(acpi_handle handle) 110 117 { 111 - u8 val; 112 - 113 118 adev.is_on = 1; 114 119 adev.init(handle); 115 - adev.write(handle, FF_WU_CFG, 0); 116 - /* 117 - * BDU: LSB and MSB values are not updated until both have been read. 118 - * So the value read will always be correct. 119 - * IEN: Interrupt for free-fall and DD, not for data-ready. 120 - */ 121 - adev.read(handle, CTRL_REG2, &val); 122 - val |= CTRL2_BDU | CTRL2_IEN; 123 - adev.write(handle, CTRL_REG2, val); 124 120 } 125 121 EXPORT_SYMBOL_GPL(lis3lv02d_poweron); 126 122 ··· 139 161 lis3lv02d_poweroff(dev->device->handle); 140 162 mutex_unlock(&dev->lock); 141 163 } 164 + 165 + static irqreturn_t lis302dl_interrupt(int irq, void *dummy) 166 + { 167 + /* 168 + * Be careful: on some HP laptops the bios force DD when on battery and 169 + * the lid is closed. This leads to interrupts as soon as a little move 170 + * is done. 
171 + */
 172 + atomic_inc(&adev.count);
 173 +
 174 + wake_up_interruptible(&adev.misc_wait);
 175 + kill_fasync(&adev.async_queue, SIGIO, POLL_IN);
 176 + return IRQ_HANDLED;
 177 + }
 178 +
 179 + static int lis3lv02d_misc_open(struct inode *inode, struct file *file)
 180 + {
 181 + int ret;
 182 +
 183 + if (test_and_set_bit(0, &adev.misc_opened))
 184 + return -EBUSY; /* already open */
 185 +
 186 + atomic_set(&adev.count, 0);
 187 +
 188 + /*
 189 + * The sensor can generate interrupts for free-fall and direction
 190 + * detection (distinguishable with FF_WU_SRC and DD_SRC) but to keep
 191 + * the things simple and _fast_ we activate it only for free-fall, so
 192 + * no need to read register (very slow with ACPI). For the same reason,
 193 + * we forbid shared interrupts.
 194 + *
 195 + * IRQF_TRIGGER_RISING seems pointless on HP laptops because the
 196 + * io-apic is not configurable (and generates a warning) but I keep it
 197 + * in case of support for other hardware.
 198 + */
 199 + ret = request_irq(adev.irq, lis302dl_interrupt, IRQF_TRIGGER_RISING,
 200 + DRIVER_NAME, &adev);
 201 +
 202 + if (ret) {
 203 + clear_bit(0, &adev.misc_opened);
 204 + printk(KERN_ERR DRIVER_NAME ": IRQ%d allocation failed\n", adev.irq);
 205 + return -EBUSY;
 206 + }
 207 + lis3lv02d_increase_use(&adev);
 208 + printk("lis3: registered interrupt %d\n", adev.irq);
 209 + return 0;
 210 + }
 211 +
 212 + static int lis3lv02d_misc_release(struct inode *inode, struct file *file)
 213 + {
 214 + fasync_helper(-1, file, 0, &adev.async_queue);
 215 + lis3lv02d_decrease_use(&adev);
 216 + free_irq(adev.irq, &adev);
 217 + clear_bit(0, &adev.misc_opened); /* release the device */
 218 + return 0;
 219 + }
 220 +
 221 + static ssize_t lis3lv02d_misc_read(struct file *file, char __user *buf,
 222 + size_t count, loff_t *pos)
 223 + {
 224 + DECLARE_WAITQUEUE(wait, current);
 225 + u32 data;
 226 + unsigned char byte_data;
 227 + ssize_t retval = 1;
 228 +
 229 + if (count < 1)
 230 + return -EINVAL;
 231 +
 232 + add_wait_queue(&adev.misc_wait, &wait);
 233 + while (true) {
 234 + set_current_state(TASK_INTERRUPTIBLE);
 235 + data = atomic_xchg(&adev.count, 0);
 236 + if (data)
 237 + break;
 238 +
 239 + if (file->f_flags & O_NONBLOCK) {
 240 + retval = -EAGAIN;
 241 + goto out;
 242 + }
 243 +
 244 + if (signal_pending(current)) {
 245 + retval = -ERESTARTSYS;
 246 + goto out;
 247 + }
 248 +
 249 + schedule();
 250 + }
 251 +
 252 + if (data < 255)
 253 + byte_data = data;
 254 + else
 255 + byte_data = 255;
 256 +
 257 + /* make sure we are not going into copy_to_user() with
 258 + * TASK_INTERRUPTIBLE state */
 259 + set_current_state(TASK_RUNNING);
 260 + if (copy_to_user(buf, &byte_data, sizeof(byte_data)))
 261 + retval = -EFAULT;
 262 +
 263 + out:
 264 + __set_current_state(TASK_RUNNING);
 265 + remove_wait_queue(&adev.misc_wait, &wait);
 266 +
 267 + return retval;
 268 + }
 269 +
 270 + static unsigned int lis3lv02d_misc_poll(struct file *file, poll_table *wait)
 271 + {
 272 + poll_wait(file, &adev.misc_wait, wait);
 273 + if (atomic_read(&adev.count))
 274 + return POLLIN | POLLRDNORM;
 275 + return 0;
 276 + }
 277 +
 278 + static int lis3lv02d_misc_fasync(int fd, struct file *file, int on)
 279 + {
 280 + return fasync_helper(fd, file, on, &adev.async_queue);
 281 + }
 282 +
 283 + static const struct file_operations lis3lv02d_misc_fops = {
 284 + .owner = THIS_MODULE,
 285 + .llseek = no_llseek,
 286 + .read = lis3lv02d_misc_read,
 287 + .open = lis3lv02d_misc_open,
 288 + .release = lis3lv02d_misc_release,
 289 + .poll = lis3lv02d_misc_poll,
 290 + .fasync = lis3lv02d_misc_fasync,
 291 + };
 292 +
 293 + static struct miscdevice lis3lv02d_misc_device = {
 294 + .minor = MISC_DYNAMIC_MINOR,
 295 + .name = "freefall",
 296 + .fops = &lis3lv02d_misc_fops,
 297 + };
 142 298
 143 299 /**
 144 300 * lis3lv02d_joystick_kthread - Kthread polling function
 ··· 315 203 lis3lv02d_decrease_use(&adev);
 316 204 }
 317 205
 318 -
 319 206 static inline void lis3lv02d_calibrate_joystick(void)
 320 207 {
 321 208 lis3lv02d_get_xyz(adev.device->handle, &adev.xcalib, &adev.ycalib, &adev.zcalib);
 ··· 342 231 adev.idev->close = lis3lv02d_joystick_close;
 343 232
 344 233 set_bit(EV_ABS, adev.idev->evbit);
 345 - input_set_abs_params(adev.idev, ABS_X, -MDPS_MAX_VAL, MDPS_MAX_VAL, 3, 3);
 346 - input_set_abs_params(adev.idev, ABS_Y, -MDPS_MAX_VAL, MDPS_MAX_VAL, 3, 3);
 347 - input_set_abs_params(adev.idev, ABS_Z, -MDPS_MAX_VAL, MDPS_MAX_VAL, 3, 3);
 234 + input_set_abs_params(adev.idev, ABS_X, -adev.mdps_max_val, adev.mdps_max_val, 3, 3);
 235 + input_set_abs_params(adev.idev, ABS_Y, -adev.mdps_max_val, adev.mdps_max_val, 3, 3);
 236 + input_set_abs_params(adev.idev, ABS_Z, -adev.mdps_max_val, adev.mdps_max_val, 3, 3);
 348 237
 349 238 err = input_register_device(adev.idev);
 350 239 if (err) {
 ··· 361 250 if (!adev.idev)
 362 251 return;
 363 252
 253 + misc_deregister(&lis3lv02d_misc_device);
 364 254 input_unregister_device(adev.idev);
 365 255 adev.idev = NULL;
 366 256 }
 ··· 380 268 if (lis3lv02d_joystick_enable())
 381 269 printk(KERN_ERR DRIVER_NAME ": joystick initialization failed\n");
 382 270
 271 + printk("lis3_init_device: irq %d\n", dev->irq);
 272 +
 273 + /* if we did not get an IRQ from ACPI - we have nothing more to do */
 274 + if (!dev->irq) {
 275 + printk(KERN_ERR DRIVER_NAME
 276 + ": No IRQ in ACPI. Disabling /dev/freefall\n");
 277 + goto out;
 278 + }
 279 +
 280 + printk("lis3: registering device\n");
 281 + if (misc_register(&lis3lv02d_misc_device))
 282 + printk(KERN_ERR DRIVER_NAME ": misc_register failed\n");
 283 + out:
 383 284 lis3lv02d_decrease_use(dev);
 384 285 return 0;
 385 286 }
 ··· 476 351 EXPORT_SYMBOL_GPL(lis3lv02d_remove_fs);
 477 352
 478 353 MODULE_DESCRIPTION("ST LIS3LV02Dx three-axis digital accelerometer driver");
 479 - MODULE_AUTHOR("Yan Burman and Eric Piel");
 354 + MODULE_AUTHOR("Yan Burman, Eric Piel, Pavel Machek");
 480 355 MODULE_LICENSE("GPL");
 481 356
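lis3lv02d_misc_read() above is the classic blocking-read shape: sleep until the interrupt-side counter is nonzero, atomically swap it to zero, clamp the result to one byte (255 meaning "255 or more events"). The same logic can be modelled in userspace with a condition variable standing in for the wait queue; event_post and event_read are invented names and this is an analogue of the pattern, not the kernel API:

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static unsigned int count;

void event_post(void)           /* the "interrupt" side */
{
        pthread_mutex_lock(&lock);
        count++;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
}

uint8_t event_read(void)        /* the blocking reader */
{
        unsigned int data;

        pthread_mutex_lock(&lock);
        while (count == 0)
                pthread_cond_wait(&cond, &lock);
        data = count;           /* grab-and-reset, like atomic_xchg() */
        count = 0;
        pthread_mutex_unlock(&lock);
        return data < 255 ? data : 255;
}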
+18 -3
drivers/hwmon/lis3lv02d.h
··· 22 22 /* 23 23 * The actual chip is STMicroelectronics LIS3LV02DL or LIS3LV02DQ that seems to 24 24 * be connected via SPI. There exists also several similar chips (such as LIS302DL or 25 - * LIS3L02DQ) but not in the HP laptops and they have slightly different registers. 25 + * LIS3L02DQ) and they have slightly different registers, but we can provide a 26 + * common interface for all of them. 26 27 * They can also be connected via I²C. 27 28 */ 28 29 29 - #define LIS3LV02DL_ID 0x3A /* Also the LIS3LV02DQ */ 30 - #define LIS302DL_ID 0x3B /* Also the LIS202DL! */ 30 + /* 2-byte registers */ 31 + #define LIS_DOUBLE_ID 0x3A /* LIS3LV02D[LQ] */ 32 + /* 1-byte registers */ 33 + #define LIS_SINGLE_ID 0x3B /* LIS[32]02DL and others */ 31 34 32 35 enum lis3lv02d_reg { 33 36 WHO_AM_I = 0x0F, ··· 47 44 STATUS_REG = 0x27, 48 45 OUTX_L = 0x28, 49 46 OUTX_H = 0x29, 47 + OUTX = 0x29, 50 48 OUTY_L = 0x2A, 51 49 OUTY_H = 0x2B, 50 + OUTY = 0x2B, 52 51 OUTZ_L = 0x2C, 53 52 OUTZ_H = 0x2D, 53 + OUTZ = 0x2D, 54 54 FF_WU_CFG = 0x30, 55 55 FF_WU_SRC = 0x31, 56 56 FF_WU_ACK = 0x32, ··· 165 159 acpi_status (*write) (acpi_handle handle, int reg, u8 val); 166 160 acpi_status (*read) (acpi_handle handle, int reg, u8 *ret); 167 161 162 + u8 whoami; /* 3Ah: 2-byte registries, 3Bh: 1-byte registries */ 163 + s16 (*read_data) (acpi_handle handle, int reg); 164 + int mdps_max_val; 165 + 168 166 struct input_dev *idev; /* input device */ 169 167 struct task_struct *kthread; /* kthread for input */ 170 168 struct mutex lock; ··· 180 170 unsigned char is_on; /* whether the device is on or off */ 181 171 unsigned char usage; /* usage counter */ 182 172 struct axis_conversion ac; /* hw -> logical axis */ 173 + 174 + u32 irq; /* IRQ number */ 175 + struct fasync_struct *async_queue; /* queue for the misc device */ 176 + wait_queue_head_t misc_wait; /* Wait queue for the misc device */ 177 + unsigned long misc_opened; /* bit0: whether the device is open */ 183 178 }; 184 179 185 180 int lis3lv02d_init_device(struct acpi_lis3lv02d *dev);
+1 -1
drivers/hwmon/vt1211.c
··· 1262 1262 res.name = pdev->name; 1263 1263 err = acpi_check_resource_conflict(&res); 1264 1264 if (err) 1265 - goto EXIT; 1265 + goto EXIT_DEV_PUT; 1266 1266 1267 1267 err = platform_device_add_resources(pdev, &res, 1); 1268 1268 if (err) {
+1 -1
drivers/hwmon/w83627ehf.c
··· 1548 1548 1549 1549 err = acpi_check_resource_conflict(&res); 1550 1550 if (err) 1551 - goto exit; 1551 + goto exit_device_put; 1552 1552 1553 1553 err = platform_device_add_resources(pdev, &res, 1); 1554 1554 if (err) {
+1 -1
drivers/md/dm-io.c
··· 328 328 struct dpages old_pages = *dp; 329 329 330 330 if (sync) 331 - rw |= (1 << BIO_RW_SYNC); 331 + rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 332 332 333 333 /* 334 334 * For multiple regions we need to be careful to rewind
+1 -1
drivers/md/dm-kcopyd.c
··· 344 344 { 345 345 int r; 346 346 struct dm_io_request io_req = { 347 - .bi_rw = job->rw | (1 << BIO_RW_SYNC), 347 + .bi_rw = job->rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG), 348 348 .mem.type = DM_IO_PAGE_LIST, 349 349 .mem.ptr.pl = job->pages, 350 350 .mem.offset = job->offset,
+2 -2
drivers/md/md.c
··· 474 474 * causes ENOTSUPP, we allocate a spare bio... 475 475 */ 476 476 struct bio *bio = bio_alloc(GFP_NOIO, 1); 477 - int rw = (1<<BIO_RW) | (1<<BIO_RW_SYNC); 477 + int rw = (1<<BIO_RW) | (1<<BIO_RW_SYNCIO) | (1<<BIO_RW_UNPLUG); 478 478 479 479 bio->bi_bdev = rdev->bdev; 480 480 bio->bi_sector = sector; ··· 531 531 struct completion event; 532 532 int ret; 533 533 534 - rw |= (1 << BIO_RW_SYNC); 534 + rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 535 535 536 536 bio->bi_bdev = bdev; 537 537 bio->bi_sector = sector;
+4 -6
drivers/media/common/tuners/tuner-simple.c
··· 318 318 u8 *config, u8 *cb) 319 319 { 320 320 struct tuner_simple_priv *priv = fe->tuner_priv; 321 - u8 tuneraddr; 322 321 int rc; 323 322 324 323 /* tv norm specific stuff for multi-norm tuners */ ··· 386 387 387 388 case TUNER_PHILIPS_TUV1236D: 388 389 { 390 + struct tuner_i2c_props i2c = priv->i2c_props; 389 391 /* 0x40 -> ATSC antenna input 1 */ 390 392 /* 0x48 -> ATSC antenna input 2 */ 391 393 /* 0x00 -> NTSC antenna input 1 */ ··· 398 398 buffer[1] = 0x04; 399 399 } 400 400 /* set to the correct mode (analog or digital) */ 401 - tuneraddr = priv->i2c_props.addr; 402 - priv->i2c_props.addr = 0x0a; 403 - rc = tuner_i2c_xfer_send(&priv->i2c_props, &buffer[0], 2); 401 + i2c.addr = 0x0a; 402 + rc = tuner_i2c_xfer_send(&i2c, &buffer[0], 2); 404 403 if (2 != rc) 405 404 tuner_warn("i2c i/o error: rc == %d " 406 405 "(should be 2)\n", rc); 407 - rc = tuner_i2c_xfer_send(&priv->i2c_props, &buffer[2], 2); 406 + rc = tuner_i2c_xfer_send(&i2c, &buffer[2], 2); 408 407 if (2 != rc) 409 408 tuner_warn("i2c i/o error: rc == %d " 410 409 "(should be 2)\n", rc); 411 - priv->i2c_props.addr = tuneraddr; 412 410 break; 413 411 } 414 412 }
+7 -9
drivers/media/dvb/dvb-core/dmxdev.c
··· 364 364 enum dmx_success success) 365 365 { 366 366 struct dmxdev_filter *dmxdevfilter = filter->priv; 367 - unsigned long flags; 368 367 int ret; 369 368 370 369 if (dmxdevfilter->buffer.error) { 371 370 wake_up(&dmxdevfilter->buffer.queue); 372 371 return 0; 373 372 } 374 - spin_lock_irqsave(&dmxdevfilter->dev->lock, flags); 373 + spin_lock(&dmxdevfilter->dev->lock); 375 374 if (dmxdevfilter->state != DMXDEV_STATE_GO) { 376 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 375 + spin_unlock(&dmxdevfilter->dev->lock); 377 376 return 0; 378 377 } 379 378 del_timer(&dmxdevfilter->timer); ··· 391 392 } 392 393 if (dmxdevfilter->params.sec.flags & DMX_ONESHOT) 393 394 dmxdevfilter->state = DMXDEV_STATE_DONE; 394 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 395 + spin_unlock(&dmxdevfilter->dev->lock); 395 396 wake_up(&dmxdevfilter->buffer.queue); 396 397 return 0; 397 398 } ··· 403 404 { 404 405 struct dmxdev_filter *dmxdevfilter = feed->priv; 405 406 struct dvb_ringbuffer *buffer; 406 - unsigned long flags; 407 407 int ret; 408 408 409 - spin_lock_irqsave(&dmxdevfilter->dev->lock, flags); 409 + spin_lock(&dmxdevfilter->dev->lock); 410 410 if (dmxdevfilter->params.pes.output == DMX_OUT_DECODER) { 411 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 411 + spin_unlock(&dmxdevfilter->dev->lock); 412 412 return 0; 413 413 } 414 414 ··· 417 419 else 418 420 buffer = &dmxdevfilter->dev->dvr_buffer; 419 421 if (buffer->error) { 420 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 422 + spin_unlock(&dmxdevfilter->dev->lock); 421 423 wake_up(&buffer->queue); 422 424 return 0; 423 425 } ··· 428 430 dvb_ringbuffer_flush(buffer); 429 431 buffer->error = ret; 430 432 } 431 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 433 + spin_unlock(&dmxdevfilter->dev->lock); 432 434 wake_up(&buffer->queue); 433 435 return 0; 434 436 }
+6 -10
drivers/media/dvb/dvb-core/dvb_demux.c
··· 399 399 void dvb_dmx_swfilter_packets(struct dvb_demux *demux, const u8 *buf, 400 400 size_t count) 401 401 { 402 - unsigned long flags; 403 - 404 - spin_lock_irqsave(&demux->lock, flags); 402 + spin_lock(&demux->lock); 405 403 406 404 while (count--) { 407 405 if (buf[0] == 0x47) ··· 407 409 buf += 188; 408 410 } 409 411 410 - spin_unlock_irqrestore(&demux->lock, flags); 412 + spin_unlock(&demux->lock); 411 413 } 412 414 413 415 EXPORT_SYMBOL(dvb_dmx_swfilter_packets); 414 416 415 417 void dvb_dmx_swfilter(struct dvb_demux *demux, const u8 *buf, size_t count) 416 418 { 417 - unsigned long flags; 418 419 int p = 0, i, j; 419 420 420 - spin_lock_irqsave(&demux->lock, flags); 421 + spin_lock(&demux->lock); 421 422 422 423 if (demux->tsbufp) { 423 424 i = demux->tsbufp; ··· 449 452 } 450 453 451 454 bailout: 452 - spin_unlock_irqrestore(&demux->lock, flags); 455 + spin_unlock(&demux->lock); 453 456 } 454 457 455 458 EXPORT_SYMBOL(dvb_dmx_swfilter); 456 459 457 460 void dvb_dmx_swfilter_204(struct dvb_demux *demux, const u8 *buf, size_t count) 458 461 { 459 - unsigned long flags; 460 462 int p = 0, i, j; 461 463 u8 tmppack[188]; 462 464 463 - spin_lock_irqsave(&demux->lock, flags); 465 + spin_lock(&demux->lock); 464 466 465 467 if (demux->tsbufp) { 466 468 i = demux->tsbufp; ··· 500 504 } 501 505 502 506 bailout: 503 - spin_unlock_irqrestore(&demux->lock, flags); 507 + spin_unlock(&demux->lock); 504 508 } 505 509 506 510 EXPORT_SYMBOL(dvb_dmx_swfilter_204);
+46 -9
drivers/media/radio/radio-si470x.c
··· 98 98 * - blacklisted KWorld radio in hid-core.c and hid-ids.h
 99 99 * 2008-12-03 Mark Lord <mlord@pobox.com>
 100 100 * - add support for DealExtreme USB Radio
 101 + * 2009-01-31 Bob Ross <pigiron@gmx.com>
 102 + * - correction of stereo detection/setting
 103 + * - correction of signal strength indicator scaling
 104 + * 2009-01-31 Rick Bronson <rick@efn.org>
 105 + * Tobias Lorenz <tobias.lorenz@gmx.net>
 106 + * - add LED status output
 101 107 *
 102 108 * ToDo:
 103 109 * - add firmware download/update support
 104 110 * - RDS support: interrupt mode, instead of polling
 105 - * - add LED status output (check if that's not already done in firmware)
 106 111 */
 107 112
 108 113
 ··· 887 882
 888 883
 889 884 /**************************************************************************
 885 + * General Driver Functions - LED_REPORT
 886 + **************************************************************************/
 887 +
 888 + /*
 889 + * si470x_set_led_state - sets the led state
 890 + */
 891 + static int si470x_set_led_state(struct si470x_device *radio,
 892 + unsigned char led_state)
 893 + {
 894 + unsigned char buf[LED_REPORT_SIZE];
 895 + int retval;
 896 +
 897 + buf[0] = LED_REPORT;
 898 + buf[1] = LED_COMMAND;
 899 + buf[2] = led_state;
 900 +
 901 + retval = si470x_set_report(radio, (void *) &buf, sizeof(buf));
 902 +
 903 + return (retval < 0) ? -EINVAL : 0;
 904 + }
 905 +
 906 +
 907 +
 908 + /**************************************************************************
 890 909 * RDS Driver Functions
 891 910 **************************************************************************/
 892 911
 ··· 1414 1385 };
 1415 1386
 1416 1387 /* stereo indicator == stereo (instead of mono) */
 1417 - if ((radio->registers[STATUSRSSI] & STATUSRSSI_ST) == 1)
 1418 - tuner->rxsubchans = V4L2_TUNER_SUB_MONO | V4L2_TUNER_SUB_STEREO;
 1419 - else
 1388 + if ((radio->registers[STATUSRSSI] & STATUSRSSI_ST) == 0)
 1420 1389 tuner->rxsubchans = V4L2_TUNER_SUB_MONO;
 1390 + else
 1391 + tuner->rxsubchans = V4L2_TUNER_SUB_MONO | V4L2_TUNER_SUB_STEREO;
 1421 1392
 1422 1393 /* mono/stereo selector */
 1423 - if ((radio->registers[POWERCFG] & POWERCFG_MONO) == 1)
 1424 - tuner->audmode = V4L2_TUNER_MODE_MONO;
 1425 - else
 1394 + if ((radio->registers[POWERCFG] & POWERCFG_MONO) == 0)
 1426 1395 tuner->audmode = V4L2_TUNER_MODE_STEREO;
 1396 + else
 1397 + tuner->audmode = V4L2_TUNER_MODE_MONO;
 1427 1398
 1428 1399 /* min is worst, max is best; signal:0..0xffff; rssi: 0..0xff */
 1429 - tuner->signal = (radio->registers[STATUSRSSI] & STATUSRSSI_RSSI)
 1430 - * 0x0101;
 1400 + /* measured in units of dbµV in 1 db increments (max at ~75 dbµV) */
 1401 + tuner->signal = (radio->registers[STATUSRSSI] & STATUSRSSI_RSSI);
 1402 + /* the ideal factor is 0xffff/75 = 873,8 */
 1403 + tuner->signal = (tuner->signal * 873) + (8 * tuner->signal / 10);
 1431 1404
 1432 1405 /* automatic frequency control: -1: freq to low, 1 freq to high */
 1433 1406 /* AFCRL does only indicate that freq. differs, not if too low/high */
 ··· 1663 1632 /* set initial frequency */
 1664 1633 si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */
 1665 1634
 1635 + /* set led to connect state */
 1636 + si470x_set_led_state(radio, BLINK_GREEN_LED);
 1637 +
 1666 1638 /* rds buffer allocation */
 1667 1639 radio->buf_size = rds_buf * 3;
 1668 1640 radio->buffer = kmalloc(radio->buf_size, GFP_KERNEL);
 ··· 1749 1715 cancel_delayed_work_sync(&radio->work);
 1750 1716 usb_set_intfdata(intf, NULL);
 1751 1717 if (radio->users == 0) {
 1718 + /* set led to disconnect state */
 1719 + si470x_set_led_state(radio, BLINK_ORANGE_LED);
 1720 +
 1752 1721 video_unregister_device(radio->videodev);
 1753 1722 kfree(radio->buffer);
 1754 1723 kfree(radio);
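The signal-strength hunk above replaces the old * 0x0101 scaling with an integer approximation of the ideal factor 0xffff / 75 = 873.8: the integer part contributes rssi * 873, and 8 * rssi / 10 supplies the remaining 0.8. A quick standalone check that the arithmetic hits full scale exactly:

#include <stdio.h>

/* rssi is roughly 0..75 dBuV; scale it onto the V4L2 0..0xffff range. */
static unsigned int scale_rssi(unsigned int rssi)
{
        return (rssi * 873) + (8 * rssi / 10);
}

int main(void)
{
        printf("0x%x\n", scale_rssi(75));       /* 0xffff at full scale */
        return 0;
}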
+5
drivers/media/video/gspca/gspca.c
··· 422 422 if (urb == NULL) 423 423 break; 424 424 425 + BUG_ON(!gspca_dev->dev); 425 426 gspca_dev->urb[i] = NULL; 426 427 if (!gspca_dev->present) 427 428 usb_kill_urb(urb); ··· 1951 1950 { 1952 1951 struct gspca_dev *gspca_dev = usb_get_intfdata(intf); 1953 1952 1953 + mutex_lock(&gspca_dev->usb_lock); 1954 1954 gspca_dev->present = 0; 1955 + mutex_unlock(&gspca_dev->usb_lock); 1955 1956 1957 + destroy_urbs(gspca_dev); 1958 + gspca_dev->dev = NULL; 1956 1959 usb_set_intfdata(intf, NULL); 1957 1960 1958 1961 /* release the device */
+13 -13
drivers/media/video/ivtv/ivtv-ioctl.c
··· 393 393 return 0; 394 394 } 395 395 396 - v4l2_subdev_call(itv->sd_video, video, s_fmt, fmt); 396 + v4l2_subdev_call(itv->sd_video, video, g_fmt, fmt); 397 397 vbifmt->service_set = ivtv_get_service_set(vbifmt); 398 398 return 0; 399 399 } ··· 1748 1748 break; 1749 1749 } 1750 1750 1751 + case IVTV_IOC_DMA_FRAME: 1752 + case VIDEO_GET_PTS: 1753 + case VIDEO_GET_FRAME_COUNT: 1754 + case VIDEO_GET_EVENT: 1755 + case VIDEO_PLAY: 1756 + case VIDEO_STOP: 1757 + case VIDEO_FREEZE: 1758 + case VIDEO_CONTINUE: 1759 + case VIDEO_COMMAND: 1760 + case VIDEO_TRY_COMMAND: 1761 + return ivtv_decoder_ioctls(file, cmd, (void *)arg); 1762 + 1751 1763 default: 1752 1764 return -EINVAL; 1753 1765 } ··· 1801 1789 itv->audio_bilingual_mode = arg; 1802 1790 ivtv_vapi(itv, CX2341X_DEC_SET_AUDIO_MODE, 2, itv->audio_bilingual_mode, itv->audio_stereo_mode); 1803 1791 return 0; 1804 - 1805 - case IVTV_IOC_DMA_FRAME: 1806 - case VIDEO_GET_PTS: 1807 - case VIDEO_GET_FRAME_COUNT: 1808 - case VIDEO_GET_EVENT: 1809 - case VIDEO_PLAY: 1810 - case VIDEO_STOP: 1811 - case VIDEO_FREEZE: 1812 - case VIDEO_CONTINUE: 1813 - case VIDEO_COMMAND: 1814 - case VIDEO_TRY_COMMAND: 1815 - return ivtv_decoder_ioctls(filp, cmd, (void *)arg); 1816 1792 1817 1793 default: 1818 1794 break;
+2 -2
drivers/mfd/htc-egpio.c
··· 286 286 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 287 287 if (!res) 288 288 goto fail; 289 - ei->base_addr = ioremap_nocache(res->start, res->end - res->start); 289 + ei->base_addr = ioremap_nocache(res->start, resource_size(res)); 290 290 if (!ei->base_addr) 291 291 goto fail; 292 292 pr_debug("EGPIO phys=%08x virt=%p\n", (u32)res->start, ei->base_addr); ··· 307 307 308 308 ei->nchips = pdata->num_chips; 309 309 ei->chip = kzalloc(sizeof(struct egpio_chip) * ei->nchips, GFP_KERNEL); 310 - if (!ei) { 310 + if (!ei->chip) { 311 311 ret = -ENOMEM; 312 312 goto fail; 313 313 }
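The first htc-egpio hunk swaps an open-coded res->end - res->start for resource_size(); kernel resources are inclusive [start, end] ranges, so the bare subtraction maps one byte too little. A self-contained model of the helper:

#include <stdio.h>

struct resource {
        unsigned long start, end;       /* end is inclusive */
};

/* Mirrors the kernel helper: an inclusive range spans end - start + 1
 * bytes, so open-coded "end - start" is short by one. */
static unsigned long resource_size(const struct resource *r)
{
        return r->end - r->start + 1;
}

int main(void)
{
        struct resource r = { 0x1000, 0x1fff }; /* a 4 KiB window */

        printf("broken: %lu\n", r.end - r.start);       /* 4095 */
        printf("fixed:  %lu\n", resource_size(&r));     /* 4096 */
        return 0;
}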
+1
drivers/mfd/pcf50633-core.c
··· 678 678 679 679 static struct i2c_device_id pcf50633_id_table[] = { 680 680 {"pcf50633", 0x73}, 681 + {/* end of list */} 681 682 }; 682 683 683 684 static struct i2c_driver pcf50633_driver = {
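The pcf50633 fix appends an empty terminator entry to the i2c_device_id table; that sentinel is what table walkers stop on, and without it they run past the end of the array. The pattern in isolation (id_entry and id_lookup are illustrative names, not the I2C core's):

#include <stddef.h>
#include <string.h>

struct id_entry {
        const char *name;
        int data;
};

/* Walkers stop at the first all-zero entry; without the terminator
 * the loop below would read off the end of the array. */
static const struct id_entry id_table[] = {
        { "pcf50633", 0x73 },
        { /* end of list */ }
};

static const struct id_entry *id_lookup(const char *name)
{
        const struct id_entry *e;

        for (e = id_table; e->name; e++)
                if (strcmp(e->name, name) == 0)
                        return e;
        return NULL;
}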
+13 -13
drivers/mfd/sm501.c
··· 1050 1050 return gpiochip_add(gchip); 1051 1051 } 1052 1052 1053 - static int sm501_register_gpio(struct sm501_devdata *sm) 1053 + static int __devinit sm501_register_gpio(struct sm501_devdata *sm) 1054 1054 { 1055 1055 struct sm501_gpio *gpio = &sm->gpio; 1056 1056 resource_size_t iobase = sm->io_res->start + SM501_GPIO; ··· 1321 1321 * Common init code for an SM501 1322 1322 */ 1323 1323 1324 - static int sm501_init_dev(struct sm501_devdata *sm) 1324 + static int __devinit sm501_init_dev(struct sm501_devdata *sm) 1325 1325 { 1326 1326 struct sm501_initdata *idata; 1327 1327 struct sm501_platdata *pdata; ··· 1397 1397 return 0; 1398 1398 } 1399 1399 1400 - static int sm501_plat_probe(struct platform_device *dev) 1400 + static int __devinit sm501_plat_probe(struct platform_device *dev) 1401 1401 { 1402 1402 struct sm501_devdata *sm; 1403 1403 int ret; ··· 1586 1586 .gpio_base = -1, 1587 1587 }; 1588 1588 1589 - static int sm501_pci_probe(struct pci_dev *dev, 1590 - const struct pci_device_id *id) 1589 + static int __devinit sm501_pci_probe(struct pci_dev *dev, 1590 + const struct pci_device_id *id) 1591 1591 { 1592 1592 struct sm501_devdata *sm; 1593 1593 int err; ··· 1693 1693 sm501_gpio_remove(sm); 1694 1694 } 1695 1695 1696 - static void sm501_pci_remove(struct pci_dev *dev) 1696 + static void __devexit sm501_pci_remove(struct pci_dev *dev) 1697 1697 { 1698 1698 struct sm501_devdata *sm = pci_get_drvdata(dev); 1699 1699 ··· 1727 1727 1728 1728 MODULE_DEVICE_TABLE(pci, sm501_pci_tbl); 1729 1729 1730 - static struct pci_driver sm501_pci_drv = { 1730 + static struct pci_driver sm501_pci_driver = { 1731 1731 .name = "sm501", 1732 1732 .id_table = sm501_pci_tbl, 1733 1733 .probe = sm501_pci_probe, 1734 - .remove = sm501_pci_remove, 1734 + .remove = __devexit_p(sm501_pci_remove), 1735 1735 }; 1736 1736 1737 1737 MODULE_ALIAS("platform:sm501"); 1738 1738 1739 - static struct platform_driver sm501_plat_drv = { 1739 + static struct platform_driver sm501_plat_driver = { 1740 1740 .driver = { 1741 1741 .name = "sm501", 1742 1742 .owner = THIS_MODULE, ··· 1749 1749 1750 1750 static int __init sm501_base_init(void) 1751 1751 { 1752 - platform_driver_register(&sm501_plat_drv); 1753 - return pci_register_driver(&sm501_pci_drv); 1752 + platform_driver_register(&sm501_plat_driver); 1753 + return pci_register_driver(&sm501_pci_driver); 1754 1754 } 1755 1755 1756 1756 static void __exit sm501_base_exit(void) 1757 1757 { 1758 - platform_driver_unregister(&sm501_plat_drv); 1759 - pci_unregister_driver(&sm501_pci_drv); 1758 + platform_driver_unregister(&sm501_plat_driver); 1759 + pci_unregister_driver(&sm501_pci_driver); 1760 1760 } 1761 1761 1762 1762 module_init(sm501_base_init);
+1 -1
drivers/mfd/twl4030-core.c
··· 38 38 #include <linux/i2c.h> 39 39 #include <linux/i2c/twl4030.h> 40 40 41 - #ifdef CONFIG_ARM 41 + #if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) 42 42 #include <mach/cpu.h> 43 43 #endif 44 44
+35 -13
drivers/mfd/wm8350-core.c
··· 1111 1111 do { 1112 1112 schedule_timeout_interruptible(1); 1113 1113 reg = wm8350_reg_read(wm8350, WM8350_DIGITISER_CONTROL_1); 1114 - } while (tries-- && (reg & WM8350_AUXADC_POLL)); 1114 + } while (--tries && (reg & WM8350_AUXADC_POLL)); 1115 1115 1116 1116 if (!tries) 1117 1117 dev_err(wm8350->dev, "adc chn %d read timeout\n", channel); ··· 1297 1297 int wm8350_device_init(struct wm8350 *wm8350, int irq, 1298 1298 struct wm8350_platform_data *pdata) 1299 1299 { 1300 - int ret = -EINVAL; 1300 + int ret; 1301 1301 u16 id1, id2, mask_rev; 1302 1302 u16 cust_id, mode, chip_rev; 1303 1303 1304 1304 /* get WM8350 revision and config mode */ 1305 - wm8350->read_dev(wm8350, WM8350_RESET_ID, sizeof(id1), &id1); 1306 - wm8350->read_dev(wm8350, WM8350_ID, sizeof(id2), &id2); 1307 - wm8350->read_dev(wm8350, WM8350_REVISION, sizeof(mask_rev), &mask_rev); 1305 + ret = wm8350->read_dev(wm8350, WM8350_RESET_ID, sizeof(id1), &id1); 1306 + if (ret != 0) { 1307 + dev_err(wm8350->dev, "Failed to read ID: %d\n", ret); 1308 + goto err; 1309 + } 1310 + 1311 + ret = wm8350->read_dev(wm8350, WM8350_ID, sizeof(id2), &id2); 1312 + if (ret != 0) { 1313 + dev_err(wm8350->dev, "Failed to read ID: %d\n", ret); 1314 + goto err; 1315 + } 1316 + 1317 + ret = wm8350->read_dev(wm8350, WM8350_REVISION, sizeof(mask_rev), 1318 + &mask_rev); 1319 + if (ret != 0) { 1320 + dev_err(wm8350->dev, "Failed to read revision: %d\n", ret); 1321 + goto err; 1322 + } 1308 1323 1309 1324 id1 = be16_to_cpu(id1); 1310 1325 id2 = be16_to_cpu(id2); ··· 1419 1404 return ret; 1420 1405 } 1421 1406 1422 - if (pdata && pdata->init) { 1423 - ret = pdata->init(wm8350); 1424 - if (ret != 0) { 1425 - dev_err(wm8350->dev, "Platform init() failed: %d\n", 1426 - ret); 1427 - goto err; 1428 - } 1429 - } 1407 + wm8350_reg_write(wm8350, WM8350_SYSTEM_INTERRUPTS_MASK, 0xFFFF); 1408 + wm8350_reg_write(wm8350, WM8350_INT_STATUS_1_MASK, 0xFFFF); 1409 + wm8350_reg_write(wm8350, WM8350_INT_STATUS_2_MASK, 0xFFFF); 1410 + wm8350_reg_write(wm8350, WM8350_UNDER_VOLTAGE_INT_STATUS_MASK, 0xFFFF); 1411 + wm8350_reg_write(wm8350, WM8350_GPIO_INT_STATUS_MASK, 0xFFFF); 1412 + wm8350_reg_write(wm8350, WM8350_COMPARATOR_INT_STATUS_MASK, 0xFFFF); 1430 1413 1431 1414 mutex_init(&wm8350->auxadc_mutex); 1432 1415 mutex_init(&wm8350->irq_mutex); ··· 1442 1429 goto err; 1443 1430 } 1444 1431 wm8350->chip_irq = irq; 1432 + 1433 + if (pdata && pdata->init) { 1434 + ret = pdata->init(wm8350); 1435 + if (ret != 0) { 1436 + dev_err(wm8350->dev, "Platform init() failed: %d\n", 1437 + ret); 1438 + goto err; 1439 + } 1440 + } 1445 1441 1446 1442 wm8350_reg_write(wm8350, WM8350_SYSTEM_INTERRUPTS_MASK, 0x0); 1447 1443
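The wm8350 polling fix swaps "tries--" for "--tries": with the post-decrement, a timed-out loop leaves tries at -1, so the "if (!tries)" check after it can never report the timeout. A compilable demonstration of the corrected form:

#include <stdio.h>

int main(void)
{
        int tries = 3;
        int busy = 1;   /* pretend the ADC never finishes */

        do {
                /* poll the hardware here */
        } while (--tries && busy);      /* with tries--, this ends at -1 */

        if (!tries)
                printf("timeout detected, tries == %d\n", tries);
        return 0;
}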
+1 -1
drivers/mfd/wm8350-regmap.c
··· 3188 3188 { 0x7CFF, 0x0C00, 0x7FFF }, /* R1 - ID */ 3189 3189 { 0x0000, 0x0000, 0x0000 }, /* R2 */ 3190 3190 { 0xBE3B, 0xBE3B, 0x8000 }, /* R3 - System Control 1 */ 3191 - { 0xFCF7, 0xFCF7, 0xF800 }, /* R4 - System Control 2 */ 3191 + { 0xFEF7, 0xFEF7, 0xF800 }, /* R4 - System Control 2 */ 3192 3192 { 0x80FF, 0x80FF, 0x8000 }, /* R5 - System Hibernate */ 3193 3193 { 0xFB0E, 0xFB0E, 0x0000 }, /* R6 - Interface Control */ 3194 3194 { 0x0000, 0x0000, 0x0000 }, /* R7 */
+1 -1
drivers/mmc/card/block.c
··· 584 584 if (err) 585 585 goto out; 586 586 587 - string_get_size(get_capacity(md->disk) << 9, STRING_UNITS_2, 587 + string_get_size((u64)get_capacity(md->disk) << 9, STRING_UNITS_2, 588 588 cap_str, sizeof(cap_str)); 589 589 printk(KERN_INFO "%s: %s %s %s %s\n", 590 590 md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
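The mmc_block fix widens the capacity to u64 before the << 9 sector-to-byte shift; with a 32-bit sector_t the shift is done in 32 bits and wraps for cards of 4 GiB and up. A standalone demonstration (the card size is an arbitrary example):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        /* 0x2000000 512-byte sectors = a 16 GiB card */
        uint32_t capacity = 0x2000000;

        /* widen first -- shifting first truncates in 32-bit arithmetic */
        uint64_t bytes_ok  = (uint64_t)capacity << 9;
        uint64_t bytes_bad = capacity << 9;     /* wraps to 0 here */

        printf("ok:  %" PRIu64 "\n", bytes_ok); /* 17179869184 */
        printf("bad: %" PRIu64 "\n", bytes_bad);
        return 0;
}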
+1 -1
drivers/mmc/card/mmc_test.c
··· 494 494 495 495 sg_init_one(&sg, test->buffer, 512); 496 496 497 - ret = mmc_test_simple_transfer(test, &sg, 1, 0, 1, 512, 1); 497 + ret = mmc_test_simple_transfer(test, &sg, 1, 0, 1, 512, 0); 498 498 if (ret) 499 499 return ret; 500 500
+3 -2
drivers/mmc/host/atmel-mci.c
··· 1548 1548 { 1549 1549 struct dw_dma_slave *dws = slave; 1550 1550 1551 - if (dws->dma_dev == chan->device->dev) 1551 + if (dws->dma_dev == chan->device->dev) { 1552 + chan->private = dws; 1552 1553 return true; 1553 - else 1554 + } else 1554 1555 return false; 1555 1556 } 1556 1557 #endif
+68 -30
drivers/mmc/host/omap_hsmmc.c
··· 55 55 #define VS30 (1 << 25)
 56 56 #define SDVS18 (0x5 << 9)
 57 57 #define SDVS30 (0x6 << 9)
 58 + #define SDVS33 (0x7 << 9)
 58 59 #define SDVSCLR 0xFFFFF1FF
 59 60 #define SDVSDET 0x00000400
 60 61 #define AUTOIDLE 0x1
 ··· 376 375 }
 377 376 #endif /* CONFIG_MMC_DEBUG */
 378 377
 378 + /*
 379 + * MMC controller internal state machines reset
 380 + *
 381 + * Used to reset command or data internal state machines, using respectively
 382 + * SRC or SRD bit of SYSCTL register
 383 + * Can be called from interrupt context
 384 + */
 385 + static inline void mmc_omap_reset_controller_fsm(struct mmc_omap_host *host,
 386 + unsigned long bit)
 387 + {
 388 + unsigned long i = 0;
 389 + unsigned long limit = (loops_per_jiffy *
 390 + msecs_to_jiffies(MMC_TIMEOUT_MS));
 391 +
 392 + OMAP_HSMMC_WRITE(host->base, SYSCTL,
 393 + OMAP_HSMMC_READ(host->base, SYSCTL) | bit);
 394 +
 395 + while ((OMAP_HSMMC_READ(host->base, SYSCTL) & bit) &&
 396 + (i++ < limit))
 397 + cpu_relax();
 398 +
 399 + if (OMAP_HSMMC_READ(host->base, SYSCTL) & bit)
 400 + dev_err(mmc_dev(host->mmc),
 401 + "Timeout waiting on controller reset in %s\n",
 402 + __func__);
 403 + }
 379 404
 380 405 /*
 381 406 * MMC controller IRQ handler
 ··· 430 403 (status & CMD_CRC)) {
 431 404 if (host->cmd) {
 432 405 if (status & CMD_TIMEOUT) {
 433 - OMAP_HSMMC_WRITE(host->base, SYSCTL,
 434 - OMAP_HSMMC_READ(host->base,
 435 - SYSCTL) | SRC);
 436 - while (OMAP_HSMMC_READ(host->base,
 437 - SYSCTL) & SRC)
 438 - ;
 439 -
 406 + mmc_omap_reset_controller_fsm(host, SRC);
 440 407 host->cmd->error = -ETIMEDOUT;
 441 408 } else {
 442 409 host->cmd->error = -EILSEQ;
 443 410 }
 444 411 end_cmd = 1;
 445 412 }
 446 - if (host->data)
 413 + if (host->data) {
 447 414 mmc_dma_cleanup(host);
 415 + mmc_omap_reset_controller_fsm(host, SRD);
 416 + }
 448 417 }
 449 418 if ((status & DATA_TIMEOUT) ||
 450 419 (status & DATA_CRC)) {
 ··· 449 426 mmc_dma_cleanup(host);
 450 427 else
 451 428 host->data->error = -EILSEQ;
 452 - OMAP_HSMMC_WRITE(host->base, SYSCTL,
 453 - OMAP_HSMMC_READ(host->base,
 454 - SYSCTL) | SRD);
 455 - while (OMAP_HSMMC_READ(host->base,
 456 - SYSCTL) & SRD)
 457 - ;
 429 + mmc_omap_reset_controller_fsm(host, SRD);
 458 430 end_trans = 1;
 459 431 }
 460 432 }
 ··· 474 456 }
 475 457
 476 458 /*
 477 - * Switch MMC operating voltage
 459 + * Switch MMC interface voltage ... only relevant for MMC1.
 460 + *
 461 + * MMC2 and MMC3 use fixed 1.8V levels, and maybe a transceiver.
 462 + * The MMC2 transceiver controls are used instead of DAT4..DAT7.
 463 + * Some chips, like eMMC ones, use internal transceivers.
 478 464 */
 479 465 static int omap_mmc_switch_opcond(struct mmc_omap_host *host, int vdd)
 480 466 {
 481 467 u32 reg_val = 0;
 482 468 int ret;
 469 +
 470 + if (host->id != OMAP_MMC1_DEVID)
 471 + return 0;
 483 472
 484 473 /* Disable the clocks */
 485 474 clk_disable(host->fclk);
 ··· 510 485 OMAP_HSMMC_WRITE(host->base, HCTL,
 511 486 OMAP_HSMMC_READ(host->base, HCTL) & SDVSCLR);
 512 487 reg_val = OMAP_HSMMC_READ(host->base, HCTL);
 488 +
 513 489 /*
 514 490 * If a MMC dual voltage card is detected, the set_ios fn calls
 515 491 * this fn with VDD bit set for 1.8V. Upon card removal from the
 516 492 * slot, omap_mmc_set_ios sets the VDD back to 3V on MMC_POWER_OFF.
 517 493 *
 518 - * Only MMC1 supports 3.0V. MMC2 will not function if SDVS30 is
 519 - * set in HCTL.
 494 + * Cope with a bit of slop in the range ... per data sheets:
 495 + * - "1.8V" for vdds_mmc1/vdds_mmc1a can be up to 2.45V max,
 496 + * but recommended values are 1.71V to 1.89V
 497 + * - "3.0V" for vdds_mmc1/vdds_mmc1a can be up to 3.5V max,
 498 + * but recommended values are 2.7V to 3.3V
 499 + *
 500 + * Board setup code shouldn't permit anything very out-of-range.
 501 + * TWL4030-family VMMC1 and VSIM regulators are fine (avoiding the
 502 + * middle range) but VSIM can't power DAT4..DAT7 at more than 3V.
 520 503 */
 521 - if (host->id == OMAP_MMC1_DEVID && (((1 << vdd) == MMC_VDD_32_33) ||
 522 - ((1 << vdd) == MMC_VDD_33_34)))
 523 - reg_val |= SDVS30;
 524 - if ((1 << vdd) == MMC_VDD_165_195)
 504 + if ((1 << vdd) <= MMC_VDD_23_24)
 525 505 reg_val |= SDVS18;
 506 + else
 507 + reg_val |= SDVS30;
 526 508
 527 509 OMAP_HSMMC_WRITE(host->base, HCTL, reg_val);
 528 510
 ··· 549 517 {
 550 518 struct mmc_omap_host *host = container_of(work, struct mmc_omap_host,
 551 519 mmc_carddetect_work);
 520 + struct omap_mmc_slot_data *slot = &mmc_slot(host);
 521 +
 522 + host->carddetect = slot->card_detect(slot->card_detect_irq);
 552 523
 553 524 sysfs_notify(&host->mmc->class_dev.kobj, NULL, "cover_switch");
 554 525 if (host->carddetect) {
 555 526 mmc_detect_change(host->mmc, (HZ * 200) / 1000);
 556 527 } else {
 557 - OMAP_HSMMC_WRITE(host->base, SYSCTL,
 558 - OMAP_HSMMC_READ(host->base, SYSCTL) | SRD);
 559 - while (OMAP_HSMMC_READ(host->base, SYSCTL) & SRD)
 560 - ;
 561 -
 528 + mmc_omap_reset_controller_fsm(host, SRD);
 562 529 mmc_detect_change(host->mmc, (HZ * 50) / 1000);
 563 530 }
 564 531 }
 ··· 569 538 {
 570 539 struct mmc_omap_host *host = (struct mmc_omap_host *)dev_id;
 571 540
 572 - host->carddetect = mmc_slot(host).card_detect(irq);
 573 541 schedule_work(&host->mmc_carddetect_work);
 574 542
 575 543 return IRQ_HANDLED;
 ··· 787 757 case MMC_POWER_OFF:
 788 758 mmc_slot(host).set_power(host->dev, host->slot_id, 0, 0);
 789 759 /*
 790 - * Reset bus voltage to 3V if it got set to 1.8V earlier.
 760 + * Reset interface voltage to 3V if it's 1.8V now;
 761 + * only relevant on MMC-1, the others always use 1.8V.
 762 + *
 791 763 * REVISIT: If we are able to detect cards after unplugging
 792 764 * a 1.8V card, this code should not be needed.
 793 765 */
 766 + if (host->id != OMAP_MMC1_DEVID)
 767 + break;
 794 768 if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) {
 795 769 int vdd = fls(host->mmc->ocr_avail) - 1;
 796 770 if (omap_mmc_switch_opcond(host, vdd) != 0)
 ··· 818 784 }
 819 785
 820 786 if (host->id == OMAP_MMC1_DEVID) {
 821 - /* Only MMC1 can operate at 3V/1.8V */
 787 + /* Only MMC1 can interface at 3V without some flavor
 788 + * of external transceiver; but they all handle 1.8V.
 789 + */
 822 790 if ((OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET) &&
 823 791 (ios->vdd == DUAL_VOLT_OCR_BIT)) {
 824 792 /*
 ··· 1173 1137 " level suspend\n");
 1174 1138 }
 1175 1139
 1176 - if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) {
 1140 + if (host->id == OMAP_MMC1_DEVID
 1141 + && !(OMAP_HSMMC_READ(host->base, HCTL)
 1142 + & SDVSDET)) {
 1177 1143 OMAP_HSMMC_WRITE(host->base, HCTL,
 1178 1144 OMAP_HSMMC_READ(host->base, HCTL)
 1179 1145 & SDVSCLR);
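mmc_omap_reset_controller_fsm() above bounds what used to be an unbounded busy-wait on the SRC/SRD bits. The shape of such a bounded poll, reduced to standalone C (read_status and LIMIT are stand-ins for the register read and the loops_per_jiffy-derived budget):

#include <stdio.h>

#define LIMIT 100000UL

static unsigned int read_status(void)
{
        return 1;       /* simulate a bit that never clears */
}

int main(void)
{
        unsigned long i = 0;

        /* spin on the status bit, but give up after a fixed budget
         * instead of hanging forever (cpu_relax() in the kernel loop) */
        while ((read_status() & 1) && (i++ < LIMIT))
                ;

        if (read_status() & 1)
                printf("timeout waiting on controller reset\n");
        return 0;
}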
+1 -1
drivers/mmc/host/s3cmci.c
··· 329 329 330 330 to_ptr = host->base + host->sdidata; 331 331 332 - while ((fifo = fifo_free(host))) { 332 + while ((fifo = fifo_free(host)) > 3) { 333 333 if (!host->pio_bytes) { 334 334 res = get_data_buffer(host, &host->pio_bytes, 335 335 &host->pio_ptr);
+1 -2
drivers/mmc/host/sdhci-pci.c
··· 144 144 SDHCI_QUIRK_32BIT_DMA_SIZE | 145 145 SDHCI_QUIRK_32BIT_ADMA_SIZE | 146 146 SDHCI_QUIRK_RESET_AFTER_REQUEST | 147 - SDHCI_QUIRK_BROKEN_SMALL_PIO | 148 - SDHCI_QUIRK_FORCE_HIGHSPEED; 147 + SDHCI_QUIRK_BROKEN_SMALL_PIO; 149 148 } 150 149 151 150 /*
+4 -3
drivers/mmc/host/sdhci.c
··· 1636 1636 mmc->f_max = host->max_clk; 1637 1637 mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ; 1638 1638 1639 - if ((caps & SDHCI_CAN_DO_HISPD) || 1640 - (host->quirks & SDHCI_QUIRK_FORCE_HIGHSPEED)) 1639 + if (caps & SDHCI_CAN_DO_HISPD) 1641 1640 mmc->caps |= MMC_CAP_SD_HIGHSPEED; 1642 1641 1643 1642 mmc->ocr_avail = 0; ··· 1722 1723 #endif 1723 1724 1724 1725 #ifdef SDHCI_USE_LEDS_CLASS 1725 - host->led.name = mmc_hostname(mmc); 1726 + snprintf(host->led_name, sizeof(host->led_name), 1727 + "%s::", mmc_hostname(mmc)); 1728 + host->led.name = host->led_name; 1726 1729 host->led.brightness = LED_OFF; 1727 1730 host->led.default_trigger = mmc_hostname(mmc); 1728 1731 host->led.brightness_set = sdhci_led_control;
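The sdhci LED hunk formats the name into a buffer owned by each host ("mmc0::", which follows the LED class devicename:colour:function convention with colour and function left empty) instead of pointing the LED at the hostname string itself. The per-instance-storage pattern, reduced to a sketch (host_setup_led is an invented name):

#include <stdio.h>

struct led_classdev {
        const char *name;
};

struct host {
        char led_name[32];      /* storage owned by this instance */
        struct led_classdev led;
};

/* Format the LED name into memory the host owns, so every registered
 * host exposes a distinct name that outlives this function. */
static void host_setup_led(struct host *h, const char *hostname)
{
        snprintf(h->led_name, sizeof(h->led_name), "%s::", hostname);
        h->led.name = h->led_name;
}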
+1 -2
drivers/mmc/host/sdhci.h
··· 208 208 #define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12) 209 209 /* Controller has an issue with buffer bits for small transfers */ 210 210 #define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13) 211 - /* Controller supports high speed but doesn't have the caps bit set */ 212 - #define SDHCI_QUIRK_FORCE_HIGHSPEED (1<<14) 213 211 214 212 int irq; /* Device IRQ */ 215 213 void __iomem * ioaddr; /* Mapped address */ ··· 220 222 221 223 #if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE) 222 224 struct led_classdev led; /* LED control */ 225 + char led_name[32]; 223 226 #endif 224 227 225 228 spinlock_t lock; /* Mutex */
+2 -1
drivers/mtd/nand/atmel_nand.c
··· 139 139 struct nand_chip *nand_chip = mtd->priv; 140 140 struct atmel_nand_host *host = nand_chip->priv; 141 141 142 - return gpio_get_value(host->board->rdy_pin); 142 + return gpio_get_value(host->board->rdy_pin) ^ 143 + !!host->board->rdy_pin_active_low; 143 144 } 144 145 145 146 /*
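The atmel_nand fix XORs the GPIO level with !!rdy_pin_active_low so boards whose ready pin is active-low report correctly; the double negation collapses any nonzero flag value to 1, so the XOR flips exactly one bit. The idiom in isolation (device_ready is an invented name):

#include <stdio.h>

static int device_ready(int gpio_level, int active_low)
{
        /* !! normalises the flag to 0 or 1 before the polarity flip */
        return gpio_level ^ !!active_low;
}

int main(void)
{
        printf("%d\n", device_ready(1, 0));     /* active-high pin, ready */
        printf("%d\n", device_ready(0, 4));     /* flag stored as 4: still ready */
        return 0;
}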
+15 -1
drivers/pci/intel-iommu.c
··· 61 61 /* global iommu list, set NULL for ignored DMAR units */ 62 62 static struct intel_iommu **g_iommus; 63 63 64 + static int rwbf_quirk; 65 + 64 66 /* 65 67 * 0: Present 66 68 * 1-11: Reserved ··· 787 785 u32 val; 788 786 unsigned long flag; 789 787 790 - if (!cap_rwbf(iommu->cap)) 788 + if (!rwbf_quirk && !cap_rwbf(iommu->cap)) 791 789 return; 792 790 val = iommu->gcmd | DMA_GCMD_WBF; 793 791 ··· 3139 3137 .unmap = intel_iommu_unmap_range, 3140 3138 .iova_to_phys = intel_iommu_iova_to_phys, 3141 3139 }; 3140 + 3141 + static void __devinit quirk_iommu_rwbf(struct pci_dev *dev) 3142 + { 3143 + /* 3144 + * Mobile 4 Series Chipset neglects to set RWBF capability, 3145 + * but needs it: 3146 + */ 3147 + printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n"); 3148 + rwbf_quirk = 1; 3149 + } 3150 + 3151 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
+4 -6
drivers/pci/msi.c
··· 103 103 } 104 104 } 105 105 106 - /* 107 - * Essentially, this is ((1 << (1 << x)) - 1), but without the 108 - * undefinedness of a << 32. 109 - */ 110 106 static inline __attribute_const__ u32 msi_mask(unsigned x) 111 107 { 112 - static const u32 mask[] = { 1, 2, 4, 0xf, 0xff, 0xffff, 0xffffffff }; 113 - return mask[x]; 108 + /* Don't shift by >= width of type */ 109 + if (x >= 5) 110 + return 0xffffffff; 111 + return (1 << (1 << x)) - 1; 114 112 } 115 113 116 114 static void msix_flush_writes(struct irq_desc *desc)
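The reworked msi_mask() computes (1 << (1 << x)) - 1, the low bits covering the 2^x vectors a function advertises, and guards x >= 5 because shifting a 32-bit value by 32 is undefined in C. (Note the old lookup table's 2 and 4 entries did not match the formula its own comment described.) The helper as standalone C:

#include <stdint.h>
#include <stdio.h>

static uint32_t msi_mask(unsigned int x)
{
        /* don't shift by >= width of type */
        if (x >= 5)
                return 0xffffffff;
        return (1u << (1u << x)) - 1;
}

int main(void)
{
        unsigned int x;

        for (x = 0; x <= 5; x++)
                printf("msi_mask(%u) = 0x%x\n", x, msi_mask(x));
        return 0;
}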
+9 -4
drivers/pci/pci.c
··· 1540 1540 } 1541 1541 1542 1542 /** 1543 - * pci_request_region - Reserved PCI I/O and memory resource 1543 + * __pci_request_region - Reserved PCI I/O and memory resource 1544 1544 * @pdev: PCI device whose resources are to be reserved 1545 1545 * @bar: BAR to be reserved 1546 1546 * @res_name: Name to be associated with resource. 1547 + * @exclusive: whether the region access is exclusive or not 1547 1548 * 1548 1549 * Mark the PCI region associated with PCI device @pdev BR @bar as 1549 1550 * being reserved by owner @res_name. Do not access any 1550 1551 * address inside the PCI regions unless this call returns 1551 1552 * successfully. 1553 + * 1554 + * If @exclusive is set, then the region is marked so that userspace 1555 + * is explicitly not allowed to map the resource via /dev/mem or 1556 + * sysfs MMIO access. 1552 1557 * 1553 1558 * Returns 0 on success, or %EBUSY on error. A warning 1554 1559 * message is also printed on failure. ··· 1593 1588 } 1594 1589 1595 1590 /** 1596 - * pci_request_region - Reserved PCI I/O and memory resource 1591 + * pci_request_region - Reserve PCI I/O and memory resource 1597 1592 * @pdev: PCI device whose resources are to be reserved 1598 1593 * @bar: BAR to be reserved 1599 - * @res_name: Name to be associated with resource. 1594 + * @res_name: Name to be associated with resource 1600 1595 * 1601 - * Mark the PCI region associated with PCI device @pdev BR @bar as 1596 + * Mark the PCI region associated with PCI device @pdev BAR @bar as 1602 1597 * being reserved by owner @res_name. Do not access any 1603 1598 * address inside the PCI regions unless this call returns 1604 1599 * successfully.
+10 -10
drivers/pci/pci.h
··· 16 16 #endif 17 17 18 18 /** 19 - * Firmware PM callbacks 19 + * struct pci_platform_pm_ops - Firmware PM callbacks 20 20 * 21 - * @is_manageable - returns 'true' if given device is power manageable by the 22 - * platform firmware 21 + * @is_manageable: returns 'true' if given device is power manageable by the 22 + * platform firmware 23 23 * 24 - * @set_state - invokes the platform firmware to set the device's power state 24 + * @set_state: invokes the platform firmware to set the device's power state 25 25 * 26 - * @choose_state - returns PCI power state of given device preferred by the 27 - * platform; to be used during system-wide transitions from a 28 - * sleeping state to the working state and vice versa 26 + * @choose_state: returns PCI power state of given device preferred by the 27 + * platform; to be used during system-wide transitions from a 28 + * sleeping state to the working state and vice versa 29 29 * 30 - * @can_wakeup - returns 'true' if given device is capable of waking up the 31 - * system from a sleeping state 30 + * @can_wakeup: returns 'true' if given device is capable of waking up the 31 + * system from a sleeping state 32 32 * 33 - * @sleep_wake - enables/disables the system wake up capability of given device 33 + * @sleep_wake: enables/disables the system wake up capability of given device 34 34 * 35 35 * If given platform is generally capable of power managing PCI devices, all of 36 36 * these callbacks are mandatory.
+1
drivers/pci/rom.c
··· 55 55 56 56 /** 57 57 * pci_get_rom_size - obtain the actual size of the ROM image 58 + * @pdev: target PCI device 58 59 * @rom: kernel virtual pointer to image of ROM 59 60 * @size: size of PCI window 60 61 * return: size of actual ROM image
+2
drivers/platform/x86/Kconfig
··· 62 62 depends on EXPERIMENTAL 63 63 depends on BACKLIGHT_CLASS_DEVICE 64 64 depends on RFKILL 65 + depends on POWER_SUPPLY 65 66 default n 66 67 ---help--- 67 68 This driver adds support for rfkill and backlight control to Dell ··· 302 301 config EEEPC_LAPTOP 303 302 tristate "Eee PC Hotkey Driver (EXPERIMENTAL)" 304 303 depends on ACPI 304 + depends on INPUT 305 305 depends on EXPERIMENTAL 306 306 select BACKLIGHT_CLASS_DEVICE 307 307 select HWMON
+18 -7
drivers/platform/x86/fujitsu-laptop.c
··· 166 166 struct platform_device *pf_device; 167 167 struct kfifo *fifo; 168 168 spinlock_t fifo_lock; 169 + int rfkill_supported; 169 170 int rfkill_state; 170 171 int logolamp_registered; 171 172 int kblamps_registered; ··· 527 526 show_lid_state(struct device *dev, 528 527 struct device_attribute *attr, char *buf) 529 528 { 530 - if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD) 529 + if (!(fujitsu_hotkey->rfkill_supported & 0x100)) 531 530 return sprintf(buf, "unknown\n"); 532 531 if (fujitsu_hotkey->rfkill_state & 0x100) 533 532 return sprintf(buf, "open\n"); ··· 539 538 show_dock_state(struct device *dev, 540 539 struct device_attribute *attr, char *buf) 541 540 { 542 - if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD) 541 + if (!(fujitsu_hotkey->rfkill_supported & 0x200)) 543 542 return sprintf(buf, "unknown\n"); 544 543 if (fujitsu_hotkey->rfkill_state & 0x200) 545 544 return sprintf(buf, "docked\n"); ··· 551 550 show_radios_state(struct device *dev, 552 551 struct device_attribute *attr, char *buf) 553 552 { 554 - if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD) 553 + if (!(fujitsu_hotkey->rfkill_supported & 0x20)) 555 554 return sprintf(buf, "unknown\n"); 556 555 if (fujitsu_hotkey->rfkill_state & 0x20) 557 556 return sprintf(buf, "on\n"); ··· 929 928 ; /* No action, result is discarded */ 930 929 vdbg_printk(FUJLAPTOP_DBG_INFO, "Discarded %i ringbuffer entries\n", i); 931 930 932 - fujitsu_hotkey->rfkill_state = 933 - call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 931 + fujitsu_hotkey->rfkill_supported = 932 + call_fext_func(FUNC_RFKILL, 0x0, 0x0, 0x0); 933 + 934 + /* Make sure our bitmask of supported functions is cleared if the 935 + RFKILL function block is not implemented, like on the S7020. */ 936 + if (fujitsu_hotkey->rfkill_supported == UNSUPPORTED_CMD) 937 + fujitsu_hotkey->rfkill_supported = 0; 938 + 939 + if (fujitsu_hotkey->rfkill_supported) 940 + fujitsu_hotkey->rfkill_state = 941 + call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 934 942 935 943 /* Suspect this is a keymap of the application panel, print it */ 936 944 printk(KERN_INFO "fujitsu-laptop: BTNI: [0x%x]\n", ··· 1015 1005 1016 1006 input = fujitsu_hotkey->input; 1017 1007 1018 - fujitsu_hotkey->rfkill_state = 1019 - call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 1008 + if (fujitsu_hotkey->rfkill_supported) 1009 + fujitsu_hotkey->rfkill_state = 1010 + call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 1020 1011 1021 1012 switch (event) { 1022 1013 case ACPI_FUJITSU_NOTIFY_CODE1:
+4 -1
drivers/s390/char/sclp.c
··· 280 280 rc = 0; 281 281 for (offset = sizeof(struct sccb_header); offset < sccb->length; 282 282 offset += evbuf->length) { 283 - /* Search for event handler */ 284 283 evbuf = (struct evbuf_header *) ((addr_t) sccb + offset); 284 + /* Check for malformed hardware response */ 285 + if (evbuf->length == 0) 286 + break; 287 + /* Search for event handler */ 285 288 reg = NULL; 286 289 list_for_each(l, &sclp_reg_list) { 287 290 reg = list_entry(l, struct sclp_register, list);
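The sclp fix bails out of the event-buffer walk when a buffer claims zero length: since the loop advances by "offset += evbuf->length", a malformed response would otherwise spin forever. The traversal pattern as standalone C, with the struct and function names simplified from the driver's:

#include <stddef.h>
#include <stdint.h>

struct evbuf_header {
        uint16_t length;        /* total length, payload follows */
};

/* Walk variable-length event buffers inside a response block; a zero
 * length from broken hardware must terminate the walk. */
static void walk_evbufs(const uint8_t *sccb, size_t total, size_t hdrlen,
                        void (*handle)(const struct evbuf_header *))
{
        const struct evbuf_header *evbuf;
        size_t offset;

        for (offset = hdrlen; offset < total; offset += evbuf->length) {
                evbuf = (const struct evbuf_header *)(sccb + offset);
                if (evbuf->length == 0)
                        break;  /* malformed: avoid an endless loop */
                handle(evbuf);
        }
}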
+5
drivers/s390/char/sclp_cmd.c
··· 19 19 #include <linux/memory.h> 20 20 #include <asm/chpid.h> 21 21 #include <asm/sclp.h> 22 + #include <asm/setup.h> 22 23 23 24 #include "sclp.h" 24 25 ··· 475 474 goto skip_add; 476 475 if (start + size > VMEM_MAX_PHYS) 477 476 size = VMEM_MAX_PHYS - start; 477 + if (memory_end_set && (start >= memory_end)) 478 + goto skip_add; 479 + if (memory_end_set && (start + size > memory_end)) 480 + size = memory_end - start; 478 481 add_memory(0, start, size); 479 482 skip_add: 480 483 first_rn = rn;
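The sclp_cmd hunk clamps hotplug memory ranges against memory_end when that limit is set: blocks that begin past the limit are skipped, and blocks that straddle it are trimmed. The check in isolation (clamp_block is an invented name; the addresses are arbitrary examples):

#include <stdio.h>

static int clamp_block(unsigned long *start, unsigned long *size,
                       unsigned long limit)
{
        if (*start >= limit)
                return 0;               /* skip entirely */
        if (*start + *size > limit)
                *size = limit - *start; /* trim the straddling block */
        return 1;
}

int main(void)
{
        unsigned long start = 0x40000000UL, size = 0x20000000UL;

        if (clamp_block(&start, &size, 0x50000000UL))
                printf("add 0x%lx bytes at 0x%lx\n", size, start);
        return 0;
}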
+11 -4
drivers/scsi/ibmvscsi/ibmvfc.c
··· 1573 1573 vfc_cmd->resp_len = sizeof(vfc_cmd->rsp); 1574 1574 vfc_cmd->cancel_key = (unsigned long)cmnd->device->hostdata; 1575 1575 vfc_cmd->tgt_scsi_id = rport->port_id; 1576 - if ((rport->supported_classes & FC_COS_CLASS3) && 1577 - (fc_host_supported_classes(vhost->host) & FC_COS_CLASS3)) 1578 - vfc_cmd->flags = IBMVFC_CLASS_3_ERR; 1579 1576 vfc_cmd->iu.xfer_len = scsi_bufflen(cmnd); 1580 1577 int_to_scsilun(cmnd->device->lun, &vfc_cmd->iu.lun); 1581 1578 memcpy(vfc_cmd->iu.cdb, cmnd->cmnd, cmnd->cmd_len); ··· 3263 3266 return -ENOMEM; 3264 3267 } 3265 3268 3269 + memset(tgt, 0, sizeof(*tgt)); 3266 3270 tgt->scsi_id = scsi_id; 3267 3271 tgt->new_scsi_id = scsi_id; 3268 3272 tgt->vhost = vhost; ··· 3574 3576 static void ibmvfc_tgt_add_rport(struct ibmvfc_target *tgt) 3575 3577 { 3576 3578 struct ibmvfc_host *vhost = tgt->vhost; 3577 - struct fc_rport *rport; 3579 + struct fc_rport *rport = tgt->rport; 3578 3580 unsigned long flags; 3581 + 3582 + if (rport) { 3583 + tgt_dbg(tgt, "Setting rport roles\n"); 3584 + fc_remote_port_rolechg(rport, tgt->ids.roles); 3585 + spin_lock_irqsave(vhost->host->host_lock, flags); 3586 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 3587 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 3588 + return; 3589 + } 3579 3590 3580 3591 tgt_dbg(tgt, "Adding rport\n"); 3581 3592 rport = fc_remote_port_add(vhost->host, 0, &tgt->ids);
+1 -1
drivers/scsi/ibmvscsi/ibmvfc.h
··· 32 32 #define IBMVFC_DRIVER_VERSION "1.0.4" 33 33 #define IBMVFC_DRIVER_DATE "(November 14, 2008)" 34 34 35 - #define IBMVFC_DEFAULT_TIMEOUT 15 35 + #define IBMVFC_DEFAULT_TIMEOUT 60 36 36 #define IBMVFC_INIT_TIMEOUT 120 37 37 #define IBMVFC_MAX_REQUESTS_DEFAULT 100 38 38
+1
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 432 432 sdev_printk(KERN_ERR, cmd->device, 433 433 "Can't allocate memory " 434 434 "for indirect table\n"); 435 + scsi_dma_unmap(cmd); 435 436 return 0; 436 437 } 437 438 }
+2 -1
drivers/scsi/libiscsi.c
··· 1998 1998 if (!shost->can_queue) 1999 1999 shost->can_queue = ISCSI_DEF_XMIT_CMDS_MAX; 2000 2000 2001 + if (!shost->transportt->eh_timed_out) 2002 + shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out; 2001 2003 return scsi_add_host(shost, pdev); 2002 2004 } 2003 2005 EXPORT_SYMBOL_GPL(iscsi_host_add); ··· 2022 2020 shost = scsi_host_alloc(sht, sizeof(struct iscsi_host) + dd_data_size); 2023 2021 if (!shost) 2024 2022 return NULL; 2025 - shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out; 2026 2023 2027 2024 if (qdepth > ISCSI_MAX_CMD_PER_LUN || qdepth < 1) { 2028 2025 if (qdepth != 0)
+1
drivers/scsi/lpfc/lpfc_els.c
··· 5258 5258 sizeof(struct lpfc_name)); 5259 5259 break; 5260 5260 default: 5261 + kfree(els_data); 5261 5262 return; 5262 5263 } 5263 5264 memcpy(els_data->wwpn, &ndlp->nlp_portname, sizeof(struct lpfc_name));
+6 -7
drivers/scsi/qla2xxx/qla_attr.c
··· 1265 1265 test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags)) 1266 1266 msleep(1000); 1267 1267 1268 - if (ha->mqenable) { 1269 - if (qla25xx_delete_queues(vha, 0) != QLA_SUCCESS) 1270 - qla_printk(KERN_WARNING, ha, 1271 - "Queue delete failed.\n"); 1272 - vha->req_ques[0] = ha->req_q_map[0]->id; 1273 - } 1274 - 1275 1268 qla24xx_disable_vp(vha); 1276 1269 1277 1270 fc_remove_host(vha->host); ··· 1285 1292 "has stopped\n", 1286 1293 vha->host_no, vha->vp_idx, vha)); 1287 1294 } 1295 + 1296 + if (ha->mqenable) { 1297 + if (qla25xx_delete_queues(vha, 0) != QLA_SUCCESS) 1298 + qla_printk(KERN_WARNING, ha, 1299 + "Queue delete failed.\n"); 1300 + } 1288 1301 1289 1302 scsi_host_put(vha->host); 1290 1303 qla_printk(KERN_INFO, ha, "vport %d deleted\n", id);
+5
drivers/scsi/qla2xxx/qla_def.h
··· 2135 2135 /* Work events. */ 2136 2136 enum qla_work_type { 2137 2137 QLA_EVT_AEN, 2138 + QLA_EVT_IDC_ACK, 2138 2139 }; 2139 2140 2140 2141 ··· 2150 2149 enum fc_host_event_code code; 2151 2150 u32 data; 2152 2151 } aen; 2152 + struct { 2153 + #define QLA_IDC_ACK_REGS 7 2154 + uint16_t mb[QLA_IDC_ACK_REGS]; 2155 + } idc_ack; 2153 2156 } u; 2154 2157 }; 2155 2158
+1 -1
drivers/scsi/qla2xxx/qla_devtbl.h
··· 72 72 "QLA2462", "Sun PCI-X 2.0 to 4Gb FC, Dual Channel", /* 0x141 */ 73 73 "QLE2460", "Sun PCI-Express to 2Gb FC, Single Channel", /* 0x142 */ 74 74 "QLE2462", "Sun PCI-Express to 4Gb FC, Single Channel", /* 0x143 */ 75 - "QEM2462" "Server I/O Module 4Gb FC, Dual Channel", /* 0x144 */ 75 + "QEM2462", "Server I/O Module 4Gb FC, Dual Channel", /* 0x144 */ 76 76 "QLE2440", "PCI-Express to 4Gb FC, Single Channel", /* 0x145 */ 77 77 "QLE2464", "PCI-Express to 4Gb FC, Quad Channel", /* 0x146 */ 78 78 "QLA2440", "PCI-X 2.0 to 4Gb FC, Single Channel", /* 0x147 */
+2
drivers/scsi/qla2xxx/qla_fw.h
··· 1402 1402 #define MBA_IDC_NOTIFY 0x8101 1403 1403 #define MBA_IDC_TIME_EXT 0x8102 1404 1404 1405 + #define MBC_IDC_ACK 0x101 1406 + 1405 1407 struct nvram_81xx { 1406 1408 /* NVRAM header. */ 1407 1409 uint8_t id[4];
+5 -4
drivers/scsi/qla2xxx/qla_gbl.h
··· 72 72 extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int); 73 73 extern int qla2x00_post_aen_work(struct scsi_qla_host *, enum 74 74 fc_host_event_code, u32); 75 + extern int qla2x00_post_idc_ack_work(struct scsi_qla_host *, uint16_t *); 75 76 76 77 extern void qla2x00_abort_fcport_cmds(fc_port_t *); 77 78 extern struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *, ··· 267 266 268 267 extern int qla84xx_verify_chip(struct scsi_qla_host *, uint16_t *); 269 268 270 + extern int qla81xx_idc_ack(scsi_qla_host_t *, uint16_t *); 271 + 270 271 /* 271 272 * Global Function Prototypes in qla_isr.c source file. 272 273 */ ··· 379 376 380 377 /* Global function prototypes for multi-q */ 381 378 extern int qla25xx_request_irq(struct rsp_que *); 382 - extern int qla25xx_init_req_que(struct scsi_qla_host *, struct req_que *, 383 - uint8_t); 384 - extern int qla25xx_init_rsp_que(struct scsi_qla_host *, struct rsp_que *, 385 - uint8_t); 379 + extern int qla25xx_init_req_que(struct scsi_qla_host *, struct req_que *); 380 + extern int qla25xx_init_rsp_que(struct scsi_qla_host *, struct rsp_que *); 386 381 extern int qla25xx_create_req_que(struct qla_hw_data *, uint16_t, uint8_t, 387 382 uint16_t, uint8_t, uint8_t); 388 383 extern int qla25xx_create_rsp_que(struct qla_hw_data *, uint16_t, uint8_t,
+3 -4
drivers/scsi/qla2xxx/qla_init.c
··· 1226 1226 icb->firmware_options_2 |= 1227 1227 __constant_cpu_to_le32(BIT_18); 1228 1228 1229 - icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_22); 1229 + icb->firmware_options_2 &= __constant_cpu_to_le32(~BIT_22); 1230 1230 icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_23); 1231 - ha->rsp_q_map[0]->options = icb->firmware_options_2; 1232 1231 1233 1232 WRT_REG_DWORD(&reg->isp25mq.req_q_in, 0); 1234 1233 WRT_REG_DWORD(&reg->isp25mq.req_q_out, 0); ··· 3492 3493 rsp = ha->rsp_q_map[i]; 3493 3494 if (rsp) { 3494 3495 rsp->options &= ~BIT_0; 3495 - ret = qla25xx_init_rsp_que(base_vha, rsp, rsp->options); 3496 + ret = qla25xx_init_rsp_que(base_vha, rsp); 3496 3497 if (ret != QLA_SUCCESS) 3497 3498 DEBUG2_17(printk(KERN_WARNING 3498 3499 "%s Rsp que:%d init failed\n", __func__, ··· 3506 3507 if (req) { 3507 3508 /* Clear outstanding commands array. */ 3508 3509 req->options &= ~BIT_0; 3509 - ret = qla25xx_init_req_que(base_vha, req, req->options); 3510 + ret = qla25xx_init_req_que(base_vha, req); 3510 3511 if (ret != QLA_SUCCESS) 3511 3512 DEBUG2_17(printk(KERN_WARNING 3512 3513 "%s Req que:%d init failed\n", __func__,
+35 -23
drivers/scsi/qla2xxx/qla_isr.c
··· 266 266 } 267 267 } 268 268 269 + static void 270 + qla81xx_idc_event(scsi_qla_host_t *vha, uint16_t aen, uint16_t descr) 271 + { 272 + static char *event[] = 273 + { "Complete", "Request Notification", "Time Extension" }; 274 + int rval; 275 + struct device_reg_24xx __iomem *reg24 = &vha->hw->iobase->isp24; 276 + uint16_t __iomem *wptr; 277 + uint16_t cnt, timeout, mb[QLA_IDC_ACK_REGS]; 278 + 279 + /* Seed data -- mailbox1 -> mailbox7. */ 280 + wptr = (uint16_t __iomem *)&reg24->mailbox1; 281 + for (cnt = 0; cnt < QLA_IDC_ACK_REGS; cnt++, wptr++) 282 + mb[cnt] = RD_REG_WORD(wptr); 283 + 284 + DEBUG2(printk("scsi(%ld): Inter-Driver Communication %s -- " 285 + "%04x %04x %04x %04x %04x %04x %04x.\n", vha->host_no, 286 + event[aen & 0xff], 287 + mb[0], mb[1], mb[2], mb[3], mb[4], mb[5], mb[6])); 288 + 289 + /* Acknowledgement needed? [Notify && non-zero timeout]. */ 290 + timeout = (descr >> 8) & 0xf; 291 + if (aen != MBA_IDC_NOTIFY || !timeout) 292 + return; 293 + 294 + DEBUG2(printk("scsi(%ld): Inter-Driver Communication %s -- " 295 + "ACK timeout=%d.\n", vha->host_no, event[aen & 0xff], timeout)); 296 + 297 + rval = qla2x00_post_idc_ack_work(vha, mb); 298 + if (rval != QLA_SUCCESS) 299 + qla_printk(KERN_WARNING, vha->hw, 300 + "IDC failed to post ACK.\n"); 301 + } 302 + 269 303 /** 270 304 * qla2x00_async_event() - Process asynchronous events. 271 305 * @ha: SCSI driver HA context ··· 748 714 "%04x %04x %04x\n", vha->host_no, mb[1], mb[2], mb[3])); 749 715 break; 750 716 case MBA_IDC_COMPLETE: 751 - DEBUG2(printk("scsi(%ld): Inter-Driver Communication " 752 - "Complete -- %04x %04x %04x\n", vha->host_no, mb[1], mb[2], 753 - mb[3])); 754 - break; 755 717 case MBA_IDC_NOTIFY: 756 - DEBUG2(printk("scsi(%ld): Inter-Driver Communication " 757 - "Request Notification -- %04x %04x %04x\n", vha->host_no, 758 - mb[1], mb[2], mb[3])); 759 - /**** Mailbox registers 4 - 7 valid!!! */ 760 - break; 761 718 case MBA_IDC_TIME_EXT: 762 - DEBUG2(printk("scsi(%ld): Inter-Driver Communication " 763 - "Time Extension -- %04x %04x %04x\n", vha->host_no, mb[1], 764 - mb[2], mb[3])); 765 - /**** Mailbox registers 4 - 7 valid!!! */ 719 + qla81xx_idc_event(vha, mb[0], mb[1]); 766 720 break; 767 721 } ··· 1729 1707 struct qla_hw_data *ha; 1730 1708 struct rsp_que *rsp; 1731 1709 struct device_reg_24xx __iomem *reg; 1732 - uint16_t msix_disabled_hccr = 0; 1733 1710 1734 1711 rsp = (struct rsp_que *) dev_id; 1735 1712 if (!rsp) { ··· 1741 1720 1742 1721 spin_lock_irq(&ha->hardware_lock); 1743 1722 1744 - msix_disabled_hccr = rsp->options; 1745 - if (!rsp->id) 1746 - msix_disabled_hccr &= __constant_cpu_to_le32(BIT_22); 1747 - else 1748 - msix_disabled_hccr &= __constant_cpu_to_le32(BIT_6); 1749 - 1750 1723 qla24xx_process_response_queue(rsp); 1751 - 1752 - if (!msix_disabled_hccr) 1753 - WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT); 1754 1724 1755 1725 spin_unlock_irq(&ha->hardware_lock); 1756 1726
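The consolidated qla81xx_idc_event() above captures mailboxes 1-7 at AEN time and decodes the ACK deadline from the descriptor word. A minimal sketch of just that decision, with the field layout taken from the hunk above rather than from any firmware spec (the timeout's unit is firmware-defined; the driver only tests it for non-zero):

    /* Illustrative only: true when the notification demands an
     * MBC_IDC_ACK, per the (descr >> 8) & 0xf field used above. */
    static int idc_needs_ack(uint16_t aen, uint16_t descr)
    {
            uint16_t timeout = (descr >> 8) & 0xf;  /* e.g. 0x0530 -> 5 */

            return aen == MBA_IDC_NOTIFY && timeout != 0;
    }

Note the ACK is deliberately not sent from the interrupt handler; it is queued via qla2x00_post_idc_ack_work() so the mailbox command runs in process context.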
+32 -8
drivers/scsi/qla2xxx/qla_mbx.c
··· 3090 3090 } 3091 3091 3092 3092 int 3093 - qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req, 3094 - uint8_t options) 3093 + qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req) 3095 3094 { 3096 3095 int rval; 3097 3096 unsigned long flags; ··· 3100 3101 struct qla_hw_data *ha = vha->hw; 3101 3102 3102 3103 mcp->mb[0] = MBC_INITIALIZE_MULTIQ; 3103 - mcp->mb[1] = options; 3104 + mcp->mb[1] = req->options; 3104 3105 mcp->mb[2] = MSW(LSD(req->dma)); 3105 3106 mcp->mb[3] = LSW(LSD(req->dma)); 3106 3107 mcp->mb[6] = MSW(MSD(req->dma)); ··· 3127 3128 mcp->tov = 60; 3128 3129 3129 3130 spin_lock_irqsave(&ha->hardware_lock, flags); 3130 - if (!(options & BIT_0)) { 3131 + if (!(req->options & BIT_0)) { 3131 3132 WRT_REG_DWORD(&reg->req_q_in, 0); 3132 3133 WRT_REG_DWORD(&reg->req_q_out, 0); 3133 3134 } ··· 3141 3142 } 3142 3143 3143 3144 int 3144 - qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp, 3145 - uint8_t options) 3145 + qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp) 3146 3146 { 3147 3147 int rval; 3148 3148 unsigned long flags; ··· 3151 3153 struct qla_hw_data *ha = vha->hw; 3152 3154 3153 3155 mcp->mb[0] = MBC_INITIALIZE_MULTIQ; 3154 - mcp->mb[1] = options; 3156 + mcp->mb[1] = rsp->options; 3155 3157 mcp->mb[2] = MSW(LSD(rsp->dma)); 3156 3158 mcp->mb[3] = LSW(LSD(rsp->dma)); 3157 3159 mcp->mb[6] = MSW(MSD(rsp->dma)); ··· 3176 3178 mcp->tov = 60; 3177 3179 3178 3180 spin_lock_irqsave(&ha->hardware_lock, flags); 3179 - if (!(options & BIT_0)) { 3181 + if (!(rsp->options & BIT_0)) { 3180 3182 WRT_REG_DWORD(&reg->rsp_q_out, 0); 3181 3183 WRT_REG_DWORD(&reg->rsp_q_in, 0); 3182 3184 } ··· 3191 3193 return rval; 3192 3194 } 3193 3195 3196 + int 3197 + qla81xx_idc_ack(scsi_qla_host_t *vha, uint16_t *mb) 3198 + { 3199 + int rval; 3200 + mbx_cmd_t mc; 3201 + mbx_cmd_t *mcp = &mc; 3202 + 3203 + DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no)); 3204 + 3205 + mcp->mb[0] = MBC_IDC_ACK; 3206 + memcpy(&mcp->mb[1], mb, QLA_IDC_ACK_REGS * sizeof(uint16_t)); 3207 + mcp->out_mb = MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0; 3208 + mcp->in_mb = MBX_0; 3209 + mcp->tov = MBX_TOV_SECONDS; 3210 + mcp->flags = 0; 3211 + rval = qla2x00_mailbox_command(vha, mcp); 3212 + 3213 + if (rval != QLA_SUCCESS) { 3214 + DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__, 3215 + vha->host_no, rval, mcp->mb[0])); 3216 + } else { 3217 + DEBUG11(printk("%s(%ld): done.\n", __func__, vha->host_no)); 3218 + } 3219 + 3220 + return rval; 3221 + }
+6 -6
drivers/scsi/qla2xxx/qla_mid.c
··· 396 396 397 397 qla2x00_start_timer(vha, qla2x00_timer, WATCH_INTERVAL); 398 398 399 - memset(vha->req_ques, 0, sizeof(vha->req_ques) * QLA_MAX_HOST_QUES); 399 + memset(vha->req_ques, 0, sizeof(vha->req_ques)); 400 400 vha->req_ques[0] = ha->req_q_map[0]->id; 401 401 host->can_queue = ha->req_q_map[0]->length + 128; 402 402 host->this_id = 255; ··· 471 471 472 472 if (req) { 473 473 req->options |= BIT_0; 474 - ret = qla25xx_init_req_que(vha, req, req->options); 474 + ret = qla25xx_init_req_que(vha, req); 475 475 } 476 476 if (ret == QLA_SUCCESS) 477 477 qla25xx_free_req_que(vha, req); ··· 486 486 487 487 if (rsp) { 488 488 rsp->options |= BIT_0; 489 - ret = qla25xx_init_rsp_que(vha, rsp, rsp->options); 489 + ret = qla25xx_init_rsp_que(vha, rsp); 490 490 } 491 491 if (ret == QLA_SUCCESS) 492 492 qla25xx_free_rsp_que(vha, rsp); ··· 502 502 503 503 req->options |= BIT_3; 504 504 req->qos = qos; 505 - ret = qla25xx_init_req_que(vha, req, req->options); 505 + ret = qla25xx_init_req_que(vha, req); 506 506 if (ret != QLA_SUCCESS) 507 507 DEBUG2_17(printk(KERN_WARNING "%s failed\n", __func__)); 508 508 /* restore options bit */ ··· 632 632 req->max_q_depth = ha->req_q_map[0]->max_q_depth; 633 633 mutex_unlock(&ha->vport_lock); 634 634 635 - ret = qla25xx_init_req_que(base_vha, req, options); 635 + ret = qla25xx_init_req_que(base_vha, req); 636 636 if (ret != QLA_SUCCESS) { 637 637 qla_printk(KERN_WARNING, ha, "%s failed\n", __func__); 638 638 mutex_lock(&ha->vport_lock); ··· 710 710 if (ret) 711 711 goto que_failed; 712 712 713 - ret = qla25xx_init_rsp_que(base_vha, rsp, options); 713 + ret = qla25xx_init_rsp_que(base_vha, rsp); 714 714 if (ret != QLA_SUCCESS) { 715 715 qla_printk(KERN_WARNING, ha, "%s failed\n", __func__); 716 716 mutex_lock(&ha->vport_lock);
+16
drivers/scsi/qla2xxx/qla_os.c
··· 2522 2522 return qla2x00_post_work(vha, e, 1); 2523 2523 } 2524 2524 2525 + int 2526 + qla2x00_post_idc_ack_work(struct scsi_qla_host *vha, uint16_t *mb) 2527 + { 2528 + struct qla_work_evt *e; 2529 + 2530 + e = qla2x00_alloc_work(vha, QLA_EVT_IDC_ACK, 1); 2531 + if (!e) 2532 + return QLA_FUNCTION_FAILED; 2533 + 2534 + memcpy(e->u.idc_ack.mb, mb, QLA_IDC_ACK_REGS * sizeof(uint16_t)); 2535 + return qla2x00_post_work(vha, e, 1); 2536 + } 2537 + 2525 2538 static void 2526 2539 qla2x00_do_work(struct scsi_qla_host *vha) 2527 2540 { ··· 2551 2538 case QLA_EVT_AEN: 2552 2539 fc_host_post_event(vha->host, fc_get_event_number(), 2553 2540 e->u.aen.code, e->u.aen.data); 2541 + break; 2542 + case QLA_EVT_IDC_ACK: 2543 + qla81xx_idc_ack(vha, e->u.idc_ack.mb); 2554 2544 break; 2555 2545 } 2556 2546 if (e->flags & QLA_EVT_FLAG_FREE)
+1 -1
drivers/scsi/qla2xxx/qla_sup.c
··· 684 684 "end=0x%x size=0x%x.\n", le32_to_cpu(region->code), start, 685 685 le32_to_cpu(region->end) >> 2, le32_to_cpu(region->size))); 686 686 687 - switch (le32_to_cpu(region->code)) { 687 + switch (le32_to_cpu(region->code) & 0xff) { 688 688 case FLT_REG_FW: 689 689 ha->flt_region_fw = start; 690 690 break;
+1 -1
drivers/scsi/qla2xxx/qla_version.h
··· 7 7 /* 8 8 * Driver version 9 9 */ 10 - #define QLA2XXX_VERSION "8.03.00-k2" 10 + #define QLA2XXX_VERSION "8.03.00-k3" 11 11 12 12 #define QLA_DRIVER_MAJOR_VER 8 13 13 #define QLA_DRIVER_MINOR_VER 3
+1
drivers/scsi/scsi_scan.c
··· 317 317 return sdev; 318 318 319 319 out_device_destroy: 320 + scsi_device_set_state(sdev, SDEV_DEL); 320 321 transport_destroy_device(&sdev->sdev_gendev); 321 322 put_device(&sdev->sdev_gendev); 322 323 out:
+1 -1
drivers/scsi/sg.c
··· 1078 1078 case BLKTRACESETUP: 1079 1079 return blk_trace_setup(sdp->device->request_queue, 1080 1080 sdp->disk->disk_name, 1081 - sdp->device->sdev_gendev.devt, 1081 + MKDEV(SCSI_GENERIC_MAJOR, sdp->index), 1082 1082 (char *)arg); 1083 1083 case BLKTRACESTART: 1084 1084 return blk_trace_startstop(sdp->device->request_queue, 1);
+15
drivers/serial/8250.c
··· 2083 2083 2084 2084 serial8250_set_mctrl(&up->port, up->port.mctrl); 2085 2085 2086 + /* Serial over LAN (SoL) hack: 2087 + Intel 8257x Gigabit Ethernet chips have a 2088 + 16550 emulation, to be used for Serial over LAN. 2089 + Those chips take longer than a normal 2090 + serial device to signal that transmit 2091 + data has been queued. Because of that, the test below 2092 + generally fails. One solution would be to delay reading 2093 + iir. However, that is not reliable, since the timeout 2094 + is variable. So, let's just not test whether we receive 2095 + a TX irq. This way, we'll never enable UART_BUG_TXEN. 2096 + */ 2097 + if (up->port.flags & UPF_NO_TXEN_TEST) 2098 + goto dont_test_tx_en; 2099 + 2086 2100 /* 2087 2101 * Do a quick test to see if we receive an 2088 2102 * interrupt when we enable the TX irq. ··· 2116 2102 up->bugs &= ~UART_BUG_TXEN; 2117 2103 } 2118 2104 2105 + dont_test_tx_en: 2119 2106 spin_unlock_irqrestore(&up->port.lock, flags); 2120 2107 2121 2108 /*
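The opt-out is just a port flag, so any code that knows its UART emulation misbehaves can skip the probe before the port is started. A hypothetical sketch of a caller (the PCI quirk in the next hunk does exactly this for the Intel SoL devices; serial8250_register_port() is the usual registration path):

    struct uart_port *port = ...;      /* port being described */
    port->flags |= UPF_NO_TXEN_TEST;   /* skip the TX-irq probe above */
    serial8250_register_port(port);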
+36
drivers/serial/8250_pci.c
··· 798 798 return setup_port(priv, port, bar, offset, board->reg_shift); 799 799 } 800 800 801 + static int skip_tx_en_setup(struct serial_private *priv, 802 + const struct pciserial_board *board, 803 + struct uart_port *port, int idx) 804 + { 805 + port->flags |= UPF_NO_TXEN_TEST; 806 + printk(KERN_DEBUG "serial8250: skipping TxEn test for device " 807 + "[%04x:%04x] subsystem [%04x:%04x]\n", 808 + priv->dev->vendor, 809 + priv->dev->device, 810 + priv->dev->subsystem_vendor, 811 + priv->dev->subsystem_device); 812 + 813 + return pci_default_setup(priv, board, port, idx); 814 + } 815 + 801 816 /* This should be in linux/pci_ids.h */ 802 817 #define PCI_VENDOR_ID_SBSMODULARIO 0x124B 803 818 #define PCI_SUBVENDOR_ID_SBSMODULARIO 0x124B ··· 878 863 .subdevice = PCI_ANY_ID, 879 864 .init = pci_inteli960ni_init, 880 865 .setup = pci_default_setup, 866 + }, 867 + { 868 + .vendor = PCI_VENDOR_ID_INTEL, 869 + .device = PCI_DEVICE_ID_INTEL_8257X_SOL, 870 + .subvendor = PCI_ANY_ID, 871 + .subdevice = PCI_ANY_ID, 872 + .setup = skip_tx_en_setup, 873 + }, 874 + { 875 + .vendor = PCI_VENDOR_ID_INTEL, 876 + .device = PCI_DEVICE_ID_INTEL_82573L_SOL, 877 + .subvendor = PCI_ANY_ID, 878 + .subdevice = PCI_ANY_ID, 879 + .setup = skip_tx_en_setup, 880 + }, 881 + { 882 + .vendor = PCI_VENDOR_ID_INTEL, 883 + .device = PCI_DEVICE_ID_INTEL_82573E_SOL, 884 + .subvendor = PCI_ANY_ID, 885 + .subdevice = PCI_ANY_ID, 886 + .setup = skip_tx_en_setup, 881 887 }, 882 888 /* 883 889 * ITE
+4
drivers/serial/atmel_serial.c
··· 877 877 } 878 878 } 879 879 880 + /* Save current CSR for comparison in atmel_tasklet_func() */ 881 + atmel_port->irq_status_prev = UART_GET_CSR(port); 882 + atmel_port->irq_status = atmel_port->irq_status_prev; 883 + 880 884 /* 881 885 * Finally, enable the serial port 882 886 */
+3
drivers/serial/jsm/jsm_driver.c
··· 84 84 brd->pci_dev = pdev; 85 85 if (pdev->device == PCIE_DEVICE_ID_NEO_4_IBM) 86 86 brd->maxports = 4; 87 + else if (pdev->device == PCI_DEVICE_ID_DIGI_NEO_8) 88 + brd->maxports = 8; 87 89 else 88 90 brd->maxports = 2; 89 91 ··· 214 212 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_NEO_2RJ45), 0, 0, 2 }, 215 213 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_NEO_2RJ45PRI), 0, 0, 3 }, 216 214 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCIE_DEVICE_ID_NEO_4_IBM), 0, 0, 4 }, 215 + { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_NEO_8), 0, 0, 5 }, 217 216 { 0, } 218 217 }; 219 218 MODULE_DEVICE_TABLE(pci, jsm_pci_tbl);
+1 -1
drivers/spi/spi_gpio.c
··· 114 114 115 115 static inline int getmiso(const struct spi_device *spi) 116 116 { 117 - return gpio_get_value(SPI_MISO_GPIO); 117 + return !!gpio_get_value(SPI_MISO_GPIO); 118 118 } 119 119 120 120 #undef pdata
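The !! matters because gpio_get_value() only promises zero or non-zero, and on many controllers the non-zero case is the raw masked register value (e.g. 0x40 for bit 6); bitbang code that shifts the result into a word being assembled needs a strict 0/1. A standalone illustration in plain C:

    int raw  = 0x40;             /* what reading bit 6 might return */
    int bad  = raw << 1;         /* 0x80: wrong if bit 1 was intended */
    int good = (!!raw) << 1;     /* 0x02: normalized to 0/1 first */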
+2 -13
drivers/usb/core/hcd-pci.c
··· 298 298 EXPORT_SYMBOL_GPL(usb_hcd_pci_suspend); 299 299 300 300 /** 301 - * usb_hcd_pci_resume_early - resume a PCI-based HCD before IRQs are enabled 302 - * @dev: USB Host Controller being resumed 303 - * 304 - * Store this function in the HCD's struct pci_driver as .resume_early. 305 - */ 306 - int usb_hcd_pci_resume_early(struct pci_dev *dev) 307 - { 308 - pci_restore_state(dev); 309 - return 0; 310 - } 311 - EXPORT_SYMBOL_GPL(usb_hcd_pci_resume_early); 312 - 313 - /** 314 301 * usb_hcd_pci_resume - power management resume of a PCI-based HCD 315 302 * @dev: USB Host Controller being resumed 316 303 * ··· 319 332 of_node, 0, 1); 320 333 } 321 334 #endif 335 + 336 + pci_restore_state(dev); 322 337 323 338 hcd = pci_get_drvdata(dev); 324 339 if (hcd->state != HC_STATE_SUSPENDED) {
-1
drivers/usb/core/hcd.h
··· 257 257 258 258 #ifdef CONFIG_PM 259 259 extern int usb_hcd_pci_suspend(struct pci_dev *dev, pm_message_t msg); 260 - extern int usb_hcd_pci_resume_early(struct pci_dev *dev); 261 260 extern int usb_hcd_pci_resume(struct pci_dev *dev); 262 261 #endif /* CONFIG_PM */ 263 262
+2 -2
drivers/usb/gadget/pxa25x_udc.c
··· 904 904 905 905 /* most IN status is the same, but ISO can't stall */ 906 906 *ep->reg_udccs = UDCCS_BI_TPC|UDCCS_BI_FTF|UDCCS_BI_TUR 907 - | (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC) 908 - ? 0 : UDCCS_BI_SST; 907 + | (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC 908 + ? 0 : UDCCS_BI_SST); 909 909 } 910 910 911 911
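The pxa25x_udc change is a C precedence fix: |, like the other binary operators, binds tighter than ?:, so the unparenthesized version OR-ed the comparison into the mask and then used the whole expression as the ternary condition. A standalone demonstration:

    #include <stdio.h>

    int main(void)
    {
            int flags = 0x7;
            /* parses as (flags | (1 == 1)) ? 0 : 8  ->  0 */
            int broken = flags | (1 == 1) ? 0 : 8;
            /* intended: flags | ((1 == 1) ? 0 : 8)  ->  0x7 */
            int fixed  = flags | (1 == 1 ? 0 : 8);
            printf("%#x %#x\n", broken, fixed);     /* prints: 0 0x7 */
            return 0;
    }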
-1
drivers/usb/host/ehci-pci.c
··· 432 432 433 433 #ifdef CONFIG_PM 434 434 .suspend = usb_hcd_pci_suspend, 435 - .resume_early = usb_hcd_pci_resume_early, 436 435 .resume = usb_hcd_pci_resume, 437 436 #endif 438 437 .shutdown = usb_hcd_pci_shutdown,
-1
drivers/usb/host/ohci-pci.c
··· 487 487 488 488 #ifdef CONFIG_PM 489 489 .suspend = usb_hcd_pci_suspend, 490 - .resume_early = usb_hcd_pci_resume_early, 491 490 .resume = usb_hcd_pci_resume, 492 491 #endif 493 492
-1
drivers/usb/host/uhci-hcd.c
··· 942 942 943 943 #ifdef CONFIG_PM 944 944 .suspend = usb_hcd_pci_suspend, 945 - .resume_early = usb_hcd_pci_resume_early, 946 945 .resume = usb_hcd_pci_resume, 947 946 #endif /* PM */ 948 947 };
+2 -2
drivers/usb/host/whci/asl.c
··· 227 227 * Now that the ASL is updated, complete the removal of any 228 228 * removed qsets. 229 229 */ 230 - spin_lock(&whc->lock); 230 + spin_lock_irq(&whc->lock); 231 231 232 232 list_for_each_entry_safe(qset, t, &whc->async_removed_list, list_node) { 233 233 qset_remove_complete(whc, qset); 234 234 } 235 235 236 - spin_unlock(&whc->lock); 236 + spin_unlock_irq(&whc->lock); 237 237 } 238 238 239 239 /**
+2 -2
drivers/usb/host/whci/pzl.c
··· 255 255 * Now that the PZL is updated, complete the removal of any 256 256 * removed qsets. 257 257 */ 258 - spin_lock(&whc->lock); 258 + spin_lock_irq(&whc->lock); 259 259 260 260 list_for_each_entry_safe(qset, t, &whc->periodic_removed_list, list_node) { 261 261 qset_remove_complete(whc, qset); 262 262 } 263 263 264 - spin_unlock(&whc->lock); 264 + spin_unlock_irq(&whc->lock); 265 265 } 266 266 267 267 /**
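Both WHCI hunks apply the same rule: a lock that can also be taken from interrupt context must be taken with interrupts disabled in process context, otherwise an interrupt arriving on the same CPU while the lock is held will self-deadlock trying to re-acquire it. A generic sketch of the pattern (hypothetical names, on the assumption that whc->lock is likewise taken from the WHCI interrupt path):

    static DEFINE_SPINLOCK(dev_lock);

    static irqreturn_t dev_irq(int irq, void *data)
    {
            spin_lock(&dev_lock);           /* IRQs are already off here */
            /* ... update shared state ... */
            spin_unlock(&dev_lock);
            return IRQ_HANDLED;
    }

    static void dev_update(void)
    {
            spin_lock_irq(&dev_lock);       /* plain spin_lock() here could
                                             * deadlock against dev_irq() */
            /* ... update shared state ... */
            spin_unlock_irq(&dev_lock);
    }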
+2 -8
drivers/video/Kconfig
··· 1054 1054 1055 1055 config FB_I810 1056 1056 tristate "Intel 810/815 support (EXPERIMENTAL)" 1057 - depends on EXPERIMENTAL && PCI && X86_32 1058 - select AGP 1059 - select AGP_INTEL 1060 - select FB 1057 + depends on EXPERIMENTAL && FB && PCI && X86_32 && AGP_INTEL 1061 1058 select FB_MODE_HELPERS 1062 1059 select FB_CFB_FILLRECT 1063 1060 select FB_CFB_COPYAREA ··· 1117 1120 1118 1121 config FB_INTEL 1119 1122 tristate "Intel 830M/845G/852GM/855GM/865G/915G/945G/945GM/965G/965GM support (EXPERIMENTAL)" 1120 - depends on EXPERIMENTAL && PCI && X86 1121 - select FB 1122 - select AGP 1123 - select AGP_INTEL 1123 + depends on EXPERIMENTAL && FB && PCI && X86 && AGP_INTEL 1124 1124 select FB_MODE_HELPERS 1125 1125 select FB_CFB_FILLRECT 1126 1126 select FB_CFB_COPYAREA
-1
drivers/video/aty/aty128fb.c
··· 2365 2365 static void aty128_set_suspend(struct aty128fb_par *par, int suspend) 2366 2366 { 2367 2367 u32 pmgt; 2368 - u16 pwr_command; 2369 2368 struct pci_dev *pdev = par->pdev; 2370 2369 2371 2370 if (!par->pm_reg)
+1 -1
drivers/watchdog/Kconfig
··· 406 406 ---help--- 407 407 Hardware driver for the intel TCO timer based watchdog devices. 408 408 These drivers are included in the Intel 82801 I/O Controller 409 - Hub family (from ICH0 up to ICH8) and in the Intel 6300ESB 409 + Hub family (from ICH0 up to ICH10) and in the Intel 63xxESB 410 410 controller hub. 411 411 412 412 The TCO (Total Cost of Ownership) timer is a watchdog timer
+2 -2
drivers/watchdog/at91rm9200_wdt.c
··· 107 107 static int at91_wdt_settimeout(int new_time) 108 108 { 109 109 /* 110 - * All counting occurs at SLOW_CLOCK / 128 = 0.256 Hz 110 + * All counting occurs at SLOW_CLOCK / 128 = 256 Hz 111 111 * 112 112 * Since WDV is a 16-bit counter, the maximum period is 113 - * 65536 / 0.256 = 256 seconds. 113 + * 65536 / 256 = 256 seconds. 114 114 */ 115 115 if ((new_time <= 0) || (new_time > WDT_MAX_TIME)) 116 116 return -EINVAL;
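The corrected numbers follow from the clock arithmetic, assuming the usual 32768 Hz AT91 slow clock (the 32768 figure is an assumption here; the 256 Hz and 256 s values are what the fixed comment states):

    #define SLOW_CLOCK_HZ  32768                  /* assumed AT91 slow clock */
    #define WDT_TICK_HZ    (SLOW_CLOCK_HZ / 128)  /* = 256 Hz, not 0.256 Hz */
    #define WDT_MAX_SECS   (65536 / WDT_TICK_HZ)  /* 16-bit WDV -> 256 s */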
+1
drivers/watchdog/at91sam9_wdt.c
··· 18 18 #include <linux/errno.h> 19 19 #include <linux/fs.h> 20 20 #include <linux/init.h> 21 + #include <linux/io.h> 21 22 #include <linux/kernel.h> 22 23 #include <linux/miscdevice.h> 23 24 #include <linux/module.h>
+28 -4
drivers/watchdog/iTCO_vendor_support.c
··· 1 1 /* 2 2 * intel TCO vendor specific watchdog driver support 3 3 * 4 - * (c) Copyright 2006-2008 Wim Van Sebroeck <wim@iguana.be>. 4 + * (c) Copyright 2006-2009 Wim Van Sebroeck <wim@iguana.be>. 5 5 * 6 6 * This program is free software; you can redistribute it and/or 7 7 * modify it under the terms of the GNU General Public License ··· 19 19 20 20 /* Module and version information */ 21 21 #define DRV_NAME "iTCO_vendor_support" 22 - #define DRV_VERSION "1.02" 22 + #define DRV_VERSION "1.03" 23 23 #define PFX DRV_NAME ": " 24 24 25 25 /* Includes */ ··· 76 76 * time is about 40 seconds, and the minimum hang time is about 77 77 * 20.6 seconds. 78 78 */ 79 + 80 + static void supermicro_old_pre_start(unsigned long acpibase) 81 + { 82 + unsigned long val32; 83 + 84 + /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */ 85 + val32 = inl(SMI_EN); 86 + val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */ 87 + outl(val32, SMI_EN); /* Needed to activate watchdog */ 88 + } 89 + 90 + static void supermicro_old_pre_stop(unsigned long acpibase) 91 + { 92 + unsigned long val32; 93 + 94 + /* Bit 13: TCO_EN -> 1 = Enables the TCO logic to generate SMI# */ 95 + val32 = inl(SMI_EN); 96 + val32 |= 0x00002000; /* Turn on SMI clearing watchdog */ 97 + outl(val32, SMI_EN); /* Needed to deactivate watchdog */ 98 + } 79 99 80 100 static void supermicro_old_pre_keepalive(unsigned long acpibase) 81 101 { ··· 248 228 void iTCO_vendor_pre_start(unsigned long acpibase, 249 229 unsigned int heartbeat) 250 230 { 251 - if (vendorsupport == SUPERMICRO_NEW_BOARD) 231 + if (vendorsupport == SUPERMICRO_OLD_BOARD) 232 + supermicro_old_pre_start(acpibase); 233 + else if (vendorsupport == SUPERMICRO_NEW_BOARD) 252 234 supermicro_new_pre_start(heartbeat); 253 235 } 254 236 EXPORT_SYMBOL(iTCO_vendor_pre_start); 255 237 256 238 void iTCO_vendor_pre_stop(unsigned long acpibase) 257 239 { 258 - if (vendorsupport == SUPERMICRO_NEW_BOARD) 240 + if (vendorsupport == SUPERMICRO_OLD_BOARD) 241 + supermicro_old_pre_stop(acpibase); 242 + else if (vendorsupport == SUPERMICRO_NEW_BOARD) 259 243 supermicro_new_pre_stop(); 260 244 } 261 245 EXPORT_SYMBOL(iTCO_vendor_pre_stop);
+14 -21
drivers/watchdog/iTCO_wdt.c
··· 1 1 /* 2 - * intel TCO Watchdog Driver (Used in i82801 and i6300ESB chipsets) 2 + * intel TCO Watchdog Driver (Used in i82801 and i63xxESB chipsets) 3 3 * 4 - * (c) Copyright 2006-2008 Wim Van Sebroeck <wim@iguana.be>. 4 + * (c) Copyright 2006-2009 Wim Van Sebroeck <wim@iguana.be>. 5 5 * 6 6 * This program is free software; you can redistribute it and/or 7 7 * modify it under the terms of the GNU General Public License ··· 63 63 64 64 /* Module and version information */ 65 65 #define DRV_NAME "iTCO_wdt" 66 - #define DRV_VERSION "1.04" 66 + #define DRV_VERSION "1.05" 67 67 #define PFX DRV_NAME ": " 68 68 69 69 /* Includes */ ··· 236 236 237 237 /* Address definitions for the TCO */ 238 238 /* TCO base address */ 239 - #define TCOBASE iTCO_wdt_private.ACPIBASE + 0x60 239 + #define TCOBASE iTCO_wdt_private.ACPIBASE + 0x60 240 240 /* SMI Control and Enable Register */ 241 - #define SMI_EN iTCO_wdt_private.ACPIBASE + 0x30 241 + #define SMI_EN iTCO_wdt_private.ACPIBASE + 0x30 242 242 243 243 #define TCO_RLD TCOBASE + 0x00 /* TCO Timer Reload and Curr. Value */ 244 244 #define TCOv1_TMR TCOBASE + 0x01 /* TCOv1 Timer Initial Value */ 245 - #define TCO_DAT_IN TCOBASE + 0x02 /* TCO Data In Register */ 246 - #define TCO_DAT_OUT TCOBASE + 0x03 /* TCO Data Out Register */ 247 - #define TCO1_STS TCOBASE + 0x04 /* TCO1 Status Register */ 248 - #define TCO2_STS TCOBASE + 0x06 /* TCO2 Status Register */ 245 + #define TCO_DAT_IN TCOBASE + 0x02 /* TCO Data In Register */ 246 + #define TCO_DAT_OUT TCOBASE + 0x03 /* TCO Data Out Register */ 247 + #define TCO1_STS TCOBASE + 0x04 /* TCO1 Status Register */ 248 + #define TCO2_STS TCOBASE + 0x06 /* TCO2 Status Register */ 249 249 #define TCO1_CNT TCOBASE + 0x08 /* TCO1 Control Register */ 250 250 #define TCO2_CNT TCOBASE + 0x0a /* TCO2 Control Register */ 251 251 #define TCOv2_TMR TCOBASE + 0x12 /* TCOv2 Timer Initial Value */ ··· 338 338 static int iTCO_wdt_start(void) 339 339 { 340 340 unsigned int val; 341 - unsigned long val32; 342 341 343 342 spin_lock(&iTCO_wdt_private.io_lock); 344 343 ··· 349 350 printk(KERN_ERR PFX "failed to reset NO_REBOOT flag, reboot disabled by hardware\n"); 350 351 return -EIO; 351 352 } 352 - 353 - /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */ 354 - val32 = inl(SMI_EN); 355 - val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */ 356 - outl(val32, SMI_EN); 357 353 358 354 /* Force the timer to its reload value by writing to the TCO_RLD 359 355 register */ ··· 372 378 static int iTCO_wdt_stop(void) 373 379 { 374 380 unsigned int val; 375 - unsigned long val32; 376 381 377 382 spin_lock(&iTCO_wdt_private.io_lock); 378 383 ··· 382 389 val |= 0x0800; 383 390 outw(val, TCO1_CNT); 384 391 val = inw(TCO1_CNT); 385 - 386 - /* Bit 13: TCO_EN -> 1 = Enables the TCO logic to generate SMI# */ 387 - val32 = inl(SMI_EN); 388 - val32 |= 0x00002000; 389 - outl(val32, SMI_EN); 390 392 391 393 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */ 392 394 iTCO_wdt_set_NO_REBOOT_bit(); ··· 637 649 int ret; 638 650 u32 base_address; 639 651 unsigned long RCBA; 652 + unsigned long val32; 640 653 641 654 /* 642 655 * Find the ACPI/PM base I/O address which is the base ··· 684 695 ret = -EIO; 685 696 goto out; 686 697 } 698 + /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */ 699 + val32 = inl(SMI_EN); 700 + val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */ 701 + outl(val32, SMI_EN); 687 702 688 703 /* The TCO I/O registers reside in a 32-byte range pointed to 689 704 by the TCOBASE value */
+3 -2
fs/bio.c
··· 302 302 struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs) 303 303 { 304 304 struct bio *bio = NULL; 305 + void *p; 305 306 306 307 if (bs) { 307 - void *p = mempool_alloc(bs->bio_pool, gfp_mask); 308 + p = mempool_alloc(bs->bio_pool, gfp_mask); 308 309 309 310 if (p) 310 311 bio = p + bs->front_pad; ··· 330 329 } 331 330 if (unlikely(!bvl)) { 332 331 if (bs) 333 - mempool_free(bio, bs->bio_pool); 332 + mempool_free(p, bs->bio_pool); 334 333 else 335 334 kfree(bio); 336 335 bio = NULL;
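The bio fix is about which pointer the mempool owns: the bio sits front_pad bytes inside the pool element, so freeing the bio pointer hands mempool_free() an address it never allocated. The invariant, as a generic sketch (not the bio code itself):

    void *elem = mempool_alloc(pool, gfp_mask);  /* what the pool handed out */
    struct obj *obj = elem + front_pad;          /* object embedded inside */
    /* ... on a later failure path ... */
    mempool_free(elem, pool);                    /* free elem, never obj */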
+37 -21
fs/btrfs/ctree.c
··· 38 38 static int del_ptr(struct btrfs_trans_handle *trans, struct btrfs_root *root, 39 39 struct btrfs_path *path, int level, int slot); 40 40 41 - inline void btrfs_init_path(struct btrfs_path *p) 42 - { 43 - memset(p, 0, sizeof(*p)); 44 - } 45 - 46 41 struct btrfs_path *btrfs_alloc_path(void) 47 42 { 48 43 struct btrfs_path *path; 49 - path = kmem_cache_alloc(btrfs_path_cachep, GFP_NOFS); 50 - if (path) { 51 - btrfs_init_path(path); 44 + path = kmem_cache_zalloc(btrfs_path_cachep, GFP_NOFS); 45 + if (path) 52 46 path->reada = 1; 53 - } 54 47 return path; 55 48 } 56 49 ··· 62 69 63 70 /* 64 71 * reset all the locked nodes in the patch to spinning locks. 72 + * 73 + * held is used to keep lockdep happy, when lockdep is enabled 74 + * we set held to a blocking lock before we go around and 75 + * retake all the spinlocks in the path. You can safely use NULL 76 + * for held 65 77 */ 66 - noinline void btrfs_clear_path_blocking(struct btrfs_path *p) 78 + noinline void btrfs_clear_path_blocking(struct btrfs_path *p, 79 + struct extent_buffer *held) 67 80 { 68 81 int i; 69 - for (i = 0; i < BTRFS_MAX_LEVEL; i++) { 82 + 83 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 84 + /* lockdep really cares that we take all of these spinlocks 85 + * in the right order. If any of the locks in the path are not 86 + * currently blocking, it is going to complain. So, make really 87 + * really sure by forcing the path to blocking before we clear 88 + * the path blocking. 89 + */ 90 + if (held) 91 + btrfs_set_lock_blocking(held); 92 + btrfs_set_path_blocking(p); 93 + #endif 94 + 95 + for (i = BTRFS_MAX_LEVEL - 1; i >= 0; i--) { 70 96 if (p->nodes[i] && p->locks[i]) 71 97 btrfs_clear_lock_blocking(p->nodes[i]); 72 98 } 99 + 100 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 101 + if (held) 102 + btrfs_clear_lock_blocking(held); 103 + #endif 73 104 } 74 105 75 106 /* this also releases the path */ ··· 303 286 trans->transid, level, &ins); 304 287 BUG_ON(ret); 305 288 cow = btrfs_init_new_buffer(trans, root, prealloc_dest, 306 - buf->len); 289 + buf->len, level); 307 290 } else { 308 291 cow = btrfs_alloc_free_block(trans, root, buf->len, 309 292 parent_start, ··· 934 917 935 918 /* promote the child to a root */ 936 919 child = read_node_slot(root, mid, 0); 920 + BUG_ON(!child); 937 921 btrfs_tree_lock(child); 938 922 btrfs_set_lock_blocking(child); 939 - BUG_ON(!child); 940 923 ret = btrfs_cow_block(trans, root, child, mid, 0, &child, 0); 941 924 BUG_ON(ret); 942 925 ··· 1583 1566 if (!p->skip_locking) 1584 1567 p->locks[level] = 1; 1585 1568 1586 - btrfs_clear_path_blocking(p); 1569 + btrfs_clear_path_blocking(p, NULL); 1587 1570 1588 1571 /* 1589 1572 * we have a lock on b and as long as we aren't changing ··· 1622 1605 1623 1606 btrfs_set_path_blocking(p); 1624 1607 sret = split_node(trans, root, p, level); 1625 - btrfs_clear_path_blocking(p); 1608 + btrfs_clear_path_blocking(p, NULL); 1626 1609 1627 1610 BUG_ON(sret > 0); 1628 1611 if (sret) { ··· 1642 1625 1643 1626 btrfs_set_path_blocking(p); 1644 1627 sret = balance_level(trans, root, p, level); 1645 - btrfs_clear_path_blocking(p); 1628 + btrfs_clear_path_blocking(p, NULL); 1646 1629 1647 1630 if (sret) { 1648 1631 ret = sret; ··· 1705 1688 if (!p->skip_locking) { 1706 1689 int lret; 1707 1690 1708 - btrfs_clear_path_blocking(p); 1691 + btrfs_clear_path_blocking(p, NULL); 1709 1692 lret = btrfs_try_spin_lock(b); 1710 1693 1711 1694 if (!lret) { 1712 1695 btrfs_set_path_blocking(p); 1713 1696 btrfs_tree_lock(b); 1714 - btrfs_clear_path_blocking(p); 1697 + 
btrfs_clear_path_blocking(p, b); 1715 1698 } 1716 1699 } 1717 1700 } else { ··· 1723 1706 btrfs_set_path_blocking(p); 1724 1707 sret = split_leaf(trans, root, key, 1725 1708 p, ins_len, ret == 0); 1726 - btrfs_clear_path_blocking(p); 1709 + btrfs_clear_path_blocking(p, NULL); 1727 1710 1728 1711 BUG_ON(sret > 0); 1729 1712 if (sret) { ··· 3943 3926 btrfs_release_path(root, path); 3944 3927 goto again; 3945 3928 } else { 3946 - btrfs_clear_path_blocking(path); 3947 3929 goto out; 3948 3930 } 3949 3931 } ··· 3962 3946 path->locks[level - 1] = 1; 3963 3947 path->nodes[level - 1] = cur; 3964 3948 unlock_up(path, level, 1); 3965 - btrfs_clear_path_blocking(path); 3949 + btrfs_clear_path_blocking(path, NULL); 3966 3950 } 3967 3951 out: 3968 3952 if (ret == 0)
+3 -8
fs/btrfs/ctree.h
··· 43 43 44 44 #define BTRFS_ACL_NOT_CACHED ((void *)-1) 45 45 46 - #ifdef CONFIG_LOCKDEP 47 - # define BTRFS_MAX_LEVEL 7 48 - #else 49 - # define BTRFS_MAX_LEVEL 8 50 - #endif 46 + #define BTRFS_MAX_LEVEL 8 51 47 52 48 /* holds pointers to all of the tree roots */ 53 49 #define BTRFS_ROOT_TREE_OBJECTID 1ULL ··· 1711 1715 u64 empty_size); 1712 1716 struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans, 1713 1717 struct btrfs_root *root, 1714 - u64 bytenr, u32 blocksize); 1718 + u64 bytenr, u32 blocksize, 1719 + int level); 1715 1720 int btrfs_alloc_extent(struct btrfs_trans_handle *trans, 1716 1721 struct btrfs_root *root, 1717 1722 u64 num_bytes, u64 parent, u64 min_bytes, ··· 1831 1834 void btrfs_release_path(struct btrfs_root *root, struct btrfs_path *p); 1832 1835 struct btrfs_path *btrfs_alloc_path(void); 1833 1836 void btrfs_free_path(struct btrfs_path *p); 1834 - void btrfs_init_path(struct btrfs_path *p); 1835 1837 void btrfs_set_path_blocking(struct btrfs_path *p); 1836 - void btrfs_clear_path_blocking(struct btrfs_path *p); 1837 1838 void btrfs_unlock_up_safe(struct btrfs_path *p, int level); 1838 1839 1839 1840 int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+45 -1
fs/btrfs/disk-io.c
··· 75 75 struct btrfs_work work; 76 76 }; 77 77 78 + /* These are used to set the lockdep class on the extent buffer locks. 79 + * The class is set by the readpage_end_io_hook after the buffer has 80 + * passed csum validation but before the pages are unlocked. 81 + * 82 + * The lockdep class is also set by btrfs_init_new_buffer on freshly 83 + * allocated blocks. 84 + * 85 + * The class is based on the level in the tree block, which allows lockdep 86 + * to know that lower nodes nest inside the locks of higher nodes. 87 + * 88 + * We also add a check to make sure the highest level of the tree is 89 + * the same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this 90 + * code needs update as well. 91 + */ 92 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 93 + # if BTRFS_MAX_LEVEL != 8 94 + # error 95 + # endif 96 + static struct lock_class_key btrfs_eb_class[BTRFS_MAX_LEVEL + 1]; 97 + static const char *btrfs_eb_name[BTRFS_MAX_LEVEL + 1] = { 98 + /* leaf */ 99 + "btrfs-extent-00", 100 + "btrfs-extent-01", 101 + "btrfs-extent-02", 102 + "btrfs-extent-03", 103 + "btrfs-extent-04", 104 + "btrfs-extent-05", 105 + "btrfs-extent-06", 106 + "btrfs-extent-07", 107 + /* highest possible level */ 108 + "btrfs-extent-08", 109 + }; 110 + #endif 111 + 78 112 /* 79 113 * extents on the btree inode are pretty simple, there's one extent 80 114 * that covers the entire device ··· 381 347 return ret; 382 348 } 383 349 350 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 351 + void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level) 352 + { 353 + lockdep_set_class_and_name(&eb->lock, 354 + &btrfs_eb_class[level], 355 + btrfs_eb_name[level]); 356 + } 357 + #endif 358 + 384 359 static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end, 385 360 struct extent_state *state) 386 361 { ··· 434 391 goto err; 435 392 } 436 393 found_level = btrfs_header_level(eb); 394 + 395 + btrfs_set_buffer_lockdep_class(eb, found_level); 437 396 438 397 ret = csum_tree_block(root, eb, 1); 439 398 if (ret) ··· 1822 1777 ret = find_and_setup_root(tree_root, fs_info, 1823 1778 BTRFS_DEV_TREE_OBJECTID, dev_root); 1824 1779 dev_root->track_dirty = 1; 1825 - 1826 1780 if (ret) 1827 1781 goto fail_extent_root; 1828 1782
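The comment block above describes a standard lockdep idiom: give each nesting depth its own lock class so that taking a child node's lock while holding the parent's is seen as an ordered pair of distinct classes rather than recursion on a single class. A reduced, generic sketch of the same idiom (hypothetical names):

    static struct lock_class_key level_key[8];

    static void tree_node_lock_init(spinlock_t *lock, int level)
    {
            spin_lock_init(lock);
            /* one class per level: lockdep then accepts holding
             * level N while acquiring level N - 1 */
            lockdep_set_class(lock, &level_key[level]);
    }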
+10
fs/btrfs/disk-io.h
··· 101 101 int btrfs_add_log_tree(struct btrfs_trans_handle *trans, 102 102 struct btrfs_root *root); 103 103 int btree_lock_page_hook(struct page *page); 104 + 105 + 106 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 107 + void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level); 108 + #else 109 + static inline void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, 110 + int level) 111 + { 112 + } 113 + #endif 104 114 #endif
+51 -32
fs/btrfs/extent-tree.c
··· 1323 1323 int btrfs_extent_post_op(struct btrfs_trans_handle *trans, 1324 1324 struct btrfs_root *root) 1325 1325 { 1326 - finish_current_insert(trans, root->fs_info->extent_root, 1); 1327 - del_pending_extents(trans, root->fs_info->extent_root, 1); 1326 + u64 start; 1327 + u64 end; 1328 + int ret; 1329 + 1330 + while(1) { 1331 + finish_current_insert(trans, root->fs_info->extent_root, 1); 1332 + del_pending_extents(trans, root->fs_info->extent_root, 1); 1333 + 1334 + /* is there more work to do? */ 1335 + ret = find_first_extent_bit(&root->fs_info->pending_del, 1336 + 0, &start, &end, EXTENT_WRITEBACK); 1337 + if (!ret) 1338 + continue; 1339 + ret = find_first_extent_bit(&root->fs_info->extent_ins, 1340 + 0, &start, &end, EXTENT_WRITEBACK); 1341 + if (!ret) 1342 + continue; 1343 + break; 1344 + } 1328 1345 return 0; 1329 1346 } 1330 1347 ··· 2228 2211 u64 end; 2229 2212 u64 priv; 2230 2213 u64 search = 0; 2231 - u64 skipped = 0; 2232 2214 struct btrfs_fs_info *info = extent_root->fs_info; 2233 2215 struct btrfs_path *path; 2234 2216 struct pending_extent_op *extent_op, *tmp; 2235 2217 struct list_head insert_list, update_list; 2236 2218 int ret; 2237 - int num_inserts = 0, max_inserts; 2219 + int num_inserts = 0, max_inserts, restart = 0; 2238 2220 2239 2221 path = btrfs_alloc_path(); 2240 2222 INIT_LIST_HEAD(&insert_list); ··· 2249 2233 ret = find_first_extent_bit(&info->extent_ins, search, &start, 2250 2234 &end, EXTENT_WRITEBACK); 2251 2235 if (ret) { 2252 - if (skipped && all && !num_inserts && 2236 + if (restart && !num_inserts && 2253 2237 list_empty(&update_list)) { 2254 - skipped = 0; 2238 + restart = 0; 2255 2239 search = 0; 2256 2240 continue; 2257 2241 } 2258 - mutex_unlock(&info->extent_ins_mutex); 2259 2242 break; 2260 2243 } 2261 2244 2262 2245 ret = try_lock_extent(&info->extent_ins, start, end, GFP_NOFS); 2263 2246 if (!ret) { 2264 - skipped = 1; 2247 + if (all) 2248 + restart = 1; 2265 2249 search = end + 1; 2266 2250 if (need_resched()) { 2267 2251 mutex_unlock(&info->extent_ins_mutex); ··· 2280 2264 list_add_tail(&extent_op->list, &insert_list); 2281 2265 search = end + 1; 2282 2266 if (num_inserts == max_inserts) { 2283 - mutex_unlock(&info->extent_ins_mutex); 2267 + restart = 1; 2284 2268 break; 2285 2269 } 2286 2270 } else if (extent_op->type == PENDING_BACKREF_UPDATE) { ··· 2296 2280 * somebody marked this thing for deletion then just unlock it and be 2297 2281 * done, the free_extents will handle it 2298 2282 */ 2299 - mutex_lock(&info->extent_ins_mutex); 2300 2283 list_for_each_entry_safe(extent_op, tmp, &update_list, list) { 2301 2284 clear_extent_bits(&info->extent_ins, extent_op->bytenr, 2302 2285 extent_op->bytenr + extent_op->num_bytes - 1, ··· 2317 2302 if (!list_empty(&update_list)) { 2318 2303 ret = update_backrefs(trans, extent_root, path, &update_list); 2319 2304 BUG_ON(ret); 2305 + 2306 + /* we may have COW'ed new blocks, so lets start over */ 2307 + if (all) 2308 + restart = 1; 2320 2309 } 2321 2310 2322 2311 /* ··· 2328 2309 * need to make sure everything is cleaned then reset everything and 2329 2310 * go back to the beginning 2330 2311 */ 2331 - if (!num_inserts && all && skipped) { 2312 + if (!num_inserts && restart) { 2332 2313 search = 0; 2333 - skipped = 0; 2314 + restart = 0; 2334 2315 INIT_LIST_HEAD(&update_list); 2335 2316 INIT_LIST_HEAD(&insert_list); 2336 2317 goto again; ··· 2387 2368 BUG_ON(ret); 2388 2369 2389 2370 /* 2390 - * if we broke out of the loop in order to insert stuff because we hit 2391 - * the maximum number of inserts 
at a time we can handle, then loop 2392 - * back and pick up where we left off 2371 + * if restart is set for whatever reason we need to go back and start 2372 + * searching through the pending list again. 2373 + * 2374 + * We just inserted some extents, which could have resulted in new 2375 + * blocks being allocated, which would result in new blocks needing 2376 + * updates, so if all is set we _must_ restart to get the updated 2377 + * blocks. 2393 2378 */ 2394 - if (num_inserts == max_inserts) { 2395 - INIT_LIST_HEAD(&insert_list); 2396 - INIT_LIST_HEAD(&update_list); 2397 - num_inserts = 0; 2398 - goto again; 2399 - } 2400 - 2401 - /* 2402 - * again, if we need to make absolutely sure there are no more pending 2403 - * extent operations left and we know that we skipped some, go back to 2404 - * the beginning and do it all again 2405 - */ 2406 - if (all && skipped) { 2379 + if (restart || all) { 2407 2380 INIT_LIST_HEAD(&insert_list); 2408 2381 INIT_LIST_HEAD(&update_list); 2409 2382 search = 0; 2410 - skipped = 0; 2383 + restart = 0; 2411 2384 num_inserts = 0; 2412 2385 goto again; 2413 2386 } ··· 2720 2709 goto again; 2721 2710 } 2722 2711 2712 + if (!err) 2713 + finish_current_insert(trans, extent_root, 0); 2723 2714 return err; 2724 2715 } 2725 2716 ··· 2872 2859 2873 2860 if (data & BTRFS_BLOCK_GROUP_METADATA) { 2874 2861 last_ptr = &root->fs_info->last_alloc; 2875 - empty_cluster = 64 * 1024; 2862 + if (!btrfs_test_opt(root, SSD)) 2863 + empty_cluster = 64 * 1024; 2876 2864 } 2877 2865 2878 2866 if ((data & BTRFS_BLOCK_GROUP_DATA) && btrfs_test_opt(root, SSD)) ··· 3416 3402 3417 3403 struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans, 3418 3404 struct btrfs_root *root, 3419 - u64 bytenr, u32 blocksize) 3405 + u64 bytenr, u32 blocksize, 3406 + int level) 3420 3407 { 3421 3408 struct extent_buffer *buf; 3422 3409 ··· 3425 3410 if (!buf) 3426 3411 return ERR_PTR(-ENOMEM); 3427 3412 btrfs_set_header_generation(buf, trans->transid); 3413 + btrfs_set_buffer_lockdep_class(buf, level); 3428 3414 btrfs_tree_lock(buf); 3429 3415 clean_tree_block(trans, root, buf); 3430 3416 ··· 3469 3453 return ERR_PTR(ret); 3470 3454 } 3471 3455 3472 - buf = btrfs_init_new_buffer(trans, root, ins.objectid, blocksize); 3456 + buf = btrfs_init_new_buffer(trans, root, ins.objectid, 3457 + blocksize, level); 3473 3458 return buf; 3474 3459 } 3475 3460 ··· 5658 5641 prev_block = block_start; 5659 5642 } 5660 5643 5644 + mutex_lock(&extent_root->fs_info->trans_mutex); 5661 5645 btrfs_record_root_in_trans(found_root); 5646 + mutex_unlock(&extent_root->fs_info->trans_mutex); 5662 5647 if (ref_path->owner_objectid >= BTRFS_FIRST_FREE_OBJECTID) { 5663 5648 /* 5664 5649 * try to update data extent references while
-2
fs/btrfs/extent_io.c
··· 415 415 416 416 node = tree_insert(&tree->state, prealloc->end, &prealloc->rb_node); 417 417 if (node) { 418 - struct extent_state *found; 419 - found = rb_entry(node, struct extent_state, rb_node); 420 418 free_extent_state(prealloc); 421 419 return -EEXIST; 422 420 }
+4 -4
fs/btrfs/file.c
··· 1222 1222 /* 1223 1223 * ok we haven't committed the transaction yet, lets do a commit 1224 1224 */ 1225 - if (file->private_data) 1225 + if (file && file->private_data) 1226 1226 btrfs_ioctl_trans_end(file); 1227 1227 1228 1228 trans = btrfs_start_transaction(root, 1); ··· 1231 1231 goto out; 1232 1232 } 1233 1233 1234 - ret = btrfs_log_dentry_safe(trans, root, file->f_dentry); 1234 + ret = btrfs_log_dentry_safe(trans, root, dentry); 1235 1235 if (ret < 0) 1236 1236 goto out; 1237 1237 ··· 1245 1245 * file again, but that will end up using the synchronization 1246 1246 * inside btrfs_sync_log to keep things safe. 1247 1247 */ 1248 - mutex_unlock(&file->f_dentry->d_inode->i_mutex); 1248 + mutex_unlock(&dentry->d_inode->i_mutex); 1249 1249 1250 1250 if (ret > 0) { 1251 1251 ret = btrfs_commit_transaction(trans, root); ··· 1253 1253 btrfs_sync_log(trans, root); 1254 1254 ret = btrfs_end_transaction(trans, root); 1255 1255 } 1256 - mutex_lock(&file->f_dentry->d_inode->i_mutex); 1256 + mutex_lock(&dentry->d_inode->i_mutex); 1257 1257 out: 1258 1258 return ret > 0 ? EIO : ret; 1259 1259 }
-1
fs/btrfs/inode-map.c
··· 84 84 search_key.type = 0; 85 85 search_key.offset = 0; 86 86 87 - btrfs_init_path(path); 88 87 start_found = 0; 89 88 ret = btrfs_search_slot(trans, root, &search_key, path, 0, 0); 90 89 if (ret < 0)
+1 -3
fs/btrfs/inode.c
··· 2531 2531 key.offset = (u64)-1; 2532 2532 key.type = (u8)-1; 2533 2533 2534 - btrfs_init_path(path); 2535 - 2536 2534 search_again: 2537 2535 ret = btrfs_search_slot(trans, root, &key, path, -1, 1); 2538 2536 if (ret < 0) ··· 4261 4263 { 4262 4264 if (PageWriteback(page) || PageDirty(page)) 4263 4265 return 0; 4264 - return __btrfs_releasepage(page, gfp_flags); 4266 + return __btrfs_releasepage(page, gfp_flags & GFP_NOFS); 4265 4267 } 4266 4268 4267 4269 static void btrfs_invalidatepage(struct page *page, unsigned long offset)
-11
fs/btrfs/locking.c
··· 25 25 #include "extent_io.h" 26 26 #include "locking.h" 27 27 28 - /* 29 - * btrfs_header_level() isn't free, so don't call it when lockdep isn't 30 - * on 31 - */ 32 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 33 - static inline void spin_nested(struct extent_buffer *eb) 34 - { 35 - spin_lock_nested(&eb->lock, BTRFS_MAX_LEVEL - btrfs_header_level(eb)); 36 - } 37 - #else 38 28 static inline void spin_nested(struct extent_buffer *eb) 39 29 { 40 30 spin_lock(&eb->lock); 41 31 } 42 - #endif 43 32 44 33 /* 45 34 * Setting a lock to blocking will drop the spinlock and set the
+4 -1
fs/btrfs/super.c
··· 379 379 btrfs_start_delalloc_inodes(root); 380 380 btrfs_wait_ordered_extents(root, 0); 381 381 382 - btrfs_clean_old_snapshots(root); 383 382 trans = btrfs_start_transaction(root, 1); 384 383 ret = btrfs_commit_transaction(trans, root); 385 384 sb->s_dirt = 0; ··· 509 510 { 510 511 struct btrfs_root *root = btrfs_sb(sb); 511 512 int ret; 513 + 514 + ret = btrfs_parse_options(root, data); 515 + if (ret) 516 + return -EINVAL; 512 517 513 518 if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) 514 519 return 0;
+2
fs/btrfs/transaction.c
··· 688 688 num_bytes -= btrfs_root_used(&dirty->root->root_item); 689 689 bytes_used = btrfs_root_used(&root->root_item); 690 690 if (num_bytes) { 691 + mutex_lock(&root->fs_info->trans_mutex); 691 692 btrfs_record_root_in_trans(root); 693 + mutex_unlock(&root->fs_info->trans_mutex); 692 694 btrfs_set_root_used(&root->root_item, 693 695 bytes_used - num_bytes); 694 696 }
+2
fs/btrfs/tree-log.c
··· 2832 2832 BUG_ON(!wc.replay_dest); 2833 2833 2834 2834 wc.replay_dest->log_root = log; 2835 + mutex_lock(&fs_info->trans_mutex); 2835 2836 btrfs_record_root_in_trans(wc.replay_dest); 2837 + mutex_unlock(&fs_info->trans_mutex); 2836 2838 ret = walk_log_tree(trans, log, &wc); 2837 2839 BUG_ON(ret); 2838 2840
+2 -4
fs/btrfs/volumes.c
··· 2894 2894 free_extent_map(em); 2895 2895 } 2896 2896 2897 - map = kzalloc(sizeof(*map), GFP_NOFS); 2898 - if (!map) 2899 - return -ENOMEM; 2900 - 2901 2897 em = alloc_extent_map(GFP_NOFS); 2902 2898 if (!em) 2903 2899 return -ENOMEM; ··· 3102 3106 if (!sb) 3103 3107 return -ENOMEM; 3104 3108 btrfs_set_buffer_uptodate(sb); 3109 + btrfs_set_buffer_lockdep_class(sb, 0); 3110 + 3105 3111 write_extent_buffer(sb, super_copy, 0, BTRFS_SUPER_INFO_SIZE); 3106 3112 array_size = btrfs_super_sys_array_size(super_copy); 3107 3113
+2 -1
fs/buffer.c
··· 777 777 __inc_zone_page_state(page, NR_FILE_DIRTY); 778 778 __inc_bdi_stat(mapping->backing_dev_info, 779 779 BDI_RECLAIMABLE); 780 + task_dirty_inc(current); 780 781 task_io_account_write(PAGE_CACHE_SIZE); 781 782 } 782 783 radix_tree_tag_set(&mapping->page_tree, ··· 3109 3108 if (test_clear_buffer_dirty(bh)) { 3110 3109 get_bh(bh); 3111 3110 bh->b_end_io = end_buffer_write_sync; 3112 - ret = submit_bh(WRITE_SYNC, bh); 3111 + ret = submit_bh(WRITE, bh); 3113 3112 wait_on_buffer(bh); 3114 3113 if (buffer_eopnotsupp(bh)) { 3115 3114 clear_buffer_eopnotsupp(bh);
+14 -1
fs/cifs/CHANGES
··· 1 + Version 1.57 2 + ------------ 3 + Improve support for multiple security contexts to the same server. We 4 + used to use the same "vcnumber" for all connections which could cause 5 + the server to treat subsequent connections, especially those that 6 + are authenticated as guest, as reconnections, invalidating the earlier 7 + user's smb session. This fix allows cifs to mount multiple times to the 8 + same server with different userids without risking invalidating earlier 9 + established security contexts. 10 + 1 11 Version 1.56 2 12 ------------ 3 13 Add "forcemandatorylock" mount option to allow user to use mandatory ··· 17 7 top of the share. Fix problem in 2.6.28 resolving DFS paths to 18 8 Samba servers (worked to Windows). Fix rmdir so that pending search 19 9 (readdir) requests do not get invalid results which include the now 20 - removed directory. 10 + removed directory. Fix oops in cifs_dfs_ref.c when prefixpath is not reachable 11 + when using DFS. Add better file create support to servers which support 12 + the CIFS POSIX protocol extensions (this adds support for new flags 13 + on create, and improves semantics for write of locked ranges). 21 14 22 15 Version 1.55 23 16 ------------
+1 -1
fs/cifs/cifsfs.h
··· 100 100 extern const struct export_operations cifs_export_ops; 101 101 #endif /* EXPERIMENTAL */ 102 102 103 - #define CIFS_VERSION "1.56" 103 + #define CIFS_VERSION "1.57" 104 104 #endif /* _CIFSFS_H */
+5 -1
fs/cifs/cifsglob.h
··· 164 164 /* multiplexed reads or writes */ 165 165 unsigned int maxBuf; /* maxBuf specifies the maximum */ 166 166 /* message size the server can send or receive for non-raw SMBs */ 167 - unsigned int maxRw; /* maxRw specifies the maximum */ 167 + unsigned int max_rw; /* maxRw specifies the maximum */ 168 168 /* message size the server can send or receive for */ 169 169 /* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */ 170 + unsigned int max_vcs; /* maximum number of smb sessions, at least 171 + those that can be specified uniquely with 172 + vcnumbers */ 170 173 char sessid[4]; /* unique token id for this session */ 171 174 /* (returned on Negotiate */ 172 175 int capabilities; /* allow selective disabling of caps by smb sess */ ··· 213 210 unsigned overrideSecFlg; /* if non-zero override global sec flags */ 214 211 __u16 ipc_tid; /* special tid for connection to IPC share */ 215 212 __u16 flags; 213 + __u16 vcnum; 216 214 char *serverOS; /* name of operating system underlying server */ 217 215 char *serverNOS; /* name of network operating system of server */ 218 216 char *serverDomain; /* security realm of server */
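These two fields are what the Version 1.57 changelog entry refers to: max_vcs records the server's advertised session limit and vcnum gives each SMB session its own virtual-circuit number. A speculative sketch of how a free vcnum might be picked (the allocation logic is not shown in this hunk; vcnum_in_use() is an assumed helper):

    /* Hypothetical, not the commit's code: take the lowest VC number no
     * other session on this connection uses. VC 0 is avoided because
     * reusing it is what let servers invalidate the earlier session. */
    static __u16 pick_free_vcnum(struct TCP_Server_Info *server)
    {
            __u16 vc;

            for (vc = 1; vc < server->max_vcs; vc++)
                    if (!vcnum_in_use(server, vc))
                            return vc;
            return 0;                       /* none free: fall back to VC 0 */
    }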
+4
fs/cifs/cifsproto.h
··· 42 42 #define GetXid() (int)_GetXid(); cFYI(1,("CIFS VFS: in %s as Xid: %d with uid: %d",__func__, xid,current_fsuid())); 43 43 #define FreeXid(curr_xid) {_FreeXid(curr_xid); cFYI(1,("CIFS VFS: leaving %s (xid = %d) rc = %d",__func__,curr_xid,(int)rc));} 44 44 extern char *build_path_from_dentry(struct dentry *); 45 + extern char *cifs_build_path_to_root(struct cifs_sb_info *cifs_sb); 45 46 extern char *build_wildcard_path_from_dentry(struct dentry *direntry); 46 47 /* extern void renew_parental_timestamps(struct dentry *direntry);*/ 47 48 extern int SendReceive(const unsigned int /* xid */ , struct cifsSesInfo *, ··· 92 91 extern __le64 cnvrtDosCifsTm(__u16 date, __u16 time); 93 92 extern struct timespec cnvrtDosUnixTm(__u16 date, __u16 time); 94 93 94 + extern void posix_fill_in_inode(struct inode *tmp_inode, 95 + FILE_UNIX_BASIC_INFO *pData, int isNewInode); 96 + extern struct inode *cifs_new_inode(struct super_block *sb, __u64 *inum); 95 97 extern int cifs_get_inode_info(struct inode **pinode, 96 98 const unsigned char *search_path, 97 99 FILE_ALL_INFO *pfile_info,
+4 -3
fs/cifs/cifssmb.c
··· 528 528 server->maxReq = le16_to_cpu(rsp->MaxMpxCount); 529 529 server->maxBuf = min((__u32)le16_to_cpu(rsp->MaxBufSize), 530 530 (__u32)CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); 531 + server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs); 531 532 GETU32(server->sessid) = le32_to_cpu(rsp->SessionKey); 532 533 /* even though we do not use raw we might as well set this 533 534 accurately, in case we ever find a need for it */ 534 535 if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) { 535 - server->maxRw = 0xFF00; 536 + server->max_rw = 0xFF00; 536 537 server->capabilities = CAP_MPX_MODE | CAP_RAW_MODE; 537 538 } else { 538 - server->maxRw = 0;/* we do not need to use raw anyway */ 539 + server->max_rw = 0;/* do not need to use raw anyway */ 539 540 server->capabilities = CAP_MPX_MODE; 540 541 } 541 542 tmp = (__s16)le16_to_cpu(rsp->ServerTimeZone); ··· 639 638 /* probably no need to store and check maxvcs */ 640 639 server->maxBuf = min(le32_to_cpu(pSMBr->MaxBufferSize), 641 640 (__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); 642 - server->maxRw = le32_to_cpu(pSMBr->MaxRawSize); 641 + server->max_rw = le32_to_cpu(pSMBr->MaxRawSize); 643 642 cFYI(DBG2, ("Max buf = %d", ses->server->maxBuf)); 644 643 GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey); 645 644 server->capabilities = le32_to_cpu(pSMBr->Capabilities);
+48 -3
fs/cifs/connect.c
··· 23 23 #include <linux/string.h> 24 24 #include <linux/list.h> 25 25 #include <linux/wait.h> 26 - #include <linux/ipv6.h> 27 26 #include <linux/pagemap.h> 28 27 #include <linux/ctype.h> 29 28 #include <linux/utsname.h> ··· 34 35 #include <linux/freezer.h> 35 36 #include <asm/uaccess.h> 36 37 #include <asm/processor.h> 38 + #include <net/ipv6.h> 37 39 #include "cifspdu.h" 38 40 #include "cifsglob.h" 39 41 #include "cifsproto.h" ··· 1379 1379 server->addr.sockAddr.sin_addr.s_addr)) 1380 1380 continue; 1381 1381 else if (addr->ss_family == AF_INET6 && 1382 - memcmp(&server->addr.sockAddr6.sin6_addr, 1383 - &addr6->sin6_addr, sizeof(addr6->sin6_addr))) 1382 + !ipv6_addr_equal(&server->addr.sockAddr6.sin6_addr, 1383 + &addr6->sin6_addr)) 1384 1384 continue; 1385 1385 1386 1386 ++server->srv_count; ··· 2180 2180 "mount option supported")); 2181 2181 } 2182 2182 2183 + static int 2184 + is_path_accessible(int xid, struct cifsTconInfo *tcon, 2185 + struct cifs_sb_info *cifs_sb, const char *full_path) 2186 + { 2187 + int rc; 2188 + __u64 inode_num; 2189 + FILE_ALL_INFO *pfile_info; 2190 + 2191 + rc = CIFSGetSrvInodeNumber(xid, tcon, full_path, &inode_num, 2192 + cifs_sb->local_nls, 2193 + cifs_sb->mnt_cifs_flags & 2194 + CIFS_MOUNT_MAP_SPECIAL_CHR); 2195 + if (rc != -EOPNOTSUPP) 2196 + return rc; 2197 + 2198 + pfile_info = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL); 2199 + if (pfile_info == NULL) 2200 + return -ENOMEM; 2201 + 2202 + rc = CIFSSMBQPathInfo(xid, tcon, full_path, pfile_info, 2203 + 0 /* not legacy */, cifs_sb->local_nls, 2204 + cifs_sb->mnt_cifs_flags & 2205 + CIFS_MOUNT_MAP_SPECIAL_CHR); 2206 + kfree(pfile_info); 2207 + return rc; 2208 + } 2209 + 2183 2210 int 2184 2211 cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb, 2185 2212 char *mount_data, const char *devname) ··· 2217 2190 struct cifsSesInfo *pSesInfo = NULL; 2218 2191 struct cifsTconInfo *tcon = NULL; 2219 2192 struct TCP_Server_Info *srvTcp = NULL; 2193 + char *full_path; 2220 2194 2221 2195 xid = GetXid(); 2222 2196 ··· 2453 2425 if (!(tcon->ses->capabilities & CAP_LARGE_READ_X)) 2454 2426 cifs_sb->rsize = min(cifs_sb->rsize, 2455 2427 (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE)); 2428 + 2429 + if (!rc && cifs_sb->prepathlen) { 2430 + /* build_path_to_root works only when we have a valid tcon */ 2431 + full_path = cifs_build_path_to_root(cifs_sb); 2432 + if (full_path == NULL) { 2433 + rc = -ENOMEM; 2434 + goto mount_fail_check; 2435 + } 2436 + rc = is_path_accessible(xid, tcon, cifs_sb, full_path); 2437 + if (rc) { 2438 + cERROR(1, ("Path %s is not accessible: %d", 2439 + full_path, rc)); 2440 + kfree(full_path); 2441 + goto mount_fail_check; 2442 + } 2443 + kfree(full_path); 2444 + } 2456 2445 2457 2446 /* volume_info->password is freed above when existing session found 2458 2447 (in which case it is not needed anymore) but when new session is created
+202 -99
fs/cifs/dir.c
··· 3 3 * 4 4 * vfs operations that deal with dentries 5 5 * 6 - * Copyright (C) International Business Machines Corp., 2002,2008 6 + * Copyright (C) International Business Machines Corp., 2002,2009 7 7 * Author(s): Steve French (sfrench@us.ibm.com) 8 8 * 9 9 * This library is free software; you can redistribute it and/or modify ··· 129 129 return full_path; 130 130 } 131 131 132 + static int cifs_posix_open(char *full_path, struct inode **pinode, 133 + struct super_block *sb, int mode, int oflags, 134 + int *poplock, __u16 *pnetfid, int xid) 135 + { 136 + int rc; 137 + __u32 oplock; 138 + FILE_UNIX_BASIC_INFO *presp_data; 139 + __u32 posix_flags = 0; 140 + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 141 + 142 + cFYI(1, ("posix open %s", full_path)); 143 + 144 + presp_data = kzalloc(sizeof(FILE_UNIX_BASIC_INFO), GFP_KERNEL); 145 + if (presp_data == NULL) 146 + return -ENOMEM; 147 + 148 + /* So far cifs posix extensions can only map the following flags. 149 + There are other valid fmode oflags such as FMODE_LSEEK, FMODE_PREAD, but 150 + so far we do not seem to need them, and we can treat them as local only */ 151 + if ((oflags & (FMODE_READ | FMODE_WRITE)) == 152 + (FMODE_READ | FMODE_WRITE)) 153 + posix_flags = SMB_O_RDWR; 154 + else if (oflags & FMODE_READ) 155 + posix_flags = SMB_O_RDONLY; 156 + else if (oflags & FMODE_WRITE) 157 + posix_flags = SMB_O_WRONLY; 158 + if (oflags & O_CREAT) 159 + posix_flags |= SMB_O_CREAT; 160 + if (oflags & O_EXCL) 161 + posix_flags |= SMB_O_EXCL; 162 + if (oflags & O_TRUNC) 163 + posix_flags |= SMB_O_TRUNC; 164 + if (oflags & O_APPEND) 165 + posix_flags |= SMB_O_APPEND; 166 + if (oflags & O_SYNC) 167 + posix_flags |= SMB_O_SYNC; 168 + if (oflags & O_DIRECTORY) 169 + posix_flags |= SMB_O_DIRECTORY; 170 + if (oflags & O_NOFOLLOW) 171 + posix_flags |= SMB_O_NOFOLLOW; 172 + if (oflags & O_DIRECT) 173 + posix_flags |= SMB_O_DIRECT; 174 + 175 + 176 + rc = CIFSPOSIXCreate(xid, cifs_sb->tcon, posix_flags, mode, 177 + pnetfid, presp_data, &oplock, full_path, 178 + cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & 179 + CIFS_MOUNT_MAP_SPECIAL_CHR); 180 + if (rc) 181 + goto posix_open_ret; 182 + 183 + if (presp_data->Type == cpu_to_le32(-1)) 184 + goto posix_open_ret; /* open ok, caller does qpathinfo */ 185 + 186 + /* get new inode and set it up */ 187 + if (!pinode) 188 + goto posix_open_ret; /* caller does not need info */ 189 + 190 + *pinode = cifs_new_inode(sb, &presp_data->UniqueId); 191 + 192 + /* We do not need to close the file if new_inode fails since 193 + the caller will retry qpathinfo as long as inode is null */ 194 + if (*pinode == NULL) 195 + goto posix_open_ret; 196 + 197 + posix_fill_in_inode(*pinode, presp_data, 1); 198 + 199 + posix_open_ret: 200 + kfree(presp_data); 201 + return rc; 202 + } 203 + 132 204 static void setup_cifs_dentry(struct cifsTconInfo *tcon, 133 205 struct dentry *direntry, 134 206 struct inode *newinode) ··· 222 150 int xid; 223 151 int create_options = CREATE_NOT_DIR; 224 152 int oplock = 0; 225 - /* BB below access is too much for the mknod to request */ 153 + int oflags; 154 + /* 155 + * BB below access is probably too much for mknod to request 156 + * but we have to do query and setpathinfo so requesting 157 + * less could fail (unless we want to request only getattr and 158 + * setattr permissions). At least for POSIX we do not have to 159 + * request so much. 
160 + */ 226 161 int desiredAccess = GENERIC_READ | GENERIC_WRITE; 227 162 __u16 fileHandle; 228 163 struct cifs_sb_info *cifs_sb; ··· 253 174 } 254 175 255 176 mode &= ~current->fs->umask; 177 + if (oplockEnabled) 178 + oplock = REQ_OPLOCK; 179 + 180 + if (nd && (nd->flags & LOOKUP_OPEN)) 181 + oflags = nd->intent.open.flags; 182 + else 183 + oflags = FMODE_READ; 184 + 185 + if (tcon->unix_ext && (tcon->ses->capabilities & CAP_UNIX) && 186 + (CIFS_UNIX_POSIX_PATH_OPS_CAP & 187 + le64_to_cpu(tcon->fsUnixInfo.Capability))) { 188 + rc = cifs_posix_open(full_path, &newinode, inode->i_sb, 189 + mode, oflags, &oplock, &fileHandle, xid); 190 + /* EIO could indicate that (posix open) operation is not 191 + supported, despite what the server claimed in capability 192 + negotiation. EREMOTE indicates DFS junction, which is not 193 + handled in posix open */ 194 + 195 + if ((rc == 0) && (newinode == NULL)) 196 + goto cifs_create_get_file_info; /* query inode info */ 197 + else if (rc == 0) /* success, no need to query */ 198 + goto cifs_create_set_dentry; 199 + else if ((rc != -EIO) && (rc != -EREMOTE) && 200 + (rc != -EOPNOTSUPP)) /* path not found or net err */ 201 + goto cifs_create_out; 202 + /* else fall through to retry, using the older open call; this is 203 + the case where the server does not support this SMB level, and 204 + falsely claims the capability (we also get here for the DFS case, 205 + which should be rare for paths not covered on files) */ 206 + } 256 207 257 208 if (nd && (nd->flags & LOOKUP_OPEN)) { 258 - int oflags = nd->intent.open.flags; 259 - 209 + /* if the file is going to stay open, then we 210 + need to set the desired access properly */ 260 211 desiredAccess = 0; 261 212 if (oflags & FMODE_READ) 262 - desiredAccess |= GENERIC_READ; 213 + desiredAccess |= GENERIC_READ; /* is this too little? */ 263 214 if (oflags & FMODE_WRITE) { 264 215 desiredAccess |= GENERIC_WRITE; 265 216 if (!(oflags & FMODE_READ)) ··· 308 199 309 200 /* BB add processing to set equivalent of mode - e.g. 
via CreateX with 310 201 ACLs */ 311 - if (oplockEnabled) 312 - oplock = REQ_OPLOCK; 313 202 314 203 buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL); 315 204 if (buf == NULL) { ··· 340 233 } 341 234 if (rc) { 342 235 cFYI(1, ("cifs_create returned 0x%x", rc)); 343 - } else { 344 - /* If Open reported that we actually created a file 345 - then we now have to set the mode if possible */ 346 - if ((tcon->unix_ext) && (oplock & CIFS_CREATE_ACTION)) { 347 - struct cifs_unix_set_info_args args = { 236 + goto cifs_create_out; 237 + } 238 + 239 + /* If Open reported that we actually created a file 240 + then we now have to set the mode if possible */ 241 + if ((tcon->unix_ext) && (oplock & CIFS_CREATE_ACTION)) { 242 + struct cifs_unix_set_info_args args = { 348 243 .mode = mode, 349 244 .ctime = NO_CHANGE_64, 350 245 .atime = NO_CHANGE_64, 351 246 .mtime = NO_CHANGE_64, 352 247 .device = 0, 353 - }; 248 + }; 354 249 355 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) { 356 - args.uid = (__u64) current_fsuid(); 357 - if (inode->i_mode & S_ISGID) 358 - args.gid = (__u64) inode->i_gid; 359 - else 360 - args.gid = (__u64) current_fsgid(); 361 - } else { 362 - args.uid = NO_CHANGE_64; 363 - args.gid = NO_CHANGE_64; 364 - } 365 - CIFSSMBUnixSetInfo(xid, tcon, full_path, &args, 366 - cifs_sb->local_nls, 367 - cifs_sb->mnt_cifs_flags & 368 - CIFS_MOUNT_MAP_SPECIAL_CHR); 250 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) { 251 + args.uid = (__u64) current_fsuid(); 252 + if (inode->i_mode & S_ISGID) 253 + args.gid = (__u64) inode->i_gid; 254 + else 255 + args.gid = (__u64) current_fsgid(); 369 256 } else { 370 - /* BB implement mode setting via Windows security 371 - descriptors e.g. */ 372 - /* CIFSSMBWinSetPerms(xid,tcon,path,mode,-1,-1,nls);*/ 373 - 374 - /* Could set r/o dos attribute if mode & 0222 == 0 */ 257 + args.uid = NO_CHANGE_64; 258 + args.gid = NO_CHANGE_64; 375 259 } 260 + CIFSSMBUnixSetInfo(xid, tcon, full_path, &args, 261 + cifs_sb->local_nls, 262 + cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); 263 + } else { 264 + /* BB implement mode setting via Windows security 265 + descriptors e.g. 
*/ 266 + /* CIFSSMBWinSetPerms(xid,tcon,path,mode,-1,-1,nls);*/ 376 267 377 - /* server might mask mode so we have to query for it */ 378 - if (tcon->unix_ext) 379 - rc = cifs_get_inode_info_unix(&newinode, full_path, 380 - inode->i_sb, xid); 381 - else { 382 - rc = cifs_get_inode_info(&newinode, full_path, 383 - buf, inode->i_sb, xid, 384 - &fileHandle); 385 - if (newinode) { 386 - if (cifs_sb->mnt_cifs_flags & 387 - CIFS_MOUNT_DYNPERM) 388 - newinode->i_mode = mode; 389 - if ((oplock & CIFS_CREATE_ACTION) && 390 - (cifs_sb->mnt_cifs_flags & 391 - CIFS_MOUNT_SET_UID)) { 392 - newinode->i_uid = current_fsuid(); 393 - if (inode->i_mode & S_ISGID) 394 - newinode->i_gid = 395 - inode->i_gid; 396 - else 397 - newinode->i_gid = 398 - current_fsgid(); 399 - } 268 + /* Could set r/o dos attribute if mode & 0222 == 0 */ 269 + } 270 + 271 + cifs_create_get_file_info: 272 + /* server might mask mode so we have to query for it */ 273 + if (tcon->unix_ext) 274 + rc = cifs_get_inode_info_unix(&newinode, full_path, 275 + inode->i_sb, xid); 276 + else { 277 + rc = cifs_get_inode_info(&newinode, full_path, buf, 278 + inode->i_sb, xid, &fileHandle); 279 + if (newinode) { 280 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM) 281 + newinode->i_mode = mode; 282 + if ((oplock & CIFS_CREATE_ACTION) && 283 + (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID)) { 284 + newinode->i_uid = current_fsuid(); 285 + if (inode->i_mode & S_ISGID) 286 + newinode->i_gid = inode->i_gid; 287 + else 288 + newinode->i_gid = current_fsgid(); 400 289 } 401 290 } 291 + } 402 292 403 - if (rc != 0) { 404 - cFYI(1, ("Create worked, get_inode_info failed rc = %d", 405 - rc)); 406 - } else 407 - setup_cifs_dentry(tcon, direntry, newinode); 293 + cifs_create_set_dentry: 294 + if (rc == 0) 295 + setup_cifs_dentry(tcon, direntry, newinode); 296 + else 297 + cFYI(1, ("Create worked, get_inode_info failed rc = %d", rc)); 408 298 409 - if ((nd == NULL /* nfsd case - nfs srv does not set nd */) || 410 - (!(nd->flags & LOOKUP_OPEN))) { 411 - /* mknod case - do not leave file open */ 412 - CIFSSMBClose(xid, tcon, fileHandle); 413 - } else if (newinode) { 414 - struct cifsFileInfo *pCifsFile = 415 - kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL); 299 + /* nfsd case - nfs srv does not set nd */ 300 + if ((nd == NULL) || (!(nd->flags & LOOKUP_OPEN))) { 301 + /* mknod case - do not leave file open */ 302 + CIFSSMBClose(xid, tcon, fileHandle); 303 + } else if (newinode) { 304 + struct cifsFileInfo *pCifsFile = 305 + kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL); 416 306 417 - if (pCifsFile == NULL) 418 - goto cifs_create_out; 419 - pCifsFile->netfid = fileHandle; 420 - pCifsFile->pid = current->tgid; 421 - pCifsFile->pInode = newinode; 422 - pCifsFile->invalidHandle = false; 423 - pCifsFile->closePend = false; 424 - init_MUTEX(&pCifsFile->fh_sem); 425 - mutex_init(&pCifsFile->lock_mutex); 426 - INIT_LIST_HEAD(&pCifsFile->llist); 427 - atomic_set(&pCifsFile->wrtPending, 0); 307 + if (pCifsFile == NULL) 308 + goto cifs_create_out; 309 + pCifsFile->netfid = fileHandle; 310 + pCifsFile->pid = current->tgid; 311 + pCifsFile->pInode = newinode; 312 + pCifsFile->invalidHandle = false; 313 + pCifsFile->closePend = false; 314 + init_MUTEX(&pCifsFile->fh_sem); 315 + mutex_init(&pCifsFile->lock_mutex); 316 + INIT_LIST_HEAD(&pCifsFile->llist); 317 + atomic_set(&pCifsFile->wrtPending, 0); 428 318 429 - /* set the following in open now 319 + /* set the following in open now 430 320 pCifsFile->pfile = file; */ 431 - write_lock(&GlobalSMBSeslock); 432 - 
list_add(&pCifsFile->tlist, &tcon->openFileList); 433 - pCifsInode = CIFS_I(newinode); 434 - if (pCifsInode) { 435 - /* if readable file instance put first in list*/ 436 - if (write_only) { 437 - list_add_tail(&pCifsFile->flist, 438 - &pCifsInode->openFileList); 439 - } else { 440 - list_add(&pCifsFile->flist, 441 - &pCifsInode->openFileList); 442 - } 443 - if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { 444 - pCifsInode->clientCanCacheAll = true; 445 - pCifsInode->clientCanCacheRead = true; 446 - cFYI(1, ("Exclusive Oplock inode %p", 447 - newinode)); 448 - } else if ((oplock & 0xF) == OPLOCK_READ) 449 - pCifsInode->clientCanCacheRead = true; 321 + write_lock(&GlobalSMBSeslock); 322 + list_add(&pCifsFile->tlist, &tcon->openFileList); 323 + pCifsInode = CIFS_I(newinode); 324 + if (pCifsInode) { 325 + /* if readable file instance put first in list*/ 326 + if (write_only) { 327 + list_add_tail(&pCifsFile->flist, 328 + &pCifsInode->openFileList); 329 + } else { 330 + list_add(&pCifsFile->flist, 331 + &pCifsInode->openFileList); 450 332 } 451 - write_unlock(&GlobalSMBSeslock); 333 + if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { 334 + pCifsInode->clientCanCacheAll = true; 335 + pCifsInode->clientCanCacheRead = true; 336 + cFYI(1, ("Exclusive Oplock inode %p", 337 + newinode)); 338 + } else if ((oplock & 0xF) == OPLOCK_READ) 339 + pCifsInode->clientCanCacheRead = true; 452 340 } 341 + write_unlock(&GlobalSMBSeslock); 453 342 } 454 343 cifs_create_out: 455 344 kfree(buf);
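
cifs_posix_open() above is mostly a flag translation: local open flags are mapped bit-by-bit onto the SMB_O_* values the POSIX create call understands. A self-contained sketch of the same mapping pattern (the SMB_O_* values here are illustrative placeholders, not the protocol's actual constants):

    #include <fcntl.h>
    #include <stdio.h>

    /* illustrative placeholder values, not the real protocol constants */
    #define SMB_O_RDONLY 0x01
    #define SMB_O_WRONLY 0x02
    #define SMB_O_RDWR   0x04
    #define SMB_O_CREAT  0x10
    #define SMB_O_EXCL   0x20
    #define SMB_O_TRUNC  0x40

    static unsigned int map_open_flags(int oflags)
    {
            unsigned int smb;

            switch (oflags & O_ACCMODE) {   /* access mode is exclusive */
            case O_RDWR:   smb = SMB_O_RDWR;   break;
            case O_WRONLY: smb = SMB_O_WRONLY; break;
            default:       smb = SMB_O_RDONLY; break;
            }
            if (oflags & O_CREAT)           /* the rest are independent bits */
                    smb |= SMB_O_CREAT;
            if (oflags & O_EXCL)
                    smb |= SMB_O_EXCL;
            if (oflags & O_TRUNC)
                    smb |= SMB_O_TRUNC;
            return smb;
    }

    int main(void)
    {
            printf("0x%x\n", map_open_flags(O_RDWR | O_CREAT)); /* 0x14 */
            return 0;
    }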
+64 -40
fs/cifs/inode.c
··· 199 199 pfnd_dat->Gid = cpu_to_le64(pinode->i_gid); 200 200 } 201 201 202 + /** 203 + * cifs_new_inode - create new inode, initialize, and hash it 204 + * @sb - pointer to superblock 205 + * @inum - if valid pointer and serverino is enabled, replace i_ino with its value 206 + * 207 + * Create a new inode, initialize it for CIFS and hash it. Returns the new 208 + * inode or NULL if one couldn't be allocated. 209 + * 210 + * If the share isn't mounted with "serverino" or inum is a NULL pointer then 211 + * we'll just use the inode number assigned by new_inode(). Note that this can 212 + * mean i_ino collisions since the i_ino assigned by new_inode is not 213 + * guaranteed to be unique. 214 + */ 215 + struct inode * 216 + cifs_new_inode(struct super_block *sb, __u64 *inum) 217 + { 218 + struct inode *inode; 219 + 220 + inode = new_inode(sb); 221 + if (inode == NULL) 222 + return NULL; 223 + 224 + /* 225 + * BB: Is i_ino == 0 legal? Here, we assume that it is. If it isn't, we 226 + * should stop passing inum as a ptr. Are there sanity checks we can use to 227 + * ensure that the server is really filling in that field? Also, 228 + * if serverino is disabled, perhaps we should be using iunique()? 229 + */ 230 + if (inum && (CIFS_SB(sb)->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) 231 + inode->i_ino = (unsigned long) *inum; 232 + 233 + /* 234 + * must set this here instead of cifs_alloc_inode since VFS will 235 + * clobber i_flags 236 + */ 237 + if (sb->s_flags & MS_NOATIME) 238 + inode->i_flags |= S_NOATIME | S_NOCMTIME; 239 + 240 + insert_inode_hash(inode); 241 + 242 + return inode; 243 + } 244 + 202 245 int cifs_get_inode_info_unix(struct inode **pinode, 203 246 const unsigned char *full_path, struct super_block *sb, int xid) 204 247 { ··· 276 233 277 234 /* get new inode */ 278 235 if (*pinode == NULL) { 279 - *pinode = new_inode(sb); 236 + *pinode = cifs_new_inode(sb, &find_data.UniqueId); 280 237 if (*pinode == NULL) { 281 238 rc = -ENOMEM; 282 239 goto cgiiu_exit; 283 240 } 284 - /* Is an i_ino of zero legal? */ 285 - /* note ino incremented to unique num in new_inode */ 286 - /* Are there sanity checks we can use to ensure that 287 - the server is really filling in that field? */ 288 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) 289 - (*pinode)->i_ino = (unsigned long)find_data.UniqueId; 290 - 291 - if (sb->s_flags & MS_NOATIME) 292 - (*pinode)->i_flags |= S_NOATIME | S_NOCMTIME; 293 - 294 - insert_inode_hash(*pinode); 295 241 } 296 242 297 243 inode = *pinode; ··· 497 465 498 466 /* get new inode */ 499 467 if (*pinode == NULL) { 500 - *pinode = new_inode(sb); 501 - if (*pinode == NULL) { 502 - rc = -ENOMEM; 503 - goto cgii_exit; 504 - } 468 + __u64 inode_num; 469 + __u64 *pinum = &inode_num; 470 + 505 471 /* Is an i_ino of zero legal? Can we use that to check 506 472 if the server supports returning inode numbers? Are 507 473 there other sanity checks we can use to ensure that ··· 516 486 517 487 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { 518 488 int rc1 = 0; 519 - __u64 inode_num; 520 489 521 490 rc1 = CIFSGetSrvInodeNumber(xid, pTcon, 522 - full_path, &inode_num, 491 + full_path, pinum, 523 492 cifs_sb->local_nls, 524 493 cifs_sb->mnt_cifs_flags & 525 494 CIFS_MOUNT_MAP_SPECIAL_CHR); 526 495 if (rc1) { 527 496 cFYI(1, ("GetSrvInodeNum rc %d", rc1)); 497 + pinum = NULL; 528 498 /* BB EOPNOTSUPP disable SERVER_INUM? */ 529 - } else /* do we need cast or hash to ino? 
*/ 530 - (*pinode)->i_ino = inode_num; 531 - } /* else ino incremented to unique num in new_inode*/ 532 - if (sb->s_flags & MS_NOATIME) 533 - (*pinode)->i_flags |= S_NOATIME | S_NOCMTIME; 534 - insert_inode_hash(*pinode); 499 + } 500 + } else { 501 + pinum = NULL; 502 + } 503 + 504 + *pinode = cifs_new_inode(sb, pinum); 505 + if (*pinode == NULL) { 506 + rc = -ENOMEM; 507 + goto cgii_exit; 508 + } 535 509 } 536 510 inode = *pinode; 537 511 cifsInfo = CIFS_I(inode); ··· 655 621 .lookup = cifs_lookup, 656 622 }; 657 623 658 - static char *build_path_to_root(struct cifs_sb_info *cifs_sb) 624 + char *cifs_build_path_to_root(struct cifs_sb_info *cifs_sb) 659 625 { 660 626 int pplen = cifs_sb->prepathlen; 661 627 int dfsplen; ··· 712 678 return inode; 713 679 714 680 cifs_sb = CIFS_SB(inode->i_sb); 715 - full_path = build_path_to_root(cifs_sb); 681 + full_path = cifs_build_path_to_root(cifs_sb); 716 682 if (full_path == NULL) 717 683 return ERR_PTR(-ENOMEM); 718 684 ··· 1051 1017 return rc; 1052 1018 } 1053 1019 1054 - static void posix_fill_in_inode(struct inode *tmp_inode, 1020 + void posix_fill_in_inode(struct inode *tmp_inode, 1055 1021 FILE_UNIX_BASIC_INFO *pData, int isNewInode) 1056 1022 { 1057 1023 struct cifsInodeInfo *cifsInfo = CIFS_I(tmp_inode); ··· 1148 1114 else 1149 1115 direntry->d_op = &cifs_dentry_ops; 1150 1116 1151 - newinode = new_inode(inode->i_sb); 1117 + newinode = cifs_new_inode(inode->i_sb, 1118 + &pInfo->UniqueId); 1152 1119 if (newinode == NULL) { 1153 1120 kfree(pInfo); 1154 1121 goto mkdir_get_info; 1155 1122 } 1156 1123 1157 - /* Is an i_ino of zero legal? */ 1158 - /* Are there sanity checks we can use to ensure that 1159 - the server is really filling in that field? */ 1160 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { 1161 - newinode->i_ino = 1162 - (unsigned long)pInfo->UniqueId; 1163 - } /* note ino incremented to unique num in new_inode */ 1164 - if (inode->i_sb->s_flags & MS_NOATIME) 1165 - newinode->i_flags |= S_NOATIME | S_NOCMTIME; 1166 1124 newinode->i_nlink = 2; 1167 - 1168 - insert_inode_hash(newinode); 1169 1125 d_instantiate(direntry, newinode); 1170 1126 1171 1127 /* we already checked in POSIXCreate whether
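
The point of cifs_new_inode() above is to fold four previously copy-pasted steps (allocate, optionally override i_ino with the server's number, set flags, hash) into one constructor so the callers in inode.c, dir.c and readdir.c cannot drift apart. A toy sketch of that centralized-constructor pattern, with all names hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long next_local_id = 1;      /* like new_inode()'s counter */

    struct obj { unsigned long id; };

    static void publish(struct obj *o)           /* like insert_inode_hash() */
    {
            printf("published id %lu\n", o->id);
    }

    static struct obj *obj_new(const unsigned long *server_id, int use_server_ids)
    {
            struct obj *o = malloc(sizeof(*o));

            if (o == NULL)
                    return NULL;
            o->id = next_local_id++;             /* default local id */
            if (use_server_ids && server_id)
                    o->id = *server_id;          /* override before publishing */
            publish(o);                          /* hash only when fully set up */
            return o;
    }

    int main(void)
    {
            unsigned long srv = 42;

            free(obj_new(&srv, 1));              /* uses the server's 42 */
            free(obj_new(NULL, 1));              /* falls back to a local id */
            return 0;
    }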
+26 -32
fs/cifs/readdir.c
··· 56 56 } 57 57 #endif /* DEBUG2 */ 58 58 59 - /* Returns one if new inode created (which therefore needs to be hashed) */ 59 + /* Returns 1 if new inode created, 2 if both dentry and inode were */ 60 60 /* Might check in the future if inode number changed so we can rehash inode */ 61 - static int construct_dentry(struct qstr *qstring, struct file *file, 62 - struct inode **ptmp_inode, struct dentry **pnew_dentry) 61 + static int 62 + construct_dentry(struct qstr *qstring, struct file *file, 63 + struct inode **ptmp_inode, struct dentry **pnew_dentry, 64 + __u64 *inum) 63 65 { 64 - struct dentry *tmp_dentry; 65 - struct cifs_sb_info *cifs_sb; 66 - struct cifsTconInfo *pTcon; 66 + struct dentry *tmp_dentry = NULL; 67 + struct super_block *sb = file->f_path.dentry->d_sb; 67 68 int rc = 0; 68 69 69 70 cFYI(1, ("For %s", qstring->name)); 70 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 71 - pTcon = cifs_sb->tcon; 72 71 73 72 qstring->hash = full_name_hash(qstring->name, qstring->len); 74 73 tmp_dentry = d_lookup(file->f_path.dentry, qstring); 75 74 if (tmp_dentry) { 75 + /* BB: overwrite old name? i.e. tmp_dentry->d_name and 76 + * tmp_dentry->d_name.len?? 77 + */ 76 78 cFYI(0, ("existing dentry with inode 0x%p", 77 79 tmp_dentry->d_inode)); 78 80 *ptmp_inode = tmp_dentry->d_inode; 79 - /* BB overwrite old name? i.e. tmp_dentry->d_name and tmp_dentry->d_name.len??*/ 80 81 if (*ptmp_inode == NULL) { 81 - *ptmp_inode = new_inode(file->f_path.dentry->d_sb); 82 + *ptmp_inode = cifs_new_inode(sb, inum); 82 83 if (*ptmp_inode == NULL) 83 84 return rc; 84 85 rc = 1; 85 86 } 86 - if (file->f_path.dentry->d_sb->s_flags & MS_NOATIME) 87 - (*ptmp_inode)->i_flags |= S_NOATIME | S_NOCMTIME; 88 87 } else { 89 88 tmp_dentry = d_alloc(file->f_path.dentry, qstring); 90 89 if (tmp_dentry == NULL) { ··· 92 93 return rc; 93 94 } 94 95 95 - *ptmp_inode = new_inode(file->f_path.dentry->d_sb); 96 - if (pTcon->nocase) 96 + if (CIFS_SB(sb)->tcon->nocase) 97 97 tmp_dentry->d_op = &cifs_ci_dentry_ops; 98 98 else 99 99 tmp_dentry->d_op = &cifs_dentry_ops; 100 + 101 + *ptmp_inode = cifs_new_inode(sb, inum); 100 102 if (*ptmp_inode == NULL) 101 103 return rc; 102 - if (file->f_path.dentry->d_sb->s_flags & MS_NOATIME) 103 - (*ptmp_inode)->i_flags |= S_NOATIME | S_NOCMTIME; 104 104 rc = 2; 105 105 } 106 106 ··· 820 822 /* inode num, inode type and filename returned */ 821 823 static int cifs_get_name_from_search_buf(struct qstr *pqst, 822 824 char *current_entry, __u16 level, unsigned int unicode, 823 - struct cifs_sb_info *cifs_sb, int max_len, ino_t *pinum) 825 + struct cifs_sb_info *cifs_sb, int max_len, __u64 *pinum) 824 826 { 825 827 int rc = 0; 826 828 unsigned int len = 0; ··· 840 842 len = strnlen(filename, PATH_MAX); 841 843 } 842 844 843 - /* BB fixme - hash low and high 32 bits if not 64 bit arch BB */ 844 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) 845 - *pinum = pFindData->UniqueId; 845 + *pinum = pFindData->UniqueId; 846 846 } else if (level == SMB_FIND_FILE_DIRECTORY_INFO) { 847 847 FILE_DIRECTORY_INFO *pFindData = 848 848 (FILE_DIRECTORY_INFO *)current_entry; ··· 903 907 struct qstr qstring; 904 908 struct cifsFileInfo *pCifsF; 905 909 unsigned int obj_type; 906 - ino_t inum; 910 + __u64 inum; 907 911 struct cifs_sb_info *cifs_sb; 908 912 struct inode *tmp_inode; 909 913 struct dentry *tmp_dentry; ··· 936 940 if (rc) 937 941 return rc; 938 942 939 - rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry); 943 + /* only these two infolevels return valid inode numbers */ 944 + if 
(pCifsF->srch_inf.info_level == SMB_FIND_FILE_UNIX || 945 + pCifsF->srch_inf.info_level == SMB_FIND_FILE_ID_FULL_DIR_INFO) 946 + rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry, 947 + &inum); 948 + else 949 + rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry, 950 + NULL); 951 + 940 952 if ((tmp_inode == NULL) || (tmp_dentry == NULL)) 941 953 return -ENOMEM; 942 - 943 - if (rc) { 944 - /* inode created, we need to hash it with right inode number */ 945 - if (inum != 0) { 946 - /* BB fixme - hash the 2 32 quantities bits together if 947 - * necessary BB */ 948 - tmp_inode->i_ino = inum; 949 - } 950 - insert_inode_hash(tmp_inode); 951 - } 952 954 953 955 /* we pass in rc below, indicating whether it is a new inode, 954 956 so we can figure out whether to invalidate the inode cached
+87 -4
fs/cifs/sess.c
··· 34 34 extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, 35 35 unsigned char *p24); 36 36 37 + /* Checks if this is the first smb session to be reconnected after 38 + the socket has been reestablished (so we know whether to use vc 0). 39 + Called while holding the cifs_tcp_ses_lock, so do not block */ 40 + static bool is_first_ses_reconnect(struct cifsSesInfo *ses) 41 + { 42 + struct list_head *tmp; 43 + struct cifsSesInfo *tmp_ses; 44 + 45 + list_for_each(tmp, &ses->server->smb_ses_list) { 46 + tmp_ses = list_entry(tmp, struct cifsSesInfo, 47 + smb_ses_list); 48 + if (tmp_ses->need_reconnect == false) 49 + return false; 50 + } 51 + /* could not find a session that was already connected, 52 + this must be the first one we are reconnecting */ 53 + return true; 54 + } 55 + 56 + /* 57 + * vc number 0 is treated specially by some servers, and should be the 58 + * first one we request. After that we can use vcnumbers up to maxvcs, 59 + * one for each smb session (some Windows versions set maxvcs incorrectly 60 + * so maxvc=1 can be ignored). If we have too many vcs, we can reuse 61 + * any vc but zero (some servers reset the connection on vcnum zero) 62 + * 63 + */ 64 + static __le16 get_next_vcnum(struct cifsSesInfo *ses) 65 + { 66 + __u16 vcnum = 0; 67 + struct list_head *tmp; 68 + struct cifsSesInfo *tmp_ses; 69 + __u16 max_vcs = ses->server->max_vcs; 70 + __u16 i; 71 + int free_vc_found = 0; 72 + 73 + /* Quoting the MS-SMB specification: "Windows-based SMB servers set this 74 + field to one but do not enforce this limit, which allows an SMB client 75 + to establish more virtual circuits than allowed by this value ... but 76 + other server implementations can enforce this limit." */ 77 + if (max_vcs < 2) 78 + max_vcs = 0xFFFF; 79 + 80 + write_lock(&cifs_tcp_ses_lock); 81 + if ((ses->need_reconnect) && is_first_ses_reconnect(ses)) 82 + goto get_vc_num_exit; /* vcnum will be zero */ 83 + for (i = ses->server->srv_count - 1; i < max_vcs; i++) { 84 + if (i == 0) /* this is the only connection, use vc 0 */ 85 + break; 86 + 87 + free_vc_found = 1; 88 + 89 + list_for_each(tmp, &ses->server->smb_ses_list) { 90 + tmp_ses = list_entry(tmp, struct cifsSesInfo, 91 + smb_ses_list); 92 + if (tmp_ses->vcnum == i) { 93 + free_vc_found = 0; 94 + break; /* found duplicate, try next vcnum */ 95 + } 96 + } 97 + if (free_vc_found) 98 + break; /* we found a vcnumber that will work - use it */ 99 + } 100 + 101 + if (i == 0) 102 + vcnum = 0; /* for the most common case, i.e. one smb session, use 103 + vc zero. Also for the case when no free vcnum exists, 104 + zero is safest to send (some clients only send zero) */ 105 + else if (free_vc_found == 0) 106 + vcnum = 1; /* we cannot reuse vc=0 safely, since some servers 107 + reset all uids on that, but 1 is ok. 
*/ 108 + else 109 + vcnum = i; 110 + ses->vcnum = vcnum; 111 + get_vc_num_exit: 112 + write_unlock(&cifs_tcp_ses_lock); 113 + 114 + return le16_to_cpu(vcnum); 115 + } 116 + 37 117 static __u32 cifs_ssetup_hdr(struct cifsSesInfo *ses, SESSION_SETUP_ANDX *pSMB) 38 118 { 39 119 __u32 capabilities = 0; 40 120 41 121 /* init fields common to all four types of SessSetup */ 42 - /* note that header is initialized to zero in header_assemble */ 122 + /* Note that offsets for first seven fields in req struct are same */ 123 + /* in CIFS Specs so does not matter which of 3 forms of struct */ 124 + /* that we use in next few lines */ 125 + /* Note that header is initialized to zero in header_assemble */ 43 126 pSMB->req.AndXCommand = 0xFF; 44 127 pSMB->req.MaxBufferSize = cpu_to_le16(ses->server->maxBuf); 45 128 pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq); 129 + pSMB->req.VcNumber = get_next_vcnum(ses); 46 130 47 131 /* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */ 48 132 ··· 155 71 if (ses->capabilities & CAP_UNIX) 156 72 capabilities |= CAP_UNIX; 157 73 158 - /* BB check whether to init vcnum BB */ 159 74 return capabilities; 160 75 } 161 76 ··· 311 228 312 229 kfree(ses->serverOS); 313 230 /* UTF-8 string will not grow more than four times as big as UCS-16 */ 314 - ses->serverOS = kzalloc(4 * len, GFP_KERNEL); 231 + ses->serverOS = kzalloc((4 * len) + 2 /* trailing null */, GFP_KERNEL); 315 232 if (ses->serverOS != NULL) 316 233 cifs_strfromUCS_le(ses->serverOS, (__le16 *)data, len, nls_cp); 317 234 data += 2 * (len + 1); ··· 324 241 return rc; 325 242 326 243 kfree(ses->serverNOS); 327 - ses->serverNOS = kzalloc(4 * len, GFP_KERNEL); /* BB this is wrong length FIXME BB */ 244 + ses->serverNOS = kzalloc((4 * len) + 2 /* trailing null */, GFP_KERNEL); 328 245 if (ses->serverNOS != NULL) { 329 246 cifs_strfromUCS_le(ses->serverNOS, (__le16 *)data, len, 330 247 nls_cp);
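
get_next_vcnum() above scans the existing sessions for the lowest usable VC number, reserving 0 for the first session on a socket and never recycling it (some servers drop all state on a second vcnum 0). A standalone sketch of the selection rules, simplified; the in_use array stands in for the walk over smb_ses_list:

    #include <stdio.h>

    static unsigned short pick_vcnum(const unsigned short *in_use, int n,
                                     unsigned short max_vcs, int first_on_socket)
    {
            unsigned short i;
            int j, taken;

            if (first_on_socket)
                    return 0;               /* vc 0 only for the first session */
            if (max_vcs < 2)
                    max_vcs = 0xFFFF;       /* some servers report maxvcs=1 */

            for (i = 1; i < max_vcs; i++) { /* never hand out 0 again */
                    taken = 0;
                    for (j = 0; j < n; j++)
                            if (in_use[j] == i)
                                    taken = 1;
                    if (!taken)
                            return i;       /* lowest free nonzero vcnum */
            }
            return 1;                       /* none free: 1 is safer than 0 */
    }

    int main(void)
    {
            unsigned short used[] = { 1, 2, 4 };

            printf("next vc = %u\n", pick_vcnum(used, 3, 1, 0)); /* prints 3 */
            return 0;
    }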
+2
fs/compat_ioctl.c
··· 1938 1938 /* Big K */ 1939 1939 COMPATIBLE_IOCTL(PIO_FONT) 1940 1940 COMPATIBLE_IOCTL(GIO_FONT) 1941 + COMPATIBLE_IOCTL(PIO_CMAP) 1942 + COMPATIBLE_IOCTL(GIO_CMAP) 1941 1943 ULONG_IOCTL(KDSIGACCEPT) 1942 1944 COMPATIBLE_IOCTL(KDGETKEYCODE) 1943 1945 COMPATIBLE_IOCTL(KDSETKEYCODE)
+1 -1
fs/ext4/ext4.h
··· 868 868 { 869 869 unsigned len = le16_to_cpu(dlen); 870 870 871 - if (len == EXT4_MAX_REC_LEN) 871 + if (len == EXT4_MAX_REC_LEN || len == 0) 872 872 return 1 << 16; 873 873 return len; 874 874 }
+23 -4
fs/ext4/inode.c
··· 47 47 static inline int ext4_begin_ordered_truncate(struct inode *inode, 48 48 loff_t new_size) 49 49 { 50 - return jbd2_journal_begin_ordered_truncate(&EXT4_I(inode)->jinode, 51 - new_size); 50 + return jbd2_journal_begin_ordered_truncate( 51 + EXT4_SB(inode->i_sb)->s_journal, 52 + &EXT4_I(inode)->jinode, 53 + new_size); 52 54 } 53 55 54 56 static void ext4_invalidatepage(struct page *page, unsigned long offset); ··· 2439 2437 int no_nrwrite_index_update; 2440 2438 int pages_written = 0; 2441 2439 long pages_skipped; 2440 + int range_cyclic, cycled = 1, io_done = 0; 2442 2441 int needed_blocks, ret = 0, nr_to_writebump = 0; 2443 2442 struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb); 2444 2443 ··· 2491 2488 if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) 2492 2489 range_whole = 1; 2493 2490 2494 - if (wbc->range_cyclic) 2491 + range_cyclic = wbc->range_cyclic; 2492 + if (wbc->range_cyclic) { 2495 2493 index = mapping->writeback_index; 2496 - else 2494 + if (index) 2495 + cycled = 0; 2496 + wbc->range_start = index << PAGE_CACHE_SHIFT; 2497 + wbc->range_end = LLONG_MAX; 2498 + wbc->range_cyclic = 0; 2499 + } else 2497 2500 index = wbc->range_start >> PAGE_CACHE_SHIFT; 2498 2501 2499 2502 mpd.wbc = wbc; ··· 2513 2504 wbc->no_nrwrite_index_update = 1; 2514 2505 pages_skipped = wbc->pages_skipped; 2515 2506 2507 + retry: 2516 2508 while (!ret && wbc->nr_to_write > 0) { 2517 2509 2518 2510 /* ··· 2556 2546 pages_written += mpd.pages_written; 2557 2547 wbc->pages_skipped = pages_skipped; 2558 2548 ret = 0; 2549 + io_done = 1; 2559 2550 } else if (wbc->nr_to_write) 2560 2551 /* 2561 2552 * There is no more writeout needed ··· 2565 2554 */ 2566 2555 break; 2567 2556 } 2557 + if (!io_done && !cycled) { 2558 + cycled = 1; 2559 + index = 0; 2560 + wbc->range_start = index << PAGE_CACHE_SHIFT; 2561 + wbc->range_end = mapping->writeback_index - 1; 2562 + goto retry; 2563 + } 2568 2564 if (pages_skipped != wbc->pages_skipped) 2569 2565 printk(KERN_EMERG "This should not happen leaving %s " 2570 2566 "with nr_to_write = %ld ret = %d\n", ··· 2579 2561 2580 2562 /* Update index */ 2581 2563 index += pages_written; 2564 + wbc->range_cyclic = range_cyclic; 2582 2565 if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0)) 2583 2566 /* 2584 2567 * set the writeback_index so that range_cyclic
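
The cycled/io_done machinery added to the ext4 writeback path above turns one cyclic pass into an explicit two-range scan: first from writeback_index to the end of the file, then, only if the first pass wrote nothing, from 0 back up to where it started. Reduced to its core traversal, as a sketch:

    #include <stdio.h>

    static void write_range(long start, long end) /* stand-in for one mpage pass */
    {
            printf("writeback pages [%ld, %ld)\n", start, end);
    }

    static void writeback_cyclic(long resume_at, long npages)
    {
            write_range(resume_at, npages);       /* pass 1: resume point to EOF */
            if (resume_at > 0)
                    write_range(0, resume_at);    /* pass 2: wrap to the head */
    }

    int main(void)
    {
            writeback_cyclic(5, 8);               /* [5,8) then [0,5) */
            return 0;
    }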
+19 -13
fs/ext4/mballoc.c
··· 3693 3693 pa->pa_free = pa->pa_len; 3694 3694 atomic_set(&pa->pa_count, 1); 3695 3695 spin_lock_init(&pa->pa_lock); 3696 + INIT_LIST_HEAD(&pa->pa_inode_list); 3697 + INIT_LIST_HEAD(&pa->pa_group_list); 3696 3698 pa->pa_deleted = 0; 3697 3699 pa->pa_linear = 0; 3698 3700 ··· 3757 3755 atomic_set(&pa->pa_count, 1); 3758 3756 spin_lock_init(&pa->pa_lock); 3759 3757 INIT_LIST_HEAD(&pa->pa_inode_list); 3758 + INIT_LIST_HEAD(&pa->pa_group_list); 3760 3759 pa->pa_deleted = 0; 3761 3760 pa->pa_linear = 1; 3762 3761 ··· 4479 4476 pa->pa_free -= ac->ac_b_ex.fe_len; 4480 4477 pa->pa_len -= ac->ac_b_ex.fe_len; 4481 4478 spin_unlock(&pa->pa_lock); 4482 - /* 4483 - * We want to add the pa to the right bucket. 4484 - * Remove it from the list and while adding 4485 - * make sure the list to which we are adding 4486 - * doesn't grow big. 4487 - */ 4488 - if (likely(pa->pa_free)) { 4489 - spin_lock(pa->pa_obj_lock); 4490 - list_del_rcu(&pa->pa_inode_list); 4491 - spin_unlock(pa->pa_obj_lock); 4492 - ext4_mb_add_n_trim(ac); 4493 - } 4494 4479 } 4495 - ext4_mb_put_pa(ac, ac->ac_sb, pa); 4496 4480 } 4497 4481 if (ac->alloc_semp) 4498 4482 up_read(ac->alloc_semp); 4483 + if (pa) { 4484 + /* 4485 + * We want to add the pa to the right bucket. 4486 + * Remove it from the list and while adding 4487 + * make sure the list to which we are adding 4488 + * doesn't grow big. We need to release 4489 + * alloc_semp before calling ext4_mb_add_n_trim() 4490 + */ 4491 + if (pa->pa_linear && likely(pa->pa_free)) { 4492 + spin_lock(pa->pa_obj_lock); 4493 + list_del_rcu(&pa->pa_inode_list); 4494 + spin_unlock(pa->pa_obj_lock); 4495 + ext4_mb_add_n_trim(ac); 4496 + } 4497 + ext4_mb_put_pa(ac, ac->ac_sb, pa); 4498 + } 4499 4499 if (ac->ac_bitmap_page) 4500 4500 page_cache_release(ac->ac_bitmap_page); 4501 4501 if (ac->ac_buddy_page)
+3 -5
fs/ext4/migrate.c
··· 481 481 + 1); 482 482 if (IS_ERR(handle)) { 483 483 retval = PTR_ERR(handle); 484 - goto err_out; 484 + return retval; 485 485 } 486 486 tmp_inode = ext4_new_inode(handle, 487 487 inode->i_sb->s_root->d_inode, ··· 489 489 if (IS_ERR(tmp_inode)) { 490 490 retval = -ENOMEM; 491 491 ext4_journal_stop(handle); 492 - tmp_inode = NULL; 493 - goto err_out; 492 + return retval; 494 493 } 495 494 i_size_write(tmp_inode, i_size_read(inode)); 496 495 /* ··· 617 618 618 619 ext4_journal_stop(handle); 619 620 620 - if (tmp_inode) 621 - iput(tmp_inode); 621 + iput(tmp_inode); 622 622 623 623 return retval; 624 624 }
+7 -4
fs/ext4/super.c
··· 3046 3046 static int ext4_sync_fs(struct super_block *sb, int wait) 3047 3047 { 3048 3048 int ret = 0; 3049 + tid_t target; 3049 3050 3050 3051 trace_mark(ext4_sync_fs, "dev %s wait %d", sb->s_id, wait); 3051 3052 sb->s_dirt = 0; 3052 3053 if (EXT4_SB(sb)->s_journal) { 3053 - if (wait) 3054 - ret = ext4_force_commit(sb); 3055 - else 3056 - jbd2_journal_start_commit(EXT4_SB(sb)->s_journal, NULL); 3054 + if (jbd2_journal_start_commit(EXT4_SB(sb)->s_journal, 3055 + &target)) { 3056 + if (wait) 3057 + jbd2_log_wait_commit(EXT4_SB(sb)->s_journal, 3058 + target); 3059 + } 3057 3060 } else { 3058 3061 ext4_commit_super(sb, EXT4_SB(sb)->s_es, wait); 3059 3062 }
+11 -6
fs/jbd2/journal.c
··· 450 450 } 451 451 452 452 /* 453 - * Called under j_state_lock. Returns true if a transaction was started. 453 + * Called under j_state_lock. Returns true if a transaction commit was started. 454 454 */ 455 455 int __jbd2_log_start_commit(journal_t *journal, tid_t target) 456 456 { ··· 518 518 519 519 /* 520 520 * Start a commit of the current running transaction (if any). Returns true 521 - * if a transaction was started, and fills its tid in at *ptid 521 + * if a transaction is going to be committed (or is currently already 522 + * committing), and fills its tid in at *ptid 522 523 */ 523 524 int jbd2_journal_start_commit(journal_t *journal, tid_t *ptid) 524 525 { ··· 529 528 if (journal->j_running_transaction) { 530 529 tid_t tid = journal->j_running_transaction->t_tid; 531 530 532 - ret = __jbd2_log_start_commit(journal, tid); 533 - if (ret && ptid) 531 + __jbd2_log_start_commit(journal, tid); 532 + /* There's a running transaction and we've just made sure 533 + * its commit has been scheduled. */ 534 + if (ptid) 534 535 *ptid = tid; 535 - } else if (journal->j_committing_transaction && ptid) { 536 + ret = 1; 537 + } else if (journal->j_committing_transaction) { 536 538 /* 537 539 * If ext3_write_super() recently started a commit, then we 538 540 * have to wait for completion of that transaction 539 541 */ 540 - *ptid = journal->j_committing_transaction->t_tid; 542 + if (ptid) 543 + *ptid = journal->j_committing_transaction->t_tid; 541 544 ret = 1; 542 545 } 543 546 spin_unlock(&journal->j_state_lock);
+31 -11
fs/jbd2/transaction.c
··· 2129 2129 } 2130 2130 2131 2131 /* 2132 - * This function must be called when inode is journaled in ordered mode 2133 - * before truncation happens. It starts writeout of truncated part in 2134 - * case it is in the committing transaction so that we stand to ordered 2135 - * mode consistency guarantees. 2132 + * File truncate and transaction commit interact with each other in a 2133 + * non-trivial way. If a transaction writing data block A is 2134 + * committing, we cannot discard the data by truncate until it has 2135 + * been written. Otherwise, if we crashed after the transaction with 2136 + * the write had committed but before the transaction with the truncate 2137 + * had committed, we could see stale data in block A. This function is a 2138 + * helper to solve this problem. It starts writeout of the truncated 2139 + * part in case it is in the committing transaction. 2140 + * 2141 + * Filesystem code must call this function when an inode is journaled in 2142 + * ordered mode before truncation happens and after the inode has been 2143 + * placed on the orphan list with the new inode size. The second condition 2144 + * avoids the race where someone writes new data and we start 2145 + * committing the transaction after this function has been called but 2146 + * before a transaction for the truncate is started (and furthermore it 2147 + * allows us to optimize the case where the addition to the orphan list 2148 + * happens in the same transaction as the write --- we don't have to write 2149 + * any data in such a case). 2136 2150 */ 2137 - int jbd2_journal_begin_ordered_truncate(struct jbd2_inode *inode, 2151 + int jbd2_journal_begin_ordered_truncate(journal_t *journal, 2152 + struct jbd2_inode *jinode, 2138 2153 loff_t new_size) 2139 2154 { 2140 - journal_t *journal; 2141 - transaction_t *commit_trans; 2155 + transaction_t *inode_trans, *commit_trans; 2142 2156 int ret = 0; 2143 2157 2144 - if (!inode->i_transaction && !inode->i_next_transaction) 2158 + /* This is a quick check to avoid locking if not necessary */ 2159 + if (!jinode->i_transaction) 2145 2160 goto out; 2146 - journal = inode->i_transaction->t_journal; 2161 + /* Locks are here just to force reading of recent values; it is 2162 + * enough that the transaction was not committing before we started 2163 + * a transaction adding the inode to the orphan list */ 2147 2164 spin_lock(&journal->j_state_lock); 2148 2165 commit_trans = journal->j_committing_transaction; 2149 2166 spin_unlock(&journal->j_state_lock); 2150 - if (inode->i_transaction == commit_trans) { 2151 - ret = filemap_fdatawrite_range(inode->i_vfs_inode->i_mapping, 2167 + spin_lock(&journal->j_list_lock); 2168 + inode_trans = jinode->i_transaction; 2169 + spin_unlock(&journal->j_list_lock); 2170 + if (inode_trans == commit_trans) { 2171 + ret = filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping, 2152 2172 new_size, LLONG_MAX); 2153 2173 if (ret) 2154 2174 jbd2_journal_abort(journal, ret);
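
The rewritten comment above pins down a calling convention rather than an algorithm: record the new size on the orphan list first, then flush any truncated-range data a committing transaction still owns, and only then free blocks. A sketch of that order, with hypothetical stand-ins for a filesystem's own steps:

    #include <stdio.h>

    static void add_to_orphan_list(long new_size)     /* step 1: record intent */
    {
            printf("orphan record: i_size will become %ld\n", new_size);
    }

    static void begin_ordered_truncate(long new_size) /* step 2: flush old data */
    {
            printf("flush committing data at offsets >= %ld\n", new_size);
    }

    static void free_truncated_blocks(long new_size)  /* step 3: actual truncate */
    {
            printf("truncate blocks beyond %ld\n", new_size);
    }

    int main(void)
    {
            long new_size = 4096;

            add_to_orphan_list(new_size);
            begin_ordered_truncate(new_size);
            free_truncated_blocks(new_size);
            return 0;
    }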
+4 -2
fs/namespace.c
··· 614 614 */ 615 615 for_each_possible_cpu(cpu) { 616 616 struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu); 617 - if (cpu_writer->mnt != mnt) 618 - continue; 619 617 spin_lock(&cpu_writer->lock); 618 + if (cpu_writer->mnt != mnt) { 619 + spin_unlock(&cpu_writer->lock); 620 + continue; 621 + } 620 622 atomic_add(cpu_writer->count, &mnt->__mnt_writers); 621 623 cpu_writer->count = 0; 622 624 /*
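
The namespace.c hunk is a classic check-then-act fix: cpu_writer->mnt was tested before taking cpu_writer->lock, so another CPU could change it between the test and the use. The corrected shape, sketched with pthreads:

    #include <pthread.h>
    #include <stdio.h>

    struct pcpu_writer {
            pthread_mutex_t lock;
            const void *mnt;        /* which mount this slot counts for */
            int count;
    };

    /* fold one slot's count into *total, but only under its lock */
    static void drain_slot(struct pcpu_writer *w, const void *mnt, int *total)
    {
            pthread_mutex_lock(&w->lock);
            if (w->mnt == mnt) {    /* test the shared field while holding lock */
                    *total += w->count;
                    w->count = 0;
            }
            pthread_mutex_unlock(&w->lock);
    }

    static const int mnt_a;         /* address used as an identity token */

    int main(void)
    {
            struct pcpu_writer w = { PTHREAD_MUTEX_INITIALIZER, &mnt_a, 3 };
            int total = 0;

            drain_slot(&w, &mnt_a, &total);
            printf("total = %d\n", total);
            return 0;
    }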
+1 -1
fs/notify/inotify/inotify.c
··· 156 156 int ret; 157 157 158 158 do { 159 - if (unlikely(!idr_pre_get(&ih->idr, GFP_KERNEL))) 159 + if (unlikely(!idr_pre_get(&ih->idr, GFP_NOFS))) 160 160 return -ENOSPC; 161 161 ret = idr_get_new_above(&ih->idr, watch, ih->last_wd+1, &watch->wd); 162 162 } while (ret == -EAGAIN);
+4 -2
fs/ocfs2/journal.h
··· 513 513 static inline int ocfs2_begin_ordered_truncate(struct inode *inode, 514 514 loff_t new_size) 515 515 { 516 - return jbd2_journal_begin_ordered_truncate(&OCFS2_I(inode)->ip_jinode, 517 - new_size); 516 + return jbd2_journal_begin_ordered_truncate( 517 + OCFS2_SB(inode->i_sb)->journal->j_journal, 518 + &OCFS2_I(inode)->ip_jinode, 519 + new_size); 518 520 } 519 521 520 522 #endif /* OCFS2_JOURNAL_H */
+32 -4
fs/seq_file.c
··· 48 48 */ 49 49 file->f_version = 0; 50 50 51 - /* SEQ files support lseek, but not pread/pwrite */ 52 - file->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE); 51 + /* 52 + * seq_files support lseek() and pread(). They do not implement 53 + * write() at all, but we clear FMODE_PWRITE here for historical 54 + * reasons. 55 + * 56 + * If a client of seq_files a) implements file.write() and b) wishes to 57 + * support pwrite() then that client will need to implement its own 58 + * file.open() which calls seq_open() and then sets FMODE_PWRITE. 59 + */ 60 + file->f_mode &= ~FMODE_PWRITE; 53 61 return 0; 54 62 } 55 63 EXPORT_SYMBOL(seq_open); ··· 139 131 int err = 0; 140 132 141 133 mutex_lock(&m->lock); 134 + 135 + /* Don't assume *ppos is where we left it */ 136 + if (unlikely(*ppos != m->read_pos)) { 137 + m->read_pos = *ppos; 138 + while ((err = traverse(m, *ppos)) == -EAGAIN) 139 + ; 140 + if (err) { 141 + /* With prejudice... */ 142 + m->read_pos = 0; 143 + m->version = 0; 144 + m->index = 0; 145 + m->count = 0; 146 + goto Done; 147 + } 148 + } 149 + 142 150 /* 143 151 * seq_file->op->..m_start/m_stop/m_next may do special actions 144 152 * or optimisations based on the file->f_version, so we want to ··· 254 230 Done: 255 231 if (!copied) 256 232 copied = err; 257 - else 233 + else { 258 234 *ppos += copied; 235 + m->read_pos += copied; 236 + } 259 237 file->f_version = m->version; 260 238 mutex_unlock(&m->lock); 261 239 return copied; ··· 292 266 if (offset < 0) 293 267 break; 294 268 retval = offset; 295 - if (offset != file->f_pos) { 269 + if (offset != m->read_pos) { 296 270 while ((retval=traverse(m, offset)) == -EAGAIN) 297 271 ; 298 272 if (retval) { 299 273 /* with extreme prejudice... */ 300 274 file->f_pos = 0; 275 + m->read_pos = 0; 301 276 m->version = 0; 302 277 m->index = 0; 303 278 m->count = 0; 304 279 } else { 280 + m->read_pos = offset; 305 281 retval = file->f_pos = offset; 306 282 } 307 283 }
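
With m->read_pos tracked per open file, a seq_file no longer has to forbid pread(): a read at an offset other than the last position triggers a re-traverse instead of failing. A userspace check (the proc file chosen here is just a convenient seq_file-backed example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[64];
            ssize_t n;
            int fd = open("/proc/self/status", O_RDONLY);

            if (fd < 0)
                    return 1;
            n = pread(fd, buf, sizeof(buf) - 1, 16); /* read from offset 16 */
            if (n > 0) {
                    buf[n] = '\0';
                    printf("%s\n", buf);
            }
            close(fd);
            return 0;
    }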
+16 -1
fs/super.c
··· 82 82 * lock ordering than usbfs: 83 83 */ 84 84 lockdep_set_class(&s->s_lock, &type->s_lock_key); 85 - down_write(&s->s_umount); 85 + /* 86 + * sget() can have s_umount recursion. 87 + * 88 + * When it cannot find a suitable sb, it allocates a new 89 + * one (this one), and tries again to find a suitable old 90 + * one. 91 + * 92 + * In case that succeeds, it will acquire the s_umount 93 + * lock of the old one. Since these are clearly distinct 94 + * locks, and this object isn't exposed yet, there's no 95 + * risk of deadlocks. 96 + * 97 + * Annotate this by putting this lock in a different 98 + * subclass. 99 + */ 100 + down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING); 86 101 s->s_count = S_BIAS; 87 102 atomic_set(&s->s_active, 1); 88 103 mutex_init(&s->s_vfs_rename_mutex);
+6 -6
fs/timerfd.c
··· 186 186 BUILD_BUG_ON(TFD_CLOEXEC != O_CLOEXEC); 187 187 BUILD_BUG_ON(TFD_NONBLOCK != O_NONBLOCK); 188 188 189 - if (flags & ~(TFD_CLOEXEC | TFD_NONBLOCK)) 190 - return -EINVAL; 191 - if (clockid != CLOCK_MONOTONIC && 192 - clockid != CLOCK_REALTIME) 189 + if ((flags & ~TFD_CREATE_FLAGS) || 190 + (clockid != CLOCK_MONOTONIC && 191 + clockid != CLOCK_REALTIME)) 193 192 return -EINVAL; 194 193 195 194 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); ··· 200 201 hrtimer_init(&ctx->tmr, clockid, HRTIMER_MODE_ABS); 201 202 202 203 ufd = anon_inode_getfd("[timerfd]", &timerfd_fops, ctx, 203 - flags & (O_CLOEXEC | O_NONBLOCK)); 204 + flags & TFD_SHARED_FCNTL_FLAGS); 204 205 if (ufd < 0) 205 206 kfree(ctx); 206 207 ··· 218 219 if (copy_from_user(&ktmr, utmr, sizeof(ktmr))) 219 220 return -EFAULT; 220 221 221 - if (!timespec_valid(&ktmr.it_value) || 222 + if ((flags & ~TFD_SETTIME_FLAGS) || 223 + !timespec_valid(&ktmr.it_value) || 222 224 !timespec_valid(&ktmr.it_interval)) 223 225 return -EINVAL; 224 226
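
After this change, unknown bits in the timerfd flags arguments fail with -EINVAL instead of being silently ignored, so userspace should pass only the documented TFD_* flags. A minimal, well-formed use of the API:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/timerfd.h>

    int main(void)
    {
            struct itimerspec its = { .it_value = { .tv_sec = 1 } };
            uint64_t expirations;
            int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);

            if (fd < 0 || timerfd_settime(fd, 0, &its, NULL) < 0)
                    return 1;
            /* read() blocks until the timer fires, then yields a count */
            if (read(fd, &expirations, sizeof(expirations)) == sizeof(expirations))
                    printf("expired %llu time(s)\n",
                           (unsigned long long)expirations);
            close(fd);
            return 0;
    }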
+76 -3
fs/xfs/linux-2.6/xfs_buf.c
··· 166 166 } 167 167 168 168 /* 169 + * Mapping of multi-page buffers into contiguous virtual space 170 + */ 171 + 172 + typedef struct a_list { 173 + void *vm_addr; 174 + struct a_list *next; 175 + } a_list_t; 176 + 177 + static a_list_t *as_free_head; 178 + static int as_list_len; 179 + static DEFINE_SPINLOCK(as_lock); 180 + 181 + /* 182 + * Try to batch vunmaps because they are costly. 183 + */ 184 + STATIC void 185 + free_address( 186 + void *addr) 187 + { 188 + a_list_t *aentry; 189 + 190 + #ifdef CONFIG_XEN 191 + /* 192 + * Xen needs to be able to make sure it can get an exclusive 193 + * RO mapping of pages it wants to turn into a pagetable. If 194 + * a newly allocated page is also still being vmap()ed by xfs, 195 + * it will cause pagetable construction to fail. This is a 196 + * quick workaround to always eagerly unmap pages so that Xen 197 + * is happy. 198 + */ 199 + vunmap(addr); 200 + return; 201 + #endif 202 + 203 + aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT); 204 + if (likely(aentry)) { 205 + spin_lock(&as_lock); 206 + aentry->next = as_free_head; 207 + aentry->vm_addr = addr; 208 + as_free_head = aentry; 209 + as_list_len++; 210 + spin_unlock(&as_lock); 211 + } else { 212 + vunmap(addr); 213 + } 214 + } 215 + 216 + STATIC void 217 + purge_addresses(void) 218 + { 219 + a_list_t *aentry, *old; 220 + 221 + if (as_free_head == NULL) 222 + return; 223 + 224 + spin_lock(&as_lock); 225 + aentry = as_free_head; 226 + as_free_head = NULL; 227 + as_list_len = 0; 228 + spin_unlock(&as_lock); 229 + 230 + while ((old = aentry) != NULL) { 231 + vunmap(aentry->vm_addr); 232 + aentry = aentry->next; 233 + kfree(old); 234 + } 235 + } 236 + 237 + /* 169 238 * Internal xfs_buf_t object manipulation 170 239 */ 171 240 ··· 333 264 uint i; 334 265 335 266 if ((bp->b_flags & XBF_MAPPED) && (bp->b_page_count > 1)) 336 - vm_unmap_ram(bp->b_addr - bp->b_offset, bp->b_page_count); 267 + free_address(bp->b_addr - bp->b_offset); 337 268 338 269 for (i = 0; i < bp->b_page_count; i++) { 339 270 struct page *page = bp->b_pages[i]; ··· 455 386 bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset; 456 387 bp->b_flags |= XBF_MAPPED; 457 388 } else if (flags & XBF_MAPPED) { 458 - bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count, 459 - -1, PAGE_KERNEL); 389 + if (as_list_len > 64) 390 + purge_addresses(); 391 + bp->b_addr = vmap(bp->b_pages, bp->b_page_count, 392 + VM_MAP, PAGE_KERNEL); 460 393 if (unlikely(bp->b_addr == NULL)) 461 394 return -ENOMEM; 462 395 bp->b_addr += bp->b_offset; ··· 1743 1672 count++; 1744 1673 } 1745 1674 1675 + if (as_list_len > 0) 1676 + purge_addresses(); 1746 1677 if (count) 1747 1678 blk_run_address_space(target->bt_mapping); 1748 1679
+1 -1
include/asm-frv/pgtable.h
··· 478 478 #define __swp_type(x) (((x).val >> 2) & 0x1f) 479 479 #define __swp_offset(x) ((x).val >> 8) 480 480 #define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 2) | ((offset) << 8) }) 481 - #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte }) 481 + #define __pte_to_swp_entry(_pte) ((swp_entry_t) { (_pte).pte }) 482 482 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 483 483 484 484 static inline int pte_file(pte_t pte)
+2
include/drm/drmP.h
··· 1321 1321 struct drm_gem_object *drm_gem_object_alloc(struct drm_device *dev, 1322 1322 size_t size); 1323 1323 void drm_gem_object_handle_free(struct kref *kref); 1324 + void drm_gem_vm_open(struct vm_area_struct *vma); 1325 + void drm_gem_vm_close(struct vm_area_struct *vma); 1324 1326 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma); 1325 1327 1326 1328 static inline void
+1 -1
include/drm/drm_crtc.h
··· 609 609 extern char *drm_get_dvi_i_select_name(int val); 610 610 extern char *drm_get_tv_subconnector_name(int val); 611 611 extern char *drm_get_tv_select_name(int val); 612 - extern void drm_fb_release(struct file *filp); 612 + extern void drm_fb_release(struct drm_file *file_priv); 613 613 extern int drm_mode_group_init_legacy_group(struct drm_device *dev, struct drm_mode_group *group); 614 614 extern struct edid *drm_get_edid(struct drm_connector *connector, 615 615 struct i2c_adapter *adapter);
+5 -5
include/drm/drm_crtc_helper.h
··· 54 54 struct drm_display_mode *mode, 55 55 struct drm_display_mode *adjusted_mode); 56 56 /* Actually set the mode */ 57 - void (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode, 58 - struct drm_display_mode *adjusted_mode, int x, int y, 59 - struct drm_framebuffer *old_fb); 57 + int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode, 58 + struct drm_display_mode *adjusted_mode, int x, int y, 59 + struct drm_framebuffer *old_fb); 60 60 61 61 /* Move the crtc on the current fb to the given position *optional* */ 62 - void (*mode_set_base)(struct drm_crtc *crtc, int x, int y, 63 - struct drm_framebuffer *old_fb); 62 + int (*mode_set_base)(struct drm_crtc *crtc, int x, int y, 63 + struct drm_framebuffer *old_fb); 64 64 }; 65 65 66 66 struct drm_encoder_helper_funcs {
-2
include/linux/bio.h
··· 171 171 #define BIO_RW_FAILFAST_TRANSPORT 8 172 172 #define BIO_RW_FAILFAST_DRIVER 9 173 173 174 - #define BIO_RW_SYNC (BIO_RW_SYNCIO | BIO_RW_UNPLUG) 175 - 176 174 #define bio_rw_flagged(bio, flag) ((bio)->bi_rw & (1 << (flag))) 177 175 178 176 /*
+1
include/linux/blktrace_api.h
··· 15 15 BLK_TC_WRITE = 1 << 1, /* writes */ 16 16 BLK_TC_BARRIER = 1 << 2, /* barrier */ 17 17 BLK_TC_SYNC = 1 << 3, /* sync IO */ 18 + BLK_TC_SYNCIO = BLK_TC_SYNC, 18 19 BLK_TC_QUEUE = 1 << 4, /* queueing/merging */ 19 20 BLK_TC_REQUEUE = 1 << 5, /* requeueing */ 20 21 BLK_TC_ISSUE = 1 << 6, /* issue */
+2
include/linux/device.h
··· 147 147 extern struct device_driver *driver_find(const char *name, 148 148 struct bus_type *bus); 149 149 extern int driver_probe_done(void); 150 + extern int wait_for_device_probe(void); 151 + 150 152 151 153 /* sysfs interface for exporting driver attributes */ 152 154
+2
include/linux/dmaengine.h
··· 121 121 * @local: per-cpu pointer to a struct dma_chan_percpu 122 122 * @client-count: how many clients are using this channel 123 123 * @table_count: number of appearances in the mem-to-mem allocation table 124 + * @private: private data for certain client-channel associations 124 125 */ 125 126 struct dma_chan { 126 127 struct dma_device *device; ··· 135 134 struct dma_chan_percpu *local; 136 135 int client_count; 137 136 int table_count; 137 + void *private; 138 138 }; 139 139 140 140 /**
+1 -1
include/linux/firmware-map.h
··· 1 1 /* 2 2 * include/linux/firmware-map.h: 3 3 * Copyright (C) 2008 SUSE LINUX Products GmbH 4 - * by Bernhard Walle <bwalle@suse.de> 4 + * by Bernhard Walle <bernhard.walle@gmx.de> 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License v2.0 as published by
+15 -9
include/linux/fs.h
··· 54 54 #define MAY_ACCESS 16 55 55 #define MAY_OPEN 32 56 56 57 + /* 58 + * flags in file.f_mode. Note that FMODE_READ and FMODE_WRITE must correspond 59 + * to O_WRONLY and O_RDWR via the strange trick in __dentry_open() 60 + */ 61 + 57 62 /* file is open for reading */ 58 63 #define FMODE_READ ((__force fmode_t)1) 59 64 /* file is open for writing */ 60 65 #define FMODE_WRITE ((__force fmode_t)2) 61 66 /* file is seekable */ 62 67 #define FMODE_LSEEK ((__force fmode_t)4) 63 - /* file can be accessed using pread/pwrite */ 68 + /* file can be accessed using pread */ 64 69 #define FMODE_PREAD ((__force fmode_t)8) 65 - #define FMODE_PWRITE FMODE_PREAD /* These go hand in hand */ 70 + /* file can be accessed using pwrite */ 71 + #define FMODE_PWRITE ((__force fmode_t)16) 66 72 /* File is opened for execution with sys_execve / sys_uselib */ 67 - #define FMODE_EXEC ((__force fmode_t)16) 73 + #define FMODE_EXEC ((__force fmode_t)32) 68 74 /* File is opened with O_NDELAY (only set for block devices) */ 69 - #define FMODE_NDELAY ((__force fmode_t)32) 75 + #define FMODE_NDELAY ((__force fmode_t)64) 70 76 /* File is opened with O_EXCL (only set for block devices) */ 71 - #define FMODE_EXCL ((__force fmode_t)64) 77 + #define FMODE_EXCL ((__force fmode_t)128) 72 78 /* File is opened using open(.., 3, ..) and is writeable only for ioctls 73 79 (special hack for floppy.c) */ 74 80 - #define FMODE_WRITE_IOCTL ((__force fmode_t)128) 80 + #define FMODE_WRITE_IOCTL ((__force fmode_t)256) 75 81 76 82 /* 77 83 * Don't update ctime and mtime. ··· 93 87 #define WRITE 1 94 88 #define READA 2 /* read-ahead - don't block if no resources */ 95 89 #define SWRITE 3 /* for ll_rw_block() - wait for buffer lock */ 96 - #define READ_SYNC (READ | (1 << BIO_RW_SYNC)) 90 + #define READ_SYNC (READ | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG)) 97 91 #define READ_META (READ | (1 << BIO_RW_META)) 98 - #define WRITE_SYNC (WRITE | (1 << BIO_RW_SYNC)) 99 - #define SWRITE_SYNC (SWRITE | (1 << BIO_RW_SYNC)) 92 + #define WRITE_SYNC (WRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG)) 93 + #define SWRITE_SYNC (SWRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG)) 100 94 #define WRITE_BARRIER (WRITE | (1 << BIO_RW_BARRIER)) 101 95 #define DISCARD_NOBARRIER (1 << BIO_RW_DISCARD) 102 96 #define DISCARD_BARRIER ((1 << BIO_RW_DISCARD) | (1 << BIO_RW_BARRIER))
+2 -1
include/linux/jbd2.h
··· 1150 1150 extern int jbd2_journal_bmap(journal_t *, unsigned long, unsigned long long *); 1151 1151 extern int jbd2_journal_force_commit(journal_t *); 1152 1152 extern int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *inode); 1153 - extern int jbd2_journal_begin_ordered_truncate(struct jbd2_inode *inode, loff_t new_size); 1153 + extern int jbd2_journal_begin_ordered_truncate(journal_t *journal, 1154 + struct jbd2_inode *inode, loff_t new_size); 1154 1155 extern void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode); 1155 1156 extern void jbd2_journal_release_jbd_inode(journal_t *journal, struct jbd2_inode *jinode); 1156 1157
+5 -5
include/linux/kvm.h
··· 58 58 __u32 pad; 59 59 union { 60 60 char dummy[512]; /* reserving space */ 61 - #ifdef CONFIG_X86 61 + #ifdef __KVM_HAVE_PIT 62 62 struct kvm_pic_state pic; 63 63 #endif 64 - #if defined(CONFIG_X86) || defined(CONFIG_IA64) 64 + #ifdef __KVM_HAVE_IOAPIC 65 65 struct kvm_ioapic_state ioapic; 66 66 #endif 67 67 } chip; ··· 384 384 #define KVM_CAP_MP_STATE 14 385 385 #define KVM_CAP_COALESCED_MMIO 15 386 386 #define KVM_CAP_SYNC_MMU 16 /* Changes to host mmap are reflected in guest */ 387 - #if defined(CONFIG_X86)||defined(CONFIG_IA64) 387 + #ifdef __KVM_HAVE_DEVICE_ASSIGNMENT 388 388 #define KVM_CAP_DEVICE_ASSIGNMENT 17 389 389 #endif 390 390 #define KVM_CAP_IOMMU 18 391 - #if defined(CONFIG_X86) 391 + #ifdef __KVM_HAVE_MSI 392 392 #define KVM_CAP_DEVICE_MSI 20 393 393 #endif 394 394 /* Bug in KVM_SET_USER_MEMORY_REGION fixed: */ 395 395 #define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21 396 - #if defined(CONFIG_X86) 396 + #ifdef __KVM_HAVE_USER_NMI 397 397 #define KVM_CAP_USER_NMI 22 398 398 #endif 399 399
+1
include/linux/kvm_host.h
··· 285 285 struct kvm *kvm_arch_create_vm(void); 286 286 void kvm_arch_destroy_vm(struct kvm *kvm); 287 287 void kvm_free_all_assigned_devices(struct kvm *kvm); 288 + void kvm_arch_sync_events(struct kvm *kvm); 288 289 289 290 int kvm_cpu_get_interrupt(struct kvm_vcpu *v); 290 291 int kvm_cpu_has_interrupt(struct kvm_vcpu *v);
+18 -3
include/linux/mm.h
··· 1041 1041 typedef int (*work_fn_t)(unsigned long, unsigned long, void *); 1042 1042 extern void work_with_active_regions(int nid, work_fn_t work_fn, void *data); 1043 1043 extern void sparse_memory_present_with_active_regions(int nid); 1044 - #ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID 1045 - extern int early_pfn_to_nid(unsigned long pfn); 1046 - #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ 1047 1044 #endif /* CONFIG_ARCH_POPULATES_NODE_MAP */ 1045 + 1046 + #if !defined(CONFIG_ARCH_POPULATES_NODE_MAP) && \ 1047 + !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) 1048 + static inline int __early_pfn_to_nid(unsigned long pfn) 1049 + { 1050 + return 0; 1051 + } 1052 + #else 1053 + /* please see mm/page_alloc.c */ 1054 + extern int __meminit early_pfn_to_nid(unsigned long pfn); 1055 + #ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID 1056 + /* there is a per-arch backend function. */ 1057 + extern int __meminit __early_pfn_to_nid(unsigned long pfn); 1058 + #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ 1059 + #endif 1060 + 1048 1061 extern void set_dma_reserve(unsigned long new_dma_reserve); 1049 1062 extern void memmap_init_zone(unsigned long, int, unsigned long, 1050 1063 unsigned long, enum memmap_context); ··· 1172 1159 1173 1160 /* mm/page-writeback.c */ 1174 1161 int write_one_page(struct page *page, int wait); 1162 + void task_dirty_inc(struct task_struct *tsk); 1175 1163 1176 1164 /* readahead.c */ 1177 1165 #define VM_MAX_READAHEAD 128 /* kbytes */ ··· 1318 1304 1319 1305 extern void *alloc_locked_buffer(size_t size); 1320 1306 extern void free_locked_buffer(void *buffer, size_t size); 1307 + extern void release_locked_buffer(void *buffer, size_t size); 1321 1308 #endif /* __KERNEL__ */ 1322 1309 #endif /* _LINUX_MM_H */
+1 -1
include/linux/mmzone.h
··· 1071 1071 #endif /* CONFIG_SPARSEMEM */ 1072 1072 1073 1073 #ifdef CONFIG_NODES_SPAN_OTHER_NODES 1074 - #define early_pfn_in_nid(pfn, nid) (early_pfn_to_nid(pfn) == (nid)) 1074 + bool early_pfn_in_nid(unsigned long pfn, int nid); 1075 1075 #else 1076 1076 #define early_pfn_in_nid(pfn, nid) (1) 1077 1077 #endif
+5
include/linux/pci_ids.h
··· 1312 1312 #define PCI_DEVICE_ID_VIA_VT3351 0x0351 1313 1313 #define PCI_DEVICE_ID_VIA_VT3364 0x0364 1314 1314 #define PCI_DEVICE_ID_VIA_8371_0 0x0391 1315 + #define PCI_DEVICE_ID_VIA_6415 0x0415 1315 1316 #define PCI_DEVICE_ID_VIA_8501_0 0x0501 1316 1317 #define PCI_DEVICE_ID_VIA_82C561 0x0561 1317 1318 #define PCI_DEVICE_ID_VIA_82C586_1 0x0571 ··· 1445 1444 #define PCI_DEVICE_ID_DIGI_DF_M_E 0x0071 1446 1445 #define PCI_DEVICE_ID_DIGI_DF_M_IOM2_A 0x0072 1447 1446 #define PCI_DEVICE_ID_DIGI_DF_M_A 0x0073 1447 + #define PCI_DEVICE_ID_DIGI_NEO_8 0x00B1 1448 1448 #define PCI_DEVICE_ID_NEO_2DB9 0x00C8 1449 1449 #define PCI_DEVICE_ID_NEO_2DB9PRI 0x00C9 1450 1450 #define PCI_DEVICE_ID_NEO_2RJ45 0x00CA ··· 2323 2321 #define PCI_DEVICE_ID_INTEL_82378 0x0484 2324 2322 #define PCI_DEVICE_ID_INTEL_I960 0x0960 2325 2323 #define PCI_DEVICE_ID_INTEL_I960RM 0x0962 2324 + #define PCI_DEVICE_ID_INTEL_8257X_SOL 0x1062 2325 + #define PCI_DEVICE_ID_INTEL_82573E_SOL 0x1085 2326 + #define PCI_DEVICE_ID_INTEL_82573L_SOL 0x108F 2326 2327 #define PCI_DEVICE_ID_INTEL_82815_MC 0x1130 2327 2328 #define PCI_DEVICE_ID_INTEL_82815_CGC 0x1132 2328 2329 #define PCI_DEVICE_ID_INTEL_82092AA_0 0x1221
+1
include/linux/seq_file.h
··· 19 19 size_t from; 20 20 size_t count; 21 21 loff_t index; 22 + loff_t read_pos; 22 23 u64 version; 23 24 struct mutex lock; 24 25 const struct seq_operations *op;
+1
include/linux/serial_core.h
··· 296 296 #define UPF_HARDPPS_CD ((__force upf_t) (1 << 11)) 297 297 #define UPF_LOW_LATENCY ((__force upf_t) (1 << 13)) 298 298 #define UPF_BUGGY_UART ((__force upf_t) (1 << 14)) 299 + #define UPF_NO_TXEN_TEST ((__force upf_t) (1 << 15)) 299 300 #define UPF_MAGIC_MULTIPLIER ((__force upf_t) (1 << 16)) 300 301 #define UPF_CONS_FLOW ((__force upf_t) (1 << 23)) 301 302 #define UPF_SHARE_IRQ ((__force upf_t) (1 << 24))
+1
include/linux/slab.h
··· 127 127 void * __must_check __krealloc(const void *, size_t, gfp_t); 128 128 void * __must_check krealloc(const void *, size_t, gfp_t); 129 129 void kfree(const void *); 130 + void kzfree(const void *); 130 131 size_t ksize(const void *); 131 132 132 133 /*
+7
include/linux/spi/spi_bitbang.h
··· 83 83 * int getmiso(struct spi_device *); 84 84 * void spidelay(unsigned); 85 85 * 86 + * setsck()'s is_on parameter is a zero/nonzero boolean. 87 + * 88 + * setmosi()'s is_on parameter is a zero/nonzero boolean. 89 + * 90 + * getmiso() is required to return 0 or 1 only. Any other value is invalid 91 + * and will result in improper operation. 92 + * 86 93 * A non-inlined routine would call bitbang_txrx_*() routines. The 87 94 * main loop could easily compile down to a handful of instructions, 88 95 * especially if the delay is a NOP (to run at peak speed).
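For illustration only (not part of the patch): a minimal board-glue sketch that satisfies the contract spelled out above. The GPIO numbers are invented; gpio_set_value()/gpio_get_value() are the standard gpiolib accessors.

	#include <linux/gpio.h>

	#define SCK_GPIO	10	/* assumption: board-specific pin numbers */
	#define MOSI_GPIO	11
	#define MISO_GPIO	12

	static inline void setsck(struct spi_device *dev, int is_on)
	{
		gpio_set_value(SCK_GPIO, is_on ? 1 : 0);	/* zero/nonzero boolean */
	}

	static inline void setmosi(struct spi_device *dev, int is_on)
	{
		gpio_set_value(MOSI_GPIO, is_on ? 1 : 0);
	}

	static inline int getmiso(struct spi_device *dev)
	{
		return !!gpio_get_value(MISO_GPIO);	/* collapse to 0 or 1, as required */
	}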
+12 -4
include/linux/timerfd.h
··· 11 11 /* For O_CLOEXEC and O_NONBLOCK */ 12 12 #include <linux/fcntl.h> 13 13 14 - /* Flags for timerfd_settime. */ 14 + /* 15 + * CAREFUL: Check include/asm-generic/fcntl.h when defining 16 + * new flags, since they might collide with O_* ones. We want 17 + * to re-use O_* flags that couldn't possibly have a meaning 18 + * from timerfd, in order to leave a free define-space for 19 + * shared O_* flags. 20 + */ 15 21 #define TFD_TIMER_ABSTIME (1 << 0) 16 - 17 - /* Flags for timerfd_create. */ 18 22 #define TFD_CLOEXEC O_CLOEXEC 19 23 #define TFD_NONBLOCK O_NONBLOCK 20 24 25 + #define TFD_SHARED_FCNTL_FLAGS (TFD_CLOEXEC | TFD_NONBLOCK) 26 + /* Flags for timerfd_create. */ 27 + #define TFD_CREATE_FLAGS TFD_SHARED_FCNTL_FLAGS 28 + /* Flags for timerfd_settime. */ 29 + #define TFD_SETTIME_FLAGS TFD_TIMER_ABSTIME 21 30 22 31 #endif /* _LINUX_TIMERFD_H */ 23 -
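A hedged userspace sketch of how the regrouped flags are meant to be used: the TFD_SHARED_FCNTL_FLAGS go to timerfd_create(), TFD_TIMER_ABSTIME to timerfd_settime(). The helper name is made up; glibc wrappers are assumed.

	#include <sys/timerfd.h>
	#include <time.h>
	#include <unistd.h>

	/* arm an absolute CLOCK_MONOTONIC deadline; returns the fd or -1 */
	static int make_deadline_fd(time_t deadline_sec)
	{
		struct itimerspec its = { .it_value = { .tv_sec = deadline_sec } };
		int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC | TFD_NONBLOCK);

		if (fd < 0)
			return -1;
		if (timerfd_settime(fd, TFD_TIMER_ABSTIME, &its, NULL) < 0) {
			close(fd);
			return -1;
		}
		return fd;	/* read(2) yields an expiry count once the deadline passes */
	}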
+4
include/linux/vmalloc.h
··· 84 84 unsigned long flags, void *caller); 85 85 extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags, 86 86 unsigned long start, unsigned long end); 87 + extern struct vm_struct *__get_vm_area_caller(unsigned long size, 88 + unsigned long flags, 89 + unsigned long start, unsigned long end, 90 + void *caller); 87 91 extern struct vm_struct *get_vm_area_node(unsigned long size, 88 92 unsigned long flags, int node, 89 93 gfp_t gfp_mask);
+9 -4
init/do_mounts.c
··· 370 370 ssleep(root_delay); 371 371 } 372 372 373 - /* wait for the known devices to complete their probing */ 374 - while (driver_probe_done() != 0) 375 - msleep(100); 376 - async_synchronize_full(); 373 + /* 374 + * wait for the known devices to complete their probing 375 + * 376 + * Note: this is a potential source of long boot delays. 377 + * For example, it is not atypical to wait 5 seconds here 378 + * for the touchpad of a laptop to initialize. 379 + */ 380 + wait_for_device_probe(); 377 381 378 382 md_run_setup(); 379 383 ··· 403 399 while (driver_probe_done() != 0 || 404 400 (ROOT_DEV = name_to_dev_t(saved_root_name)) == 0) 405 401 msleep(100); 402 + async_synchronize_full(); 406 403 } 407 404 408 405 is_floppy = MAJOR(ROOT_DEV) == FLOPPY_MAJOR;
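For reference, wait_for_device_probe() is essentially the old open-coded loop folded into the driver core; a sketch of its probable shape (the waitqueue and counter names are assumptions about drivers/base/dd.c):

	void wait_for_device_probe(void)
	{
		/* block until no device probe is in flight ... */
		wait_event(probe_waitqueue, atomic_read(&probe_count) == 0);
		/* ... then drain asynchronously scheduled boot-time work */
		async_synchronize_full();
	}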
+3 -2
init/do_mounts_md.c
··· 281 281 */ 282 282 printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n"); 283 283 printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n"); 284 - while (driver_probe_done() < 0) 285 - msleep(100); 284 + 285 + wait_for_device_probe(); 286 + 286 287 fd = sys_open("/dev/md0", 0, 0); 287 288 if (fd >= 0) { 288 289 sys_ioctl(fd, RAID_AUTORUN, raid_autopart);
+1
kernel/Makefile
··· 51 51 obj-$(CONFIG_MODULES) += module.o 52 52 obj-$(CONFIG_KALLSYMS) += kallsyms.o 53 53 obj-$(CONFIG_PM) += power/ 54 + obj-$(CONFIG_FREEZER) += power/ 54 55 obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o 55 56 obj-$(CONFIG_KEXEC) += kexec.o 56 57 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
+1 -1
kernel/cgroup.c
··· 1122 1122 1123 1123 mutex_unlock(&cgroup_mutex); 1124 1124 1125 - kfree(root); 1126 1125 kill_litter_super(sb); 1126 + kfree(root); 1127 1127 } 1128 1128 1129 1129 static struct file_system_type cgroup_fs_type = {
+27 -24
kernel/futex.c
··· 1165 1165 u32 val, ktime_t *abs_time, u32 bitset, int clockrt) 1166 1166 { 1167 1167 struct task_struct *curr = current; 1168 + struct restart_block *restart; 1168 1169 DECLARE_WAITQUEUE(wait, curr); 1169 1170 struct futex_hash_bucket *hb; 1170 1171 struct futex_q q; ··· 1217 1216 1218 1217 if (!ret) 1219 1218 goto retry; 1220 - return ret; 1219 + goto out; 1221 1220 } 1222 1221 ret = -EWOULDBLOCK; 1223 - if (uval != val) 1224 - goto out_unlock_put_key; 1222 + if (unlikely(uval != val)) { 1223 + queue_unlock(&q, hb); 1224 + goto out_put_key; 1225 + } 1225 1226 1226 1227 /* Only actually queue if *uaddr contained val. */ 1227 1228 queue_me(&q, hb); ··· 1287 1284 */ 1288 1285 1289 1286 /* If we were woken (and unqueued), we succeeded, whatever. */ 1287 + ret = 0; 1290 1288 if (!unqueue_me(&q)) 1291 - return 0; 1289 + goto out_put_key; 1290 + ret = -ETIMEDOUT; 1292 1291 if (rem) 1293 - return -ETIMEDOUT; 1292 + goto out_put_key; 1294 1293 1295 1294 /* 1296 1295 * We expect signal_pending(current), but another thread may 1297 1296 * have handled it for us already. 1298 1297 */ 1298 + ret = -ERESTARTSYS; 1299 1299 if (!abs_time) 1300 - return -ERESTARTSYS; 1301 - else { 1302 - struct restart_block *restart; 1303 - restart = &current_thread_info()->restart_block; 1304 - restart->fn = futex_wait_restart; 1305 - restart->futex.uaddr = (u32 *)uaddr; 1306 - restart->futex.val = val; 1307 - restart->futex.time = abs_time->tv64; 1308 - restart->futex.bitset = bitset; 1309 - restart->futex.flags = 0; 1300 + goto out_put_key; 1310 1301 1311 - if (fshared) 1312 - restart->futex.flags |= FLAGS_SHARED; 1313 - if (clockrt) 1314 - restart->futex.flags |= FLAGS_CLOCKRT; 1315 - return -ERESTART_RESTARTBLOCK; 1316 - } 1302 + restart = &current_thread_info()->restart_block; 1303 + restart->fn = futex_wait_restart; 1304 + restart->futex.uaddr = (u32 *)uaddr; 1305 + restart->futex.val = val; 1306 + restart->futex.time = abs_time->tv64; 1307 + restart->futex.bitset = bitset; 1308 + restart->futex.flags = 0; 1317 1309 1318 - out_unlock_put_key: 1319 - queue_unlock(&q, hb); 1310 + if (fshared) 1311 + restart->futex.flags |= FLAGS_SHARED; 1312 + if (clockrt) 1313 + restart->futex.flags |= FLAGS_CLOCKRT; 1314 + 1315 + ret = -ERESTART_RESTARTBLOCK; 1316 + 1317 + out_put_key: 1320 1318 put_futex_key(fshared, &q.key); 1321 - 1322 1319 out: 1323 1320 return ret; 1324 1321 }
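The fields saved into the restart block above are consumed when the interrupted syscall is restarted; a sketch of the matching handler, reconstructed from those fields (not part of this hunk):

	static long futex_wait_restart(struct restart_block *restart)
	{
		u32 __user *uaddr = (u32 __user *)restart->futex.uaddr;
		int fshared = !!(restart->futex.flags & FLAGS_SHARED);
		int clockrt = !!(restart->futex.flags & FLAGS_CLOCKRT);
		ktime_t t;

		t.tv64 = restart->futex.time;
		restart->fn = do_no_restart_syscall;	/* one-shot restart */
		return (long)futex_wait(uaddr, fshared, restart->futex.val, &t,
					restart->futex.bitset, clockrt);
	}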
+30 -30
kernel/posix-cpu-timers.c
··· 681 681 } 682 682 683 683 /* 684 + * Sample a process (thread group) timer for the given group_leader task. 685 + * Must be called with tasklist_lock held for reading. 686 + */ 687 + static int cpu_timer_sample_group(const clockid_t which_clock, 688 + struct task_struct *p, 689 + union cpu_time_count *cpu) 690 + { 691 + struct task_cputime cputime; 692 + 693 + thread_group_cputimer(p, &cputime); 694 + switch (CPUCLOCK_WHICH(which_clock)) { 695 + default: 696 + return -EINVAL; 697 + case CPUCLOCK_PROF: 698 + cpu->cpu = cputime_add(cputime.utime, cputime.stime); 699 + break; 700 + case CPUCLOCK_VIRT: 701 + cpu->cpu = cputime.utime; 702 + break; 703 + case CPUCLOCK_SCHED: 704 + cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p); 705 + break; 706 + } 707 + return 0; 708 + } 709 + 710 + /* 684 711 * Guts of sys_timer_settime for CPU timers. 685 712 * This is called with the timer locked and interrupts disabled. 686 713 * If we return TIMER_RETRY, it's necessary to release the timer's lock ··· 768 741 if (CPUCLOCK_PERTHREAD(timer->it_clock)) { 769 742 cpu_clock_sample(timer->it_clock, p, &val); 770 743 } else { 771 - cpu_clock_sample_group(timer->it_clock, p, &val); 744 + cpu_timer_sample_group(timer->it_clock, p, &val); 772 745 } 773 746 774 747 if (old) { ··· 916 889 read_unlock(&tasklist_lock); 917 890 goto dead; 918 891 } else { 919 - cpu_clock_sample_group(timer->it_clock, p, &now); 892 + cpu_timer_sample_group(timer->it_clock, p, &now); 920 893 clear_dead = (unlikely(p->exit_state) && 921 894 thread_group_empty(p)); 922 895 } ··· 1271 1244 clear_dead_task(timer, now); 1272 1245 goto out_unlock; 1273 1246 } 1274 - cpu_clock_sample_group(timer->it_clock, p, &now); 1247 + cpu_timer_sample_group(timer->it_clock, p, &now); 1275 1248 bump_cpu_timer(timer, now); 1276 1249 /* Leave the tasklist_lock locked for the call below. */ 1277 1250 } ··· 1433 1406 } 1434 1407 spin_unlock(&timer->it_lock); 1435 1408 } 1436 - } 1437 - 1438 - /* 1439 - * Sample a process (thread group) timer for the given group_leader task. 1440 - * Must be called with tasklist_lock held for reading. 1441 - */ 1442 - static int cpu_timer_sample_group(const clockid_t which_clock, 1443 - struct task_struct *p, 1444 - union cpu_time_count *cpu) 1445 - { 1446 - struct task_cputime cputime; 1447 - 1448 - thread_group_cputimer(p, &cputime); 1449 - switch (CPUCLOCK_WHICH(which_clock)) { 1450 - default: 1451 - return -EINVAL; 1452 - case CPUCLOCK_PROF: 1453 - cpu->cpu = cputime_add(cputime.utime, cputime.stime); 1454 - break; 1455 - case CPUCLOCK_VIRT: 1456 - cpu->cpu = cputime.utime; 1457 - break; 1458 - case CPUCLOCK_SCHED: 1459 - cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p); 1460 - break; 1461 - } 1462 - return 0; 1463 1409 } 1464 1410 1465 1411 /*
+1 -1
kernel/power/Makefile
··· 3 3 EXTRA_CFLAGS += -DDEBUG 4 4 endif 5 5 6 - obj-y := main.o 6 + obj-$(CONFIG_PM) += main.o 7 7 obj-$(CONFIG_PM_SLEEP) += console.o 8 8 obj-$(CONFIG_FREEZER) += process.o 9 9 obj-$(CONFIG_HIBERNATION) += swsusp.o disk.o snapshot.o swap.o user.o
+6
kernel/power/console.c
··· 78 78 } 79 79 set_console(orig_fgconsole); 80 80 release_console_sem(); 81 + 82 + if (vt_waitactive(orig_fgconsole)) { 83 + pr_debug("Resume: Can't switch VCs."); 84 + return; 85 + } 86 + 81 87 kmsg_redirect = orig_kmsg; 82 88 } 83 89 #endif
+11
kernel/power/disk.c
··· 595 595 unsigned int flags; 596 596 597 597 /* 598 + * If the user said "noresume".. bail out early. 599 + */ 600 + if (noresume) 601 + return 0; 602 + 603 + /* 598 604 * name_to_dev_t() below takes a sysfs buffer mutex when sysfs 599 605 * is configured into the kernel. Since the regular hibernate 600 606 * trigger path is via sysfs which takes a buffer mutex before ··· 616 610 mutex_unlock(&pm_mutex); 617 611 return -ENOENT; 618 612 } 613 + /* 614 + * Some device discovery might still be in progress; we need 615 + * to wait for this to finish. 616 + */ 617 + wait_for_device_probe(); 619 618 swsusp_resume_device = name_to_dev_t(resume_file); 620 619 pr_debug("PM: Resume from partition %s\n", resume_file); 621 620 } else {
+3 -2
kernel/power/swap.c
··· 60 60 static int submit(int rw, pgoff_t page_off, struct page *page, 61 61 struct bio **bio_chain) 62 62 { 63 + const int bio_rw = rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 63 64 struct bio *bio; 64 65 65 66 bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1); ··· 81 80 bio_get(bio); 82 81 83 82 if (bio_chain == NULL) { 84 - submit_bio(rw | (1 << BIO_RW_SYNC), bio); 83 + submit_bio(bio_rw, bio); 85 84 wait_on_page_locked(page); 86 85 if (rw == READ) 87 86 bio_set_pages_dirty(bio); ··· 91 90 get_page(page); /* These pages are freed later */ 92 91 bio->bi_private = *bio_chain; 93 92 *bio_chain = bio; 94 - submit_bio(rw | (1 << BIO_RW_SYNC), bio); 93 + submit_bio(bio_rw, bio); 95 94 } 96 95 return 0; 97 96 }
+6 -6
kernel/power/user.c
··· 95 95 data->swap = swsusp_resume_device ? 96 96 swap_type_of(swsusp_resume_device, 0, NULL) : -1; 97 97 data->mode = O_RDONLY; 98 - error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 99 - if (error) 100 - pm_notifier_call_chain(PM_POST_RESTORE); 101 - } else { 102 - data->swap = -1; 103 - data->mode = O_WRONLY; 104 98 error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE); 105 99 if (error) 106 100 pm_notifier_call_chain(PM_POST_HIBERNATION); 101 + } else { 102 + data->swap = -1; 103 + data->mode = O_WRONLY; 104 + error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 105 + if (error) 106 + pm_notifier_call_chain(PM_POST_RESTORE); 107 107 } 108 108 if (error) 109 109 atomic_inc(&snapshot_device_available);
+9 -6
kernel/printk.c
··· 73 73 * driver system. 74 74 */ 75 75 static DECLARE_MUTEX(console_sem); 76 - static DECLARE_MUTEX(secondary_console_sem); 77 76 struct console *console_drivers; 78 77 EXPORT_SYMBOL_GPL(console_drivers); 79 78 ··· 890 891 printk("Suspending console(s) (use no_console_suspend to debug)\n"); 891 892 acquire_console_sem(); 892 893 console_suspended = 1; 894 + up(&console_sem); 893 895 } 894 896 895 897 void resume_console(void) 896 898 { 897 899 if (!console_suspend_enabled) 898 900 return; 901 + down(&console_sem); 899 902 console_suspended = 0; 900 903 release_console_sem(); 901 904 } ··· 913 912 void acquire_console_sem(void) 914 913 { 915 914 BUG_ON(in_interrupt()); 916 - if (console_suspended) { 917 - down(&secondary_console_sem); 918 - return; 919 - } 920 915 down(&console_sem); 916 + if (console_suspended) 917 + return; 921 918 console_locked = 1; 922 919 console_may_schedule = 1; 923 920 } ··· 925 926 { 926 927 if (down_trylock(&console_sem)) 927 928 return -1; 929 + if (console_suspended) { 930 + up(&console_sem); 931 + return -1; 932 + } 928 933 console_locked = 1; 929 934 console_may_schedule = 0; 930 935 return 0; ··· 982 979 unsigned wake_klogd = 0; 983 980 984 981 if (console_suspended) { 985 - up(&secondary_console_sem); 982 + up(&console_sem); 986 983 return; 987 984 } 988 985
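With secondary_console_sem gone, suspend/resume paths simply bracket the no-console window; a usage sketch (the enclosing function is hypothetical):

	static int platform_sleep(void)
	{
		suspend_console();	/* console_sem stays consistent; output is buffered */
		/* ... suspend devices, enter the sleep state, wake back up ... */
		resume_console();	/* re-takes console_sem and flushes pending output */
		return 0;
	}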
+12 -3
kernel/sched.c
··· 6944 6944 6945 6945 static void rq_attach_root(struct rq *rq, struct root_domain *rd) 6946 6946 { 6947 + struct root_domain *old_rd = NULL; 6947 6948 unsigned long flags; 6948 6949 6949 6950 spin_lock_irqsave(&rq->lock, flags); 6950 6951 6951 6952 if (rq->rd) { 6952 - struct root_domain *old_rd = rq->rd; 6953 + old_rd = rq->rd; 6953 6954 6954 6955 if (cpumask_test_cpu(rq->cpu, old_rd->online)) 6955 6956 set_rq_offline(rq); 6956 6957 6957 6958 cpumask_clear_cpu(rq->cpu, old_rd->span); 6958 6959 6959 - if (atomic_dec_and_test(&old_rd->refcount)) 6960 - free_rootdomain(old_rd); 6960 + /* 6961 + * If we don't want to free the old_rd yet then 6962 + * set old_rd to NULL to skip the freeing later 6963 + * in this function: 6964 + */ 6965 + if (!atomic_dec_and_test(&old_rd->refcount)) 6966 + old_rd = NULL; 6961 6967 } 6962 6968 6963 6969 atomic_inc(&rd->refcount); ··· 6974 6968 set_rq_online(rq); 6975 6969 6976 6970 spin_unlock_irqrestore(&rq->lock, flags); 6971 + 6972 + if (old_rd) 6973 + free_rootdomain(old_rd); 6977 6974 } 6978 6975 6979 6976 static int __init_refok init_rootdomain(struct root_domain *rd, bool bootmem)
+25
kernel/trace/Kconfig
··· 52 52 depends on HAVE_FUNCTION_TRACER 53 53 depends on DEBUG_KERNEL 54 54 select FRAME_POINTER 55 + select KALLSYMS 55 56 select TRACING 56 57 select CONTEXT_SWITCH_TRACER 57 58 help ··· 239 238 depends on DEBUG_KERNEL 240 239 select FUNCTION_TRACER 241 240 select STACKTRACE 241 + select KALLSYMS 242 242 help 243 243 This special tracer records the maximum stack footprint of the 244 244 kernel and displays it in debugfs/tracing/stack_trace. ··· 303 301 a series of tests are made to verify that the tracer is 304 302 functioning properly. It will do tests on all the configured 305 303 tracers of ftrace. 304 + 305 + config MMIOTRACE 306 + bool "Memory mapped IO tracing" 307 + depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI 308 + select TRACING 309 + help 310 + Mmiotrace traces Memory Mapped I/O access and is meant for 311 + debugging and reverse engineering. It is called from the ioremap 312 + implementation and works via page faults. Tracing is disabled by 313 + default and can be enabled at run-time. 314 + 315 + See Documentation/tracers/mmiotrace.txt. 316 + If you are not helping to develop drivers, say N. 317 + 318 + config MMIOTRACE_TEST 319 + tristate "Test module for mmiotrace" 320 + depends on MMIOTRACE && m 321 + help 322 + This is a dumb module for testing mmiotrace. It is very dangerous 323 + as it will write garbage to IO memory starting at a given address. 324 + However, it should be safe to use on e.g. an unused portion of VRAM. 325 + 326 + Say N, unless you absolutely know what you are doing. 306 327 307 328 endmenu
+5 -1
kernel/trace/ftrace.c
··· 2033 2033 static int start_graph_tracing(void) 2034 2034 { 2035 2035 struct ftrace_ret_stack **ret_stack_list; 2036 - int ret; 2036 + int ret, cpu; 2037 2037 2038 2038 ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE * 2039 2039 sizeof(struct ftrace_ret_stack *), ··· 2041 2041 2042 2042 if (!ret_stack_list) 2043 2043 return -ENOMEM; 2044 + 2045 + /* The cpu_boot init_task->ret_stack will never be freed */ 2046 + for_each_online_cpu(cpu) 2047 + ftrace_graph_init_task(idle_task(cpu)); 2044 2048 2045 2049 do { 2046 2050 ret = alloc_retstack_tasklist(ret_stack_list);
+10 -4
kernel/trace/trace_mmiotrace.c
··· 9 9 #include <linux/kernel.h> 10 10 #include <linux/mmiotrace.h> 11 11 #include <linux/pci.h> 12 + #include <asm/atomic.h> 12 13 13 14 #include "trace.h" 14 15 ··· 20 19 static struct trace_array *mmio_trace_array; 21 20 static bool overrun_detected; 22 21 static unsigned long prev_overruns; 22 + static atomic_t dropped_count; 23 23 24 24 static void mmio_reset_data(struct trace_array *tr) 25 25 { ··· 123 121 124 122 static unsigned long count_overruns(struct trace_iterator *iter) 125 123 { 126 - unsigned long cnt = 0; 124 + unsigned long cnt = atomic_xchg(&dropped_count, 0); 127 125 unsigned long over = ring_buffer_overruns(iter->tr->buffer); 128 126 129 127 if (over > prev_overruns) 130 - cnt = over - prev_overruns; 128 + cnt += over - prev_overruns; 131 129 prev_overruns = over; 132 130 return cnt; 133 131 } ··· 312 310 313 311 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry), 314 312 &irq_flags); 315 - if (!event) 313 + if (!event) { 314 + atomic_inc(&dropped_count); 316 315 return; 316 + } 317 317 entry = ring_buffer_event_data(event); 318 318 tracing_generic_entry_update(&entry->ent, 0, preempt_count()); 319 319 entry->ent.type = TRACE_MMIO_RW; ··· 342 338 343 339 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry), 344 340 &irq_flags); 345 - if (!event) 341 + if (!event) { 342 + atomic_inc(&dropped_count); 346 343 return; 344 + } 347 345 entry = ring_buffer_event_data(event); 348 346 tracing_generic_entry_update(&entry->ent, 0, preempt_count()); 349 347 entry->ent.type = TRACE_MMIO_MAP;
+19
kernel/trace/trace_selftest.c
··· 23 23 { 24 24 struct ring_buffer_event *event; 25 25 struct trace_entry *entry; 26 + unsigned int loops = 0; 26 27 27 28 while ((event = ring_buffer_consume(tr->buffer, cpu, NULL))) { 28 29 entry = ring_buffer_event_data(event); 29 30 31 + /* 32 + * The ring buffer is of size trace_buf_size; if 33 + * we loop more than that, something is wrong 34 + * with the ring buffer. 35 + */ 36 + if (loops++ > trace_buf_size) { 37 + printk(KERN_CONT ".. bad ring buffer "); 38 + goto failed; 39 + } 30 40 if (!trace_valid_entry(entry)) { 31 41 printk(KERN_CONT ".. invalid entry %d ", 32 42 entry->type); ··· 67 57 68 58 cnt = ring_buffer_entries(tr->buffer); 69 59 60 + /* 61 + * trace_test_buffer_cpu() runs a while loop to consume all data. 62 + * If the calling tracer is broken, and is constantly filling 63 + * the buffer, this will run forever, and hard lock the box. 64 + * We disable the ring buffer while we do this test to prevent 65 + * a hard lockup. 66 + */ 67 + tracing_off(); 70 68 for_each_possible_cpu(cpu) { 71 69 ret = trace_test_buffer_cpu(tr, cpu); 72 70 if (ret) 73 71 break; 74 72 } 73 + tracing_on(); 75 74 __raw_spin_unlock(&ftrace_max_lock); 76 75 local_irq_restore(flags); 77 76
+1 -1
lib/Kconfig.debug
··· 838 838 839 839 If unsure, say N. 840 840 841 - menuconfig BUILD_DOCSRC 841 + config BUILD_DOCSRC 842 842 bool "Build targets in Documentation/ tree" 843 843 depends on HEADERS_CHECK 844 844 help
+6 -1
mm/mlock.c
··· 660 660 return buffer; 661 661 } 662 662 663 - void free_locked_buffer(void *buffer, size_t size) 663 + void release_locked_buffer(void *buffer, size_t size) 664 664 { 665 665 unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT; 666 666 ··· 670 670 current->mm->locked_vm -= pgsz; 671 671 672 672 up_write(&current->mm->mmap_sem); 673 + } 674 + 675 + void free_locked_buffer(void *buffer, size_t size) 676 + { 677 + release_locked_buffer(buffer, size); 673 678 674 679 kfree(buffer); 675 680 }
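The split lets a caller drop the locked_vm accounting without giving up the allocation; a hypothetical caller:

	static void *unaccount_and_keep(void *buf, size_t size)
	{
		/* stop charging the pages against current->mm->locked_vm ... */
		release_locked_buffer(buf, size);
		/* ... but keep the kmalloc'd buffer alive for the caller */
		return buf;
	}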
+3 -10
mm/page-writeback.c
··· 240 240 } 241 241 EXPORT_SYMBOL_GPL(bdi_writeout_inc); 242 242 243 - static inline void task_dirty_inc(struct task_struct *tsk) 243 + void task_dirty_inc(struct task_struct *tsk) 244 244 { 245 245 prop_inc_single(&vm_dirties, &tsk->dirties); 246 246 } ··· 1230 1230 __inc_zone_page_state(page, NR_FILE_DIRTY); 1231 1231 __inc_bdi_stat(mapping->backing_dev_info, 1232 1232 BDI_RECLAIMABLE); 1233 + task_dirty_inc(current); 1233 1234 task_io_account_write(PAGE_CACHE_SIZE); 1234 1235 } 1235 1236 radix_tree_tag_set(&mapping->page_tree, ··· 1263 1262 * If the mapping doesn't provide a set_page_dirty a_op, then 1264 1263 * just fall through and assume that it wants buffer_heads. 1265 1264 */ 1266 - static int __set_page_dirty(struct page *page) 1265 + int set_page_dirty(struct page *page) 1267 1266 { 1268 1267 struct address_space *mapping = page_mapping(page); 1269 1268 ··· 1280 1279 return 1; 1281 1280 } 1282 1281 return 0; 1283 - } 1284 - 1285 - int set_page_dirty(struct page *page) 1286 - { 1287 - int ret = __set_page_dirty(page); 1288 - if (ret) 1289 - task_dirty_inc(current); 1290 - return ret; 1291 1282 } 1292 1283 EXPORT_SYMBOL(set_page_dirty); 1293 1284
+26 -3
mm/page_alloc.c
··· 2989 2989 * was used and there are no special requirements, this is a convenient 2990 2990 * alternative 2991 2991 */ 2992 - int __meminit early_pfn_to_nid(unsigned long pfn) 2992 + int __meminit __early_pfn_to_nid(unsigned long pfn) 2993 2993 { 2994 2994 int i; 2995 2995 ··· 3000 3000 if (start_pfn <= pfn && pfn < end_pfn) 3001 3001 return early_node_map[i].nid; 3002 3002 } 3003 - 3004 - return 0; 3003 + /* This is a memory hole */ 3004 + return -1; 3005 3005 } 3006 3006 #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ 3007 + 3008 + int __meminit early_pfn_to_nid(unsigned long pfn) 3009 + { 3010 + int nid; 3011 + 3012 + nid = __early_pfn_to_nid(pfn); 3013 + if (nid >= 0) 3014 + return nid; 3015 + /* just returns 0 */ 3016 + return 0; 3017 + } 3018 + 3019 + #ifdef CONFIG_NODES_SPAN_OTHER_NODES 3020 + bool __meminit early_pfn_in_nid(unsigned long pfn, int node) 3021 + { 3022 + int nid; 3023 + 3024 + nid = __early_pfn_to_nid(pfn); 3025 + if (nid >= 0 && nid != node) 3026 + return false; 3027 + return true; 3028 + } 3029 + #endif 3007 3030 3008 3031 /* Basic iterator support to walk early_node_map[] */ 3009 3032 #define for_each_active_range_index_in_nid(i, nid) \
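A sketch of the consumer side (the function is hypothetical): with __early_pfn_to_nid() returning -1 for holes, early_pfn_in_nid() stays true there, so only pfns provably owned by another node are skipped.

	static void __meminit init_node_memmap(unsigned long start_pfn,
					       unsigned long end_pfn, int nid)
	{
		unsigned long pfn;

		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
			/* false only when the pfn belongs to a different node */
			if (!early_pfn_in_nid(pfn, nid))
				continue;
			/* ... initialize the struct page for pfn on node nid ... */
		}
	}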
+1 -1
mm/page_io.c
··· 111 111 goto out; 112 112 } 113 113 if (wbc->sync_mode == WB_SYNC_ALL) 114 - rw |= (1 << BIO_RW_SYNC); 114 + rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 115 115 count_vm_event(PSWPOUT); 116 116 set_page_writeback(page); 117 117 unlock_page(page);
+2 -2
mm/swapfile.c
··· 635 635 636 636 if (!bdev) { 637 637 if (bdev_p) 638 - *bdev_p = sis->bdev; 638 + *bdev_p = bdget(sis->bdev->bd_dev); 639 639 640 640 spin_unlock(&swap_lock); 641 641 return i; ··· 647 647 struct swap_extent, list); 648 648 if (se->start_block == offset) { 649 649 if (bdev_p) 650 - *bdev_p = sis->bdev; 650 + *bdev_p = bdget(sis->bdev->bd_dev); 651 651 652 652 spin_unlock(&swap_lock); 653 653 bdput(bdev);
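Since swap_type_of() now hands back a referenced block device via bdget(), callers must balance it with bdput(); a caller-side sketch (the function name and the offset value are illustrative):

	static int check_resume_device(dev_t dev)
	{
		struct block_device *bdev;
		int type = swap_type_of(dev, 0, &bdev);

		if (type < 0)
			return type;	/* no matching swap device */
		/* ... inspect bdev ... */
		bdput(bdev);		/* drop the reference bdget() took */
		return type;
	}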
+20
mm/util.c
··· 129 129 } 130 130 EXPORT_SYMBOL(krealloc); 131 131 132 + /** 133 + * kzfree - like kfree but zero memory 134 + * @p: object to free memory of 135 + * 136 + * The memory the object @p points to is zeroed before being freed. 137 + * If @p is %NULL, kzfree() does nothing. 138 + */ 139 + void kzfree(const void *p) 140 + { 141 + size_t ks; 142 + void *mem = (void *)p; 143 + 144 + if (unlikely(ZERO_OR_NULL_PTR(mem))) 145 + return; 146 + ks = ksize(mem); 147 + memset(mem, 0, ks); 148 + kfree(mem); 149 + } 150 + EXPORT_SYMBOL(kzfree); 151 + 132 152 /* 133 153 * strndup_user - duplicate an existing string from user space 134 154 * @s: The string to duplicate
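A usage sketch (the type is invented): kzfree() is intended for buffers holding secrets, scrubbing the whole ksize()-rounded allocation before it is returned to the allocator.

	struct session_key {
		u8 bytes[32];
	};

	static void drop_session_key(struct session_key *key)
	{
		kzfree(key);	/* memset(key, 0, ksize(key)) followed by kfree(key) */
	}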
+10
mm/vmalloc.c
··· 1012 1012 void unmap_kernel_range(unsigned long addr, unsigned long size) 1013 1013 { 1014 1014 unsigned long end = addr + size; 1015 + 1016 + flush_cache_vunmap(addr, end); 1015 1017 vunmap_page_range(addr, end); 1016 1018 flush_tlb_kernel_range(addr, end); 1017 1019 } ··· 1107 1105 __builtin_return_address(0)); 1108 1106 } 1109 1107 EXPORT_SYMBOL_GPL(__get_vm_area); 1108 + 1109 + struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags, 1110 + unsigned long start, unsigned long end, 1111 + void *caller) 1112 + { 1113 + return __get_vm_area_node(size, flags, start, end, -1, GFP_KERNEL, 1114 + caller); 1115 + } 1110 1116 1111 1117 /** 1112 1118 * get_vm_area - reserve a contiguous kernel virtual area
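A hypothetical wrapper showing what the new _caller variant is for: an intermediate helper can attribute the area to its own caller in /proc/vmallocinfo instead of to itself, mirroring how __get_vm_area() passes __builtin_return_address(0):

	static struct vm_struct *my_map_area(unsigned long size)
	{
		return __get_vm_area_caller(size, VM_IOREMAP,
					    VMALLOC_START, VMALLOC_END,
					    __builtin_return_address(0));
	}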
+12 -16
mm/vmscan.c
··· 2057 2057 int pass, struct scan_control *sc) 2058 2058 { 2059 2059 struct zone *zone; 2060 - unsigned long nr_to_scan, ret = 0; 2061 - enum lru_list l; 2060 + unsigned long ret = 0; 2062 2061 2063 2062 for_each_zone(zone) { 2063 + enum lru_list l; 2064 2064 2065 2065 if (!populated_zone(zone)) 2066 2066 continue; 2067 - 2068 2067 if (zone_is_all_unreclaimable(zone) && prio != DEF_PRIORITY) 2069 2068 continue; 2070 2069 2071 2070 for_each_evictable_lru(l) { 2071 + enum zone_stat_item ls = NR_LRU_BASE + l; 2072 + unsigned long lru_pages = zone_page_state(zone, ls); 2073 + 2072 2074 /* For pass = 0, we don't shrink the active list */ 2073 - if (pass == 0 && 2074 - (l == LRU_ACTIVE || l == LRU_ACTIVE_FILE)) 2075 + if (pass == 0 && (l == LRU_ACTIVE_ANON || 2076 + l == LRU_ACTIVE_FILE)) 2075 2077 continue; 2076 2078 2077 - zone->lru[l].nr_scan += 2078 - (zone_page_state(zone, NR_LRU_BASE + l) 2079 - >> prio) + 1; 2079 + zone->lru[l].nr_scan += (lru_pages >> prio) + 1; 2080 2080 if (zone->lru[l].nr_scan >= nr_pages || pass > 3) { 2081 + unsigned long nr_to_scan; 2082 + 2081 2083 zone->lru[l].nr_scan = 0; 2082 - nr_to_scan = min(nr_pages, 2083 - zone_page_state(zone, 2084 - NR_LRU_BASE + l)); 2084 + nr_to_scan = min(nr_pages, lru_pages); 2085 2085 ret += shrink_list(l, nr_to_scan, zone, 2086 2086 sc, prio); 2087 2087 if (ret >= nr_pages) ··· 2089 2089 } 2090 2090 } 2091 2091 } 2092 - 2093 2092 return ret; 2094 2093 } 2095 2094 ··· 2111 2112 .may_swap = 0, 2112 2113 .swap_cluster_max = nr_pages, 2113 2114 .may_writepage = 1, 2114 - .swappiness = vm_swappiness, 2115 2115 .isolate_pages = isolate_pages_global, 2116 2116 }; 2117 2117 ··· 2144 2146 int prio; 2145 2147 2146 2148 /* Force reclaiming mapped pages in the passes #3 and #4 */ 2147 - if (pass > 2) { 2149 + if (pass > 2) 2148 2150 sc.may_swap = 1; 2149 - sc.swappiness = 100; 2150 - } 2151 2151 2152 2152 for (prio = DEF_PRIORITY; prio >= 0; prio--) { 2153 2153 unsigned long nr_to_scan = nr_pages - ret;
+2 -2
scripts/bootgraph.pl
··· 51 51 52 52 while (<>) { 53 53 my $line = $_; 54 - if ($line =~ /([0-9\.]+)\] calling ([a-zA-Z0-9\_]+)\+/) { 54 + if ($line =~ /([0-9\.]+)\] calling ([a-zA-Z0-9\_\.]+)\+/) { 55 55 my $func = $2; 56 56 if ($done == 0) { 57 57 $start{$func} = $1; ··· 87 87 $count = $count + 1; 88 88 } 89 89 90 - if ($line =~ /([0-9\.]+)\] initcall ([a-zA-Z0-9\_]+)\+.*returned/) { 90 + if ($line =~ /([0-9\.]+)\] initcall ([a-zA-Z0-9\_\.]+)\+.*returned/) { 91 91 if ($done == 0) { 92 92 $end{$2} = $1; 93 93 $maxtime = $1;
+158 -13
scripts/markup_oops.pl
··· 1 - #!/usr/bin/perl -w 1 + #!/usr/bin/perl 2 2 3 3 use File::Basename; 4 4 ··· 29 29 my $target = "0"; 30 30 my $function; 31 31 my $module = ""; 32 - my $func_offset; 32 + my $func_offset = 0; 33 33 my $vmaoffset = 0; 34 34 35 + my %regs; 36 + 37 + 38 + sub parse_x86_regs 39 + { 40 + my ($line) = @_; 41 + if ($line =~ /EAX: ([0-9a-f]+) EBX: ([0-9a-f]+) ECX: ([0-9a-f]+) EDX: ([0-9a-f]+)/) { 42 + $regs{"%eax"} = $1; 43 + $regs{"%ebx"} = $2; 44 + $regs{"%ecx"} = $3; 45 + $regs{"%edx"} = $4; 46 + } 47 + if ($line =~ /ESI: ([0-9a-f]+) EDI: ([0-9a-f]+) EBP: ([0-9a-f]+) ESP: ([0-9a-f]+)/) { 48 + $regs{"%esi"} = $1; 49 + $regs{"%edi"} = $2; 50 + $regs{"%esp"} = $4; 51 + } 52 + if ($line =~ /RAX: ([0-9a-f]+) RBX: ([0-9a-f]+) RCX: ([0-9a-f]+)/) { 53 + $regs{"%eax"} = $1; 54 + $regs{"%ebx"} = $2; 55 + $regs{"%ecx"} = $3; 56 + } 57 + if ($line =~ /RDX: ([0-9a-f]+) RSI: ([0-9a-f]+) RDI: ([0-9a-f]+)/) { 58 + $regs{"%edx"} = $1; 59 + $regs{"%esi"} = $2; 60 + $regs{"%edi"} = $3; 61 + } 62 + if ($line =~ /RBP: ([0-9a-f]+) R08: ([0-9a-f]+) R09: ([0-9a-f]+)/) { 63 + $regs{"%r08"} = $2; 64 + $regs{"%r09"} = $3; 65 + } 66 + if ($line =~ /R10: ([0-9a-f]+) R11: ([0-9a-f]+) R12: ([0-9a-f]+)/) { 67 + $regs{"%r10"} = $1; 68 + $regs{"%r11"} = $2; 69 + $regs{"%r12"} = $3; 70 + } 71 + if ($line =~ /R13: ([0-9a-f]+) R14: ([0-9a-f]+) R15: ([0-9a-f]+)/) { 72 + $regs{"%r13"} = $1; 73 + $regs{"%r14"} = $2; 74 + $regs{"%r15"} = $3; 75 + } 76 + } 77 + 78 + sub reg_name 79 + { 80 + my ($reg) = @_; 81 + $reg =~ s/r(.)x/e\1x/; 82 + $reg =~ s/r(.)i/e\1i/; 83 + $reg =~ s/r(.)p/e\1p/; 84 + return $reg; 85 + } 86 + 87 + sub process_x86_regs 88 + { 89 + my ($line, $cntr) = @_; 90 + my $str = ""; 91 + if (length($line) < 40) { 92 + return ""; # not an asm instruction 93 + } 94 + 95 + # find the arguments to the instruction 96 + if ($line =~ /([0-9a-zA-Z\,\%\(\)\-\+]+)$/) { 97 + $lastword = $1; 98 + } else { 99 + return ""; 100 + } 101 + 102 + # we need to find the registers that get clobbered, 103 + # since their value is no longer relevant for previous 104 + # instructions in the stream. 105 + 106 + $clobber = $lastword; 107 + # first, remove all memory operands, they're read only 108 + $clobber =~ s/\([a-z0-9\%\,]+\)//g; 109 + # then, remove everything before the comma, that's the read part 110 + $clobber =~ s/.*\,//g; 111 + 112 + # if this is the instruction that faulted, we haven't actually done 113 + # the write yet... nothing is clobbered. 114 + if ($cntr == 0) { 115 + $clobber = ""; 116 + } 117 + 118 + foreach $reg (keys(%regs)) { 119 + my $clobberprime = reg_name($clobber); 120 + my $lastwordprime = reg_name($lastword); 121 + my $val = $regs{$reg}; 122 + if ($val =~ /^[0]+$/) { 123 + $val = "0"; 124 + } else { 125 + $val =~ s/^0*//; 126 + } 127 + 128 + # first check if we're clobbering this register; if we do 129 + # we print it with a =>, and then delete its value 130 + if ($clobber =~ /$reg/ || $clobberprime =~ /$reg/) { 131 + if (length($val) > 0) { 132 + $str = $str . " $reg => $val "; 133 + } 134 + $regs{$reg} = ""; 135 + $val = ""; 136 + } 137 + # now check if we're reading this register 138 + if ($lastword =~ /$reg/ || $lastwordprime =~ /$reg/) { 139 + if (length($val) > 0) { 140 + $str = $str . " $reg = $val ";
141 + } 142 + } 143 + } 144 + return $str; 145 + } 146 + 147 + # parse the oops 35 148 while (<STDIN>) { 36 149 my $line = $_; 37 150 if ($line =~ /EIP: 0060:\[\<([a-z0-9]+)\>\]/) { 38 151 $target = $1; 39 152 } 153 + if ($line =~ /RIP: 0010:\[\<([a-z0-9]+)\>\]/) { 154 + $target = $1; 155 + } 40 156 if ($line =~ /EIP is at ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]/) { 157 + $function = $1; 158 + $func_offset = $2; 159 + } 160 + if ($line =~ /RIP: 0010:\[\<[0-9a-f]+\>\] \[\<[0-9a-f]+\>\] ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]/) { 41 161 $function = $1; 42 162 $func_offset = $2; 43 163 } ··· 166 46 if ($line =~ /EIP is at ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]+\W\[([a-zA-Z0-9\_\-]+)\]/) { 167 47 $module = $3; 168 48 } 49 + if ($line =~ /RIP: 0010:\[\<[0-9a-f]+\>\] \[\<[0-9a-f]+\>\] ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]+\W\[([a-zA-Z0-9\_\-]+)\]/) { 50 + $module = $3; 51 + } 52 + parse_x86_regs($line); 169 53 } 170 54 171 55 my $decodestart = hex($target) - hex($func_offset); 172 - my $decodestop = $decodestart + 8192; 56 + my $decodestop = hex($target) + 8192; 173 57 if ($target eq "0") { 174 58 print "No oops found!\n"; 175 59 print "Usage: \n"; ··· 208 84 my $state = 0; 209 85 my $center = 0; 210 86 my @lines; 87 + my @reglines; 211 88 212 89 sub InRange { 213 90 my ($address, $target) = @_; ··· 313 188 314 189 my $i; 315 190 316 - my $fulltext = ""; 317 - $i = $start; 318 - while ($i < $finish) { 319 - if ($i == $center) { 320 - $fulltext = $fulltext . "*$lines[$i] <----- faulting instruction\n"; 321 - } else { 322 - $fulltext = $fulltext . " $lines[$i]\n"; 323 - } 324 - $i = $i +1; 191 + 192 + # start annotating the registers in the asm. 193 + # this goes from the oopsing point back, so that the annotator 194 + # can track (opportunistically) which registers got written and 195 + # whose value is no longer relevant. 196 + 197 + $i = $center; 198 + while ($i >= $start) { 199 + $reglines[$i] = process_x86_regs($lines[$i], $center - $i); 200 + $i = $i - 1; 325 201 } 326 202 327 - print $fulltext; 203 + $i = $start; 204 + while ($i < $finish) { 205 + my $line; 206 + if ($i == $center) { 207 + $line = "*$lines[$i] "; 208 + } else { 209 + $line = " $lines[$i] "; 210 + } 211 + print $line; 212 + if (defined($reglines[$i]) && length($reglines[$i]) > 0) { 213 + my $c = 60 - length($line); 214 + while ($c > 0) { print " "; $c = $c - 1; }; 215 + print "| $reglines[$i]"; 216 + } 217 + if ($i == $center) { 218 + print "<--- faulting instruction"; 219 + } 220 + print "\n"; 221 + $i = $i +1; 222 + }
+1
scripts/mod/file2alias.c
··· 210 210 static int do_hid_entry(const char *filename, 211 211 struct hid_device_id *id, char *alias) 212 212 { 213 + id->bus = TO_NATIVE(id->bus); 213 214 id->vendor = TO_NATIVE(id->vendor); 214 215 id->product = TO_NATIVE(id->product); 215 216
+8
scripts/package/mkspec
··· 86 86 echo 'cp System.map $RPM_BUILD_ROOT'"/boot/System.map-$KERNELRELEASE" 87 87 88 88 echo 'cp .config $RPM_BUILD_ROOT'"/boot/config-$KERNELRELEASE" 89 + 90 + echo "%ifnarch ppc64" 91 + echo 'cp vmlinux vmlinux.orig' 92 + echo 'bzip2 -9 vmlinux' 93 + echo 'mv vmlinux.bz2 $RPM_BUILD_ROOT'"/boot/vmlinux-$KERNELRELEASE.bz2" 94 + echo 'mv vmlinux.orig vmlinux' 95 + echo "%endif" 96 + 89 97 echo "" 90 98 echo "%clean" 91 99 echo '#echo -rf $RPM_BUILD_ROOT'
+1 -8
scripts/setlocalversion
··· 58 58 # Check for svn and a svn repo. 59 59 if rev=`svn info 2>/dev/null | grep '^Last Changed Rev'`; then 60 60 rev=`echo $rev | awk '{print $NF}'` 61 - changes=`svn status 2>/dev/null | grep '^[AMD]' | wc -l` 62 - 63 - # Are there uncommitted changes? 64 - if [ $changes != 0 ]; then 65 - printf -- '-svn%s%s' "$rev" -dirty 66 - else 67 - printf -- '-svn%s' "$rev" 68 - fi 61 + printf -- '-svn%s' "$rev" 69 62 70 63 # All done with svn 71 64 exit
+9 -3
scripts/tags.sh
··· 76 76 77 77 all_kconfigs() 78 78 { 79 - find_sources $ALLSOURCE_ARCHS 'Kconfig*' 79 + for arch in $ALLSOURCE_ARCHS; do 80 + find_sources $arch 'Kconfig*' 81 + done 82 + find_other_sources 'Kconfig*' 80 83 } 81 84 82 85 all_defconfigs() ··· 102 99 -I ____cacheline_internodealigned_in_smp \ 103 100 -I EXPORT_SYMBOL,EXPORT_SYMBOL_GPL \ 104 101 --extra=+f --c-kinds=+px \ 105 - --regex-asm='/^ENTRY\(([^)]*)\).*/\1/' 102 + --regex-asm='/^ENTRY\(([^)]*)\).*/\1/' \ 103 + --regex-c='/^SYSCALL_DEFINE[[:digit:]]?\(([^,)]*).*/sys_\1/' 106 104 107 105 all_kconfigs | xargs $1 -a \ 108 106 --langdef=kconfig --language-force=kconfig \ ··· 121 117 122 118 emacs() 123 119 { 124 - all_sources | xargs $1 -a 120 + all_sources | xargs $1 -a \ 121 + --regex='/^ENTRY(\([^)]*\)).*/\1/' \ 122 + --regex='/^SYSCALL_DEFINE[0-9]?(\([^,)]*\).*/sys_\1/' 125 123 126 124 all_kconfigs | xargs $1 -a \ 127 125 --regex='/^[ \t]*\(\(menu\)*config\)[ \t]+\([a-zA-Z0-9_]+\)/\3/'
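The new --regex-c rule is needed because syscalls are no longer defined as plain sys_* functions but through the SYSCALL_DEFINEn wrappers; a schematic (invented) definition that the regex would index as sys_example:

	SYSCALL_DEFINE2(example, int, fd, int, flags)
	{
		/* tags generated above resolve "sys_example" to this definition */
		return 0;
	}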
+1 -1
sound/core/jack.c
··· 47 47 int err; 48 48 49 49 snprintf(jack->name, sizeof(jack->name), "%s %s", 50 - card->longname, jack->id); 50 + card->shortname, jack->id); 51 51 jack->input_dev->name = jack->name; 52 52 53 53 /* Default to the sound card device. */
+2 -6
sound/pci/hda/hda_intel.c
··· 1947 1947 return 0; 1948 1948 } 1949 1949 1950 - static int azx_resume_early(struct pci_dev *pci) 1951 - { 1952 - return pci_restore_state(pci); 1953 - } 1954 - 1955 1950 static int azx_resume(struct pci_dev *pci) 1956 1951 { 1957 1952 struct snd_card *card = pci_get_drvdata(pci); 1958 1953 struct azx *chip = card->private_data; 1959 1954 1955 + pci_set_power_state(pci, PCI_D0); 1956 + pci_restore_state(pci); 1960 1957 if (pci_enable_device(pci) < 0) { 1961 1958 printk(KERN_ERR "hda-intel: pci_enable_device failed, " 1962 1959 "disabling device\n"); ··· 2465 2468 .remove = __devexit_p(azx_remove), 2466 2469 #ifdef CONFIG_PM 2467 2470 .suspend = azx_suspend, 2468 - .resume_early = azx_resume_early, 2469 2471 .resume = azx_resume, 2470 2472 #endif 2471 2473 };
+4 -13
sound/pci/oxygen/virtuoso.c
··· 26 26 * SPI 0 -> 1st PCM1796 (front) 27 27 * SPI 1 -> 2nd PCM1796 (surround) 28 28 * SPI 2 -> 3rd PCM1796 (center/LFE) 29 - * SPI 4 -> 4th PCM1796 (back) and EEPROM self-destruct (do not use!) 29 + * SPI 4 -> 4th PCM1796 (back) 30 30 * 31 31 * GPIO 2 -> M0 of CS5381 32 32 * GPIO 3 -> M1 of CS5381 ··· 207 207 static inline void pcm1796_write_spi(struct oxygen *chip, unsigned int codec, 208 208 u8 reg, u8 value) 209 209 { 210 - /* 211 - * We don't want to do writes on SPI 4 because the EEPROM, which shares 212 - * the same pin, might get confused and broken. We'd better take care 213 - * that the driver works with the default register values ... 214 - */ 215 - #if 0 216 210 /* maps ALSA channel pair number to SPI output */ 217 211 static const u8 codec_map[4] = { 218 212 0, 1, 2, 4 ··· 217 223 (codec_map[codec] << OXYGEN_SPI_CODEC_SHIFT) | 218 224 OXYGEN_SPI_CEN_LATCH_CLOCK_HI, 219 225 (reg << 8) | value); 220 - #endif 221 226 } 222 227 223 228 static inline void pcm1796_write_i2c(struct oxygen *chip, unsigned int codec, ··· 750 757 751 758 static int xonar_d2_control_filter(struct snd_kcontrol_new *template) 752 759 { 753 - if (!strncmp(template->name, "Master Playback ", 16)) 754 - /* disable volume/mute because they would require SPI writes */ 755 - return 1; 756 760 if (!strncmp(template->name, "CD Capture ", 11)) 757 761 /* CD in is actually connected to the video in pin */ 758 762 template->private_value ^= AC97_CD ^ AC97_VIDEO; ··· 840 850 .dac_volume_min = 0x0f, 841 851 .dac_volume_max = 0xff, 842 852 .misc_flags = OXYGEN_MISC_MIDI, 843 - .function_flags = OXYGEN_FUNCTION_SPI, 844 - .dac_i2s_format = OXYGEN_I2S_FORMAT_I2S, 853 + .function_flags = OXYGEN_FUNCTION_SPI | 854 + OXYGEN_FUNCTION_ENABLE_SPI_4_5, 855 + .dac_i2s_format = OXYGEN_I2S_FORMAT_LJUST, 845 856 .adc_i2s_format = OXYGEN_I2S_FORMAT_LJUST, 846 857 }; 847 858
+11 -9
sound/usb/usbaudio.c
··· 2524 2524 * build the rate table and bitmap flags 2525 2525 */ 2526 2526 int r, idx; 2527 - unsigned int nonzero_rates = 0; 2528 2527 2529 2528 fp->rate_table = kmalloc(sizeof(int) * nr_rates, GFP_KERNEL); 2530 2529 if (fp->rate_table == NULL) { ··· 2531 2532 return -1; 2532 2533 } 2533 2534 2534 - fp->nr_rates = nr_rates; 2535 - fp->rate_min = fp->rate_max = combine_triple(&fmt[8]); 2535 + fp->nr_rates = 0; 2536 + fp->rate_min = fp->rate_max = 0; 2536 2537 for (r = 0, idx = offset + 1; r < nr_rates; r++, idx += 3) { 2537 2538 unsigned int rate = combine_triple(&fmt[idx]); 2539 + if (!rate) 2540 + continue; 2538 2541 /* C-Media CM6501 mislabels its 96 kHz altsetting */ 2539 2542 if (rate == 48000 && nr_rates == 1 && 2540 - chip->usb_id == USB_ID(0x0d8c, 0x0201) && 2543 + (chip->usb_id == USB_ID(0x0d8c, 0x0201) || 2544 + chip->usb_id == USB_ID(0x0d8c, 0x0102)) && 2541 2545 fp->altsetting == 5 && fp->maxpacksize == 392) 2542 2546 rate = 96000; 2543 - fp->rate_table[r] = rate; 2544 - nonzero_rates |= rate; 2545 - if (rate < fp->rate_min) 2547 + fp->rate_table[fp->nr_rates] = rate; 2548 + if (!fp->rate_min || rate < fp->rate_min) 2546 2549 fp->rate_min = rate; 2547 - else if (rate > fp->rate_max) 2550 + if (!fp->rate_max || rate > fp->rate_max) 2548 2551 fp->rate_max = rate; 2549 2552 fp->rates |= snd_pcm_rate_to_rate_bit(rate); 2553 + fp->nr_rates++; 2550 2554 } 2551 - if (!nonzero_rates) { 2555 + if (!fp->nr_rates) { 2552 2556 hwc_debug("All rates were zero. Skipping format!\n"); 2553 2557 return -1; 2554 2558 }
+1
sound/usb/usbmidi.c
··· 1625 1625 } 1626 1626 1627 1627 ep_info.out_ep = get_endpoint(hostif, 2)->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK; 1628 + ep_info.out_interval = 0; 1628 1629 ep_info.out_cables = endpoint->out_cables & 0x5555; 1629 1630 err = snd_usbmidi_out_endpoint_create(umidi, &ep_info, &umidi->endpoints[0]); 1630 1631 if (err < 0)
+2 -4
virt/kvm/iommu.c
··· 73 73 { 74 74 int i, r = 0; 75 75 76 - down_read(&kvm->slots_lock); 77 76 for (i = 0; i < kvm->nmemslots; i++) { 78 77 r = kvm_iommu_map_pages(kvm, kvm->memslots[i].base_gfn, 79 78 kvm->memslots[i].npages); 80 79 if (r) 81 80 break; 82 81 } 83 - up_read(&kvm->slots_lock); 82 + 84 83 return r; 85 84 } 86 85 ··· 189 190 static int kvm_iommu_unmap_memslots(struct kvm *kvm) 190 191 { 191 192 int i; 192 - down_read(&kvm->slots_lock); 193 + 193 194 for (i = 0; i < kvm->nmemslots; i++) { 194 195 kvm_iommu_put_pages(kvm, kvm->memslots[i].base_gfn, 195 196 kvm->memslots[i].npages); 196 197 } 197 - up_read(&kvm->slots_lock); 198 198 199 199 return 0; 200 200 }
+33 -10
virt/kvm/kvm_main.c
··· 173 173 assigned_dev->host_irq_disabled = false; 174 174 } 175 175 mutex_unlock(&assigned_dev->kvm->lock); 176 - kvm_put_kvm(assigned_dev->kvm); 177 176 } 178 177 179 178 static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id) 180 179 { 181 180 struct kvm_assigned_dev_kernel *assigned_dev = 182 181 (struct kvm_assigned_dev_kernel *) dev_id; 183 - 184 - kvm_get_kvm(assigned_dev->kvm); 185 182 186 183 schedule_work(&assigned_dev->interrupt_work); 187 184 ··· 210 213 } 211 214 } 212 215 216 + /* This function implicitly holds the kvm->lock mutex due to cancel_work_sync() */ 213 217 static void kvm_free_assigned_irq(struct kvm *kvm, 214 218 struct kvm_assigned_dev_kernel *assigned_dev) 215 219 { ··· 226 228 if (!assigned_dev->irq_requested_type) 227 229 return; 228 230 229 - if (cancel_work_sync(&assigned_dev->interrupt_work)) 230 - /* We had pending work. That means we will have to take 231 - * care of kvm_put_kvm. 232 - */ 233 - kvm_put_kvm(kvm); 231 + /* 232 + * In kvm_free_assigned_irq(), cancel_work_sync() returns true if: 233 + * 1. work is scheduled, and then cancelled. 234 + * 2. work callback is executed. 235 + * 236 + * The first case ensures that the irq is disabled and no more events 237 + * will happen. But for the second one, the irq may be enabled (e.g. 238 + * for MSI). So we disable irq here to prevent further events. 239 + * 240 + * Notice this may result in a nested disable if the interrupt type is 241 + * INTx, but that's OK since we are going to free it. 242 + * 243 + * If this function is part of VM destruction, make sure the kvm state 244 + * is still valid at this point, since we may also have to wait for 245 + * interrupt_work to complete. 246 + */ 247 + disable_irq_nosync(assigned_dev->host_irq); 248 + cancel_work_sync(&assigned_dev->interrupt_work); 234 249 235 250 free_irq(assigned_dev->host_irq, (void *)assigned_dev); 236 251 ··· 296 285 297 286 if (irqchip_in_kernel(kvm)) { 298 287 if (!msi2intx && 299 - adev->irq_requested_type & KVM_ASSIGNED_DEV_HOST_MSI) { 300 - free_irq(adev->host_irq, (void *)kvm); 288 + (adev->irq_requested_type & KVM_ASSIGNED_DEV_HOST_MSI)) { 289 + free_irq(adev->host_irq, (void *)adev); 301 290 pci_disable_msi(adev->dev); 302 291 } 303 292 ··· 466 455 struct kvm_assigned_dev_kernel *match; 467 456 struct pci_dev *dev; 468 457 458 + down_read(&kvm->slots_lock); 469 459 mutex_lock(&kvm->lock); 470 460 471 461 match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head, ··· 528 516 529 517 out: 530 518 mutex_unlock(&kvm->lock); 519 + up_read(&kvm->slots_lock); 531 520 return r; 532 521 out_list_del: 533 522 list_del(&match->list); ··· 540 527 out_free: 541 528 kfree(match); 542 529 mutex_unlock(&kvm->lock); 530 + up_read(&kvm->slots_lock); 543 531 return r; 544 532 } 545 533 #endif ··· 803 789 return young; 804 790 } 805 791 792 + static void kvm_mmu_notifier_release(struct mmu_notifier *mn, 793 + struct mm_struct *mm) 794 + { 795 + struct kvm *kvm = mmu_notifier_to_kvm(mn); 796 + kvm_arch_flush_shadow(kvm); 797 + } 798 + 806 799 static const struct mmu_notifier_ops kvm_mmu_notifier_ops = { 807 800 .invalidate_page = kvm_mmu_notifier_invalidate_page, 808 801 .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start, 809 802 .invalidate_range_end = kvm_mmu_notifier_invalidate_range_end, 810 803 .clear_flush_young = kvm_mmu_notifier_clear_flush_young, 804 + .release = kvm_mmu_notifier_release, 811 805 }; 812 806 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ 813 807 ··· 905 883 { 906 884 struct mm_struct *mm = kvm->mm; 907 886 + kvm_arch_sync_events(kvm); 908 887 spin_lock(&kvm_lock); 909 888 list_del(&kvm->vm_list); 910 889 spin_unlock(&kvm_lock);
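The ordering argued for in the kvm_free_assigned_irq() comment reduces to a generic teardown pattern; a sketch, not the exact kernel code:

	static void teardown_assigned_irq(struct kvm_assigned_dev_kernel *dev,
					  int host_irq)
	{
		disable_irq_nosync(host_irq);		/* block further hard-irq events */
		cancel_work_sync(&dev->interrupt_work);	/* flush or cancel the work item */
		free_irq(host_irq, dev);		/* now safe to release the line */
	}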