Merge branches 'release', 'bugzilla-12011', 'bugzilla-12632', 'misc' and 'suspend' into release

Len Brown 5acfac5a 5423a0cb

+5021 -1544
-1
CREDITS
··· 2166 2167 N: Pavel Machek 2168 E: pavel@ucw.cz 2169 - E: pavel@suse.cz 2170 D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd 2171 D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB, 2172 D: work on suspend-to-ram/disk, killing duplicates from ioctl32
··· 2166 2167 N: Pavel Machek 2168 E: pavel@ucw.cz 2169 D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd 2170 D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB, 2171 D: work on suspend-to-ram/disk, killing duplicates from ioctl32
+1 -1
Documentation/ABI/testing/sysfs-firmware-memmap
··· 1 What: /sys/firmware/memmap/ 2 Date: June 2008 3 - Contact: Bernhard Walle <bwalle@suse.de> 4 Description: 5 On all platforms, the firmware provides a memory map which the 6 kernel reads. The resources from that memory map are registered
··· 1 What: /sys/firmware/memmap/ 2 Date: June 2008 3 + Contact: Bernhard Walle <bernhard.walle@gmx.de> 4 Description: 5 On all platforms, the firmware provides a memory map which the 6 kernel reads. The resources from that memory map are registered
+1 -1
Documentation/PCI/PCIEBUS-HOWTO.txt
··· 93 94 int pcie_port_service_register(struct pcie_port_service_driver *new) 95 96 - This API replaces the Linux Driver Model's pci_module_init API. A 97 service driver should always call pcie_port_service_register at 98 module init. Note that after the service driver is loaded, calls 99 such as pci_enable_device(dev) and pci_set_master(dev) are no longer
··· 93 94 int pcie_port_service_register(struct pcie_port_service_driver *new) 95 96 + This API replaces the Linux Driver Model's pci_register_driver API. A 97 service driver should always call pcie_port_service_register at 98 module init. Note that after the service driver is loaded, calls 99 such as pci_enable_device(dev) and pci_set_master(dev) are no longer
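A minimal registration sketch may help here. Only pcie_port_service_register() and its struct argument type come from the HOWTO text above; the field and function names below are illustrative assumptions, not the exact 2.6.29 definitions.

/* Hypothetical port service driver skeleton; register at module
 * init, as the HOWTO instructs. */
static struct pcie_port_service_driver my_service_driver = {
	.name = "my_pcie_service",
	/* .probe / .remove callbacks omitted in this sketch */
};

static int __init my_service_init(void)
{
	return pcie_port_service_register(&my_service_driver);
}
module_init(my_service_init);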
+2 -4
Documentation/cgroups/cgroups.txt
··· 252 When a task is moved from one cgroup to another, it gets a new 253 css_set pointer - if there's an already existing css_set with the 254 desired collection of cgroups then that group is reused, else a new 255 - css_set is allocated. Note that the current implementation uses a 256 - linear search to locate an appropriate existing css_set, so isn't 257 - very efficient. A future version will use a hash table for better 258 - performance. 259 260 To allow access from a cgroup to the css_sets (and hence tasks) 261 that comprise it, a set of cg_cgroup_link objects form a lattice;
··· 252 When a task is moved from one cgroup to another, it gets a new 253 css_set pointer - if there's an already existing css_set with the 254 desired collection of cgroups then that group is reused, else a new 255 + css_set is allocated. The appropriate existing css_set is located by 256 + looking into a hash table. 257 258 To allow access from a cgroup to the css_sets (and hence tasks) 259 that comprise it, a set of cg_cgroup_link objects form a lattice;
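The rewritten paragraph is terse about what "looking into a hash table" means; a hedged sketch of that lookup shape (names and fields are illustrative, not the kernel's exact code):

/* Illustrative only: reuse an existing css_set if one with the
 * desired subsystem-state pointers already sits in its hash bucket,
 * otherwise let the caller allocate a new one. */
static struct css_set *find_css_set_sketch(struct cgroup_subsys_state *templ[])
{
	struct hlist_head *bucket = css_set_hash(templ); /* assumed hash helper */
	struct css_set *cg;
	struct hlist_node *node;

	hlist_for_each_entry(cg, node, bucket, hlist)
		if (!memcmp(cg->subsys, templ, sizeof(cg->subsys)))
			return cg;	/* reuse the matching css_set */
	return NULL;			/* caller allocates a new css_set */
}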
+36 -27
Documentation/cgroups/cpusets.txt
··· 142 - in fork and exit, to attach and detach a task from its cpuset. 143 - in sched_setaffinity, to mask the requested CPUs by what's 144 allowed in that tasks cpuset. 145 - - in sched.c migrate_all_tasks(), to keep migrating tasks within 146 the CPUs allowed by their cpuset, if possible. 147 - in the mbind and set_mempolicy system calls, to mask the requested 148 Memory Nodes by what's allowed in that tasks cpuset. ··· 175 - mem_exclusive flag: is memory placement exclusive? 176 - mem_hardwall flag: is memory allocation hardwalled 177 - memory_pressure: measure of how much paging pressure in cpuset 178 179 In addition, the root cpuset only has the following file: 180 - memory_pressure_enabled flag: compute memory_pressure? ··· 256 257 This is useful both on tightly managed systems running a wide mix of 258 submitted jobs, which may choose to terminate or re-prioritize jobs that 259 - are trying to use more memory than allowed on the nodes assigned them, 260 and with tightly coupled, long running, massively parallel scientific 261 computing jobs that will dramatically fail to meet required performance 262 goals if they start to use more memory than allowed to them. ··· 382 The algorithmic cost of load balancing and its impact on key shared 383 kernel data structures such as the task list increases more than 384 linearly with the number of CPUs being balanced. So the scheduler 385 - has support to partition the systems CPUs into a number of sched 386 domains such that it only load balances within each sched domain. 387 Each sched domain covers some subset of the CPUs in the system; 388 no two sched domains overlap; some CPUs might not be in any sched ··· 489 The internal kernel cpuset to scheduler interface passes from the 490 cpuset code to the scheduler code a partition of the load balanced 491 CPUs in the system. This partition is a set of subsets (represented 492 - as an array of cpumask_t) of CPUs, pairwise disjoint, that cover all 493 - the CPUs that must be load balanced. 494 495 - Whenever the 'sched_load_balance' flag changes, or CPUs come or go 496 - from a cpuset with this flag enabled, or a cpuset with this flag 497 - enabled is removed, the cpuset code builds a new such partition and 498 - passes it to the scheduler sched domain setup code, to have the sched 499 - domains rebuilt as necessary. 500 501 This partition exactly defines what sched domains the scheduler should 502 - setup - one sched domain for each element (cpumask_t) in the partition. 503 504 The scheduler remembers the currently active sched domain partitions. 505 When the scheduler routine partition_sched_domains() is invoked from ··· 568 requests 0 and others are -1 then 0 is used. 569 570 Note that modifying this file will have both good and bad effects, 571 - and whether it is acceptable or not will be depend on your situation. 572 Don't modify this file if you are not sure. 573 574 If your situation is: ··· 609 610 If a cpuset has its 'cpus' modified, then each task in that cpuset 611 will have its allowed CPU placement changed immediately. Similarly, 612 - if a tasks pid is written to a cpusets 'tasks' file, in either its 613 - current cpuset or another cpuset, then its allowed CPU placement is 614 - changed immediately. If such a task had been bound to some subset 615 - of its cpuset using the sched_setaffinity() call, the task will be 616 - allowed to run on any CPU allowed in its new cpuset, negating the 617 - affect of the prior sched_setaffinity() call. 
618 619 In summary, the memory placement of a task whose cpuset is changed is 620 updated by the kernel, on the next allocation of a page for that task, 621 - but the processor placement is not updated, until that tasks pid is 622 - rewritten to the 'tasks' file of its cpuset. This is done to avoid 623 - impacting the scheduler code in the kernel with a check for changes 624 - in a tasks processor placement. 625 626 Normally, once a page is allocated (given a physical page 627 of main memory) then that page stays on whatever node it ··· 686 # The next line should display '/Charlie' 687 cat /proc/self/cpuset 688 689 - In the future, a C library interface to cpusets will likely be 690 - available. For now, the only way to query or modify cpusets is 691 - via the cpuset file system, using the various cd, mkdir, echo, cat, 692 - rmdir commands from the shell, or their equivalent from C. 693 694 The sched_setaffinity calls can also be done at the shell prompt using 695 SGI's runon or Robert Love's taskset. The mbind and set_mempolicy ··· 765 766 is equivalent to 767 768 - mount -t cgroup -ocpuset X /dev/cpuset 769 echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent 770 771 2.2 Adding/removing cpus
··· 142 - in fork and exit, to attach and detach a task from its cpuset. 143 - in sched_setaffinity, to mask the requested CPUs by what's 144 allowed in that tasks cpuset. 145 + - in sched.c migrate_live_tasks(), to keep migrating tasks within 146 the CPUs allowed by their cpuset, if possible. 147 - in the mbind and set_mempolicy system calls, to mask the requested 148 Memory Nodes by what's allowed in that tasks cpuset. ··· 175 - mem_exclusive flag: is memory placement exclusive? 176 - mem_hardwall flag: is memory allocation hardwalled 177 - memory_pressure: measure of how much paging pressure in cpuset 178 + - memory_spread_page flag: if set, spread page cache evenly on allowed nodes 179 + - memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes 180 + - sched_load_balance flag: if set, load balance within CPUs on that cpuset 181 + - sched_relax_domain_level: the searching range when migrating tasks 182 183 In addition, the root cpuset only has the following file: 184 - memory_pressure_enabled flag: compute memory_pressure? ··· 252 253 This is useful both on tightly managed systems running a wide mix of 254 submitted jobs, which may choose to terminate or re-prioritize jobs that 255 + are trying to use more memory than allowed on the nodes assigned to them, 256 and with tightly coupled, long running, massively parallel scientific 257 computing jobs that will dramatically fail to meet required performance 258 goals if they start to use more memory than allowed to them. ··· 378 The algorithmic cost of load balancing and its impact on key shared 379 kernel data structures such as the task list increases more than 380 linearly with the number of CPUs being balanced. So the scheduler 381 + has support to partition the systems CPUs into a number of sched 382 domains such that it only load balances within each sched domain. 383 Each sched domain covers some subset of the CPUs in the system; 384 no two sched domains overlap; some CPUs might not be in any sched ··· 485 The internal kernel cpuset to scheduler interface passes from the 486 cpuset code to the scheduler code a partition of the load balanced 487 CPUs in the system. This partition is a set of subsets (represented 488 + as an array of struct cpumask) of CPUs, pairwise disjoint, that cover 489 + all the CPUs that must be load balanced. 490 491 + The cpuset code builds a new such partition and passes it to the 492 + scheduler sched domain setup code, to have the sched domains rebuilt 493 + as necessary, whenever: 494 + - the 'sched_load_balance' flag of a cpuset with non-empty CPUs changes, 495 + - or CPUs come or go from a cpuset with this flag enabled, 496 + - or 'sched_relax_domain_level' value of a cpuset with non-empty CPUs 497 + and with this flag enabled changes, 498 + - or a cpuset with non-empty CPUs and with this flag enabled is removed, 499 + - or a cpu is offlined/onlined. 500 501 This partition exactly defines what sched domains the scheduler should 502 + setup - one sched domain for each element (struct cpumask) in the 503 + partition. 504 505 The scheduler remembers the currently active sched domain partitions. 506 When the scheduler routine partition_sched_domains() is invoked from ··· 559 requests 0 and others are -1 then 0 is used. 560 561 Note that modifying this file will have both good and bad effects, 562 + and whether it is acceptable or not depends on your situation. 563 Don't modify this file if you are not sure. 
564 565 If your situation is: ··· 600 601 If a cpuset has its 'cpus' modified, then each task in that cpuset 602 will have its allowed CPU placement changed immediately. Similarly, 603 + if a tasks pid is written to another cpusets 'tasks' file, then its 604 + allowed CPU placement is changed immediately. If such a task had been 605 + bound to some subset of its cpuset using the sched_setaffinity() call, 606 + the task will be allowed to run on any CPU allowed in its new cpuset, 607 + negating the effect of the prior sched_setaffinity() call. 608 609 In summary, the memory placement of a task whose cpuset is changed is 610 updated by the kernel, on the next allocation of a page for that task, 611 + and the processor placement is updated immediately. 612 613 Normally, once a page is allocated (given a physical page 614 of main memory) then that page stays on whatever node it ··· 681 # The next line should display '/Charlie' 682 cat /proc/self/cpuset 683 684 + There are ways to query or modify cpusets: 685 + - via the cpuset file system directly, using the various cd, mkdir, echo, 686 + cat, rmdir commands from the shell, or their equivalent from C. 687 + - via the C library libcpuset. 688 + - via the C library libcgroup. 689 + (http://sourceforge.net/projects/libcg/) 690 + - via the python application cset. 691 + (http://developer.novell.com/wiki/index.php/Cpuset) 692 693 The sched_setaffinity calls can also be done at the shell prompt using 694 SGI's runon or Robert Love's taskset. The mbind and set_mempolicy ··· 756 757 is equivalent to 758 759 + mount -t cgroup -ocpuset,noprefix X /dev/cpuset 760 echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent 761 762 2.2 Adding/removing cpus
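To complement the shell session quoted above, attaching a task from C is just a write of its pid into the cpuset's 'tasks' file. A minimal sketch, assuming the hierarchy is mounted at /dev/cpuset and a /Charlie cpuset exists:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* attach the calling task to the /Charlie cpuset */
	FILE *f = fopen("/dev/cpuset/Charlie/tasks", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%d", getpid());
	return fclose(f) ? 1 : 0;
}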
+101
Documentation/hwmon/hpfall.c
···
··· 1 + /* Disk protection for HP machines. 2 + * 3 + * Copyright 2008 Eric Piel 4 + * Copyright 2009 Pavel Machek <pavel@suse.cz> 5 + * 6 + * GPLv2. 7 + */ 8 + 9 + #include <stdio.h> 10 + #include <stdlib.h> 11 + #include <unistd.h> 12 + #include <fcntl.h> 13 + #include <sys/stat.h> 14 + #include <sys/types.h> 15 + #include <string.h> 16 + #include <stdint.h> 17 + #include <errno.h> 18 + #include <signal.h> 19 + 20 + void write_int(char *path, int i) 21 + { 22 + char buf[1024]; 23 + int fd = open(path, O_RDWR); 24 + if (fd < 0) { 25 + perror("open"); 26 + exit(1); 27 + } 28 + sprintf(buf, "%d", i); 29 + if (write(fd, buf, strlen(buf)) != strlen(buf)) { 30 + perror("write"); 31 + exit(1); 32 + } 33 + close(fd); 34 + } 35 + 36 + void set_led(int on) 37 + { 38 + write_int("/sys/class/leds/hp::hddprotect/brightness", on); 39 + } 40 + 41 + void protect(int seconds) 42 + { 43 + write_int("/sys/block/sda/device/unload_heads", seconds*1000); 44 + } 45 + 46 + int on_ac(void) 47 + { 48 + return 1; /* assume AC present; TODO: read /sys/class/power_supply/AC0/online */ 49 + } 50 + 51 + int lid_open(void) 52 + { 53 + return 1; /* assume lid open; TODO: read /proc/acpi/button/lid/LID/state */ 54 + } 55 + 56 + void ignore_me(void) 57 + { 58 + protect(0); 59 + set_led(0); 60 + 61 + } 62 + 63 + int main(int argc, char *argv[]) 64 + { 65 + int fd, ret; 66 + 67 + fd = open("/dev/freefall", O_RDONLY); 68 + if (fd < 0) { 69 + perror("open"); 70 + return EXIT_FAILURE; 71 + } 72 + 73 + signal(SIGALRM, ignore_me); 74 + 75 + for (;;) { 76 + unsigned char count; 77 + 78 + ret = read(fd, &count, sizeof(count)); 79 + alarm(0); 80 + if ((ret == -1) && (errno == EINTR)) { 81 + /* Alarm expired, time to unpark the heads */ 82 + continue; 83 + } 84 + 85 + if (ret != sizeof(count)) { 86 + perror("read"); 87 + break; 88 + } 89 + 90 + protect(21); 91 + set_led(1); 92 + if (1 || on_ac() || lid_open()) { 93 + alarm(2); 94 + } else { 95 + alarm(20); 96 + } 97 + } 98 + 99 + close(fd); 100 + return EXIT_SUCCESS; 101 + }
+8
Documentation/hwmon/lis3lv02d
··· 33 This driver also provides an absolute input class device, allowing 34 the laptop to act as a pinball machine-esque joystick. 35 36 Axes orientation 37 ---------------- 38
··· 33 This driver also provides an absolute input class device, allowing 34 the laptop to act as a pinball machine-esque joystick. 35 36 + Another feature of the driver is a misc device called "freefall" that 37 + acts similarly to /dev/rtc and reacts to free-fall interrupts received 38 + from the device. It supports blocking operations, poll/select and 39 + fasync operation modes. You must read 1 byte from the device. The 40 + result is the number of free-fall interrupts since the last successful 41 + read (or 255 if the number of interrupts would not fit). 42 + 43 + 44 Axes orientation 45 ---------------- 46
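The new paragraph mentions poll/select support alongside the blocking reads that hpfall.c uses; a hedged sketch of the poll-driven variant (the device path and 1-byte read are from the text, the rest is illustrative):

#include <poll.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char count;
	int fd = open("/dev/freefall", O_RDONLY);
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* wait for a free-fall interrupt, then read the event count */
	if (poll(&pfd, 1, -1) == 1 && read(fd, &count, 1) == 1)
		printf("%u free-fall interrupts since last read\n", count);
	close(fd);
	return 0;
}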
+2 -4
Documentation/tracers/mmiotrace.txt
··· 78 events were lost, the trace is incomplete. You should enlarge the buffers and 79 try again. Buffers are enlarged by first seeing how large the current buffers 80 are: 81 - $ cat /debug/tracing/trace_entries 82 gives you a number. Approximately double this number and write it back, for 83 instance: 84 - $ echo 0 > /debug/tracing/tracing_enabled 85 - $ echo 128000 > /debug/tracing/trace_entries 86 - $ echo 1 > /debug/tracing/tracing_enabled 87 Then start again from the top. 88 89 If you are doing a trace for a driver project, e.g. Nouveau, you should also
··· 78 events were lost, the trace is incomplete. You should enlarge the buffers and 79 try again. Buffers are enlarged by first seeing how large the current buffers 80 are: 81 + $ cat /debug/tracing/buffer_size_kb 82 gives you a number. Approximately double this number and write it back, for 83 instance: 84 + $ echo 128000 > /debug/tracing/buffer_size_kb 85 Then start again from the top. 86 87 If you are doing a trace for a driver project, e.g. Nouveau, you should also
+18 -11
MAINTAINERS
··· 692 L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) 693 S: Maintained 694 695 ARPD SUPPORT 696 P: Jonathan Layes 697 L: netdev@vger.kernel.org ··· 1912 S: Maintained 1913 1914 HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER 1915 - P: Robert Love 1916 - M: rlove@rlove.org 1917 - M: linux-kernel@vger.kernel.org 1918 - W: http://www.kernel.org/pub/linux/kernel/people/rml/hdaps/ 1919 S: Maintained 1920 1921 GSPCA FINEPIX SUBDRIVER ··· 2008 2009 HIBERNATION (aka Software Suspend, aka swsusp) 2010 P: Pavel Machek 2011 - M: pavel@suse.cz 2012 P: Rafael J. Wysocki 2013 M: rjw@sisk.pl 2014 L: linux-pm@lists.linux-foundation.org ··· 3334 M: jeremy@xensource.com 3335 P: Chris Wright 3336 M: chrisw@sous-sol.org 3337 - P: Zachary Amsden 3338 - M: zach@vmware.com 3339 P: Rusty Russell 3340 M: rusty@rustcorp.com.au 3341 L: virtualization@lists.osdl.org ··· 4179 P: Len Brown 4180 M: len.brown@intel.com 4181 P: Pavel Machek 4182 - M: pavel@suse.cz 4183 P: Rafael J. Wysocki 4184 M: rjw@sisk.pl 4185 L: linux-pm@lists.linux-foundation.org ··· 4931 S: Maintained 4932 4933 ZR36067 VIDEO FOR LINUX DRIVER 4934 - P: Ronald Bultje 4935 - M: rbultje@ronald.bitfreak.net 4936 L: mjpeg-users@lists.sourceforge.net 4937 W: http://mjpeg.sourceforge.net/driver-zoran/ 4938 - S: Maintained 4939 4940 ZS DECSTATION Z85C30 SERIAL DRIVER 4941 P: Maciej W. Rozycki
··· 692 L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) 693 S: Maintained 694 695 + ARM/NUVOTON W90X900 ARM ARCHITECTURE 696 + P: Wan ZongShun 697 + M: mcuos.com@gmail.com 698 + L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only) 699 + W: http://www.mcuos.com 700 + S: Maintained 701 + 702 ARPD SUPPORT 703 P: Jonathan Layes 704 L: netdev@vger.kernel.org ··· 1905 S: Maintained 1906 1907 HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER 1908 + P: Frank Seidel 1909 + M: frank@f-seidel.de 1910 + L: lm-sensors@lm-sensors.org 1911 + W: http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/ 1912 S: Maintained 1913 1914 GSPCA FINEPIX SUBDRIVER ··· 2001 2002 HIBERNATION (aka Software Suspend, aka swsusp) 2003 P: Pavel Machek 2004 + M: pavel@ucw.cz 2005 P: Rafael J. Wysocki 2006 M: rjw@sisk.pl 2007 L: linux-pm@lists.linux-foundation.org ··· 3327 M: jeremy@xensource.com 3328 P: Chris Wright 3329 M: chrisw@sous-sol.org 3330 + P: Alok Kataria 3331 + M: akataria@vmware.com 3332 P: Rusty Russell 3333 M: rusty@rustcorp.com.au 3334 L: virtualization@lists.osdl.org ··· 4172 P: Len Brown 4173 M: len.brown@intel.com 4174 P: Pavel Machek 4175 + M: pavel@ucw.cz 4176 P: Rafael J. Wysocki 4177 M: rjw@sisk.pl 4178 L: linux-pm@lists.linux-foundation.org ··· 4924 S: Maintained 4925 4926 ZR36067 VIDEO FOR LINUX DRIVER 4927 L: mjpeg-users@lists.sourceforge.net 4928 + L: linux-media@vger.kernel.org 4929 W: http://mjpeg.sourceforge.net/driver-zoran/ 4930 + T: Mercurial http://linuxtv.org/hg/v4l-dvb 4931 + S: Odd Fixes 4932 4933 ZS DECSTATION Z85C30 SERIAL DRIVER 4934 P: Maciej W. Rozycki
+1 -1
Makefile
··· 389 # output directory. 390 outputmakefile: 391 ifneq ($(KBUILD_SRC),) 392 $(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile \ 393 $(srctree) $(objtree) $(VERSION) $(PATCHLEVEL) 394 endif ··· 947 mkdir -p include2; \ 948 ln -fsn $(srctree)/include/asm-$(SRCARCH) include2/asm; \ 949 fi 950 - ln -fsn $(srctree) source 951 endif 952 953 # prepare2 creates a makefile if using a separate output directory
··· 389 # output directory. 390 outputmakefile: 391 ifneq ($(KBUILD_SRC),) 392 + $(Q)ln -fsn $(srctree) source 393 $(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile \ 394 $(srctree) $(objtree) $(VERSION) $(PATCHLEVEL) 395 endif ··· 946 mkdir -p include2; \ 947 ln -fsn $(srctree)/include/asm-$(SRCARCH) include2/asm; \ 948 fi 949 endif 950 951 # prepare2 creates a makefile if using a separate output directory
+1 -1
README
··· 188 values to random values. 189 190 You can find more information on using the Linux kernel config tools 191 - in Documentation/kbuild/make-configs.txt. 192 193 NOTES on "make config": 194 - having unnecessary drivers will make the kernel bigger, and can
··· 188 values to random values. 189 190 You can find more information on using the Linux kernel config tools 191 + in Documentation/kbuild/kconfig.txt. 192 193 NOTES on "make config": 194 - having unnecessary drivers will make the kernel bigger, and can
+4 -4
arch/alpha/kernel/process.c
··· 93 if (cpuid != boot_cpuid) { 94 flags |= 0x00040000UL; /* "remain halted" */ 95 *pflags = flags; 96 - cpu_clear(cpuid, cpu_present_map); 97 - cpu_clear(cpuid, cpu_possible_map); 98 halt(); 99 } 100 #endif ··· 120 121 #ifdef CONFIG_SMP 122 /* Wait for the secondaries to halt. */ 123 - cpu_clear(boot_cpuid, cpu_present_map); 124 - cpu_clear(boot_cpuid, cpu_possible_map); 125 while (cpus_weight(cpu_present_map)) 126 barrier(); 127 #endif
··· 93 if (cpuid != boot_cpuid) { 94 flags |= 0x00040000UL; /* "remain halted" */ 95 *pflags = flags; 96 + set_cpu_present(cpuid, false); 97 + set_cpu_possible(cpuid, false); 98 halt(); 99 } 100 #endif ··· 120 121 #ifdef CONFIG_SMP 122 /* Wait for the secondaries to halt. */ 123 + set_cpu_present(boot_cpuid, false); 124 + set_cpu_possible(boot_cpuid, false); 125 while (cpus_weight(cpu_present_map)) 126 barrier(); 127 #endif
+6 -6
arch/alpha/kernel/smp.c
··· 120 smp_callin(void) 121 { 122 int cpuid = hard_smp_processor_id(); 123 - cpumask_t mask = cpu_online_map; 124 125 - if (cpu_test_and_set(cpuid, mask)) { 126 printk("??, cpu 0x%x already present??\n", cpuid); 127 BUG(); 128 } 129 130 /* Turn on machine checks. */ 131 wrmces(7); ··· 436 ((char *)cpubase + i*hwrpb->processor_size); 437 if ((cpu->flags & 0x1cc) == 0x1cc) { 438 smp_num_probed++; 439 - cpu_set(i, cpu_possible_map); 440 - cpu_set(i, cpu_present_map); 441 cpu->pal_revision = boot_cpu_palrev; 442 } 443 ··· 470 471 /* Nothing to do on a UP box, or when told not to. */ 472 if (smp_num_probed == 1 || max_cpus == 0) { 473 - cpu_possible_map = cpumask_of_cpu(boot_cpuid); 474 - cpu_present_map = cpumask_of_cpu(boot_cpuid); 475 printk(KERN_INFO "SMP mode deactivated.\n"); 476 return; 477 }
··· 120 smp_callin(void) 121 { 122 int cpuid = hard_smp_processor_id(); 123 124 + if (cpu_online(cpuid)) { 125 printk("??, cpu 0x%x already present??\n", cpuid); 126 BUG(); 127 } 128 + set_cpu_online(cpuid, true); 129 130 /* Turn on machine checks. */ 131 wrmces(7); ··· 436 ((char *)cpubase + i*hwrpb->processor_size); 437 if ((cpu->flags & 0x1cc) == 0x1cc) { 438 smp_num_probed++; 439 + set_cpu_possible(i, true); 440 + set_cpu_present(i, true); 441 cpu->pal_revision = boot_cpu_palrev; 442 } 443 ··· 470 471 /* Nothing to do on a UP box, or when told not to. */ 472 if (smp_num_probed == 1 || max_cpus == 0) { 473 + init_cpu_possible(cpumask_of(boot_cpuid)); 474 + init_cpu_present(cpumask_of(boot_cpuid)); 475 printk(KERN_INFO "SMP mode deactivated.\n"); 476 return; 477 }
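Both alpha hunks apply the same migration from direct cpumask writes to accessor calls; a condensed sketch of the pattern (the helper functions are hypothetical, the accessors are the ones visible in the diffs):

/* Hypothetical helpers showing the accessor-based style the hunks
 * adopt; set_cpu_possible/set_cpu_present/init_cpu_possible and
 * cpumask_of() are the calls used above. */
static void __init register_probed_cpu(unsigned int cpu)
{
	set_cpu_possible(cpu, true);	/* was: cpu_set(cpu, cpu_possible_map) */
	set_cpu_present(cpu, true);	/* was: cpu_set(cpu, cpu_present_map) */
}

static void __init fall_back_to_boot_cpu(unsigned int boot_cpu)
{
	/* was: cpu_possible_map = cpumask_of_cpu(boot_cpu) */
	init_cpu_possible(cpumask_of(boot_cpu));
	init_cpu_present(cpumask_of(boot_cpu));
}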
+1 -1
arch/arm/configs/at91sam9260ek_defconfig
··· 608 # Watchdog Device Drivers 609 # 610 # CONFIG_SOFT_WATCHDOG is not set 611 - CONFIG_AT91SAM9_WATCHDOG=y 612 613 # 614 # USB-based Watchdog Cards
··· 608 # Watchdog Device Drivers 609 # 610 # CONFIG_SOFT_WATCHDOG is not set 611 + CONFIG_AT91SAM9X_WATCHDOG=y 612 613 # 614 # USB-based Watchdog Cards
+1 -1
arch/arm/configs/at91sam9261ek_defconfig
··· 700 # Watchdog Device Drivers 701 # 702 # CONFIG_SOFT_WATCHDOG is not set 703 - CONFIG_AT91SAM9_WATCHDOG=y 704 705 # 706 # USB-based Watchdog Cards
··· 700 # Watchdog Device Drivers 701 # 702 # CONFIG_SOFT_WATCHDOG is not set 703 + CONFIG_AT91SAM9X_WATCHDOG=y 704 705 # 706 # USB-based Watchdog Cards
+1 -1
arch/arm/configs/at91sam9263ek_defconfig
··· 710 # Watchdog Device Drivers 711 # 712 # CONFIG_SOFT_WATCHDOG is not set 713 - CONFIG_AT91SAM9_WATCHDOG=y 714 715 # 716 # USB-based Watchdog Cards
··· 710 # Watchdog Device Drivers 711 # 712 # CONFIG_SOFT_WATCHDOG is not set 713 + CONFIG_AT91SAM9X_WATCHDOG=y 714 715 # 716 # USB-based Watchdog Cards
+1 -1
arch/arm/configs/at91sam9rlek_defconfig
··· 606 # Watchdog Device Drivers 607 # 608 # CONFIG_SOFT_WATCHDOG is not set 609 - CONFIG_AT91SAM9_WATCHDOG=y 610 611 # 612 # Sonics Silicon Backplane
··· 606 # Watchdog Device Drivers 607 # 608 # CONFIG_SOFT_WATCHDOG is not set 609 + CONFIG_AT91SAM9X_WATCHDOG=y 610 611 # 612 # Sonics Silicon Backplane
+1 -1
arch/arm/configs/qil-a9260_defconfig
··· 727 # Watchdog Device Drivers 728 # 729 # CONFIG_SOFT_WATCHDOG is not set 730 - # CONFIG_AT91SAM9_WATCHDOG is not set 731 732 # 733 # USB-based Watchdog Cards
··· 727 # Watchdog Device Drivers 728 # 729 # CONFIG_SOFT_WATCHDOG is not set 730 + # CONFIG_AT91SAM9X_WATCHDOG is not set 731 732 # 733 # USB-based Watchdog Cards
+2 -2
arch/arm/kernel/elf.c
··· 74 */ 75 int arm_elf_read_implies_exec(const struct elf32_hdr *x, int executable_stack) 76 { 77 - if (executable_stack != EXSTACK_ENABLE_X) 78 return 1; 79 - if (cpu_architecture() <= CPU_ARCH_ARMv6) 80 return 1; 81 return 0; 82 }
··· 74 */ 75 int arm_elf_read_implies_exec(const struct elf32_hdr *x, int executable_stack) 76 { 77 + if (executable_stack != EXSTACK_DISABLE_X) 78 return 1; 79 + if (cpu_architecture() < CPU_ARCH_ARMv6) 80 return 1; 81 return 0; 82 }
+1 -1
arch/arm/mach-at91/at91cap9_devices.c
··· 697 * Watchdog 698 * -------------------------------------------------------------------- */ 699 700 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 701 static struct platform_device at91cap9_wdt_device = { 702 .name = "at91_wdt", 703 .id = -1,
··· 697 * Watchdog 698 * -------------------------------------------------------------------- */ 699 700 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 701 static struct platform_device at91cap9_wdt_device = { 702 .name = "at91_wdt", 703 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9260_devices.c
··· 643 * Watchdog 644 * -------------------------------------------------------------------- */ 645 646 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 647 static struct platform_device at91sam9260_wdt_device = { 648 .name = "at91_wdt", 649 .id = -1,
··· 643 * Watchdog 644 * -------------------------------------------------------------------- */ 645 646 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 647 static struct platform_device at91sam9260_wdt_device = { 648 .name = "at91_wdt", 649 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9261_devices.c
··· 621 * Watchdog 622 * -------------------------------------------------------------------- */ 623 624 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 625 static struct platform_device at91sam9261_wdt_device = { 626 .name = "at91_wdt", 627 .id = -1,
··· 621 * Watchdog 622 * -------------------------------------------------------------------- */ 623 624 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 625 static struct platform_device at91sam9261_wdt_device = { 626 .name = "at91_wdt", 627 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9263_devices.c
··· 854 * Watchdog 855 * -------------------------------------------------------------------- */ 856 857 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 858 static struct platform_device at91sam9263_wdt_device = { 859 .name = "at91_wdt", 860 .id = -1,
··· 854 * Watchdog 855 * -------------------------------------------------------------------- */ 856 857 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 858 static struct platform_device at91sam9263_wdt_device = { 859 .name = "at91_wdt", 860 .id = -1,
+1 -1
arch/arm/mach-at91/at91sam9rl_devices.c
··· 609 * Watchdog 610 * -------------------------------------------------------------------- */ 611 612 - #if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE) 613 static struct platform_device at91sam9rl_wdt_device = { 614 .name = "at91_wdt", 615 .id = -1,
··· 609 * Watchdog 610 * -------------------------------------------------------------------- */ 611 612 + #if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE) 613 static struct platform_device at91sam9rl_wdt_device = { 614 .name = "at91_wdt", 615 .id = -1,
+10 -5
arch/arm/mach-at91/gpio.c
··· 490 491 /*--------------------------------------------------------------------------*/ 492 493 - /* This lock class tells lockdep that GPIO irqs are in a different 494 * category than their parents, so it won't report false recursion. 495 */ 496 static struct lock_class_key gpio_lock_class; ··· 509 prev = this, this++) { 510 unsigned id = this->id; 511 unsigned i; 512 - 513 - /* enable PIO controller's clock */ 514 - clk_enable(this->clock); 515 516 __raw_writel(~0, this->regbase + PIO_IDR); 517 ··· 554 data->chipbase = PIN_BASE + i * 32; 555 data->regbase = data->offset + (void __iomem *)AT91_VA_BASE_SYS; 556 557 - /* AT91SAM9263_ID_PIOCDE groups PIOC, PIOD, PIOE */ 558 if (last && last->id == data->id) 559 last->next = data; 560 }
··· 490 491 /*--------------------------------------------------------------------------*/ 492 493 + /* 494 + * This lock class tells lockdep that GPIO irqs are in a different 495 * category than their parents, so it won't report false recursion. 496 */ 497 static struct lock_class_key gpio_lock_class; ··· 508 prev = this, this++) { 509 unsigned id = this->id; 510 unsigned i; 511 512 __raw_writel(~0, this->regbase + PIO_IDR); 513 ··· 556 data->chipbase = PIN_BASE + i * 32; 557 data->regbase = data->offset + (void __iomem *)AT91_VA_BASE_SYS; 558 559 + /* enable PIO controller's clock */ 560 + clk_enable(data->clock); 561 + 562 + /* 563 + * Some processors share peripheral ID between multiple GPIO banks. 564 + * SAM9263 (PIOC, PIOD, PIOE) 565 + * CAP9 (PIOA, PIOB, PIOC, PIOD) 566 + */ 567 if (last && last->id == data->id) 568 last->next = data; 569 }
+1
arch/arm/mach-at91/include/mach/board.h
··· 93 u8 enable_pin; /* chip enable */ 94 u8 det_pin; /* card detect */ 95 u8 rdy_pin; /* ready/busy */ 96 u8 ale; /* address line number connected to ALE */ 97 u8 cle; /* address line number connected to CLE */ 98 u8 bus_width_16; /* buswidth is 16 bit */
··· 93 u8 enable_pin; /* chip enable */ 94 u8 det_pin; /* card detect */ 95 u8 rdy_pin; /* ready/busy */ 96 + u8 rdy_pin_active_low; /* rdy_pin value is inverted */ 97 u8 ale; /* address line number connected to ALE */ 98 u8 cle; /* address line number connected to CLE */ 99 u8 bus_width_16; /* buswidth is 16 bit */
-3
arch/arm/mach-ep93xx/include/mach/gesbc9312.h
··· 1 - /* 2 - * arch/arm/mach-ep93xx/include/mach/gesbc9312.h 3 - */
···
-1
arch/arm/mach-ep93xx/include/mach/hardware.h
··· 10 11 #include "platform.h" 12 13 - #include "gesbc9312.h" 14 #include "ts72xx.h" 15 16 #endif
··· 10 11 #include "platform.h" 12 13 #include "ts72xx.h" 14 15 #endif
+1 -1
arch/arm/mach-kirkwood/irq.c
··· 42 writel(0, GPIO_EDGE_CAUSE(32)); 43 44 for (i = IRQ_KIRKWOOD_GPIO_START; i < NR_IRQS; i++) { 45 - set_irq_chip(i, &orion_gpio_irq_level_chip); 46 set_irq_handler(i, handle_level_irq); 47 irq_desc[i].status |= IRQ_LEVEL; 48 set_irq_flags(i, IRQF_VALID);
··· 42 writel(0, GPIO_EDGE_CAUSE(32)); 43 44 for (i = IRQ_KIRKWOOD_GPIO_START; i < NR_IRQS; i++) { 45 + set_irq_chip(i, &orion_gpio_irq_chip); 46 set_irq_handler(i, handle_level_irq); 47 irq_desc[i].status |= IRQ_LEVEL; 48 set_irq_flags(i, IRQF_VALID);
+1 -1
arch/arm/mach-mv78xx0/irq.c
··· 40 writel(0, GPIO_EDGE_CAUSE(0)); 41 42 for (i = IRQ_MV78XX0_GPIO_START; i < NR_IRQS; i++) { 43 - set_irq_chip(i, &orion_gpio_irq_level_chip); 44 set_irq_handler(i, handle_level_irq); 45 irq_desc[i].status |= IRQ_LEVEL; 46 set_irq_flags(i, IRQF_VALID);
··· 40 writel(0, GPIO_EDGE_CAUSE(0)); 41 42 for (i = IRQ_MV78XX0_GPIO_START; i < NR_IRQS; i++) { 43 + set_irq_chip(i, &orion_gpio_irq_chip); 44 set_irq_handler(i, handle_level_irq); 45 irq_desc[i].status |= IRQ_LEVEL; 46 set_irq_flags(i, IRQF_VALID);
+8 -8
arch/arm/mach-omap2/clock.c
··· 565 * 566 * Given a struct clk of a rate-selectable clksel clock, and a clock divisor, 567 * find the corresponding register field value. The return register value is 568 - * the value before left-shifting. Returns 0xffffffff on error 569 */ 570 u32 omap2_divisor_to_clksel(struct clk *clk, u32 div) 571 { ··· 577 578 clks = omap2_get_clksel_by_parent(clk, clk->parent); 579 if (clks == NULL) 580 - return 0; 581 582 for (clkr = clks->rates; clkr->div; clkr++) { 583 if ((clkr->flags & cpu_mask) && (clkr->div == div)) ··· 588 printk(KERN_ERR "clock: Could not find divisor %d for " 589 "clock %s parent %s\n", div, clk->name, 590 clk->parent->name); 591 - return 0; 592 } 593 594 return clkr->val; ··· 708 return 0; 709 710 for (clkr = clks->rates; clkr->div; clkr++) { 711 - if (clkr->flags & (cpu_mask | DEFAULT_RATE)) 712 break; /* Found the default rate for this platform */ 713 } 714 ··· 746 return -EINVAL; 747 748 if (clk->usecount > 0) 749 - _omap2_clk_disable(clk); 750 751 /* Set new source value (previous dividers if any in effect) */ 752 reg_val = __raw_readl(src_addr) & ~field_mask; ··· 759 wmb(); 760 } 761 762 - if (clk->usecount > 0) 763 - _omap2_clk_enable(clk); 764 - 765 clk->parent = new_parent; 766 767 /* CLKSEL clocks follow their parents' rates, divided by a divisor */ 768 clk->rate = new_parent->rate;
··· 565 * 566 * Given a struct clk of a rate-selectable clksel clock, and a clock divisor, 567 * find the corresponding register field value. The return register value is 568 + * the value before left-shifting. Returns ~0 on error 569 */ 570 u32 omap2_divisor_to_clksel(struct clk *clk, u32 div) 571 { ··· 577 578 clks = omap2_get_clksel_by_parent(clk, clk->parent); 579 if (clks == NULL) 580 + return ~0; 581 582 for (clkr = clks->rates; clkr->div; clkr++) { 583 if ((clkr->flags & cpu_mask) && (clkr->div == div)) ··· 588 printk(KERN_ERR "clock: Could not find divisor %d for " 589 "clock %s parent %s\n", div, clk->name, 590 clk->parent->name); 591 + return ~0; 592 } 593 594 return clkr->val; ··· 708 return 0; 709 710 for (clkr = clks->rates; clkr->div; clkr++) { 711 + if (clkr->flags & cpu_mask && clkr->flags & DEFAULT_RATE) 712 break; /* Found the default rate for this platform */ 713 } 714 ··· 746 return -EINVAL; 747 748 if (clk->usecount > 0) 749 + omap2_clk_disable(clk); 750 751 /* Set new source value (previous dividers if any in effect) */ 752 reg_val = __raw_readl(src_addr) & ~field_mask; ··· 759 wmb(); 760 } 761 762 clk->parent = new_parent; 763 + 764 + if (clk->usecount > 0) 765 + omap2_clk_enable(clk); 766 767 /* CLKSEL clocks follow their parents' rates, divided by a divisor */ 768 clk->rate = new_parent->rate;
+1 -1
arch/arm/mach-orion5x/irq.c
··· 44 * User can use set_type() if he wants to use edge types handlers. 45 */ 46 for (i = IRQ_ORION5X_GPIO_START; i < NR_IRQS; i++) { 47 - set_irq_chip(i, &orion_gpio_irq_level_chip); 48 set_irq_handler(i, handle_level_irq); 49 irq_desc[i].status |= IRQ_LEVEL; 50 set_irq_flags(i, IRQF_VALID);
··· 44 * User can use set_type() if he wants to use edge types handlers. 45 */ 46 for (i = IRQ_ORION5X_GPIO_START; i < NR_IRQS; i++) { 47 + set_irq_chip(i, &orion_gpio_irq_chip); 48 set_irq_handler(i, handle_level_irq); 49 irq_desc[i].status |= IRQ_LEVEL; 50 set_irq_flags(i, IRQF_VALID);
+2 -1
arch/arm/mm/mmu.c
··· 693 * Check whether this memory bank would entirely overlap 694 * the vmalloc area. 695 */ 696 - if (__va(bank->start) >= VMALLOC_MIN) { 697 printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx " 698 "(vmalloc region overlap).\n", 699 bank->start, bank->start + bank->size - 1);
··· 693 * Check whether this memory bank would entirely overlap 694 * the vmalloc area. 695 */ 696 + if (__va(bank->start) >= VMALLOC_MIN || 697 + __va(bank->start) < PAGE_OFFSET) { 698 printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx " 699 "(vmalloc region overlap).\n", 700 bank->start, bank->start + bank->size - 1);
+26 -49
arch/arm/plat-orion/gpio.c
··· 265 * polarity LEVEL mask 266 * 267 ****************************************************************************/ 268 - static void gpio_irq_edge_ack(u32 irq) 269 - { 270 - int pin = irq_to_gpio(irq); 271 272 - writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin)); 273 } 274 275 - static void gpio_irq_edge_mask(u32 irq) 276 { 277 int pin = irq_to_gpio(irq); 278 - u32 u; 279 - 280 - u = readl(GPIO_EDGE_MASK(pin)); 281 u &= ~(1 << (pin & 31)); 282 - writel(u, GPIO_EDGE_MASK(pin)); 283 } 284 285 - static void gpio_irq_edge_unmask(u32 irq) 286 { 287 int pin = irq_to_gpio(irq); 288 - u32 u; 289 - 290 - u = readl(GPIO_EDGE_MASK(pin)); 291 u |= 1 << (pin & 31); 292 - writel(u, GPIO_EDGE_MASK(pin)); 293 - } 294 - 295 - static void gpio_irq_level_mask(u32 irq) 296 - { 297 - int pin = irq_to_gpio(irq); 298 - u32 u; 299 - 300 - u = readl(GPIO_LEVEL_MASK(pin)); 301 - u &= ~(1 << (pin & 31)); 302 - writel(u, GPIO_LEVEL_MASK(pin)); 303 - } 304 - 305 - static void gpio_irq_level_unmask(u32 irq) 306 - { 307 - int pin = irq_to_gpio(irq); 308 - u32 u; 309 - 310 - u = readl(GPIO_LEVEL_MASK(pin)); 311 - u |= 1 << (pin & 31); 312 - writel(u, GPIO_LEVEL_MASK(pin)); 313 } 314 315 static int gpio_irq_set_type(u32 irq, u32 type) ··· 316 * Set edge/level type. 317 */ 318 if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) { 319 - desc->chip = &orion_gpio_irq_edge_chip; 320 } else if (type & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) { 321 - desc->chip = &orion_gpio_irq_level_chip; 322 } else { 323 printk(KERN_ERR "failed to set irq=%d (type=%d)\n", irq, type); 324 return -EINVAL; ··· 356 return 0; 357 } 358 359 - struct irq_chip orion_gpio_irq_edge_chip = { 360 - .name = "orion_gpio_irq_edge", 361 - .ack = gpio_irq_edge_ack, 362 - .mask = gpio_irq_edge_mask, 363 - .unmask = gpio_irq_edge_unmask, 364 - .set_type = gpio_irq_set_type, 365 - }; 366 - 367 - struct irq_chip orion_gpio_irq_level_chip = { 368 - .name = "orion_gpio_irq_level", 369 - .mask = gpio_irq_level_mask, 370 - .mask_ack = gpio_irq_level_mask, 371 - .unmask = gpio_irq_level_unmask, 372 .set_type = gpio_irq_set_type, 373 }; 374
··· 265 * polarity LEVEL mask 266 * 267 ****************************************************************************/ 268 269 + static void gpio_irq_ack(u32 irq) 270 + { 271 + int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK; 272 + if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) { 273 + int pin = irq_to_gpio(irq); 274 + writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin)); 275 + } 276 } 277 278 + static void gpio_irq_mask(u32 irq) 279 { 280 int pin = irq_to_gpio(irq); 281 + int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK; 282 + u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ? 283 + GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin); 284 + u32 u = readl(reg); 285 u &= ~(1 << (pin & 31)); 286 + writel(u, reg); 287 } 288 289 + static void gpio_irq_unmask(u32 irq) 290 { 291 int pin = irq_to_gpio(irq); 292 + int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK; 293 + u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ? 294 + GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin); 295 + u32 u = readl(reg); 296 u |= 1 << (pin & 31); 297 + writel(u, reg); 298 } 299 300 static int gpio_irq_set_type(u32 irq, u32 type) ··· 331 * Set edge/level type. 332 */ 333 if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) { 334 + desc->handle_irq = handle_edge_irq; 335 } else if (type & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) { 336 + desc->handle_irq = handle_level_irq; 337 } else { 338 printk(KERN_ERR "failed to set irq=%d (type=%d)\n", irq, type); 339 return -EINVAL; ··· 371 return 0; 372 } 373 374 + struct irq_chip orion_gpio_irq_chip = { 375 + .name = "orion_gpio", 376 + .ack = gpio_irq_ack, 377 + .mask = gpio_irq_mask, 378 + .unmask = gpio_irq_unmask, 379 .set_type = gpio_irq_set_type, 380 }; 381
+1 -2
arch/arm/plat-orion/include/plat/gpio.h
··· 31 /* 32 * GPIO interrupt handling. 33 */ 34 - extern struct irq_chip orion_gpio_irq_edge_chip; 35 - extern struct irq_chip orion_gpio_irq_level_chip; 36 void orion_gpio_irq_handler(int irqoff); 37 38
··· 31 /* 32 * GPIO interrupt handling. 33 */ 34 + extern struct irq_chip orion_gpio_irq_chip; 35 void orion_gpio_irq_handler(int irqoff); 36 37
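Taken together, the orion hunks replace the separate edge and level irq_chips with a single orion_gpio_irq_chip that selects the EDGE or LEVEL register from the irq's configured type and lets gpio_irq_set_type() swap the flow handler. A hedged sketch of platform-side wiring under the unified chip (the wrapper function is hypothetical; the genirq calls are those used in the irq.c hunks above):

/* Hypothetical platform-side setup under the unified chip. */
static void __init example_init_gpio_irq(unsigned int irq)
{
	set_irq_chip(irq, &orion_gpio_irq_chip);
	set_irq_handler(irq, handle_level_irq);	/* default flow */
	irq_desc[irq].status |= IRQ_LEVEL;
	set_irq_flags(irq, IRQF_VALID);
	/* a driver may later flip the pin to edge handling: */
	set_irq_type(irq, IRQ_TYPE_EDGE_RISING);
}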
+1
arch/avr32/mach-at32ap/include/mach/board.h
··· 116 int enable_pin; /* chip enable */ 117 int det_pin; /* card detect */ 118 int rdy_pin; /* ready/busy */ 119 u8 ale; /* address line number connected to ALE */ 120 u8 cle; /* address line number connected to CLE */ 121 u8 bus_width_16; /* buswidth is 16 bit */
··· 116 int enable_pin; /* chip enable */ 117 int det_pin; /* card detect */ 118 int rdy_pin; /* ready/busy */ 119 + u8 rdy_pin_active_low; /* rdy_pin value is inverted */ 120 u8 ale; /* address line number connected to ALE */ 121 u8 cle; /* address line number connected to CLE */ 122 u8 bus_width_16; /* buswidth is 16 bit */
+5 -2
arch/ia64/Kconfig
··· 221 222 config IA64_XEN_GUEST 223 bool "Xen guest" 224 depends on XEN 225 226 endchoice 227 ··· 483 default y if VIRTUAL_MEM_MAP 484 485 config HAVE_ARCH_EARLY_PFN_TO_NID 486 - def_bool y 487 - depends on NEED_MULTIPLE_NODES 488 489 config HAVE_ARCH_NODEDATA_EXTENSION 490 def_bool y
··· 221 222 config IA64_XEN_GUEST 223 bool "Xen guest" 224 + select SWIOTLB 225 depends on XEN 226 + help 227 + Build a kernel that runs in a Xen guest domain. At this moment 228 + only a 16KB page size is supported. 229 230 endchoice 231 ··· 479 default y if VIRTUAL_MEM_MAP 480 481 config HAVE_ARCH_EARLY_PFN_TO_NID 482 + def_bool NUMA && SPARSEMEM 483 484 config HAVE_ARCH_NODEDATA_EXTENSION 485 def_bool y
+1601
arch/ia64/configs/xen_domu_defconfig
···
··· 1 + # 2 + # Automatically generated make config: don't edit 3 + # Linux kernel version: 2.6.29-rc1 4 + # Fri Jan 16 11:49:59 2009 5 + # 6 + CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 7 + 8 + # 9 + # General setup 10 + # 11 + CONFIG_EXPERIMENTAL=y 12 + CONFIG_LOCK_KERNEL=y 13 + CONFIG_INIT_ENV_ARG_LIMIT=32 14 + CONFIG_LOCALVERSION="" 15 + CONFIG_LOCALVERSION_AUTO=y 16 + CONFIG_SWAP=y 17 + CONFIG_SYSVIPC=y 18 + CONFIG_SYSVIPC_SYSCTL=y 19 + CONFIG_POSIX_MQUEUE=y 20 + # CONFIG_BSD_PROCESS_ACCT is not set 21 + # CONFIG_TASKSTATS is not set 22 + # CONFIG_AUDIT is not set 23 + CONFIG_IKCONFIG=y 24 + CONFIG_IKCONFIG_PROC=y 25 + CONFIG_LOG_BUF_SHIFT=20 26 + CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y 27 + # CONFIG_GROUP_SCHED is not set 28 + 29 + # 30 + # Control Group support 31 + # 32 + # CONFIG_CGROUPS is not set 33 + CONFIG_SYSFS_DEPRECATED=y 34 + CONFIG_SYSFS_DEPRECATED_V2=y 35 + # CONFIG_RELAY is not set 36 + CONFIG_NAMESPACES=y 37 + # CONFIG_UTS_NS is not set 38 + # CONFIG_IPC_NS is not set 39 + # CONFIG_USER_NS is not set 40 + # CONFIG_PID_NS is not set 41 + CONFIG_BLK_DEV_INITRD=y 42 + CONFIG_INITRAMFS_SOURCE="" 43 + CONFIG_CC_OPTIMIZE_FOR_SIZE=y 44 + CONFIG_SYSCTL=y 45 + # CONFIG_EMBEDDED is not set 46 + CONFIG_SYSCTL_SYSCALL=y 47 + CONFIG_KALLSYMS=y 48 + CONFIG_KALLSYMS_ALL=y 49 + CONFIG_KALLSYMS_STRIP_GENERATED=y 50 + # CONFIG_KALLSYMS_EXTRA_PASS is not set 51 + CONFIG_HOTPLUG=y 52 + CONFIG_PRINTK=y 53 + CONFIG_BUG=y 54 + CONFIG_ELF_CORE=y 55 + CONFIG_COMPAT_BRK=y 56 + CONFIG_BASE_FULL=y 57 + CONFIG_FUTEX=y 58 + CONFIG_ANON_INODES=y 59 + CONFIG_EPOLL=y 60 + CONFIG_SIGNALFD=y 61 + CONFIG_TIMERFD=y 62 + CONFIG_EVENTFD=y 63 + CONFIG_SHMEM=y 64 + CONFIG_AIO=y 65 + CONFIG_VM_EVENT_COUNTERS=y 66 + CONFIG_PCI_QUIRKS=y 67 + CONFIG_SLUB_DEBUG=y 68 + # CONFIG_SLAB is not set 69 + CONFIG_SLUB=y 70 + # CONFIG_SLOB is not set 71 + # CONFIG_PROFILING is not set 72 + CONFIG_HAVE_OPROFILE=y 73 + # CONFIG_KPROBES is not set 74 + CONFIG_HAVE_KPROBES=y 75 + CONFIG_HAVE_KRETPROBES=y 76 + CONFIG_HAVE_ARCH_TRACEHOOK=y 77 + CONFIG_HAVE_DMA_ATTRS=y 78 + CONFIG_USE_GENERIC_SMP_HELPERS=y 79 + # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 80 + CONFIG_SLABINFO=y 81 + CONFIG_RT_MUTEXES=y 82 + CONFIG_BASE_SMALL=0 83 + CONFIG_MODULES=y 84 + # CONFIG_MODULE_FORCE_LOAD is not set 85 + CONFIG_MODULE_UNLOAD=y 86 + # CONFIG_MODULE_FORCE_UNLOAD is not set 87 + CONFIG_MODVERSIONS=y 88 + CONFIG_MODULE_SRCVERSION_ALL=y 89 + CONFIG_STOP_MACHINE=y 90 + CONFIG_BLOCK=y 91 + # CONFIG_BLK_DEV_IO_TRACE is not set 92 + # CONFIG_BLK_DEV_BSG is not set 93 + # CONFIG_BLK_DEV_INTEGRITY is not set 94 + 95 + # 96 + # IO Schedulers 97 + # 98 + CONFIG_IOSCHED_NOOP=y 99 + CONFIG_IOSCHED_AS=y 100 + CONFIG_IOSCHED_DEADLINE=y 101 + CONFIG_IOSCHED_CFQ=y 102 + CONFIG_DEFAULT_AS=y 103 + # CONFIG_DEFAULT_DEADLINE is not set 104 + # CONFIG_DEFAULT_CFQ is not set 105 + # CONFIG_DEFAULT_NOOP is not set 106 + CONFIG_DEFAULT_IOSCHED="anticipatory" 107 + CONFIG_CLASSIC_RCU=y 108 + # CONFIG_TREE_RCU is not set 109 + # CONFIG_PREEMPT_RCU is not set 110 + # CONFIG_TREE_RCU_TRACE is not set 111 + # CONFIG_PREEMPT_RCU_TRACE is not set 112 + CONFIG_FREEZER=y 113 + 114 + # 115 + # Processor type and features 116 + # 117 + CONFIG_IA64=y 118 + CONFIG_64BIT=y 119 + CONFIG_ZONE_DMA=y 120 + CONFIG_QUICKLIST=y 121 + CONFIG_MMU=y 122 + CONFIG_SWIOTLB=y 123 + CONFIG_IOMMU_HELPER=y 124 + CONFIG_RWSEM_XCHGADD_ALGORITHM=y 125 + CONFIG_HUGETLB_PAGE_SIZE_VARIABLE=y 126 + CONFIG_GENERIC_FIND_NEXT_BIT=y 127 + CONFIG_GENERIC_CALIBRATE_DELAY=y 128 + 
CONFIG_GENERIC_TIME=y 129 + CONFIG_GENERIC_TIME_VSYSCALL=y 130 + CONFIG_HAVE_SETUP_PER_CPU_AREA=y 131 + CONFIG_DMI=y 132 + CONFIG_EFI=y 133 + CONFIG_GENERIC_IOMAP=y 134 + CONFIG_SCHED_OMIT_FRAME_POINTER=y 135 + CONFIG_AUDIT_ARCH=y 136 + CONFIG_PARAVIRT_GUEST=y 137 + CONFIG_PARAVIRT=y 138 + CONFIG_XEN=y 139 + CONFIG_XEN_XENCOMM=y 140 + CONFIG_NO_IDLE_HZ=y 141 + # CONFIG_IA64_GENERIC is not set 142 + # CONFIG_IA64_DIG is not set 143 + # CONFIG_IA64_DIG_VTD is not set 144 + # CONFIG_IA64_HP_ZX1 is not set 145 + # CONFIG_IA64_HP_ZX1_SWIOTLB is not set 146 + # CONFIG_IA64_SGI_SN2 is not set 147 + # CONFIG_IA64_SGI_UV is not set 148 + # CONFIG_IA64_HP_SIM is not set 149 + CONFIG_IA64_XEN_GUEST=y 150 + # CONFIG_ITANIUM is not set 151 + CONFIG_MCKINLEY=y 152 + # CONFIG_IA64_PAGE_SIZE_4KB is not set 153 + # CONFIG_IA64_PAGE_SIZE_8KB is not set 154 + CONFIG_IA64_PAGE_SIZE_16KB=y 155 + # CONFIG_IA64_PAGE_SIZE_64KB is not set 156 + CONFIG_PGTABLE_3=y 157 + # CONFIG_PGTABLE_4 is not set 158 + CONFIG_HZ=250 159 + # CONFIG_HZ_100 is not set 160 + CONFIG_HZ_250=y 161 + # CONFIG_HZ_300 is not set 162 + # CONFIG_HZ_1000 is not set 163 + # CONFIG_SCHED_HRTICK is not set 164 + CONFIG_IA64_L1_CACHE_SHIFT=7 165 + CONFIG_IA64_CYCLONE=y 166 + CONFIG_IOSAPIC=y 167 + CONFIG_FORCE_MAX_ZONEORDER=17 168 + # CONFIG_VIRT_CPU_ACCOUNTING is not set 169 + CONFIG_SMP=y 170 + CONFIG_NR_CPUS=16 171 + CONFIG_HOTPLUG_CPU=y 172 + CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y 173 + CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y 174 + # CONFIG_SCHED_SMT is not set 175 + CONFIG_PERMIT_BSP_REMOVE=y 176 + CONFIG_FORCE_CPEI_RETARGET=y 177 + CONFIG_PREEMPT_NONE=y 178 + # CONFIG_PREEMPT_VOLUNTARY is not set 179 + # CONFIG_PREEMPT is not set 180 + CONFIG_SELECT_MEMORY_MODEL=y 181 + CONFIG_FLATMEM_MANUAL=y 182 + # CONFIG_DISCONTIGMEM_MANUAL is not set 183 + # CONFIG_SPARSEMEM_MANUAL is not set 184 + CONFIG_FLATMEM=y 185 + CONFIG_FLAT_NODE_MEM_MAP=y 186 + CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y 187 + CONFIG_PAGEFLAGS_EXTENDED=y 188 + CONFIG_SPLIT_PTLOCK_CPUS=4 189 + CONFIG_MIGRATION=y 190 + CONFIG_PHYS_ADDR_T_64BIT=y 191 + CONFIG_ZONE_DMA_FLAG=1 192 + CONFIG_BOUNCE=y 193 + CONFIG_NR_QUICK=1 194 + CONFIG_VIRT_TO_BUS=y 195 + CONFIG_UNEVICTABLE_LRU=y 196 + CONFIG_ARCH_SELECT_MEMORY_MODEL=y 197 + CONFIG_ARCH_DISCONTIGMEM_ENABLE=y 198 + CONFIG_ARCH_FLATMEM_ENABLE=y 199 + CONFIG_ARCH_SPARSEMEM_ENABLE=y 200 + CONFIG_ARCH_POPULATES_NODE_MAP=y 201 + CONFIG_VIRTUAL_MEM_MAP=y 202 + CONFIG_HOLES_IN_ZONE=y 203 + # CONFIG_IA32_SUPPORT is not set 204 + # CONFIG_COMPAT_FOR_U64_ALIGNMENT is not set 205 + CONFIG_IA64_MCA_RECOVERY=y 206 + CONFIG_PERFMON=y 207 + CONFIG_IA64_PALINFO=y 208 + # CONFIG_IA64_MC_ERR_INJECT is not set 209 + # CONFIG_IA64_ESI is not set 210 + # CONFIG_IA64_HP_AML_NFW is not set 211 + CONFIG_KEXEC=y 212 + # CONFIG_CRASH_DUMP is not set 213 + 214 + # 215 + # Firmware Drivers 216 + # 217 + # CONFIG_FIRMWARE_MEMMAP is not set 218 + CONFIG_EFI_VARS=y 219 + CONFIG_EFI_PCDP=y 220 + CONFIG_DMIID=y 221 + CONFIG_BINFMT_ELF=y 222 + # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 223 + # CONFIG_HAVE_AOUT is not set 224 + CONFIG_BINFMT_MISC=m 225 + 226 + # 227 + # Power management and ACPI options 228 + # 229 + CONFIG_PM=y 230 + # CONFIG_PM_DEBUG is not set 231 + CONFIG_PM_SLEEP=y 232 + CONFIG_SUSPEND=y 233 + CONFIG_SUSPEND_FREEZER=y 234 + CONFIG_ACPI=y 235 + CONFIG_ACPI_SLEEP=y 236 + CONFIG_ACPI_PROCFS=y 237 + CONFIG_ACPI_PROCFS_POWER=y 238 + CONFIG_ACPI_SYSFS_POWER=y 239 + CONFIG_ACPI_PROC_EVENT=y 240 + CONFIG_ACPI_BUTTON=m 241 + CONFIG_ACPI_FAN=m 242 + # 
CONFIG_ACPI_DOCK is not set 243 + CONFIG_ACPI_PROCESSOR=m 244 + CONFIG_ACPI_HOTPLUG_CPU=y 245 + CONFIG_ACPI_THERMAL=m 246 + # CONFIG_ACPI_CUSTOM_DSDT is not set 247 + CONFIG_ACPI_BLACKLIST_YEAR=0 248 + # CONFIG_ACPI_DEBUG is not set 249 + # CONFIG_ACPI_PCI_SLOT is not set 250 + CONFIG_ACPI_SYSTEM=y 251 + CONFIG_ACPI_CONTAINER=m 252 + 253 + # 254 + # CPU Frequency scaling 255 + # 256 + # CONFIG_CPU_FREQ is not set 257 + 258 + # 259 + # Bus options (PCI, PCMCIA) 260 + # 261 + CONFIG_PCI=y 262 + CONFIG_PCI_DOMAINS=y 263 + CONFIG_PCI_SYSCALL=y 264 + # CONFIG_PCIEPORTBUS is not set 265 + CONFIG_ARCH_SUPPORTS_MSI=y 266 + # CONFIG_PCI_MSI is not set 267 + CONFIG_PCI_LEGACY=y 268 + # CONFIG_PCI_DEBUG is not set 269 + # CONFIG_PCI_STUB is not set 270 + CONFIG_HOTPLUG_PCI=m 271 + # CONFIG_HOTPLUG_PCI_FAKE is not set 272 + CONFIG_HOTPLUG_PCI_ACPI=m 273 + # CONFIG_HOTPLUG_PCI_ACPI_IBM is not set 274 + # CONFIG_HOTPLUG_PCI_CPCI is not set 275 + # CONFIG_HOTPLUG_PCI_SHPC is not set 276 + # CONFIG_PCCARD is not set 277 + CONFIG_NET=y 278 + 279 + # 280 + # Networking options 281 + # 282 + # CONFIG_NET_NS is not set 283 + CONFIG_COMPAT_NET_DEV_OPS=y 284 + CONFIG_PACKET=y 285 + # CONFIG_PACKET_MMAP is not set 286 + CONFIG_UNIX=y 287 + CONFIG_XFRM=y 288 + # CONFIG_XFRM_USER is not set 289 + # CONFIG_XFRM_SUB_POLICY is not set 290 + # CONFIG_XFRM_MIGRATE is not set 291 + # CONFIG_XFRM_STATISTICS is not set 292 + # CONFIG_NET_KEY is not set 293 + CONFIG_INET=y 294 + CONFIG_IP_MULTICAST=y 295 + # CONFIG_IP_ADVANCED_ROUTER is not set 296 + CONFIG_IP_FIB_HASH=y 297 + # CONFIG_IP_PNP is not set 298 + # CONFIG_NET_IPIP is not set 299 + # CONFIG_NET_IPGRE is not set 300 + # CONFIG_IP_MROUTE is not set 301 + CONFIG_ARPD=y 302 + CONFIG_SYN_COOKIES=y 303 + # CONFIG_INET_AH is not set 304 + # CONFIG_INET_ESP is not set 305 + # CONFIG_INET_IPCOMP is not set 306 + # CONFIG_INET_XFRM_TUNNEL is not set 307 + # CONFIG_INET_TUNNEL is not set 308 + CONFIG_INET_XFRM_MODE_TRANSPORT=y 309 + CONFIG_INET_XFRM_MODE_TUNNEL=y 310 + CONFIG_INET_XFRM_MODE_BEET=y 311 + # CONFIG_INET_LRO is not set 312 + CONFIG_INET_DIAG=y 313 + CONFIG_INET_TCP_DIAG=y 314 + # CONFIG_TCP_CONG_ADVANCED is not set 315 + CONFIG_TCP_CONG_CUBIC=y 316 + CONFIG_DEFAULT_TCP_CONG="cubic" 317 + # CONFIG_TCP_MD5SIG is not set 318 + # CONFIG_IPV6 is not set 319 + # CONFIG_NETWORK_SECMARK is not set 320 + # CONFIG_NETFILTER is not set 321 + # CONFIG_IP_DCCP is not set 322 + # CONFIG_IP_SCTP is not set 323 + # CONFIG_TIPC is not set 324 + # CONFIG_ATM is not set 325 + # CONFIG_BRIDGE is not set 326 + # CONFIG_NET_DSA is not set 327 + # CONFIG_VLAN_8021Q is not set 328 + # CONFIG_DECNET is not set 329 + # CONFIG_LLC2 is not set 330 + # CONFIG_IPX is not set 331 + # CONFIG_ATALK is not set 332 + # CONFIG_X25 is not set 333 + # CONFIG_LAPB is not set 334 + # CONFIG_ECONET is not set 335 + # CONFIG_WAN_ROUTER is not set 336 + # CONFIG_NET_SCHED is not set 337 + # CONFIG_DCB is not set 338 + 339 + # 340 + # Network testing 341 + # 342 + # CONFIG_NET_PKTGEN is not set 343 + # CONFIG_HAMRADIO is not set 344 + # CONFIG_CAN is not set 345 + # CONFIG_IRDA is not set 346 + # CONFIG_BT is not set 347 + # CONFIG_AF_RXRPC is not set 348 + # CONFIG_PHONET is not set 349 + # CONFIG_WIRELESS is not set 350 + # CONFIG_WIMAX is not set 351 + # CONFIG_RFKILL is not set 352 + # CONFIG_NET_9P is not set 353 + 354 + # 355 + # Device Drivers 356 + # 357 + 358 + # 359 + # Generic Driver Options 360 + # 361 + CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 362 + CONFIG_STANDALONE=y 363 + 
CONFIG_PREVENT_FIRMWARE_BUILD=y 364 + CONFIG_FW_LOADER=y 365 + CONFIG_FIRMWARE_IN_KERNEL=y 366 + CONFIG_EXTRA_FIRMWARE="" 367 + # CONFIG_DEBUG_DRIVER is not set 368 + # CONFIG_DEBUG_DEVRES is not set 369 + # CONFIG_SYS_HYPERVISOR is not set 370 + # CONFIG_CONNECTOR is not set 371 + # CONFIG_MTD is not set 372 + # CONFIG_PARPORT is not set 373 + CONFIG_PNP=y 374 + CONFIG_PNP_DEBUG_MESSAGES=y 375 + 376 + # 377 + # Protocols 378 + # 379 + CONFIG_PNPACPI=y 380 + CONFIG_BLK_DEV=y 381 + # CONFIG_BLK_CPQ_DA is not set 382 + # CONFIG_BLK_CPQ_CISS_DA is not set 383 + # CONFIG_BLK_DEV_DAC960 is not set 384 + # CONFIG_BLK_DEV_UMEM is not set 385 + # CONFIG_BLK_DEV_COW_COMMON is not set 386 + CONFIG_BLK_DEV_LOOP=m 387 + CONFIG_BLK_DEV_CRYPTOLOOP=m 388 + CONFIG_BLK_DEV_NBD=m 389 + # CONFIG_BLK_DEV_SX8 is not set 390 + # CONFIG_BLK_DEV_UB is not set 391 + CONFIG_BLK_DEV_RAM=y 392 + CONFIG_BLK_DEV_RAM_COUNT=16 393 + CONFIG_BLK_DEV_RAM_SIZE=4096 394 + # CONFIG_BLK_DEV_XIP is not set 395 + # CONFIG_CDROM_PKTCDVD is not set 396 + # CONFIG_ATA_OVER_ETH is not set 397 + CONFIG_XEN_BLKDEV_FRONTEND=y 398 + # CONFIG_BLK_DEV_HD is not set 399 + CONFIG_MISC_DEVICES=y 400 + # CONFIG_PHANTOM is not set 401 + # CONFIG_EEPROM_93CX6 is not set 402 + # CONFIG_SGI_IOC4 is not set 403 + # CONFIG_TIFM_CORE is not set 404 + # CONFIG_ICS932S401 is not set 405 + # CONFIG_ENCLOSURE_SERVICES is not set 406 + # CONFIG_HP_ILO is not set 407 + # CONFIG_C2PORT is not set 408 + CONFIG_HAVE_IDE=y 409 + CONFIG_IDE=y 410 + 411 + # 412 + # Please see Documentation/ide/ide.txt for help/info on IDE drives 413 + # 414 + CONFIG_IDE_TIMINGS=y 415 + CONFIG_IDE_ATAPI=y 416 + # CONFIG_BLK_DEV_IDE_SATA is not set 417 + CONFIG_IDE_GD=y 418 + CONFIG_IDE_GD_ATA=y 419 + # CONFIG_IDE_GD_ATAPI is not set 420 + CONFIG_BLK_DEV_IDECD=y 421 + CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y 422 + # CONFIG_BLK_DEV_IDETAPE is not set 423 + # CONFIG_BLK_DEV_IDEACPI is not set 424 + # CONFIG_IDE_TASK_IOCTL is not set 425 + CONFIG_IDE_PROC_FS=y 426 + 427 + # 428 + # IDE chipset support/bugfixes 429 + # 430 + # CONFIG_IDE_GENERIC is not set 431 + # CONFIG_BLK_DEV_PLATFORM is not set 432 + # CONFIG_BLK_DEV_IDEPNP is not set 433 + CONFIG_BLK_DEV_IDEDMA_SFF=y 434 + 435 + # 436 + # PCI IDE chipsets support 437 + # 438 + CONFIG_BLK_DEV_IDEPCI=y 439 + CONFIG_IDEPCI_PCIBUS_ORDER=y 440 + # CONFIG_BLK_DEV_OFFBOARD is not set 441 + CONFIG_BLK_DEV_GENERIC=y 442 + # CONFIG_BLK_DEV_OPTI621 is not set 443 + CONFIG_BLK_DEV_IDEDMA_PCI=y 444 + # CONFIG_BLK_DEV_AEC62XX is not set 445 + # CONFIG_BLK_DEV_ALI15X3 is not set 446 + # CONFIG_BLK_DEV_AMD74XX is not set 447 + CONFIG_BLK_DEV_CMD64X=y 448 + # CONFIG_BLK_DEV_TRIFLEX is not set 449 + # CONFIG_BLK_DEV_CS5520 is not set 450 + # CONFIG_BLK_DEV_CS5530 is not set 451 + # CONFIG_BLK_DEV_HPT366 is not set 452 + # CONFIG_BLK_DEV_JMICRON is not set 453 + # CONFIG_BLK_DEV_SC1200 is not set 454 + CONFIG_BLK_DEV_PIIX=y 455 + # CONFIG_BLK_DEV_IT8172 is not set 456 + # CONFIG_BLK_DEV_IT8213 is not set 457 + # CONFIG_BLK_DEV_IT821X is not set 458 + # CONFIG_BLK_DEV_NS87415 is not set 459 + # CONFIG_BLK_DEV_PDC202XX_OLD is not set 460 + # CONFIG_BLK_DEV_PDC202XX_NEW is not set 461 + # CONFIG_BLK_DEV_SVWKS is not set 462 + # CONFIG_BLK_DEV_SIIMAGE is not set 463 + # CONFIG_BLK_DEV_SLC90E66 is not set 464 + # CONFIG_BLK_DEV_TRM290 is not set 465 + # CONFIG_BLK_DEV_VIA82CXXX is not set 466 + # CONFIG_BLK_DEV_TC86C001 is not set 467 + CONFIG_BLK_DEV_IDEDMA=y 468 + 469 + # 470 + # SCSI device support 471 + # 472 + # CONFIG_RAID_ATTRS is not set 473 + 
CONFIG_SCSI=y
474 + CONFIG_SCSI_DMA=y
475 + # CONFIG_SCSI_TGT is not set
476 + CONFIG_SCSI_NETLINK=y
477 + CONFIG_SCSI_PROC_FS=y
478 +
479 + #
480 + # SCSI support type (disk, tape, CD-ROM)
481 + #
482 + CONFIG_BLK_DEV_SD=y
483 + CONFIG_CHR_DEV_ST=m
484 + # CONFIG_CHR_DEV_OSST is not set
485 + CONFIG_BLK_DEV_SR=m
486 + # CONFIG_BLK_DEV_SR_VENDOR is not set
487 + CONFIG_CHR_DEV_SG=m
488 + # CONFIG_CHR_DEV_SCH is not set
489 +
490 + #
491 + # Some SCSI devices (e.g. CD jukebox) support multiple LUNs
492 + #
493 + # CONFIG_SCSI_MULTI_LUN is not set
494 + # CONFIG_SCSI_CONSTANTS is not set
495 + # CONFIG_SCSI_LOGGING is not set
496 + # CONFIG_SCSI_SCAN_ASYNC is not set
497 + CONFIG_SCSI_WAIT_SCAN=m
498 +
499 + #
500 + # SCSI Transports
501 + #
502 + CONFIG_SCSI_SPI_ATTRS=y
503 + CONFIG_SCSI_FC_ATTRS=y
504 + # CONFIG_SCSI_ISCSI_ATTRS is not set
505 + # CONFIG_SCSI_SAS_LIBSAS is not set
506 + # CONFIG_SCSI_SRP_ATTRS is not set
507 + CONFIG_SCSI_LOWLEVEL=y
508 + # CONFIG_ISCSI_TCP is not set
509 + # CONFIG_SCSI_CXGB3_ISCSI is not set
510 + # CONFIG_BLK_DEV_3W_XXXX_RAID is not set
511 + # CONFIG_SCSI_3W_9XXX is not set
512 + # CONFIG_SCSI_ACARD is not set
513 + # CONFIG_SCSI_AACRAID is not set
514 + # CONFIG_SCSI_AIC7XXX is not set
515 + # CONFIG_SCSI_AIC7XXX_OLD is not set
516 + # CONFIG_SCSI_AIC79XX is not set
517 + # CONFIG_SCSI_AIC94XX is not set
518 + # CONFIG_SCSI_DPT_I2O is not set
519 + # CONFIG_SCSI_ADVANSYS is not set
520 + # CONFIG_SCSI_ARCMSR is not set
521 + # CONFIG_MEGARAID_NEWGEN is not set
522 + # CONFIG_MEGARAID_LEGACY is not set
523 + # CONFIG_MEGARAID_SAS is not set
524 + # CONFIG_SCSI_HPTIOP is not set
525 + # CONFIG_LIBFC is not set
526 + # CONFIG_FCOE is not set
527 + # CONFIG_SCSI_DMX3191D is not set
528 + # CONFIG_SCSI_FUTURE_DOMAIN is not set
529 + # CONFIG_SCSI_IPS is not set
530 + # CONFIG_SCSI_INITIO is not set
531 + # CONFIG_SCSI_INIA100 is not set
532 + # CONFIG_SCSI_MVSAS is not set
533 + # CONFIG_SCSI_STEX is not set
534 + CONFIG_SCSI_SYM53C8XX_2=y
535 + CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
536 + CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
537 + CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
538 + CONFIG_SCSI_SYM53C8XX_MMIO=y
539 + CONFIG_SCSI_QLOGIC_1280=y
540 + # CONFIG_SCSI_QLA_FC is not set
541 + # CONFIG_SCSI_QLA_ISCSI is not set
542 + # CONFIG_SCSI_LPFC is not set
543 + # CONFIG_SCSI_DC395x is not set
544 + # CONFIG_SCSI_DC390T is not set
545 + # CONFIG_SCSI_DEBUG is not set
546 + # CONFIG_SCSI_SRP is not set
547 + # CONFIG_SCSI_DH is not set
548 + # CONFIG_ATA is not set
549 + CONFIG_MD=y
550 + CONFIG_BLK_DEV_MD=m
551 + CONFIG_MD_LINEAR=m
552 + CONFIG_MD_RAID0=m
553 + CONFIG_MD_RAID1=m
554 + # CONFIG_MD_RAID10 is not set
555 + # CONFIG_MD_RAID456 is not set
556 + CONFIG_MD_MULTIPATH=m
557 + # CONFIG_MD_FAULTY is not set
558 + CONFIG_BLK_DEV_DM=m
559 + # CONFIG_DM_DEBUG is not set
560 + CONFIG_DM_CRYPT=m
561 + CONFIG_DM_SNAPSHOT=m
562 + CONFIG_DM_MIRROR=m
563 + CONFIG_DM_ZERO=m
564 + # CONFIG_DM_MULTIPATH is not set
565 + # CONFIG_DM_DELAY is not set
566 + # CONFIG_DM_UEVENT is not set
567 + CONFIG_FUSION=y
568 + CONFIG_FUSION_SPI=y
569 + CONFIG_FUSION_FC=y
570 + # CONFIG_FUSION_SAS is not set
571 + CONFIG_FUSION_MAX_SGE=128
572 + CONFIG_FUSION_CTL=y
573 + # CONFIG_FUSION_LOGGING is not set
574 +
575 + #
576 + # IEEE 1394 (FireWire) support
577 + #
578 +
579 + #
580 + # Enable only one of the two stacks, unless you know what you are doing
581 + #
582 + # CONFIG_FIREWIRE is not set
583 + # CONFIG_IEEE1394 is not set
584 + # CONFIG_I2O is not set
585 + CONFIG_NETDEVICES=y
586 + CONFIG_DUMMY=m
587 + # CONFIG_BONDING is not set
588 + # CONFIG_MACVLAN is not set
589 + # CONFIG_EQUALIZER is not set
590 + # CONFIG_TUN is not set
591 + # CONFIG_VETH is not set
592 + # CONFIG_NET_SB1000 is not set
593 + # CONFIG_ARCNET is not set
594 + CONFIG_PHYLIB=y
595 +
596 + #
597 + # MII PHY device drivers
598 + #
599 + # CONFIG_MARVELL_PHY is not set
600 + # CONFIG_DAVICOM_PHY is not set
601 + # CONFIG_QSEMI_PHY is not set
602 + # CONFIG_LXT_PHY is not set
603 + # CONFIG_CICADA_PHY is not set
604 + # CONFIG_VITESSE_PHY is not set
605 + # CONFIG_SMSC_PHY is not set
606 + # CONFIG_BROADCOM_PHY is not set
607 + # CONFIG_ICPLUS_PHY is not set
608 + # CONFIG_REALTEK_PHY is not set
609 + # CONFIG_NATIONAL_PHY is not set
610 + # CONFIG_STE10XP is not set
611 + # CONFIG_LSI_ET1011C_PHY is not set
612 + # CONFIG_FIXED_PHY is not set
613 + # CONFIG_MDIO_BITBANG is not set
614 + CONFIG_NET_ETHERNET=y
615 + CONFIG_MII=m
616 + # CONFIG_HAPPYMEAL is not set
617 + # CONFIG_SUNGEM is not set
618 + # CONFIG_CASSINI is not set
619 + # CONFIG_NET_VENDOR_3COM is not set
620 + CONFIG_NET_TULIP=y
621 + # CONFIG_DE2104X is not set
622 + CONFIG_TULIP=m
623 + # CONFIG_TULIP_MWI is not set
624 + # CONFIG_TULIP_MMIO is not set
625 + # CONFIG_TULIP_NAPI is not set
626 + # CONFIG_DE4X5 is not set
627 + # CONFIG_WINBOND_840 is not set
628 + # CONFIG_DM9102 is not set
629 + # CONFIG_ULI526X is not set
630 + # CONFIG_HP100 is not set
631 + # CONFIG_IBM_NEW_EMAC_ZMII is not set
632 + # CONFIG_IBM_NEW_EMAC_RGMII is not set
633 + # CONFIG_IBM_NEW_EMAC_TAH is not set
634 + # CONFIG_IBM_NEW_EMAC_EMAC4 is not set
635 + # CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set
636 + # CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set
637 + # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set
638 + CONFIG_NET_PCI=y
639 + # CONFIG_PCNET32 is not set
640 + # CONFIG_AMD8111_ETH is not set
641 + # CONFIG_ADAPTEC_STARFIRE is not set
642 + # CONFIG_B44 is not set
643 + # CONFIG_FORCEDETH is not set
644 + CONFIG_E100=m
645 + # CONFIG_FEALNX is not set
646 + # CONFIG_NATSEMI is not set
647 + # CONFIG_NE2K_PCI is not set
648 + # CONFIG_8139CP is not set
649 + # CONFIG_8139TOO is not set
650 + # CONFIG_R6040 is not set
651 + # CONFIG_SIS900 is not set
652 + # CONFIG_EPIC100 is not set
653 + # CONFIG_SMSC9420 is not set
654 + # CONFIG_SUNDANCE is not set
655 + # CONFIG_TLAN is not set
656 + # CONFIG_VIA_RHINE is not set
657 + # CONFIG_SC92031 is not set
658 + # CONFIG_ATL2 is not set
659 + CONFIG_NETDEV_1000=y
660 + # CONFIG_ACENIC is not set
661 + # CONFIG_DL2K is not set
662 + CONFIG_E1000=y
663 + # CONFIG_E1000E is not set
664 + # CONFIG_IP1000 is not set
665 + # CONFIG_IGB is not set
666 + # CONFIG_NS83820 is not set
667 + # CONFIG_HAMACHI is not set
668 + # CONFIG_YELLOWFIN is not set
669 + # CONFIG_R8169 is not set
670 + # CONFIG_SIS190 is not set
671 + # CONFIG_SKGE is not set
672 + # CONFIG_SKY2 is not set
673 + # CONFIG_VIA_VELOCITY is not set
674 + CONFIG_TIGON3=y
675 + # CONFIG_BNX2 is not set
676 + # CONFIG_QLA3XXX is not set
677 + # CONFIG_ATL1 is not set
678 + # CONFIG_ATL1E is not set
679 + # CONFIG_JME is not set
680 + CONFIG_NETDEV_10000=y
681 + # CONFIG_CHELSIO_T1 is not set
682 + CONFIG_CHELSIO_T3_DEPENDS=y
683 + # CONFIG_CHELSIO_T3 is not set
684 + # CONFIG_ENIC is not set
685 + # CONFIG_IXGBE is not set
686 + # CONFIG_IXGB is not set
687 + # CONFIG_S2IO is not set
688 + # CONFIG_MYRI10GE is not set
689 + # CONFIG_NETXEN_NIC is not set
690 + # CONFIG_NIU is not set
691 + # CONFIG_MLX4_EN is not set
692 + # CONFIG_MLX4_CORE is not set
693 + # CONFIG_TEHUTI is not set
694 + # CONFIG_BNX2X is not set
695 + # CONFIG_QLGE is not set
696 + # CONFIG_SFC is not set
697 + # CONFIG_TR is not set
698 +
699 + #
700 + # Wireless LAN
701 + #
702 + # CONFIG_WLAN_PRE80211 is not set
703 + # CONFIG_WLAN_80211 is not set
704 + # CONFIG_IWLWIFI_LEDS is not set
705 +
706 + #
707 + # Enable WiMAX (Networking options) to see the WiMAX drivers
708 + #
709 +
710 + #
711 + # USB Network Adapters
712 + #
713 + # CONFIG_USB_CATC is not set
714 + # CONFIG_USB_KAWETH is not set
715 + # CONFIG_USB_PEGASUS is not set
716 + # CONFIG_USB_RTL8150 is not set
717 + # CONFIG_USB_USBNET is not set
718 + # CONFIG_WAN is not set
719 + CONFIG_XEN_NETDEV_FRONTEND=y
720 + # CONFIG_FDDI is not set
721 + # CONFIG_HIPPI is not set
722 + # CONFIG_PPP is not set
723 + # CONFIG_SLIP is not set
724 + # CONFIG_NET_FC is not set
725 + CONFIG_NETCONSOLE=y
726 + # CONFIG_NETCONSOLE_DYNAMIC is not set
727 + CONFIG_NETPOLL=y
728 + # CONFIG_NETPOLL_TRAP is not set
729 + CONFIG_NET_POLL_CONTROLLER=y
730 + # CONFIG_ISDN is not set
731 + # CONFIG_PHONE is not set
732 +
733 + #
734 + # Input device support
735 + #
736 + CONFIG_INPUT=y
737 + # CONFIG_INPUT_FF_MEMLESS is not set
738 + # CONFIG_INPUT_POLLDEV is not set
739 +
740 + #
741 + # Userland interfaces
742 + #
743 + CONFIG_INPUT_MOUSEDEV=y
744 + CONFIG_INPUT_MOUSEDEV_PSAUX=y
745 + CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
746 + CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
747 + # CONFIG_INPUT_JOYDEV is not set
748 + # CONFIG_INPUT_EVDEV is not set
749 + # CONFIG_INPUT_EVBUG is not set
750 +
751 + #
752 + # Input Device Drivers
753 + #
754 + CONFIG_INPUT_KEYBOARD=y
755 + CONFIG_KEYBOARD_ATKBD=y
756 + # CONFIG_KEYBOARD_SUNKBD is not set
757 + # CONFIG_KEYBOARD_LKKBD is not set
758 + # CONFIG_KEYBOARD_XTKBD is not set
759 + # CONFIG_KEYBOARD_NEWTON is not set
760 + # CONFIG_KEYBOARD_STOWAWAY is not set
761 + CONFIG_INPUT_MOUSE=y
762 + CONFIG_MOUSE_PS2=y
763 + CONFIG_MOUSE_PS2_ALPS=y
764 + CONFIG_MOUSE_PS2_LOGIPS2PP=y
765 + CONFIG_MOUSE_PS2_SYNAPTICS=y
766 + CONFIG_MOUSE_PS2_LIFEBOOK=y
767 + CONFIG_MOUSE_PS2_TRACKPOINT=y
768 + # CONFIG_MOUSE_PS2_ELANTECH is not set
769 + # CONFIG_MOUSE_PS2_TOUCHKIT is not set
770 + # CONFIG_MOUSE_SERIAL is not set
771 + # CONFIG_MOUSE_APPLETOUCH is not set
772 + # CONFIG_MOUSE_BCM5974 is not set
773 + # CONFIG_MOUSE_VSXXXAA is not set
774 + # CONFIG_INPUT_JOYSTICK is not set
775 + # CONFIG_INPUT_TABLET is not set
776 + # CONFIG_INPUT_TOUCHSCREEN is not set
777 + # CONFIG_INPUT_MISC is not set
778 +
779 + #
780 + # Hardware I/O ports
781 + #
782 + CONFIG_SERIO=y
783 + CONFIG_SERIO_I8042=y
784 + # CONFIG_SERIO_SERPORT is not set
785 + # CONFIG_SERIO_PCIPS2 is not set
786 + CONFIG_SERIO_LIBPS2=y
787 + # CONFIG_SERIO_RAW is not set
788 + CONFIG_GAMEPORT=m
789 + # CONFIG_GAMEPORT_NS558 is not set
790 + # CONFIG_GAMEPORT_L4 is not set
791 + # CONFIG_GAMEPORT_EMU10K1 is not set
792 + # CONFIG_GAMEPORT_FM801 is not set
793 +
794 + #
795 + # Character devices
796 + #
797 + CONFIG_VT=y
798 + CONFIG_CONSOLE_TRANSLATIONS=y
799 + CONFIG_VT_CONSOLE=y
800 + CONFIG_HW_CONSOLE=y
801 + # CONFIG_VT_HW_CONSOLE_BINDING is not set
802 + CONFIG_DEVKMEM=y
803 + CONFIG_SERIAL_NONSTANDARD=y
804 + # CONFIG_COMPUTONE is not set
805 + # CONFIG_ROCKETPORT is not set
806 + # CONFIG_CYCLADES is not set
807 + # CONFIG_DIGIEPCA is not set
808 + # CONFIG_MOXA_INTELLIO is not set
809 + # CONFIG_MOXA_SMARTIO is not set
810 + # CONFIG_ISI is not set
811 + # CONFIG_SYNCLINKMP is not set
812 + # CONFIG_SYNCLINK_GT is not set
813 + # CONFIG_N_HDLC is not set
814 + # CONFIG_RISCOM8 is not set
815 + # CONFIG_SPECIALIX is not set
816 + # CONFIG_SX is not set
817 + # CONFIG_RIO is not set
818 + # CONFIG_STALDRV is not set
819 + # CONFIG_NOZOMI is not set
820 +
821 + #
822 + # Serial drivers
823 + #
824 + CONFIG_SERIAL_8250=y
825 + CONFIG_SERIAL_8250_CONSOLE=y
826 + CONFIG_SERIAL_8250_PCI=y
827 + CONFIG_SERIAL_8250_PNP=y
828 + CONFIG_SERIAL_8250_NR_UARTS=6
829 + CONFIG_SERIAL_8250_RUNTIME_UARTS=4
830 + CONFIG_SERIAL_8250_EXTENDED=y
831 + CONFIG_SERIAL_8250_SHARE_IRQ=y
832 + # CONFIG_SERIAL_8250_DETECT_IRQ is not set
833 + # CONFIG_SERIAL_8250_RSA is not set
834 +
835 + #
836 + # Non-8250 serial port support
837 + #
838 + CONFIG_SERIAL_CORE=y
839 + CONFIG_SERIAL_CORE_CONSOLE=y
840 + # CONFIG_SERIAL_JSM is not set
841 + CONFIG_UNIX98_PTYS=y
842 + # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
843 + CONFIG_LEGACY_PTYS=y
844 + CONFIG_LEGACY_PTY_COUNT=256
845 + CONFIG_HVC_DRIVER=y
846 + CONFIG_HVC_IRQ=y
847 + CONFIG_HVC_XEN=y
848 + # CONFIG_IPMI_HANDLER is not set
849 + # CONFIG_HW_RANDOM is not set
850 + CONFIG_EFI_RTC=y
851 + # CONFIG_R3964 is not set
852 + # CONFIG_APPLICOM is not set
853 + CONFIG_RAW_DRIVER=m
854 + CONFIG_MAX_RAW_DEVS=256
855 + CONFIG_HPET=y
856 + CONFIG_HPET_MMAP=y
857 + # CONFIG_HANGCHECK_TIMER is not set
858 + # CONFIG_TCG_TPM is not set
859 + CONFIG_DEVPORT=y
860 + CONFIG_I2C=m
861 + CONFIG_I2C_BOARDINFO=y
862 + # CONFIG_I2C_CHARDEV is not set
863 + CONFIG_I2C_HELPER_AUTO=y
864 + CONFIG_I2C_ALGOBIT=m
865 +
866 + #
867 + # I2C Hardware Bus support
868 + #
869 +
870 + #
871 + # PC SMBus host controller drivers
872 + #
873 + # CONFIG_I2C_ALI1535 is not set
874 + # CONFIG_I2C_ALI1563 is not set
875 + # CONFIG_I2C_ALI15X3 is not set
876 + # CONFIG_I2C_AMD756 is not set
877 + # CONFIG_I2C_AMD8111 is not set
878 + # CONFIG_I2C_I801 is not set
879 + # CONFIG_I2C_ISCH is not set
880 + # CONFIG_I2C_PIIX4 is not set
881 + # CONFIG_I2C_NFORCE2 is not set
882 + # CONFIG_I2C_SIS5595 is not set
883 + # CONFIG_I2C_SIS630 is not set
884 + # CONFIG_I2C_SIS96X is not set
885 + # CONFIG_I2C_VIA is not set
886 + # CONFIG_I2C_VIAPRO is not set
887 +
888 + #
889 + # I2C system bus drivers (mostly embedded / system-on-chip)
890 + #
891 + # CONFIG_I2C_OCORES is not set
892 + # CONFIG_I2C_SIMTEC is not set
893 +
894 + #
895 + # External I2C/SMBus adapter drivers
896 + #
897 + # CONFIG_I2C_PARPORT_LIGHT is not set
898 + # CONFIG_I2C_TAOS_EVM is not set
899 + # CONFIG_I2C_TINY_USB is not set
900 +
901 + #
902 + # Graphics adapter I2C/DDC channel drivers
903 + #
904 + # CONFIG_I2C_VOODOO3 is not set
905 +
906 + #
907 + # Other I2C/SMBus bus drivers
908 + #
909 + # CONFIG_I2C_PCA_PLATFORM is not set
910 + # CONFIG_I2C_STUB is not set
911 +
912 + #
913 + # Miscellaneous I2C Chip support
914 + #
915 + # CONFIG_DS1682 is not set
916 + # CONFIG_AT24 is not set
917 + # CONFIG_SENSORS_EEPROM is not set
918 + # CONFIG_SENSORS_PCF8574 is not set
919 + # CONFIG_PCF8575 is not set
920 + # CONFIG_SENSORS_PCA9539 is not set
921 + # CONFIG_SENSORS_PCF8591 is not set
922 + # CONFIG_SENSORS_MAX6875 is not set
923 + # CONFIG_SENSORS_TSL2550 is not set
924 + # CONFIG_I2C_DEBUG_CORE is not set
925 + # CONFIG_I2C_DEBUG_ALGO is not set
926 + # CONFIG_I2C_DEBUG_BUS is not set
927 + # CONFIG_I2C_DEBUG_CHIP is not set
928 + # CONFIG_SPI is not set
929 + # CONFIG_W1 is not set
930 + CONFIG_POWER_SUPPLY=y
931 + # CONFIG_POWER_SUPPLY_DEBUG is not set
932 + # CONFIG_PDA_POWER is not set
933 + # CONFIG_BATTERY_DS2760 is not set
934 + # CONFIG_BATTERY_BQ27x00 is not set
935 + CONFIG_HWMON=y
936 + # CONFIG_HWMON_VID is not set
937 + # CONFIG_SENSORS_AD7414 is not set
938 + # CONFIG_SENSORS_AD7418 is not set
939 + # CONFIG_SENSORS_ADM1021 is not set
940 + # CONFIG_SENSORS_ADM1025 is not set
941 + # CONFIG_SENSORS_ADM1026 is not set
942 + # CONFIG_SENSORS_ADM1029 is not set
943 + # CONFIG_SENSORS_ADM1031 is not set
944 + # CONFIG_SENSORS_ADM9240 is not set
945 + # CONFIG_SENSORS_ADT7462 is not set
946 + # CONFIG_SENSORS_ADT7470 is not set
947 + # CONFIG_SENSORS_ADT7473 is not set
948 + # CONFIG_SENSORS_ATXP1 is not set
949 + # CONFIG_SENSORS_DS1621 is not set
950 + # CONFIG_SENSORS_I5K_AMB is not set
951 + # CONFIG_SENSORS_F71805F is not set
952 + # CONFIG_SENSORS_F71882FG is not set
953 + # CONFIG_SENSORS_F75375S is not set
954 + # CONFIG_SENSORS_GL518SM is not set
955 + # CONFIG_SENSORS_GL520SM is not set
956 + # CONFIG_SENSORS_IT87 is not set
957 + # CONFIG_SENSORS_LM63 is not set
958 + # CONFIG_SENSORS_LM75 is not set
959 + # CONFIG_SENSORS_LM77 is not set
960 + # CONFIG_SENSORS_LM78 is not set
961 + # CONFIG_SENSORS_LM80 is not set
962 + # CONFIG_SENSORS_LM83 is not set
963 + # CONFIG_SENSORS_LM85 is not set
964 + # CONFIG_SENSORS_LM87 is not set
965 + # CONFIG_SENSORS_LM90 is not set
966 + # CONFIG_SENSORS_LM92 is not set
967 + # CONFIG_SENSORS_LM93 is not set
968 + # CONFIG_SENSORS_LTC4245 is not set
969 + # CONFIG_SENSORS_MAX1619 is not set
970 + # CONFIG_SENSORS_MAX6650 is not set
971 + # CONFIG_SENSORS_PC87360 is not set
972 + # CONFIG_SENSORS_PC87427 is not set
973 + # CONFIG_SENSORS_SIS5595 is not set
974 + # CONFIG_SENSORS_DME1737 is not set
975 + # CONFIG_SENSORS_SMSC47M1 is not set
976 + # CONFIG_SENSORS_SMSC47M192 is not set
977 + # CONFIG_SENSORS_SMSC47B397 is not set
978 + # CONFIG_SENSORS_ADS7828 is not set
979 + # CONFIG_SENSORS_THMC50 is not set
980 + # CONFIG_SENSORS_VIA686A is not set
981 + # CONFIG_SENSORS_VT1211 is not set
982 + # CONFIG_SENSORS_VT8231 is not set
983 + # CONFIG_SENSORS_W83781D is not set
984 + # CONFIG_SENSORS_W83791D is not set
985 + # CONFIG_SENSORS_W83792D is not set
986 + # CONFIG_SENSORS_W83793 is not set
987 + # CONFIG_SENSORS_W83L785TS is not set
988 + # CONFIG_SENSORS_W83L786NG is not set
989 + # CONFIG_SENSORS_W83627HF is not set
990 + # CONFIG_SENSORS_W83627EHF is not set
991 + # CONFIG_SENSORS_LIS3LV02D is not set
992 + # CONFIG_HWMON_DEBUG_CHIP is not set
993 + CONFIG_THERMAL=m
994 + # CONFIG_THERMAL_HWMON is not set
995 + # CONFIG_WATCHDOG is not set
996 + CONFIG_SSB_POSSIBLE=y
997 +
998 + #
999 + # Sonics Silicon Backplane
1000 + #
1001 + # CONFIG_SSB is not set
1002 +
1003 + #
1004 + # Multifunction device drivers
1005 + #
1006 + # CONFIG_MFD_CORE is not set
1007 + # CONFIG_MFD_SM501 is not set
1008 + # CONFIG_HTC_PASIC3 is not set
1009 + # CONFIG_MFD_TMIO is not set
1010 + # CONFIG_MFD_WM8400 is not set
1011 + # CONFIG_MFD_WM8350_I2C is not set
1012 + # CONFIG_MFD_PCF50633 is not set
1013 + # CONFIG_REGULATOR is not set
1014 +
1015 + #
1016 + # Multimedia devices
1017 + #
1018 +
1019 + #
1020 + # Multimedia core support
1021 + #
1022 + # CONFIG_VIDEO_DEV is not set
1023 + # CONFIG_DVB_CORE is not set
1024 + # CONFIG_VIDEO_MEDIA is not set
1025 +
1026 + #
1027 + # Multimedia drivers
1028 + #
1029 + CONFIG_DAB=y
1030 + # CONFIG_USB_DABUSB is not set
1031 +
1032 + #
1033 + # Graphics support
1034 + #
1035 + CONFIG_AGP=m
1036 + CONFIG_DRM=m
1037 + CONFIG_DRM_TDFX=m
1038 + CONFIG_DRM_R128=m
1039 + CONFIG_DRM_RADEON=m
1040 + CONFIG_DRM_MGA=m
1041 + CONFIG_DRM_SIS=m
1042 + # CONFIG_DRM_VIA is not set
1043 + # CONFIG_DRM_SAVAGE is not set
1044 + # CONFIG_VGASTATE is not set
1045 + # CONFIG_VIDEO_OUTPUT_CONTROL is not set
1046 + # CONFIG_FB is not set
1047 + # CONFIG_BACKLIGHT_LCD_SUPPORT is not set
1048 +
1049 + #
1050 + # Display device support
1051 + #
1052 + # CONFIG_DISPLAY_SUPPORT is not set
1053 +
1054 + #
1055 + # Console display driver support
1056 + #
1057 + CONFIG_VGA_CONSOLE=y
1058 + # CONFIG_VGACON_SOFT_SCROLLBACK is not set
1059 + CONFIG_DUMMY_CONSOLE=y
1060 + # CONFIG_SOUND is not set
1061 + CONFIG_HID_SUPPORT=y
1062 + CONFIG_HID=y
1063 + # CONFIG_HID_DEBUG is not set
1064 + # CONFIG_HIDRAW is not set
1065 +
1066 + #
1067 + # USB Input Devices
1068 + #
1069 + CONFIG_USB_HID=y
1070 + # CONFIG_HID_PID is not set
1071 + # CONFIG_USB_HIDDEV is not set
1072 +
1073 + #
1074 + # Special HID drivers
1075 + #
1076 + CONFIG_HID_COMPAT=y
1077 + CONFIG_HID_A4TECH=y
1078 + CONFIG_HID_APPLE=y
1079 + CONFIG_HID_BELKIN=y
1080 + CONFIG_HID_CHERRY=y
1081 + CONFIG_HID_CHICONY=y
1082 + CONFIG_HID_CYPRESS=y
1083 + CONFIG_HID_EZKEY=y
1084 + CONFIG_HID_GYRATION=y
1085 + CONFIG_HID_LOGITECH=y
1086 + # CONFIG_LOGITECH_FF is not set
1087 + # CONFIG_LOGIRUMBLEPAD2_FF is not set
1088 + CONFIG_HID_MICROSOFT=y
1089 + CONFIG_HID_MONTEREY=y
1090 + CONFIG_HID_NTRIG=y
1091 + CONFIG_HID_PANTHERLORD=y
1092 + # CONFIG_PANTHERLORD_FF is not set
1093 + CONFIG_HID_PETALYNX=y
1094 + CONFIG_HID_SAMSUNG=y
1095 + CONFIG_HID_SONY=y
1096 + CONFIG_HID_SUNPLUS=y
1097 + # CONFIG_GREENASIA_FF is not set
1098 + CONFIG_HID_TOPSEED=y
1099 + # CONFIG_THRUSTMASTER_FF is not set
1100 + # CONFIG_ZEROPLUS_FF is not set
1101 + CONFIG_USB_SUPPORT=y
1102 + CONFIG_USB_ARCH_HAS_HCD=y
1103 + CONFIG_USB_ARCH_HAS_OHCI=y
1104 + CONFIG_USB_ARCH_HAS_EHCI=y
1105 + CONFIG_USB=y
1106 + # CONFIG_USB_DEBUG is not set
1107 + # CONFIG_USB_ANNOUNCE_NEW_DEVICES is not set
1108 +
1109 + #
1110 + # Miscellaneous USB options
1111 + #
1112 + CONFIG_USB_DEVICEFS=y
1113 + CONFIG_USB_DEVICE_CLASS=y
1114 + # CONFIG_USB_DYNAMIC_MINORS is not set
1115 + # CONFIG_USB_SUSPEND is not set
1116 + # CONFIG_USB_OTG is not set
1117 + # CONFIG_USB_MON is not set
1118 + # CONFIG_USB_WUSB is not set
1119 + # CONFIG_USB_WUSB_CBAF is not set
1120 +
1121 + #
1122 + # USB Host Controller Drivers
1123 + #
1124 + # CONFIG_USB_C67X00_HCD is not set
1125 + CONFIG_USB_EHCI_HCD=m
1126 + # CONFIG_USB_EHCI_ROOT_HUB_TT is not set
1127 + # CONFIG_USB_EHCI_TT_NEWSCHED is not set
1128 + # CONFIG_USB_OXU210HP_HCD is not set
1129 + # CONFIG_USB_ISP116X_HCD is not set
1130 + # CONFIG_USB_ISP1760_HCD is not set
1131 + CONFIG_USB_OHCI_HCD=m
1132 + # CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
1133 + # CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
1134 + CONFIG_USB_OHCI_LITTLE_ENDIAN=y
1135 + CONFIG_USB_UHCI_HCD=y
1136 + # CONFIG_USB_SL811_HCD is not set
1137 + # CONFIG_USB_R8A66597_HCD is not set
1138 + # CONFIG_USB_WHCI_HCD is not set
1139 + # CONFIG_USB_HWA_HCD is not set
1140 +
1141 + #
1142 + # USB Device Class drivers
1143 + #
1144 + # CONFIG_USB_ACM is not set
1145 + # CONFIG_USB_PRINTER is not set
1146 + # CONFIG_USB_WDM is not set
1147 + # CONFIG_USB_TMC is not set
1148 +
1149 + #
1150 + # NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may also be needed;
1151 + #
1152 +
1153 + #
1154 + # see USB_STORAGE Help for more information
1155 + #
1156 + CONFIG_USB_STORAGE=m
1157 + # CONFIG_USB_STORAGE_DEBUG is not set
1158 + # CONFIG_USB_STORAGE_DATAFAB is not set
1159 + # CONFIG_USB_STORAGE_FREECOM is not set
1160 + # CONFIG_USB_STORAGE_ISD200 is not set
1161 + # CONFIG_USB_STORAGE_USBAT is not set
1162 + # CONFIG_USB_STORAGE_SDDR09 is not set
1163 + # CONFIG_USB_STORAGE_SDDR55 is not set
1164 + # CONFIG_USB_STORAGE_JUMPSHOT is not set
1165 + # CONFIG_USB_STORAGE_ALAUDA is not set
1166 + # CONFIG_USB_STORAGE_ONETOUCH is not set
1167 + # CONFIG_USB_STORAGE_KARMA is not set
1168 + # CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
1169 + # CONFIG_USB_LIBUSUAL is not set
1170 +
1171 + #
1172 + # USB Imaging devices
1173 + #
1174 + # CONFIG_USB_MDC800 is not set
1175 + # CONFIG_USB_MICROTEK is not set
1176 +
1177 + #
1178 + # USB port drivers
1179 + #
1180 + # CONFIG_USB_SERIAL is not set
1181 +
1182 + #
1183 + # USB Miscellaneous drivers
1184 + #
1185 + # CONFIG_USB_EMI62 is not set
1186 + # CONFIG_USB_EMI26 is not set
1187 + # CONFIG_USB_ADUTUX is not set
1188 + # CONFIG_USB_SEVSEG is not set
1189 + # CONFIG_USB_RIO500 is not set
1190 + # CONFIG_USB_LEGOTOWER is not set
1191 + # CONFIG_USB_LCD is not set
1192 + # CONFIG_USB_BERRY_CHARGE is not set
1193 + # CONFIG_USB_LED is not set
1194 + # CONFIG_USB_CYPRESS_CY7C63 is not set
1195 + # CONFIG_USB_CYTHERM is not set
1196 + # CONFIG_USB_PHIDGET is not set
1197 + # CONFIG_USB_IDMOUSE is not set
1198 + # CONFIG_USB_FTDI_ELAN is not set
1199 + # CONFIG_USB_APPLEDISPLAY is not set
1200 + # CONFIG_USB_SISUSBVGA is not set
1201 + # CONFIG_USB_LD is not set
1202 + # CONFIG_USB_TRANCEVIBRATOR is not set
1203 + # CONFIG_USB_IOWARRIOR is not set
1204 + # CONFIG_USB_TEST is not set
1205 + # CONFIG_USB_ISIGHTFW is not set
1206 + # CONFIG_USB_VST is not set
1207 + # CONFIG_USB_GADGET is not set
1208 +
1209 + #
1210 + # OTG and related infrastructure
1211 + #
1212 + # CONFIG_UWB is not set
1213 + # CONFIG_MMC is not set
1214 + # CONFIG_MEMSTICK is not set
1215 + # CONFIG_NEW_LEDS is not set
1216 + # CONFIG_ACCESSIBILITY is not set
1217 + # CONFIG_INFINIBAND is not set
1218 + # CONFIG_RTC_CLASS is not set
1219 + # CONFIG_DMADEVICES is not set
1220 + # CONFIG_UIO is not set
1221 + CONFIG_XEN_BALLOON=y
1222 + CONFIG_XEN_SCRUB_PAGES=y
1223 + CONFIG_XENFS=y
1224 + CONFIG_XEN_COMPAT_XENFS=y
1225 + # CONFIG_STAGING is not set
1226 + # CONFIG_MSPEC is not set
1227 +
1228 + #
1229 + # File systems
1230 + #
1231 + CONFIG_EXT2_FS=y
1232 + CONFIG_EXT2_FS_XATTR=y
1233 + CONFIG_EXT2_FS_POSIX_ACL=y
1234 + CONFIG_EXT2_FS_SECURITY=y
1235 + # CONFIG_EXT2_FS_XIP is not set
1236 + CONFIG_EXT3_FS=y
1237 + CONFIG_EXT3_FS_XATTR=y
1238 + CONFIG_EXT3_FS_POSIX_ACL=y
1239 + CONFIG_EXT3_FS_SECURITY=y
1240 + # CONFIG_EXT4_FS is not set
1241 + CONFIG_JBD=y
1242 + CONFIG_FS_MBCACHE=y
1243 + CONFIG_REISERFS_FS=y
1244 + # CONFIG_REISERFS_CHECK is not set
1245 + # CONFIG_REISERFS_PROC_INFO is not set
1246 + CONFIG_REISERFS_FS_XATTR=y
1247 + CONFIG_REISERFS_FS_POSIX_ACL=y
1248 + CONFIG_REISERFS_FS_SECURITY=y
1249 + # CONFIG_JFS_FS is not set
1250 + CONFIG_FS_POSIX_ACL=y
1251 + CONFIG_FILE_LOCKING=y
1252 + CONFIG_XFS_FS=y
1253 + # CONFIG_XFS_QUOTA is not set
1254 + # CONFIG_XFS_POSIX_ACL is not set
1255 + # CONFIG_XFS_RT is not set
1256 + # CONFIG_XFS_DEBUG is not set
1257 + # CONFIG_GFS2_FS is not set
1258 + # CONFIG_OCFS2_FS is not set
1259 + # CONFIG_BTRFS_FS is not set
1260 + CONFIG_DNOTIFY=y
1261 + CONFIG_INOTIFY=y
1262 + CONFIG_INOTIFY_USER=y
1263 + # CONFIG_QUOTA is not set
1264 + CONFIG_AUTOFS_FS=y
1265 + CONFIG_AUTOFS4_FS=y
1266 + # CONFIG_FUSE_FS is not set
1267 +
1268 + #
1269 + # CD-ROM/DVD Filesystems
1270 + #
1271 + CONFIG_ISO9660_FS=m
1272 + CONFIG_JOLIET=y
1273 + # CONFIG_ZISOFS is not set
1274 + CONFIG_UDF_FS=m
1275 + CONFIG_UDF_NLS=y
1276 +
1277 + #
1278 + # DOS/FAT/NT Filesystems
1279 + #
1280 + CONFIG_FAT_FS=y
1281 + # CONFIG_MSDOS_FS is not set
1282 + CONFIG_VFAT_FS=y
1283 + CONFIG_FAT_DEFAULT_CODEPAGE=437
1284 + CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
1285 + CONFIG_NTFS_FS=m
1286 + # CONFIG_NTFS_DEBUG is not set
1287 + # CONFIG_NTFS_RW is not set
1288 +
1289 + #
1290 + # Pseudo filesystems
1291 + #
1292 + CONFIG_PROC_FS=y
1293 + CONFIG_PROC_KCORE=y
1294 + CONFIG_PROC_SYSCTL=y
1295 + CONFIG_PROC_PAGE_MONITOR=y
1296 + CONFIG_SYSFS=y
1297 + CONFIG_TMPFS=y
1298 + # CONFIG_TMPFS_POSIX_ACL is not set
1299 + CONFIG_HUGETLBFS=y
1300 + CONFIG_HUGETLB_PAGE=y
1301 + # CONFIG_CONFIGFS_FS is not set
1302 + CONFIG_MISC_FILESYSTEMS=y
1303 + # CONFIG_ADFS_FS is not set
1304 + # CONFIG_AFFS_FS is not set
1305 + # CONFIG_HFS_FS is not set
1306 + # CONFIG_HFSPLUS_FS is not set
1307 + # CONFIG_BEFS_FS is not set
1308 + # CONFIG_BFS_FS is not set
1309 + # CONFIG_EFS_FS is not set
1310 + # CONFIG_CRAMFS is not set
1311 + # CONFIG_SQUASHFS is not set
1312 + # CONFIG_VXFS_FS is not set
1313 + # CONFIG_MINIX_FS is not set
1314 + # CONFIG_OMFS_FS is not set
1315 + # CONFIG_HPFS_FS is not set
1316 + # CONFIG_QNX4FS_FS is not set
1317 + # CONFIG_ROMFS_FS is not set
1318 + # CONFIG_SYSV_FS is not set
1319 + # CONFIG_UFS_FS is not set
1320 + CONFIG_NETWORK_FILESYSTEMS=y
1321 + CONFIG_NFS_FS=m
1322 + CONFIG_NFS_V3=y
1323 + # CONFIG_NFS_V3_ACL is not set
1324 + CONFIG_NFS_V4=y
1325 + CONFIG_NFSD=m
1326 + CONFIG_NFSD_V3=y
1327 + # CONFIG_NFSD_V3_ACL is not set
1328 + CONFIG_NFSD_V4=y
1329 + CONFIG_LOCKD=m
1330 + CONFIG_LOCKD_V4=y
1331 + CONFIG_EXPORTFS=m
1332 + CONFIG_NFS_COMMON=y
1333 + CONFIG_SUNRPC=m
1334 + CONFIG_SUNRPC_GSS=m
1335 + # CONFIG_SUNRPC_REGISTER_V4 is not set
1336 + CONFIG_RPCSEC_GSS_KRB5=m
1337 + # CONFIG_RPCSEC_GSS_SPKM3 is not set
1338 + CONFIG_SMB_FS=m
1339 + CONFIG_SMB_NLS_DEFAULT=y
1340 + CONFIG_SMB_NLS_REMOTE="cp437"
1341 + CONFIG_CIFS=m
1342 + # CONFIG_CIFS_STATS is not set
1343 + # CONFIG_CIFS_WEAK_PW_HASH is not set
1344 + # CONFIG_CIFS_XATTR is not set
1345 + # CONFIG_CIFS_DEBUG2 is not set
1346 + # CONFIG_CIFS_EXPERIMENTAL is not set
1347 + # CONFIG_NCP_FS is not set
1348 + # CONFIG_CODA_FS is not set
1349 + # CONFIG_AFS_FS is not set
1350 +
1351 + #
1352 + # Partition Types
1353 + #
1354 + CONFIG_PARTITION_ADVANCED=y
1355 + # CONFIG_ACORN_PARTITION is not set
1356 + # CONFIG_OSF_PARTITION is not set
1357 + # CONFIG_AMIGA_PARTITION is not set
1358 + # CONFIG_ATARI_PARTITION is not set
1359 + # CONFIG_MAC_PARTITION is not set
1360 + CONFIG_MSDOS_PARTITION=y
1361 + # CONFIG_BSD_DISKLABEL is not set
1362 + # CONFIG_MINIX_SUBPARTITION is not set
1363 + # CONFIG_SOLARIS_X86_PARTITION is not set
1364 + # CONFIG_UNIXWARE_DISKLABEL is not set
1365 + # CONFIG_LDM_PARTITION is not set
1366 + CONFIG_SGI_PARTITION=y
1367 + # CONFIG_ULTRIX_PARTITION is not set
1368 + # CONFIG_SUN_PARTITION is not set
1369 + # CONFIG_KARMA_PARTITION is not set
1370 + CONFIG_EFI_PARTITION=y
1371 + # CONFIG_SYSV68_PARTITION is not set
1372 + CONFIG_NLS=y
1373 + CONFIG_NLS_DEFAULT="iso8859-1"
1374 + CONFIG_NLS_CODEPAGE_437=y
1375 + CONFIG_NLS_CODEPAGE_737=m
1376 + CONFIG_NLS_CODEPAGE_775=m
1377 + CONFIG_NLS_CODEPAGE_850=m
1378 + CONFIG_NLS_CODEPAGE_852=m
1379 + CONFIG_NLS_CODEPAGE_855=m
1380 + CONFIG_NLS_CODEPAGE_857=m
1381 + CONFIG_NLS_CODEPAGE_860=m
1382 + CONFIG_NLS_CODEPAGE_861=m
1383 + CONFIG_NLS_CODEPAGE_862=m
1384 + CONFIG_NLS_CODEPAGE_863=m
1385 + CONFIG_NLS_CODEPAGE_864=m
1386 + CONFIG_NLS_CODEPAGE_865=m
1387 + CONFIG_NLS_CODEPAGE_866=m
1388 + CONFIG_NLS_CODEPAGE_869=m
1389 + CONFIG_NLS_CODEPAGE_936=m
1390 + CONFIG_NLS_CODEPAGE_950=m
1391 + CONFIG_NLS_CODEPAGE_932=m
1392 + CONFIG_NLS_CODEPAGE_949=m
1393 + CONFIG_NLS_CODEPAGE_874=m
1394 + CONFIG_NLS_ISO8859_8=m
1395 + CONFIG_NLS_CODEPAGE_1250=m
1396 + CONFIG_NLS_CODEPAGE_1251=m
1397 + # CONFIG_NLS_ASCII is not set
1398 + CONFIG_NLS_ISO8859_1=y
1399 + CONFIG_NLS_ISO8859_2=m
1400 + CONFIG_NLS_ISO8859_3=m
1401 + CONFIG_NLS_ISO8859_4=m
1402 + CONFIG_NLS_ISO8859_5=m
1403 + CONFIG_NLS_ISO8859_6=m
1404 + CONFIG_NLS_ISO8859_7=m
1405 + CONFIG_NLS_ISO8859_9=m
1406 + CONFIG_NLS_ISO8859_13=m
1407 + CONFIG_NLS_ISO8859_14=m
1408 + CONFIG_NLS_ISO8859_15=m
1409 + CONFIG_NLS_KOI8_R=m
1410 + CONFIG_NLS_KOI8_U=m
1411 + CONFIG_NLS_UTF8=m
1412 + # CONFIG_DLM is not set
1413 +
1414 + #
1415 + # Kernel hacking
1416 + #
1417 + # CONFIG_PRINTK_TIME is not set
1418 + CONFIG_ENABLE_WARN_DEPRECATED=y
1419 + CONFIG_ENABLE_MUST_CHECK=y
1420 + CONFIG_FRAME_WARN=2048
1421 + CONFIG_MAGIC_SYSRQ=y
1422 + # CONFIG_UNUSED_SYMBOLS is not set
1423 + # CONFIG_DEBUG_FS is not set
1424 + # CONFIG_HEADERS_CHECK is not set
1425 + CONFIG_DEBUG_KERNEL=y
1426 + # CONFIG_DEBUG_SHIRQ is not set
1427 + CONFIG_DETECT_SOFTLOCKUP=y
1428 + # CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
1429 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
1430 + CONFIG_SCHED_DEBUG=y
1431 + # CONFIG_SCHEDSTATS is not set
1432 + # CONFIG_TIMER_STATS is not set
1433 + # CONFIG_DEBUG_OBJECTS is not set
1434 + # CONFIG_SLUB_DEBUG_ON is not set
1435 + # CONFIG_SLUB_STATS is not set
1436 + # CONFIG_DEBUG_RT_MUTEXES is not set
1437 + # CONFIG_RT_MUTEX_TESTER is not set
1438 + # CONFIG_DEBUG_SPINLOCK is not set
1439 + CONFIG_DEBUG_MUTEXES=y
1440 + # CONFIG_DEBUG_SPINLOCK_SLEEP is not set
1441 + # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
1442 + # CONFIG_DEBUG_KOBJECT is not set
1443 + # CONFIG_DEBUG_INFO is not set
1444 + # CONFIG_DEBUG_VM is not set
1445 + # CONFIG_DEBUG_WRITECOUNT is not set
1446 + CONFIG_DEBUG_MEMORY_INIT=y
1447 + # CONFIG_DEBUG_LIST is not set
1448 + # CONFIG_DEBUG_SG is not set
1449 + # CONFIG_DEBUG_NOTIFIERS is not set
1450 + # CONFIG_BOOT_PRINTK_DELAY is not set
1451 + # CONFIG_RCU_TORTURE_TEST is not set
1452 + # CONFIG_RCU_CPU_STALL_DETECTOR is not set
1453 + # CONFIG_BACKTRACE_SELF_TEST is not set
1454 + # CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
1455 + # CONFIG_FAULT_INJECTION is not set
1456 + # CONFIG_SYSCTL_SYSCALL_CHECK is not set
1457 +
1458 + #
1459 + # Tracers
1460 + #
1461 + # CONFIG_SCHED_TRACER is not set
1462 + # CONFIG_CONTEXT_SWITCH_TRACER is not set
1463 + # CONFIG_BOOT_TRACER is not set
1464 + # CONFIG_TRACE_BRANCH_PROFILING is not set
1465 + # CONFIG_DYNAMIC_PRINTK_DEBUG is not set
1466 + # CONFIG_SAMPLES is not set
1467 + CONFIG_IA64_GRANULE_16MB=y
1468 + # CONFIG_IA64_GRANULE_64MB is not set
1469 + # CONFIG_IA64_PRINT_HAZARDS is not set
1470 + # CONFIG_DISABLE_VHPT is not set
1471 + # CONFIG_IA64_DEBUG_CMPXCHG is not set
1472 + # CONFIG_IA64_DEBUG_IRQ is not set
1473 +
1474 + #
1475 + # Security options
1476 + #
1477 + # CONFIG_KEYS is not set
1478 + # CONFIG_SECURITY is not set
1479 + # CONFIG_SECURITYFS is not set
1480 + # CONFIG_SECURITY_FILE_CAPABILITIES is not set
1481 + CONFIG_CRYPTO=y
1482 +
1483 + #
1484 + # Crypto core or helper
1485 + #
1486 + # CONFIG_CRYPTO_FIPS is not set
1487 + CONFIG_CRYPTO_ALGAPI=y
1488 + CONFIG_CRYPTO_ALGAPI2=y
1489 + CONFIG_CRYPTO_AEAD2=y
1490 + CONFIG_CRYPTO_BLKCIPHER=m
1491 + CONFIG_CRYPTO_BLKCIPHER2=y
1492 + CONFIG_CRYPTO_HASH=y
1493 + CONFIG_CRYPTO_HASH2=y
1494 + CONFIG_CRYPTO_RNG2=y
1495 + CONFIG_CRYPTO_MANAGER=m
1496 + CONFIG_CRYPTO_MANAGER2=y
1497 + # CONFIG_CRYPTO_GF128MUL is not set
1498 + # CONFIG_CRYPTO_NULL is not set
1499 + # CONFIG_CRYPTO_CRYPTD is not set
1500 + # CONFIG_CRYPTO_AUTHENC is not set
1501 + # CONFIG_CRYPTO_TEST is not set
1502 +
1503 + #
1504 + # Authenticated Encryption with Associated Data
1505 + #
1506 + # CONFIG_CRYPTO_CCM is not set
1507 + # CONFIG_CRYPTO_GCM is not set
1508 + # CONFIG_CRYPTO_SEQIV is not set
1509 +
1510 + #
1511 + # Block modes
1512 + #
1513 + CONFIG_CRYPTO_CBC=m
1514 + # CONFIG_CRYPTO_CTR is not set
1515 + # CONFIG_CRYPTO_CTS is not set
1516 + CONFIG_CRYPTO_ECB=m
1517 + # CONFIG_CRYPTO_LRW is not set
1518 + CONFIG_CRYPTO_PCBC=m
1519 + # CONFIG_CRYPTO_XTS is not set
1520 +
1521 + #
1522 + # Hash modes
1523 + #
1524 + # CONFIG_CRYPTO_HMAC is not set
1525 + # CONFIG_CRYPTO_XCBC is not set
1526 +
1527 + #
1528 + # Digest
1529 + #
1530 + # CONFIG_CRYPTO_CRC32C is not set
1531 + # CONFIG_CRYPTO_MD4 is not set
1532 + CONFIG_CRYPTO_MD5=y
1533 + # CONFIG_CRYPTO_MICHAEL_MIC is not set
1534 + # CONFIG_CRYPTO_RMD128 is not set
1535 + # CONFIG_CRYPTO_RMD160 is not set
1536 + # CONFIG_CRYPTO_RMD256 is not set
1537 + # CONFIG_CRYPTO_RMD320 is not set
1538 + # CONFIG_CRYPTO_SHA1 is not set
1539 + # CONFIG_CRYPTO_SHA256 is not set
1540 + # CONFIG_CRYPTO_SHA512 is not set
1541 + # CONFIG_CRYPTO_TGR192 is not set
1542 + # CONFIG_CRYPTO_WP512 is not set
1543 +
1544 + #
1545 + # Ciphers
1546 + #
1547 + # CONFIG_CRYPTO_AES is not set
1548 + # CONFIG_CRYPTO_ANUBIS is not set
1549 + # CONFIG_CRYPTO_ARC4 is not set
1550 + # CONFIG_CRYPTO_BLOWFISH is not set
1551 + # CONFIG_CRYPTO_CAMELLIA is not set
1552 + # CONFIG_CRYPTO_CAST5 is not set
1553 + # CONFIG_CRYPTO_CAST6 is not set
1554 + CONFIG_CRYPTO_DES=m
1555 + # CONFIG_CRYPTO_FCRYPT is not set
1556 + # CONFIG_CRYPTO_KHAZAD is not set
1557 + # CONFIG_CRYPTO_SALSA20 is not set
1558 + # CONFIG_CRYPTO_SEED is not set
1559 + # CONFIG_CRYPTO_SERPENT is not set
1560 + # CONFIG_CRYPTO_TEA is not set
1561 + # CONFIG_CRYPTO_TWOFISH is not set
1562 +
1563 + #
1564 + # Compression
1565 + #
1566 + # CONFIG_CRYPTO_DEFLATE is not set
1567 + # CONFIG_CRYPTO_LZO is not set
1568 +
1569 + #
1570 + # Random Number Generation
1571 + #
1572 + # CONFIG_CRYPTO_ANSI_CPRNG is not set
1573 + CONFIG_CRYPTO_HW=y
1574 + # CONFIG_CRYPTO_DEV_HIFN_795X is not set
1575 + CONFIG_HAVE_KVM=y
1576 + CONFIG_VIRTUALIZATION=y
1577 + # CONFIG_KVM is not set
1578 + # CONFIG_VIRTIO_PCI is not set
1579 + # CONFIG_VIRTIO_BALLOON is not set
1580 +
1581 + #
1582 + # Library routines
1583 + #
1584 + CONFIG_BITREVERSE=y
1585 + CONFIG_GENERIC_FIND_LAST_BIT=y
1586 + # CONFIG_CRC_CCITT is not set
1587 + # CONFIG_CRC16 is not set
1588 + # CONFIG_CRC_T10DIF is not set
1589 + CONFIG_CRC_ITU_T=m
1590 + CONFIG_CRC32=y
1591 + # CONFIG_CRC7 is not set
1592 + # CONFIG_LIBCRC32C is not set
1593 + CONFIG_PLIST=y
1594 + CONFIG_HAS_IOMEM=y
1595 + CONFIG_HAS_IOPORT=y
1596 + CONFIG_HAS_DMA=y
1597 + CONFIG_GENERIC_HARDIRQS=y
1598 + CONFIG_GENERIC_IRQ_PROBE=y
1599 + CONFIG_GENERIC_PENDING_IRQ=y
1600 + CONFIG_IRQ_PER_CPU=y
1601 + # CONFIG_IOMMU_API is not set
+4
arch/ia64/include/asm/kvm.h
··· 25 26 #include <linux/ioctl.h> 27 28 /* Architectural interrupt line count. */ 29 #define KVM_NR_INTERRUPTS 256 30
··· 25 26 #include <linux/ioctl.h> 27 28 + /* Select x86 specific features in <linux/kvm.h> */ 29 + #define __KVM_HAVE_IOAPIC 30 + #define __KVM_HAVE_DEVICE_ASSIGNMENT 31 + 32 /* Architectural interrupt line count. */ 33 #define KVM_NR_INTERRUPTS 256 34
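The __KVM_HAVE_* selectors tell the generic <linux/kvm.h> header which optional structures and ioctls to expose for this architecture; the same pattern (with more selectors) is added to the x86 header later in this merge. The guarded body below is a hedged sketch of the consumer side, not copied from the real header:

/* sketch: the generic uapi header keys optional ABI off the selectors */
#ifdef __KVM_HAVE_IOAPIC
/* ... irqchip-related capabilities and ioctl payloads go here ... */
#endif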
-4
arch/ia64/include/asm/mmzone.h
··· 31 #endif 32 } 33 34 - #ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID 35 - extern int early_pfn_to_nid(unsigned long pfn); 36 - #endif 37 - 38 #ifdef CONFIG_IA64_DIG /* DIG systems are small */ 39 # define MAX_PHYSNODE_ID 8 40 # define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
··· 31 #endif 32 } 33 34 #ifdef CONFIG_IA64_DIG /* DIG systems are small */ 35 # define MAX_PHYSNODE_ID 8 36 # define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
+1 -1
arch/ia64/include/asm/sn/bte.h
··· 39 /* BTE status register only supports 16 bits for length field */ 40 #define BTE_LEN_BITS (16) 41 #define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1) 42 - #define BTE_MAX_XFER ((1 << BTE_LEN_BITS) * L1_CACHE_BYTES) 43 44 45 /* Define hardware */
··· 39 /* BTE status register only supports 16 bits for length field */ 40 #define BTE_LEN_BITS (16) 41 #define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1) 42 + #define BTE_MAX_XFER (BTE_LEN_MASK << L1_CACHE_SHIFT) 43 44 45 /* Define hardware */
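Worked out with the usual SN2 line size (L1_CACHE_SHIFT == 7, i.e. 128-byte lines, an assumption stated here rather than taken from the hunk), this corrects an off-by-one-line limit:

/*
 *   old: BTE_MAX_XFER = (1 << 16) * 128  = 0x800000 (8 MiB)
 *   new: BTE_MAX_XFER = 0xffff << 7      = 0x7fff80 (8 MiB minus one line)
 *
 * The hardware length field holds at most BTE_LEN_MASK == 0xffff cache
 * lines, so the old constant named a transfer one full line larger than
 * the engine can actually express.
 */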
+3 -2
arch/ia64/kernel/smpboot.c
··· 736 return -EBUSY; 737 } 738 739 if (migrate_platform_irqs(cpu)) { 740 cpu_set(cpu, cpu_online_map); 741 - return (-EBUSY); 742 } 743 744 remove_siblinginfo(cpu); 745 fixup_irqs(); 746 - cpu_clear(cpu, cpu_online_map); 747 local_flush_tlb_all(); 748 cpu_clear(cpu, cpu_callin_map); 749 return 0;
··· 736 return -EBUSY; 737 } 738 739 + cpu_clear(cpu, cpu_online_map); 740 + 741 if (migrate_platform_irqs(cpu)) { 742 cpu_set(cpu, cpu_online_map); 743 + return -EBUSY; 744 } 745 746 remove_siblinginfo(cpu); 747 fixup_irqs(); 748 local_flush_tlb_all(); 749 cpu_clear(cpu, cpu_callin_map); 750 return 0;
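Clearing the dying CPU from cpu_online_map before migrating platform interrupts means any code that picks an interrupt target from the online map can no longer choose the CPU that is going away; on failure the bit is restored and the offline attempt aborts. A hypothetical retargeting loop illustrates the hazard (this helper is illustrative, not the real migrate_platform_irqs()):

static void retarget_one_irq(unsigned int irq, unsigned int dying_cpu)
{
        /* any_online_cpu() consults cpu_online_map: with the dying
         * cpu already cleared, it can never be picked as the target */
        unsigned int new_cpu = any_online_cpu(cpu_online_map);

        /* ... reprogram the I/O SAPIC entry for irq to new_cpu ... */
}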
+4
arch/ia64/kvm/kvm-ia64.c
··· 1337 } 1338 } 1339 1340 void kvm_arch_destroy_vm(struct kvm *kvm) 1341 { 1342 kvm_iommu_unmap_guest(kvm);
··· 1337 } 1338 } 1339 1340 + void kvm_arch_sync_events(struct kvm *kvm) 1341 + { 1342 + } 1343 + 1344 void kvm_arch_destroy_vm(struct kvm *kvm) 1345 { 1346 kvm_iommu_unmap_guest(kvm);
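kvm_arch_sync_events() is a new hook that the generic VM teardown path calls before kvm_arch_destroy_vm(), giving each architecture a point to quiesce asynchronous work (timers, assigned-device interrupts) that might still reference the VM; ia64, powerpc and s390 only need empty stubs, and the same stub appears in the powerpc and s390 hunks below. A sketch of the assumed call order in virt/kvm/kvm_main.c:

static void kvm_destroy_vm(struct kvm *kvm)
{
        /* ... unlink from vm_list, tear down irq routing ... */
        kvm_arch_sync_events(kvm);   /* arch quiesces async work */
        kvm_arch_destroy_vm(kvm);    /* only then free arch state */
        /* ... */
}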
+9 -8
arch/ia64/kvm/process.c
··· 455 if (!vmm_fpswa_interface) 456 return (fpswa_ret_t) {-1, 0, 0, 0}; 457 458 - /* 459 - * Just let fpswa driver to use hardware fp registers. 460 - * No fp register is valid in memory. 461 - */ 462 memset(&fp_state, 0, sizeof(fp_state_t)); 463 464 /* 465 * unsigned long (*EFI_FPSWA) ( 466 * unsigned long trap_type, 467 * void *Bundle, ··· 550 status = vmm_handle_fpu_swa(0, regs, isr); 551 if (!status) 552 return ; 553 - else if (-EAGAIN == status) { 554 - vcpu_decrement_iip(vcpu); 555 - return ; 556 - } 557 break; 558 } 559
··· 455 if (!vmm_fpswa_interface) 456 return (fpswa_ret_t) {-1, 0, 0, 0}; 457 458 memset(&fp_state, 0, sizeof(fp_state_t)); 459 460 /* 461 + * compute fp_state. only FP registers f6 - f11 are used by the 462 + * vmm, so set those bits in the mask and set the low volatile 463 + * pointer to point to these registers. 464 + */ 465 + fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */ 466 + 467 + fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6; 468 + 469 + /* 470 * unsigned long (*EFI_FPSWA) ( 471 * unsigned long trap_type, 472 * void *Bundle, ··· 545 status = vmm_handle_fpu_swa(0, regs, isr); 546 if (!status) 547 return ; 548 break; 549 } 550
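The new mask is plain bit arithmetic, one bit per floating-point register:

/*
 *   bits 6..11 set:  1111 1100 0000 (binary)  ==  0xfc0
 *
 * so FPSWA saves and restores exactly f6-f11, the only FRs the vmm
 * touches, taking the live values from the exception frame (&regs->f6)
 * instead of a full memory image that was never populated.
 */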
+2 -2
arch/ia64/mm/numa.c
··· 58 * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where 59 * the section resides. 60 */ 61 - int early_pfn_to_nid(unsigned long pfn) 62 { 63 int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec; 64 ··· 70 return node_memblk[i].nid; 71 } 72 73 - return 0; 74 } 75 76 #ifdef CONFIG_MEMORY_HOTPLUG
··· 58 * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where 59 * the section resides. 60 */ 61 + int __meminit __early_pfn_to_nid(unsigned long pfn) 62 { 63 int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec; 64 ··· 70 return node_memblk[i].nid; 71 } 72 73 + return -1; 74 } 75 76 #ifdef CONFIG_MEMORY_HOTPLUG
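The leading underscores and the -1 are the arch side of a generic-code change: early_pfn_to_nid() itself now lives in common code (hence the declaration removals in the ia64 and x86 mmzone headers elsewhere in this merge), and an arch that cannot resolve the pfn reports -1 instead of silently claiming node 0. Roughly, the generic wrapper this assumes looks like:

int __meminit early_pfn_to_nid(unsigned long pfn)
{
        int nid = __early_pfn_to_nid(pfn);

        if (nid < 0)            /* no memblk covers this pfn */
                nid = 0;        /* fall back in one place, not per arch */
        return nid;
}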
+4 -3
arch/ia64/sn/kernel/bte.c
··· 97 return BTE_SUCCESS; 98 } 99 100 - BUG_ON((len & L1_CACHE_MASK) || 101 - (src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK)); 102 - BUG_ON(!(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT))); 103 104 /* 105 * Start with interface corresponding to cpu number
··· 97 return BTE_SUCCESS; 98 } 99 100 + BUG_ON(len & L1_CACHE_MASK); 101 + BUG_ON(src & L1_CACHE_MASK); 102 + BUG_ON(dest & L1_CACHE_MASK); 103 + BUG_ON(len > BTE_MAX_XFER); 104 105 /* 106 * Start with interface corresponding to cpu number
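Splitting the compound BUG_ON() means the file:line in an eventual oops identifies which precondition failed, which the combined test could not do. The length test is also rephrased against the corrected BTE_MAX_XFER from the header hunk above; for line-aligned lengths the old and new bounds accept exactly the same values, so that part is a readability fix, not a behaviour change. In miniature:

/* one invariant per BUG_ON(): the oops line is the diagnosis */
BUG_ON(src & L1_CACHE_MASK);    /* fires at its own file:line ... */
BUG_ON(dest & L1_CACHE_MASK);   /* ... distinguishable from this one */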
+1 -2
arch/ia64/xen/Kconfig
··· 8 depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL 9 select XEN_XENCOMM 10 select NO_IDLE_HZ 11 - 12 - # those are required to save/restore. 13 select ARCH_SUSPEND_POSSIBLE 14 select SUSPEND 15 select PM_SLEEP
··· 8 depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL 9 select XEN_XENCOMM 10 select NO_IDLE_HZ 11 + # followings are required to save/restore. 12 select ARCH_SUSPEND_POSSIBLE 13 select SUSPEND 14 select PM_SLEEP
+2 -2
arch/ia64/xen/xen_pv_ops.c
··· 153 xen_setup_vcpu_info_placement(); 154 } 155 156 - static const struct pv_init_ops xen_init_ops __initdata = { 157 .banner = xen_banner, 158 159 .reserve_memory = xen_reserve_memory, ··· 337 HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &apic_op); 338 } 339 340 - static const struct pv_iosapic_ops xen_iosapic_ops __initdata = { 341 .pcat_compat_init = xen_pcat_compat_init, 342 .__get_irq_chip = xen_iosapic_get_irq_chip, 343
··· 153 xen_setup_vcpu_info_placement(); 154 } 155 156 + static const struct pv_init_ops xen_init_ops __initconst = { 157 .banner = xen_banner, 158 159 .reserve_memory = xen_reserve_memory, ··· 337 HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &apic_op); 338 } 339 340 + static const struct pv_iosapic_ops xen_iosapic_ops __initconst = { 341 .pcat_compat_init = xen_pcat_compat_init, 342 .__get_irq_chip = xen_iosapic_get_irq_chip, 343
+1
arch/mn10300/Kconfig
··· 7 8 config MN10300 9 def_bool y 10 11 config AM33 12 def_bool y
··· 7 8 config MN10300 9 def_bool y 10 + select HAVE_OPROFILE 11 12 config AM33 13 def_bool y
+1 -1
arch/mn10300/unit-asb2305/pci.c
··· 173 BRIDGEREGB(where) = value; 174 } else { 175 if (bus->number == 0 && 176 - (devfn == PCI_DEVFN(2, 0) && devfn == PCI_DEVFN(3, 0)) 177 ) 178 __pcidebug("<= %02x", bus, devfn, where, value); 179 CONFIG_ADDRESS = CONFIG_CMD(bus, devfn, where);
··· 173 BRIDGEREGB(where) = value; 174 } else { 175 if (bus->number == 0 && 176 + (devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0)) 177 ) 178 __pcidebug("<= %02x", bus, devfn, where, value); 179 CONFIG_ADDRESS = CONFIG_CMD(bus, devfn, where);
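The old condition compared devfn for equality against two different values joined by &&, which can never both hold, so the debug trace for writes was dead code; with || it fires for either device. In miniature:

/* devfn cannot equal two distinct values at once */
if (devfn == PCI_DEVFN(2, 0) && devfn == PCI_DEVFN(3, 0))
        ;       /* unreachable: always false */
if (devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0))
        ;       /* true for either slot 2 or slot 3, function 0 */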
+1 -1
arch/powerpc/include/asm/pgtable-4k.h
··· 60 /* It should be preserving the high 48 bits and then specifically */ 61 /* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */ 62 #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \ 63 - _PAGE_HPTEFLAGS) 64 65 /* Bits to mask out from a PMD to get to the PTE page */ 66 #define PMD_MASKED_BITS 0
··· 60 /* It should be preserving the high 48 bits and then specifically */ 61 /* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */ 62 #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \ 63 + _PAGE_HPTEFLAGS | _PAGE_SPECIAL) 64 65 /* Bits to mask out from a PMD to get to the PTE page */ 66 #define PMD_MASKED_BITS 0
+1 -1
arch/powerpc/include/asm/pgtable-64k.h
··· 114 * pgprot changes 115 */ 116 #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \ 117 - _PAGE_ACCESSED) 118 119 /* Bits to mask out from a PMD to get to the PTE page */ 120 #define PMD_MASKED_BITS 0x1ff
··· 114 * pgprot changes 115 */ 116 #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \ 117 + _PAGE_ACCESSED | _PAGE_SPECIAL) 118 119 /* Bits to mask out from a PMD to get to the PTE page */ 120 #define PMD_MASKED_BITS 0x1ff
+2 -1
arch/powerpc/include/asm/pgtable-ppc32.h
··· 429 #define PMD_PAGE_SIZE(pmd) bad_call_to_PMD_PAGE_SIZE() 430 #endif 431 432 - #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY) 433 434 435 #define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
··· 429 #define PMD_PAGE_SIZE(pmd) bad_call_to_PMD_PAGE_SIZE() 430 #endif 431 432 + #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \ 433 + _PAGE_SPECIAL) 434 435 436 #define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
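All three powerpc hunks (4K, 64K and 32-bit page tables) add _PAGE_SPECIAL to _PAGE_CHG_MASK, the set of PTE bits preserved across protection changes; without it, an mprotect() would silently strip the special-mapping bit. The consumer is pte_modify(), roughly:

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
        /* keep only the bits in _PAGE_CHG_MASK; take the rest
         * (the protection bits) from the new pgprot */
        return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}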
+6 -1
arch/powerpc/kernel/align.c
··· 646 unsigned int areg, struct pt_regs *regs, 647 unsigned int flags, unsigned int length) 648 { 649 - char *ptr = (char *) &current->thread.TS_FPR(reg); 650 int ret = 0; 651 652 flush_vsx_to_thread(current); 653 654 if (flags & ST) 655 ret = __copy_to_user(addr, ptr, length);
··· 646 unsigned int areg, struct pt_regs *regs, 647 unsigned int flags, unsigned int length) 648 { 649 + char *ptr; 650 int ret = 0; 651 652 flush_vsx_to_thread(current); 653 + 654 + if (reg < 32) 655 + ptr = (char *) &current->thread.TS_FPR(reg); 656 + else 657 + ptr = (char *) &current->thread.vr[reg - 32]; 658 659 if (flags & ST) 660 ret = __copy_to_user(addr, ptr, length);
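The register-file layout that made the old code an out-of-bounds access (stated here from the Power ISA, not from this hunk):

/*
 * VSX registers 0..31  overlay the scalar FPRs  -> thread.TS_FPR(reg)
 * VSX registers 32..63 overlay the AltiVec VRs  -> thread.vr[reg - 32]
 *
 * The old code indexed the FPR half unconditionally, so an alignment
 * fixup for VSR32..VSR63 read or wrote past the FPR array.
 */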
+4
arch/powerpc/kvm/powerpc.c
··· 125 } 126 } 127 128 void kvm_arch_destroy_vm(struct kvm *kvm) 129 { 130 kvmppc_free_vcpus(kvm);
··· 125 } 126 } 127 128 + void kvm_arch_sync_events(struct kvm *kvm) 129 + { 130 + } 131 + 132 void kvm_arch_destroy_vm(struct kvm *kvm) 133 { 134 kvmppc_free_vcpus(kvm);
+3 -2
arch/powerpc/mm/numa.c
··· 19 #include <linux/notifier.h> 20 #include <linux/lmb.h> 21 #include <linux/of.h> 22 #include <asm/sparsemem.h> 23 #include <asm/prom.h> 24 #include <asm/system.h> ··· 883 unsigned long physbase = lmb.reserved.region[i].base; 884 unsigned long size = lmb.reserved.region[i].size; 885 unsigned long start_pfn = physbase >> PAGE_SHIFT; 886 - unsigned long end_pfn = ((physbase + size) >> PAGE_SHIFT); 887 struct node_active_region node_ar; 888 unsigned long node_end_pfn = node->node_start_pfn + 889 node->node_spanned_pages; ··· 909 */ 910 if (end_pfn > node_ar.end_pfn) 911 reserve_size = (node_ar.end_pfn << PAGE_SHIFT) 912 - - (start_pfn << PAGE_SHIFT); 913 /* 914 * Only worry about *this* node, others may not 915 * yet have valid NODE_DATA().
··· 19 #include <linux/notifier.h> 20 #include <linux/lmb.h> 21 #include <linux/of.h> 22 + #include <linux/pfn.h> 23 #include <asm/sparsemem.h> 24 #include <asm/prom.h> 25 #include <asm/system.h> ··· 882 unsigned long physbase = lmb.reserved.region[i].base; 883 unsigned long size = lmb.reserved.region[i].size; 884 unsigned long start_pfn = physbase >> PAGE_SHIFT; 885 + unsigned long end_pfn = PFN_UP(physbase + size); 886 struct node_active_region node_ar; 887 unsigned long node_end_pfn = node->node_start_pfn + 888 node->node_spanned_pages; ··· 908 */ 909 if (end_pfn > node_ar.end_pfn) 910 reserve_size = (node_ar.end_pfn << PAGE_SHIFT) 911 + - physbase; 912 /* 913 * Only worry about *this* node, others may not 914 * yet have valid NODE_DATA().
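Both changes in this hunk fix the same truncation: shifting right by PAGE_SHIFT rounds down, so a reserved region ending part-way into a page lost its tail, and subtracting a rounded-down start overstated the remaining size. With 4K pages (PAGE_SHIFT == 12) for illustration:

/*
 *   physbase = 0x0000, size = 0x1800          (ends 2 KiB into page 1)
 *   old: end_pfn = 0x1800 >> 12     = 1        tail page not reserved
 *   new: end_pfn = PFN_UP(0x1800)   = 2        tail page stays reserved
 *
 * PFN_UP(x) is ((x) + PAGE_SIZE - 1) >> PAGE_SHIFT, i.e. round up.
 */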
+1 -1
arch/powerpc/platforms/ps3/mm.c
··· 328 return result; 329 } 330 331 - core_initcall(ps3_mm_add_memory); 332 333 /*============================================================================*/ 334 /* dma routines */
··· 328 return result; 329 } 330 331 + device_initcall(ps3_mm_add_memory); 332 333 /*============================================================================*/ 334 /* dma routines */
+1 -1
arch/s390/include/asm/cputime.h
··· 145 value->tv_usec = rp.subreg.even / 4096; 146 value->tv_sec = rp.subreg.odd; 147 #else 148 - value->tv_usec = cputime % 4096000000ULL; 149 value->tv_sec = cputime / 4096000000ULL; 150 #endif 151 }
··· 145 value->tv_usec = rp.subreg.even / 4096; 146 value->tv_sec = rp.subreg.odd; 147 #else 148 + value->tv_usec = (cputime % 4096000000ULL) / 4096; 149 value->tv_sec = cputime / 4096000000ULL; 150 #endif 151 }
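cputime on s390 is kept in 1/4096 microsecond units (the TOD clock format), so the remainder left after splitting off whole seconds is still in those units and must be divided down to microseconds:

/*
 *   cputime = 4096000000ULL + 8192             one second and 2 us
 *   old: tv_usec = 8192                        wrong: 1/4096-us units
 *   new: tv_usec = 8192 / 4096 = 2             correct microseconds
 */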
+2
arch/s390/include/asm/setup.h
··· 43 44 extern struct mem_chunk memory_chunk[]; 45 extern unsigned long real_memory_size; 46 47 void detect_memory_layout(struct mem_chunk chunk[]); 48
··· 43 44 extern struct mem_chunk memory_chunk[]; 45 extern unsigned long real_memory_size; 46 + extern int memory_end_set; 47 + extern unsigned long memory_end; 48 49 void detect_memory_layout(struct mem_chunk chunk[]); 50
+7 -2
arch/s390/kernel/setup.c
··· 82 83 struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS]; 84 volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */ 85 - static unsigned long __initdata memory_end; 86 87 /* 88 * This is set up by the setup-routine at boot-time ··· 283 static int __init early_parse_mem(char *p) 284 { 285 memory_end = memparse(p, &p); 286 return 0; 287 } 288 early_param("mem", early_parse_mem); ··· 511 int i; 512 513 #if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE) 514 - if (ipl_info.type == IPL_TYPE_FCP_DUMP) 515 memory_end = ZFCPDUMP_HSA_SIZE; 516 #endif 517 memory_size = 0; 518 memory_end &= PAGE_MASK;
··· 82 83 struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS]; 84 volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */ 85 + 86 + int __initdata memory_end_set; 87 + unsigned long __initdata memory_end; 88 89 /* 90 * This is set up by the setup-routine at boot-time ··· 281 static int __init early_parse_mem(char *p) 282 { 283 memory_end = memparse(p, &p); 284 + memory_end_set = 1; 285 return 0; 286 } 287 early_param("mem", early_parse_mem); ··· 508 int i; 509 510 #if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE) 511 + if (ipl_info.type == IPL_TYPE_FCP_DUMP) { 512 memory_end = ZFCPDUMP_HSA_SIZE; 513 + memory_end_set = 1; 514 + } 515 #endif 516 memory_size = 0; 517 memory_end &= PAGE_MASK;
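memory_end alone cannot distinguish "no limit given" from a limit of zero; the new memory_end_set flag records that mem= (or the zfcpdump case) explicitly set one, and the pair is exported through setup.h above for other code to consult. A hypothetical consumer, purely for illustration:

if (memory_end_set && memory_end < ZFCPDUMP_HSA_SIZE)
        printk(KERN_WARNING "mem= limit is below the dump HSA size\n");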
+4
arch/s390/kvm/kvm-s390.c
··· 212 } 213 } 214 215 void kvm_arch_destroy_vm(struct kvm *kvm) 216 { 217 kvm_free_vcpus(kvm);
··· 212 } 213 } 214 215 + void kvm_arch_sync_events(struct kvm *kvm) 216 + { 217 + } 218 + 219 void kvm_arch_destroy_vm(struct kvm *kvm) 220 { 221 kvm_free_vcpus(kvm);
+3 -3
arch/um/drivers/vde_user.c
··· 78 { 79 struct vde_open_args *args; 80 81 - vpri->args = kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL); 82 if (vpri->args == NULL) { 83 printk(UM_KERN_ERR "vde_init_libstuff - vde_open_args " 84 "allocation failed"); ··· 91 args->group = init->group; 92 args->mode = init->mode ? init->mode : 0700; 93 94 - args->port ? printk(UM_KERN_INFO "port %d", args->port) : 95 - printk(UM_KERN_INFO "undefined port"); 96 } 97 98 int vde_user_read(void *conn, void *buf, int len)
··· 78 { 79 struct vde_open_args *args; 80 81 + vpri->args = uml_kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL); 82 if (vpri->args == NULL) { 83 printk(UM_KERN_ERR "vde_init_libstuff - vde_open_args " 84 "allocation failed"); ··· 91 args->group = init->group; 92 args->mode = init->mode ? init->mode : 0700; 93 94 + args->port ? printk("port %d", args->port) : 95 + printk("undefined port"); 96 } 97 98 int vde_user_read(void *conn, void *buf, int len)
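This file is built into the userspace side of UML, where the kernel allocator cannot be called directly; uml_kmalloc() is the bridge exported to that code, and the UM_KERN_* level prefixes are likewise dropped from the two printks. The declaration assumed here (from UML's shared headers):

/* assumed from arch/um/include/shared/um_malloc.h */
extern void *uml_kmalloc(int size, int flags);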
+2 -22
arch/x86/Kconfig.debug
··· 174 Add a simple leak tracer to the IOMMU code. This is useful when you 175 are debugging a buggy device driver that leaks IOMMU mappings. 176 177 - config MMIOTRACE 178 - bool "Memory mapped IO tracing" 179 - depends on DEBUG_KERNEL && PCI 180 - select TRACING 181 - help 182 - Mmiotrace traces Memory Mapped I/O access and is meant for 183 - debugging and reverse engineering. It is called from the ioremap 184 - implementation and works via page faults. Tracing is disabled by 185 - default and can be enabled at run-time. 186 - 187 - See Documentation/tracers/mmiotrace.txt. 188 - If you are not helping to develop drivers, say N. 189 - 190 - config MMIOTRACE_TEST 191 - tristate "Test module for mmiotrace" 192 - depends on MMIOTRACE && m 193 - help 194 - This is a dumb module for testing mmiotrace. It is very dangerous 195 - as it will write garbage to IO memory starting at a given address. 196 - However, it should be safe to use on e.g. unused portion of VRAM. 197 - 198 - Say N, unless you absolutely know what you are doing. 199 200 # 201 # IO delay types:
··· 174 Add a simple leak tracer to the IOMMU code. This is useful when you 175 are debugging a buggy device driver that leaks IOMMU mappings. 176 177 + config HAVE_MMIOTRACE_SUPPORT 178 + def_bool y 179 180 # 181 # IO delay types:
+7
arch/x86/include/asm/kvm.h
··· 9 #include <linux/types.h> 10 #include <linux/ioctl.h> 11 12 /* Architectural interrupt line count. */ 13 #define KVM_NR_INTERRUPTS 256 14
··· 9 #include <linux/types.h> 10 #include <linux/ioctl.h> 11 12 + /* Select x86 specific features in <linux/kvm.h> */ 13 + #define __KVM_HAVE_PIT 14 + #define __KVM_HAVE_IOAPIC 15 + #define __KVM_HAVE_DEVICE_ASSIGNMENT 16 + #define __KVM_HAVE_MSI 17 + #define __KVM_HAVE_USER_NMI 18 + 19 /* Architectural interrupt line count. */ 20 #define KVM_NR_INTERRUPTS 256 21
-2
arch/x86/include/asm/mmzone_32.h
··· 32 get_memcfg_numa_flat(); 33 } 34 35 - extern int early_pfn_to_nid(unsigned long pfn); 36 - 37 extern void resume_map_numa_kva(pgd_t *pgd); 38 39 #else /* !CONFIG_NUMA */
··· 32 get_memcfg_numa_flat(); 33 } 34 35 extern void resume_map_numa_kva(pgd_t *pgd); 36 37 #else /* !CONFIG_NUMA */
-2
arch/x86/include/asm/mmzone_64.h
··· 40 #define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \ 41 NODE_DATA(nid)->node_spanned_pages) 42 43 - extern int early_pfn_to_nid(unsigned long pfn); 44 - 45 #ifdef CONFIG_NUMA_EMU 46 #define FAKE_NODE_MIN_SIZE (64 * 1024 * 1024) 47 #define FAKE_NODE_MIN_HASH_MASK (~(FAKE_NODE_MIN_SIZE - 1UL))
··· 40 #define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \ 41 NODE_DATA(nid)->node_spanned_pages) 42 43 #ifdef CONFIG_NUMA_EMU 44 #define FAKE_NODE_MIN_SIZE (64 * 1024 * 1024) 45 #define FAKE_NODE_MIN_HASH_MASK (~(FAKE_NODE_MIN_SIZE - 1UL))
-1
arch/x86/include/asm/page.h
··· 57 typedef struct { pgprotval_t pgprot; } pgprot_t; 58 59 extern int page_is_ram(unsigned long pagenr); 60 - extern int pagerange_is_ram(unsigned long start, unsigned long end); 61 extern int devmem_is_allowed(unsigned long pagenr); 62 extern void map_devmem(unsigned long pfn, unsigned long size, 63 pgprot_t vma_prot);
··· 57 typedef struct { pgprotval_t pgprot; } pgprot_t; 58 59 extern int page_is_ram(unsigned long pagenr); 60 extern int devmem_is_allowed(unsigned long pagenr); 61 extern void map_devmem(unsigned long pfn, unsigned long size, 62 pgprot_t vma_prot);
+2 -15
arch/x86/include/asm/paravirt.h
··· 1352 PVOP_VCALL0(pv_cpu_ops.lazy_mode.leave); 1353 } 1354 1355 - static inline void arch_flush_lazy_cpu_mode(void) 1356 - { 1357 - if (unlikely(paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU)) { 1358 - arch_leave_lazy_cpu_mode(); 1359 - arch_enter_lazy_cpu_mode(); 1360 - } 1361 - } 1362 - 1363 1364 #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE 1365 static inline void arch_enter_lazy_mmu_mode(void) ··· 1365 PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave); 1366 } 1367 1368 - static inline void arch_flush_lazy_mmu_mode(void) 1369 - { 1370 - if (unlikely(paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU)) { 1371 - arch_leave_lazy_mmu_mode(); 1372 - arch_enter_lazy_mmu_mode(); 1373 - } 1374 - } 1375 1376 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx, 1377 unsigned long phys, pgprot_t flags)
··· 1352 PVOP_VCALL0(pv_cpu_ops.lazy_mode.leave); 1353 } 1354 1355 + void arch_flush_lazy_cpu_mode(void); 1356 1357 #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE 1358 static inline void arch_enter_lazy_mmu_mode(void) ··· 1372 PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave); 1373 } 1374 1375 + void arch_flush_lazy_mmu_mode(void); 1376 1377 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx, 1378 unsigned long phys, pgprot_t flags)
+10 -20
arch/x86/kernel/acpi/wakeup_64.S
··· 13 * Hooray, we are in Long 64-bit mode (but still running in low memory) 14 */ 15 ENTRY(wakeup_long64) 16 - wakeup_long64: 17 movq saved_magic, %rax 18 movq $0x123456789abcdef0, %rdx 19 cmpq %rdx, %rax ··· 33 34 movq saved_rip, %rax 35 jmp *%rax 36 37 bogus_64_magic: 38 jmp bogus_64_magic 39 40 - .align 2 41 - .p2align 4,,15 42 - .globl do_suspend_lowlevel 43 - .type do_suspend_lowlevel,@function 44 - do_suspend_lowlevel: 45 - .LFB5: 46 subq $8, %rsp 47 xorl %eax, %eax 48 call save_processor_state ··· 62 pushfq 63 popq pt_regs_flags(%rax) 64 65 - movq $.L97, saved_rip(%rip) 66 67 movq %rsp, saved_rsp 68 movq %rbp, saved_rbp ··· 73 addq $8, %rsp 74 movl $3, %edi 75 xorl %eax, %eax 76 - jmp acpi_enter_sleep_state 77 - .L97: 78 - .p2align 4,,7 79 - .L99: 80 - .align 4 81 - movl $24, %eax 82 - movw %ax, %ds 83 84 /* We don't restore %rax, it must be 0 anyway */ 85 movq $saved_context, %rax 86 movq saved_context_cr4(%rax), %rbx ··· 110 xorl %eax, %eax 111 addq $8, %rsp 112 jmp restore_processor_state 113 - .LFE5: 114 - .Lfe5: 115 - .size do_suspend_lowlevel, .Lfe5-do_suspend_lowlevel 116 - 117 .data 118 - ALIGN 119 ENTRY(saved_rbp) .quad 0 120 ENTRY(saved_rsi) .quad 0 121 ENTRY(saved_rdi) .quad 0
··· 13 * Hooray, we are in Long 64-bit mode (but still running in low memory) 14 */ 15 ENTRY(wakeup_long64) 16 movq saved_magic, %rax 17 movq $0x123456789abcdef0, %rdx 18 cmpq %rdx, %rax ··· 34 35 movq saved_rip, %rax 36 jmp *%rax 37 + ENDPROC(wakeup_long64) 38 39 bogus_64_magic: 40 jmp bogus_64_magic 41 42 + ENTRY(do_suspend_lowlevel) 43 subq $8, %rsp 44 xorl %eax, %eax 45 call save_processor_state ··· 67 pushfq 68 popq pt_regs_flags(%rax) 69 70 + movq $resume_point, saved_rip(%rip) 71 72 movq %rsp, saved_rsp 73 movq %rbp, saved_rbp ··· 78 addq $8, %rsp 79 movl $3, %edi 80 xorl %eax, %eax 81 + call acpi_enter_sleep_state 82 + /* in case something went wrong, restore the machine status and go on */ 83 + jmp resume_point 84 85 + .align 4 86 + resume_point: 87 /* We don't restore %rax, it must be 0 anyway */ 88 movq $saved_context, %rax 89 movq saved_context_cr4(%rax), %rbx ··· 117 xorl %eax, %eax 118 addq $8, %rsp 119 jmp restore_processor_state 120 + ENDPROC(do_suspend_lowlevel) 121 + 122 .data 123 ENTRY(saved_rbp) .quad 0 124 ENTRY(saved_rsi) .quad 0 125 ENTRY(saved_rdi) .quad 0
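Two things change here: the compiler-generated local labels (.LFB5, .L97 and friends) are replaced with ENTRY/ENDPROC so the symbols get proper visibility, type and size, and the tail jump into acpi_enter_sleep_state becomes a real call followed by a jump to resume_point, so a sleep attempt that fails and returns falls through into the same register-restore path as a genuine wakeup. In outline:

/*
 *   call acpi_enter_sleep_state(3)
 *        |- success: never returns, the machine is in S3
 *        '- failure: returns here ...
 *   jmp resume_point    ... and we restore state as if we had woken
 */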
+1 -1
arch/x86/kernel/apic.c
··· 862 } 863 864 /* lets not touch this if we didn't frob it */ 865 - #if defined(CONFIG_X86_MCE_P4THERMAL) || defined(X86_MCE_INTEL) 866 if (maxlvt >= 5) { 867 v = apic_read(APIC_LVTTHMR); 868 apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
··· 862 } 863 864 /* lets not touch this if we didn't frob it */ 865 + #if defined(CONFIG_X86_MCE_P4THERMAL) || defined(CONFIG_X86_MCE_INTEL) 866 if (maxlvt >= 5) { 867 v = apic_read(APIC_LVTTHMR); 868 apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
+7 -5
arch/x86/kernel/cpu/cpufreq/powernow-k8.c
··· 1157 data->cpu = pol->cpu; 1158 data->currpstate = HW_PSTATE_INVALID; 1159 1160 - rc = powernow_k8_cpu_init_acpi(data); 1161 - if (rc) { 1162 /* 1163 * Use the PSB BIOS structure. This is only availabe on 1164 * an UP version, and is deprecated by AMD. ··· 1175 "ACPI maintainers and complain to your BIOS " 1176 "vendor.\n"); 1177 #endif 1178 - goto err_out; 1179 } 1180 if (pol->cpu != 0) { 1181 printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for " 1182 "CPU other than CPU0. Complain to your BIOS " 1183 "vendor.\n"); 1184 - goto err_out; 1185 } 1186 rc = find_psb_table(data); 1187 if (rc) { 1188 - goto err_out; 1189 } 1190 /* Take a crude guess here. 1191 * That guess was in microseconds, so multiply with 1000 */
··· 1157 data->cpu = pol->cpu; 1158 data->currpstate = HW_PSTATE_INVALID; 1159 1160 + if (powernow_k8_cpu_init_acpi(data)) { 1161 /* 1162 * Use the PSB BIOS structure. This is only availabe on 1163 * an UP version, and is deprecated by AMD. ··· 1176 "ACPI maintainers and complain to your BIOS " 1177 "vendor.\n"); 1178 #endif 1179 + kfree(data); 1180 + return -ENODEV; 1181 } 1182 if (pol->cpu != 0) { 1183 printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for " 1184 "CPU other than CPU0. Complain to your BIOS " 1185 "vendor.\n"); 1186 + kfree(data); 1187 + return -ENODEV; 1188 } 1189 rc = find_psb_table(data); 1190 if (rc) { 1191 + kfree(data); 1192 + return -ENODEV; 1193 } 1194 /* Take a crude guess here. 1195 * That guess was in microseconds, so multiply with 1000 */
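The early failure paths now free the just-allocated per-CPU data and return -ENODEV on the spot instead of branching to the shared err_out label; judging by the hunk, that label's unwind did not fit these paths, where nothing beyond the allocated data exists yet (an inference from the diff, not from the surrounding code). The idiom in outline, with an illustrative allocation:

data = kzalloc(sizeof(*data), GFP_KERNEL);      /* illustrative alloc */
if (!data)
        return -ENOMEM;
if (powernow_k8_cpu_init_acpi(data)) {
        kfree(data);            /* only 'data' exists at this point */
        return -ENODEV;
}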
+4 -3
arch/x86/kernel/cpu/mcheck/mce_64.c
··· 295 * If we know that the error was in user space, send a 296 * SIGBUS. Otherwise, panic if tolerance is low. 297 * 298 - * do_exit() takes an awful lot of locks and has a slight 299 * risk of deadlocking. 300 */ 301 if (user_space) { 302 - do_exit(SIGBUS); 303 } else if (panic_on_oops || tolerant < 2) { 304 mce_panic("Uncorrected machine check", 305 &panicm, mcestart); ··· 490 491 } 492 493 - static void __cpuinit mce_cpu_features(struct cpuinfo_x86 *c) 494 { 495 switch (c->x86_vendor) { 496 case X86_VENDOR_INTEL: ··· 734 static int mce_resume(struct sys_device *dev) 735 { 736 mce_init(NULL); 737 return 0; 738 } 739
··· 295 * If we know that the error was in user space, send a 296 * SIGBUS. Otherwise, panic if tolerance is low. 297 * 298 + * force_sig() takes an awful lot of locks and has a slight 299 * risk of deadlocking. 300 */ 301 if (user_space) { 302 + force_sig(SIGBUS, current); 303 } else if (panic_on_oops || tolerant < 2) { 304 mce_panic("Uncorrected machine check", 305 &panicm, mcestart); ··· 490 491 } 492 493 + static void mce_cpu_features(struct cpuinfo_x86 *c) 494 { 495 switch (c->x86_vendor) { 496 case X86_VENDOR_INTEL: ··· 734 static int mce_resume(struct sys_device *dev) 735 { 736 mce_init(NULL); 737 + mce_cpu_features(&current_cpu_data); 738 return 0; 739 } 740
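Two independent fixes share this hunk. Killing the task with do_exit() straight from machine-check context risks deadlock on the many locks it takes; force_sig() merely queues the fatal SIGBUS, and the task dies on its normal return to user space. Separately, since MSR state is lost across suspend, mce_resume() now re-runs the vendor feature setup, which is why mce_cpu_features() loses its __cpuinit annotation (it is no longer boot-only); the AMD and Intel hunks below drop the annotation for the same reason. The behavioural difference in outline:

/*
 *   do_exit(SIGBUS)            tears the task down here, in MCE context
 *   force_sig(SIGBUS, current) marks the signal pending; delivery (and
 *                              death) happen on return to user space
 */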
+1 -1
arch/x86/kernel/cpu/mcheck/mce_amd_64.c
··· 121 } 122 123 /* cpu init entry point, called from mce.c with preempt off */ 124 - void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c) 125 { 126 unsigned int bank, block; 127 unsigned int cpu = smp_processor_id();
··· 121 } 122 123 /* cpu init entry point, called from mce.c with preempt off */ 124 + void mce_amd_feature_init(struct cpuinfo_x86 *c) 125 { 126 unsigned int bank, block; 127 unsigned int cpu = smp_processor_id();
+2 -2
arch/x86/kernel/cpu/mcheck/mce_intel_64.c
··· 30 irq_exit(); 31 } 32 33 - static void __cpuinit intel_init_thermal(struct cpuinfo_x86 *c) 34 { 35 u32 l, h; 36 int tm2 = 0; ··· 84 return; 85 } 86 87 - void __cpuinit mce_intel_feature_init(struct cpuinfo_x86 *c) 88 { 89 intel_init_thermal(c); 90 }
··· 30 irq_exit(); 31 } 32 33 + static void intel_init_thermal(struct cpuinfo_x86 *c) 34 { 35 u32 l, h; 36 int tm2 = 0; ··· 84 return; 85 } 86 87 + void mce_intel_feature_init(struct cpuinfo_x86 *c) 88 { 89 intel_init_thermal(c); 90 }
+2
arch/x86/kernel/hpet.c
··· 269 now = hpet_readl(HPET_COUNTER); 270 cmp = now + (unsigned long) delta; 271 cfg = hpet_readl(HPET_Tn_CFG(timer)); 272 cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC | 273 HPET_TN_SETVAL | HPET_TN_32BIT; 274 hpet_writel(cfg, HPET_Tn_CFG(timer));
··· 269 now = hpet_readl(HPET_COUNTER); 270 cmp = now + (unsigned long) delta; 271 cfg = hpet_readl(HPET_Tn_CFG(timer)); 272 + /* Make sure we use edge triggered interrupts */ 273 + cfg &= ~HPET_TN_LEVEL; 274 cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC | 275 HPET_TN_SETVAL | HPET_TN_32BIT; 276 hpet_writel(cfg, HPET_Tn_CFG(timer));
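The hpet.c change is read-modify-write hygiene: firmware may hand the comparator over configured level-triggered, and OR-ing in new bits preserves that stale state, so the level bit must be cleared explicitly before setting what is wanted. A hedged sketch of the difference, with invented bit positions rather than the real HPET register layout:

#include <stdio.h>

/* Illustrative bit assignments, not the real HPET layout. */
#define TN_LEVEL    (1u << 1)   /* level-triggered when set */
#define TN_ENABLE   (1u << 2)
#define TN_PERIODIC (1u << 3)

int main(void)
{
    /* Suppose firmware handed the timer over level-triggered. */
    unsigned int cfg = TN_LEVEL;

    /* OR-ing alone carries the stale LEVEL bit forward ... */
    unsigned int buggy = cfg | TN_ENABLE | TN_PERIODIC;

    /* ... clearing it first is what yields edge triggering. */
    unsigned int fixed = (cfg & ~TN_LEVEL) | TN_ENABLE | TN_PERIODIC;

    printf("buggy cfg %#x: LEVEL %s\n", buggy,
           (buggy & TN_LEVEL) ? "set" : "clear");
    printf("fixed cfg %#x: LEVEL %s\n", fixed,
           (fixed & TN_LEVEL) ? "set" : "clear");
    return 0;
}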
+1 -1
arch/x86/kernel/olpc.c
··· 203 static void __init platform_detect(void) 204 { 205 /* stopgap until OFW support is added to the kernel */ 206 - olpc_platform_info.boardrev = 0xc2; 207 } 208 #endif 209
··· 203 static void __init platform_detect(void) 204 { 205 /* stopgap until OFW support is added to the kernel */ 206 + olpc_platform_info.boardrev = olpc_board(0xc2); 207 } 208 #endif 209
+26
arch/x86/kernel/paravirt.c
··· 268 return __get_cpu_var(paravirt_lazy_mode); 269 } 270 271 struct pv_info pv_info = { 272 .name = "bare hardware", 273 .paravirt_enabled = 0,
··· 268 return __get_cpu_var(paravirt_lazy_mode); 269 } 270 271 + void arch_flush_lazy_mmu_mode(void) 272 + { 273 + preempt_disable(); 274 + 275 + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) { 276 + WARN_ON(preempt_count() == 1); 277 + arch_leave_lazy_mmu_mode(); 278 + arch_enter_lazy_mmu_mode(); 279 + } 280 + 281 + preempt_enable(); 282 + } 283 + 284 + void arch_flush_lazy_cpu_mode(void) 285 + { 286 + preempt_disable(); 287 + 288 + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) { 289 + WARN_ON(preempt_count() == 1); 290 + arch_leave_lazy_cpu_mode(); 291 + arch_enter_lazy_cpu_mode(); 292 + } 293 + 294 + preempt_enable(); 295 + } 296 + 297 struct pv_info pv_info = { 298 .name = "bare hardware", 299 .paravirt_enabled = 0,
-3
arch/x86/kernel/process_32.c
··· 104 check_pgt_cache(); 105 rmb(); 106 107 - if (rcu_pending(cpu)) 108 - rcu_check_callbacks(cpu, 0); 109 - 110 if (cpu_is_offline(cpu)) 111 play_dead(); 112
··· 104 check_pgt_cache(); 105 rmb(); 106 107 if (cpu_is_offline(cpu)) 108 play_dead(); 109
+10 -6
arch/x86/kernel/ptrace.c
··· 810 811 static void ptrace_bts_detach(struct task_struct *child) 812 { 813 - if (unlikely(child->bts)) { 814 - ds_release_bts(child->bts); 815 - child->bts = NULL; 816 - 817 - ptrace_bts_free_buffer(child); 818 - } 819 } 820 #else 821 static inline void ptrace_bts_fork(struct task_struct *tsk) {}
··· 810 811 static void ptrace_bts_detach(struct task_struct *child) 812 { 813 + /* 814 + * Ptrace_detach() races with ptrace_untrace() in case 815 + * the child dies and is reaped by another thread. 816 + * 817 + * We only do the memory accounting at this point and 818 + * leave the buffer deallocation and the bts tracer 819 + * release to ptrace_bts_untrace() which will be called 820 + * later on with tasklist_lock held. 821 + */ 822 + release_locked_buffer(child->bts_buffer, child->bts_size); 823 } 824 #else 825 static inline void ptrace_bts_fork(struct task_struct *tsk) {}
+9 -1
arch/x86/kernel/traps.c
··· 99 local_irq_enable(); 100 } 101 102 static inline void preempt_conditional_cli(struct pt_regs *regs) 103 { 104 if (regs->flags & X86_EFLAGS_IF) ··· 632 633 #ifdef CONFIG_X86_32 634 debug_vm86: 635 handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1); 636 - preempt_conditional_cli(regs); 637 return; 638 #endif 639
··· 99 local_irq_enable(); 100 } 101 102 + static inline void conditional_cli(struct pt_regs *regs) 103 + { 104 + if (regs->flags & X86_EFLAGS_IF) 105 + local_irq_disable(); 106 + } 107 + 108 static inline void preempt_conditional_cli(struct pt_regs *regs) 109 { 110 if (regs->flags & X86_EFLAGS_IF) ··· 626 627 #ifdef CONFIG_X86_32 628 debug_vm86: 629 + /* reenable preemption: handle_vm86_trap() might sleep */ 630 + dec_preempt_count(); 631 handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1); 632 + conditional_cli(regs); 633 return; 634 #endif 635
+4 -1
arch/x86/kernel/vmiclock_32.c
··· 283 #endif 284 285 /** vmi clocksource */ 286 287 static cycle_t read_real_cycles(void) 288 { 289 - return vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL); 290 } 291 292 static struct clocksource clocksource_vmi = {
··· 283 #endif 284 285 /** vmi clocksource */ 286 + static struct clocksource clocksource_vmi; 287 288 static cycle_t read_real_cycles(void) 289 { 290 + cycle_t ret = (cycle_t)vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL); 291 + return ret >= clocksource_vmi.cycle_last ? 292 + ret : clocksource_vmi.cycle_last; 293 } 294 295 static struct clocksource clocksource_vmi = {
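The vmiclock change pins read_real_cycles() to cycle_last so the clocksource can never be seen running backwards when the hypervisor's counter briefly lags. The general shape, reduced to a standalone sketch that tracks the floor locally (in the kernel the floor comes from the timekeeping core's cycle_last, not from the read function itself):

#include <stdio.h>
#include <stdint.h>

static uint64_t last_cycles;    /* highest value handed out so far */

/* Simulated raw counter that occasionally steps backwards. */
static uint64_t read_raw(void)
{
    static const uint64_t samples[] = { 100, 205, 198, 310, 305, 420 };
    static unsigned int i;

    return samples[i++ % 6];
}

/* Clamped read: never less than anything previously returned. */
static uint64_t read_cycles(void)
{
    uint64_t now = read_raw();

    if (now < last_cycles)
        now = last_cycles;
    last_cycles = now;
    return now;
}

int main(void)
{
    int i;

    for (i = 0; i < 6; i++)
        printf("%llu\n", (unsigned long long)read_cycles());
    return 0;
}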
+1 -1
arch/x86/kvm/i8254.c
··· 207 hrtimer_add_expires_ns(&pt->timer, pt->period); 208 pt->scheduled = hrtimer_get_expires_ns(&pt->timer); 209 if (pt->period) 210 - ps->channels[0].count_load_time = hrtimer_get_expires(&pt->timer); 211 212 return (pt->period == 0 ? 0 : 1); 213 }
··· 207 hrtimer_add_expires_ns(&pt->timer, pt->period); 208 pt->scheduled = hrtimer_get_expires_ns(&pt->timer); 209 if (pt->period) 210 + ps->channels[0].count_load_time = ktime_get(); 211 212 return (pt->period == 0 ? 0 : 1); 213 }
-7
arch/x86/kvm/irq.c
··· 87 } 88 EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs); 89 90 - void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec) 91 - { 92 - kvm_apic_timer_intr_post(vcpu, vec); 93 - /* TODO: PIT, RTC etc. */ 94 - } 95 - EXPORT_SYMBOL_GPL(kvm_timer_intr_post); 96 - 97 void __kvm_migrate_timers(struct kvm_vcpu *vcpu) 98 { 99 __kvm_migrate_apic_timer(vcpu);
··· 87 } 88 EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs); 89 90 void __kvm_migrate_timers(struct kvm_vcpu *vcpu) 91 { 92 __kvm_migrate_apic_timer(vcpu);
-1
arch/x86/kvm/irq.h
··· 89 90 void kvm_pic_reset(struct kvm_kpic_state *s); 91 92 - void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec); 93 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu); 94 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu); 95 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
··· 89 90 void kvm_pic_reset(struct kvm_kpic_state *s); 91 92 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu); 93 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu); 94 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
+14 -50
arch/x86/kvm/lapic.c
··· 35 #include "kvm_cache_regs.h" 36 #include "irq.h" 37 38 #define PRId64 "d" 39 #define PRIx64 "llx" 40 #define PRIu64 "u" ··· 517 518 static u32 apic_get_tmcct(struct kvm_lapic *apic) 519 { 520 - u64 counter_passed; 521 - ktime_t passed, now; 522 u32 tmcct; 523 524 ASSERT(apic != NULL); 525 526 - now = apic->timer.dev.base->get_time(); 527 - tmcct = apic_get_reg(apic, APIC_TMICT); 528 - 529 /* if initial count is 0, current count should also be 0 */ 530 - if (tmcct == 0) 531 return 0; 532 533 - if (unlikely(ktime_to_ns(now) <= 534 - ktime_to_ns(apic->timer.last_update))) { 535 - /* Wrap around */ 536 - passed = ktime_add(( { 537 - (ktime_t) { 538 - .tv64 = KTIME_MAX - 539 - (apic->timer.last_update).tv64}; } 540 - ), now); 541 - apic_debug("time elapsed\n"); 542 - } else 543 - passed = ktime_sub(now, apic->timer.last_update); 544 545 - counter_passed = div64_u64(ktime_to_ns(passed), 546 - (APIC_BUS_CYCLE_NS * apic->timer.divide_count)); 547 - 548 - if (counter_passed > tmcct) { 549 - if (unlikely(!apic_lvtt_period(apic))) { 550 - /* one-shot timers stick at 0 until reset */ 551 - tmcct = 0; 552 - } else { 553 - /* 554 - * periodic timers reset to APIC_TMICT when they 555 - * hit 0. The while loop simulates this happening N 556 - * times. (counter_passed %= tmcct) would also work, 557 - * but might be slower or not work on 32-bit?? 558 - */ 559 - while (counter_passed > tmcct) 560 - counter_passed -= tmcct; 561 - tmcct -= counter_passed; 562 - } 563 - } else { 564 - tmcct -= counter_passed; 565 - } 566 567 return tmcct; 568 } ··· 628 static void start_apic_timer(struct kvm_lapic *apic) 629 { 630 ktime_t now = apic->timer.dev.base->get_time(); 631 - 632 - apic->timer.last_update = now; 633 634 apic->timer.period = apic_get_reg(apic, APIC_TMICT) * 635 APIC_BUS_CYCLE_NS * apic->timer.divide_count; ··· 1082 if (kvm_apic_local_deliver(apic, APIC_LVTT)) 1083 atomic_dec(&apic->timer.pending); 1084 } 1085 - } 1086 - 1087 - void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec) 1088 - { 1089 - struct kvm_lapic *apic = vcpu->arch.apic; 1090 - 1091 - if (apic && apic_lvt_vector(apic, APIC_LVTT) == vec) 1092 - apic->timer.last_update = ktime_add_ns( 1093 - apic->timer.last_update, 1094 - apic->timer.period); 1095 } 1096 1097 int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
··· 35 #include "kvm_cache_regs.h" 36 #include "irq.h" 37 38 + #ifndef CONFIG_X86_64 39 + #define mod_64(x, y) ((x) - (y) * div64_u64(x, y)) 40 + #else 41 + #define mod_64(x, y) ((x) % (y)) 42 + #endif 43 + 44 #define PRId64 "d" 45 #define PRIx64 "llx" 46 #define PRIu64 "u" ··· 511 512 static u32 apic_get_tmcct(struct kvm_lapic *apic) 513 { 514 + ktime_t remaining; 515 + s64 ns; 516 u32 tmcct; 517 518 ASSERT(apic != NULL); 519 520 /* if initial count is 0, current count should also be 0 */ 521 + if (apic_get_reg(apic, APIC_TMICT) == 0) 522 return 0; 523 524 + remaining = hrtimer_expires_remaining(&apic->timer.dev); 525 + if (ktime_to_ns(remaining) < 0) 526 + remaining = ktime_set(0, 0); 527 528 + ns = mod_64(ktime_to_ns(remaining), apic->timer.period); 529 + tmcct = div64_u64(ns, (APIC_BUS_CYCLE_NS * apic->timer.divide_count)); 530 531 return tmcct; 532 } ··· 652 static void start_apic_timer(struct kvm_lapic *apic) 653 { 654 ktime_t now = apic->timer.dev.base->get_time(); 655 656 apic->timer.period = apic_get_reg(apic, APIC_TMICT) * 657 APIC_BUS_CYCLE_NS * apic->timer.divide_count; ··· 1108 if (kvm_apic_local_deliver(apic, APIC_LVTT)) 1109 atomic_dec(&apic->timer.pending); 1110 } 1111 } 1112 1113 int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
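The lapic.c rewrite derives the current-count register purely from the hrtimer's remaining time, and introduces mod_64() because on 32-bit kernels a direct 64-bit % would expand to a libgcc helper the kernel does not link; the remainder is therefore composed from div64_u64() as x - y * (x / y). A userspace rendition of the arithmetic (all numbers invented):

#include <stdio.h>
#include <stdint.h>

/* Remainder composed from division, mirroring the 32-bit fallback
 * used where a native 64-bit modulo is unavailable. */
static uint64_t mod_64(uint64_t x, uint64_t y)
{
    return x - y * (x / y);
}

int main(void)
{
    /* Invented numbers: a 1,000,000 ns timer period, with
     * 2,345,678 ns still pending across periodic reloads. */
    uint64_t period_ns = 1000000;
    uint64_t remaining_ns = 2345678;

    /* Fold the remaining time back into the current period ... */
    uint64_t ns = mod_64(remaining_ns, period_ns);

    /* ... then scale nanoseconds to timer ticks (rate invented). */
    uint64_t ns_per_tick = 100;

    printf("current count: %llu ticks\n",
           (unsigned long long)(ns / ns_per_tick));
    return 0;
}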
-2
arch/x86/kvm/lapic.h
··· 12 atomic_t pending; 13 s64 period; /* unit: ns */ 14 u32 divide_count; 15 - ktime_t last_update; 16 struct hrtimer dev; 17 } timer; 18 struct kvm_vcpu *vcpu; ··· 41 void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu); 42 int kvm_lapic_enabled(struct kvm_vcpu *vcpu); 43 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu); 44 - void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec); 45 46 void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr); 47 void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
··· 12 atomic_t pending; 13 s64 period; /* unit: ns */ 14 u32 divide_count; 15 struct hrtimer dev; 16 } timer; 17 struct kvm_vcpu *vcpu; ··· 42 void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu); 43 int kvm_lapic_enabled(struct kvm_vcpu *vcpu); 44 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu); 45 46 void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr); 47 void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
+7 -2
arch/x86/kvm/mmu.c
··· 1698 if (largepage) 1699 spte |= PT_PAGE_SIZE_MASK; 1700 if (mt_mask) { 1701 - mt_mask = get_memory_type(vcpu, gfn) << 1702 - kvm_x86_ops->get_mt_mask_shift(); 1703 spte |= mt_mask; 1704 } 1705
··· 1698 if (largepage) 1699 spte |= PT_PAGE_SIZE_MASK; 1700 if (mt_mask) { 1701 + if (!kvm_is_mmio_pfn(pfn)) { 1702 + mt_mask = get_memory_type(vcpu, gfn) << 1703 + kvm_x86_ops->get_mt_mask_shift(); 1704 + mt_mask |= VMX_EPT_IGMT_BIT; 1705 + } else 1706 + mt_mask = MTRR_TYPE_UNCACHABLE << 1707 + kvm_x86_ops->get_mt_mask_shift(); 1708 spte |= mt_mask; 1709 } 1710
-1
arch/x86/kvm/svm.c
··· 1600 /* Okay, we can deliver the interrupt: grab it and update PIC state. */ 1601 intr_vector = kvm_cpu_get_interrupt(vcpu); 1602 svm_inject_irq(svm, intr_vector); 1603 - kvm_timer_intr_post(vcpu, intr_vector); 1604 out: 1605 update_cr8_intercept(vcpu); 1606 }
··· 1600 /* Okay, we can deliver the interrupt: grab it and update PIC state. */ 1601 intr_vector = kvm_cpu_get_interrupt(vcpu); 1602 svm_inject_irq(svm, intr_vector); 1603 out: 1604 update_cr8_intercept(vcpu); 1605 }
+2 -3
arch/x86/kvm/vmx.c
··· 903 data = vmcs_readl(GUEST_SYSENTER_ESP); 904 break; 905 default: 906 msr = find_msr_entry(to_vmx(vcpu), msr_index); 907 if (msr) { 908 data = msr->data; ··· 3286 } 3287 if (vcpu->arch.interrupt.pending) { 3288 vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr); 3289 - kvm_timer_intr_post(vcpu, vcpu->arch.interrupt.nr); 3290 if (kvm_cpu_has_interrupt(vcpu)) 3291 enable_irq_window(vcpu); 3292 } ··· 3687 if (vm_need_ept()) { 3688 bypass_guest_pf = 0; 3689 kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK | 3690 - VMX_EPT_WRITABLE_MASK | 3691 - VMX_EPT_IGMT_BIT); 3692 kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull, 3693 VMX_EPT_EXECUTABLE_MASK, 3694 VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);
··· 903 data = vmcs_readl(GUEST_SYSENTER_ESP); 904 break; 905 default: 906 + vmx_load_host_state(to_vmx(vcpu)); 907 msr = find_msr_entry(to_vmx(vcpu), msr_index); 908 if (msr) { 909 data = msr->data; ··· 3285 } 3286 if (vcpu->arch.interrupt.pending) { 3287 vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr); 3288 if (kvm_cpu_has_interrupt(vcpu)) 3289 enable_irq_window(vcpu); 3290 } ··· 3687 if (vm_need_ept()) { 3688 bypass_guest_pf = 0; 3689 kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK | 3690 + VMX_EPT_WRITABLE_MASK); 3691 kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull, 3692 VMX_EPT_EXECUTABLE_MASK, 3693 VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);
+8 -2
arch/x86/kvm/x86.c
··· 967 case KVM_CAP_MMU_SHADOW_CACHE_CONTROL: 968 case KVM_CAP_SET_TSS_ADDR: 969 case KVM_CAP_EXT_CPUID: 970 - case KVM_CAP_CLOCKSOURCE: 971 case KVM_CAP_PIT: 972 case KVM_CAP_NOP_IO_DELAY: 973 case KVM_CAP_MP_STATE: ··· 990 break; 991 case KVM_CAP_IOMMU: 992 r = iommu_found(); 993 break; 994 default: 995 r = 0; ··· 4129 4130 } 4131 4132 - void kvm_arch_destroy_vm(struct kvm *kvm) 4133 { 4134 kvm_free_all_assigned_devices(kvm); 4135 kvm_iommu_unmap_guest(kvm); 4136 kvm_free_pit(kvm); 4137 kfree(kvm->arch.vpic);
··· 967 case KVM_CAP_MMU_SHADOW_CACHE_CONTROL: 968 case KVM_CAP_SET_TSS_ADDR: 969 case KVM_CAP_EXT_CPUID: 970 case KVM_CAP_PIT: 971 case KVM_CAP_NOP_IO_DELAY: 972 case KVM_CAP_MP_STATE: ··· 991 break; 992 case KVM_CAP_IOMMU: 993 r = iommu_found(); 994 + break; 995 + case KVM_CAP_CLOCKSOURCE: 996 + r = boot_cpu_has(X86_FEATURE_CONSTANT_TSC); 997 break; 998 default: 999 r = 0; ··· 4127 4128 } 4129 4130 + void kvm_arch_sync_events(struct kvm *kvm) 4131 { 4132 kvm_free_all_assigned_devices(kvm); 4133 + } 4134 + 4135 + void kvm_arch_destroy_vm(struct kvm *kvm) 4136 + { 4137 kvm_iommu_unmap_guest(kvm); 4138 kvm_free_pit(kvm); 4139 kfree(kvm->arch.vpic);
-19
arch/x86/mm/ioremap.c
··· 134 return 0; 135 } 136 137 - int pagerange_is_ram(unsigned long start, unsigned long end) 138 - { 139 - int ram_page = 0, not_rampage = 0; 140 - unsigned long page_nr; 141 - 142 - for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT); 143 - ++page_nr) { 144 - if (page_is_ram(page_nr)) 145 - ram_page = 1; 146 - else 147 - not_rampage = 1; 148 - 149 - if (ram_page == not_rampage) 150 - return -1; 151 - } 152 - 153 - return ram_page; 154 - } 155 - 156 /* 157 * Fix up the linear direct mapping of the kernel to avoid cache attribute 158 * conflicts.
··· 134 return 0; 135 } 136 137 /* 138 * Fix up the linear direct mapping of the kernel to avoid cache attribute 139 * conflicts.
+1 -1
arch/x86/mm/numa_64.c
··· 145 return shift; 146 } 147 148 - int early_pfn_to_nid(unsigned long pfn) 149 { 150 return phys_to_nid(pfn << PAGE_SHIFT); 151 }
··· 145 return shift; 146 } 147 148 + int __meminit __early_pfn_to_nid(unsigned long pfn) 149 { 150 return phys_to_nid(pfn << PAGE_SHIFT); 151 }
+19 -11
arch/x86/mm/pageattr.c
··· 508 #endif 509 510 /* 511 - * Install the new, split up pagetable. Important details here: 512 * 513 - * On Intel the NX bit of all levels must be cleared to make a 514 - * page executable. See section 4.13.2 of Intel 64 and IA-32 515 - * Architectures Software Developer's Manual). 516 - * 517 - * Mark the entry present. The current mapping might be 518 - * set to not present, which we preserved above. 519 */ 520 - ref_prot = pte_pgprot(pte_mkexec(pte_clrhuge(*kpte))); 521 - pgprot_val(ref_prot) |= _PAGE_PRESENT; 522 - __set_pmd_pte(kpte, address, mk_pte(base, ref_prot)); 523 base = NULL; 524 525 out_unlock: ··· 570 address = cpa->vaddr[cpa->curpage]; 571 else 572 address = *cpa->vaddr; 573 - 574 repeat: 575 kpte = lookup_address(address, &level); 576 if (!kpte) ··· 806 807 vm_unmap_aliases(); 808 809 cpa.vaddr = addr; 810 cpa.numpages = numpages; 811 cpa.mask_set = mask_set; ··· 854 cpa_flush_range(*addr, numpages, cache); 855 } else 856 cpa_flush_all(cache); 857 858 out: 859 return ret;
··· 508 #endif 509 510 /* 511 + * Install the new, split up pagetable. 512 * 513 + * We use the standard kernel pagetable protections for the new 514 + * pagetable protections, the actual ptes set above control the 515 + * primary protection behavior: 516 */ 517 + __set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE))); 518 base = NULL; 519 520 out_unlock: ··· 575 address = cpa->vaddr[cpa->curpage]; 576 else 577 address = *cpa->vaddr; 578 repeat: 579 kpte = lookup_address(address, &level); 580 if (!kpte) ··· 812 813 vm_unmap_aliases(); 814 815 + /* 816 + * If we're called with lazy mmu updates enabled, the 817 + * in-memory pte state may be stale. Flush pending updates to 818 + * bring them up to date. 819 + */ 820 + arch_flush_lazy_mmu_mode(); 821 + 822 cpa.vaddr = addr; 823 cpa.numpages = numpages; 824 cpa.mask_set = mask_set; ··· 853 cpa_flush_range(*addr, numpages, cache); 854 } else 855 cpa_flush_all(cache); 856 + 857 + /* 858 + * If we've been called with lazy mmu updates enabled, then 859 + * make sure that everything gets flushed out before we 860 + * return. 861 + */ 862 + arch_flush_lazy_mmu_mode(); 863 864 out: 865 return ret;
+45 -38
arch/x86/mm/pat.c
··· 211 static struct memtype *cached_entry; 212 static u64 cached_start; 213 214 /* 215 * For RAM pages, mark the pages as non WB memory type using 216 * PageNonWB (PG_arch_1). We allow only one set_memory_uc() or ··· 363 if (new_type) 364 *new_type = actual_type; 365 366 - /* 367 - * For legacy reasons, some parts of the physical address range in the 368 - * legacy 1MB region is treated as non-RAM (even when listed as RAM in 369 - * the e820 tables). So we will track the memory attributes of this 370 - * legacy 1MB region using the linear memtype_list always. 371 - */ 372 - if (end >= ISA_END_ADDRESS) { 373 - is_range_ram = pagerange_is_ram(start, end); 374 - if (is_range_ram == 1) 375 - return reserve_ram_pages_type(start, end, req_type, 376 - new_type); 377 - else if (is_range_ram < 0) 378 - return -EINVAL; 379 - } 380 381 new = kmalloc(sizeof(struct memtype), GFP_KERNEL); 382 if (!new) ··· 465 if (is_ISA_range(start, end - 1)) 466 return 0; 467 468 - /* 469 - * For legacy reasons, some parts of the physical address range in the 470 - * legacy 1MB region is treated as non-RAM (even when listed as RAM in 471 - * the e820 tables). So we will track the memory attributes of this 472 - * legacy 1MB region using the linear memtype_list always. 473 - */ 474 - if (end >= ISA_END_ADDRESS) { 475 - is_range_ram = pagerange_is_ram(start, end); 476 - if (is_range_ram == 1) 477 - return free_ram_pages_type(start, end); 478 - else if (is_range_ram < 0) 479 - return -EINVAL; 480 - } 481 482 spin_lock(&memtype_lock); 483 list_for_each_entry(entry, &memtype_list, nd) { ··· 637 unsigned long flags; 638 unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK); 639 640 - is_ram = pagerange_is_ram(paddr, paddr + size); 641 642 - if (is_ram != 0) { 643 - /* 644 - * For mapping RAM pages, drivers need to call 645 - * set_memory_[uc|wc|wb] directly, for reserve and free, before 646 - * setting up the PTE. 647 - */ 648 - WARN_ON_ONCE(1); 649 - return 0; 650 - } 651 652 ret = reserve_memtype(paddr, paddr + size, want_flags, &flags); 653 if (ret) ··· 700 { 701 int is_ram; 702 703 - is_ram = pagerange_is_ram(paddr, paddr + size); 704 if (is_ram == 0) 705 free_memtype(paddr, paddr + size); 706 }
··· 211 static struct memtype *cached_entry; 212 static u64 cached_start; 213 214 + static int pat_pagerange_is_ram(unsigned long start, unsigned long end) 215 + { 216 + int ram_page = 0, not_rampage = 0; 217 + unsigned long page_nr; 218 + 219 + for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT); 220 + ++page_nr) { 221 + /* 222 + * For legacy reasons, the physical address range in the legacy ISA 223 + * region is tracked as non-RAM. This will allow users of 224 + * /dev/mem to map portions of the legacy ISA region, even when 225 + * some of those portions are listed (or not even listed) with 226 + * different e820 types (RAM/reserved/..) 227 + */ 228 + if (page_nr >= (ISA_END_ADDRESS >> PAGE_SHIFT) && 229 + page_is_ram(page_nr)) 230 + ram_page = 1; 231 + else 232 + not_rampage = 1; 233 + 234 + if (ram_page == not_rampage) 235 + return -1; 236 + } 237 + 238 + return ram_page; 239 + } 240 + 241 /* 242 * For RAM pages, mark the pages as non WB memory type using 243 * PageNonWB (PG_arch_1). We allow only one set_memory_uc() or ··· 336 if (new_type) 337 *new_type = actual_type; 338 339 + is_range_ram = pat_pagerange_is_ram(start, end); 340 + if (is_range_ram == 1) 341 + return reserve_ram_pages_type(start, end, req_type, 342 + new_type); 343 + else if (is_range_ram < 0) 344 + return -EINVAL; 345 346 new = kmalloc(sizeof(struct memtype), GFP_KERNEL); 347 if (!new) ··· 446 if (is_ISA_range(start, end - 1)) 447 return 0; 448 449 + is_range_ram = pat_pagerange_is_ram(start, end); 450 + if (is_range_ram == 1) 451 + return free_ram_pages_type(start, end); 452 + else if (is_range_ram < 0) 453 + return -EINVAL; 454 455 spin_lock(&memtype_lock); 456 list_for_each_entry(entry, &memtype_list, nd) { ··· 626 unsigned long flags; 627 unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK); 628 629 + is_ram = pat_pagerange_is_ram(paddr, paddr + size); 630 631 + /* 632 + * reserve_pfn_range() doesn't support RAM pages. 633 + */ 634 + if (is_ram != 0) 635 + return -EINVAL; 636 637 ret = reserve_memtype(paddr, paddr + size, want_flags, &flags); 638 if (ret) ··· 693 { 694 int is_ram; 695 696 + is_ram = pat_pagerange_is_ram(paddr, paddr + size); 697 if (is_ram == 0) 698 free_memtype(paddr, paddr + size); 699 }
+8 -1
block/blk-timeout.c
··· 209 { 210 unsigned long flags; 211 struct request *rq, *tmp; 212 213 spin_lock_irqsave(q->queue_lock, flags); 214 215 elv_abort_queue(q); 216 217 - list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list) 218 blk_abort_request(rq); 219 220 spin_unlock_irqrestore(q->queue_lock, flags);
··· 209 { 210 unsigned long flags; 211 struct request *rq, *tmp; 212 + LIST_HEAD(list); 213 214 spin_lock_irqsave(q->queue_lock, flags); 215 216 elv_abort_queue(q); 217 218 + /* 219 + * Splice entries to a local list, to avoid deadlocking if entries 220 + * get re-added to the timeout list by error handling 221 + */ 222 + list_splice_init(&q->timeout_list, &list); 223 224 + list_for_each_entry_safe(rq, tmp, &list, timeout_list) 225 blk_abort_request(rq); 226 227 spin_unlock_irqrestore(q->queue_lock, flags);
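The blk-timeout.c fix is a reusable locking idiom: detach the whole shared list onto a local head while holding the lock, then walk the private copy, so that a callback which re-adds entries can take the lock itself without deadlocking or corrupting the iteration. A self-contained sketch of the pattern with a toy singly-linked list (all names invented):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
    int id;
    struct node *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *pending;    /* shared list, protected by lock */

static void add_pending(int id)
{
    struct node *n = malloc(sizeof(*n));

    n->id = id;
    pthread_mutex_lock(&lock);
    n->next = pending;
    pending = n;
    pthread_mutex_unlock(&lock);
}

/* The handler may queue more work; it takes the lock itself, which
 * would deadlock if the caller iterated while still holding it. */
static void handle(struct node *n)
{
    if (n->id == 2)
        add_pending(20);
    printf("handled %d\n", n->id);
}

static void process_pending(void)
{
    struct node *local;

    /* Detach the whole list under the lock ... */
    pthread_mutex_lock(&lock);
    local = pending;
    pending = NULL;
    pthread_mutex_unlock(&lock);

    /* ... then walk the private copy with no lock held. */
    while (local) {
        struct node *next = local->next;

        handle(local);
        free(local);
        local = next;
    }
}

int main(void)
{
    add_pending(1);
    add_pending(2);
    process_pending();
    process_pending();    /* picks up the entry re-added by handle() */
    return 0;
}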
+1 -1
block/blktrace.c
··· 142 143 what |= ddir_act[rw & WRITE]; 144 what |= MASK_TC_BIT(rw, BARRIER); 145 - what |= MASK_TC_BIT(rw, SYNC); 146 what |= MASK_TC_BIT(rw, AHEAD); 147 what |= MASK_TC_BIT(rw, META); 148 what |= MASK_TC_BIT(rw, DISCARD);
··· 142 143 what |= ddir_act[rw & WRITE]; 144 what |= MASK_TC_BIT(rw, BARRIER); 145 + what |= MASK_TC_BIT(rw, SYNCIO); 146 what |= MASK_TC_BIT(rw, AHEAD); 147 what |= MASK_TC_BIT(rw, META); 148 what |= MASK_TC_BIT(rw, DISCARD);
+10 -7
block/bsg.c
··· 244 * map sg_io_v4 to a request. 245 */ 246 static struct request * 247 - bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm) 248 { 249 struct request_queue *q = bd->queue; 250 struct request *rq, *next_rq = NULL; ··· 307 if (ret) 308 goto out; 309 } 310 return rq; 311 out: 312 if (rq->cmd != rq->__cmd) ··· 353 static void bsg_add_command(struct bsg_device *bd, struct request_queue *q, 354 struct bsg_command *bc, struct request *rq) 355 { 356 - rq->sense = bc->sense; 357 - rq->sense_len = 0; 358 - 359 /* 360 * add bc command to busy queue and submit rq for io 361 */ ··· 421 { 422 int ret = 0; 423 424 - dprintk("rq %p bio %p %u\n", rq, bio, rq->errors); 425 /* 426 * fill in all the output members 427 */ ··· 637 /* 638 * get a request, fill in the blanks, and add to request queue 639 */ 640 - rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm); 641 if (IS_ERR(rq)) { 642 ret = PTR_ERR(rq); 643 rq = NULL; ··· 924 struct request *rq; 925 struct bio *bio, *bidi_bio = NULL; 926 struct sg_io_v4 hdr; 927 928 if (copy_from_user(&hdr, uarg, sizeof(hdr))) 929 return -EFAULT; 930 931 - rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE); 932 if (IS_ERR(rq)) 933 return PTR_ERR(rq); 934
··· 244 * map sg_io_v4 to a request. 245 */ 246 static struct request * 247 + bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm, 248 + u8 *sense) 249 { 250 struct request_queue *q = bd->queue; 251 struct request *rq, *next_rq = NULL; ··· 306 if (ret) 307 goto out; 308 } 309 + 310 + rq->sense = sense; 311 + rq->sense_len = 0; 312 + 313 return rq; 314 out: 315 if (rq->cmd != rq->__cmd) ··· 348 static void bsg_add_command(struct bsg_device *bd, struct request_queue *q, 349 struct bsg_command *bc, struct request *rq) 350 { 351 /* 352 * add bc command to busy queue and submit rq for io 353 */ ··· 419 { 420 int ret = 0; 421 422 + dprintk("rq %p bio %p 0x%x\n", rq, bio, rq->errors); 423 /* 424 * fill in all the output members 425 */ ··· 635 /* 636 * get a request, fill in the blanks, and add to request queue 637 */ 638 + rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm, bc->sense); 639 if (IS_ERR(rq)) { 640 ret = PTR_ERR(rq); 641 rq = NULL; ··· 922 struct request *rq; 923 struct bio *bio, *bidi_bio = NULL; 924 struct sg_io_v4 hdr; 925 + u8 sense[SCSI_SENSE_BUFFERSIZE]; 926 927 if (copy_from_user(&hdr, uarg, sizeof(hdr))) 928 return -EFAULT; 929 930 + rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE, sense); 931 if (IS_ERR(rq)) 932 return PTR_ERR(rq); 933
+8
block/genhd.c
··· 1087 if (strcmp(dev_name(dev), name)) 1088 continue; 1089 1090 part = disk_get_part(disk, partno); 1091 if (part) { 1092 devt = part_devt(part);
··· 1087 if (strcmp(dev_name(dev), name)) 1088 continue; 1089 1090 + if (partno < disk->minors) { 1091 + /* We need to return the right devno, even 1092 + * if the partition doesn't exist yet. 1093 + */ 1094 + devt = MKDEV(MAJOR(dev->devt), 1095 + MINOR(dev->devt) + partno); 1096 + break; 1097 + } 1098 part = disk_get_part(disk, partno); 1099 if (part) { 1100 devt = part_devt(part);
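The genhd.c hunk relies on partitions occupying consecutive minor numbers directly after their disk, which is what lets a valid devt be synthesized before the partition is ever scanned. MKDEV/MAJOR/MINOR are plain bit packing; a sketch of the arithmetic (the 12/20 major/minor split matches the kernel's internal dev_t, but treat the constants as illustrative here):

#include <stdio.h>
#include <stdint.h>

/* Kernel-internal dev_t packing: 12 bits of major, 20 of minor. */
#define MINORBITS     20
#define MINORMASK     ((1u << MINORBITS) - 1)
#define MKDEV(ma, mi) (((uint32_t)(ma) << MINORBITS) | (mi))
#define MAJOR(dev)    ((dev) >> MINORBITS)
#define MINOR(dev)    ((dev) & MINORMASK)

int main(void)
{
    uint32_t disk = MKDEV(8, 16);   /* an invented disk devt */
    int partno = 3;

    /* Partition minors follow the disk's minor directly, so the
     * devt exists even before the partition is scanned. */
    uint32_t part = MKDEV(MAJOR(disk), MINOR(disk) + partno);

    printf("disk %u:%u, partition %d -> %u:%u\n",
           (unsigned)MAJOR(disk), (unsigned)MINOR(disk), partno,
           (unsigned)MAJOR(part), (unsigned)MINOR(part));
    return 0;
}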
+7 -1
crypto/lrw.c
··· 45 46 static inline void setbit128_bbe(void *b, int bit) 47 { 48 - __set_bit(bit ^ 0x78, b); 49 } 50 51 static int setkey(struct crypto_tfm *parent, const u8 *key,
··· 45 46 static inline void setbit128_bbe(void *b, int bit) 47 { 48 + __set_bit(bit ^ (0x80 - 49 + #ifdef __BIG_ENDIAN 50 + BITS_PER_LONG 51 + #else 52 + BITS_PER_BYTE 53 + #endif 54 + ), b); 55 } 56 57 static int setkey(struct crypto_tfm *parent, const u8 *key,
-7
drivers/acpi/Kconfig
··· 254 help you correlate PCI bus addresses with the physical geography 255 of your slots. If you are unsure, say N. 256 257 - config ACPI_SYSTEM 258 - bool 259 - default y 260 - help 261 - This driver will enable your system to shut down using ACPI, and 262 - dump your ACPI DSDT table using /proc/acpi/dsdt. 263 - 264 config X86_PM_TIMER 265 bool "Power Management Timer Support" if EMBEDDED 266 depends on X86
··· 254 help you correlate PCI bus addresses with the physical geography 255 of your slots. If you are unsure, say N. 256 257 config X86_PM_TIMER 258 bool "Power Management Timer Support" if EMBEDDED 259 depends on X86
+1 -1
drivers/acpi/Makefile
··· 52 obj-$(CONFIG_ACPI_CONTAINER) += container.o 53 obj-$(CONFIG_ACPI_THERMAL) += thermal.o 54 obj-y += power.o 55 - obj-$(CONFIG_ACPI_SYSTEM) += system.o event.o 56 obj-$(CONFIG_ACPI_DEBUG) += debug.o 57 obj-$(CONFIG_ACPI_NUMA) += numa.o 58 obj-$(CONFIG_ACPI_HOTPLUG_MEMORY) += acpi_memhotplug.o
··· 52 obj-$(CONFIG_ACPI_CONTAINER) += container.o 53 obj-$(CONFIG_ACPI_THERMAL) += thermal.o 54 obj-y += power.o 55 + obj-y += system.o event.o 56 obj-$(CONFIG_ACPI_DEBUG) += debug.o 57 obj-$(CONFIG_ACPI_NUMA) += numa.o 58 obj-$(CONFIG_ACPI_HOTPLUG_MEMORY) += acpi_memhotplug.o
+24 -1
drivers/acpi/battery.c
··· 138 139 static int acpi_battery_get_state(struct acpi_battery *battery); 140 141 static int acpi_battery_get_property(struct power_supply *psy, 142 enum power_supply_property psp, 143 union power_supply_propval *val) ··· 178 val->intval = POWER_SUPPLY_STATUS_DISCHARGING; 179 else if (battery->state & 0x02) 180 val->intval = POWER_SUPPLY_STATUS_CHARGING; 181 - else if (battery->state == 0) 182 val->intval = POWER_SUPPLY_STATUS_FULL; 183 else 184 val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
··· 138 139 static int acpi_battery_get_state(struct acpi_battery *battery); 140 141 + static int acpi_battery_is_charged(struct acpi_battery *battery) 142 + { 143 + /* either charging or discharging */ 144 + if (battery->state != 0) 145 + return 0; 146 + 147 + /* battery not reporting charge */ 148 + if (battery->capacity_now == ACPI_BATTERY_VALUE_UNKNOWN || 149 + battery->capacity_now == 0) 150 + return 0; 151 + 152 + /* good batteries update full_charge as the batteries degrade */ 153 + if (battery->full_charge_capacity == battery->capacity_now) 154 + return 1; 155 + 156 + /* fallback to using design values for broken batteries */ 157 + if (battery->design_capacity == battery->capacity_now) 158 + return 1; 159 + 160 + /* we don't do any sort of metric based on percentages */ 161 + return 0; 162 + } 163 + 164 static int acpi_battery_get_property(struct power_supply *psy, 165 enum power_supply_property psp, 166 union power_supply_propval *val) ··· 155 val->intval = POWER_SUPPLY_STATUS_DISCHARGING; 156 else if (battery->state & 0x02) 157 val->intval = POWER_SUPPLY_STATUS_CHARGING; 158 + else if (acpi_battery_is_charged(battery)) 159 val->intval = POWER_SUPPLY_STATUS_FULL; 160 else 161 val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+21 -7
drivers/ata/libata-sff.c
··· 773 else 774 iowrite32_rep(data_addr, buf, words); 775 776 if (unlikely(slop)) { 777 - __le32 pad; 778 if (rw == READ) { 779 - pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr)); 780 - memcpy(buf + buflen - slop, &pad, slop); 781 } else { 782 - memcpy(&pad, buf + buflen - slop, slop); 783 - iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr); 784 } 785 - words++; 786 } 787 - return words << 2; 788 } 789 EXPORT_SYMBOL_GPL(ata_sff_data_xfer32); 790
··· 773 else 774 iowrite32_rep(data_addr, buf, words); 775 776 + /* Transfer trailing bytes, if any */ 777 if (unlikely(slop)) { 778 + unsigned char pad[4]; 779 + 780 + /* Point buf to the tail of buffer */ 781 + buf += buflen - slop; 782 + 783 + /* 784 + * Use io*_rep() accessors here as well to avoid pointlessly 785 + * swapping bytes to and fro on the big endian machines... 786 + */ 787 if (rw == READ) { 788 + if (slop < 3) 789 + ioread16_rep(data_addr, pad, 1); 790 + else 791 + ioread32_rep(data_addr, pad, 1); 792 + memcpy(buf, pad, slop); 793 } else { 794 + memcpy(pad, buf, slop); 795 + if (slop < 3) 796 + iowrite16_rep(data_addr, pad, 1); 797 + else 798 + iowrite32_rep(data_addr, pad, 1); 799 } 800 } 801 + return (buflen + 1) & ~1; 802 } 803 EXPORT_SYMBOL_GPL(ata_sff_data_xfer32); 804
+3 -1
drivers/ata/pata_via.c
··· 110 { "vt8237s", PCI_DEVICE_ID_VIA_8237S, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 111 { "vt8251", PCI_DEVICE_ID_VIA_8251, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 112 { "cx700", PCI_DEVICE_ID_VIA_CX700, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_SATA_PATA }, 113 - { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES}, 114 { "vt8237a", PCI_DEVICE_ID_VIA_8237A, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 115 { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 116 { "vt8235", PCI_DEVICE_ID_VIA_8235, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, ··· 594 #endif 595 596 static const struct pci_device_id via[] = { 597 { PCI_VDEVICE(VIA, 0x0571), }, 598 { PCI_VDEVICE(VIA, 0x0581), }, 599 { PCI_VDEVICE(VIA, 0x1571), },
··· 110 { "vt8237s", PCI_DEVICE_ID_VIA_8237S, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 111 { "vt8251", PCI_DEVICE_ID_VIA_8251, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 112 { "cx700", PCI_DEVICE_ID_VIA_CX700, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_SATA_PATA }, 113 + { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES }, 114 + { "vt6415", PCI_DEVICE_ID_VIA_6415, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES }, 115 { "vt8237a", PCI_DEVICE_ID_VIA_8237A, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 116 { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 117 { "vt8235", PCI_DEVICE_ID_VIA_8235, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, ··· 593 #endif 594 595 static const struct pci_device_id via[] = { 596 + { PCI_VDEVICE(VIA, 0x0415), }, 597 { PCI_VDEVICE(VIA, 0x0571), }, 598 { PCI_VDEVICE(VIA, 0x0581), }, 599 { PCI_VDEVICE(VIA, 0x1571), },
+8 -6
drivers/ata/sata_nv.c
··· 421 .hardreset = ATA_OP_NULL, 422 }; 423 424 - /* OSDL bz3352 reports that nf2/3 controllers can't determine device 425 - * signature reliably. Also, the following thread reports detection 426 - * failure on cold boot with the standard debouncing timing. 427 * 428 * http://thread.gmane.org/gmane.linux.ide/34098 429 * 430 - * Debounce with hotplug timing and request follow-up SRST. 431 */ 432 static struct ata_port_operations nv_nf2_ops = { 433 - .inherits = &nv_common_ops, 434 .freeze = nv_nf2_freeze, 435 .thaw = nv_nf2_thaw, 436 - .hardreset = nv_noclassify_hardreset, 437 }; 438 439 /* For initial probing after boot and hot plugging, hardreset mostly
··· 421 .hardreset = ATA_OP_NULL, 422 }; 423 424 + /* nf2 is ripe with hardreset related problems. 425 + * 426 + * kernel bz#3352 reports nf2/3 controllers can't determine device 427 + * signature reliably. The following thread reports detection failure 428 + * on cold boot with the standard debouncing timing. 429 * 430 * http://thread.gmane.org/gmane.linux.ide/34098 431 * 432 + * And bz#12176 reports that hardreset simply doesn't work on nf2. 433 + * Give up on it and just don't do hardreset. 434 */ 435 static struct ata_port_operations nv_nf2_ops = { 436 + .inherits = &nv_generic_ops, 437 .freeze = nv_nf2_freeze, 438 .thaw = nv_nf2_thaw, 439 }; 440 441 /* For initial probing after boot and hot plugging, hardreset mostly
+17
drivers/base/dd.c
··· 18 */ 19 20 #include <linux/device.h> 21 #include <linux/module.h> 22 #include <linux/kthread.h> 23 #include <linux/wait.h> 24 25 #include "base.h" 26 #include "power/power.h" ··· 166 atomic_read(&probe_count)); 167 if (atomic_read(&probe_count)) 168 return -EBUSY; 169 return 0; 170 } 171
··· 18 */ 19 20 #include <linux/device.h> 21 + #include <linux/delay.h> 22 #include <linux/module.h> 23 #include <linux/kthread.h> 24 #include <linux/wait.h> 25 + #include <linux/async.h> 26 27 #include "base.h" 28 #include "power/power.h" ··· 164 atomic_read(&probe_count)); 165 if (atomic_read(&probe_count)) 166 return -EBUSY; 167 + return 0; 168 + } 169 + 170 + /** 171 + * wait_for_device_probe 172 + * Wait for device probing to be completed. 173 + * 174 + * Note: this function polls at 100 msec intervals. 175 + */ 176 + int wait_for_device_probe(void) 177 + { 178 + /* wait for the known devices to complete their probing */ 179 + while (driver_probe_done() != 0) 180 + msleep(100); 181 + async_synchronize_full(); 182 return 0; 183 } 184
+1
drivers/block/aoe/aoe.h
··· 18 enum { 19 AOECMD_ATA, 20 AOECMD_CFG, 21 22 AOEFL_RSP = (1<<3), 23 AOEFL_ERR = (1<<2),
··· 18 enum { 19 AOECMD_ATA, 20 AOECMD_CFG, 21 + AOECMD_VEND_MIN = 0xf0, 22 23 AOEFL_RSP = (1<<3), 24 AOEFL_ERR = (1<<2),
+2
drivers/block/aoe/aoenet.c
··· 142 aoecmd_cfg_rsp(skb); 143 break; 144 default: 145 printk(KERN_INFO "aoe: unknown cmd %d\n", h->cmd); 146 } 147 exit:
··· 142 aoecmd_cfg_rsp(skb); 143 break; 144 default: 145 + if (h->cmd >= AOECMD_VEND_MIN) 146 + break; /* don't complain about vendor commands */ 147 printk(KERN_INFO "aoe: unknown cmd %d\n", h->cmd); 148 } 149 exit:
+215
drivers/block/cciss.c
··· 3390 kfree(p); 3391 } 3392 3393 /* 3394 * This is it. Find all the controllers and register them. I really hate 3395 * stealing all these major device numbers. ··· 3600 int rc; 3601 int dac, return_code; 3602 InquiryData_struct *inq_buff = NULL; 3603 3604 i = alloc_cciss_hba(); 3605 if (i < 0)
··· 3390 kfree(p); 3391 } 3392 3393 + /* Send a message CDB to the firmware. */ 3394 + static __devinit int cciss_message(struct pci_dev *pdev, unsigned char opcode, unsigned char type) 3395 + { 3396 + typedef struct { 3397 + CommandListHeader_struct CommandHeader; 3398 + RequestBlock_struct Request; 3399 + ErrDescriptor_struct ErrorDescriptor; 3400 + } Command; 3401 + static const size_t cmd_sz = sizeof(Command) + sizeof(ErrorInfo_struct); 3402 + Command *cmd; 3403 + dma_addr_t paddr64; 3404 + uint32_t paddr32, tag; 3405 + void __iomem *vaddr; 3406 + int i, err; 3407 + 3408 + vaddr = ioremap_nocache(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0)); 3409 + if (vaddr == NULL) 3410 + return -ENOMEM; 3411 + 3412 + /* The Inbound Post Queue only accepts 32-bit physical addresses for the 3413 + CCISS commands, so they must be allocated from the lower 4GiB of 3414 + memory. */ 3415 + err = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); 3416 + if (err) { 3417 + iounmap(vaddr); 3418 + return -ENOMEM; 3419 + } 3420 + 3421 + cmd = pci_alloc_consistent(pdev, cmd_sz, &paddr64); 3422 + if (cmd == NULL) { 3423 + iounmap(vaddr); 3424 + return -ENOMEM; 3425 + } 3426 + 3427 + /* This must fit, because of the 32-bit consistent DMA mask. Also, 3428 + although there's no guarantee, we assume that the address is at 3429 + least 4-byte aligned (most likely, it's page-aligned). */ 3430 + paddr32 = paddr64; 3431 + 3432 + cmd->CommandHeader.ReplyQueue = 0; 3433 + cmd->CommandHeader.SGList = 0; 3434 + cmd->CommandHeader.SGTotal = 0; 3435 + cmd->CommandHeader.Tag.lower = paddr32; 3436 + cmd->CommandHeader.Tag.upper = 0; 3437 + memset(&cmd->CommandHeader.LUN.LunAddrBytes, 0, 8); 3438 + 3439 + cmd->Request.CDBLen = 16; 3440 + cmd->Request.Type.Type = TYPE_MSG; 3441 + cmd->Request.Type.Attribute = ATTR_HEADOFQUEUE; 3442 + cmd->Request.Type.Direction = XFER_NONE; 3443 + cmd->Request.Timeout = 0; /* Don't time out */ 3444 + cmd->Request.CDB[0] = opcode; 3445 + cmd->Request.CDB[1] = type; 3446 + memset(&cmd->Request.CDB[2], 0, 14); /* the rest of the CDB is reserved */ 3447 + 3448 + cmd->ErrorDescriptor.Addr.lower = paddr32 + sizeof(Command); 3449 + cmd->ErrorDescriptor.Addr.upper = 0; 3450 + cmd->ErrorDescriptor.Len = sizeof(ErrorInfo_struct); 3451 + 3452 + writel(paddr32, vaddr + SA5_REQUEST_PORT_OFFSET); 3453 + 3454 + for (i = 0; i < 10; i++) { 3455 + tag = readl(vaddr + SA5_REPLY_PORT_OFFSET); 3456 + if ((tag & ~3) == paddr32) 3457 + break; 3458 + schedule_timeout_uninterruptible(HZ); 3459 + } 3460 + 3461 + iounmap(vaddr); 3462 + 3463 + /* we leak the DMA buffer here ... no choice since the controller could 3464 + still complete the command. */ 3465 + if (i == 10) { 3466 + printk(KERN_ERR "cciss: controller message %02x:%02x timed out\n", 3467 + opcode, type); 3468 + return -ETIMEDOUT; 3469 + } 3470 + 3471 + pci_free_consistent(pdev, cmd_sz, cmd, paddr64); 3472 + 3473 + if (tag & 2) { 3474 + printk(KERN_ERR "cciss: controller message %02x:%02x failed\n", 3475 + opcode, type); 3476 + return -EIO; 3477 + } 3478 + 3479 + printk(KERN_INFO "cciss: controller message %02x:%02x succeeded\n", 3480 + opcode, type); 3481 + return 0; 3482 + } 3483 + 3484 + #define cciss_soft_reset_controller(p) cciss_message(p, 1, 0) 3485 + #define cciss_noop(p) cciss_message(p, 3, 0) 3486 + 3487 + static __devinit int cciss_reset_msi(struct pci_dev *pdev) 3488 + { 3489 + /* the #defines are stolen from drivers/pci/msi.h. 
*/ 3490 + #define msi_control_reg(base) (base + PCI_MSI_FLAGS) 3491 + #define PCI_MSIX_FLAGS_ENABLE (1 << 15) 3492 + 3493 + int pos; 3494 + u16 control = 0; 3495 + 3496 + pos = pci_find_capability(pdev, PCI_CAP_ID_MSI); 3497 + if (pos) { 3498 + pci_read_config_word(pdev, msi_control_reg(pos), &control); 3499 + if (control & PCI_MSI_FLAGS_ENABLE) { 3500 + printk(KERN_INFO "cciss: resetting MSI\n"); 3501 + pci_write_config_word(pdev, msi_control_reg(pos), control & ~PCI_MSI_FLAGS_ENABLE); 3502 + } 3503 + } 3504 + 3505 + pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); 3506 + if (pos) { 3507 + pci_read_config_word(pdev, msi_control_reg(pos), &control); 3508 + if (control & PCI_MSIX_FLAGS_ENABLE) { 3509 + printk(KERN_INFO "cciss: resetting MSI-X\n"); 3510 + pci_write_config_word(pdev, msi_control_reg(pos), control & ~PCI_MSIX_FLAGS_ENABLE); 3511 + } 3512 + } 3513 + 3514 + return 0; 3515 + } 3516 + 3517 + /* This does a hard reset of the controller using PCI power management 3518 + * states. */ 3519 + static __devinit int cciss_hard_reset_controller(struct pci_dev *pdev) 3520 + { 3521 + u16 pmcsr, saved_config_space[32]; 3522 + int i, pos; 3523 + 3524 + printk(KERN_INFO "cciss: using PCI PM to reset controller\n"); 3525 + 3526 + /* This is very nearly the same thing as 3527 + 3528 + pci_save_state(pci_dev); 3529 + pci_set_power_state(pci_dev, PCI_D3hot); 3530 + pci_set_power_state(pci_dev, PCI_D0); 3531 + pci_restore_state(pci_dev); 3532 + 3533 + but we can't use these nice canned kernel routines on 3534 + kexec, because they also check the MSI/MSI-X state in PCI 3535 + configuration space and do the wrong thing when it is 3536 + set/cleared. Also, the pci_save/restore_state functions 3537 + violate the ordering requirements for restoring the 3538 + configuration space from the CCISS document (see the 3539 + comment below). So we roll our own .... */ 3540 + 3541 + for (i = 0; i < 32; i++) 3542 + pci_read_config_word(pdev, 2*i, &saved_config_space[i]); 3543 + 3544 + pos = pci_find_capability(pdev, PCI_CAP_ID_PM); 3545 + if (pos == 0) { 3546 + printk(KERN_ERR "cciss_reset_controller: PCI PM not supported\n"); 3547 + return -ENODEV; 3548 + } 3549 + 3550 + /* Quoting from the Open CISS Specification: "The Power 3551 + * Management Control/Status Register (CSR) controls the power 3552 + * state of the device. The normal operating state is D0, 3553 + * CSR=00h. The software off state is D3, CSR=03h. To reset 3554 + * the controller, place the interface device in D3 then to 3555 + * D0, this causes a secondary PCI reset which will reset the 3556 + * controller." */ 3557 + 3558 + /* enter the D3hot power management state */ 3559 + pci_read_config_word(pdev, pos + PCI_PM_CTRL, &pmcsr); 3560 + pmcsr &= ~PCI_PM_CTRL_STATE_MASK; 3561 + pmcsr |= PCI_D3hot; 3562 + pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr); 3563 + 3564 + schedule_timeout_uninterruptible(HZ >> 1); 3565 + 3566 + /* enter the D0 power management state */ 3567 + pmcsr &= ~PCI_PM_CTRL_STATE_MASK; 3568 + pmcsr |= PCI_D0; 3569 + pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr); 3570 + 3571 + schedule_timeout_uninterruptible(HZ >> 1); 3572 + 3573 + /* Restore the PCI configuration space. The Open CISS 3574 + * Specification says, "Restore the PCI Configuration 3575 + * Registers, offsets 00h through 60h. It is important to 3576 + * restore the command register, 16-bits at offset 04h, 3577 + * last. Do not restore the configuration status register, 3578 + * 16-bits at offset 06h." Note that the offset is 2*i. 
*/ 3579 + for (i = 0; i < 32; i++) { 3580 + if (i == 2 || i == 3) 3581 + continue; 3582 + pci_write_config_word(pdev, 2*i, saved_config_space[i]); 3583 + } 3584 + wmb(); 3585 + pci_write_config_word(pdev, 4, saved_config_space[2]); 3586 + 3587 + return 0; 3588 + } 3589 + 3590 /* 3591 * This is it. Find all the controllers and register them. I really hate 3592 * stealing all these major device numbers. ··· 3403 int rc; 3404 int dac, return_code; 3405 InquiryData_struct *inq_buff = NULL; 3406 + 3407 + if (reset_devices) { 3408 + /* Reset the controller with a PCI power-cycle */ 3409 + if (cciss_hard_reset_controller(pdev) || cciss_reset_msi(pdev)) 3410 + return -ENODEV; 3411 + 3412 + /* Some devices (notably the HP Smart Array 5i Controller) 3413 + need a little pause here */ 3414 + schedule_timeout_uninterruptible(30*HZ); 3415 + 3416 + /* Now try to get the controller to respond to a no-op */ 3417 + for (i=0; i<12; i++) { 3418 + if (cciss_noop(pdev) == 0) 3419 + break; 3420 + else 3421 + printk("cciss: no-op failed%s\n", (i < 11 ? "; re-trying" : "")); 3422 + } 3423 + } 3424 3425 i = alloc_cciss_hba(); 3426 if (i < 0)
+52 -27
drivers/block/floppy.c
··· 558 static void recalibrate_floppy(void); 559 static void floppy_shutdown(unsigned long); 560 561 static int floppy_grab_irq_and_dma(void); 562 static void floppy_release_irq_and_dma(void); 563 ··· 4276 FDCS->rawcmd = 2; 4277 if (user_reset_fdc(-1, FD_RESET_ALWAYS, 0)) { 4278 /* free ioports reserved by floppy_grab_irq_and_dma() */ 4279 - release_region(FDCS->address + 2, 4); 4280 - release_region(FDCS->address + 7, 1); 4281 FDCS->address = -1; 4282 FDCS->version = FDC_NONE; 4283 continue; ··· 4285 FDCS->version = get_fdc_version(); 4286 if (FDCS->version == FDC_NONE) { 4287 /* free ioports reserved by floppy_grab_irq_and_dma() */ 4288 - release_region(FDCS->address + 2, 4); 4289 - release_region(FDCS->address + 7, 1); 4290 FDCS->address = -1; 4291 continue; 4292 } ··· 4358 4359 static DEFINE_SPINLOCK(floppy_usage_lock); 4360 4361 static int floppy_grab_irq_and_dma(void) 4362 { 4363 unsigned long flags; ··· 4440 4441 for (fdc = 0; fdc < N_FDC; fdc++) { 4442 if (FDCS->address != -1) { 4443 - if (!request_region(FDCS->address + 2, 4, "floppy")) { 4444 - DPRINT("Floppy io-port 0x%04lx in use\n", 4445 - FDCS->address + 2); 4446 - goto cleanup1; 4447 - } 4448 - if (!request_region(FDCS->address + 7, 1, "floppy DIR")) { 4449 - DPRINT("Floppy io-port 0x%04lx in use\n", 4450 - FDCS->address + 7); 4451 - goto cleanup2; 4452 - } 4453 - /* address + 6 is reserved, and may be taken by IDE. 4454 - * Unfortunately, Adaptec doesn't know this :-(, */ 4455 } 4456 } 4457 for (fdc = 0; fdc < N_FDC; fdc++) { ··· 4463 fdc = 0; 4464 irqdma_allocated = 1; 4465 return 0; 4466 - cleanup2: 4467 - release_region(FDCS->address + 2, 4); 4468 - cleanup1: 4469 fd_free_irq(); 4470 fd_free_dma(); 4471 - while (--fdc >= 0) { 4472 - release_region(FDCS->address + 2, 4); 4473 - release_region(FDCS->address + 7, 1); 4474 - } 4475 spin_lock_irqsave(&floppy_usage_lock, flags); 4476 usage_count--; 4477 spin_unlock_irqrestore(&floppy_usage_lock, flags); ··· 4528 #endif 4529 old_fdc = fdc; 4530 for (fdc = 0; fdc < N_FDC; fdc++) 4531 - if (FDCS->address != -1) { 4532 - release_region(FDCS->address + 2, 4); 4533 - release_region(FDCS->address + 7, 1); 4534 - } 4535 fdc = old_fdc; 4536 } 4537
··· 558 static void recalibrate_floppy(void); 559 static void floppy_shutdown(unsigned long); 560 561 + static int floppy_request_regions(int); 562 + static void floppy_release_regions(int); 563 static int floppy_grab_irq_and_dma(void); 564 static void floppy_release_irq_and_dma(void); 565 ··· 4274 FDCS->rawcmd = 2; 4275 if (user_reset_fdc(-1, FD_RESET_ALWAYS, 0)) { 4276 /* free ioports reserved by floppy_grab_irq_and_dma() */ 4277 + floppy_release_regions(fdc); 4278 FDCS->address = -1; 4279 FDCS->version = FDC_NONE; 4280 continue; ··· 4284 FDCS->version = get_fdc_version(); 4285 if (FDCS->version == FDC_NONE) { 4286 /* free ioports reserved by floppy_grab_irq_and_dma() */ 4287 + floppy_release_regions(fdc); 4288 FDCS->address = -1; 4289 continue; 4290 } ··· 4358 4359 static DEFINE_SPINLOCK(floppy_usage_lock); 4360 4361 + static const struct io_region { 4362 + int offset; 4363 + int size; 4364 + } io_regions[] = { 4365 + { 2, 1 }, 4366 + /* address + 3 is sometimes reserved by pnp bios for motherboard */ 4367 + { 4, 2 }, 4368 + /* address + 6 is reserved, and may be taken by IDE. 4369 + * Unfortunately, Adaptec doesn't know this :-(, */ 4370 + { 7, 1 }, 4371 + }; 4372 + 4373 + static void floppy_release_allocated_regions(int fdc, const struct io_region *p) 4374 + { 4375 + while (p != io_regions) { 4376 + p--; 4377 + release_region(FDCS->address + p->offset, p->size); 4378 + } 4379 + } 4380 + 4381 + #define ARRAY_END(X) (&((X)[ARRAY_SIZE(X)])) 4382 + 4383 + static int floppy_request_regions(int fdc) 4384 + { 4385 + const struct io_region *p; 4386 + 4387 + for (p = io_regions; p < ARRAY_END(io_regions); p++) { 4388 + if (!request_region(FDCS->address + p->offset, p->size, "floppy")) { 4389 + DPRINT("Floppy io-port 0x%04lx in use\n", FDCS->address + p->offset); 4390 + floppy_release_allocated_regions(fdc, p); 4391 + return -EBUSY; 4392 + } 4393 + } 4394 + return 0; 4395 + } 4396 + 4397 + static void floppy_release_regions(int fdc) 4398 + { 4399 + floppy_release_allocated_regions(fdc, ARRAY_END(io_regions)); 4400 + } 4401 + 4402 static int floppy_grab_irq_and_dma(void) 4403 { 4404 unsigned long flags; ··· 4399 4400 for (fdc = 0; fdc < N_FDC; fdc++) { 4401 if (FDCS->address != -1) { 4402 + if (floppy_request_regions(fdc)) 4403 + goto cleanup; 4404 } 4405 } 4406 for (fdc = 0; fdc < N_FDC; fdc++) { ··· 4432 fdc = 0; 4433 irqdma_allocated = 1; 4434 return 0; 4435 + cleanup: 4436 fd_free_irq(); 4437 fd_free_dma(); 4438 + while (--fdc >= 0) 4439 + floppy_release_regions(fdc); 4440 spin_lock_irqsave(&floppy_usage_lock, flags); 4441 usage_count--; 4442 spin_unlock_irqrestore(&floppy_usage_lock, flags); ··· 4501 #endif 4502 old_fdc = fdc; 4503 for (fdc = 0; fdc < N_FDC; fdc++) 4504 + if (FDCS->address != -1) 4505 + floppy_release_regions(fdc); 4506 fdc = old_fdc; 4507 } 4508
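The floppy.c rework above replaces open-coded request/release pairs with a region table plus a rollback helper, the usual acquire-all-or-undo pattern for claiming several resources. A compact userspace sketch of the same shape (resource acquisition is simulated; every name here is invented):

#include <stdio.h>

struct io_region {
    int offset;
    int size;
};

static const struct io_region io_regions[] = {
    { 2, 1 },
    { 4, 2 },
    { 7, 1 },
};

#define ARRAY_SIZE(x)   (sizeof(x) / sizeof((x)[0]))
#define ARRAY_END(x)    (&(x)[ARRAY_SIZE(x)])

/* Simulated acquisition: pretend the region at offset 4 is busy. */
static int request_region(int base, int offset, int size)
{
    printf("request %#x len %d\n", (unsigned)(base + offset), size);
    return offset != 4;
}

static void release_region(int base, int offset, int size)
{
    printf("release %#x len %d\n", (unsigned)(base + offset), size);
}

/* Release, in reverse order, everything acquired before *p. */
static void release_up_to(int base, const struct io_region *p)
{
    while (p != io_regions) {
        p--;
        release_region(base, p->offset, p->size);
    }
}

static int request_regions(int base)
{
    const struct io_region *p;

    for (p = io_regions; p < ARRAY_END(io_regions); p++) {
        if (!request_region(base, p->offset, p->size)) {
            release_up_to(base, p);    /* roll back the partial set */
            return -1;
        }
    }
    return 0;
}

int main(void)
{
    return request_regions(0x3f0) ? 1 : 0;
}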
+1 -1
drivers/block/paride/pg.c
··· 422 423 for (k = 0; k < len; k++) { 424 char c = *buf++; 425 - if (c != ' ' || c != l) 426 l = *targ++ = c; 427 } 428 if (l == ' ')
··· 422 423 for (k = 0; k < len; k++) { 424 char c = *buf++; 425 + if (c != ' ' && c != l) 426 l = *targ++ = c; 427 } 428 if (l == ' ')
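The one-character paride fix deserves a close look: c != ' ' || c != l is false only when the character is a space equal to the last character copied, so the filter passed almost everything through, spaces and duplicates included; the && form is what actually drops them. A tiny demonstration (squeeze() here is a simplified stand-in, not the pg.c function):

#include <stdio.h>

/* Simplified stand-in for pg.c's ID-string squeeze: copy a
 * character unless it is a space or repeats the last one copied. */
static void squeeze(const char *buf, char *targ, int use_or_bug)
{
    char l = '\0';

    for (; *buf; buf++) {
        char c = *buf;
        int keep = use_or_bug ? (c != ' ' || c != l)    /* bug */
                              : (c != ' ' && c != l);   /* fix */
        if (keep)
            l = *targ++ = c;
    }
    *targ = '\0';
}

int main(void)
{
    char out[32];

    squeeze("CDD  ROM ", out, 1);
    printf("buggy: \"%s\"\n", out);    /* "CDD ROM " - barely filters */

    squeeze("CDD  ROM ", out, 0);
    printf("fixed: \"%s\"\n", out);    /* "CDROM" */
    return 0;
}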
+3 -2
drivers/char/sx.c
··· 1746 sx_dprintk(SX_DEBUG_FIRMWARE, "returning type= %ld\n", rc); 1747 break; 1748 case SXIO_DO_RAMTEST: 1749 - if (sx_initialized) /* Already initialized: better not ramtest the board. */ 1750 rc = -EPERM; 1751 break; 1752 if (IS_SX_BOARD(board)) { 1753 rc = do_memtest(board, 0, 0x7000); 1754 if (!rc) ··· 1789 nbytes - i : SX_CHUNK_SIZE)) { 1790 kfree(tmp); 1791 rc = -EFAULT; 1792 - break; 1793 } 1794 memcpy_toio(board->base2 + offset + i, tmp, 1795 (i + SX_CHUNK_SIZE > nbytes) ?
··· 1746 sx_dprintk(SX_DEBUG_FIRMWARE, "returning type= %ld\n", rc); 1747 break; 1748 case SXIO_DO_RAMTEST: 1749 + if (sx_initialized) { /* Already initialized: better not ramtest the board. */ 1750 rc = -EPERM; 1751 break; 1752 + } 1753 if (IS_SX_BOARD(board)) { 1754 rc = do_memtest(board, 0, 0x7000); 1755 if (!rc) ··· 1788 nbytes - i : SX_CHUNK_SIZE)) { 1789 kfree(tmp); 1790 rc = -EFAULT; 1791 + goto out; 1792 } 1793 memcpy_toio(board->base2 + offset + i, tmp, 1794 (i + SX_CHUNK_SIZE > nbytes) ?
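The sx.c fix shows the hazard of an unbraced if followed by two statements inside a switch: the break bound to the case rather than to the if, so it ran unconditionally and the ramtest after it was dead code. A reduced reproduction:

#include <stdio.h>

static int initialized = 1;    /* flip to 0 to reach the work */

static int do_op(void)
{
    int rc = 0;

    switch (1) {
    case 1:
        /* Buggy shape: without braces, the break binds to the
         * switch, not the if, and the work below is unreachable:
         *
         *    if (initialized)
         *        rc = -1;
         *        break;
         *    rc = 42;
         */
        if (initialized) {
            rc = -1;
            break;
        }
        rc = 42;    /* reachable again once !initialized */
        break;
    }
    return rc;
}

int main(void)
{
    printf("rc = %d\n", do_op());
    return 0;
}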
+2
drivers/dma/dmaengine.c
··· 518 dma_chan_name(chan), err); 519 else 520 break; 521 chan = NULL; 522 } 523 } ··· 537 WARN_ONCE(chan->client_count != 1, 538 "chan reference count %d != 1\n", chan->client_count); 539 dma_chan_put(chan); 540 mutex_unlock(&dma_list_mutex); 541 } 542 EXPORT_SYMBOL_GPL(dma_release_channel);
··· 518 dma_chan_name(chan), err); 519 else 520 break; 521 + chan->private = NULL; 522 chan = NULL; 523 } 524 } ··· 536 WARN_ONCE(chan->client_count != 1, 537 "chan reference count %d != 1\n", chan->client_count); 538 dma_chan_put(chan); 539 + chan->private = NULL; 540 mutex_unlock(&dma_list_mutex); 541 } 542 EXPORT_SYMBOL_GPL(dma_release_channel);
+2 -3
drivers/dma/dw_dmac.c
··· 560 unsigned long flags) 561 { 562 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 563 - struct dw_dma_slave *dws = dwc->dws; 564 struct dw_desc *prev; 565 struct dw_desc *first; 566 u32 ctllo; ··· 790 cfghi = DWC_CFGH_FIFO_MODE; 791 cfglo = 0; 792 793 - dws = dwc->dws; 794 if (dws) { 795 /* 796 * We need controller-specific data to set up slave ··· 866 spin_lock_bh(&dwc->lock); 867 list_splice_init(&dwc->free_list, &list); 868 dwc->descs_allocated = 0; 869 - dwc->dws = NULL; 870 871 /* Disable interrupts */ 872 channel_clear_bit(dw, MASK.XFER, dwc->mask);
··· 560 unsigned long flags) 561 { 562 struct dw_dma_chan *dwc = to_dw_dma_chan(chan); 563 + struct dw_dma_slave *dws = chan->private; 564 struct dw_desc *prev; 565 struct dw_desc *first; 566 u32 ctllo; ··· 790 cfghi = DWC_CFGH_FIFO_MODE; 791 cfglo = 0; 792 793 + dws = chan->private; 794 if (dws) { 795 /* 796 * We need controller-specific data to set up slave ··· 866 spin_lock_bh(&dwc->lock); 867 list_splice_init(&dwc->free_list, &list); 868 dwc->descs_allocated = 0; 869 870 /* Disable interrupts */ 871 channel_clear_bit(dw, MASK.XFER, dwc->mask);
-2
drivers/dma/dw_dmac_regs.h
··· 139 struct list_head queue; 140 struct list_head free_list; 141 142 - struct dw_dma_slave *dws; 143 - 144 unsigned int descs_allocated; 145 }; 146
··· 139 struct list_head queue; 140 struct list_head free_list; 141 142 unsigned int descs_allocated; 143 }; 144
+1 -1
drivers/firmware/memmap.c
··· 1 /* 2 * linux/drivers/firmware/memmap.c 3 * Copyright (C) 2008 SUSE LINUX Products GmbH 4 - * by Bernhard Walle <bwalle@suse.de> 5 * 6 * This program is free software; you can redistribute it and/or modify 7 * it under the terms of the GNU General Public License v2.0 as published by
··· 1 /* 2 * linux/drivers/firmware/memmap.c 3 * Copyright (C) 2008 SUSE LINUX Products GmbH 4 + * by Bernhard Walle <bernhard.walle@gmx.de> 5 * 6 * This program is free software; you can redistribute it and/or modify 7 * it under the terms of the GNU General Public License v2.0 as published by
+6 -7
drivers/gpu/drm/Kconfig
··· 80 XFree86 4.4 and above. If unsure, build this and i830 as modules and 81 the X server will load the correct one. 82 83 - endchoice 84 - 85 config DRM_I915_KMS 86 bool "Enable modesetting on intel by default" 87 depends on DRM_I915 88 help 89 - Choose this option if you want kernel modesetting enabled by default, 90 - and you have a new enough userspace to support this. Running old 91 - userspaces with this enabled will cause pain. Note that this causes 92 - the driver to bind to PCI devices, which precludes loading things 93 - like intelfb. 94 95 96 config DRM_MGA 97 tristate "Matrox g200/g400"
··· 80 XFree86 4.4 and above. If unsure, build this and i830 as modules and 81 the X server will load the correct one. 82 83 config DRM_I915_KMS 84 bool "Enable modesetting on intel by default" 85 depends on DRM_I915 86 help 87 + Choose this option if you want kernel modesetting enabled by default, 88 + and you have a new enough userspace to support this. Running old 89 + userspaces with this enabled will cause pain. Note that this causes 90 + the driver to bind to PCI devices, which precludes loading things 91 + like intelfb. 92 93 + endchoice 94 95 config DRM_MGA 96 tristate "Matrox g200/g400"
+1 -2
drivers/gpu/drm/drm_crtc.c
··· 1741 * RETURNS: 1742 * Zero on success, errno on failure. 1743 */ 1744 - void drm_fb_release(struct file *filp) 1745 { 1746 - struct drm_file *priv = filp->private_data; 1747 struct drm_device *dev = priv->minor->dev; 1748 struct drm_framebuffer *fb, *tfb; 1749
··· 1741 * RETURNS: 1742 * Zero on success, errno on failure. 1743 */ 1744 + void drm_fb_release(struct drm_file *priv) 1745 { 1746 struct drm_device *dev = priv->minor->dev; 1747 struct drm_framebuffer *fb, *tfb; 1748
+16 -5
drivers/gpu/drm/drm_crtc_helper.c
··· 512 if (drm_mode_equal(&saved_mode, &crtc->mode)) { 513 if (saved_x != crtc->x || saved_y != crtc->y || 514 depth_changed || bpp_changed) { 515 - crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, 516 - old_fb); 517 goto done; 518 } 519 } ··· 552 /* Set up the DPLL and any encoders state that needs to adjust or depend 553 * on the DPLL. 554 */ 555 - crtc_funcs->mode_set(crtc, mode, adjusted_mode, x, y, old_fb); 556 557 list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 558 ··· 754 if (!drm_crtc_helper_set_mode(set->crtc, set->mode, 755 set->x, set->y, 756 old_fb)) { 757 ret = -EINVAL; 758 goto fail_set_mode; 759 } ··· 769 old_fb = set->crtc->fb; 770 if (set->crtc->fb != set->fb) 771 set->crtc->fb = set->fb; 772 - crtc_funcs->mode_set_base(set->crtc, set->x, set->y, old_fb); 773 } 774 775 kfree(save_encoders); ··· 782 fail_set_mode: 783 set->crtc->enabled = save_enabled; 784 count = 0; 785 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) 786 connector->encoder->crtc = save_crtcs[count++]; 787 fail_no_encoder: 788 kfree(save_crtcs); 789 count = 0;
··· 512 if (drm_mode_equal(&saved_mode, &crtc->mode)) { 513 if (saved_x != crtc->x || saved_y != crtc->y || 514 depth_changed || bpp_changed) { 515 + ret = !crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, 516 + old_fb); 517 goto done; 518 } 519 } ··· 552 /* Set up the DPLL and any encoders state that needs to adjust or depend 553 * on the DPLL. 554 */ 555 + ret = !crtc_funcs->mode_set(crtc, mode, adjusted_mode, x, y, old_fb); 556 + if (!ret) 557 + goto done; 558 559 list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 560 ··· 752 if (!drm_crtc_helper_set_mode(set->crtc, set->mode, 753 set->x, set->y, 754 old_fb)) { 755 + DRM_ERROR("failed to set mode on crtc %p\n", 756 + set->crtc); 757 ret = -EINVAL; 758 goto fail_set_mode; 759 } ··· 765 old_fb = set->crtc->fb; 766 if (set->crtc->fb != set->fb) 767 set->crtc->fb = set->fb; 768 + ret = crtc_funcs->mode_set_base(set->crtc, 769 + set->x, set->y, old_fb); 770 + if (ret != 0) 771 + goto fail_set_mode; 772 } 773 774 kfree(save_encoders); ··· 775 fail_set_mode: 776 set->crtc->enabled = save_enabled; 777 count = 0; 778 + list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 779 + if (!connector->encoder) 780 + continue; 781 + 782 connector->encoder->crtc = save_crtcs[count++]; 783 + } 784 fail_no_encoder: 785 kfree(save_crtcs); 786 count = 0;
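Note the mixed return conventions in drm_crtc_helper_set_mode() above: the crtc_funcs hooks return 0 on success (errno style), while the helper itself reports success as nonzero, which is why the new error checks negate the hook's return value. A minimal standalone sketch of that inversion, with all names hypothetical:

    /* Hypothetical hook in errno style: 0 on success, negative on error. */
    static int hook_mode_set(void)
    {
            return 0;
    }

    /* Helper in boolean style: nonzero means success.  The '!' turns the
     * hook's 0-on-success into 1-on-success, mirroring the
     * 'ret = !crtc_funcs->mode_set(...)' lines in the hunk above. */
    static int helper_set_mode(void)
    {
            int ret = !hook_mode_set();     /* 0 -> 1 (ok), -E... -> 0 */

            if (!ret)
                    return 0;               /* hook failed, bail out */
            return 1;
    }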
+3
drivers/gpu/drm/drm_fops.c
··· 457 if (dev->driver->driver_features & DRIVER_GEM) 458 drm_gem_release(dev, file_priv); 459 460 mutex_lock(&dev->ctxlist_mutex); 461 if (!list_empty(&dev->ctxlist)) { 462 struct drm_ctx_list *pos, *n;
··· 457 if (dev->driver->driver_features & DRIVER_GEM) 458 drm_gem_release(dev, file_priv); 459 460 + if (dev->driver->driver_features & DRIVER_MODESET) 461 + drm_fb_release(file_priv); 462 + 463 mutex_lock(&dev->ctxlist_mutex); 464 if (!list_empty(&dev->ctxlist)) { 465 struct drm_ctx_list *pos, *n;
+56 -25
drivers/gpu/drm/drm_gem.c
··· 104 105 if (drm_mm_init(&mm->offset_manager, DRM_FILE_PAGE_OFFSET_START, 106 DRM_FILE_PAGE_OFFSET_SIZE)) { 107 - drm_free(mm, sizeof(struct drm_gem_mm), DRM_MEM_MM); 108 drm_ht_remove(&mm->offset_hash); 109 return -ENOMEM; 110 } 111 ··· 295 return -EBADF; 296 297 again: 298 - if (idr_pre_get(&dev->object_name_idr, GFP_KERNEL) == 0) 299 - return -ENOMEM; 300 301 spin_lock(&dev->object_name_lock); 302 - if (obj->name) { 303 - args->name = obj->name; 304 spin_unlock(&dev->object_name_lock); 305 - return 0; 306 } 307 - ret = idr_get_new_above(&dev->object_name_idr, obj, 1, 308 - &obj->name); 309 - spin_unlock(&dev->object_name_lock); 310 - if (ret == -EAGAIN) 311 - goto again; 312 313 - if (ret != 0) { 314 - mutex_lock(&dev->struct_mutex); 315 - drm_gem_object_unreference(obj); 316 - mutex_unlock(&dev->struct_mutex); 317 - return ret; 318 - } 319 - 320 - /* 321 - * Leave the reference from the lookup around as the 322 - * name table now holds one 323 - */ 324 - args->name = (uint64_t) obj->name; 325 - 326 - return 0; 327 } 328 329 /** ··· 450 spin_lock(&dev->object_name_lock); 451 if (obj->name) { 452 idr_remove(&dev->object_name_idr, obj->name); 453 spin_unlock(&dev->object_name_lock); 454 /* 455 * The object name held a reference to this object, drop ··· 462 463 } 464 EXPORT_SYMBOL(drm_gem_object_handle_free); 465 466 /** 467 * drm_gem_mmap - memory map routine for GEM objects ··· 543 prot |= _PAGE_CACHE_WC; 544 #endif 545 vma->vm_page_prot = __pgprot(prot); 546 547 vma->vm_file = filp; /* Needed for drm_vm_open() */ 548 drm_vm_open_locked(vma);
··· 104 105 if (drm_mm_init(&mm->offset_manager, DRM_FILE_PAGE_OFFSET_START, 106 DRM_FILE_PAGE_OFFSET_SIZE)) { 107 drm_ht_remove(&mm->offset_hash); 108 + drm_free(mm, sizeof(struct drm_gem_mm), DRM_MEM_MM); 109 return -ENOMEM; 110 } 111 ··· 295 return -EBADF; 296 297 again: 298 + if (idr_pre_get(&dev->object_name_idr, GFP_KERNEL) == 0) { 299 + ret = -ENOMEM; 300 + goto err; 301 + } 302 303 spin_lock(&dev->object_name_lock); 304 + if (!obj->name) { 305 + ret = idr_get_new_above(&dev->object_name_idr, obj, 1, 306 + &obj->name); 307 + args->name = (uint64_t) obj->name; 308 spin_unlock(&dev->object_name_lock); 309 + 310 + if (ret == -EAGAIN) 311 + goto again; 312 + 313 + if (ret != 0) 314 + goto err; 315 + 316 + /* Allocate a reference for the name table. */ 317 + drm_gem_object_reference(obj); 318 + } else { 319 + args->name = (uint64_t) obj->name; 320 + spin_unlock(&dev->object_name_lock); 321 + ret = 0; 322 } 323 324 + err: 325 + mutex_lock(&dev->struct_mutex); 326 + drm_gem_object_unreference(obj); 327 + mutex_unlock(&dev->struct_mutex); 328 + return ret; 329 } 330 331 /** ··· 448 spin_lock(&dev->object_name_lock); 449 if (obj->name) { 450 idr_remove(&dev->object_name_idr, obj->name); 451 + obj->name = 0; 452 spin_unlock(&dev->object_name_lock); 453 /* 454 * The object name held a reference to this object, drop ··· 459 460 } 461 EXPORT_SYMBOL(drm_gem_object_handle_free); 462 + 463 + void drm_gem_vm_open(struct vm_area_struct *vma) 464 + { 465 + struct drm_gem_object *obj = vma->vm_private_data; 466 + 467 + drm_gem_object_reference(obj); 468 + } 469 + EXPORT_SYMBOL(drm_gem_vm_open); 470 + 471 + void drm_gem_vm_close(struct vm_area_struct *vma) 472 + { 473 + struct drm_gem_object *obj = vma->vm_private_data; 474 + struct drm_device *dev = obj->dev; 475 + 476 + mutex_lock(&dev->struct_mutex); 477 + drm_gem_object_unreference(obj); 478 + mutex_unlock(&dev->struct_mutex); 479 + } 480 + EXPORT_SYMBOL(drm_gem_vm_close); 481 + 482 483 /** 484 * drm_gem_mmap - memory map routine for GEM objects ··· 520 prot |= _PAGE_CACHE_WC; 521 #endif 522 vma->vm_page_prot = __pgprot(prot); 523 + 524 + /* Take a ref for this mapping of the object, so that the fault 525 + * handler can dereference the mmap offset's pointer to the object. 526 + * This reference is cleaned up by the corresponding vm_close 527 + * (which should happen whether the vma was created by this call, or 528 + * by a vm_open due to mremap or partial unmap or whatever). 529 + */ 530 + drm_gem_object_reference(obj); 531 532 vma->vm_file = filp; /* Needed for drm_vm_open() */ 533 drm_vm_open_locked(vma);
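The new drm_gem_vm_open()/drm_gem_vm_close() pair above follows the standard rule for objects reachable from a mapping: every VMA that can fault on the object holds one reference, taken at mmap time (or at vm_open, for fork/mremap/partial unmap) and dropped at vm_close. A minimal sketch of the same pattern with a hypothetical kref-counted object instead of a GEM object:

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/mm.h>
    #include <linux/slab.h>

    struct myobj {
            struct kref ref;
    };

    static void myobj_release(struct kref *ref)
    {
            kfree(container_of(ref, struct myobj, ref));
    }

    /* A new VMA now points at the object: give it its own reference. */
    static void my_vm_open(struct vm_area_struct *vma)
    {
            struct myobj *obj = vma->vm_private_data;

            kref_get(&obj->ref);
    }

    /* The VMA is going away: drop the reference it held. */
    static void my_vm_close(struct vm_area_struct *vma)
    {
            struct myobj *obj = vma->vm_private_data;

            kref_put(&obj->ref, myobj_release);
    }

GEM itself uses dev->struct_mutex and its own reference counting rather than a bare kref, but the open/close pairing is the same.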
+2
drivers/gpu/drm/i915/i915_drv.c
··· 94 95 static struct vm_operations_struct i915_gem_vm_ops = { 96 .fault = i915_gem_fault, 97 }; 98 99 static struct drm_driver driver = {
··· 94 95 static struct vm_operations_struct i915_gem_vm_ops = { 96 .fault = i915_gem_fault, 97 + .open = drm_gem_vm_open, 98 + .close = drm_gem_vm_close, 99 }; 100 101 static struct drm_driver driver = {
+2
drivers/gpu/drm/i915/i915_drv.h
··· 184 unsigned int lvds_dither:1; 185 unsigned int lvds_vbt:1; 186 unsigned int int_crt_support:1; 187 188 struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */ 189 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
··· 184 unsigned int lvds_dither:1; 185 unsigned int lvds_vbt:1; 186 unsigned int int_crt_support:1; 187 + unsigned int lvds_use_ssc:1; 188 + int lvds_ssc_freq; 189 190 struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */ 191 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
+75 -44
drivers/gpu/drm/i915/i915_gem.c
··· 607 case -EAGAIN: 608 return VM_FAULT_OOM; 609 case -EFAULT: 610 - case -EBUSY: 611 - DRM_ERROR("can't insert pfn?? fault or busy...\n"); 612 return VM_FAULT_SIGBUS; 613 default: 614 return VM_FAULT_NOPAGE; ··· 680 drm_free(list->map, sizeof(struct drm_map_list), DRM_MEM_DRIVER); 681 682 return ret; 683 } 684 685 /** ··· 780 781 if (!obj_priv->mmap_offset) { 782 ret = i915_gem_create_mmap_offset(obj); 783 - if (ret) 784 return ret; 785 } 786 787 args->offset = obj_priv->mmap_offset; ··· 2276 (int) reloc.offset, 2277 reloc.read_domains, 2278 reloc.write_domain); 2279 return -EINVAL; 2280 } 2281 ··· 2507 if (dev_priv->mm.wedged) { 2508 DRM_ERROR("Execbuf while wedged\n"); 2509 mutex_unlock(&dev->struct_mutex); 2510 - return -EIO; 2511 } 2512 2513 if (dev_priv->mm.suspended) { 2514 DRM_ERROR("Execbuf while VT-switched.\n"); 2515 mutex_unlock(&dev->struct_mutex); 2516 - return -EBUSY; 2517 } 2518 2519 /* Look up object handles */ ··· 2661 2662 i915_verify_inactive(dev, __FILE__, __LINE__); 2663 2664 - /* Copy the new buffer offsets back to the user's exec list. */ 2665 - ret = copy_to_user((struct drm_i915_relocation_entry __user *) 2666 - (uintptr_t) args->buffers_ptr, 2667 - exec_list, 2668 - sizeof(*exec_list) * args->buffer_count); 2669 - if (ret) 2670 - DRM_ERROR("failed to copy %d exec entries " 2671 - "back to user (%d)\n", 2672 - args->buffer_count, ret); 2673 err: 2674 for (i = 0; i < pinned; i++) 2675 i915_gem_object_unpin(object_list[i]); ··· 2669 drm_gem_object_unreference(object_list[i]); 2670 2671 mutex_unlock(&dev->struct_mutex); 2672 2673 pre_mutex_err: 2674 drm_free(object_list, sizeof(*object_list) * args->buffer_count, ··· 2785 if (obj_priv->pin_filp != NULL && obj_priv->pin_filp != file_priv) { 2786 DRM_ERROR("Already pinned in i915_gem_pin_ioctl(): %d\n", 2787 args->handle); 2788 mutex_unlock(&dev->struct_mutex); 2789 return -EINVAL; 2790 } ··· 2918 void i915_gem_free_object(struct drm_gem_object *obj) 2919 { 2920 struct drm_device *dev = obj->dev; 2921 - struct drm_gem_mm *mm = dev->mm_private; 2922 - struct drm_map_list *list; 2923 - struct drm_map *map; 2924 struct drm_i915_gem_object *obj_priv = obj->driver_private; 2925 2926 while (obj_priv->pin_count > 0) ··· 2928 2929 i915_gem_object_unbind(obj); 2930 2931 - list = &obj->map_list; 2932 - drm_ht_remove_item(&mm->offset_hash, &list->hash); 2933 - 2934 - if (list->file_offset_node) { 2935 - drm_mm_put_block(list->file_offset_node); 2936 - list->file_offset_node = NULL; 2937 - } 2938 - 2939 - map = list->map; 2940 - if (map) { 2941 - drm_free(map, sizeof(*map), DRM_MEM_DRIVER); 2942 - list->map = NULL; 2943 - } 2944 2945 drm_free(obj_priv->page_cpu_valid, 1, DRM_MEM_DRIVER); 2946 drm_free(obj->driver_private, 1, DRM_MEM_DRIVER); ··· 3113 if (dev_priv->hw_status_page == NULL) { 3114 DRM_ERROR("Failed to map status page.\n"); 3115 memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3116 drm_gem_object_unreference(obj); 3117 return -EINVAL; 3118 } ··· 3124 DRM_DEBUG("hws offset: 0x%08x\n", dev_priv->status_gfx_addr); 3125 3126 return 0; 3127 } 3128 3129 int ··· 3164 obj = drm_gem_object_alloc(dev, 128 * 1024); 3165 if (obj == NULL) { 3166 DRM_ERROR("Failed to allocate ringbuffer\n"); 3167 return -ENOMEM; 3168 } 3169 obj_priv = obj->driver_private; ··· 3172 ret = i915_gem_object_pin(obj, 4096); 3173 if (ret != 0) { 3174 drm_gem_object_unreference(obj); 3175 return ret; 3176 } 3177 ··· 3190 if (ring->map.handle == NULL) { 3191 DRM_ERROR("Failed to map ringbuffer.\n"); 3192 memset(&dev_priv->ring, 0, sizeof(dev_priv->ring)); 3193 drm_gem_object_unreference(obj); 3194 return -EINVAL; 3195 } 3196 ring->ring_obj = obj; ··· 3272 dev_priv->ring.ring_obj = NULL; 3273 memset(&dev_priv->ring, 0, sizeof(dev_priv->ring)); 3274 3275 - if (dev_priv->hws_obj != NULL) { 3276 - struct drm_gem_object *obj = dev_priv->hws_obj; 3277 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3278 - 3279 - kunmap(obj_priv->page_list[0]); 3280 - i915_gem_object_unpin(obj); 3281 - drm_gem_object_unreference(obj); 3282 - dev_priv->hws_obj = NULL; 3283 - memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3284 - dev_priv->hw_status_page = NULL; 3285 - 3286 - /* Write high address into HWS_PGA when disabling. */ 3287 - I915_WRITE(HWS_PGA, 0x1ffff000); 3288 - } 3289 } 3290 3291 int
··· 607 case -EAGAIN: 608 return VM_FAULT_OOM; 609 case -EFAULT: 610 return VM_FAULT_SIGBUS; 611 default: 612 return VM_FAULT_NOPAGE; ··· 682 drm_free(list->map, sizeof(struct drm_map_list), DRM_MEM_DRIVER); 683 684 return ret; 685 + } 686 + 687 + static void 688 + i915_gem_free_mmap_offset(struct drm_gem_object *obj) 689 + { 690 + struct drm_device *dev = obj->dev; 691 + struct drm_i915_gem_object *obj_priv = obj->driver_private; 692 + struct drm_gem_mm *mm = dev->mm_private; 693 + struct drm_map_list *list; 694 + 695 + list = &obj->map_list; 696 + drm_ht_remove_item(&mm->offset_hash, &list->hash); 697 + 698 + if (list->file_offset_node) { 699 + drm_mm_put_block(list->file_offset_node); 700 + list->file_offset_node = NULL; 701 + } 702 + 703 + if (list->map) { 704 + drm_free(list->map, sizeof(struct drm_map), DRM_MEM_DRIVER); 705 + list->map = NULL; 706 + } 707 + 708 + obj_priv->mmap_offset = 0; 709 } 710 711 /** ··· 758 759 if (!obj_priv->mmap_offset) { 760 ret = i915_gem_create_mmap_offset(obj); 761 + if (ret) { 762 + drm_gem_object_unreference(obj); 763 + mutex_unlock(&dev->struct_mutex); 764 return ret; 765 + } 766 } 767 768 args->offset = obj_priv->mmap_offset; ··· 2251 (int) reloc.offset, 2252 reloc.read_domains, 2253 reloc.write_domain); 2254 + drm_gem_object_unreference(target_obj); 2255 + i915_gem_object_unpin(obj); 2256 return -EINVAL; 2257 } 2258 ··· 2480 if (dev_priv->mm.wedged) { 2481 DRM_ERROR("Execbuf while wedged\n"); 2482 mutex_unlock(&dev->struct_mutex); 2483 + ret = -EIO; 2484 + goto pre_mutex_err; 2485 } 2486 2487 if (dev_priv->mm.suspended) { 2488 DRM_ERROR("Execbuf while VT-switched.\n"); 2489 mutex_unlock(&dev->struct_mutex); 2490 + ret = -EBUSY; 2491 + goto pre_mutex_err; 2492 } 2493 2494 /* Look up object handles */ ··· 2632 2633 i915_verify_inactive(dev, __FILE__, __LINE__); 2634 2635 err: 2636 for (i = 0; i < pinned; i++) 2637 i915_gem_object_unpin(object_list[i]); ··· 2649 drm_gem_object_unreference(object_list[i]); 2650 2651 mutex_unlock(&dev->struct_mutex); 2652 + 2653 + if (!ret) { 2654 + /* Copy the new buffer offsets back to the user's exec list. */ 2655 + ret = copy_to_user((struct drm_i915_relocation_entry __user *) 2656 + (uintptr_t) args->buffers_ptr, 2657 + exec_list, 2658 + sizeof(*exec_list) * args->buffer_count); 2659 + if (ret) 2660 + DRM_ERROR("failed to copy %d exec entries " 2661 + "back to user (%d)\n", 2662 + args->buffer_count, ret); 2663 + } 2664 2665 pre_mutex_err: 2666 drm_free(object_list, sizeof(*object_list) * args->buffer_count, ··· 2753 if (obj_priv->pin_filp != NULL && obj_priv->pin_filp != file_priv) { 2754 DRM_ERROR("Already pinned in i915_gem_pin_ioctl(): %d\n", 2755 args->handle); 2756 + drm_gem_object_unreference(obj); 2757 mutex_unlock(&dev->struct_mutex); 2758 return -EINVAL; 2759 } ··· 2885 void i915_gem_free_object(struct drm_gem_object *obj) 2886 { 2887 struct drm_device *dev = obj->dev; 2888 struct drm_i915_gem_object *obj_priv = obj->driver_private; 2889 2890 while (obj_priv->pin_count > 0) ··· 2898 2899 i915_gem_object_unbind(obj); 2900 2901 + i915_gem_free_mmap_offset(obj); 2902 2903 drm_free(obj_priv->page_cpu_valid, 1, DRM_MEM_DRIVER); 2904 drm_free(obj->driver_private, 1, DRM_MEM_DRIVER); ··· 3095 if (dev_priv->hw_status_page == NULL) { 3096 DRM_ERROR("Failed to map status page.\n"); 3097 memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3098 + i915_gem_object_unpin(obj); 3099 drm_gem_object_unreference(obj); 3100 return -EINVAL; 3101 } ··· 3105 DRM_DEBUG("hws offset: 0x%08x\n", dev_priv->status_gfx_addr); 3106 3107 return 0; 3108 + } 3109 + 3110 + static void 3111 + i915_gem_cleanup_hws(struct drm_device *dev) 3112 + { 3113 + drm_i915_private_t *dev_priv = dev->dev_private; 3114 + struct drm_gem_object *obj = dev_priv->hws_obj; 3115 + struct drm_i915_gem_object *obj_priv = obj->driver_private; 3116 + 3117 + if (dev_priv->hws_obj == NULL) 3118 + return; 3119 + 3120 + kunmap(obj_priv->page_list[0]); 3121 + i915_gem_object_unpin(obj); 3122 + drm_gem_object_unreference(obj); 3123 + dev_priv->hws_obj = NULL; 3124 + memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map)); 3125 + dev_priv->hw_status_page = NULL; 3126 + 3127 + /* Write high address into HWS_PGA when disabling. */ 3128 + I915_WRITE(HWS_PGA, 0x1ffff000); 3129 + } 3130 3131 int ··· 3124 obj = drm_gem_object_alloc(dev, 128 * 1024); 3125 if (obj == NULL) { 3126 DRM_ERROR("Failed to allocate ringbuffer\n"); 3127 + i915_gem_cleanup_hws(dev); 3128 return -ENOMEM; 3129 } 3130 obj_priv = obj->driver_private; ··· 3131 ret = i915_gem_object_pin(obj, 4096); 3132 if (ret != 0) { 3133 drm_gem_object_unreference(obj); 3134 + i915_gem_cleanup_hws(dev); 3135 return ret; 3136 } 3137 ··· 3148 if (ring->map.handle == NULL) { 3149 DRM_ERROR("Failed to map ringbuffer.\n"); 3150 memset(&dev_priv->ring, 0, sizeof(dev_priv->ring)); 3151 + i915_gem_object_unpin(obj); 3152 drm_gem_object_unreference(obj); 3153 + i915_gem_cleanup_hws(dev); 3154 return -EINVAL; 3155 } 3156 ring->ring_obj = obj; ··· 3228 dev_priv->ring.ring_obj = NULL; 3229 memset(&dev_priv->ring, 0, sizeof(dev_priv->ring)); 3230 3231 + i915_gem_cleanup_hws(dev); 3232 } 3233 3234 int
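The execbuffer portion of the hunk above also moves the copy_to_user() of the updated offsets behind an if (!ret) check, so a failed submission no longer publishes half-updated offsets to userspace. The general shape, with hypothetical names:

    #include <linux/errno.h>
    #include <linux/uaccess.h>

    /* Sketch: run the operation first, and only copy results out to
     * userspace once the whole thing has succeeded. */
    static int op_and_publish(void __user *out, const void *results,
                              size_t len, int (*do_op)(void))
    {
            int ret = do_op();

            if (!ret && copy_to_user(out, results, len))
                    ret = -EFAULT;          /* publishing itself failed */
            return ret;
    }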
+2 -4
drivers/gpu/drm/i915/i915_gem_tiling.c
··· 299 } 300 obj_priv->stride = args->stride; 301 302 - mutex_unlock(&dev->struct_mutex); 303 - 304 drm_gem_object_unreference(obj); 305 306 return 0; 307 } ··· 339 DRM_ERROR("unknown tiling mode\n"); 340 } 341 342 - mutex_unlock(&dev->struct_mutex); 343 - 344 drm_gem_object_unreference(obj); 345 346 return 0; 347 }
··· 299 } 300 obj_priv->stride = args->stride; 301 302 drm_gem_object_unreference(obj); 303 + mutex_unlock(&dev->struct_mutex); 304 305 return 0; 306 } ··· 340 DRM_ERROR("unknown tiling mode\n"); 341 } 342 343 drm_gem_object_unreference(obj); 344 + mutex_unlock(&dev->struct_mutex); 345 346 return 0; 347 }
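The tiling-ioctl fix above swaps drm_gem_object_unreference() and mutex_unlock() because, at this point in GEM's history, object unreference must run under dev->struct_mutex. The general rule, sketched with hypothetical names:

    #include <linux/kref.h>
    #include <linux/mutex.h>

    /* Sketch: when the release path relies on a lock for protection, drop
     * the reference first and the lock second -- never the other way. */
    static void put_then_unlock(struct mutex *lock, struct kref *ref,
                                void (*release)(struct kref *))
    {
            /* caller still holds 'lock' from the earlier lookup */
            kref_put(ref, release);         /* release() may run here, locked */
            mutex_unlock(lock);
    }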
+8
drivers/gpu/drm/i915/intel_bios.c
··· 135 if (general) { 136 dev_priv->int_tv_support = general->int_tv_support; 137 dev_priv->int_crt_support = general->int_crt_support; 138 } 139 } 140
··· 135 if (general) { 136 dev_priv->int_tv_support = general->int_tv_support; 137 dev_priv->int_crt_support = general->int_crt_support; 138 + dev_priv->lvds_use_ssc = general->enable_ssc; 139 + 140 + if (dev_priv->lvds_use_ssc) { 141 + if (IS_I855(dev_priv->dev)) 142 + dev_priv->lvds_ssc_freq = general->ssc_freq ? 66 : 48; 143 + else 144 + dev_priv->lvds_ssc_freq = general->ssc_freq ? 100 : 96; 145 + } 146 } 147 } 148
+85 -75
drivers/gpu/drm/i915/intel_display.c
··· 90 #define I9XX_DOT_MAX 400000 91 #define I9XX_VCO_MIN 1400000 92 #define I9XX_VCO_MAX 2800000 93 - #define I9XX_N_MIN 3 94 - #define I9XX_N_MAX 8 95 #define I9XX_M_MIN 70 96 #define I9XX_M_MAX 120 97 #define I9XX_M1_MIN 10 98 - #define I9XX_M1_MAX 20 99 #define I9XX_M2_MIN 5 100 #define I9XX_M2_MAX 9 101 #define I9XX_P_SDVO_DAC_MIN 5 ··· 189 return limit; 190 } 191 192 - /** Derive the pixel clock for the given refclk and divisors for 8xx chips. */ 193 - 194 - static void i8xx_clock(int refclk, intel_clock_t *clock) 195 { 196 clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2); 197 clock->p = clock->p1 * clock->p2; 198 clock->vco = refclk * clock->m / (clock->n + 2); 199 clock->dot = clock->vco / clock->p; 200 - } 201 - 202 - /** Derive the pixel clock for the given refclk and divisors for 9xx chips. */ 203 - 204 - static void i9xx_clock(int refclk, intel_clock_t *clock) 205 - { 206 - clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2); 207 - clock->p = clock->p1 * clock->p2; 208 - clock->vco = refclk * clock->m / (clock->n + 2); 209 - clock->dot = clock->vco / clock->p; 210 - } 211 - 212 - static void intel_clock(struct drm_device *dev, int refclk, 213 - intel_clock_t *clock) 214 - { 215 - if (IS_I9XX(dev)) 216 - i9xx_clock (refclk, clock); 217 - else 218 - i8xx_clock (refclk, clock); 219 } 220 221 /** ··· 217 return false; 218 } 219 220 - #define INTELPllInvalid(s) { /* ErrorF (s) */; return false; } 221 /** 222 * Returns whether the given set of divisors are valid for a given refclk with 223 * the given connectors. ··· 297 clock.p1 <= limit->p1.max; clock.p1++) { 298 int this_err; 299 300 - intel_clock(dev, refclk, &clock); 301 302 if (!intel_PLL_is_valid(crtc, &clock)) 303 continue; ··· 322 udelay(20000); 323 } 324 325 - static void 326 intel_pipe_set_base(struct drm_crtc *crtc, int x, int y, 327 struct drm_framebuffer *old_fb) 328 { ··· 340 int dspstride = (pipe == 0) ? DSPASTRIDE : DSPBSTRIDE; 341 int dspcntr_reg = (pipe == 0) ? DSPACNTR : DSPBCNTR; 342 u32 dspcntr, alignment; 343 344 /* no fb bound */ 345 if (!crtc->fb) { 346 DRM_DEBUG("No FB bound\n"); 347 - return; 348 } 349 350 intel_fb = to_intel_framebuffer(crtc->fb); ··· 366 alignment = 64 * 1024; 367 break; 368 case I915_TILING_X: 369 - if (IS_I9XX(dev)) 370 - alignment = 1024 * 1024; 371 - else 372 - alignment = 512 * 1024; 373 break; 374 case I915_TILING_Y: 375 /* FIXME: Is this true? */ 376 DRM_ERROR("Y tiled not allowed for scan out buffers\n"); 377 - return; 378 default: 379 BUG(); 380 } 381 382 - if (i915_gem_object_pin(intel_fb->obj, alignment)) 383 - return; 384 385 - i915_gem_object_set_to_gtt_domain(intel_fb->obj, 1); 386 - 387 - Start = obj_priv->gtt_offset; 388 - Offset = y * crtc->fb->pitch + x * (crtc->fb->bits_per_pixel / 8); 389 - 390 - I915_WRITE(dspstride, crtc->fb->pitch); 391 392 dspcntr = I915_READ(dspcntr_reg); 393 /* Mask out pixel format bits in case we change it */ ··· 410 break; 411 default: 412 DRM_ERROR("Unknown color depth\n"); 413 - return; 414 } 415 I915_WRITE(dspcntr_reg, dspcntr); 416 417 DRM_DEBUG("Writing base %08lX %08lX %d %d\n", Start, Offset, x, y); 418 if (IS_I965G(dev)) { 419 I915_WRITE(dspbase, Offset); 420 I915_READ(dspbase); ··· 437 intel_fb = to_intel_framebuffer(old_fb); 438 i915_gem_object_unpin(intel_fb->obj); 439 } 440 441 if (!dev->primary->master) 442 - return; 443 444 master_priv = dev->primary->master->driver_priv; 445 if (!master_priv->sarea_priv) 446 - return; 447 448 - switch (pipe) { 449 - case 0: 450 - master_priv->sarea_priv->pipeA_x = x; 451 - master_priv->sarea_priv->pipeA_y = y; 452 - break; 453 - case 1: 454 master_priv->sarea_priv->pipeB_x = x; 455 master_priv->sarea_priv->pipeB_y = y; 456 - break; 457 - default: 458 - DRM_ERROR("Can't update pipe %d in SAREA\n", pipe); 459 - break; 460 } 461 } 462 463 ··· 702 return 1; 703 } 704 705 - static void intel_crtc_mode_set(struct drm_crtc *crtc, 706 - struct drm_display_mode *mode, 707 - struct drm_display_mode *adjusted_mode, 708 - int x, int y, 709 - struct drm_framebuffer *old_fb) 710 { 711 struct drm_device *dev = crtc->dev; 712 struct drm_i915_private *dev_priv = dev->dev_private; ··· 726 int dspsize_reg = (pipe == 0) ? DSPASIZE : DSPBSIZE; 727 int dsppos_reg = (pipe == 0) ? DSPAPOS : DSPBPOS; 728 int pipesrc_reg = (pipe == 0) ? PIPEASRC : PIPEBSRC; 729 - int refclk; 730 intel_clock_t clock; 731 u32 dpll = 0, fp = 0, dspcntr, pipeconf; 732 bool ok, is_sdvo = false, is_dvo = false; 733 bool is_crt = false, is_lvds = false, is_tv = false; 734 struct drm_mode_config *mode_config = &dev->mode_config; 735 struct drm_connector *connector; 736 737 drm_vblank_pre_modeset(dev, pipe); 738 ··· 763 is_crt = true; 764 break; 765 } 766 } 767 768 - if (IS_I9XX(dev)) { 769 refclk = 96000; 770 } else { 771 refclk = 48000; ··· 779 ok = intel_find_best_PLL(crtc, adjusted_mode->clock, refclk, &clock); 780 if (!ok) { 781 DRM_ERROR("Couldn't find PLL settings for mode!\n"); 782 - return; 783 } 784 785 fp = clock.n << 16 | clock.m1 << 8 | clock.m2; ··· 829 } 830 } 831 832 - if (is_tv) { 833 /* XXX: just matching BIOS for now */ 834 - /* dpll |= PLL_REF_INPUT_TVCLKINBC; */ 835 dpll |= 3; 836 - } 837 else 838 dpll |= PLL_REF_INPUT_DREFCLK; 839 ··· 953 I915_WRITE(dspcntr_reg, dspcntr); 954 955 /* Flush the plane changes */ 956 - intel_pipe_set_base(crtc, x, y, old_fb); 957 958 drm_vblank_post_modeset(dev, pipe); 959 } 960 961 /** Loads the palette/gamma unit for the CRTC with the prepared values */ ··· 1030 } 1031 1032 /* we only need to pin inside GTT if cursor is non-phy */ 1033 if (!dev_priv->cursor_needs_physical) { 1034 ret = i915_gem_object_pin(bo, PAGE_SIZE); 1035 if (ret) { 1036 DRM_ERROR("failed to pin cursor bo\n"); 1037 - goto fail; 1038 } 1039 addr = obj_priv->gtt_offset; 1040 } else { 1041 ret = i915_gem_attach_phys_object(dev, bo, (pipe == 0) ? I915_GEM_PHYS_CURSOR_0 : I915_GEM_PHYS_CURSOR_1); 1042 if (ret) { 1043 DRM_ERROR("failed to attach phys object\n"); 1044 - goto fail; 1045 } 1046 addr = obj_priv->phys_obj->handle->busaddr; 1047 } ··· 1062 i915_gem_detach_phys_object(dev, intel_crtc->cursor_bo); 1063 } else 1064 i915_gem_object_unpin(intel_crtc->cursor_bo); 1065 - mutex_lock(&dev->struct_mutex); 1066 drm_gem_object_unreference(intel_crtc->cursor_bo); 1067 - mutex_unlock(&dev->struct_mutex); 1068 } 1069 1070 intel_crtc->cursor_addr = addr; 1071 intel_crtc->cursor_bo = bo; ··· 1072 return 0; 1073 fail: 1074 mutex_lock(&dev->struct_mutex); 1075 drm_gem_object_unreference(bo); 1076 mutex_unlock(&dev->struct_mutex); 1077 return ret; ··· 1300 } 1301 1302 /* XXX: Handle the 100Mhz refclk */ 1303 - i9xx_clock(96000, &clock); 1304 } else { 1305 bool is_lvds = (pipe == 1) && (I915_READ(LVDS) & LVDS_PORT_EN); 1306 ··· 1312 if ((dpll & PLL_REF_INPUT_MASK) == 1313 PLLB_REF_INPUT_SPREADSPECTRUMIN) { 1314 /* XXX: might not be 66MHz */ 1315 - i8xx_clock(66000, &clock); 1316 } else 1317 - i8xx_clock(48000, &clock); 1318 } else { 1319 if (dpll & PLL_P1_DIVIDE_BY_TWO) 1320 clock.p1 = 2; ··· 1327 else 1328 clock.p2 = 2; 1329 1330 - i8xx_clock(48000, &clock); 1331 } 1332 } 1333 ··· 1606 1607 ret = intel_framebuffer_create(dev, mode_cmd, &fb, obj); 1608 if (ret) { 1609 drm_gem_object_unreference(obj); 1610 return NULL; 1611 } 1612
··· 90 #define I9XX_DOT_MAX 400000 91 #define I9XX_VCO_MIN 1400000 92 #define I9XX_VCO_MAX 2800000 93 + #define I9XX_N_MIN 1 94 + #define I9XX_N_MAX 6 95 #define I9XX_M_MIN 70 96 #define I9XX_M_MAX 120 97 #define I9XX_M1_MIN 10 98 + #define I9XX_M1_MAX 22 99 #define I9XX_M2_MIN 5 100 #define I9XX_M2_MAX 9 101 #define I9XX_P_SDVO_DAC_MIN 5 ··· 189 return limit; 190 } 191 192 + static void intel_clock(int refclk, intel_clock_t *clock) 193 { 194 clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2); 195 clock->p = clock->p1 * clock->p2; 196 clock->vco = refclk * clock->m / (clock->n + 2); 197 clock->dot = clock->vco / clock->p; 198 } 199 200 /** ··· 238 return false; 239 } 240 241 + #define INTELPllInvalid(s) do { DRM_DEBUG(s); return false; } while (0) 242 /** 243 * Returns whether the given set of divisors are valid for a given refclk with 244 * the given connectors. ··· 318 clock.p1 <= limit->p1.max; clock.p1++) { 319 int this_err; 320 321 + intel_clock(refclk, &clock); 322 323 if (!intel_PLL_is_valid(crtc, &clock)) 324 continue; ··· 343 udelay(20000); 344 } 345 346 + static int 347 intel_pipe_set_base(struct drm_crtc *crtc, int x, int y, 348 struct drm_framebuffer *old_fb) 349 { ··· 361 int dspstride = (pipe == 0) ? DSPASTRIDE : DSPBSTRIDE; 362 int dspcntr_reg = (pipe == 0) ? DSPACNTR : DSPBCNTR; 363 u32 dspcntr, alignment; 364 + int ret; 365 366 /* no fb bound */ 367 if (!crtc->fb) { 368 DRM_DEBUG("No FB bound\n"); 369 + return 0; 370 + } 371 + 372 + switch (pipe) { 373 + case 0: 374 + case 1: 375 + break; 376 + default: 377 + DRM_ERROR("Can't update pipe %d in SAREA\n", pipe); 378 + return -EINVAL; 379 } 380 381 intel_fb = to_intel_framebuffer(crtc->fb); ··· 377 alignment = 64 * 1024; 378 break; 379 case I915_TILING_X: 380 + /* pin() will align the object as required by fence */ 381 + alignment = 0; 382 break; 383 case I915_TILING_Y: 384 /* FIXME: Is this true? */ 385 DRM_ERROR("Y tiled not allowed for scan out buffers\n"); 386 + return -EINVAL; 387 default: 388 BUG(); 389 } 390 391 + mutex_lock(&dev->struct_mutex); 392 + ret = i915_gem_object_pin(intel_fb->obj, alignment); 393 + if (ret != 0) { 394 + mutex_unlock(&dev->struct_mutex); 395 + return ret; 396 + } 397 398 + ret = i915_gem_object_set_to_gtt_domain(intel_fb->obj, 1); 399 + if (ret != 0) { 400 + i915_gem_object_unpin(intel_fb->obj); 401 + mutex_unlock(&dev->struct_mutex); 402 + return ret; 403 + } 404 405 dspcntr = I915_READ(dspcntr_reg); 406 /* Mask out pixel format bits in case we change it */ ··· 419 break; 420 default: 421 DRM_ERROR("Unknown color depth\n"); 422 + i915_gem_object_unpin(intel_fb->obj); 423 + mutex_unlock(&dev->struct_mutex); 424 + return -EINVAL; 425 } 426 I915_WRITE(dspcntr_reg, dspcntr); 427 428 + Start = obj_priv->gtt_offset; 429 + Offset = y * crtc->fb->pitch + x * (crtc->fb->bits_per_pixel / 8); 430 + 431 DRM_DEBUG("Writing base %08lX %08lX %d %d\n", Start, Offset, x, y); 432 + I915_WRITE(dspstride, crtc->fb->pitch); 433 if (IS_I965G(dev)) { 434 I915_WRITE(dspbase, Offset); 435 I915_READ(dspbase); ··· 440 intel_fb = to_intel_framebuffer(old_fb); 441 i915_gem_object_unpin(intel_fb->obj); 442 } 443 + mutex_unlock(&dev->struct_mutex); 444 445 if (!dev->primary->master) 446 + return 0; 447 448 master_priv = dev->primary->master->driver_priv; 449 if (!master_priv->sarea_priv) 450 + return 0; 451 452 + if (pipe) { 453 master_priv->sarea_priv->pipeB_x = x; 454 master_priv->sarea_priv->pipeB_y = y; 455 + } else { 456 + master_priv->sarea_priv->pipeA_x = x; 457 + master_priv->sarea_priv->pipeA_y = y; 458 } 459 + 460 + return 0; 461 } 462 463 ··· 708 return 1; 709 } 710 711 + static int intel_crtc_mode_set(struct drm_crtc *crtc, 712 + struct drm_display_mode *mode, 713 + struct drm_display_mode *adjusted_mode, 714 + int x, int y, 715 + struct drm_framebuffer *old_fb) 716 { 717 struct drm_device *dev = crtc->dev; 718 struct drm_i915_private *dev_priv = dev->dev_private; ··· 732 int dspsize_reg = (pipe == 0) ? DSPASIZE : DSPBSIZE; 733 int dsppos_reg = (pipe == 0) ? DSPAPOS : DSPBPOS; 734 int pipesrc_reg = (pipe == 0) ? PIPEASRC : PIPEBSRC; 735 + int refclk, num_outputs = 0; 736 intel_clock_t clock; 737 u32 dpll = 0, fp = 0, dspcntr, pipeconf; 738 bool ok, is_sdvo = false, is_dvo = false; 739 bool is_crt = false, is_lvds = false, is_tv = false; 740 struct drm_mode_config *mode_config = &dev->mode_config; 741 struct drm_connector *connector; 742 + int ret; 743 744 drm_vblank_pre_modeset(dev, pipe); 745 ··· 768 is_crt = true; 769 break; 770 } 771 + 772 + num_outputs++; 773 } 774 775 + if (is_lvds && dev_priv->lvds_use_ssc && num_outputs < 2) { 776 + refclk = dev_priv->lvds_ssc_freq * 1000; 777 + DRM_DEBUG("using SSC reference clock of %d MHz\n", refclk / 1000); 778 + } else if (IS_I9XX(dev)) { 779 refclk = 96000; 780 } else { 781 refclk = 48000; ··· 779 ok = intel_find_best_PLL(crtc, adjusted_mode->clock, refclk, &clock); 780 if (!ok) { 781 DRM_ERROR("Couldn't find PLL settings for mode!\n"); 782 + return -EINVAL; 783 } 784 785 fp = clock.n << 16 | clock.m1 << 8 | clock.m2; ··· 829 } 830 } 831 832 + if (is_sdvo && is_tv) 833 + dpll |= PLL_REF_INPUT_TVCLKINBC; 834 + else if (is_tv) 835 /* XXX: just matching BIOS for now */ 836 + /* dpll |= PLL_REF_INPUT_TVCLKINBC; */ 837 dpll |= 3; 838 + else if (is_lvds && dev_priv->lvds_use_ssc && num_outputs < 2) 839 + dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; 840 else 841 dpll |= PLL_REF_INPUT_DREFCLK; 842 ··· 950 I915_WRITE(dspcntr_reg, dspcntr); 951 952 /* Flush the plane changes */ 953 + ret = intel_pipe_set_base(crtc, x, y, old_fb); 954 + if (ret != 0) 955 + return ret; 956 957 drm_vblank_post_modeset(dev, pipe); 958 + 959 + return 0; 960 } 961 962 /** Loads the palette/gamma unit for the CRTC with the prepared values */ ··· 1023 } 1024 1025 /* we only need to pin inside GTT if cursor is non-phy */ 1026 + mutex_lock(&dev->struct_mutex); 1027 if (!dev_priv->cursor_needs_physical) { 1028 ret = i915_gem_object_pin(bo, PAGE_SIZE); 1029 if (ret) { 1030 DRM_ERROR("failed to pin cursor bo\n"); 1031 + goto fail_locked; 1032 } 1033 addr = obj_priv->gtt_offset; 1034 } else { 1035 ret = i915_gem_attach_phys_object(dev, bo, (pipe == 0) ? I915_GEM_PHYS_CURSOR_0 : I915_GEM_PHYS_CURSOR_1); 1036 if (ret) { 1037 DRM_ERROR("failed to attach phys object\n"); 1038 + goto fail_locked; 1039 } 1040 addr = obj_priv->phys_obj->handle->busaddr; 1041 } ··· 1054 i915_gem_detach_phys_object(dev, intel_crtc->cursor_bo); 1055 } else 1056 i915_gem_object_unpin(intel_crtc->cursor_bo); 1057 drm_gem_object_unreference(intel_crtc->cursor_bo); 1058 } 1059 + mutex_unlock(&dev->struct_mutex); 1060 1061 intel_crtc->cursor_addr = addr; 1062 intel_crtc->cursor_bo = bo; ··· 1065 return 0; 1066 fail: 1067 mutex_lock(&dev->struct_mutex); 1068 + fail_locked: 1069 drm_gem_object_unreference(bo); 1070 mutex_unlock(&dev->struct_mutex); 1071 return ret; ··· 1292 } 1293 1294 /* XXX: Handle the 100Mhz refclk */ 1295 + intel_clock(96000, &clock); 1296 } else { 1297 bool is_lvds = (pipe == 1) && (I915_READ(LVDS) & LVDS_PORT_EN); 1298 ··· 1304 if ((dpll & PLL_REF_INPUT_MASK) == 1305 PLLB_REF_INPUT_SPREADSPECTRUMIN) { 1306 /* XXX: might not be 66MHz */ 1307 + intel_clock(66000, &clock); 1308 } else 1309 + intel_clock(48000, &clock); 1310 } else { 1311 if (dpll & PLL_P1_DIVIDE_BY_TWO) 1312 clock.p1 = 2; ··· 1319 else 1320 clock.p2 = 2; 1321 1322 + intel_clock(48000, &clock); 1323 } 1324 } 1325 ··· 1598 1599 ret = intel_framebuffer_create(dev, mode_cmd, &fb, obj); 1600 if (ret) { 1601 + mutex_lock(&dev->struct_mutex); 1602 drm_gem_object_unreference(obj); 1603 + mutex_unlock(&dev->struct_mutex); 1604 return NULL; 1605 } 1606
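intel_pipe_set_base() and intel_crtc_mode_set() go from void to int in the hunk above, and every error path after the pin now unpins and drops struct_mutex on the way out. A reduced sketch of that unwind shape, with all callbacks hypothetical:

    #include <linux/mutex.h>

    /* Sketch: pin, then flip; any failure after the pin must undo it and
     * release the lock before returning the error. */
    static int set_base_sketch(struct mutex *lock, int (*pin)(void),
                               void (*unpin)(void), int (*flip)(void))
    {
            int ret;

            mutex_lock(lock);
            ret = pin();
            if (ret) {
                    mutex_unlock(lock);
                    return ret;
            }
            ret = flip();
            if (ret) {
                    unpin();                /* undo the pin on failure */
                    mutex_unlock(lock);
                    return ret;
            }
            mutex_unlock(lock);
            return 0;
    }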
+5 -3
drivers/gpu/drm/i915/intel_fb.c
··· 473 ret = intel_framebuffer_create(dev, &mode_cmd, &fb, fbo); 474 if (ret) { 475 DRM_ERROR("failed to allocate fb.\n"); 476 - goto out_unref; 477 } 478 479 list_add(&fb->filp_head, &dev->mode_config.fb_kernel_list); ··· 484 info = framebuffer_alloc(sizeof(struct intelfb_par), device); 485 if (!info) { 486 ret = -ENOMEM; 487 - goto out_unref; 488 } 489 490 par = info->par; ··· 513 size); 514 if (!info->screen_base) { 515 ret = -ENOSPC; 516 - goto out_unref; 517 } 518 info->screen_size = size; 519 ··· 608 mutex_unlock(&dev->struct_mutex); 609 return 0; 610 611 out_unref: 612 drm_gem_object_unreference(fbo); 613 mutex_unlock(&dev->struct_mutex);
··· 473 ret = intel_framebuffer_create(dev, &mode_cmd, &fb, fbo); 474 if (ret) { 475 DRM_ERROR("failed to allocate fb.\n"); 476 + goto out_unpin; 477 } 478 479 list_add(&fb->filp_head, &dev->mode_config.fb_kernel_list); ··· 484 info = framebuffer_alloc(sizeof(struct intelfb_par), device); 485 if (!info) { 486 ret = -ENOMEM; 487 + goto out_unpin; 488 } 489 490 par = info->par; ··· 513 size); 514 if (!info->screen_base) { 515 ret = -ENOSPC; 516 + goto out_unpin; 517 } 518 info->screen_size = size; 519 ··· 608 mutex_unlock(&dev->struct_mutex); 609 return 0; 610 611 + out_unpin: 612 + i915_gem_object_unpin(fbo); 613 out_unref: 614 drm_gem_object_unreference(fbo); 615 mutex_unlock(&dev->struct_mutex);
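The intelfb fix above adds an out_unpin label so that failures after the pin unwind it before falling through to out_unref. This is the usual layered-goto cleanup idiom; a self-contained sketch (all the step functions are placeholders):

    /* Sketch of layered-goto cleanup: each label undoes exactly one more
     * acquisition, so any failure point unwinds everything taken so far. */
    static int take_ref(void)  { return 0; }
    static void drop_ref(void) { }
    static int do_pin(void)    { return 0; }
    static void do_unpin(void) { }
    static int do_map(void)    { return -1; }   /* pretend this fails */

    static int setup_sketch(void)
    {
            int ret;

            ret = take_ref();
            if (ret)
                    goto out;
            ret = do_pin();
            if (ret)
                    goto out_unref;
            ret = do_map();
            if (ret)
                    goto out_unpin;         /* new label: unpin, then unref */
            return 0;

    out_unpin:
            do_unpin();
    out_unref:
            drop_ref();
    out:
            return ret;
    }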
-2
drivers/gpu/drm/i915/intel_lvds.c
··· 481 if (dev_priv->panel_fixed_mode) { 482 dev_priv->panel_fixed_mode->type |= 483 DRM_MODE_TYPE_PREFERRED; 484 - drm_mode_probed_add(connector, 485 - dev_priv->panel_fixed_mode); 486 goto out; 487 } 488 }
··· 481 if (dev_priv->panel_fixed_mode) { 482 dev_priv->panel_fixed_mode->type |= 483 DRM_MODE_TYPE_PREFERRED; 484 goto out; 485 } 486 }
+1 -1
drivers/gpu/drm/i915/intel_sdvo.c
··· 193 194 #define SDVO_CMD_NAME_ENTRY(cmd) {cmd, #cmd} 195 /** Mapping of command numbers to names, for debug output */ 196 - const static struct _sdvo_cmd_name { 197 u8 cmd; 198 char *name; 199 } sdvo_cmd_names[] = {
··· 193 194 #define SDVO_CMD_NAME_ENTRY(cmd) {cmd, #cmd} 195 /** Mapping of command numbers to names, for debug output */ 196 + static const struct _sdvo_cmd_name { 197 u8 cmd; 198 char *name; 199 } sdvo_cmd_names[] = {
+1 -1
drivers/gpu/drm/i915/intel_tv.c
··· 411 * These values account for -1s required. 412 */ 413 414 - const static struct tv_mode tv_modes[] = { 415 { 416 .name = "NTSC-M", 417 .clock = 107520,
··· 411 * These values account for -1s required. 412 */ 413 414 + static const struct tv_mode tv_modes[] = { 415 { 416 .name = "NTSC-M", 417 .clock = 107520,
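Both the SDVO and TV tables get the same one-line fix: const static becomes static const. The two orderings mean the same thing, but C99 calls a storage-class specifier that is not first an obsolescent feature, and gcc can warn about it (-Wold-style-declaration):

    const static int answer_old = 42;   /* legal, but obsolescent order */
    static const int answer_new = 42;   /* conventional: storage class first */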
+15 -6
drivers/gpu/drm/radeon/radeon_cp.c
··· 557 } 558 559 static void radeon_cp_init_ring_buffer(struct drm_device * dev, 560 - drm_radeon_private_t * dev_priv) 561 { 562 u32 ring_start, cur_read_ptr; 563 u32 tmp; 564 ··· 678 679 dev_priv->scratch[2] = 0; 680 RADEON_WRITE(RADEON_LAST_CLEAR_REG, 0); 681 682 radeon_do_wait_for_idle(dev_priv); 683 ··· 1225 } 1226 1227 radeon_cp_load_microcode(dev_priv); 1228 - radeon_cp_init_ring_buffer(dev, dev_priv); 1229 1230 dev_priv->last_buf = 0; 1231 ··· 1291 * 1292 * Charl P. Botha <http://cpbotha.net> 1293 */ 1294 - static int radeon_do_resume_cp(struct drm_device * dev) 1295 { 1296 drm_radeon_private_t *dev_priv = dev->dev_private; 1297 ··· 1314 } 1315 1316 radeon_cp_load_microcode(dev_priv); 1317 - radeon_cp_init_ring_buffer(dev, dev_priv); 1318 1319 radeon_do_engine_reset(dev); 1320 radeon_irq_set_state(dev, RADEON_SW_INT_ENABLE, 1); ··· 1489 */ 1490 int radeon_cp_resume(struct drm_device *dev, void *data, struct drm_file *file_priv) 1491 { 1492 - 1493 - return radeon_do_resume_cp(dev); 1494 } 1495 1496 int radeon_engine_reset(struct drm_device *dev, void *data, struct drm_file *file_priv)
··· 557 } 558 559 static void radeon_cp_init_ring_buffer(struct drm_device * dev, 560 + drm_radeon_private_t *dev_priv, 561 + struct drm_file *file_priv) 562 { 563 + struct drm_radeon_master_private *master_priv; 564 u32 ring_start, cur_read_ptr; 565 u32 tmp; 566 ··· 676 677 dev_priv->scratch[2] = 0; 678 RADEON_WRITE(RADEON_LAST_CLEAR_REG, 0); 679 + 680 + /* reset sarea copies of these */ 681 + master_priv = file_priv->master->driver_priv; 682 + if (master_priv->sarea_priv) { 683 + master_priv->sarea_priv->last_frame = 0; 684 + master_priv->sarea_priv->last_dispatch = 0; 685 + master_priv->sarea_priv->last_clear = 0; 686 + } 687 688 radeon_do_wait_for_idle(dev_priv); 689 ··· 1215 } 1216 1217 radeon_cp_load_microcode(dev_priv); 1218 + radeon_cp_init_ring_buffer(dev, dev_priv, file_priv); 1219 1220 dev_priv->last_buf = 0; 1221 ··· 1281 * 1282 * Charl P. Botha <http://cpbotha.net> 1283 */ 1284 + static int radeon_do_resume_cp(struct drm_device *dev, struct drm_file *file_priv) 1285 { 1286 drm_radeon_private_t *dev_priv = dev->dev_private; 1287 ··· 1304 } 1305 1306 radeon_cp_load_microcode(dev_priv); 1307 + radeon_cp_init_ring_buffer(dev, dev_priv, file_priv); 1308 1309 radeon_do_engine_reset(dev); 1310 radeon_irq_set_state(dev, RADEON_SW_INT_ENABLE, 1); ··· 1479 */ 1480 int radeon_cp_resume(struct drm_device *dev, void *data, struct drm_file *file_priv) 1481 { 1482 + return radeon_do_resume_cp(dev, file_priv); 1483 } 1484 1485 int radeon_engine_reset(struct drm_device *dev, void *data, struct drm_file *file_priv)
+7 -6
drivers/hid/hid-core.c
··· 1300 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS3_CONTROLLER) }, 1301 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_VAIO_VGX_MOUSE) }, 1302 { HID_USB_DEVICE(USB_VENDOR_ID_SUNPLUS, USB_DEVICE_ID_SUNPLUS_WDESKTOP) }, 1303 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) }, 1304 1305 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, 0x030c) }, 1306 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) }, ··· 1611 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0002) }, 1612 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0003) }, 1613 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) }, 1614 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD) }, 1615 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD2) }, 1616 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD3) }, ··· 1619 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD5) }, 1620 { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY1) }, 1621 { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY2) }, 1622 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb300) }, 1623 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb304) }, 1624 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb651) }, 1625 - { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) }, 1626 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_LABPRO) }, 1627 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_GOTEMP) }, 1628 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_SKIP) }, ··· 1629 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_1_PHIDGETSERVO_20) }, 1630 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_8_8_4_IF_KIT) }, 1631 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 1632 - { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0005) }, 1633 - { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0030) }, 1634 { } 1635 }; 1636
··· 1300 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS3_CONTROLLER) }, 1301 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_VAIO_VGX_MOUSE) }, 1302 { HID_USB_DEVICE(USB_VENDOR_ID_SUNPLUS, USB_DEVICE_ID_SUNPLUS_WDESKTOP) }, 1303 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb300) }, 1304 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb304) }, 1305 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb651) }, 1306 + { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) }, 1307 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) }, 1308 + { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0005) }, 1309 + { HID_USB_DEVICE(USB_VENDOR_ID_ZEROPLUS, 0x0030) }, 1310 1311 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, 0x030c) }, 1312 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) }, ··· 1605 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0002) }, 1606 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0003) }, 1607 { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) }, 1608 + { HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) }, 1609 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD) }, 1610 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD2) }, 1611 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD3) }, ··· 1612 { HID_USB_DEVICE(USB_VENDOR_ID_SOUNDGRAPH, USB_DEVICE_ID_SOUNDGRAPH_IMON_LCD5) }, 1613 { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY1) }, 1614 { HID_USB_DEVICE(USB_VENDOR_ID_TENX, USB_DEVICE_ID_TENX_IBUDDY2) }, 1615 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_LABPRO) }, 1616 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_GOTEMP) }, 1617 { HID_USB_DEVICE(USB_VENDOR_ID_VERNIER, USB_DEVICE_ID_VERNIER_SKIP) }, ··· 1626 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_1_PHIDGETSERVO_20) }, 1627 { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_8_8_4_IF_KIT) }, 1628 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 1629 { } 1630 }; 1631
+3
drivers/hid/hid-ids.h
··· 348 #define USB_VENDOR_ID_PLAYDOTCOM 0x0b43 349 #define USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII 0x0003 350 351 #define USB_VENDOR_ID_SAITEK 0x06a3 352 #define USB_DEVICE_ID_SAITEK_RUMBLEPAD 0xff17 353
··· 348 #define USB_VENDOR_ID_PLAYDOTCOM 0x0b43 349 #define USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII 0x0003 350 351 + #define USB_VENDOR_ID_POWERCOM 0x0d9f 352 + #define USB_DEVICE_ID_POWERCOM_UPS 0x0002 353 + 354 #define USB_VENDOR_ID_SAITEK 0x06a3 355 #define USB_DEVICE_ID_SAITEK_RUMBLEPAD 0xff17 356
+9 -5
drivers/hid/hidraw.c
··· 267 default: 268 { 269 struct hid_device *hid = dev->hid; 270 - if (_IOC_TYPE(cmd) != 'H' || _IOC_DIR(cmd) != _IOC_READ) 271 - return -EINVAL; 272 273 if (_IOC_NR(cmd) == _IOC_NR(HIDIOCGRAWNAME(0))) { 274 int len; ··· 279 len = strlen(hid->name) + 1; 280 if (len > _IOC_SIZE(cmd)) 281 len = _IOC_SIZE(cmd); 282 - return copy_to_user(user_arg, hid->name, len) ? 283 -EFAULT : len; 284 } 285 286 if (_IOC_NR(cmd) == _IOC_NR(HIDIOCGRAWPHYS(0))) { ··· 291 len = strlen(hid->phys) + 1; 292 if (len > _IOC_SIZE(cmd)) 293 len = _IOC_SIZE(cmd); 294 - return copy_to_user(user_arg, hid->phys, len) ? 295 -EFAULT : len; 296 } 297 } 298 299 - ret = -ENOTTY; 300 } 301 unlock_kernel(); 302 return ret;
··· 267 default: 268 { 269 struct hid_device *hid = dev->hid; 270 + if (_IOC_TYPE(cmd) != 'H' || _IOC_DIR(cmd) != _IOC_READ) { 271 + ret = -EINVAL; 272 + break; 273 + } 274 275 if (_IOC_NR(cmd) == _IOC_NR(HIDIOCGRAWNAME(0))) { 276 int len; ··· 277 len = strlen(hid->name) + 1; 278 if (len > _IOC_SIZE(cmd)) 279 len = _IOC_SIZE(cmd); 280 + ret = copy_to_user(user_arg, hid->name, len) ? 281 -EFAULT : len; 282 + break; 283 } 284 285 if (_IOC_NR(cmd) == _IOC_NR(HIDIOCGRAWPHYS(0))) { ··· 288 len = strlen(hid->phys) + 1; 289 if (len > _IOC_SIZE(cmd)) 290 len = _IOC_SIZE(cmd); 291 + ret = copy_to_user(user_arg, hid->phys, len) ? 292 -EFAULT : len; 293 + break; 294 } 295 } 296 297 + ret = -ENOTTY; 298 } 299 unlock_kernel(); 300 return ret;
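The hidraw change above converts early returns inside the ioctl switch into "ret = ...; break;" so control always reaches the unlock_kernel() at the bottom; the old paths returned with the big kernel lock still held. A reduced sketch of the pattern (using the era's BKL calls):

    #include <linux/errno.h>
    #include <linux/smp_lock.h>

    /* Sketch: inside a locked region, record the status and break instead
     * of returning, so the single unlock at the end always executes. */
    static long ioctl_sketch(unsigned int cmd)
    {
            long ret = 0;

            lock_kernel();
            switch (cmd) {
            case 0:
                    ret = -EINVAL;  /* was: return -EINVAL (leaked the BKL) */
                    break;
            default:
                    ret = -ENOTTY;
            }
            unlock_kernel();
            return ret;
    }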
+2 -2
drivers/hwmon/f71882fg.c
··· 1872 1873 devid = superio_inw(sioaddr, SIO_REG_MANID); 1874 if (devid != SIO_FINTEK_ID) { 1875 - printk(KERN_INFO DRVNAME ": Not a Fintek device\n"); 1876 goto exit; 1877 } 1878 ··· 1932 res.name = f71882fg_pdev->name; 1933 err = acpi_check_resource_conflict(&res); 1934 if (err) 1935 - return err; 1936 1937 err = platform_device_add_resources(f71882fg_pdev, &res, 1); 1938 if (err) {
··· 1872 1873 devid = superio_inw(sioaddr, SIO_REG_MANID); 1874 if (devid != SIO_FINTEK_ID) { 1875 + pr_debug(DRVNAME ": Not a Fintek device\n"); 1876 goto exit; 1877 } 1878 ··· 1932 res.name = f71882fg_pdev->name; 1933 err = acpi_check_resource_conflict(&res); 1934 if (err) 1935 + goto exit_device_put; 1936 1937 err = platform_device_add_resources(f71882fg_pdev, &res, 1); 1938 if (err) {
+81 -4
drivers/hwmon/hp_accel.c
··· 166 }, \ 167 .driver_data = &lis3lv02d_axis_##_axis \ 168 } 169 static struct dmi_system_id lis3lv02d_dmi_ids[] = { 170 /* product names are truncated to match all kinds of a same model */ 171 AXIS_DMI_MATCH("NC64x0", "HP Compaq nc64", x_inverted), ··· 191 AXIS_DMI_MATCH("NC673x", "HP Compaq 673", xy_rotated_left_usd), 192 AXIS_DMI_MATCH("NC651xx", "HP Compaq 651", xy_rotated_right), 193 AXIS_DMI_MATCH("NC671xx", "HP Compaq 671", xy_swap_yz_inverted), 194 { NULL, } 195 /* Laptop models without axis info (yet): 196 * "NC6910" "HP Compaq 6910" ··· 235 .set_brightness = hpled_set, 236 }; 237 238 static int lis3lv02d_add(struct acpi_device *device) 239 { 240 - u8 val; 241 int ret; 242 243 if (!device) ··· 291 strcpy(acpi_device_class(device), ACPI_MDPS_CLASS); 292 device->driver_data = &adev; 293 294 - lis3lv02d_acpi_read(device->handle, WHO_AM_I, &val); 295 - if ((val != LIS3LV02DL_ID) && (val != LIS302DL_ID)) { 296 printk(KERN_ERR DRIVER_NAME 297 - ": Accelerometer chip not LIS3LV02D{L,Q}\n"); 298 } 299 300 /* If possible use a "standard" axes order */ ··· 320 ret = led_classdev_register(NULL, &hpled_led.led_classdev); 321 if (ret) 322 return ret; 323 324 ret = lis3lv02d_init_device(&adev); 325 if (ret) {
··· 166 }, \ 167 .driver_data = &lis3lv02d_axis_##_axis \ 168 } 169 + 170 + #define AXIS_DMI_MATCH2(_ident, _class1, _name1, \ 171 + _class2, _name2, \ 172 + _axis) { \ 173 + .ident = _ident, \ 174 + .callback = lis3lv02d_dmi_matched, \ 175 + .matches = { \ 176 + DMI_MATCH(DMI_##_class1, _name1), \ 177 + DMI_MATCH(DMI_##_class2, _name2), \ 178 + }, \ 179 + .driver_data = &lis3lv02d_axis_##_axis \ 180 + } 181 static struct dmi_system_id lis3lv02d_dmi_ids[] = { 182 /* product names are truncated to match all kinds of a same model */ 183 AXIS_DMI_MATCH("NC64x0", "HP Compaq nc64", x_inverted), ··· 179 AXIS_DMI_MATCH("NC673x", "HP Compaq 673", xy_rotated_left_usd), 180 AXIS_DMI_MATCH("NC651xx", "HP Compaq 651", xy_rotated_right), 181 AXIS_DMI_MATCH("NC671xx", "HP Compaq 671", xy_swap_yz_inverted), 182 + /* Intel-based HP Pavilion dv5 */ 183 + AXIS_DMI_MATCH2("HPDV5_I", 184 + PRODUCT_NAME, "HP Pavilion dv5", 185 + BOARD_NAME, "3603", 186 + x_inverted), 187 + /* AMD-based HP Pavilion dv5 */ 188 + AXIS_DMI_MATCH2("HPDV5_A", 189 + PRODUCT_NAME, "HP Pavilion dv5", 190 + BOARD_NAME, "3600", 191 + y_inverted), 192 { NULL, } 193 /* Laptop models without axis info (yet): 194 * "NC6910" "HP Compaq 6910" ··· 213 .set_brightness = hpled_set, 214 }; 215 216 + static acpi_status 217 + lis3lv02d_get_resource(struct acpi_resource *resource, void *context) 218 + { 219 + if (resource->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) { 220 + struct acpi_resource_extended_irq *irq; 221 + u32 *device_irq = context; 222 + 223 + irq = &resource->data.extended_irq; 224 + *device_irq = irq->interrupts[0]; 225 + } 226 + 227 + return AE_OK; 228 + } 229 + 230 + static void lis3lv02d_enum_resources(struct acpi_device *device) 231 + { 232 + acpi_status status; 233 + 234 + status = acpi_walk_resources(device->handle, METHOD_NAME__CRS, 235 + lis3lv02d_get_resource, &adev.irq); 236 + if (ACPI_FAILURE(status)) 237 + printk(KERN_DEBUG DRIVER_NAME ": Error getting resources\n"); 238 + } 239 + 240 + static s16 lis3lv02d_read_16(acpi_handle handle, int reg) 241 + { 242 + u8 lo, hi; 243 + 244 + adev.read(handle, reg - 1, &lo); 245 + adev.read(handle, reg, &hi); 246 + /* In "12 bit right justified" mode, bit 6, bit 7, bit 8 = bit 5 */ 247 + return (s16)((hi << 8) | lo); 248 + } 249 + 250 + static s16 lis3lv02d_read_8(acpi_handle handle, int reg) 251 + { 252 + s8 lo; 253 + adev.read(handle, reg, &lo); 254 + return lo; 255 + } 256 + 257 static int lis3lv02d_add(struct acpi_device *device) 258 { 259 int ret; 260 261 if (!device) ··· 229 strcpy(acpi_device_class(device), ACPI_MDPS_CLASS); 230 device->driver_data = &adev; 231 232 + lis3lv02d_acpi_read(device->handle, WHO_AM_I, &adev.whoami); 233 + switch (adev.whoami) { 234 + case LIS_DOUBLE_ID: 235 + printk(KERN_INFO DRIVER_NAME ": 2-byte sensor found\n"); 236 + adev.read_data = lis3lv02d_read_16; 237 + adev.mdps_max_val = 2048; 238 + break; 239 + case LIS_SINGLE_ID: 240 + printk(KERN_INFO DRIVER_NAME ": 1-byte sensor found\n"); 241 + adev.read_data = lis3lv02d_read_8; 242 + adev.mdps_max_val = 128; 243 + break; 244 + default: 245 printk(KERN_ERR DRIVER_NAME 246 + ": unknown sensor type 0x%X\n", adev.whoami); 247 + return -EINVAL; 248 } 249 250 /* If possible use a "standard" axes order */ ··· 246 ret = led_classdev_register(NULL, &hpled_led.led_classdev); 247 if (ret) 248 return ret; 249 + 250 + /* obtain IRQ number of our device from ACPI */ 251 + lis3lv02d_enum_resources(adev.device); 252 253 ret = lis3lv02d_init_device(&adev); 254 if (ret) {
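hp_accel's probe now reads WHO_AM_I once and picks the register width and full-scale range for the device's lifetime. A reduced sketch of that dispatch (the IDs and values mirror the patch; the read stubs are placeholders):

    #include <linux/errno.h>
    #include <linux/types.h>

    struct sensor_ops {
            s16 (*read_data)(int reg);
            int mdps_max_val;
    };

    static s16 read_8(int reg)  { return 0; }   /* 1-byte register read */
    static s16 read_16(int reg) { return 0; }   /* 2-byte register read */

    static int sensor_setup(struct sensor_ops *ops, u8 whoami)
    {
            switch (whoami) {
            case 0x3A:                  /* LIS_DOUBLE_ID: 2-byte registers */
                    ops->read_data = read_16;
                    ops->mdps_max_val = 2048;
                    return 0;
            case 0x3B:                  /* LIS_SINGLE_ID: 1-byte registers */
                    ops->read_data = read_8;
                    ops->mdps_max_val = 128;
                    return 0;
            default:
                    return -EINVAL;     /* unknown sensor type */
            }
    }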
+160 -35
drivers/hwmon/lis3lv02d.c
··· 3 * 4 * Copyright (C) 2007-2008 Yan Burman 5 * Copyright (C) 2008 Eric Piel 6 - * Copyright (C) 2008 Pavel Machek 7 * 8 * This program is free software; you can redistribute it and/or modify 9 * it under the terms of the GNU General Public License as published by ··· 35 #include <linux/poll.h> 36 #include <linux/freezer.h> 37 #include <linux/uaccess.h> 38 #include <acpi/acpi_drivers.h> 39 #include <asm/atomic.h> 40 #include "lis3lv02d.h" ··· 53 * joystick. 54 */ 55 56 - /* Maximum value our axis may get for the input device (signed 12 bits) */ 57 - #define MDPS_MAX_VAL 2048 58 59 - struct acpi_lis3lv02d adev; 60 EXPORT_SYMBOL_GPL(adev); 61 62 static int lis3lv02d_add_fs(struct acpi_device *device); 63 - 64 - static s16 lis3lv02d_read_16(acpi_handle handle, int reg) 65 - { 66 - u8 lo, hi; 67 - 68 - adev.read(handle, reg, &lo); 69 - adev.read(handle, reg + 1, &hi); 70 - /* In "12 bit right justified" mode, bit 6, bit 7, bit 8 = bit 5 */ 71 - return (s16)((hi << 8) | lo); 72 - } 73 74 /** 75 * lis3lv02d_get_axis - For the given axis, give the value converted ··· 89 { 90 int position[3]; 91 92 - position[0] = lis3lv02d_read_16(handle, OUTX_L); 93 - position[1] = lis3lv02d_read_16(handle, OUTY_L); 94 - position[2] = lis3lv02d_read_16(handle, OUTZ_L); 95 96 *x = lis3lv02d_get_axis(adev.ac.x, position); 97 *y = lis3lv02d_get_axis(adev.ac.y, position); ··· 101 void lis3lv02d_poweroff(acpi_handle handle) 102 { 103 adev.is_on = 0; 104 - /* disable X,Y,Z axis and power down */ 105 - adev.write(handle, CTRL_REG1, 0x00); 106 } 107 EXPORT_SYMBOL_GPL(lis3lv02d_poweroff); 108 109 void lis3lv02d_poweron(acpi_handle handle) 110 { 111 - u8 val; 112 - 113 adev.is_on = 1; 114 adev.init(handle); 115 - adev.write(handle, FF_WU_CFG, 0); 116 - /* 117 - * BDU: LSB and MSB values are not updated until both have been read. 118 - * So the value read will always be correct. 119 - * IEN: Interrupt for free-fall and DD, not for data-ready. 120 - */ 121 - adev.read(handle, CTRL_REG2, &val); 122 - val |= CTRL2_BDU | CTRL2_IEN; 123 - adev.write(handle, CTRL_REG2, val); 124 } 125 EXPORT_SYMBOL_GPL(lis3lv02d_poweron); 126 ··· 139 lis3lv02d_poweroff(dev->device->handle); 140 mutex_unlock(&dev->lock); 141 } 142 143 /** 144 * lis3lv02d_joystick_kthread - Kthread polling function ··· 315 lis3lv02d_decrease_use(&adev); 316 } 317 318 - 319 static inline void lis3lv02d_calibrate_joystick(void) 320 { 321 lis3lv02d_get_xyz(adev.device->handle, &adev.xcalib, &adev.ycalib, &adev.zcalib); ··· 342 adev.idev->close = lis3lv02d_joystick_close; 343 344 set_bit(EV_ABS, adev.idev->evbit); 345 - input_set_abs_params(adev.idev, ABS_X, -MDPS_MAX_VAL, MDPS_MAX_VAL, 3, 3); 346 - input_set_abs_params(adev.idev, ABS_Y, -MDPS_MAX_VAL, MDPS_MAX_VAL, 3, 3); 347 - input_set_abs_params(adev.idev, ABS_Z, -MDPS_MAX_VAL, MDPS_MAX_VAL, 3, 3); 348 349 err = input_register_device(adev.idev); 350 if (err) { ··· 361 if (!adev.idev) 362 return; 363 364 input_unregister_device(adev.idev); 365 adev.idev = NULL; 366 } ··· 380 if (lis3lv02d_joystick_enable()) 381 printk(KERN_ERR DRIVER_NAME ": joystick initialization failed\n"); 382 383 lis3lv02d_decrease_use(dev); 384 return 0; 385 } ··· 476 EXPORT_SYMBOL_GPL(lis3lv02d_remove_fs); 477 478 MODULE_DESCRIPTION("ST LIS3LV02Dx three-axis digital accelerometer driver"); 479 - MODULE_AUTHOR("Yan Burman and Eric Piel"); 480 MODULE_LICENSE("GPL"); 481
··· 3 * 4 * Copyright (C) 2007-2008 Yan Burman 5 * Copyright (C) 2008 Eric Piel 6 + * Copyright (C) 2008-2009 Pavel Machek 7 * 8 * This program is free software; you can redistribute it and/or modify 9 * it under the terms of the GNU General Public License as published by ··· 35 #include <linux/poll.h> 36 #include <linux/freezer.h> 37 #include <linux/uaccess.h> 38 + #include <linux/miscdevice.h> 39 #include <acpi/acpi_drivers.h> 40 #include <asm/atomic.h> 41 #include "lis3lv02d.h" ··· 52 * joystick. 53 */ 54 55 + struct acpi_lis3lv02d adev = { 56 + .misc_wait = __WAIT_QUEUE_HEAD_INITIALIZER(adev.misc_wait), 57 + }; 58 59 EXPORT_SYMBOL_GPL(adev); 60 61 static int lis3lv02d_add_fs(struct acpi_device *device); 62 63 /** 64 * lis3lv02d_get_axis - For the given axis, give the value converted ··· 98 { 99 int position[3]; 100 101 + position[0] = adev.read_data(handle, OUTX); 102 + position[1] = adev.read_data(handle, OUTY); 103 + position[2] = adev.read_data(handle, OUTZ); 104 105 *x = lis3lv02d_get_axis(adev.ac.x, position); 106 *y = lis3lv02d_get_axis(adev.ac.y, position); ··· 110 void lis3lv02d_poweroff(acpi_handle handle) 111 { 112 adev.is_on = 0; 113 } 114 EXPORT_SYMBOL_GPL(lis3lv02d_poweroff); 115 116 void lis3lv02d_poweron(acpi_handle handle) 117 { 118 adev.is_on = 1; 119 adev.init(handle); 120 } 121 EXPORT_SYMBOL_GPL(lis3lv02d_poweron); 122 ··· 161 lis3lv02d_poweroff(dev->device->handle); 162 mutex_unlock(&dev->lock); 163 } 164 + 165 + static irqreturn_t lis302dl_interrupt(int irq, void *dummy) 166 + { 167 + /* 168 + * Be careful: on some HP laptops the bios force DD when on battery and 169 + * the lid is closed. This leads to interrupts as soon as a little move 170 + * is done. 171 + */ 172 + atomic_inc(&adev.count); 173 + 174 + wake_up_interruptible(&adev.misc_wait); 175 + kill_fasync(&adev.async_queue, SIGIO, POLL_IN); 176 + return IRQ_HANDLED; 177 + } 178 + 179 + static int lis3lv02d_misc_open(struct inode *inode, struct file *file) 180 + { 181 + int ret; 182 + 183 + if (test_and_set_bit(0, &adev.misc_opened)) 184 + return -EBUSY; /* already open */ 185 + 186 + atomic_set(&adev.count, 0); 187 + 188 + /* 189 + * The sensor can generate interrupts for free-fall and direction 190 + * detection (distinguishable with FF_WU_SRC and DD_SRC) but to keep 191 + * the things simple and _fast_ we activate it only for free-fall, so 192 + * no need to read register (very slow with ACPI). For the same reason, 193 + * we forbid shared interrupts. 194 + * 195 + * IRQF_TRIGGER_RISING seems pointless on HP laptops because the 196 + * io-apic is not configurable (and generates a warning) but I keep it 197 + * in case of support for other hardware. 
198 + */ 199 + ret = request_irq(adev.irq, lis302dl_interrupt, IRQF_TRIGGER_RISING, 200 + DRIVER_NAME, &adev); 201 + 202 + if (ret) { 203 + clear_bit(0, &adev.misc_opened); 204 + printk(KERN_ERR DRIVER_NAME ": IRQ%d allocation failed\n", adev.irq); 205 + return -EBUSY; 206 + } 207 + lis3lv02d_increase_use(&adev); 208 + printk("lis3: registered interrupt %d\n", adev.irq); 209 + return 0; 210 + } 211 + 212 + static int lis3lv02d_misc_release(struct inode *inode, struct file *file) 213 + { 214 + fasync_helper(-1, file, 0, &adev.async_queue); 215 + lis3lv02d_decrease_use(&adev); 216 + free_irq(adev.irq, &adev); 217 + clear_bit(0, &adev.misc_opened); /* release the device */ 218 + return 0; 219 + } 220 + 221 + static ssize_t lis3lv02d_misc_read(struct file *file, char __user *buf, 222 + size_t count, loff_t *pos) 223 + { 224 + DECLARE_WAITQUEUE(wait, current); 225 + u32 data; 226 + unsigned char byte_data; 227 + ssize_t retval = 1; 228 + 229 + if (count < 1) 230 + return -EINVAL; 231 + 232 + add_wait_queue(&adev.misc_wait, &wait); 233 + while (true) { 234 + set_current_state(TASK_INTERRUPTIBLE); 235 + data = atomic_xchg(&adev.count, 0); 236 + if (data) 237 + break; 238 + 239 + if (file->f_flags & O_NONBLOCK) { 240 + retval = -EAGAIN; 241 + goto out; 242 + } 243 + 244 + if (signal_pending(current)) { 245 + retval = -ERESTARTSYS; 246 + goto out; 247 + } 248 + 249 + schedule(); 250 + } 251 + 252 + if (data < 255) 253 + byte_data = data; 254 + else 255 + byte_data = 255; 256 + 257 + /* make sure we are not going into copy_to_user() with 258 + * TASK_INTERRUPTIBLE state */ 259 + set_current_state(TASK_RUNNING); 260 + if (copy_to_user(buf, &byte_data, sizeof(byte_data))) 261 + retval = -EFAULT; 262 + 263 + out: 264 + __set_current_state(TASK_RUNNING); 265 + remove_wait_queue(&adev.misc_wait, &wait); 266 + 267 + return retval; 268 + } 269 + 270 + static unsigned int lis3lv02d_misc_poll(struct file *file, poll_table *wait) 271 + { 272 + poll_wait(file, &adev.misc_wait, wait); 273 + if (atomic_read(&adev.count)) 274 + return POLLIN | POLLRDNORM; 275 + return 0; 276 + } 277 + 278 + static int lis3lv02d_misc_fasync(int fd, struct file *file, int on) 279 + { 280 + return fasync_helper(fd, file, on, &adev.async_queue); 281 + } 282 + 283 + static const struct file_operations lis3lv02d_misc_fops = { 284 + .owner = THIS_MODULE, 285 + .llseek = no_llseek, 286 + .read = lis3lv02d_misc_read, 287 + .open = lis3lv02d_misc_open, 288 + .release = lis3lv02d_misc_release, 289 + .poll = lis3lv02d_misc_poll, 290 + .fasync = lis3lv02d_misc_fasync, 291 + }; 292 + 293 + static struct miscdevice lis3lv02d_misc_device = { 294 + .minor = MISC_DYNAMIC_MINOR, 295 + .name = "freefall", 296 + .fops = &lis3lv02d_misc_fops, 297 + }; 298 299 /** 300 * lis3lv02d_joystick_kthread - Kthread polling function ··· 203 lis3lv02d_decrease_use(&adev); 204 } 205 206 static inline void lis3lv02d_calibrate_joystick(void) 207 { 208 lis3lv02d_get_xyz(adev.device->handle, &adev.xcalib, &adev.ycalib, &adev.zcalib); ··· 231 adev.idev->close = lis3lv02d_joystick_close; 232 233 set_bit(EV_ABS, adev.idev->evbit); 234 + input_set_abs_params(adev.idev, ABS_X, -adev.mdps_max_val, adev.mdps_max_val, 3, 3); 235 + input_set_abs_params(adev.idev, ABS_Y, -adev.mdps_max_val, adev.mdps_max_val, 3, 3); 236 + input_set_abs_params(adev.idev, ABS_Z, -adev.mdps_max_val, adev.mdps_max_val, 3, 3); 237 238 err = input_register_device(adev.idev); 239 if (err) { ··· 250 if (!adev.idev) 251 return; 252 253 + misc_deregister(&lis3lv02d_misc_device); 254 
input_unregister_device(adev.idev); 255 adev.idev = NULL; 256 } ··· 268 if (lis3lv02d_joystick_enable()) 269 printk(KERN_ERR DRIVER_NAME ": joystick initialization failed\n"); 270 271 + printk("lis3_init_device: irq %d\n", dev->irq); 272 + 273 + /* if we did not get an IRQ from ACPI - we have nothing more to do */ 274 + if (!dev->irq) { 275 + printk(KERN_ERR DRIVER_NAME 276 + ": No IRQ in ACPI. Disabling /dev/freefall\n"); 277 + goto out; 278 + } 279 + 280 + printk("lis3: registering device\n"); 281 + if (misc_register(&lis3lv02d_misc_device)) 282 + printk(KERN_ERR DRIVER_NAME ": misc_register failed\n"); 283 + out: 284 lis3lv02d_decrease_use(dev); 285 return 0; 286 } ··· 351 EXPORT_SYMBOL_GPL(lis3lv02d_remove_fs); 352 353 MODULE_DESCRIPTION("ST LIS3LV02Dx three-axis digital accelerometer driver"); 354 + MODULE_AUTHOR("Yan Burman, Eric Piel, Pavel Machek"); 355 MODULE_LICENSE("GPL"); 356
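Editor's note: the new /dev/freefall read path above open-codes its sleep instead of using wait_event_interruptible(), because each wakeup must atomically drain the event counter (atomic_xchg) and honour O_NONBLOCK before deciding whether to sleep again. A minimal sketch of the same shape in kernel C, with hypothetical ev_wait/ev_count names rather than the driver's adev fields:

    #include <linux/fs.h>
    #include <linux/sched.h>
    #include <linux/wait.h>
    #include <asm/atomic.h>

    static DECLARE_WAIT_QUEUE_HEAD(ev_wait);   /* woken from the IRQ handler */
    static atomic_t ev_count;

    static ssize_t ev_read_one(struct file *file, u8 *out)
    {
        DECLARE_WAITQUEUE(wait, current);
        ssize_t ret = 1;
        u32 n;

        add_wait_queue(&ev_wait, &wait);
        for (;;) {
            set_current_state(TASK_INTERRUPTIBLE);
            n = atomic_xchg(&ev_count, 0);     /* drain all pending events */
            if (n)
                break;
            if (file->f_flags & O_NONBLOCK) {
                ret = -EAGAIN;
                goto out;
            }
            if (signal_pending(current)) {
                ret = -ERESTARTSYS;
                goto out;
            }
            schedule();        /* sleep until a wake_up_interruptible() */
        }
        *out = n < 255 ? n : 255;              /* saturate the count to one byte */
    out:
        __set_current_state(TASK_RUNNING);     /* never return in TASK_INTERRUPTIBLE */
        remove_wait_queue(&ev_wait, &wait);
        return ret;
    }

The two state calls bracket the classic race: the task must be marked sleeping before the final counter check, or a wakeup arriving in between would be lost.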
+18 -3
drivers/hwmon/lis3lv02d.h
··· 22 /* 23 * The actual chip is STMicroelectronics LIS3LV02DL or LIS3LV02DQ that seems to 24 * be connected via SPI. There exists also several similar chips (such as LIS302DL or 25 - * LIS3L02DQ) but not in the HP laptops and they have slightly different registers. 26 * They can also be connected via I²C. 27 */ 28 29 - #define LIS3LV02DL_ID 0x3A /* Also the LIS3LV02DQ */ 30 - #define LIS302DL_ID 0x3B /* Also the LIS202DL! */ 31 32 enum lis3lv02d_reg { 33 WHO_AM_I = 0x0F, ··· 47 STATUS_REG = 0x27, 48 OUTX_L = 0x28, 49 OUTX_H = 0x29, 50 OUTY_L = 0x2A, 51 OUTY_H = 0x2B, 52 OUTZ_L = 0x2C, 53 OUTZ_H = 0x2D, 54 FF_WU_CFG = 0x30, 55 FF_WU_SRC = 0x31, 56 FF_WU_ACK = 0x32, ··· 165 acpi_status (*write) (acpi_handle handle, int reg, u8 val); 166 acpi_status (*read) (acpi_handle handle, int reg, u8 *ret); 167 168 struct input_dev *idev; /* input device */ 169 struct task_struct *kthread; /* kthread for input */ 170 struct mutex lock; ··· 180 unsigned char is_on; /* whether the device is on or off */ 181 unsigned char usage; /* usage counter */ 182 struct axis_conversion ac; /* hw -> logical axis */ 183 }; 184 185 int lis3lv02d_init_device(struct acpi_lis3lv02d *dev);
··· 22 /* 23 * The actual chip is STMicroelectronics LIS3LV02DL or LIS3LV02DQ that seems to 24 * be connected via SPI. There exists also several similar chips (such as LIS302DL or 25 + * LIS3L02DQ) and they have slightly different registers, but we can provide a 26 + * common interface for all of them. 27 * They can also be connected via I²C. 28 */ 29 30 + /* 2-byte registers */ 31 + #define LIS_DOUBLE_ID 0x3A /* LIS3LV02D[LQ] */ 32 + /* 1-byte registers */ 33 + #define LIS_SINGLE_ID 0x3B /* LIS[32]02DL and others */ 34 35 enum lis3lv02d_reg { 36 WHO_AM_I = 0x0F, ··· 44 STATUS_REG = 0x27, 45 OUTX_L = 0x28, 46 OUTX_H = 0x29, 47 + OUTX = 0x29, 48 OUTY_L = 0x2A, 49 OUTY_H = 0x2B, 50 + OUTY = 0x2B, 51 OUTZ_L = 0x2C, 52 OUTZ_H = 0x2D, 53 + OUTZ = 0x2D, 54 FF_WU_CFG = 0x30, 55 FF_WU_SRC = 0x31, 56 FF_WU_ACK = 0x32, ··· 159 acpi_status (*write) (acpi_handle handle, int reg, u8 val); 160 acpi_status (*read) (acpi_handle handle, int reg, u8 *ret); 161 162 + u8 whoami; /* 3Ah: 2-byte registries, 3Bh: 1-byte registries */ 163 + s16 (*read_data) (acpi_handle handle, int reg); 164 + int mdps_max_val; 165 + 166 struct input_dev *idev; /* input device */ 167 struct task_struct *kthread; /* kthread for input */ 168 struct mutex lock; ··· 170 unsigned char is_on; /* whether the device is on or off */ 171 unsigned char usage; /* usage counter */ 172 struct axis_conversion ac; /* hw -> logical axis */ 173 + 174 + u32 irq; /* IRQ number */ 175 + struct fasync_struct *async_queue; /* queue for the misc device */ 176 + wait_queue_head_t misc_wait; /* Wait queue for the misc device */ 177 + unsigned long misc_opened; /* bit0: whether the device is open */ 178 }; 179 180 int lis3lv02d_init_device(struct acpi_lis3lv02d *dev);
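Editor's note: the new whoami/read_data/mdps_max_val fields let one driver serve both register layouts: 2-byte output registers on the 0x3A parts, single signed bytes on the 0x3B parts, with OUTX/OUTY/OUTZ aliased to the high-byte addresses so one register enum works for both. A plausible probe-time wiring sketched from these declarations; the helper names and the 2048/128 full-scale values are assumptions, not taken from the hunks above:

    static s16 lis3lv02d_read_16(acpi_handle handle, int reg)
    {
        u8 lo, hi;

        adev.read(handle, reg - 1, &lo);   /* OUTX == OUTX_H, so the low byte sits at reg - 1 */
        adev.read(handle, reg, &hi);
        return (s16)((hi << 8) | lo);
    }

    static s16 lis3lv02d_read_8(acpi_handle handle, int reg)
    {
        s8 val;

        adev.read(handle, reg, (u8 *)&val);   /* one signed byte is the whole sample */
        return val;
    }

    static void lis3lv02d_pick_access(void)
    {
        /* assumed probe-time wiring, after WHO_AM_I was read into adev.whoami */
        if (adev.whoami == LIS_DOUBLE_ID) {
            adev.read_data = lis3lv02d_read_16;
            adev.mdps_max_val = 2048;   /* assumed 12-bit full scale */
        } else {
            adev.read_data = lis3lv02d_read_8;
            adev.mdps_max_val = 128;    /* assumed 8-bit full scale */
        }
    }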
+1 -1
drivers/hwmon/vt1211.c
··· 1262 res.name = pdev->name; 1263 err = acpi_check_resource_conflict(&res); 1264 if (err) 1265 - goto EXIT; 1266 1267 err = platform_device_add_resources(pdev, &res, 1); 1268 if (err) {
··· 1262 res.name = pdev->name; 1263 err = acpi_check_resource_conflict(&res); 1264 if (err) 1265 + goto EXIT_DEV_PUT; 1266 1267 err = platform_device_add_resources(pdev, &res, 1); 1268 if (err) {
+1 -1
drivers/hwmon/w83627ehf.c
··· 1548 1549 err = acpi_check_resource_conflict(&res); 1550 if (err) 1551 - goto exit; 1552 1553 err = platform_device_add_resources(pdev, &res, 1); 1554 if (err) {
··· 1548 1549 err = acpi_check_resource_conflict(&res); 1550 if (err) 1551 + goto exit_device_put; 1552 1553 err = platform_device_add_resources(pdev, &res, 1); 1554 if (err) {
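Editor's note: both hwmon fixes above (vt1211 and w83627ehf) retarget an early error `goto` so that the platform device allocated a few lines earlier is released when acpi_check_resource_conflict() fails; jumping to the plain exit label leaked it. The conventional layered-unwind shape, with placeholder names rather than either driver's exact code:

    #include <linux/acpi.h>
    #include <linux/platform_device.h>

    static int demo_probe(int address)
    {
        struct platform_device *pdev;
        struct resource res = { };   /* start/end/name filled in as the drivers do */
        int err;

        pdev = platform_device_alloc("demo", address);
        if (!pdev)
            return -ENOMEM;              /* nothing to undo yet */

        err = acpi_check_resource_conflict(&res);
        if (err)
            goto exit_device_put;        /* must drop the reference taken above */

        err = platform_device_add_resources(pdev, &res, 1);
        if (err)
            goto exit_device_put;

        err = platform_device_add(pdev);
        if (err)
            goto exit_device_put;
        return 0;

    exit_device_put:
        platform_device_put(pdev);
        return err;
    }

Each failure point jumps to the first label that releases everything acquired before the failing step, which is exactly what the one-line retargets restore.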
+1 -1
drivers/md/dm-io.c
··· 328 struct dpages old_pages = *dp; 329 330 if (sync) 331 - rw |= (1 << BIO_RW_SYNC); 332 333 /* 334 * For multiple regions we need to be careful to rewind
··· 328 struct dpages old_pages = *dp; 329 330 if (sync) 331 + rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 332 333 /* 334 * For multiple regions we need to be careful to rewind
+1 -1
drivers/md/dm-kcopyd.c
··· 344 { 345 int r; 346 struct dm_io_request io_req = { 347 - .bi_rw = job->rw | (1 << BIO_RW_SYNC), 348 .mem.type = DM_IO_PAGE_LIST, 349 .mem.ptr.pl = job->pages, 350 .mem.offset = job->offset,
··· 344 { 345 int r; 346 struct dm_io_request io_req = { 347 + .bi_rw = job->rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG), 348 .mem.type = DM_IO_PAGE_LIST, 349 .mem.ptr.pl = job->pages, 350 .mem.offset = job->offset,
+2 -2
drivers/md/md.c
··· 474 * causes ENOTSUPP, we allocate a spare bio... 475 */ 476 struct bio *bio = bio_alloc(GFP_NOIO, 1); 477 - int rw = (1<<BIO_RW) | (1<<BIO_RW_SYNC); 478 479 bio->bi_bdev = rdev->bdev; 480 bio->bi_sector = sector; ··· 531 struct completion event; 532 int ret; 533 534 - rw |= (1 << BIO_RW_SYNC); 535 536 bio->bi_bdev = bdev; 537 bio->bi_sector = sector;
··· 474 * causes ENOTSUPP, we allocate a spare bio... 475 */ 476 struct bio *bio = bio_alloc(GFP_NOIO, 1); 477 + int rw = (1<<BIO_RW) | (1<<BIO_RW_SYNCIO) | (1<<BIO_RW_UNPLUG); 478 479 bio->bi_bdev = rdev->bdev; 480 bio->bi_sector = sector; ··· 531 struct completion event; 532 int ret; 533 534 + rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 535 536 bio->bi_bdev = bdev; 537 bio->bi_sector = sector;
+4 -6
drivers/media/common/tuners/tuner-simple.c
··· 318 u8 *config, u8 *cb) 319 { 320 struct tuner_simple_priv *priv = fe->tuner_priv; 321 - u8 tuneraddr; 322 int rc; 323 324 /* tv norm specific stuff for multi-norm tuners */ ··· 386 387 case TUNER_PHILIPS_TUV1236D: 388 { 389 /* 0x40 -> ATSC antenna input 1 */ 390 /* 0x48 -> ATSC antenna input 2 */ 391 /* 0x00 -> NTSC antenna input 1 */ ··· 398 buffer[1] = 0x04; 399 } 400 /* set to the correct mode (analog or digital) */ 401 - tuneraddr = priv->i2c_props.addr; 402 - priv->i2c_props.addr = 0x0a; 403 - rc = tuner_i2c_xfer_send(&priv->i2c_props, &buffer[0], 2); 404 if (2 != rc) 405 tuner_warn("i2c i/o error: rc == %d " 406 "(should be 2)\n", rc); 407 - rc = tuner_i2c_xfer_send(&priv->i2c_props, &buffer[2], 2); 408 if (2 != rc) 409 tuner_warn("i2c i/o error: rc == %d " 410 "(should be 2)\n", rc); 411 - priv->i2c_props.addr = tuneraddr; 412 break; 413 } 414 }
··· 318 u8 *config, u8 *cb) 319 { 320 struct tuner_simple_priv *priv = fe->tuner_priv; 321 int rc; 322 323 /* tv norm specific stuff for multi-norm tuners */ ··· 387 388 case TUNER_PHILIPS_TUV1236D: 389 { 390 + struct tuner_i2c_props i2c = priv->i2c_props; 391 /* 0x40 -> ATSC antenna input 1 */ 392 /* 0x48 -> ATSC antenna input 2 */ 393 /* 0x00 -> NTSC antenna input 1 */ ··· 398 buffer[1] = 0x04; 399 } 400 /* set to the correct mode (analog or digital) */ 401 + i2c.addr = 0x0a; 402 + rc = tuner_i2c_xfer_send(&i2c, &buffer[0], 2); 403 if (2 != rc) 404 tuner_warn("i2c i/o error: rc == %d " 405 "(should be 2)\n", rc); 406 + rc = tuner_i2c_xfer_send(&i2c, &buffer[2], 2); 407 if (2 != rc) 408 tuner_warn("i2c i/o error: rc == %d " 409 "(should be 2)\n", rc); 410 break; 411 } 412 }
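Editor's note: the tuner-simple change drops the save/modify/restore dance on the shared priv->i2c_props.addr in favour of a stack copy, so nothing else holding priv can ever observe the temporary 0x0a mode-switch address. The idea in plain, compilable C (hypothetical struct and addresses):

    #include <stdio.h>

    struct i2c_props { int addr; };

    static void xfer(const struct i2c_props *p)
    {
        printf("transfer to 0x%02x\n", p->addr);
    }

    int main(void)
    {
        struct i2c_props shared = { .addr = 0x61 };
        struct i2c_props tmp = shared;   /* by-value copy, as in the patch */

        tmp.addr = 0x0a;   /* the mode-switch address lives only in the copy */
        xfer(&tmp);        /* 0x0a */
        xfer(&shared);     /* still 0x61: there is no restore step to forget */
        return 0;
    }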
+7 -9
drivers/media/dvb/dvb-core/dmxdev.c
··· 364 enum dmx_success success) 365 { 366 struct dmxdev_filter *dmxdevfilter = filter->priv; 367 - unsigned long flags; 368 int ret; 369 370 if (dmxdevfilter->buffer.error) { 371 wake_up(&dmxdevfilter->buffer.queue); 372 return 0; 373 } 374 - spin_lock_irqsave(&dmxdevfilter->dev->lock, flags); 375 if (dmxdevfilter->state != DMXDEV_STATE_GO) { 376 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 377 return 0; 378 } 379 del_timer(&dmxdevfilter->timer); ··· 391 } 392 if (dmxdevfilter->params.sec.flags & DMX_ONESHOT) 393 dmxdevfilter->state = DMXDEV_STATE_DONE; 394 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 395 wake_up(&dmxdevfilter->buffer.queue); 396 return 0; 397 } ··· 403 { 404 struct dmxdev_filter *dmxdevfilter = feed->priv; 405 struct dvb_ringbuffer *buffer; 406 - unsigned long flags; 407 int ret; 408 409 - spin_lock_irqsave(&dmxdevfilter->dev->lock, flags); 410 if (dmxdevfilter->params.pes.output == DMX_OUT_DECODER) { 411 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 412 return 0; 413 } 414 ··· 417 else 418 buffer = &dmxdevfilter->dev->dvr_buffer; 419 if (buffer->error) { 420 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 421 wake_up(&buffer->queue); 422 return 0; 423 } ··· 428 dvb_ringbuffer_flush(buffer); 429 buffer->error = ret; 430 } 431 - spin_unlock_irqrestore(&dmxdevfilter->dev->lock, flags); 432 wake_up(&buffer->queue); 433 return 0; 434 }
··· 364 enum dmx_success success) 365 { 366 struct dmxdev_filter *dmxdevfilter = filter->priv; 367 int ret; 368 369 if (dmxdevfilter->buffer.error) { 370 wake_up(&dmxdevfilter->buffer.queue); 371 return 0; 372 } 373 + spin_lock(&dmxdevfilter->dev->lock); 374 if (dmxdevfilter->state != DMXDEV_STATE_GO) { 375 + spin_unlock(&dmxdevfilter->dev->lock); 376 return 0; 377 } 378 del_timer(&dmxdevfilter->timer); ··· 392 } 393 if (dmxdevfilter->params.sec.flags & DMX_ONESHOT) 394 dmxdevfilter->state = DMXDEV_STATE_DONE; 395 + spin_unlock(&dmxdevfilter->dev->lock); 396 wake_up(&dmxdevfilter->buffer.queue); 397 return 0; 398 } ··· 404 { 405 struct dmxdev_filter *dmxdevfilter = feed->priv; 406 struct dvb_ringbuffer *buffer; 407 int ret; 408 409 + spin_lock(&dmxdevfilter->dev->lock); 410 if (dmxdevfilter->params.pes.output == DMX_OUT_DECODER) { 411 + spin_unlock(&dmxdevfilter->dev->lock); 412 return 0; 413 } 414 ··· 419 else 420 buffer = &dmxdevfilter->dev->dvr_buffer; 421 if (buffer->error) { 422 + spin_unlock(&dmxdevfilter->dev->lock); 423 wake_up(&buffer->queue); 424 return 0; 425 } ··· 430 dvb_ringbuffer_flush(buffer); 431 buffer->error = ret; 432 } 433 + spin_unlock(&dmxdevfilter->dev->lock); 434 wake_up(&buffer->queue); 435 return 0; 436 }
+6 -10
drivers/media/dvb/dvb-core/dvb_demux.c
··· 399 void dvb_dmx_swfilter_packets(struct dvb_demux *demux, const u8 *buf, 400 size_t count) 401 { 402 - unsigned long flags; 403 - 404 - spin_lock_irqsave(&demux->lock, flags); 405 406 while (count--) { 407 if (buf[0] == 0x47) ··· 407 buf += 188; 408 } 409 410 - spin_unlock_irqrestore(&demux->lock, flags); 411 } 412 413 EXPORT_SYMBOL(dvb_dmx_swfilter_packets); 414 415 void dvb_dmx_swfilter(struct dvb_demux *demux, const u8 *buf, size_t count) 416 { 417 - unsigned long flags; 418 int p = 0, i, j; 419 420 - spin_lock_irqsave(&demux->lock, flags); 421 422 if (demux->tsbufp) { 423 i = demux->tsbufp; ··· 449 } 450 451 bailout: 452 - spin_unlock_irqrestore(&demux->lock, flags); 453 } 454 455 EXPORT_SYMBOL(dvb_dmx_swfilter); 456 457 void dvb_dmx_swfilter_204(struct dvb_demux *demux, const u8 *buf, size_t count) 458 { 459 - unsigned long flags; 460 int p = 0, i, j; 461 u8 tmppack[188]; 462 463 - spin_lock_irqsave(&demux->lock, flags); 464 465 if (demux->tsbufp) { 466 i = demux->tsbufp; ··· 500 } 501 502 bailout: 503 - spin_unlock_irqrestore(&demux->lock, flags); 504 } 505 506 EXPORT_SYMBOL(dvb_dmx_swfilter_204);
··· 399 void dvb_dmx_swfilter_packets(struct dvb_demux *demux, const u8 *buf, 400 size_t count) 401 { 402 + spin_lock(&demux->lock); 403 404 while (count--) { 405 if (buf[0] == 0x47) ··· 409 buf += 188; 410 } 411 412 + spin_unlock(&demux->lock); 413 } 414 415 EXPORT_SYMBOL(dvb_dmx_swfilter_packets); 416 417 void dvb_dmx_swfilter(struct dvb_demux *demux, const u8 *buf, size_t count) 418 { 419 int p = 0, i, j; 420 421 + spin_lock(&demux->lock); 422 423 if (demux->tsbufp) { 424 i = demux->tsbufp; ··· 452 } 453 454 bailout: 455 + spin_unlock(&demux->lock); 456 } 457 458 EXPORT_SYMBOL(dvb_dmx_swfilter); 459 460 void dvb_dmx_swfilter_204(struct dvb_demux *demux, const u8 *buf, size_t count) 461 { 462 int p = 0, i, j; 463 u8 tmppack[188]; 464 465 + spin_lock(&demux->lock); 466 467 if (demux->tsbufp) { 468 i = demux->tsbufp; ··· 504 } 505 506 bailout: 507 + spin_unlock(&demux->lock); 508 } 509 510 EXPORT_SYMBOL(dvb_dmx_swfilter_204);
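Editor's note: the dmxdev and dvb_demux hunks switch from spin_lock_irqsave() to plain spin_lock(), on the premise that these demux paths are never entered from a context that could be preempted by an interrupt handler taking the same lock, so saving and disabling local interrupts was pure overhead. The rule of thumb, as a comment-level sketch:

    /* No holder of the lock can be preempted by another taker (in
     * particular, no hard-IRQ handler takes it): plain form suffices. */
    spin_lock(&demux->lock);
    /* ... critical section ... */
    spin_unlock(&demux->lock);

    /* Lock shared with a hard-IRQ handler, or caller context unknown:
     * the interrupt state must be saved and restored. */
    spin_lock_irqsave(&demux->lock, flags);
    /* ... critical section ... */
    spin_unlock_irqrestore(&demux->lock, flags);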
+46 -9
drivers/media/radio/radio-si470x.c
··· 98 * - blacklisted KWorld radio in hid-core.c and hid-ids.h 99 * 2008-12-03 Mark Lord <mlord@pobox.com> 100 * - add support for DealExtreme USB Radio 101 * 102 * ToDo: 103 * - add firmware download/update support 104 * - RDS support: interrupt mode, instead of polling 105 - * - add LED status output (check if that's not already done in firmware) 106 */ 107 108 ··· 887 888 889 /************************************************************************** 890 * RDS Driver Functions 891 **************************************************************************/ 892 ··· 1414 }; 1415 1416 /* stereo indicator == stereo (instead of mono) */ 1417 - if ((radio->registers[STATUSRSSI] & STATUSRSSI_ST) == 1) 1418 - tuner->rxsubchans = V4L2_TUNER_SUB_MONO | V4L2_TUNER_SUB_STEREO; 1419 - else 1420 tuner->rxsubchans = V4L2_TUNER_SUB_MONO; 1421 1422 /* mono/stereo selector */ 1423 - if ((radio->registers[POWERCFG] & POWERCFG_MONO) == 1) 1424 - tuner->audmode = V4L2_TUNER_MODE_MONO; 1425 - else 1426 tuner->audmode = V4L2_TUNER_MODE_STEREO; 1427 1428 /* min is worst, max is best; signal:0..0xffff; rssi: 0..0xff */ 1429 - tuner->signal = (radio->registers[STATUSRSSI] & STATUSRSSI_RSSI) 1430 - * 0x0101; 1431 1432 /* automatic frequency control: -1: freq to low, 1 freq to high */ 1433 /* AFCRL does only indicate that freq. differs, not if too low/high */ ··· 1663 /* set initial frequency */ 1664 si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */ 1665 1666 /* rds buffer allocation */ 1667 radio->buf_size = rds_buf * 3; 1668 radio->buffer = kmalloc(radio->buf_size, GFP_KERNEL); ··· 1749 cancel_delayed_work_sync(&radio->work); 1750 usb_set_intfdata(intf, NULL); 1751 if (radio->users == 0) { 1752 video_unregister_device(radio->videodev); 1753 kfree(radio->buffer); 1754 kfree(radio);
··· 98 * - blacklisted KWorld radio in hid-core.c and hid-ids.h 99 * 2008-12-03 Mark Lord <mlord@pobox.com> 100 * - add support for DealExtreme USB Radio 101 + * 2009-01-31 Bob Ross <pigiron@gmx.com> 102 + * - correction of stereo detection/setting 103 + * - correction of signal strength indicator scaling 104 + * 2009-01-31 Rick Bronson <rick@efn.org> 105 + * Tobias Lorenz <tobias.lorenz@gmx.net> 106 + * - add LED status output 107 * 108 * ToDo: 109 * - add firmware download/update support 110 * - RDS support: interrupt mode, instead of polling 111 */ 112 113 ··· 882 883 884 /************************************************************************** 885 + * General Driver Functions - LED_REPORT 886 + **************************************************************************/ 887 + 888 + /* 889 + * si470x_set_led_state - sets the led state 890 + */ 891 + static int si470x_set_led_state(struct si470x_device *radio, 892 + unsigned char led_state) 893 + { 894 + unsigned char buf[LED_REPORT_SIZE]; 895 + int retval; 896 + 897 + buf[0] = LED_REPORT; 898 + buf[1] = LED_COMMAND; 899 + buf[2] = led_state; 900 + 901 + retval = si470x_set_report(radio, (void *) &buf, sizeof(buf)); 902 + 903 + return (retval < 0) ? -EINVAL : 0; 904 + } 905 + 906 + 907 + 908 + /************************************************************************** 909 * RDS Driver Functions 910 **************************************************************************/ 911 ··· 1385 }; 1386 1387 /* stereo indicator == stereo (instead of mono) */ 1388 + if ((radio->registers[STATUSRSSI] & STATUSRSSI_ST) == 0) 1389 tuner->rxsubchans = V4L2_TUNER_SUB_MONO; 1390 + else 1391 + tuner->rxsubchans = V4L2_TUNER_SUB_MONO | V4L2_TUNER_SUB_STEREO; 1392 1393 /* mono/stereo selector */ 1394 + if ((radio->registers[POWERCFG] & POWERCFG_MONO) == 0) 1395 tuner->audmode = V4L2_TUNER_MODE_STEREO; 1396 + else 1397 + tuner->audmode = V4L2_TUNER_MODE_MONO; 1398 1399 /* min is worst, max is best; signal:0..0xffff; rssi: 0..0xff */ 1400 + /* measured in units of dbµV in 1 db increments (max at ~75 dbµV) */ 1401 + tuner->signal = (radio->registers[STATUSRSSI] & STATUSRSSI_RSSI); 1402 + /* the ideal factor is 0xffff/75 = 873,8 */ 1403 + tuner->signal = (tuner->signal * 873) + (8 * tuner->signal / 10); 1404 1405 /* automatic frequency control: -1: freq to low, 1 freq to high */ 1406 /* AFCRL does only indicate that freq. differs, not if too low/high */ ··· 1632 /* set initial frequency */ 1633 si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */ 1634 1635 + /* set led to connect state */ 1636 + si470x_set_led_state(radio, BLINK_GREEN_LED); 1637 + 1638 /* rds buffer allocation */ 1639 radio->buf_size = rds_buf * 3; 1640 radio->buffer = kmalloc(radio->buf_size, GFP_KERNEL); ··· 1715 cancel_delayed_work_sync(&radio->work); 1716 usb_set_intfdata(intf, NULL); 1717 if (radio->users == 0) { 1718 + /* set led to disconnect state */ 1719 + si470x_set_led_state(radio, BLINK_ORANGE_LED); 1720 + 1721 video_unregister_device(radio->videodev); 1722 kfree(radio->buffer); 1723 kfree(radio);
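Editor's note: the reworked signal-strength math above is integer-only fixed point. RSSI tops out around 75 dBµV and V4L2 wants full scale at 0xffff, so the ideal factor is 65535 / 75 = 873.8, and `signal * 873 + 8 * signal / 10` realises exactly that without floating point. A quick check of the endpoints:

    #include <stdio.h>

    int main(void)
    {
        for (unsigned rssi = 0; rssi <= 75; rssi += 25) {
            unsigned sig = rssi * 873 + 8 * rssi / 10;   /* ~ rssi * 873.8 */
            printf("rssi %2u dbuV -> signal 0x%04x\n", rssi, sig);
        }
        return 0;   /* prints 0x0000, 0x5555, 0xaaaa, 0xffff */
    }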
+5
drivers/media/video/gspca/gspca.c
··· 422 if (urb == NULL) 423 break; 424 425 gspca_dev->urb[i] = NULL; 426 if (!gspca_dev->present) 427 usb_kill_urb(urb); ··· 1951 { 1952 struct gspca_dev *gspca_dev = usb_get_intfdata(intf); 1953 1954 gspca_dev->present = 0; 1955 1956 usb_set_intfdata(intf, NULL); 1957 1958 /* release the device */
··· 422 if (urb == NULL) 423 break; 424 425 + BUG_ON(!gspca_dev->dev); 426 gspca_dev->urb[i] = NULL; 427 if (!gspca_dev->present) 428 usb_kill_urb(urb); ··· 1950 { 1951 struct gspca_dev *gspca_dev = usb_get_intfdata(intf); 1952 1953 + mutex_lock(&gspca_dev->usb_lock); 1954 gspca_dev->present = 0; 1955 + mutex_unlock(&gspca_dev->usb_lock); 1956 1957 + destroy_urbs(gspca_dev); 1958 + gspca_dev->dev = NULL; 1959 usb_set_intfdata(intf, NULL); 1960 1961 /* release the device */
+13 -13
drivers/media/video/ivtv/ivtv-ioctl.c
··· 393 return 0; 394 } 395 396 - v4l2_subdev_call(itv->sd_video, video, s_fmt, fmt); 397 vbifmt->service_set = ivtv_get_service_set(vbifmt); 398 return 0; 399 } ··· 1748 break; 1749 } 1750 1751 default: 1752 return -EINVAL; 1753 } ··· 1801 itv->audio_bilingual_mode = arg; 1802 ivtv_vapi(itv, CX2341X_DEC_SET_AUDIO_MODE, 2, itv->audio_bilingual_mode, itv->audio_stereo_mode); 1803 return 0; 1804 - 1805 - case IVTV_IOC_DMA_FRAME: 1806 - case VIDEO_GET_PTS: 1807 - case VIDEO_GET_FRAME_COUNT: 1808 - case VIDEO_GET_EVENT: 1809 - case VIDEO_PLAY: 1810 - case VIDEO_STOP: 1811 - case VIDEO_FREEZE: 1812 - case VIDEO_CONTINUE: 1813 - case VIDEO_COMMAND: 1814 - case VIDEO_TRY_COMMAND: 1815 - return ivtv_decoder_ioctls(filp, cmd, (void *)arg); 1816 1817 default: 1818 break;
··· 393 return 0; 394 } 395 396 + v4l2_subdev_call(itv->sd_video, video, g_fmt, fmt); 397 vbifmt->service_set = ivtv_get_service_set(vbifmt); 398 return 0; 399 } ··· 1748 break; 1749 } 1750 1751 + case IVTV_IOC_DMA_FRAME: 1752 + case VIDEO_GET_PTS: 1753 + case VIDEO_GET_FRAME_COUNT: 1754 + case VIDEO_GET_EVENT: 1755 + case VIDEO_PLAY: 1756 + case VIDEO_STOP: 1757 + case VIDEO_FREEZE: 1758 + case VIDEO_CONTINUE: 1759 + case VIDEO_COMMAND: 1760 + case VIDEO_TRY_COMMAND: 1761 + return ivtv_decoder_ioctls(file, cmd, (void *)arg); 1762 + 1763 default: 1764 return -EINVAL; 1765 } ··· 1789 itv->audio_bilingual_mode = arg; 1790 ivtv_vapi(itv, CX2341X_DEC_SET_AUDIO_MODE, 2, itv->audio_bilingual_mode, itv->audio_stereo_mode); 1791 return 0; 1792 1793 default: 1794 break;
+2 -2
drivers/mfd/htc-egpio.c
··· 286 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 287 if (!res) 288 goto fail; 289 - ei->base_addr = ioremap_nocache(res->start, res->end - res->start); 290 if (!ei->base_addr) 291 goto fail; 292 pr_debug("EGPIO phys=%08x virt=%p\n", (u32)res->start, ei->base_addr); ··· 307 308 ei->nchips = pdata->num_chips; 309 ei->chip = kzalloc(sizeof(struct egpio_chip) * ei->nchips, GFP_KERNEL); 310 - if (!ei) { 311 ret = -ENOMEM; 312 goto fail; 313 }
··· 286 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 287 if (!res) 288 goto fail; 289 + ei->base_addr = ioremap_nocache(res->start, resource_size(res)); 290 if (!ei->base_addr) 291 goto fail; 292 pr_debug("EGPIO phys=%08x virt=%p\n", (u32)res->start, ei->base_addr); ··· 307 308 ei->nchips = pdata->num_chips; 309 ei->chip = kzalloc(sizeof(struct egpio_chip) * ei->nchips, GFP_KERNEL); 310 + if (!ei->chip) { 311 ret = -ENOMEM; 312 goto fail; 313 }
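Editor's note: two classic slips are fixed in htc-egpio. Kernel `struct resource` ranges are inclusive on both ends, so `res->end - res->start` is one byte short of the window and resource_size() supplies the missing +1; and the allocation check tested the already-validated `ei` instead of the freshly allocated `ei->chip`, so a failed kzalloc() went unnoticed. The off-by-one in numbers, with a userspace mock of the inclusive-range convention:

    #include <stdio.h>

    struct resource { unsigned long start, end; };   /* inclusive on both ends */

    static unsigned long resource_size(const struct resource *r)
    {
        return r->end - r->start + 1;
    }

    int main(void)
    {
        struct resource r = { .start = 0x1000, .end = 0x1fff };   /* a 4 KiB window */

        printf("end - start   = 0x%lx\n", r.end - r.start);       /* 0xfff: one short */
        printf("resource_size = 0x%lx\n", resource_size(&r));     /* 0x1000: correct */
        return 0;
    }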
+1
drivers/mfd/pcf50633-core.c
··· 678 679 static struct i2c_device_id pcf50633_id_table[] = { 680 {"pcf50633", 0x73}, 681 }; 682 683 static struct i2c_driver pcf50633_driver = {
··· 678 679 static struct i2c_device_id pcf50633_id_table[] = { 680 {"pcf50633", 0x73}, 681 + {/* end of list */} 682 }; 683 684 static struct i2c_driver pcf50633_driver = {
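Editor's note: the i2c core walks an i2c_device_id table until it hits an all-zero entry, so without the added terminator the match loop reads past the end of the array. The convention, sketched with a hypothetical driver name:

    static struct i2c_device_id demo_id_table[] = {
        { "demo-chip", 0 },
        { }   /* zeroed sentinel: the matching loop stops here */
    };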
+13 -13
drivers/mfd/sm501.c
··· 1050 return gpiochip_add(gchip); 1051 } 1052 1053 - static int sm501_register_gpio(struct sm501_devdata *sm) 1054 { 1055 struct sm501_gpio *gpio = &sm->gpio; 1056 resource_size_t iobase = sm->io_res->start + SM501_GPIO; ··· 1321 * Common init code for an SM501 1322 */ 1323 1324 - static int sm501_init_dev(struct sm501_devdata *sm) 1325 { 1326 struct sm501_initdata *idata; 1327 struct sm501_platdata *pdata; ··· 1397 return 0; 1398 } 1399 1400 - static int sm501_plat_probe(struct platform_device *dev) 1401 { 1402 struct sm501_devdata *sm; 1403 int ret; ··· 1586 .gpio_base = -1, 1587 }; 1588 1589 - static int sm501_pci_probe(struct pci_dev *dev, 1590 - const struct pci_device_id *id) 1591 { 1592 struct sm501_devdata *sm; 1593 int err; ··· 1693 sm501_gpio_remove(sm); 1694 } 1695 1696 - static void sm501_pci_remove(struct pci_dev *dev) 1697 { 1698 struct sm501_devdata *sm = pci_get_drvdata(dev); 1699 ··· 1727 1728 MODULE_DEVICE_TABLE(pci, sm501_pci_tbl); 1729 1730 - static struct pci_driver sm501_pci_drv = { 1731 .name = "sm501", 1732 .id_table = sm501_pci_tbl, 1733 .probe = sm501_pci_probe, 1734 - .remove = sm501_pci_remove, 1735 }; 1736 1737 MODULE_ALIAS("platform:sm501"); 1738 1739 - static struct platform_driver sm501_plat_drv = { 1740 .driver = { 1741 .name = "sm501", 1742 .owner = THIS_MODULE, ··· 1749 1750 static int __init sm501_base_init(void) 1751 { 1752 - platform_driver_register(&sm501_plat_drv); 1753 - return pci_register_driver(&sm501_pci_drv); 1754 } 1755 1756 static void __exit sm501_base_exit(void) 1757 { 1758 - platform_driver_unregister(&sm501_plat_drv); 1759 - pci_unregister_driver(&sm501_pci_drv); 1760 } 1761 1762 module_init(sm501_base_init);
··· 1050 return gpiochip_add(gchip); 1051 } 1052 1053 + static int __devinit sm501_register_gpio(struct sm501_devdata *sm) 1054 { 1055 struct sm501_gpio *gpio = &sm->gpio; 1056 resource_size_t iobase = sm->io_res->start + SM501_GPIO; ··· 1321 * Common init code for an SM501 1322 */ 1323 1324 + static int __devinit sm501_init_dev(struct sm501_devdata *sm) 1325 { 1326 struct sm501_initdata *idata; 1327 struct sm501_platdata *pdata; ··· 1397 return 0; 1398 } 1399 1400 + static int __devinit sm501_plat_probe(struct platform_device *dev) 1401 { 1402 struct sm501_devdata *sm; 1403 int ret; ··· 1586 .gpio_base = -1, 1587 }; 1588 1589 + static int __devinit sm501_pci_probe(struct pci_dev *dev, 1590 + const struct pci_device_id *id) 1591 { 1592 struct sm501_devdata *sm; 1593 int err; ··· 1693 sm501_gpio_remove(sm); 1694 } 1695 1696 + static void __devexit sm501_pci_remove(struct pci_dev *dev) 1697 { 1698 struct sm501_devdata *sm = pci_get_drvdata(dev); 1699 ··· 1727 1728 MODULE_DEVICE_TABLE(pci, sm501_pci_tbl); 1729 1730 + static struct pci_driver sm501_pci_driver = { 1731 .name = "sm501", 1732 .id_table = sm501_pci_tbl, 1733 .probe = sm501_pci_probe, 1734 + .remove = __devexit_p(sm501_pci_remove), 1735 }; 1736 1737 MODULE_ALIAS("platform:sm501"); 1738 1739 + static struct platform_driver sm501_plat_driver = { 1740 .driver = { 1741 .name = "sm501", 1742 .owner = THIS_MODULE, ··· 1749 1750 static int __init sm501_base_init(void) 1751 { 1752 + platform_driver_register(&sm501_plat_driver); 1753 + return pci_register_driver(&sm501_pci_driver); 1754 } 1755 1756 static void __exit sm501_base_exit(void) 1757 { 1758 + platform_driver_unregister(&sm501_plat_driver); 1759 + pci_unregister_driver(&sm501_pci_driver); 1760 } 1761 1762 module_init(sm501_base_init);
+1 -1
drivers/mfd/twl4030-core.c
··· 38 #include <linux/i2c.h> 39 #include <linux/i2c/twl4030.h> 40 41 - #ifdef CONFIG_ARM 42 #include <mach/cpu.h> 43 #endif 44
··· 38 #include <linux/i2c.h> 39 #include <linux/i2c/twl4030.h> 40 41 + #if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) 42 #include <mach/cpu.h> 43 #endif 44
+35 -13
drivers/mfd/wm8350-core.c
··· 1111 do { 1112 schedule_timeout_interruptible(1); 1113 reg = wm8350_reg_read(wm8350, WM8350_DIGITISER_CONTROL_1); 1114 - } while (tries-- && (reg & WM8350_AUXADC_POLL)); 1115 1116 if (!tries) 1117 dev_err(wm8350->dev, "adc chn %d read timeout\n", channel); ··· 1297 int wm8350_device_init(struct wm8350 *wm8350, int irq, 1298 struct wm8350_platform_data *pdata) 1299 { 1300 - int ret = -EINVAL; 1301 u16 id1, id2, mask_rev; 1302 u16 cust_id, mode, chip_rev; 1303 1304 /* get WM8350 revision and config mode */ 1305 - wm8350->read_dev(wm8350, WM8350_RESET_ID, sizeof(id1), &id1); 1306 - wm8350->read_dev(wm8350, WM8350_ID, sizeof(id2), &id2); 1307 - wm8350->read_dev(wm8350, WM8350_REVISION, sizeof(mask_rev), &mask_rev); 1308 1309 id1 = be16_to_cpu(id1); 1310 id2 = be16_to_cpu(id2); ··· 1419 return ret; 1420 } 1421 1422 - if (pdata && pdata->init) { 1423 - ret = pdata->init(wm8350); 1424 - if (ret != 0) { 1425 - dev_err(wm8350->dev, "Platform init() failed: %d\n", 1426 - ret); 1427 - goto err; 1428 - } 1429 - } 1430 1431 mutex_init(&wm8350->auxadc_mutex); 1432 mutex_init(&wm8350->irq_mutex); ··· 1442 goto err; 1443 } 1444 wm8350->chip_irq = irq; 1445 1446 wm8350_reg_write(wm8350, WM8350_SYSTEM_INTERRUPTS_MASK, 0x0); 1447
··· 1111 do { 1112 schedule_timeout_interruptible(1); 1113 reg = wm8350_reg_read(wm8350, WM8350_DIGITISER_CONTROL_1); 1114 + } while (--tries && (reg & WM8350_AUXADC_POLL)); 1115 1116 if (!tries) 1117 dev_err(wm8350->dev, "adc chn %d read timeout\n", channel); ··· 1297 int wm8350_device_init(struct wm8350 *wm8350, int irq, 1298 struct wm8350_platform_data *pdata) 1299 { 1300 + int ret; 1301 u16 id1, id2, mask_rev; 1302 u16 cust_id, mode, chip_rev; 1303 1304 /* get WM8350 revision and config mode */ 1305 + ret = wm8350->read_dev(wm8350, WM8350_RESET_ID, sizeof(id1), &id1); 1306 + if (ret != 0) { 1307 + dev_err(wm8350->dev, "Failed to read ID: %d\n", ret); 1308 + goto err; 1309 + } 1310 + 1311 + ret = wm8350->read_dev(wm8350, WM8350_ID, sizeof(id2), &id2); 1312 + if (ret != 0) { 1313 + dev_err(wm8350->dev, "Failed to read ID: %d\n", ret); 1314 + goto err; 1315 + } 1316 + 1317 + ret = wm8350->read_dev(wm8350, WM8350_REVISION, sizeof(mask_rev), 1318 + &mask_rev); 1319 + if (ret != 0) { 1320 + dev_err(wm8350->dev, "Failed to read revision: %d\n", ret); 1321 + goto err; 1322 + } 1323 1324 id1 = be16_to_cpu(id1); 1325 id2 = be16_to_cpu(id2); ··· 1404 return ret; 1405 } 1406 1407 + wm8350_reg_write(wm8350, WM8350_SYSTEM_INTERRUPTS_MASK, 0xFFFF); 1408 + wm8350_reg_write(wm8350, WM8350_INT_STATUS_1_MASK, 0xFFFF); 1409 + wm8350_reg_write(wm8350, WM8350_INT_STATUS_2_MASK, 0xFFFF); 1410 + wm8350_reg_write(wm8350, WM8350_UNDER_VOLTAGE_INT_STATUS_MASK, 0xFFFF); 1411 + wm8350_reg_write(wm8350, WM8350_GPIO_INT_STATUS_MASK, 0xFFFF); 1412 + wm8350_reg_write(wm8350, WM8350_COMPARATOR_INT_STATUS_MASK, 0xFFFF); 1413 1414 mutex_init(&wm8350->auxadc_mutex); 1415 mutex_init(&wm8350->irq_mutex); ··· 1429 goto err; 1430 } 1431 wm8350->chip_irq = irq; 1432 + 1433 + if (pdata && pdata->init) { 1434 + ret = pdata->init(wm8350); 1435 + if (ret != 0) { 1436 + dev_err(wm8350->dev, "Platform init() failed: %d\n", 1437 + ret); 1438 + goto err; 1439 + } 1440 + } 1441 1442 wm8350_reg_write(wm8350, WM8350_SYSTEM_INTERRUPTS_MASK, 0x0); 1443
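Editor's note: beyond the checked read_dev calls and the interrupt-mask bring-up ordering, the wm8350 hunk fixes a one-character polling bug. With `tries--` the loop exits on timeout with tries == -1, so the `if (!tries)` diagnostic below it could never fire; the pre-decrement form leaves 0 behind. A compilable demonstration:

    #include <stdio.h>

    static int countdown(int tries, int post)
    {
        do {
            /* pretend the polled bit never clears */
        } while (post ? tries-- : --tries);
        return tries;   /* the value seen by the timeout check */
    }

    int main(void)
    {
        printf("post-decrement leaves %d\n", countdown(3, 1));   /* -1: !tries misses */
        printf("pre-decrement leaves %d\n", countdown(3, 0));    /*  0: timeout reported */
        return 0;
    }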
+1 -1
drivers/mfd/wm8350-regmap.c
··· 3188 { 0x7CFF, 0x0C00, 0x7FFF }, /* R1 - ID */ 3189 { 0x0000, 0x0000, 0x0000 }, /* R2 */ 3190 { 0xBE3B, 0xBE3B, 0x8000 }, /* R3 - System Control 1 */ 3191 - { 0xFCF7, 0xFCF7, 0xF800 }, /* R4 - System Control 2 */ 3192 { 0x80FF, 0x80FF, 0x8000 }, /* R5 - System Hibernate */ 3193 { 0xFB0E, 0xFB0E, 0x0000 }, /* R6 - Interface Control */ 3194 { 0x0000, 0x0000, 0x0000 }, /* R7 */
··· 3188 { 0x7CFF, 0x0C00, 0x7FFF }, /* R1 - ID */ 3189 { 0x0000, 0x0000, 0x0000 }, /* R2 */ 3190 { 0xBE3B, 0xBE3B, 0x8000 }, /* R3 - System Control 1 */ 3191 + { 0xFEF7, 0xFEF7, 0xF800 }, /* R4 - System Control 2 */ 3192 { 0x80FF, 0x80FF, 0x8000 }, /* R5 - System Hibernate */ 3193 { 0xFB0E, 0xFB0E, 0x0000 }, /* R6 - Interface Control */ 3194 { 0x0000, 0x0000, 0x0000 }, /* R7 */
+1 -1
drivers/mmc/card/block.c
··· 584 if (err) 585 goto out; 586 587 - string_get_size(get_capacity(md->disk) << 9, STRING_UNITS_2, 588 cap_str, sizeof(cap_str)); 589 printk(KERN_INFO "%s: %s %s %s %s\n", 590 md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
··· 584 if (err) 585 goto out; 586 587 + string_get_size((u64)get_capacity(md->disk) << 9, STRING_UNITS_2, 588 cap_str, sizeof(cap_str)); 589 printk(KERN_INFO "%s: %s %s %s %s\n", 590 md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
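Editor's note: get_capacity() returns a sector count, and on configurations where sector_t is 32 bits the old `<< 9` overflowed before the widening assignment, so any card of 4 GiB or more reported a wrapped size; casting to u64 first keeps all the bits. The wrap, reproduced in userspace:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t sectors = 8u * 1024 * 1024;   /* 2^23 sectors = a 4 GiB card */

        printf("32-bit shift: %llu bytes\n",
               (unsigned long long)(sectors << 9));            /* wraps to 0 */
        printf("64-bit shift: %llu bytes\n",
               (unsigned long long)((uint64_t)sectors << 9));  /* 4294967296 */
        return 0;
    }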
+1 -1
drivers/mmc/card/mmc_test.c
··· 494 495 sg_init_one(&sg, test->buffer, 512); 496 497 - ret = mmc_test_simple_transfer(test, &sg, 1, 0, 1, 512, 1); 498 if (ret) 499 return ret; 500
··· 494 495 sg_init_one(&sg, test->buffer, 512); 496 497 + ret = mmc_test_simple_transfer(test, &sg, 1, 0, 1, 512, 0); 498 if (ret) 499 return ret; 500
+3 -2
drivers/mmc/host/atmel-mci.c
··· 1548 { 1549 struct dw_dma_slave *dws = slave; 1550 1551 - if (dws->dma_dev == chan->device->dev) 1552 return true; 1553 - else 1554 return false; 1555 } 1556 #endif
··· 1548 { 1549 struct dw_dma_slave *dws = slave; 1550 1551 + if (dws->dma_dev == chan->device->dev) { 1552 + chan->private = dws; 1553 return true; 1554 + } else 1555 return false; 1556 } 1557 #endif
+68 -30
drivers/mmc/host/omap_hsmmc.c
··· 55 #define VS30 (1 << 25) 56 #define SDVS18 (0x5 << 9) 57 #define SDVS30 (0x6 << 9) 58 #define SDVSCLR 0xFFFFF1FF 59 #define SDVSDET 0x00000400 60 #define AUTOIDLE 0x1 ··· 376 } 377 #endif /* CONFIG_MMC_DEBUG */ 378 379 380 /* 381 * MMC controller IRQ handler ··· 430 (status & CMD_CRC)) { 431 if (host->cmd) { 432 if (status & CMD_TIMEOUT) { 433 - OMAP_HSMMC_WRITE(host->base, SYSCTL, 434 - OMAP_HSMMC_READ(host->base, 435 - SYSCTL) | SRC); 436 - while (OMAP_HSMMC_READ(host->base, 437 - SYSCTL) & SRC) 438 - ; 439 - 440 host->cmd->error = -ETIMEDOUT; 441 } else { 442 host->cmd->error = -EILSEQ; 443 } 444 end_cmd = 1; 445 } 446 - if (host->data) 447 mmc_dma_cleanup(host); 448 } 449 if ((status & DATA_TIMEOUT) || 450 (status & DATA_CRC)) { ··· 449 mmc_dma_cleanup(host); 450 else 451 host->data->error = -EILSEQ; 452 - OMAP_HSMMC_WRITE(host->base, SYSCTL, 453 - OMAP_HSMMC_READ(host->base, 454 - SYSCTL) | SRD); 455 - while (OMAP_HSMMC_READ(host->base, 456 - SYSCTL) & SRD) 457 - ; 458 end_trans = 1; 459 } 460 } ··· 474 } 475 476 /* 477 - * Switch MMC operating voltage 478 */ 479 static int omap_mmc_switch_opcond(struct mmc_omap_host *host, int vdd) 480 { 481 u32 reg_val = 0; 482 int ret; 483 484 /* Disable the clocks */ 485 clk_disable(host->fclk); ··· 510 OMAP_HSMMC_WRITE(host->base, HCTL, 511 OMAP_HSMMC_READ(host->base, HCTL) & SDVSCLR); 512 reg_val = OMAP_HSMMC_READ(host->base, HCTL); 513 /* 514 * If a MMC dual voltage card is detected, the set_ios fn calls 515 * this fn with VDD bit set for 1.8V. Upon card removal from the 516 * slot, omap_mmc_set_ios sets the VDD back to 3V on MMC_POWER_OFF. 517 * 518 - * Only MMC1 supports 3.0V. MMC2 will not function if SDVS30 is 519 - * set in HCTL. 520 */ 521 - if (host->id == OMAP_MMC1_DEVID && (((1 << vdd) == MMC_VDD_32_33) || 522 - ((1 << vdd) == MMC_VDD_33_34))) 523 - reg_val |= SDVS30; 524 - if ((1 << vdd) == MMC_VDD_165_195) 525 reg_val |= SDVS18; 526 527 OMAP_HSMMC_WRITE(host->base, HCTL, reg_val); 528 ··· 549 { 550 struct mmc_omap_host *host = container_of(work, struct mmc_omap_host, 551 mmc_carddetect_work); 552 553 sysfs_notify(&host->mmc->class_dev.kobj, NULL, "cover_switch"); 554 if (host->carddetect) { 555 mmc_detect_change(host->mmc, (HZ * 200) / 1000); 556 } else { 557 - OMAP_HSMMC_WRITE(host->base, SYSCTL, 558 - OMAP_HSMMC_READ(host->base, SYSCTL) | SRD); 559 - while (OMAP_HSMMC_READ(host->base, SYSCTL) & SRD) 560 - ; 561 - 562 mmc_detect_change(host->mmc, (HZ * 50) / 1000); 563 } 564 } ··· 569 { 570 struct mmc_omap_host *host = (struct mmc_omap_host *)dev_id; 571 572 - host->carddetect = mmc_slot(host).card_detect(irq); 573 schedule_work(&host->mmc_carddetect_work); 574 575 return IRQ_HANDLED; ··· 787 case MMC_POWER_OFF: 788 mmc_slot(host).set_power(host->dev, host->slot_id, 0, 0); 789 /* 790 - * Reset bus voltage to 3V if it got set to 1.8V earlier. 791 * REVISIT: If we are able to detect cards after unplugging 792 * a 1.8V card, this code should not be needed. 793 */ 794 if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) { 795 int vdd = fls(host->mmc->ocr_avail) - 1; 796 if (omap_mmc_switch_opcond(host, vdd) != 0) ··· 818 } 819 820 if (host->id == OMAP_MMC1_DEVID) { 821 - /* Only MMC1 can operate at 3V/1.8V */ 822 if ((OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET) && 823 (ios->vdd == DUAL_VOLT_OCR_BIT)) { 824 /* ··· 1173 " level suspend\n"); 1174 } 1175 1176 - if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) { 1177 OMAP_HSMMC_WRITE(host->base, HCTL, 1178 OMAP_HSMMC_READ(host->base, HCTL) 1179 & SDVSCLR);
··· 55 #define VS30 (1 << 25) 56 #define SDVS18 (0x5 << 9) 57 #define SDVS30 (0x6 << 9) 58 + #define SDVS33 (0x7 << 9) 59 #define SDVSCLR 0xFFFFF1FF 60 #define SDVSDET 0x00000400 61 #define AUTOIDLE 0x1 ··· 375 } 376 #endif /* CONFIG_MMC_DEBUG */ 377 378 + /* 379 + * MMC controller internal state machines reset 380 + * 381 + * Used to reset command or data internal state machines, using respectively 382 + * SRC or SRD bit of SYSCTL register 383 + * Can be called from interrupt context 384 + */ 385 + static inline void mmc_omap_reset_controller_fsm(struct mmc_omap_host *host, 386 + unsigned long bit) 387 + { 388 + unsigned long i = 0; 389 + unsigned long limit = (loops_per_jiffy * 390 + msecs_to_jiffies(MMC_TIMEOUT_MS)); 391 + 392 + OMAP_HSMMC_WRITE(host->base, SYSCTL, 393 + OMAP_HSMMC_READ(host->base, SYSCTL) | bit); 394 + 395 + while ((OMAP_HSMMC_READ(host->base, SYSCTL) & bit) && 396 + (i++ < limit)) 397 + cpu_relax(); 398 + 399 + if (OMAP_HSMMC_READ(host->base, SYSCTL) & bit) 400 + dev_err(mmc_dev(host->mmc), 401 + "Timeout waiting on controller reset in %s\n", 402 + __func__); 403 + } 404 405 /* 406 * MMC controller IRQ handler ··· 403 (status & CMD_CRC)) { 404 if (host->cmd) { 405 if (status & CMD_TIMEOUT) { 406 + mmc_omap_reset_controller_fsm(host, SRC); 407 host->cmd->error = -ETIMEDOUT; 408 } else { 409 host->cmd->error = -EILSEQ; 410 } 411 end_cmd = 1; 412 } 413 + if (host->data) { 414 mmc_dma_cleanup(host); 415 + mmc_omap_reset_controller_fsm(host, SRD); 416 + } 417 } 418 if ((status & DATA_TIMEOUT) || 419 (status & DATA_CRC)) { ··· 426 mmc_dma_cleanup(host); 427 else 428 host->data->error = -EILSEQ; 429 + mmc_omap_reset_controller_fsm(host, SRD); 430 end_trans = 1; 431 } 432 } ··· 456 } 457 458 /* 459 + * Switch MMC interface voltage ... only relevant for MMC1. 460 + * 461 + * MMC2 and MMC3 use fixed 1.8V levels, and maybe a transceiver. 462 + * The MMC2 transceiver controls are used instead of DAT4..DAT7. 463 + * Some chips, like eMMC ones, use internal transceivers. 464 */ 465 static int omap_mmc_switch_opcond(struct mmc_omap_host *host, int vdd) 466 { 467 u32 reg_val = 0; 468 int ret; 469 + 470 + if (host->id != OMAP_MMC1_DEVID) 471 + return 0; 472 473 /* Disable the clocks */ 474 clk_disable(host->fclk); ··· 485 OMAP_HSMMC_WRITE(host->base, HCTL, 486 OMAP_HSMMC_READ(host->base, HCTL) & SDVSCLR); 487 reg_val = OMAP_HSMMC_READ(host->base, HCTL); 488 + 489 /* 490 * If a MMC dual voltage card is detected, the set_ios fn calls 491 * this fn with VDD bit set for 1.8V. Upon card removal from the 492 * slot, omap_mmc_set_ios sets the VDD back to 3V on MMC_POWER_OFF. 493 * 494 + * Cope with a bit of slop in the range ... per data sheets: 495 + * - "1.8V" for vdds_mmc1/vdds_mmc1a can be up to 2.45V max, 496 + * but recommended values are 1.71V to 1.89V 497 + * - "3.0V" for vdds_mmc1/vdds_mmc1a can be up to 3.5V max, 498 + * but recommended values are 2.7V to 3.3V 499 + * 500 + * Board setup code shouldn't permit anything very out-of-range. 501 + * TWL4030-family VMMC1 and VSIM regulators are fine (avoiding the 502 + * middle range) but VSIM can't power DAT4..DAT7 at more than 3V. 
503 */ 504 + if ((1 << vdd) <= MMC_VDD_23_24) 505 reg_val |= SDVS18; 506 + else 507 + reg_val |= SDVS30; 508 509 OMAP_HSMMC_WRITE(host->base, HCTL, reg_val); 510 ··· 517 { 518 struct mmc_omap_host *host = container_of(work, struct mmc_omap_host, 519 mmc_carddetect_work); 520 + struct omap_mmc_slot_data *slot = &mmc_slot(host); 521 + 522 + host->carddetect = slot->card_detect(slot->card_detect_irq); 523 524 sysfs_notify(&host->mmc->class_dev.kobj, NULL, "cover_switch"); 525 if (host->carddetect) { 526 mmc_detect_change(host->mmc, (HZ * 200) / 1000); 527 } else { 528 + mmc_omap_reset_controller_fsm(host, SRD); 529 mmc_detect_change(host->mmc, (HZ * 50) / 1000); 530 } 531 } ··· 538 { 539 struct mmc_omap_host *host = (struct mmc_omap_host *)dev_id; 540 541 schedule_work(&host->mmc_carddetect_work); 542 543 return IRQ_HANDLED; ··· 757 case MMC_POWER_OFF: 758 mmc_slot(host).set_power(host->dev, host->slot_id, 0, 0); 759 /* 760 + * Reset interface voltage to 3V if it's 1.8V now; 761 + * only relevant on MMC-1, the others always use 1.8V. 762 + * 763 * REVISIT: If we are able to detect cards after unplugging 764 * a 1.8V card, this code should not be needed. 765 */ 766 + if (host->id != OMAP_MMC1_DEVID) 767 + break; 768 if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) { 769 int vdd = fls(host->mmc->ocr_avail) - 1; 770 if (omap_mmc_switch_opcond(host, vdd) != 0) ··· 784 } 785 786 if (host->id == OMAP_MMC1_DEVID) { 787 + /* Only MMC1 can interface at 3V without some flavor 788 + * of external transceiver; but they all handle 1.8V. 789 + */ 790 if ((OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET) && 791 (ios->vdd == DUAL_VOLT_OCR_BIT)) { 792 /* ··· 1137 " level suspend\n"); 1138 } 1139 1140 + if (host->id == OMAP_MMC1_DEVID 1141 + && !(OMAP_HSMMC_READ(host->base, HCTL) 1142 + & SDVSDET)) { 1143 OMAP_HSMMC_WRITE(host->base, HCTL, 1144 OMAP_HSMMC_READ(host->base, HCTL) 1145 & SDVSCLR);
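Editor's note: the new mmc_omap_reset_controller_fsm() above replaces several unbounded `while (read & bit);` spins. If the controller never clears SRC/SRD, the old code hung the CPU; the helper gives up after a budget derived from loops_per_jiffy and logs the failure. The shape of a bounded busy-wait, reduced to portable C (hypothetical register pointer; the driver uses cpu_relax() and a jiffies-based budget):

    #include <stdint.h>
    #include <stdio.h>

    static int wait_bit_clear(volatile uint32_t *reg, uint32_t bit,
                              unsigned long budget)
    {
        while ((*reg & bit) && budget)
            budget--;   /* kernel code would cpu_relax() here */

        return (*reg & bit) ? -1 : 0;   /* -1: hardware never acked the reset */
    }

    int main(void)
    {
        uint32_t fake_sysctl = 0x2;   /* stuck reset bit that never self-clears */

        printf("result: %d\n", wait_bit_clear(&fake_sysctl, 0x2, 1000000));
        return 0;   /* prints -1 instead of hanging forever */
    }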
+1 -1
drivers/mmc/host/s3cmci.c
··· 329 330 to_ptr = host->base + host->sdidata; 331 332 - while ((fifo = fifo_free(host))) { 333 if (!host->pio_bytes) { 334 res = get_data_buffer(host, &host->pio_bytes, 335 &host->pio_ptr);
··· 329 330 to_ptr = host->base + host->sdidata; 331 332 + while ((fifo = fifo_free(host)) > 3) { 333 if (!host->pio_bytes) { 334 res = get_data_buffer(host, &host->pio_bytes, 335 &host->pio_ptr);
+1 -2
drivers/mmc/host/sdhci-pci.c
··· 144 SDHCI_QUIRK_32BIT_DMA_SIZE | 145 SDHCI_QUIRK_32BIT_ADMA_SIZE | 146 SDHCI_QUIRK_RESET_AFTER_REQUEST | 147 - SDHCI_QUIRK_BROKEN_SMALL_PIO | 148 - SDHCI_QUIRK_FORCE_HIGHSPEED; 149 } 150 151 /*
··· 144 SDHCI_QUIRK_32BIT_DMA_SIZE | 145 SDHCI_QUIRK_32BIT_ADMA_SIZE | 146 SDHCI_QUIRK_RESET_AFTER_REQUEST | 147 + SDHCI_QUIRK_BROKEN_SMALL_PIO; 148 } 149 150 /*
+4 -3
drivers/mmc/host/sdhci.c
··· 1636 mmc->f_max = host->max_clk; 1637 mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ; 1638 1639 - if ((caps & SDHCI_CAN_DO_HISPD) || 1640 - (host->quirks & SDHCI_QUIRK_FORCE_HIGHSPEED)) 1641 mmc->caps |= MMC_CAP_SD_HIGHSPEED; 1642 1643 mmc->ocr_avail = 0; ··· 1722 #endif 1723 1724 #ifdef SDHCI_USE_LEDS_CLASS 1725 - host->led.name = mmc_hostname(mmc); 1726 host->led.brightness = LED_OFF; 1727 host->led.default_trigger = mmc_hostname(mmc); 1728 host->led.brightness_set = sdhci_led_control;
··· 1636 mmc->f_max = host->max_clk; 1637 mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SDIO_IRQ; 1638 1639 + if (caps & SDHCI_CAN_DO_HISPD) 1640 mmc->caps |= MMC_CAP_SD_HIGHSPEED; 1641 1642 mmc->ocr_avail = 0; ··· 1723 #endif 1724 1725 #ifdef SDHCI_USE_LEDS_CLASS 1726 + snprintf(host->led_name, sizeof(host->led_name), 1727 + "%s::", mmc_hostname(mmc)); 1728 + host->led.name = host->led_name; 1729 host->led.brightness = LED_OFF; 1730 host->led.default_trigger = mmc_hostname(mmc); 1731 host->led.brightness_set = sdhci_led_control;
+1 -2
drivers/mmc/host/sdhci.h
··· 208 #define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12) 209 /* Controller has an issue with buffer bits for small transfers */ 210 #define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13) 211 - /* Controller supports high speed but doesn't have the caps bit set */ 212 - #define SDHCI_QUIRK_FORCE_HIGHSPEED (1<<14) 213 214 int irq; /* Device IRQ */ 215 void __iomem * ioaddr; /* Mapped address */ ··· 220 221 #if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE) 222 struct led_classdev led; /* LED control */ 223 #endif 224 225 spinlock_t lock; /* Mutex */
··· 208 #define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12) 209 /* Controller has an issue with buffer bits for small transfers */ 210 #define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13) 211 212 int irq; /* Device IRQ */ 213 void __iomem * ioaddr; /* Mapped address */ ··· 222 223 #if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE) 224 struct led_classdev led; /* LED control */ 225 + char led_name[32]; 226 #endif 227 228 spinlock_t lock; /* Mutex */
+2 -1
drivers/mtd/nand/atmel_nand.c
··· 139 struct nand_chip *nand_chip = mtd->priv; 140 struct atmel_nand_host *host = nand_chip->priv; 141 142 - return gpio_get_value(host->board->rdy_pin); 143 } 144 145 /*
··· 139 struct nand_chip *nand_chip = mtd->priv; 140 struct atmel_nand_host *host = nand_chip->priv; 141 142 + return gpio_get_value(host->board->rdy_pin) ^ 143 + !!host->board->rdy_pin_active_low; 144 } 145 146 /*
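Editor's note: the atmel_nand ready test now honours boards that wire the ready/busy pin active-low. gpio_get_value() may return any non-zero value and so may the board flag, hence the `!!` normalisation before the XOR: XOR with 1 inverts the level, XOR with 0 passes it through (without `!!`, `1 ^ 4` would yield 5, not 0). The truth table as a runnable check:

    #include <stdio.h>

    int main(void)
    {
        for (int raw = 0; raw <= 1; raw++)
            for (int active_low = 0; active_low <= 4; active_low += 4)
                printf("raw=%d active_low=%d -> ready=%d\n",
                       raw, active_low, raw ^ !!active_low);
        return 0;
    }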
+15 -1
drivers/pci/intel-iommu.c
··· 61 /* global iommu list, set NULL for ignored DMAR units */ 62 static struct intel_iommu **g_iommus; 63 64 /* 65 * 0: Present 66 * 1-11: Reserved ··· 787 u32 val; 788 unsigned long flag; 789 790 - if (!cap_rwbf(iommu->cap)) 791 return; 792 val = iommu->gcmd | DMA_GCMD_WBF; 793 ··· 3139 .unmap = intel_iommu_unmap_range, 3140 .iova_to_phys = intel_iommu_iova_to_phys, 3141 };
··· 61 /* global iommu list, set NULL for ignored DMAR units */ 62 static struct intel_iommu **g_iommus; 63 64 + static int rwbf_quirk; 65 + 66 /* 67 * 0: Present 68 * 1-11: Reserved ··· 785 u32 val; 786 unsigned long flag; 787 788 + if (!rwbf_quirk && !cap_rwbf(iommu->cap)) 789 return; 790 val = iommu->gcmd | DMA_GCMD_WBF; 791 ··· 3137 .unmap = intel_iommu_unmap_range, 3138 .iova_to_phys = intel_iommu_iova_to_phys, 3139 }; 3140 + 3141 + static void __devinit quirk_iommu_rwbf(struct pci_dev *dev) 3142 + { 3143 + /* 3144 + * Mobile 4 Series Chipset neglects to set RWBF capability, 3145 + * but needs it: 3146 + */ 3147 + printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n"); 3148 + rwbf_quirk = 1; 3149 + } 3150 + 3151 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
+4 -6
drivers/pci/msi.c
··· 103 } 104 } 105 106 - /* 107 - * Essentially, this is ((1 << (1 << x)) - 1), but without the 108 - * undefinedness of a << 32. 109 - */ 110 static inline __attribute_const__ u32 msi_mask(unsigned x) 111 { 112 - static const u32 mask[] = { 1, 2, 4, 0xf, 0xff, 0xffff, 0xffffffff }; 113 - return mask[x]; 114 } 115 116 static void msix_flush_writes(struct irq_desc *desc)
··· 103 } 104 } 105 106 static inline __attribute_const__ u32 msi_mask(unsigned x) 107 { 108 + /* Don't shift by >= width of type */ 109 + if (x >= 5) 110 + return 0xffffffff; 111 + return (1 << (1 << x)) - 1; 112 } 113 114 static void msix_flush_writes(struct irq_desc *desc)
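Editor's note: shifting a 32-bit value by 32 is undefined behaviour in C, which is why msi_mask() must special-case x >= 5 (1 << 5 == 32) before computing `(1 << (1 << x)) - 1`; the rewrite trades the old lookup table for the direct computation plus that guard. The edge cases, checked in userspace:

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t msi_mask(unsigned x)
    {
        if (x >= 5)            /* 1u << 32 would be undefined */
            return 0xffffffff;
        return (1u << (1u << x)) - 1;
    }

    int main(void)
    {
        for (unsigned x = 0; x <= 6; x++)
            printf("msi_mask(%u) = 0x%08x\n", x, msi_mask(x));
        return 0;   /* 0x1, 0x3, 0xf, 0xff, 0xffff, then saturates at 0xffffffff */
    }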
+9 -4
drivers/pci/pci.c
··· 1540 } 1541 1542 /** 1543 - * pci_request_region - Reserved PCI I/O and memory resource 1544 * @pdev: PCI device whose resources are to be reserved 1545 * @bar: BAR to be reserved 1546 * @res_name: Name to be associated with resource. 1547 * 1548 * Mark the PCI region associated with PCI device @pdev BR @bar as 1549 * being reserved by owner @res_name. Do not access any 1550 * address inside the PCI regions unless this call returns 1551 * successfully. 1552 * 1553 * Returns 0 on success, or %EBUSY on error. A warning 1554 * message is also printed on failure. ··· 1593 } 1594 1595 /** 1596 - * pci_request_region - Reserved PCI I/O and memory resource 1597 * @pdev: PCI device whose resources are to be reserved 1598 * @bar: BAR to be reserved 1599 - * @res_name: Name to be associated with resource. 1600 * 1601 - * Mark the PCI region associated with PCI device @pdev BR @bar as 1602 * being reserved by owner @res_name. Do not access any 1603 * address inside the PCI regions unless this call returns 1604 * successfully.
··· 1540 } 1541 1542 /** 1543 + * __pci_request_region - Reserved PCI I/O and memory resource 1544 * @pdev: PCI device whose resources are to be reserved 1545 * @bar: BAR to be reserved 1546 * @res_name: Name to be associated with resource. 1547 + * @exclusive: whether the region access is exclusive or not 1548 * 1549 * Mark the PCI region associated with PCI device @pdev BR @bar as 1550 * being reserved by owner @res_name. Do not access any 1551 * address inside the PCI regions unless this call returns 1552 * successfully. 1553 + * 1554 + * If @exclusive is set, then the region is marked so that userspace 1555 + * is explicitly not allowed to map the resource via /dev/mem or 1556 + * sysfs MMIO access. 1557 * 1558 * Returns 0 on success, or %EBUSY on error. A warning 1559 * message is also printed on failure. ··· 1588 } 1589 1590 /** 1591 + * pci_request_region - Reserve PCI I/O and memory resource 1592 * @pdev: PCI device whose resources are to be reserved 1593 * @bar: BAR to be reserved 1594 + * @res_name: Name to be associated with resource 1595 * 1596 + * Mark the PCI region associated with PCI device @pdev BAR @bar as 1597 * being reserved by owner @res_name. Do not access any 1598 * address inside the PCI regions unless this call returns 1599 * successfully.
+10 -10
drivers/pci/pci.h
··· 16 #endif 17 18 /** 19 - * Firmware PM callbacks 20 * 21 - * @is_manageable - returns 'true' if given device is power manageable by the 22 - * platform firmware 23 * 24 - * @set_state - invokes the platform firmware to set the device's power state 25 * 26 - * @choose_state - returns PCI power state of given device preferred by the 27 - * platform; to be used during system-wide transitions from a 28 - * sleeping state to the working state and vice versa 29 * 30 - * @can_wakeup - returns 'true' if given device is capable of waking up the 31 - * system from a sleeping state 32 * 33 - * @sleep_wake - enables/disables the system wake up capability of given device 34 * 35 * If given platform is generally capable of power managing PCI devices, all of 36 * these callbacks are mandatory.
··· 16 #endif 17 18 /** 19 + * struct pci_platform_pm_ops - Firmware PM callbacks 20 * 21 + * @is_manageable: returns 'true' if given device is power manageable by the 22 + * platform firmware 23 * 24 + * @set_state: invokes the platform firmware to set the device's power state 25 * 26 + * @choose_state: returns PCI power state of given device preferred by the 27 + * platform; to be used during system-wide transitions from a 28 + * sleeping state to the working state and vice versa 29 * 30 + * @can_wakeup: returns 'true' if given device is capable of waking up the 31 + * system from a sleeping state 32 * 33 + * @sleep_wake: enables/disables the system wake up capability of given device 34 * 35 * If given platform is generally capable of power managing PCI devices, all of 36 * these callbacks are mandatory.
+1
drivers/pci/rom.c
··· 55 56 /** 57 * pci_get_rom_size - obtain the actual size of the ROM image 58 * @rom: kernel virtual pointer to image of ROM 59 * @size: size of PCI window 60 * return: size of actual ROM image
··· 55 56 /** 57 * pci_get_rom_size - obtain the actual size of the ROM image 58 + * @pdev: target PCI device 59 * @rom: kernel virtual pointer to image of ROM 60 * @size: size of PCI window 61 * return: size of actual ROM image
+2
drivers/platform/x86/Kconfig
··· 62 depends on EXPERIMENTAL 63 depends on BACKLIGHT_CLASS_DEVICE 64 depends on RFKILL 65 default n 66 ---help--- 67 This driver adds support for rfkill and backlight control to Dell ··· 302 config EEEPC_LAPTOP 303 tristate "Eee PC Hotkey Driver (EXPERIMENTAL)" 304 depends on ACPI 305 depends on EXPERIMENTAL 306 select BACKLIGHT_CLASS_DEVICE 307 select HWMON
··· 62 depends on EXPERIMENTAL 63 depends on BACKLIGHT_CLASS_DEVICE 64 depends on RFKILL 65 + depends on POWER_SUPPLY 66 default n 67 ---help--- 68 This driver adds support for rfkill and backlight control to Dell ··· 301 config EEEPC_LAPTOP 302 tristate "Eee PC Hotkey Driver (EXPERIMENTAL)" 303 depends on ACPI 304 + depends on INPUT 305 depends on EXPERIMENTAL 306 select BACKLIGHT_CLASS_DEVICE 307 select HWMON
+18 -7
drivers/platform/x86/fujitsu-laptop.c
··· 166 struct platform_device *pf_device; 167 struct kfifo *fifo; 168 spinlock_t fifo_lock; 169 int rfkill_state; 170 int logolamp_registered; 171 int kblamps_registered; ··· 527 show_lid_state(struct device *dev, 528 struct device_attribute *attr, char *buf) 529 { 530 - if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD) 531 return sprintf(buf, "unknown\n"); 532 if (fujitsu_hotkey->rfkill_state & 0x100) 533 return sprintf(buf, "open\n"); ··· 539 show_dock_state(struct device *dev, 540 struct device_attribute *attr, char *buf) 541 { 542 - if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD) 543 return sprintf(buf, "unknown\n"); 544 if (fujitsu_hotkey->rfkill_state & 0x200) 545 return sprintf(buf, "docked\n"); ··· 551 show_radios_state(struct device *dev, 552 struct device_attribute *attr, char *buf) 553 { 554 - if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD) 555 return sprintf(buf, "unknown\n"); 556 if (fujitsu_hotkey->rfkill_state & 0x20) 557 return sprintf(buf, "on\n"); ··· 929 ; /* No action, result is discarded */ 930 vdbg_printk(FUJLAPTOP_DBG_INFO, "Discarded %i ringbuffer entries\n", i); 931 932 - fujitsu_hotkey->rfkill_state = 933 - call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 934 935 /* Suspect this is a keymap of the application panel, print it */ 936 printk(KERN_INFO "fujitsu-laptop: BTNI: [0x%x]\n", ··· 1015 1016 input = fujitsu_hotkey->input; 1017 1018 - fujitsu_hotkey->rfkill_state = 1019 - call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 1020 1021 switch (event) { 1022 case ACPI_FUJITSU_NOTIFY_CODE1:
··· 166 struct platform_device *pf_device; 167 struct kfifo *fifo; 168 spinlock_t fifo_lock; 169 + int rfkill_supported; 170 int rfkill_state; 171 int logolamp_registered; 172 int kblamps_registered; ··· 526 show_lid_state(struct device *dev, 527 struct device_attribute *attr, char *buf) 528 { 529 + if (!(fujitsu_hotkey->rfkill_supported & 0x100)) 530 return sprintf(buf, "unknown\n"); 531 if (fujitsu_hotkey->rfkill_state & 0x100) 532 return sprintf(buf, "open\n"); ··· 538 show_dock_state(struct device *dev, 539 struct device_attribute *attr, char *buf) 540 { 541 + if (!(fujitsu_hotkey->rfkill_supported & 0x200)) 542 return sprintf(buf, "unknown\n"); 543 if (fujitsu_hotkey->rfkill_state & 0x200) 544 return sprintf(buf, "docked\n"); ··· 550 show_radios_state(struct device *dev, 551 struct device_attribute *attr, char *buf) 552 { 553 + if (!(fujitsu_hotkey->rfkill_supported & 0x20)) 554 return sprintf(buf, "unknown\n"); 555 if (fujitsu_hotkey->rfkill_state & 0x20) 556 return sprintf(buf, "on\n"); ··· 928 ; /* No action, result is discarded */ 929 vdbg_printk(FUJLAPTOP_DBG_INFO, "Discarded %i ringbuffer entries\n", i); 930 931 + fujitsu_hotkey->rfkill_supported = 932 + call_fext_func(FUNC_RFKILL, 0x0, 0x0, 0x0); 933 + 934 + /* Make sure our bitmask of supported functions is cleared if the 935 + RFKILL function block is not implemented, like on the S7020. */ 936 + if (fujitsu_hotkey->rfkill_supported == UNSUPPORTED_CMD) 937 + fujitsu_hotkey->rfkill_supported = 0; 938 + 939 + if (fujitsu_hotkey->rfkill_supported) 940 + fujitsu_hotkey->rfkill_state = 941 + call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 942 943 /* Suspect this is a keymap of the application panel, print it */ 944 printk(KERN_INFO "fujitsu-laptop: BTNI: [0x%x]\n", ··· 1005 1006 input = fujitsu_hotkey->input; 1007 1008 + if (fujitsu_hotkey->rfkill_supported) 1009 + fujitsu_hotkey->rfkill_state = 1010 + call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0); 1011 1012 switch (event) { 1013 case ACPI_FUJITSU_NOTIFY_CODE1:
+4 -1
drivers/s390/char/sclp.c
··· 280 rc = 0; 281 for (offset = sizeof(struct sccb_header); offset < sccb->length; 282 offset += evbuf->length) { 283 - /* Search for event handler */ 284 evbuf = (struct evbuf_header *) ((addr_t) sccb + offset); 285 reg = NULL; 286 list_for_each(l, &sclp_reg_list) { 287 reg = list_entry(l, struct sclp_register, list);
··· 280 rc = 0; 281 for (offset = sizeof(struct sccb_header); offset < sccb->length; 282 offset += evbuf->length) { 283 evbuf = (struct evbuf_header *) ((addr_t) sccb + offset); 284 + /* Check for malformed hardware response */ 285 + if (evbuf->length == 0) 286 + break; 287 + /* Search for event handler */ 288 reg = NULL; 289 list_for_each(l, &sclp_reg_list) { 290 reg = list_entry(l, struct sclp_register, list);
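Editor's note: the sclp receiver walks a response buffer by `offset += evbuf->length`, so a malformed event buffer reporting length zero would pin the loop at the same offset forever; breaking out on zero guarantees forward progress. The hazard in miniature:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Length-prefixed records; the record at offset 8 is corrupt (length 0). */
        uint16_t buf[8] = { [0] = 8, [4] = 0 };
        size_t offset = 0;

        while (offset < sizeof(buf)) {
            uint16_t len = buf[offset / sizeof(buf[0])];
            if (len == 0)   /* the fix: otherwise offset += 0 spins forever */
                break;
            offset += len;
        }
        printf("stopped safely at offset %zu\n", offset);   /* 8 */
        return 0;
    }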
+5
drivers/s390/char/sclp_cmd.c
··· 19 #include <linux/memory.h> 20 #include <asm/chpid.h> 21 #include <asm/sclp.h> 22 23 #include "sclp.h" 24 ··· 475 goto skip_add; 476 if (start + size > VMEM_MAX_PHYS) 477 size = VMEM_MAX_PHYS - start; 478 add_memory(0, start, size); 479 skip_add: 480 first_rn = rn;
··· 19 #include <linux/memory.h> 20 #include <asm/chpid.h> 21 #include <asm/sclp.h> 22 + #include <asm/setup.h> 23 24 #include "sclp.h" 25 ··· 474 goto skip_add; 475 if (start + size > VMEM_MAX_PHYS) 476 size = VMEM_MAX_PHYS - start; 477 + if (memory_end_set && (start >= memory_end)) 478 + goto skip_add; 479 + if (memory_end_set && (start + size > memory_end)) 480 + size = memory_end - start; 481 add_memory(0, start, size); 482 skip_add: 483 first_rn = rn;
+11 -4
drivers/scsi/ibmvscsi/ibmvfc.c
··· 1573 vfc_cmd->resp_len = sizeof(vfc_cmd->rsp); 1574 vfc_cmd->cancel_key = (unsigned long)cmnd->device->hostdata; 1575 vfc_cmd->tgt_scsi_id = rport->port_id; 1576 - if ((rport->supported_classes & FC_COS_CLASS3) && 1577 - (fc_host_supported_classes(vhost->host) & FC_COS_CLASS3)) 1578 - vfc_cmd->flags = IBMVFC_CLASS_3_ERR; 1579 vfc_cmd->iu.xfer_len = scsi_bufflen(cmnd); 1580 int_to_scsilun(cmnd->device->lun, &vfc_cmd->iu.lun); 1581 memcpy(vfc_cmd->iu.cdb, cmnd->cmnd, cmnd->cmd_len); ··· 3263 return -ENOMEM; 3264 } 3265 3266 tgt->scsi_id = scsi_id; 3267 tgt->new_scsi_id = scsi_id; 3268 tgt->vhost = vhost; ··· 3574 static void ibmvfc_tgt_add_rport(struct ibmvfc_target *tgt) 3575 { 3576 struct ibmvfc_host *vhost = tgt->vhost; 3577 - struct fc_rport *rport; 3578 unsigned long flags; 3579 3580 tgt_dbg(tgt, "Adding rport\n"); 3581 rport = fc_remote_port_add(vhost->host, 0, &tgt->ids);
··· 1573 vfc_cmd->resp_len = sizeof(vfc_cmd->rsp); 1574 vfc_cmd->cancel_key = (unsigned long)cmnd->device->hostdata; 1575 vfc_cmd->tgt_scsi_id = rport->port_id; 1576 vfc_cmd->iu.xfer_len = scsi_bufflen(cmnd); 1577 int_to_scsilun(cmnd->device->lun, &vfc_cmd->iu.lun); 1578 memcpy(vfc_cmd->iu.cdb, cmnd->cmnd, cmnd->cmd_len); ··· 3266 return -ENOMEM; 3267 } 3268 3269 + memset(tgt, 0, sizeof(*tgt)); 3270 tgt->scsi_id = scsi_id; 3271 tgt->new_scsi_id = scsi_id; 3272 tgt->vhost = vhost; ··· 3576 static void ibmvfc_tgt_add_rport(struct ibmvfc_target *tgt) 3577 { 3578 struct ibmvfc_host *vhost = tgt->vhost; 3579 + struct fc_rport *rport = tgt->rport; 3580 unsigned long flags; 3581 + 3582 + if (rport) { 3583 + tgt_dbg(tgt, "Setting rport roles\n"); 3584 + fc_remote_port_rolechg(rport, tgt->ids.roles); 3585 + spin_lock_irqsave(vhost->host->host_lock, flags); 3586 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); 3587 + spin_unlock_irqrestore(vhost->host->host_lock, flags); 3588 + return; 3589 + } 3590 3591 tgt_dbg(tgt, "Adding rport\n"); 3592 rport = fc_remote_port_add(vhost->host, 0, &tgt->ids);
+1 -1
drivers/scsi/ibmvscsi/ibmvfc.h
··· 32 #define IBMVFC_DRIVER_VERSION "1.0.4" 33 #define IBMVFC_DRIVER_DATE "(November 14, 2008)" 34 35 - #define IBMVFC_DEFAULT_TIMEOUT 15 36 #define IBMVFC_INIT_TIMEOUT 120 37 #define IBMVFC_MAX_REQUESTS_DEFAULT 100 38
··· 32 #define IBMVFC_DRIVER_VERSION "1.0.4" 33 #define IBMVFC_DRIVER_DATE "(November 14, 2008)" 34 35 + #define IBMVFC_DEFAULT_TIMEOUT 60 36 #define IBMVFC_INIT_TIMEOUT 120 37 #define IBMVFC_MAX_REQUESTS_DEFAULT 100 38
+1
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 432 sdev_printk(KERN_ERR, cmd->device, 433 "Can't allocate memory " 434 "for indirect table\n"); 435 return 0; 436 } 437 }
··· 432 sdev_printk(KERN_ERR, cmd->device, 433 "Can't allocate memory " 434 "for indirect table\n"); 435 + scsi_dma_unmap(cmd); 436 return 0; 437 } 438 }
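The one-line fix above is the classic error-path rule: whatever a function maps, locks, or allocates on the way in must be undone on every early return, not just the success path. A toy sketch of the pattern, with stand-in functions in place of the scsi_dma_map()/scsi_dma_unmap() pair:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical setup/teardown pair standing in for a DMA
     * map/unmap. */
    static int map_buffers(void)    { puts("map");   return 0; }
    static void unmap_buffers(void) { puts("unmap"); }

    static int queue_command(size_t table_size)
    {
        void *table;

        if (map_buffers())
            return -1;

        table = malloc(table_size);
        if (!table) {
            /* Failing here without unmapping would leak the mapping --
             * the bug class fixed above. */
            unmap_buffers();
            return -1;
        }

        /* ... build and submit the command ... */
        free(table);
        unmap_buffers();
        return 0;
    }

    int main(void)
    {
        return queue_command(64) ? 1 : 0;
    }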
+2 -1
drivers/scsi/libiscsi.c
··· 1998 if (!shost->can_queue) 1999 shost->can_queue = ISCSI_DEF_XMIT_CMDS_MAX; 2000 2001 return scsi_add_host(shost, pdev); 2002 } 2003 EXPORT_SYMBOL_GPL(iscsi_host_add); ··· 2022 shost = scsi_host_alloc(sht, sizeof(struct iscsi_host) + dd_data_size); 2023 if (!shost) 2024 return NULL; 2025 - shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out; 2026 2027 if (qdepth > ISCSI_MAX_CMD_PER_LUN || qdepth < 1) { 2028 if (qdepth != 0)
··· 1998 if (!shost->can_queue) 1999 shost->can_queue = ISCSI_DEF_XMIT_CMDS_MAX; 2000 2001 + if (!shost->transportt->eh_timed_out) 2002 + shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out; 2003 return scsi_add_host(shost, pdev); 2004 } 2005 EXPORT_SYMBOL_GPL(iscsi_host_add); ··· 2020 shost = scsi_host_alloc(sht, sizeof(struct iscsi_host) + dd_data_size); 2021 if (!shost) 2022 return NULL; 2023 2024 if (qdepth > ISCSI_MAX_CMD_PER_LUN || qdepth < 1) { 2025 if (qdepth != 0)
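The new test matters because the transport template is shared by every host using that transport: assigning unconditionally, as the removed line in iscsi_host_alloc() did, would overwrite a handler a transport driver had already installed. A small sketch of the install-a-default-only-if-unset pattern (names hypothetical):

    #include <stdio.h>

    struct transport_template {           /* hypothetical shared template */
        int (*eh_timed_out)(int cmd);
    };

    static int generic_timed_out(int cmd)
    {
        printf("generic handler for cmd %d\n", cmd);
        return 0;
    }

    /* Install a default only when the owner of the template has not
     * already chosen a handler; unconditionally assigning here would
     * clobber transport-specific behaviour for every user of the
     * template. */
    static void host_add(struct transport_template *t)
    {
        if (!t->eh_timed_out)
            t->eh_timed_out = generic_timed_out;
    }

    int main(void)
    {
        struct transport_template t = { 0 };

        host_add(&t);
        t.eh_timed_out(1);
        return 0;
    }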
+1
drivers/scsi/lpfc/lpfc_els.c
··· 5258 sizeof(struct lpfc_name)); 5259 break; 5260 default: 5261 return; 5262 } 5263 memcpy(els_data->wwpn, &ndlp->nlp_portname, sizeof(struct lpfc_name));
··· 5258 sizeof(struct lpfc_name)); 5259 break; 5260 default: 5261 + kfree(els_data); 5262 return; 5263 } 5264 memcpy(els_data->wwpn, &ndlp->nlp_portname, sizeof(struct lpfc_name));
+6 -7
drivers/scsi/qla2xxx/qla_attr.c
··· 1265 test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags)) 1266 msleep(1000); 1267 1268 - if (ha->mqenable) { 1269 - if (qla25xx_delete_queues(vha, 0) != QLA_SUCCESS) 1270 - qla_printk(KERN_WARNING, ha, 1271 - "Queue delete failed.\n"); 1272 - vha->req_ques[0] = ha->req_q_map[0]->id; 1273 - } 1274 - 1275 qla24xx_disable_vp(vha); 1276 1277 fc_remove_host(vha->host); ··· 1285 "has stopped\n", 1286 vha->host_no, vha->vp_idx, vha)); 1287 } 1288 1289 scsi_host_put(vha->host); 1290 qla_printk(KERN_INFO, ha, "vport %d deleted\n", id);
··· 1265 test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags)) 1266 msleep(1000); 1267 1268 qla24xx_disable_vp(vha); 1269 1270 fc_remove_host(vha->host); ··· 1292 "has stopped\n", 1293 vha->host_no, vha->vp_idx, vha)); 1294 } 1295 + 1296 + if (ha->mqenable) { 1297 + if (qla25xx_delete_queues(vha, 0) != QLA_SUCCESS) 1298 + qla_printk(KERN_WARNING, ha, 1299 + "Queue delete failed.\n"); 1300 + } 1301 1302 scsi_host_put(vha->host); 1303 qla_printk(KERN_INFO, ha, "vport %d deleted\n", id);
+5
drivers/scsi/qla2xxx/qla_def.h
··· 2135 /* Work events. */ 2136 enum qla_work_type { 2137 QLA_EVT_AEN, 2138 }; 2139 2140 ··· 2150 enum fc_host_event_code code; 2151 u32 data; 2152 } aen; 2153 } u; 2154 }; 2155
··· 2135 /* Work events. */ 2136 enum qla_work_type { 2137 QLA_EVT_AEN, 2138 + QLA_EVT_IDC_ACK, 2139 }; 2140 2141 ··· 2149 enum fc_host_event_code code; 2150 u32 data; 2151 } aen; 2152 + struct { 2153 + #define QLA_IDC_ACK_REGS 7 2154 + uint16_t mb[QLA_IDC_ACK_REGS]; 2155 + } idc_ack; 2156 } u; 2157 }; 2158
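The work-event structure is a type tag plus a union of per-type payloads; adding an event means a new enum value, a new union member, and a new case in the dispatcher (see qla_os.c further down). A self-contained sketch of that shape, loosely modeled on the structures above:

    #include <stdio.h>
    #include <stdint.h>

    enum work_type { EVT_AEN, EVT_IDC_ACK };   /* mirrors the enum above */

    #define ACK_REGS 7

    struct work_evt {                /* hypothetical, modeled on the diff */
        enum work_type type;
        union {
            struct { uint32_t code, data; } aen;
            struct { uint16_t mb[ACK_REGS]; } idc_ack;
        } u;
    };

    static void do_work(const struct work_evt *e)
    {
        switch (e->type) {
        case EVT_AEN:
            printf("AEN code=%u data=%u\n", e->u.aen.code, e->u.aen.data);
            break;
        case EVT_IDC_ACK:
            printf("IDC ACK mb[0]=0x%x\n", e->u.idc_ack.mb[0]);
            break;
        }
    }

    int main(void)
    {
        struct work_evt e = { .type = EVT_IDC_ACK };

        e.u.idc_ack.mb[0] = 0x101;
        do_work(&e);
        return 0;
    }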
+1 -1
drivers/scsi/qla2xxx/qla_devtbl.h
··· 72 "QLA2462", "Sun PCI-X 2.0 to 4Gb FC, Dual Channel", /* 0x141 */ 73 "QLE2460", "Sun PCI-Express to 2Gb FC, Single Channel", /* 0x142 */ 74 "QLE2462", "Sun PCI-Express to 4Gb FC, Single Channel", /* 0x143 */ 75 - "QEM2462" "Server I/O Module 4Gb FC, Dual Channel", /* 0x144 */ 76 "QLE2440", "PCI-Express to 4Gb FC, Single Channel", /* 0x145 */ 77 "QLE2464", "PCI-Express to 4Gb FC, Quad Channel", /* 0x146 */ 78 "QLA2440", "PCI-X 2.0 to 4Gb FC, Single Channel", /* 0x147 */
··· 72 "QLA2462", "Sun PCI-X 2.0 to 4Gb FC, Dual Channel", /* 0x141 */ 73 "QLE2460", "Sun PCI-Express to 2Gb FC, Single Channel", /* 0x142 */ 74 "QLE2462", "Sun PCI-Express to 4Gb FC, Single Channel", /* 0x143 */ 75 + "QEM2462", "Server I/O Module 4Gb FC, Dual Channel", /* 0x144 */ 76 "QLE2440", "PCI-Express to 4Gb FC, Single Channel", /* 0x145 */ 77 "QLE2464", "PCI-Express to 4Gb FC, Quad Channel", /* 0x146 */ 78 "QLA2440", "PCI-X 2.0 to 4Gb FC, Single Channel", /* 0x147 */
+2
drivers/scsi/qla2xxx/qla_fw.h
··· 1402 #define MBA_IDC_NOTIFY 0x8101 1403 #define MBA_IDC_TIME_EXT 0x8102 1404 1405 struct nvram_81xx { 1406 /* NVRAM header. */ 1407 uint8_t id[4];
··· 1402 #define MBA_IDC_NOTIFY 0x8101 1403 #define MBA_IDC_TIME_EXT 0x8102 1404 1405 + #define MBC_IDC_ACK 0x101 1406 + 1407 struct nvram_81xx { 1408 /* NVRAM header. */ 1409 uint8_t id[4];
+5 -4
drivers/scsi/qla2xxx/qla_gbl.h
··· 72 extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int); 73 extern int qla2x00_post_aen_work(struct scsi_qla_host *, enum 74 fc_host_event_code, u32); 75 76 extern void qla2x00_abort_fcport_cmds(fc_port_t *); 77 extern struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *, ··· 267 268 extern int qla84xx_verify_chip(struct scsi_qla_host *, uint16_t *); 269 270 /* 271 * Global Function Prototypes in qla_isr.c source file. 272 */ ··· 379 380 /* Global function prototypes for multi-q */ 381 extern int qla25xx_request_irq(struct rsp_que *); 382 - extern int qla25xx_init_req_que(struct scsi_qla_host *, struct req_que *, 383 - uint8_t); 384 - extern int qla25xx_init_rsp_que(struct scsi_qla_host *, struct rsp_que *, 385 - uint8_t); 386 extern int qla25xx_create_req_que(struct qla_hw_data *, uint16_t, uint8_t, 387 uint16_t, uint8_t, uint8_t); 388 extern int qla25xx_create_rsp_que(struct qla_hw_data *, uint16_t, uint8_t,
··· 72 extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int); 73 extern int qla2x00_post_aen_work(struct scsi_qla_host *, enum 74 fc_host_event_code, u32); 75 + extern int qla2x00_post_idc_ack_work(struct scsi_qla_host *, uint16_t *); 76 77 extern void qla2x00_abort_fcport_cmds(fc_port_t *); 78 extern struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *, ··· 266 267 extern int qla84xx_verify_chip(struct scsi_qla_host *, uint16_t *); 268 269 + extern int qla81xx_idc_ack(scsi_qla_host_t *, uint16_t *); 270 + 271 /* 272 * Global Function Prototypes in qla_isr.c source file. 273 */ ··· 376 377 /* Global function prototypes for multi-q */ 378 extern int qla25xx_request_irq(struct rsp_que *); 379 + extern int qla25xx_init_req_que(struct scsi_qla_host *, struct req_que *); 380 + extern int qla25xx_init_rsp_que(struct scsi_qla_host *, struct rsp_que *); 381 extern int qla25xx_create_req_que(struct qla_hw_data *, uint16_t, uint8_t, 382 uint16_t, uint8_t, uint8_t); 383 extern int qla25xx_create_rsp_que(struct qla_hw_data *, uint16_t, uint8_t,
+3 -4
drivers/scsi/qla2xxx/qla_init.c
··· 1226 icb->firmware_options_2 |= 1227 __constant_cpu_to_le32(BIT_18); 1228 1229 - icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_22); 1230 icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_23); 1231 - ha->rsp_q_map[0]->options = icb->firmware_options_2; 1232 1233 WRT_REG_DWORD(&reg->isp25mq.req_q_in, 0); 1234 WRT_REG_DWORD(&reg->isp25mq.req_q_out, 0); ··· 3492 rsp = ha->rsp_q_map[i]; 3493 if (rsp) { 3494 rsp->options &= ~BIT_0; 3495 - ret = qla25xx_init_rsp_que(base_vha, rsp, rsp->options); 3496 if (ret != QLA_SUCCESS) 3497 DEBUG2_17(printk(KERN_WARNING 3498 "%s Rsp que:%d init failed\n", __func__, ··· 3506 if (req) { 3507 /* Clear outstanding commands array. */ 3508 req->options &= ~BIT_0; 3509 - ret = qla25xx_init_req_que(base_vha, req, req->options); 3510 if (ret != QLA_SUCCESS) 3511 DEBUG2_17(printk(KERN_WARNING 3512 "%s Req que:%d init failed\n", __func__,
··· 1226 icb->firmware_options_2 |= 1227 __constant_cpu_to_le32(BIT_18); 1228 1229 + icb->firmware_options_2 &= __constant_cpu_to_le32(~BIT_22); 1230 icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_23); 1231 1232 WRT_REG_DWORD(&reg->isp25mq.req_q_in, 0); 1233 WRT_REG_DWORD(&reg->isp25mq.req_q_out, 0); ··· 3493 rsp = ha->rsp_q_map[i]; 3494 if (rsp) { 3495 rsp->options &= ~BIT_0; 3496 + ret = qla25xx_init_rsp_que(base_vha, rsp); 3497 if (ret != QLA_SUCCESS) 3498 DEBUG2_17(printk(KERN_WARNING 3499 "%s Rsp que:%d init failed\n", __func__, ··· 3507 if (req) { 3508 /* Clear outstanding commands array. */ 3509 req->options &= ~BIT_0; 3510 + ret = qla25xx_init_req_que(base_vha, req); 3511 if (ret != QLA_SUCCESS) 3512 DEBUG2_17(printk(KERN_WARNING 3513 "%s Req que:%d init failed\n", __func__,
+35 -23
drivers/scsi/qla2xxx/qla_isr.c
··· 266 } 267 } 268 269 /** 270 * qla2x00_async_event() - Process asynchronous events. 271 * @ha: SCSI driver HA context ··· 748 "%04x %04x %04x\n", vha->host_no, mb[1], mb[2], mb[3])); 749 break; 750 case MBA_IDC_COMPLETE: 751 - DEBUG2(printk("scsi(%ld): Inter-Driver Commucation " 752 - "Complete -- %04x %04x %04x\n", vha->host_no, mb[1], mb[2], 753 - mb[3])); 754 - break; 755 case MBA_IDC_NOTIFY: 756 - DEBUG2(printk("scsi(%ld): Inter-Driver Commucation " 757 - "Request Notification -- %04x %04x %04x\n", vha->host_no, 758 - mb[1], mb[2], mb[3])); 759 - /**** Mailbox registers 4 - 7 valid!!! */ 760 - break; 761 case MBA_IDC_TIME_EXT: 762 - DEBUG2(printk("scsi(%ld): Inter-Driver Commucation " 763 - "Time Extension -- %04x %04x %04x\n", vha->host_no, mb[1], 764 - mb[2], mb[3])); 765 - /**** Mailbox registers 4 - 7 valid!!! */ 766 break; 767 } 768 ··· 1729 struct qla_hw_data *ha; 1730 struct rsp_que *rsp; 1731 struct device_reg_24xx __iomem *reg; 1732 - uint16_t msix_disabled_hccr = 0; 1733 1734 rsp = (struct rsp_que *) dev_id; 1735 if (!rsp) { ··· 1741 1742 spin_lock_irq(&ha->hardware_lock); 1743 1744 - msix_disabled_hccr = rsp->options; 1745 - if (!rsp->id) 1746 - msix_disabled_hccr &= __constant_cpu_to_le32(BIT_22); 1747 - else 1748 - msix_disabled_hccr &= __constant_cpu_to_le32(BIT_6); 1749 - 1750 qla24xx_process_response_queue(rsp); 1751 - 1752 - if (!msix_disabled_hccr) 1753 - WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT); 1754 1755 spin_unlock_irq(&ha->hardware_lock); 1756
··· 266 } 267 } 268 269 + static void 270 + qla81xx_idc_event(scsi_qla_host_t *vha, uint16_t aen, uint16_t descr) 271 + { 272 + static char *event[] = 273 + { "Complete", "Request Notification", "Time Extension" }; 274 + int rval; 275 + struct device_reg_24xx __iomem *reg24 = &vha->hw->iobase->isp24; 276 + uint16_t __iomem *wptr; 277 + uint16_t cnt, timeout, mb[QLA_IDC_ACK_REGS]; 278 + 279 + /* Seed data -- mailbox1 -> mailbox7. */ 280 + wptr = (uint16_t __iomem *)&reg24->mailbox1; 281 + for (cnt = 0; cnt < QLA_IDC_ACK_REGS; cnt++, wptr++) 282 + mb[cnt] = RD_REG_WORD(wptr); 283 + 284 + DEBUG2(printk("scsi(%ld): Inter-Driver Communication %s -- " 285 + "%04x %04x %04x %04x %04x %04x %04x.\n", vha->host_no, 286 + event[aen & 0xff], 287 + mb[0], mb[1], mb[2], mb[3], mb[4], mb[5], mb[6])); 288 + 289 + /* Acknowledgement needed? [Notify && non-zero timeout]. */ 290 + timeout = (descr >> 8) & 0xf; 291 + if (aen != MBA_IDC_NOTIFY || !timeout) 292 + return; 293 + 294 + DEBUG2(printk("scsi(%ld): Inter-Driver Communication %s -- " 295 + "ACK timeout=%d.\n", vha->host_no, event[aen & 0xff], timeout)); 296 + 297 + rval = qla2x00_post_idc_ack_work(vha, mb); 298 + if (rval != QLA_SUCCESS) 299 + qla_printk(KERN_WARNING, vha->hw, 300 + "IDC failed to post ACK.\n"); 301 + } 302 + 303 /** 304 * qla2x00_async_event() - Process asynchronous events. 305 * @ha: SCSI driver HA context ··· 714 "%04x %04x %04x\n", vha->host_no, mb[1], mb[2], mb[3])); 715 break; 716 case MBA_IDC_COMPLETE: 717 case MBA_IDC_NOTIFY: 718 case MBA_IDC_TIME_EXT: 719 + qla81xx_idc_event(vha, mb[0], mb[1]); 720 break; 721 } 722 ··· 1707 struct qla_hw_data *ha; 1708 struct rsp_que *rsp; 1709 struct device_reg_24xx __iomem *reg; 1710 1711 rsp = (struct rsp_que *) dev_id; 1712 if (!rsp) { ··· 1720 1721 spin_lock_irq(&ha->hardware_lock); 1722 1723 qla24xx_process_response_queue(rsp); 1724 1725 spin_unlock_irq(&ha->hardware_lock); 1726
+32 -8
drivers/scsi/qla2xxx/qla_mbx.c
··· 3090 } 3091 3092 int 3093 - qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req, 3094 - uint8_t options) 3095 { 3096 int rval; 3097 unsigned long flags; ··· 3100 struct qla_hw_data *ha = vha->hw; 3101 3102 mcp->mb[0] = MBC_INITIALIZE_MULTIQ; 3103 - mcp->mb[1] = options; 3104 mcp->mb[2] = MSW(LSD(req->dma)); 3105 mcp->mb[3] = LSW(LSD(req->dma)); 3106 mcp->mb[6] = MSW(MSD(req->dma)); ··· 3127 mcp->tov = 60; 3128 3129 spin_lock_irqsave(&ha->hardware_lock, flags); 3130 - if (!(options & BIT_0)) { 3131 WRT_REG_DWORD(&reg->req_q_in, 0); 3132 WRT_REG_DWORD(&reg->req_q_out, 0); 3133 } ··· 3141 } 3142 3143 int 3144 - qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp, 3145 - uint8_t options) 3146 { 3147 int rval; 3148 unsigned long flags; ··· 3151 struct qla_hw_data *ha = vha->hw; 3152 3153 mcp->mb[0] = MBC_INITIALIZE_MULTIQ; 3154 - mcp->mb[1] = options; 3155 mcp->mb[2] = MSW(LSD(rsp->dma)); 3156 mcp->mb[3] = LSW(LSD(rsp->dma)); 3157 mcp->mb[6] = MSW(MSD(rsp->dma)); ··· 3176 mcp->tov = 60; 3177 3178 spin_lock_irqsave(&ha->hardware_lock, flags); 3179 - if (!(options & BIT_0)) { 3180 WRT_REG_DWORD(&reg->rsp_q_out, 0); 3181 WRT_REG_DWORD(&reg->rsp_q_in, 0); 3182 } ··· 3191 return rval; 3192 } 3193
··· 3090 } 3091 3092 int 3093 + qla25xx_init_req_que(struct scsi_qla_host *vha, struct req_que *req) 3094 { 3095 int rval; 3096 unsigned long flags; ··· 3101 struct qla_hw_data *ha = vha->hw; 3102 3103 mcp->mb[0] = MBC_INITIALIZE_MULTIQ; 3104 + mcp->mb[1] = req->options; 3105 mcp->mb[2] = MSW(LSD(req->dma)); 3106 mcp->mb[3] = LSW(LSD(req->dma)); 3107 mcp->mb[6] = MSW(MSD(req->dma)); ··· 3128 mcp->tov = 60; 3129 3130 spin_lock_irqsave(&ha->hardware_lock, flags); 3131 + if (!(req->options & BIT_0)) { 3132 WRT_REG_DWORD(&reg->req_q_in, 0); 3133 WRT_REG_DWORD(&reg->req_q_out, 0); 3134 } ··· 3142 } 3143 3144 int 3145 + qla25xx_init_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp) 3146 { 3147 int rval; 3148 unsigned long flags; ··· 3153 struct qla_hw_data *ha = vha->hw; 3154 3155 mcp->mb[0] = MBC_INITIALIZE_MULTIQ; 3156 + mcp->mb[1] = rsp->options; 3157 mcp->mb[2] = MSW(LSD(rsp->dma)); 3158 mcp->mb[3] = LSW(LSD(rsp->dma)); 3159 mcp->mb[6] = MSW(MSD(rsp->dma)); ··· 3178 mcp->tov = 60; 3179 3180 spin_lock_irqsave(&ha->hardware_lock, flags); 3181 + if (!(rsp->options & BIT_0)) { 3182 WRT_REG_DWORD(&reg->rsp_q_out, 0); 3183 WRT_REG_DWORD(&reg->rsp_q_in, 0); 3184 } ··· 3193 return rval; 3194 } 3195 3196 + int 3197 + qla81xx_idc_ack(scsi_qla_host_t *vha, uint16_t *mb) 3198 + { 3199 + int rval; 3200 + mbx_cmd_t mc; 3201 + mbx_cmd_t *mcp = &mc; 3202 + 3203 + DEBUG11(printk("%s(%ld): entered.\n", __func__, vha->host_no)); 3204 + 3205 + mcp->mb[0] = MBC_IDC_ACK; 3206 + memcpy(&mcp->mb[1], mb, QLA_IDC_ACK_REGS * sizeof(uint16_t)); 3207 + mcp->out_mb = MBX_7|MBX_6|MBX_5|MBX_4|MBX_3|MBX_2|MBX_1|MBX_0; 3208 + mcp->in_mb = MBX_0; 3209 + mcp->tov = MBX_TOV_SECONDS; 3210 + mcp->flags = 0; 3211 + rval = qla2x00_mailbox_command(vha, mcp); 3212 + 3213 + if (rval != QLA_SUCCESS) { 3214 + DEBUG2_3_11(printk("%s(%ld): failed=%x (%x).\n", __func__, 3215 + vha->host_no, rval, mcp->mb[0])); 3216 + } else { 3217 + DEBUG11(printk("%s(%ld): done.\n", __func__, vha->host_no)); 3218 + } 3219 + 3220 + return rval; 3221 + }
+6 -6
drivers/scsi/qla2xxx/qla_mid.c
··· 396 397 qla2x00_start_timer(vha, qla2x00_timer, WATCH_INTERVAL); 398 399 - memset(vha->req_ques, 0, sizeof(vha->req_ques) * QLA_MAX_HOST_QUES); 400 vha->req_ques[0] = ha->req_q_map[0]->id; 401 host->can_queue = ha->req_q_map[0]->length + 128; 402 host->this_id = 255; ··· 471 472 if (req) { 473 req->options |= BIT_0; 474 - ret = qla25xx_init_req_que(vha, req, req->options); 475 } 476 if (ret == QLA_SUCCESS) 477 qla25xx_free_req_que(vha, req); ··· 486 487 if (rsp) { 488 rsp->options |= BIT_0; 489 - ret = qla25xx_init_rsp_que(vha, rsp, rsp->options); 490 } 491 if (ret == QLA_SUCCESS) 492 qla25xx_free_rsp_que(vha, rsp); ··· 502 503 req->options |= BIT_3; 504 req->qos = qos; 505 - ret = qla25xx_init_req_que(vha, req, req->options); 506 if (ret != QLA_SUCCESS) 507 DEBUG2_17(printk(KERN_WARNING "%s failed\n", __func__)); 508 /* restore options bit */ ··· 632 req->max_q_depth = ha->req_q_map[0]->max_q_depth; 633 mutex_unlock(&ha->vport_lock); 634 635 - ret = qla25xx_init_req_que(base_vha, req, options); 636 if (ret != QLA_SUCCESS) { 637 qla_printk(KERN_WARNING, ha, "%s failed\n", __func__); 638 mutex_lock(&ha->vport_lock); ··· 710 if (ret) 711 goto que_failed; 712 713 - ret = qla25xx_init_rsp_que(base_vha, rsp, options); 714 if (ret != QLA_SUCCESS) { 715 qla_printk(KERN_WARNING, ha, "%s failed\n", __func__); 716 mutex_lock(&ha->vport_lock);
··· 396 397 qla2x00_start_timer(vha, qla2x00_timer, WATCH_INTERVAL); 398 399 + memset(vha->req_ques, 0, sizeof(vha->req_ques)); 400 vha->req_ques[0] = ha->req_q_map[0]->id; 401 host->can_queue = ha->req_q_map[0]->length + 128; 402 host->this_id = 255; ··· 471 472 if (req) { 473 req->options |= BIT_0; 474 + ret = qla25xx_init_req_que(vha, req); 475 } 476 if (ret == QLA_SUCCESS) 477 qla25xx_free_req_que(vha, req); ··· 486 487 if (rsp) { 488 rsp->options |= BIT_0; 489 + ret = qla25xx_init_rsp_que(vha, rsp); 490 } 491 if (ret == QLA_SUCCESS) 492 qla25xx_free_rsp_que(vha, rsp); ··· 502 503 req->options |= BIT_3; 504 req->qos = qos; 505 + ret = qla25xx_init_req_que(vha, req); 506 if (ret != QLA_SUCCESS) 507 DEBUG2_17(printk(KERN_WARNING "%s failed\n", __func__)); 508 /* restore options bit */ ··· 632 req->max_q_depth = ha->req_q_map[0]->max_q_depth; 633 mutex_unlock(&ha->vport_lock); 634 635 + ret = qla25xx_init_req_que(base_vha, req); 636 if (ret != QLA_SUCCESS) { 637 qla_printk(KERN_WARNING, ha, "%s failed\n", __func__); 638 mutex_lock(&ha->vport_lock); ··· 710 if (ret) 711 goto que_failed; 712 713 + ret = qla25xx_init_rsp_que(base_vha, rsp); 714 if (ret != QLA_SUCCESS) { 715 qla_printk(KERN_WARNING, ha, "%s failed\n", __func__); 716 mutex_lock(&ha->vport_lock);
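The memset fix removes a double multiplication: vha->req_ques is an array, so sizeof(vha->req_ques) is already the full array size, and multiplying by the element count again cleared memory far past the end of the field. A tiny demonstration of the arithmetic:

    #include <stdio.h>
    #include <string.h>

    #define NQUES 4

    int main(void)
    {
        unsigned int req_ques[NQUES];

        /* sizeof(req_ques) is already the size of the whole array;
         * multiplying by the element count again asks memset to clear
         * 4x the array and stomp whatever follows it in memory. */
        printf("sizeof(req_ques)         = %zu\n", sizeof(req_ques));
        printf("sizeof(req_ques) * NQUES = %zu  (the old, oversized length)\n",
               sizeof(req_ques) * NQUES);

        memset(req_ques, 0, sizeof(req_ques));   /* the corrected call */
        return 0;
    }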
+16
drivers/scsi/qla2xxx/qla_os.c
··· 2522 return qla2x00_post_work(vha, e, 1); 2523 } 2524 2525 static void 2526 qla2x00_do_work(struct scsi_qla_host *vha) 2527 { ··· 2551 case QLA_EVT_AEN: 2552 fc_host_post_event(vha->host, fc_get_event_number(), 2553 e->u.aen.code, e->u.aen.data); 2554 break; 2555 } 2556 if (e->flags & QLA_EVT_FLAG_FREE)
··· 2522 return qla2x00_post_work(vha, e, 1); 2523 } 2524 2525 + int 2526 + qla2x00_post_idc_ack_work(struct scsi_qla_host *vha, uint16_t *mb) 2527 + { 2528 + struct qla_work_evt *e; 2529 + 2530 + e = qla2x00_alloc_work(vha, QLA_EVT_IDC_ACK, 1); 2531 + if (!e) 2532 + return QLA_FUNCTION_FAILED; 2533 + 2534 + memcpy(e->u.idc_ack.mb, mb, QLA_IDC_ACK_REGS * sizeof(uint16_t)); 2535 + return qla2x00_post_work(vha, e, 1); 2536 + } 2537 + 2538 static void 2539 qla2x00_do_work(struct scsi_qla_host *vha) 2540 { ··· 2538 case QLA_EVT_AEN: 2539 fc_host_post_event(vha->host, fc_get_event_number(), 2540 e->u.aen.code, e->u.aen.data); 2541 + break; 2542 + case QLA_EVT_IDC_ACK: 2543 + qla81xx_idc_ack(vha, e->u.idc_ack.mb); 2544 break; 2545 } 2546 if (e->flags & QLA_EVT_FLAG_FREE)
+1 -1
drivers/scsi/qla2xxx/qla_sup.c
··· 684 "end=0x%x size=0x%x.\n", le32_to_cpu(region->code), start, 685 le32_to_cpu(region->end) >> 2, le32_to_cpu(region->size))); 686 687 - switch (le32_to_cpu(region->code)) { 688 case FLT_REG_FW: 689 ha->flt_region_fw = start; 690 break;
··· 684 "end=0x%x size=0x%x.\n", le32_to_cpu(region->code), start, 685 le32_to_cpu(region->end) >> 2, le32_to_cpu(region->size))); 686 687 + switch (le32_to_cpu(region->code) & 0xff) { 688 case FLT_REG_FW: 689 ha->flt_region_fw = start; 690 break;
+1 -1
drivers/scsi/qla2xxx/qla_version.h
··· 7 /* 8 * Driver version 9 */ 10 - #define QLA2XXX_VERSION "8.03.00-k2" 11 12 #define QLA_DRIVER_MAJOR_VER 8 13 #define QLA_DRIVER_MINOR_VER 3
··· 7 /* 8 * Driver version 9 */ 10 + #define QLA2XXX_VERSION "8.03.00-k3" 11 12 #define QLA_DRIVER_MAJOR_VER 8 13 #define QLA_DRIVER_MINOR_VER 3
+1
drivers/scsi/scsi_scan.c
··· 317 return sdev; 318 319 out_device_destroy: 320 transport_destroy_device(&sdev->sdev_gendev); 321 put_device(&sdev->sdev_gendev); 322 out:
··· 317 return sdev; 318 319 out_device_destroy: 320 + scsi_device_set_state(sdev, SDEV_DEL); 321 transport_destroy_device(&sdev->sdev_gendev); 322 put_device(&sdev->sdev_gendev); 323 out:
+1 -1
drivers/scsi/sg.c
··· 1078 case BLKTRACESETUP: 1079 return blk_trace_setup(sdp->device->request_queue, 1080 sdp->disk->disk_name, 1081 - sdp->device->sdev_gendev.devt, 1082 (char *)arg); 1083 case BLKTRACESTART: 1084 return blk_trace_startstop(sdp->device->request_queue, 1);
··· 1078 case BLKTRACESETUP: 1079 return blk_trace_setup(sdp->device->request_queue, 1080 sdp->disk->disk_name, 1081 + MKDEV(SCSI_GENERIC_MAJOR, sdp->index), 1082 (char *)arg); 1083 case BLKTRACESTART: 1084 return blk_trace_startstop(sdp->device->request_queue, 1);
+15
drivers/serial/8250.c
··· 2083 2084 serial8250_set_mctrl(&up->port, up->port.mctrl); 2085 2086 /* 2087 * Do a quick test to see if we receive an 2088 * interrupt when we enable the TX irq. ··· 2116 up->bugs &= ~UART_BUG_TXEN; 2117 } 2118 2119 spin_unlock_irqrestore(&up->port.lock, flags); 2120 2121 /*
··· 2083 2084 serial8250_set_mctrl(&up->port, up->port.mctrl); 2085 2086 + /* Serial over LAN (SoL) hack: 2087 + Intel 8257x Gigabit ethernet chips have a 2088 + 16550 emulation, to be used for Serial over LAN. 2089 + Those chips take longer than a normal serial 2090 + device to signal that transmit data has been 2091 + queued. Because of that, the TX-enable test below 2092 + generally fails. One solution would be to delay reading 2093 + iir, but the needed delay is variable and thus 2094 + unreliable, so let's just not test whether we receive 2095 + a TX irq. This way, we'll never enable UART_BUG_TXEN. 2096 + */ 2097 + if (up->port.flags & UPF_NO_TXEN_TEST) 2098 + goto dont_test_tx_en; 2099 + 2100 /* 2101 * Do a quick test to see if we receive an 2102 * interrupt when we enable the TX irq. ··· 2102 up->bugs &= ~UART_BUG_TXEN; 2103 } 2104 2105 + dont_test_tx_en: 2106 spin_unlock_irqrestore(&up->port.lock, flags); 2107 2108 /*
+36
drivers/serial/8250_pci.c
··· 798 return setup_port(priv, port, bar, offset, board->reg_shift); 799 } 800 801 /* This should be in linux/pci_ids.h */ 802 #define PCI_VENDOR_ID_SBSMODULARIO 0x124B 803 #define PCI_SUBVENDOR_ID_SBSMODULARIO 0x124B ··· 878 .subdevice = PCI_ANY_ID, 879 .init = pci_inteli960ni_init, 880 .setup = pci_default_setup, 881 }, 882 /* 883 * ITE
··· 798 return setup_port(priv, port, bar, offset, board->reg_shift); 799 } 800 801 + static int skip_tx_en_setup(struct serial_private *priv, 802 + const struct pciserial_board *board, 803 + struct uart_port *port, int idx) 804 + { 805 + port->flags |= UPF_NO_TXEN_TEST; 806 + printk(KERN_DEBUG "serial8250: skipping TxEn test for device " 807 + "[%04x:%04x] subsystem [%04x:%04x]\n", 808 + priv->dev->vendor, 809 + priv->dev->device, 810 + priv->dev->subsystem_vendor, 811 + priv->dev->subsystem_device); 812 + 813 + return pci_default_setup(priv, board, port, idx); 814 + } 815 + 816 /* This should be in linux/pci_ids.h */ 817 #define PCI_VENDOR_ID_SBSMODULARIO 0x124B 818 #define PCI_SUBVENDOR_ID_SBSMODULARIO 0x124B ··· 863 .subdevice = PCI_ANY_ID, 864 .init = pci_inteli960ni_init, 865 .setup = pci_default_setup, 866 + }, 867 + { 868 + .vendor = PCI_VENDOR_ID_INTEL, 869 + .device = PCI_DEVICE_ID_INTEL_8257X_SOL, 870 + .subvendor = PCI_ANY_ID, 871 + .subdevice = PCI_ANY_ID, 872 + .setup = skip_tx_en_setup, 873 + }, 874 + { 875 + .vendor = PCI_VENDOR_ID_INTEL, 876 + .device = PCI_DEVICE_ID_INTEL_82573L_SOL, 877 + .subvendor = PCI_ANY_ID, 878 + .subdevice = PCI_ANY_ID, 879 + .setup = skip_tx_en_setup, 880 + }, 881 + { 882 + .vendor = PCI_VENDOR_ID_INTEL, 883 + .device = PCI_DEVICE_ID_INTEL_82573E_SOL, 884 + .subvendor = PCI_ANY_ID, 885 + .subdevice = PCI_ANY_ID, 886 + .setup = skip_tx_en_setup, 887 }, 888 /* 889 * ITE
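The three new table entries route Intel SoL devices through skip_tx_en_setup(), which does nothing but set a flag the 8250 core checks at startup. The general shape -- a quirk table keyed on PCI IDs whose hit decorates the port with flags -- looks roughly like this (IDs and names made up):

    #include <stdio.h>
    #include <stdint.h>

    #define FLAG_NO_TXEN_TEST 0x1     /* mirrors UPF_NO_TXEN_TEST in spirit */

    struct quirk {                    /* hypothetical quirk table entry */
        uint16_t vendor, device;
        unsigned int flags;
    };

    static const struct quirk quirks[] = {
        { 0x8086, 0x1234, FLAG_NO_TXEN_TEST },   /* made-up IDs */
    };

    static unsigned int lookup_flags(uint16_t vendor, uint16_t device)
    {
        for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++)
            if (quirks[i].vendor == vendor && quirks[i].device == device)
                return quirks[i].flags;
        return 0;
    }

    int main(void)
    {
        unsigned int flags = lookup_flags(0x8086, 0x1234);

        if (flags & FLAG_NO_TXEN_TEST)
            puts("skipping TX-enable probe for this device");
        else
            puts("running TX-enable probe");
        return 0;
    }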
+4
drivers/serial/atmel_serial.c
··· 877 } 878 } 879 880 /* 881 * Finally, enable the serial port 882 */
··· 877 } 878 } 879 880 + /* Save current CSR for comparison in atmel_tasklet_func() */ 881 + atmel_port->irq_status_prev = UART_GET_CSR(port); 882 + atmel_port->irq_status = atmel_port->irq_status_prev; 883 + 884 /* 885 * Finally, enable the serial port 886 */
+3
drivers/serial/jsm/jsm_driver.c
··· 84 brd->pci_dev = pdev; 85 if (pdev->device == PCIE_DEVICE_ID_NEO_4_IBM) 86 brd->maxports = 4; 87 else 88 brd->maxports = 2; 89 ··· 214 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_NEO_2RJ45), 0, 0, 2 }, 215 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_NEO_2RJ45PRI), 0, 0, 3 }, 216 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCIE_DEVICE_ID_NEO_4_IBM), 0, 0, 4 }, 217 { 0, } 218 }; 219 MODULE_DEVICE_TABLE(pci, jsm_pci_tbl);
··· 84 brd->pci_dev = pdev; 85 if (pdev->device == PCIE_DEVICE_ID_NEO_4_IBM) 86 brd->maxports = 4; 87 + else if (pdev->device == PCI_DEVICE_ID_DIGI_NEO_8) 88 + brd->maxports = 8; 89 else 90 brd->maxports = 2; 91 ··· 212 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_NEO_2RJ45), 0, 0, 2 }, 213 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_NEO_2RJ45PRI), 0, 0, 3 }, 214 { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCIE_DEVICE_ID_NEO_4_IBM), 0, 0, 4 }, 215 + { PCI_DEVICE(PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_NEO_8), 0, 0, 5 }, 216 { 0, } 217 }; 218 MODULE_DEVICE_TABLE(pci, jsm_pci_tbl);
+1 -1
drivers/spi/spi_gpio.c
··· 114 115 static inline int getmiso(const struct spi_device *spi) 116 { 117 - return gpio_get_value(SPI_MISO_GPIO); 118 } 119 120 #undef pdata
··· 114 115 static inline int getmiso(const struct spi_device *spi) 116 { 117 + return !!gpio_get_value(SPI_MISO_GPIO); 118 } 119 120 #undef pdata
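gpio_get_value() is only guaranteed to return zero or non-zero -- on many controllers it is the raw masked register bit, e.g. 0x40 -- so callers that shift or accumulate the result need it collapsed to exactly 0 or 1. That is all the added !! does; the idiom in isolation:

    #include <stdio.h>

    /* A GPIO read may report "high" as any non-zero value (a raw port
     * register bit, for instance); !! collapses it to exactly 0 or 1
     * so callers can shift or OR it safely. */
    static int raw_gpio_read(void) { return 0x40; }   /* hypothetical */

    int main(void)
    {
        int raw = raw_gpio_read();
        int bit = !!raw;

        printf("raw=0x%x  normalized=%d  (raw == 1 is %s)\n",
               raw, bit, raw == 1 ? "true" : "false");
        return 0;
    }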
+2 -13
drivers/usb/core/hcd-pci.c
··· 298 EXPORT_SYMBOL_GPL(usb_hcd_pci_suspend); 299 300 /** 301 - * usb_hcd_pci_resume_early - resume a PCI-based HCD before IRQs are enabled 302 - * @dev: USB Host Controller being resumed 303 - * 304 - * Store this function in the HCD's struct pci_driver as .resume_early. 305 - */ 306 - int usb_hcd_pci_resume_early(struct pci_dev *dev) 307 - { 308 - pci_restore_state(dev); 309 - return 0; 310 - } 311 - EXPORT_SYMBOL_GPL(usb_hcd_pci_resume_early); 312 - 313 - /** 314 * usb_hcd_pci_resume - power management resume of a PCI-based HCD 315 * @dev: USB Host Controller being resumed 316 * ··· 319 of_node, 0, 1); 320 } 321 #endif 322 323 hcd = pci_get_drvdata(dev); 324 if (hcd->state != HC_STATE_SUSPENDED) {
··· 298 EXPORT_SYMBOL_GPL(usb_hcd_pci_suspend); 299 300 /** 301 * usb_hcd_pci_resume - power management resume of a PCI-based HCD 302 * @dev: USB Host Controller being resumed 303 * ··· 332 of_node, 0, 1); 333 } 334 #endif 335 + 336 + pci_restore_state(dev); 337 338 hcd = pci_get_drvdata(dev); 339 if (hcd->state != HC_STATE_SUSPENDED) {
-1
drivers/usb/core/hcd.h
··· 257 258 #ifdef CONFIG_PM 259 extern int usb_hcd_pci_suspend(struct pci_dev *dev, pm_message_t msg); 260 - extern int usb_hcd_pci_resume_early(struct pci_dev *dev); 261 extern int usb_hcd_pci_resume(struct pci_dev *dev); 262 #endif /* CONFIG_PM */ 263
··· 257 258 #ifdef CONFIG_PM 259 extern int usb_hcd_pci_suspend(struct pci_dev *dev, pm_message_t msg); 260 extern int usb_hcd_pci_resume(struct pci_dev *dev); 261 #endif /* CONFIG_PM */ 262
+2 -2
drivers/usb/gadget/pxa25x_udc.c
··· 904 905 /* most IN status is the same, but ISO can't stall */ 906 *ep->reg_udccs = UDCCS_BI_TPC|UDCCS_BI_FTF|UDCCS_BI_TUR 907 - | (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC) 908 - ? 0 : UDCCS_BI_SST; 909 } 910 911
··· 904 905 /* most IN status is the same, but ISO can't stall */ 906 *ep->reg_udccs = UDCCS_BI_TPC|UDCCS_BI_FTF|UDCCS_BI_TUR 907 + | (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC 908 + ? 0 : UDCCS_BI_SST); 909 } 910 911
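The fix moves a parenthesis: ?: binds more loosely than |, so in the old code the whole OR-chain became the ternary condition and the intended UDCCS_BI_SST bit was never ORed in. A compilable illustration with stand-in bit values:

    #include <stdio.h>

    #define BIT_A   0x4
    #define BIT_B   0x8
    #define BIT_SST 0x10

    int main(void)
    {
        int iso = 0;   /* pretend this endpoint is not isochronous */

        /* ?: binds more loosely than |, so without parentheses the
         * whole OR-expression becomes the ternary condition: */
        int wrong = BIT_A | BIT_B | (iso) ? 0 : BIT_SST;
        int right = BIT_A | BIT_B | ((iso) ? 0 : BIT_SST);

        printf("wrong = 0x%x   right = 0x%x\n", wrong, right);
        return 0;
    }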
-1
drivers/usb/host/ehci-pci.c
··· 432 433 #ifdef CONFIG_PM 434 .suspend = usb_hcd_pci_suspend, 435 - .resume_early = usb_hcd_pci_resume_early, 436 .resume = usb_hcd_pci_resume, 437 #endif 438 .shutdown = usb_hcd_pci_shutdown,
··· 432 433 #ifdef CONFIG_PM 434 .suspend = usb_hcd_pci_suspend, 435 .resume = usb_hcd_pci_resume, 436 #endif 437 .shutdown = usb_hcd_pci_shutdown,
-1
drivers/usb/host/ohci-pci.c
··· 487 488 #ifdef CONFIG_PM 489 .suspend = usb_hcd_pci_suspend, 490 - .resume_early = usb_hcd_pci_resume_early, 491 .resume = usb_hcd_pci_resume, 492 #endif 493
··· 487 488 #ifdef CONFIG_PM 489 .suspend = usb_hcd_pci_suspend, 490 .resume = usb_hcd_pci_resume, 491 #endif 492
-1
drivers/usb/host/uhci-hcd.c
··· 942 943 #ifdef CONFIG_PM 944 .suspend = usb_hcd_pci_suspend, 945 - .resume_early = usb_hcd_pci_resume_early, 946 .resume = usb_hcd_pci_resume, 947 #endif /* PM */ 948 };
··· 942 943 #ifdef CONFIG_PM 944 .suspend = usb_hcd_pci_suspend, 945 .resume = usb_hcd_pci_resume, 946 #endif /* PM */ 947 };
+2 -2
drivers/usb/host/whci/asl.c
··· 227 * Now that the ASL is updated, complete the removal of any 228 * removed qsets. 229 */ 230 - spin_lock(&whc->lock); 231 232 list_for_each_entry_safe(qset, t, &whc->async_removed_list, list_node) { 233 qset_remove_complete(whc, qset); 234 } 235 236 - spin_unlock(&whc->lock); 237 } 238 239 /**
··· 227 * Now that the ASL is updated, complete the removal of any 228 * removed qsets. 229 */ 230 + spin_lock_irq(&whc->lock); 231 232 list_for_each_entry_safe(qset, t, &whc->async_removed_list, list_node) { 233 qset_remove_complete(whc, qset); 234 } 235 236 + spin_unlock_irq(&whc->lock); 237 } 238 239 /**
+2 -2
drivers/usb/host/whci/pzl.c
··· 255 * Now that the PZL is updated, complete the removal of any 256 * removed qsets. 257 */ 258 - spin_lock(&whc->lock); 259 260 list_for_each_entry_safe(qset, t, &whc->periodic_removed_list, list_node) { 261 qset_remove_complete(whc, qset); 262 } 263 264 - spin_unlock(&whc->lock); 265 } 266 267 /**
··· 255 * Now that the PZL is updated, complete the removal of any 256 * removed qsets. 257 */ 258 + spin_lock_irq(&whc->lock); 259 260 list_for_each_entry_safe(qset, t, &whc->periodic_removed_list, list_node) { 261 qset_remove_complete(whc, qset); 262 } 263 264 + spin_unlock_irq(&whc->lock); 265 } 266 267 /**
+2 -8
drivers/video/Kconfig
··· 1054 1055 config FB_I810 1056 tristate "Intel 810/815 support (EXPERIMENTAL)" 1057 - depends on EXPERIMENTAL && PCI && X86_32 1058 - select AGP 1059 - select AGP_INTEL 1060 - select FB 1061 select FB_MODE_HELPERS 1062 select FB_CFB_FILLRECT 1063 select FB_CFB_COPYAREA ··· 1117 1118 config FB_INTEL 1119 tristate "Intel 830M/845G/852GM/855GM/865G/915G/945G/945GM/965G/965GM support (EXPERIMENTAL)" 1120 - depends on EXPERIMENTAL && PCI && X86 1121 - select FB 1122 - select AGP 1123 - select AGP_INTEL 1124 select FB_MODE_HELPERS 1125 select FB_CFB_FILLRECT 1126 select FB_CFB_COPYAREA
··· 1054 1055 config FB_I810 1056 tristate "Intel 810/815 support (EXPERIMENTAL)" 1057 + depends on EXPERIMENTAL && FB && PCI && X86_32 && AGP_INTEL 1058 select FB_MODE_HELPERS 1059 select FB_CFB_FILLRECT 1060 select FB_CFB_COPYAREA ··· 1120 1121 config FB_INTEL 1122 tristate "Intel 830M/845G/852GM/855GM/865G/915G/945G/945GM/965G/965GM support (EXPERIMENTAL)" 1123 + depends on EXPERIMENTAL && FB && PCI && X86 && AGP_INTEL 1124 select FB_MODE_HELPERS 1125 select FB_CFB_FILLRECT 1126 select FB_CFB_COPYAREA
-1
drivers/video/aty/aty128fb.c
··· 2365 static void aty128_set_suspend(struct aty128fb_par *par, int suspend) 2366 { 2367 u32 pmgt; 2368 - u16 pwr_command; 2369 struct pci_dev *pdev = par->pdev; 2370 2371 if (!par->pm_reg)
··· 2365 static void aty128_set_suspend(struct aty128fb_par *par, int suspend) 2366 { 2367 u32 pmgt; 2368 struct pci_dev *pdev = par->pdev; 2369 2370 if (!par->pm_reg)
+1 -1
drivers/watchdog/Kconfig
··· 406 ---help--- 407 Hardware driver for the intel TCO timer based watchdog devices. 408 These drivers are included in the Intel 82801 I/O Controller 409 - Hub family (from ICH0 up to ICH8) and in the Intel 6300ESB 410 controller hub. 411 412 The TCO (Total Cost of Ownership) timer is a watchdog timer
··· 406 ---help--- 407 Hardware driver for the intel TCO timer based watchdog devices. 408 These drivers are included in the Intel 82801 I/O Controller 409 + Hub family (from ICH0 up to ICH10) and in the Intel 63xxESB 410 controller hub. 411 412 The TCO (Total Cost of Ownership) timer is a watchdog timer
+2 -2
drivers/watchdog/at91rm9200_wdt.c
··· 107 static int at91_wdt_settimeout(int new_time) 108 { 109 /* 110 - * All counting occurs at SLOW_CLOCK / 128 = 0.256 Hz 111 * 112 * Since WDV is a 16-bit counter, the maximum period is 113 - * 65536 / 0.256 = 256 seconds. 114 */ 115 if ((new_time <= 0) || (new_time > WDT_MAX_TIME)) 116 return -EINVAL;
··· 107 static int at91_wdt_settimeout(int new_time) 108 { 109 /* 110 + * All counting occurs at SLOW_CLOCK / 128 = 256 Hz 111 * 112 * Since WDV is a 16-bit counter, the maximum period is 113 + * 65536 / 256 = 256 seconds. 114 */ 115 if ((new_time <= 0) || (new_time > WDT_MAX_TIME)) 116 return -EINVAL;
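The corrected comment is plain unit arithmetic: the AT91 slow clock runs at 32.768 kHz, so dividing by 128 gives a 256 Hz tick (not 0.256 Hz), and a 16-bit counter at 256 Hz spans 65536 / 256 = 256 seconds -- hence WDT_MAX_TIME. The whole calculation:

    #include <stdio.h>

    #define SLOW_CLOCK 32768           /* 32.768 kHz slow clock */
    #define WDV_MAX    65536           /* range of the 16-bit counter */

    int main(void)
    {
        int tick_hz = SLOW_CLOCK / 128;           /* 256 Hz */
        int max_seconds = WDV_MAX / tick_hz;      /* 65536 / 256 = 256 s */

        printf("counter rate: %d Hz, maximum timeout: %d s\n",
               tick_hz, max_seconds);
        return 0;
    }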
+1
drivers/watchdog/at91sam9_wdt.c
··· 18 #include <linux/errno.h> 19 #include <linux/fs.h> 20 #include <linux/init.h> 21 #include <linux/kernel.h> 22 #include <linux/miscdevice.h> 23 #include <linux/module.h>
··· 18 #include <linux/errno.h> 19 #include <linux/fs.h> 20 #include <linux/init.h> 21 + #include <linux/io.h> 22 #include <linux/kernel.h> 23 #include <linux/miscdevice.h> 24 #include <linux/module.h>
+28 -4
drivers/watchdog/iTCO_vendor_support.c
··· 1 /* 2 * intel TCO vendor specific watchdog driver support 3 * 4 - * (c) Copyright 2006-2008 Wim Van Sebroeck <wim@iguana.be>. 5 * 6 * This program is free software; you can redistribute it and/or 7 * modify it under the terms of the GNU General Public License ··· 19 20 /* Module and version information */ 21 #define DRV_NAME "iTCO_vendor_support" 22 - #define DRV_VERSION "1.02" 23 #define PFX DRV_NAME ": " 24 25 /* Includes */ ··· 76 * time is about 40 seconds, and the minimum hang time is about 77 * 20.6 seconds. 78 */ 79 80 static void supermicro_old_pre_keepalive(unsigned long acpibase) 81 { ··· 248 void iTCO_vendor_pre_start(unsigned long acpibase, 249 unsigned int heartbeat) 250 { 251 - if (vendorsupport == SUPERMICRO_NEW_BOARD) 252 supermicro_new_pre_start(heartbeat); 253 } 254 EXPORT_SYMBOL(iTCO_vendor_pre_start); 255 256 void iTCO_vendor_pre_stop(unsigned long acpibase) 257 { 258 - if (vendorsupport == SUPERMICRO_NEW_BOARD) 259 supermicro_new_pre_stop(); 260 } 261 EXPORT_SYMBOL(iTCO_vendor_pre_stop);
··· 1 /* 2 * intel TCO vendor specific watchdog driver support 3 * 4 + * (c) Copyright 2006-2009 Wim Van Sebroeck <wim@iguana.be>. 5 * 6 * This program is free software; you can redistribute it and/or 7 * modify it under the terms of the GNU General Public License ··· 19 20 /* Module and version information */ 21 #define DRV_NAME "iTCO_vendor_support" 22 + #define DRV_VERSION "1.03" 23 #define PFX DRV_NAME ": " 24 25 /* Includes */ ··· 76 * time is about 40 seconds, and the minimum hang time is about 77 * 20.6 seconds. 78 */ 79 + 80 + static void supermicro_old_pre_start(unsigned long acpibase) 81 + { 82 + unsigned long val32; 83 + 84 + /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */ 85 + val32 = inl(SMI_EN); 86 + val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */ 87 + outl(val32, SMI_EN); /* Needed to activate watchdog */ 88 + } 89 + 90 + static void supermicro_old_pre_stop(unsigned long acpibase) 91 + { 92 + unsigned long val32; 93 + 94 + /* Bit 13: TCO_EN -> 1 = Enables the TCO logic to generate SMI# */ 95 + val32 = inl(SMI_EN); 96 + val32 |= 0x00002000; /* Turn on SMI clearing watchdog */ 97 + outl(val32, SMI_EN); /* Needed to deactivate watchdog */ 98 + } 99 100 static void supermicro_old_pre_keepalive(unsigned long acpibase) 101 { ··· 228 void iTCO_vendor_pre_start(unsigned long acpibase, 229 unsigned int heartbeat) 230 { 231 + if (vendorsupport == SUPERMICRO_OLD_BOARD) 232 + supermicro_old_pre_start(acpibase); 233 + else if (vendorsupport == SUPERMICRO_NEW_BOARD) 234 supermicro_new_pre_start(heartbeat); 235 } 236 EXPORT_SYMBOL(iTCO_vendor_pre_start); 237 238 void iTCO_vendor_pre_stop(unsigned long acpibase) 239 { 240 + if (vendorsupport == SUPERMICRO_OLD_BOARD) 241 + supermicro_old_pre_stop(acpibase); 242 + else if (vendorsupport == SUPERMICRO_NEW_BOARD) 243 supermicro_new_pre_stop(); 244 } 245 EXPORT_SYMBOL(iTCO_vendor_pre_stop);
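The magic mask in supermicro_old_pre_start() is nothing more exotic than the complement of bit 13: 0xffffdfff == ~(1 << 13). A one-screen check of the clear/set pair:

    #include <stdio.h>
    #include <stdint.h>

    #define TCO_EN (1u << 13)     /* bit 13 of SMI_EN */

    int main(void)
    {
        uint32_t val = 0xffffffffu;

        /* The magic constant in the driver is just ~TCO_EN: */
        printf("~TCO_EN    = 0x%08x\n", ~TCO_EN);          /* 0xffffdfff */
        printf("cleared    = 0x%08x\n", val & ~TCO_EN);    /* bit 13 off */
        printf("set again  = 0x%08x\n", (val & ~TCO_EN) | TCO_EN);
        return 0;
    }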
+14 -21
drivers/watchdog/iTCO_wdt.c
··· 1 /* 2 - * intel TCO Watchdog Driver (Used in i82801 and i6300ESB chipsets) 3 * 4 - * (c) Copyright 2006-2008 Wim Van Sebroeck <wim@iguana.be>. 5 * 6 * This program is free software; you can redistribute it and/or 7 * modify it under the terms of the GNU General Public License ··· 63 64 /* Module and version information */ 65 #define DRV_NAME "iTCO_wdt" 66 - #define DRV_VERSION "1.04" 67 #define PFX DRV_NAME ": " 68 69 /* Includes */ ··· 236 237 /* Address definitions for the TCO */ 238 /* TCO base address */ 239 - #define TCOBASE iTCO_wdt_private.ACPIBASE + 0x60 240 /* SMI Control and Enable Register */ 241 - #define SMI_EN iTCO_wdt_private.ACPIBASE + 0x30 242 243 #define TCO_RLD TCOBASE + 0x00 /* TCO Timer Reload and Curr. Value */ 244 #define TCOv1_TMR TCOBASE + 0x01 /* TCOv1 Timer Initial Value */ 245 - #define TCO_DAT_IN TCOBASE + 0x02 /* TCO Data In Register */ 246 - #define TCO_DAT_OUT TCOBASE + 0x03 /* TCO Data Out Register */ 247 - #define TCO1_STS TCOBASE + 0x04 /* TCO1 Status Register */ 248 - #define TCO2_STS TCOBASE + 0x06 /* TCO2 Status Register */ 249 #define TCO1_CNT TCOBASE + 0x08 /* TCO1 Control Register */ 250 #define TCO2_CNT TCOBASE + 0x0a /* TCO2 Control Register */ 251 #define TCOv2_TMR TCOBASE + 0x12 /* TCOv2 Timer Initial Value */ ··· 338 static int iTCO_wdt_start(void) 339 { 340 unsigned int val; 341 - unsigned long val32; 342 343 spin_lock(&iTCO_wdt_private.io_lock); 344 ··· 349 printk(KERN_ERR PFX "failed to reset NO_REBOOT flag, reboot disabled by hardware\n"); 350 return -EIO; 351 } 352 - 353 - /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */ 354 - val32 = inl(SMI_EN); 355 - val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */ 356 - outl(val32, SMI_EN); 357 358 /* Force the timer to its reload value by writing to the TCO_RLD 359 register */ ··· 372 static int iTCO_wdt_stop(void) 373 { 374 unsigned int val; 375 - unsigned long val32; 376 377 spin_lock(&iTCO_wdt_private.io_lock); 378 ··· 382 val |= 0x0800; 383 outw(val, TCO1_CNT); 384 val = inw(TCO1_CNT); 385 - 386 - /* Bit 13: TCO_EN -> 1 = Enables the TCO logic to generate SMI# */ 387 - val32 = inl(SMI_EN); 388 - val32 |= 0x00002000; 389 - outl(val32, SMI_EN); 390 391 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */ 392 iTCO_wdt_set_NO_REBOOT_bit(); ··· 637 int ret; 638 u32 base_address; 639 unsigned long RCBA; 640 641 /* 642 * Find the ACPI/PM base I/O address which is the base ··· 684 ret = -EIO; 685 goto out; 686 } 687 688 /* The TCO I/O registers reside in a 32-byte range pointed to 689 by the TCOBASE value */
··· 1 /* 2 + * intel TCO Watchdog Driver (Used in i82801 and i63xxESB chipsets) 3 * 4 + * (c) Copyright 2006-2009 Wim Van Sebroeck <wim@iguana.be>. 5 * 6 * This program is free software; you can redistribute it and/or 7 * modify it under the terms of the GNU General Public License ··· 63 64 /* Module and version information */ 65 #define DRV_NAME "iTCO_wdt" 66 + #define DRV_VERSION "1.05" 67 #define PFX DRV_NAME ": " 68 69 /* Includes */ ··· 236 237 /* Address definitions for the TCO */ 238 /* TCO base address */ 239 + #define TCOBASE iTCO_wdt_private.ACPIBASE + 0x60 240 /* SMI Control and Enable Register */ 241 + #define SMI_EN iTCO_wdt_private.ACPIBASE + 0x30 242 243 #define TCO_RLD TCOBASE + 0x00 /* TCO Timer Reload and Curr. Value */ 244 #define TCOv1_TMR TCOBASE + 0x01 /* TCOv1 Timer Initial Value */ 245 + #define TCO_DAT_IN TCOBASE + 0x02 /* TCO Data In Register */ 246 + #define TCO_DAT_OUT TCOBASE + 0x03 /* TCO Data Out Register */ 247 + #define TCO1_STS TCOBASE + 0x04 /* TCO1 Status Register */ 248 + #define TCO2_STS TCOBASE + 0x06 /* TCO2 Status Register */ 249 #define TCO1_CNT TCOBASE + 0x08 /* TCO1 Control Register */ 250 #define TCO2_CNT TCOBASE + 0x0a /* TCO2 Control Register */ 251 #define TCOv2_TMR TCOBASE + 0x12 /* TCOv2 Timer Initial Value */ ··· 338 static int iTCO_wdt_start(void) 339 { 340 unsigned int val; 341 342 spin_lock(&iTCO_wdt_private.io_lock); 343 ··· 350 printk(KERN_ERR PFX "failed to reset NO_REBOOT flag, reboot disabled by hardware\n"); 351 return -EIO; 352 } 353 354 /* Force the timer to its reload value by writing to the TCO_RLD 355 register */ ··· 378 static int iTCO_wdt_stop(void) 379 { 380 unsigned int val; 381 382 spin_lock(&iTCO_wdt_private.io_lock); 383 ··· 389 val |= 0x0800; 390 outw(val, TCO1_CNT); 391 val = inw(TCO1_CNT); 392 393 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */ 394 iTCO_wdt_set_NO_REBOOT_bit(); ··· 649 int ret; 650 u32 base_address; 651 unsigned long RCBA; 652 + unsigned long val32; 653 654 /* 655 * Find the ACPI/PM base I/O address which is the base ··· 695 ret = -EIO; 696 goto out; 697 } 698 + /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */ 699 + val32 = inl(SMI_EN); 700 + val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */ 701 + outl(val32, SMI_EN); 702 703 /* The TCO I/O registers reside in a 32-byte range pointed to 704 by the TCOBASE value */
+3 -2
fs/bio.c
··· 302 struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs) 303 { 304 struct bio *bio = NULL; 305 306 if (bs) { 307 - void *p = mempool_alloc(bs->bio_pool, gfp_mask); 308 309 if (p) 310 bio = p + bs->front_pad; ··· 330 } 331 if (unlikely(!bvl)) { 332 if (bs) 333 - mempool_free(bio, bs->bio_pool); 334 else 335 kfree(bio); 336 bio = NULL;
··· 302 struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs) 303 { 304 struct bio *bio = NULL; 305 + void *p; 306 307 if (bs) { 308 + p = mempool_alloc(bs->bio_pool, gfp_mask); 309 310 if (p) 311 bio = p + bs->front_pad; ··· 329 } 330 if (unlikely(!bvl)) { 331 if (bs) 332 + mempool_free(p, bs->bio_pool); 333 else 334 kfree(bio); 335 bio = NULL;
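The bio fix is about which pointer owns the allocation: the mempool returned p, and the bio the rest of the code sees lives front_pad bytes inside it, so the error path must hand p -- not bio -- back to the pool. The same relationship in plain malloc/free terms:

    #include <stdio.h>
    #include <stdlib.h>

    #define FRONT_PAD 16

    int main(void)
    {
        /* The allocator hands out p; the object visible to the rest
         * of the code lives FRONT_PAD bytes inside it. */
        void *p   = malloc(FRONT_PAD + 64);
        void *bio = (char *)p + FRONT_PAD;

        printf("allocation %p, object %p\n", p, bio);

        /* On an error path, the allocation -- not the interior
         * pointer -- must go back to the allocator: */
        free(p);          /* correct */
        /* free(bio);        undefined behaviour: not a pointer
                             returned by malloc */
        return 0;
    }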
+37 -21
fs/btrfs/ctree.c
··· 38 static int del_ptr(struct btrfs_trans_handle *trans, struct btrfs_root *root, 39 struct btrfs_path *path, int level, int slot); 40 41 - inline void btrfs_init_path(struct btrfs_path *p) 42 - { 43 - memset(p, 0, sizeof(*p)); 44 - } 45 - 46 struct btrfs_path *btrfs_alloc_path(void) 47 { 48 struct btrfs_path *path; 49 - path = kmem_cache_alloc(btrfs_path_cachep, GFP_NOFS); 50 - if (path) { 51 - btrfs_init_path(path); 52 path->reada = 1; 53 - } 54 return path; 55 } 56 ··· 62 63 /* 64 * reset all the locked nodes in the patch to spinning locks. 65 */ 66 - noinline void btrfs_clear_path_blocking(struct btrfs_path *p) 67 { 68 int i; 69 - for (i = 0; i < BTRFS_MAX_LEVEL; i++) { 70 if (p->nodes[i] && p->locks[i]) 71 btrfs_clear_lock_blocking(p->nodes[i]); 72 } 73 } 74 75 /* this also releases the path */ ··· 303 trans->transid, level, &ins); 304 BUG_ON(ret); 305 cow = btrfs_init_new_buffer(trans, root, prealloc_dest, 306 - buf->len); 307 } else { 308 cow = btrfs_alloc_free_block(trans, root, buf->len, 309 parent_start, ··· 934 935 /* promote the child to a root */ 936 child = read_node_slot(root, mid, 0); 937 btrfs_tree_lock(child); 938 btrfs_set_lock_blocking(child); 939 - BUG_ON(!child); 940 ret = btrfs_cow_block(trans, root, child, mid, 0, &child, 0); 941 BUG_ON(ret); 942 ··· 1583 if (!p->skip_locking) 1584 p->locks[level] = 1; 1585 1586 - btrfs_clear_path_blocking(p); 1587 1588 /* 1589 * we have a lock on b and as long as we aren't changing ··· 1622 1623 btrfs_set_path_blocking(p); 1624 sret = split_node(trans, root, p, level); 1625 - btrfs_clear_path_blocking(p); 1626 1627 BUG_ON(sret > 0); 1628 if (sret) { ··· 1642 1643 btrfs_set_path_blocking(p); 1644 sret = balance_level(trans, root, p, level); 1645 - btrfs_clear_path_blocking(p); 1646 1647 if (sret) { 1648 ret = sret; ··· 1705 if (!p->skip_locking) { 1706 int lret; 1707 1708 - btrfs_clear_path_blocking(p); 1709 lret = btrfs_try_spin_lock(b); 1710 1711 if (!lret) { 1712 btrfs_set_path_blocking(p); 1713 btrfs_tree_lock(b); 1714 - btrfs_clear_path_blocking(p); 1715 } 1716 } 1717 } else { ··· 1723 btrfs_set_path_blocking(p); 1724 sret = split_leaf(trans, root, key, 1725 p, ins_len, ret == 0); 1726 - btrfs_clear_path_blocking(p); 1727 1728 BUG_ON(sret > 0); 1729 if (sret) { ··· 3943 btrfs_release_path(root, path); 3944 goto again; 3945 } else { 3946 - btrfs_clear_path_blocking(path); 3947 goto out; 3948 } 3949 } ··· 3962 path->locks[level - 1] = 1; 3963 path->nodes[level - 1] = cur; 3964 unlock_up(path, level, 1); 3965 - btrfs_clear_path_blocking(path); 3966 } 3967 out: 3968 if (ret == 0)
··· 38 static int del_ptr(struct btrfs_trans_handle *trans, struct btrfs_root *root, 39 struct btrfs_path *path, int level, int slot); 40 41 struct btrfs_path *btrfs_alloc_path(void) 42 { 43 struct btrfs_path *path; 44 + path = kmem_cache_zalloc(btrfs_path_cachep, GFP_NOFS); 45 + if (path) 46 path->reada = 1; 47 return path; 48 } 49 ··· 69 70 /* 71 * reset all the locked nodes in the patch to spinning locks. 72 + * 73 + * held is used to keep lockdep happy, when lockdep is enabled 74 + * we set held to a blocking lock before we go around and 75 + * retake all the spinlocks in the path. You can safely use NULL 76 + * for held 77 */ 78 + noinline void btrfs_clear_path_blocking(struct btrfs_path *p, 79 + struct extent_buffer *held) 80 { 81 int i; 82 + 83 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 84 + /* lockdep really cares that we take all of these spinlocks 85 + * in the right order. If any of the locks in the path are not 86 + * currently blocking, it is going to complain. So, make really 87 + * really sure by forcing the path to blocking before we clear 88 + * the path blocking. 89 + */ 90 + if (held) 91 + btrfs_set_lock_blocking(held); 92 + btrfs_set_path_blocking(p); 93 + #endif 94 + 95 + for (i = BTRFS_MAX_LEVEL - 1; i >= 0; i--) { 96 if (p->nodes[i] && p->locks[i]) 97 btrfs_clear_lock_blocking(p->nodes[i]); 98 } 99 + 100 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 101 + if (held) 102 + btrfs_clear_lock_blocking(held); 103 + #endif 104 } 105 106 /* this also releases the path */ ··· 286 trans->transid, level, &ins); 287 BUG_ON(ret); 288 cow = btrfs_init_new_buffer(trans, root, prealloc_dest, 289 + buf->len, level); 290 } else { 291 cow = btrfs_alloc_free_block(trans, root, buf->len, 292 parent_start, ··· 917 918 /* promote the child to a root */ 919 child = read_node_slot(root, mid, 0); 920 + BUG_ON(!child); 921 btrfs_tree_lock(child); 922 btrfs_set_lock_blocking(child); 923 ret = btrfs_cow_block(trans, root, child, mid, 0, &child, 0); 924 BUG_ON(ret); 925 ··· 1566 if (!p->skip_locking) 1567 p->locks[level] = 1; 1568 1569 + btrfs_clear_path_blocking(p, NULL); 1570 1571 /* 1572 * we have a lock on b and as long as we aren't changing ··· 1605 1606 btrfs_set_path_blocking(p); 1607 sret = split_node(trans, root, p, level); 1608 + btrfs_clear_path_blocking(p, NULL); 1609 1610 BUG_ON(sret > 0); 1611 if (sret) { ··· 1625 1626 btrfs_set_path_blocking(p); 1627 sret = balance_level(trans, root, p, level); 1628 + btrfs_clear_path_blocking(p, NULL); 1629 1630 if (sret) { 1631 ret = sret; ··· 1688 if (!p->skip_locking) { 1689 int lret; 1690 1691 + btrfs_clear_path_blocking(p, NULL); 1692 lret = btrfs_try_spin_lock(b); 1693 1694 if (!lret) { 1695 btrfs_set_path_blocking(p); 1696 btrfs_tree_lock(b); 1697 + btrfs_clear_path_blocking(p, b); 1698 } 1699 } 1700 } else { ··· 1706 btrfs_set_path_blocking(p); 1707 sret = split_leaf(trans, root, key, 1708 p, ins_len, ret == 0); 1709 + btrfs_clear_path_blocking(p, NULL); 1710 1711 BUG_ON(sret > 0); 1712 if (sret) { ··· 3926 btrfs_release_path(root, path); 3927 goto again; 3928 } else { 3929 goto out; 3930 } 3931 } ··· 3946 path->locks[level - 1] = 1; 3947 path->nodes[level - 1] = cur; 3948 unlock_up(path, level, 1); 3949 + btrfs_clear_path_blocking(path, NULL); 3950 } 3951 out: 3952 if (ret == 0)
+3 -8
fs/btrfs/ctree.h
··· 43 44 #define BTRFS_ACL_NOT_CACHED ((void *)-1) 45 46 - #ifdef CONFIG_LOCKDEP 47 - # define BTRFS_MAX_LEVEL 7 48 - #else 49 - # define BTRFS_MAX_LEVEL 8 50 - #endif 51 52 /* holds pointers to all of the tree roots */ 53 #define BTRFS_ROOT_TREE_OBJECTID 1ULL ··· 1711 u64 empty_size); 1712 struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans, 1713 struct btrfs_root *root, 1714 - u64 bytenr, u32 blocksize); 1715 int btrfs_alloc_extent(struct btrfs_trans_handle *trans, 1716 struct btrfs_root *root, 1717 u64 num_bytes, u64 parent, u64 min_bytes, ··· 1831 void btrfs_release_path(struct btrfs_root *root, struct btrfs_path *p); 1832 struct btrfs_path *btrfs_alloc_path(void); 1833 void btrfs_free_path(struct btrfs_path *p); 1834 - void btrfs_init_path(struct btrfs_path *p); 1835 void btrfs_set_path_blocking(struct btrfs_path *p); 1836 - void btrfs_clear_path_blocking(struct btrfs_path *p); 1837 void btrfs_unlock_up_safe(struct btrfs_path *p, int level); 1838 1839 int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
··· 43 44 #define BTRFS_ACL_NOT_CACHED ((void *)-1) 45 46 + #define BTRFS_MAX_LEVEL 8 47 48 /* holds pointers to all of the tree roots */ 49 #define BTRFS_ROOT_TREE_OBJECTID 1ULL ··· 1715 u64 empty_size); 1716 struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans, 1717 struct btrfs_root *root, 1718 + u64 bytenr, u32 blocksize, 1719 + int level); 1720 int btrfs_alloc_extent(struct btrfs_trans_handle *trans, 1721 struct btrfs_root *root, 1722 u64 num_bytes, u64 parent, u64 min_bytes, ··· 1834 void btrfs_release_path(struct btrfs_root *root, struct btrfs_path *p); 1835 struct btrfs_path *btrfs_alloc_path(void); 1836 void btrfs_free_path(struct btrfs_path *p); 1837 void btrfs_set_path_blocking(struct btrfs_path *p); 1838 void btrfs_unlock_up_safe(struct btrfs_path *p, int level); 1839 1840 int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+45 -1
fs/btrfs/disk-io.c
··· 75 struct btrfs_work work; 76 }; 77 78 /* 79 * extents on the btree inode are pretty simple, there's one extent 80 * that covers the entire device ··· 381 return ret; 382 } 383 384 static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end, 385 struct extent_state *state) 386 { ··· 434 goto err; 435 } 436 found_level = btrfs_header_level(eb); 437 438 ret = csum_tree_block(root, eb, 1); 439 if (ret) ··· 1822 ret = find_and_setup_root(tree_root, fs_info, 1823 BTRFS_DEV_TREE_OBJECTID, dev_root); 1824 dev_root->track_dirty = 1; 1825 - 1826 if (ret) 1827 goto fail_extent_root; 1828
··· 75 struct btrfs_work work; 76 }; 77 78 + /* These are used to set the lockdep class on the extent buffer locks. 79 + * The class is set by the readpage_end_io_hook after the buffer has 80 + * passed csum validation but before the pages are unlocked. 81 + * 82 + * The lockdep class is also set by btrfs_init_new_buffer on freshly 83 + * allocated blocks. 84 + * 85 + * The class is based on the level in the tree block, which allows lockdep 86 + * to know that lower nodes nest inside the locks of higher nodes. 87 + * 88 + * We also add a check to make sure the highest level of the tree is 89 + * the same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this 90 + * code needs update as well. 91 + */ 92 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 93 + # if BTRFS_MAX_LEVEL != 8 94 + # error 95 + # endif 96 + static struct lock_class_key btrfs_eb_class[BTRFS_MAX_LEVEL + 1]; 97 + static const char *btrfs_eb_name[BTRFS_MAX_LEVEL + 1] = { 98 + /* leaf */ 99 + "btrfs-extent-00", 100 + "btrfs-extent-01", 101 + "btrfs-extent-02", 102 + "btrfs-extent-03", 103 + "btrfs-extent-04", 104 + "btrfs-extent-05", 105 + "btrfs-extent-06", 106 + "btrfs-extent-07", 107 + /* highest possible level */ 108 + "btrfs-extent-08", 109 + }; 110 + #endif 111 + 112 /* 113 * extents on the btree inode are pretty simple, there's one extent 114 * that covers the entire device ··· 347 return ret; 348 } 349 350 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 351 + void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level) 352 + { 353 + lockdep_set_class_and_name(&eb->lock, 354 + &btrfs_eb_class[level], 355 + btrfs_eb_name[level]); 356 + } 357 + #endif 358 + 359 static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end, 360 struct extent_state *state) 361 { ··· 391 goto err; 392 } 393 found_level = btrfs_header_level(eb); 394 + 395 + btrfs_set_buffer_lockdep_class(eb, found_level); 396 397 ret = csum_tree_block(root, eb, 1); 398 if (ret) ··· 1777 ret = find_and_setup_root(tree_root, fs_info, 1778 BTRFS_DEV_TREE_OBJECTID, dev_root); 1779 dev_root->track_dirty = 1; 1780 if (ret) 1781 goto fail_extent_root; 1782
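The #if BTRFS_MAX_LEVEL != 8 / #error pair above pins the name table to the constant at compile time. In C11 the same drift-guard can be written as a static assertion against the table's actual element count; for example:

    #include <stdio.h>

    #define MAX_LEVEL 8

    /* One name per level, plus one for the highest possible level. */
    static const char *level_name[] = {
        "lvl-0", "lvl-1", "lvl-2", "lvl-3", "lvl-4",
        "lvl-5", "lvl-6", "lvl-7", "lvl-8",
    };

    /* Fail the build, not the runtime, if the table and the constant
     * ever drift apart -- the same idea as the #if/#error guard. */
    _Static_assert(sizeof(level_name) / sizeof(level_name[0]) == MAX_LEVEL + 1,
                   "level_name table out of sync with MAX_LEVEL");

    int main(void)
    {
        printf("%s\n", level_name[MAX_LEVEL]);
        return 0;
    }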
+10
fs/btrfs/disk-io.h
··· 101 int btrfs_add_log_tree(struct btrfs_trans_handle *trans, 102 struct btrfs_root *root); 103 int btree_lock_page_hook(struct page *page); 104 #endif
··· 101 int btrfs_add_log_tree(struct btrfs_trans_handle *trans, 102 struct btrfs_root *root); 103 int btree_lock_page_hook(struct page *page); 104 + 105 + 106 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 107 + void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level); 108 + #else 109 + static inline void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, 110 + int level) 111 + { 112 + } 113 + #endif 114 #endif
+51 -32
fs/btrfs/extent-tree.c
··· 1323 int btrfs_extent_post_op(struct btrfs_trans_handle *trans, 1324 struct btrfs_root *root) 1325 { 1326 - finish_current_insert(trans, root->fs_info->extent_root, 1); 1327 - del_pending_extents(trans, root->fs_info->extent_root, 1); 1328 return 0; 1329 } 1330 ··· 2228 u64 end; 2229 u64 priv; 2230 u64 search = 0; 2231 - u64 skipped = 0; 2232 struct btrfs_fs_info *info = extent_root->fs_info; 2233 struct btrfs_path *path; 2234 struct pending_extent_op *extent_op, *tmp; 2235 struct list_head insert_list, update_list; 2236 int ret; 2237 - int num_inserts = 0, max_inserts; 2238 2239 path = btrfs_alloc_path(); 2240 INIT_LIST_HEAD(&insert_list); ··· 2249 ret = find_first_extent_bit(&info->extent_ins, search, &start, 2250 &end, EXTENT_WRITEBACK); 2251 if (ret) { 2252 - if (skipped && all && !num_inserts && 2253 list_empty(&update_list)) { 2254 - skipped = 0; 2255 search = 0; 2256 continue; 2257 } 2258 - mutex_unlock(&info->extent_ins_mutex); 2259 break; 2260 } 2261 2262 ret = try_lock_extent(&info->extent_ins, start, end, GFP_NOFS); 2263 if (!ret) { 2264 - skipped = 1; 2265 search = end + 1; 2266 if (need_resched()) { 2267 mutex_unlock(&info->extent_ins_mutex); ··· 2280 list_add_tail(&extent_op->list, &insert_list); 2281 search = end + 1; 2282 if (num_inserts == max_inserts) { 2283 - mutex_unlock(&info->extent_ins_mutex); 2284 break; 2285 } 2286 } else if (extent_op->type == PENDING_BACKREF_UPDATE) { ··· 2296 * somebody marked this thing for deletion then just unlock it and be 2297 * done, the free_extents will handle it 2298 */ 2299 - mutex_lock(&info->extent_ins_mutex); 2300 list_for_each_entry_safe(extent_op, tmp, &update_list, list) { 2301 clear_extent_bits(&info->extent_ins, extent_op->bytenr, 2302 extent_op->bytenr + extent_op->num_bytes - 1, ··· 2317 if (!list_empty(&update_list)) { 2318 ret = update_backrefs(trans, extent_root, path, &update_list); 2319 BUG_ON(ret); 2320 } 2321 2322 /* ··· 2328 * need to make sure everything is cleaned then reset everything and 2329 * go back to the beginning 2330 */ 2331 - if (!num_inserts && all && skipped) { 2332 search = 0; 2333 - skipped = 0; 2334 INIT_LIST_HEAD(&update_list); 2335 INIT_LIST_HEAD(&insert_list); 2336 goto again; ··· 2387 BUG_ON(ret); 2388 2389 /* 2390 - * if we broke out of the loop in order to insert stuff because we hit 2391 - * the maximum number of inserts at a time we can handle, then loop 2392 - * back and pick up where we left off 2393 */ 2394 - if (num_inserts == max_inserts) { 2395 - INIT_LIST_HEAD(&insert_list); 2396 - INIT_LIST_HEAD(&update_list); 2397 - num_inserts = 0; 2398 - goto again; 2399 - } 2400 - 2401 - /* 2402 - * again, if we need to make absolutely sure there are no more pending 2403 - * extent operations left and we know that we skipped some, go back to 2404 - * the beginning and do it all again 2405 - */ 2406 - if (all && skipped) { 2407 INIT_LIST_HEAD(&insert_list); 2408 INIT_LIST_HEAD(&update_list); 2409 search = 0; 2410 - skipped = 0; 2411 num_inserts = 0; 2412 goto again; 2413 } ··· 2720 goto again; 2721 } 2722 2723 return err; 2724 } 2725 ··· 2872 2873 if (data & BTRFS_BLOCK_GROUP_METADATA) { 2874 last_ptr = &root->fs_info->last_alloc; 2875 - empty_cluster = 64 * 1024; 2876 } 2877 2878 if ((data & BTRFS_BLOCK_GROUP_DATA) && btrfs_test_opt(root, SSD)) ··· 3416 3417 struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans, 3418 struct btrfs_root *root, 3419 - u64 bytenr, u32 blocksize) 3420 { 3421 struct extent_buffer *buf; 3422 ··· 3425 if (!buf) 3426 return ERR_PTR(-ENOMEM); 3427 
btrfs_set_header_generation(buf, trans->transid); 3428 btrfs_tree_lock(buf); 3429 clean_tree_block(trans, root, buf); 3430 ··· 3469 return ERR_PTR(ret); 3470 } 3471 3472 - buf = btrfs_init_new_buffer(trans, root, ins.objectid, blocksize); 3473 return buf; 3474 } 3475 ··· 5658 prev_block = block_start; 5659 } 5660 5661 btrfs_record_root_in_trans(found_root); 5662 if (ref_path->owner_objectid >= BTRFS_FIRST_FREE_OBJECTID) { 5663 /* 5664 * try to update data extent references while
··· 1323 int btrfs_extent_post_op(struct btrfs_trans_handle *trans, 1324 struct btrfs_root *root) 1325 { 1326 + u64 start; 1327 + u64 end; 1328 + int ret; 1329 + 1330 + while(1) { 1331 + finish_current_insert(trans, root->fs_info->extent_root, 1); 1332 + del_pending_extents(trans, root->fs_info->extent_root, 1); 1333 + 1334 + /* is there more work to do? */ 1335 + ret = find_first_extent_bit(&root->fs_info->pending_del, 1336 + 0, &start, &end, EXTENT_WRITEBACK); 1337 + if (!ret) 1338 + continue; 1339 + ret = find_first_extent_bit(&root->fs_info->extent_ins, 1340 + 0, &start, &end, EXTENT_WRITEBACK); 1341 + if (!ret) 1342 + continue; 1343 + break; 1344 + } 1345 return 0; 1346 } 1347 ··· 2211 u64 end; 2212 u64 priv; 2213 u64 search = 0; 2214 struct btrfs_fs_info *info = extent_root->fs_info; 2215 struct btrfs_path *path; 2216 struct pending_extent_op *extent_op, *tmp; 2217 struct list_head insert_list, update_list; 2218 int ret; 2219 + int num_inserts = 0, max_inserts, restart = 0; 2220 2221 path = btrfs_alloc_path(); 2222 INIT_LIST_HEAD(&insert_list); ··· 2233 ret = find_first_extent_bit(&info->extent_ins, search, &start, 2234 &end, EXTENT_WRITEBACK); 2235 if (ret) { 2236 + if (restart && !num_inserts && 2237 list_empty(&update_list)) { 2238 + restart = 0; 2239 search = 0; 2240 continue; 2241 } 2242 break; 2243 } 2244 2245 ret = try_lock_extent(&info->extent_ins, start, end, GFP_NOFS); 2246 if (!ret) { 2247 + if (all) 2248 + restart = 1; 2249 search = end + 1; 2250 if (need_resched()) { 2251 mutex_unlock(&info->extent_ins_mutex); ··· 2264 list_add_tail(&extent_op->list, &insert_list); 2265 search = end + 1; 2266 if (num_inserts == max_inserts) { 2267 + restart = 1; 2268 break; 2269 } 2270 } else if (extent_op->type == PENDING_BACKREF_UPDATE) { ··· 2280 * somebody marked this thing for deletion then just unlock it and be 2281 * done, the free_extents will handle it 2282 */ 2283 list_for_each_entry_safe(extent_op, tmp, &update_list, list) { 2284 clear_extent_bits(&info->extent_ins, extent_op->bytenr, 2285 extent_op->bytenr + extent_op->num_bytes - 1, ··· 2302 if (!list_empty(&update_list)) { 2303 ret = update_backrefs(trans, extent_root, path, &update_list); 2304 BUG_ON(ret); 2305 + 2306 + /* we may have COW'ed new blocks, so lets start over */ 2307 + if (all) 2308 + restart = 1; 2309 } 2310 2311 /* ··· 2309 * need to make sure everything is cleaned then reset everything and 2310 * go back to the beginning 2311 */ 2312 + if (!num_inserts && restart) { 2313 search = 0; 2314 + restart = 0; 2315 INIT_LIST_HEAD(&update_list); 2316 INIT_LIST_HEAD(&insert_list); 2317 goto again; ··· 2368 BUG_ON(ret); 2369 2370 /* 2371 + * if restart is set for whatever reason we need to go back and start 2372 + * searching through the pending list again. 2373 + * 2374 + * We just inserted some extents, which could have resulted in new 2375 + * blocks being allocated, which would result in new blocks needing 2376 + * updates, so if all is set we _must_ restart to get the updated 2377 + * blocks. 
2378 */ 2379 + if (restart || all) { 2380 INIT_LIST_HEAD(&insert_list); 2381 INIT_LIST_HEAD(&update_list); 2382 search = 0; 2383 + restart = 0; 2384 num_inserts = 0; 2385 goto again; 2386 } ··· 2709 goto again; 2710 } 2711 2712 + if (!err) 2713 + finish_current_insert(trans, extent_root, 0); 2714 return err; 2715 } 2716 ··· 2859 2860 if (data & BTRFS_BLOCK_GROUP_METADATA) { 2861 last_ptr = &root->fs_info->last_alloc; 2862 + if (!btrfs_test_opt(root, SSD)) 2863 + empty_cluster = 64 * 1024; 2864 } 2865 2866 if ((data & BTRFS_BLOCK_GROUP_DATA) && btrfs_test_opt(root, SSD)) ··· 3402 3403 struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans, 3404 struct btrfs_root *root, 3405 + u64 bytenr, u32 blocksize, 3406 + int level) 3407 { 3408 struct extent_buffer *buf; 3409 ··· 3410 if (!buf) 3411 return ERR_PTR(-ENOMEM); 3412 btrfs_set_header_generation(buf, trans->transid); 3413 + btrfs_set_buffer_lockdep_class(buf, level); 3414 btrfs_tree_lock(buf); 3415 clean_tree_block(trans, root, buf); 3416 ··· 3453 return ERR_PTR(ret); 3454 } 3455 3456 + buf = btrfs_init_new_buffer(trans, root, ins.objectid, 3457 + blocksize, level); 3458 return buf; 3459 } 3460 ··· 5641 prev_block = block_start; 5642 } 5643 5644 + mutex_lock(&extent_root->fs_info->trans_mutex); 5645 btrfs_record_root_in_trans(found_root); 5646 + mutex_unlock(&extent_root->fs_info->trans_mutex); 5647 if (ref_path->owner_objectid >= BTRFS_FIRST_FREE_OBJECTID) { 5648 /* 5649 * try to update data extent references while
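The reworked btrfs_extent_post_op() above deliberately loops instead of making one finish/del pass: finishing pending inserts can COW blocks that queue new deletions, and processing deletions can queue more insert work, so the function keeps going until find_first_extent_bit() reports both the pending_del and extent_ins trees empty. A condensed user-space analogue of that fixed-point drain, with hypothetical counters standing in for the two extent-bit trees:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical work counters standing in for the extent_ins and
     * pending_del trees; flushing an insert can queue new delete work,
     * much as COWing a block during an insert does in btrfs. */
    static int pending_inserts = 3;
    static int pending_deletes = 2;

    static bool flush_inserts(void)
    {
            if (!pending_inserts)
                    return false;
            pending_inserts--;
            pending_deletes++;      /* the insert created delete work */
            return true;
    }

    static bool flush_deletes(void)
    {
            if (!pending_deletes)
                    return false;
            pending_deletes--;
            return true;
    }

    int main(void)
    {
            /* same shape as the new while(1) loop: stop only when a
             * full pass over both queues finds nothing left to do */
            for (;;) {
                    bool did_work = flush_inserts();

                    did_work |= flush_deletes();
                    if (!did_work)
                            break;
            }
            printf("inserts=%d deletes=%d\n",
                   pending_inserts, pending_deletes);
            return 0;
    }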
-2
fs/btrfs/extent_io.c
··· 415 416 node = tree_insert(&tree->state, prealloc->end, &prealloc->rb_node); 417 if (node) { 418 - struct extent_state *found; 419 - found = rb_entry(node, struct extent_state, rb_node); 420 free_extent_state(prealloc); 421 return -EEXIST; 422 }
··· 415 416 node = tree_insert(&tree->state, prealloc->end, &prealloc->rb_node); 417 if (node) { 418 free_extent_state(prealloc); 419 return -EEXIST; 420 }
+4 -4
fs/btrfs/file.c
··· 1222 /* 1223 * ok we haven't committed the transaction yet, lets do a commit 1224 */ 1225 - if (file->private_data) 1226 btrfs_ioctl_trans_end(file); 1227 1228 trans = btrfs_start_transaction(root, 1); ··· 1231 goto out; 1232 } 1233 1234 - ret = btrfs_log_dentry_safe(trans, root, file->f_dentry); 1235 if (ret < 0) 1236 goto out; 1237 ··· 1245 * file again, but that will end up using the synchronization 1246 * inside btrfs_sync_log to keep things safe. 1247 */ 1248 - mutex_unlock(&file->f_dentry->d_inode->i_mutex); 1249 1250 if (ret > 0) { 1251 ret = btrfs_commit_transaction(trans, root); ··· 1253 btrfs_sync_log(trans, root); 1254 ret = btrfs_end_transaction(trans, root); 1255 } 1256 - mutex_lock(&file->f_dentry->d_inode->i_mutex); 1257 out: 1258 return ret > 0 ? EIO : ret; 1259 }
··· 1222 /* 1223 * ok we haven't committed the transaction yet, lets do a commit 1224 */ 1225 + if (file && file->private_data) 1226 btrfs_ioctl_trans_end(file); 1227 1228 trans = btrfs_start_transaction(root, 1); ··· 1231 goto out; 1232 } 1233 1234 + ret = btrfs_log_dentry_safe(trans, root, dentry); 1235 if (ret < 0) 1236 goto out; 1237 ··· 1245 * file again, but that will end up using the synchronization 1246 * inside btrfs_sync_log to keep things safe. 1247 */ 1248 + mutex_unlock(&dentry->d_inode->i_mutex); 1249 1250 if (ret > 0) { 1251 ret = btrfs_commit_transaction(trans, root); ··· 1253 btrfs_sync_log(trans, root); 1254 ret = btrfs_end_transaction(trans, root); 1255 } 1256 + mutex_lock(&dentry->d_inode->i_mutex); 1257 out: 1258 return ret > 0 ? EIO : ret; 1259 }
-1
fs/btrfs/inode-map.c
··· 84 search_key.type = 0; 85 search_key.offset = 0; 86 87 - btrfs_init_path(path); 88 start_found = 0; 89 ret = btrfs_search_slot(trans, root, &search_key, path, 0, 0); 90 if (ret < 0)
··· 84 search_key.type = 0; 85 search_key.offset = 0; 86 87 start_found = 0; 88 ret = btrfs_search_slot(trans, root, &search_key, path, 0, 0); 89 if (ret < 0)
+1 -3
fs/btrfs/inode.c
··· 2531 key.offset = (u64)-1; 2532 key.type = (u8)-1; 2533 2534 - btrfs_init_path(path); 2535 - 2536 search_again: 2537 ret = btrfs_search_slot(trans, root, &key, path, -1, 1); 2538 if (ret < 0) ··· 4261 { 4262 if (PageWriteback(page) || PageDirty(page)) 4263 return 0; 4264 - return __btrfs_releasepage(page, gfp_flags); 4265 } 4266 4267 static void btrfs_invalidatepage(struct page *page, unsigned long offset)
··· 2531 key.offset = (u64)-1; 2532 key.type = (u8)-1; 2533 2534 search_again: 2535 ret = btrfs_search_slot(trans, root, &key, path, -1, 1); 2536 if (ret < 0) ··· 4263 { 4264 if (PageWriteback(page) || PageDirty(page)) 4265 return 0; 4266 + return __btrfs_releasepage(page, gfp_flags & GFP_NOFS); 4267 } 4268 4269 static void btrfs_invalidatepage(struct page *page, unsigned long offset)
-11
fs/btrfs/locking.c
··· 25 #include "extent_io.h" 26 #include "locking.h" 27 28 - /* 29 - * btrfs_header_level() isn't free, so don't call it when lockdep isn't 30 - * on 31 - */ 32 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 33 - static inline void spin_nested(struct extent_buffer *eb) 34 - { 35 - spin_lock_nested(&eb->lock, BTRFS_MAX_LEVEL - btrfs_header_level(eb)); 36 - } 37 - #else 38 static inline void spin_nested(struct extent_buffer *eb) 39 { 40 spin_lock(&eb->lock); 41 } 42 - #endif 43 44 /* 45 * Setting a lock to blocking will drop the spinlock and set the
··· 25 #include "extent_io.h" 26 #include "locking.h" 27 28 static inline void spin_nested(struct extent_buffer *eb) 29 { 30 spin_lock(&eb->lock); 31 } 32 33 /* 34 * Setting a lock to blocking will drop the spinlock and set the
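The CONFIG_DEBUG_LOCK_ALLOC variant of spin_nested() can be dropped because the lockdep annotation has moved from lock-acquisition time to buffer-setup time: btrfs_init_new_buffer() and the superblock path in volumes.c now call btrfs_set_buffer_lockdep_class(), whose body sits outside this diff. A plausible shape for that helper, assuming one lock_class_key per btree level and a single illustrative class name (kernel context; BTRFS_MAX_LEVEL and the lockdep API come from the btrfs and lockdep headers):

    #ifdef CONFIG_DEBUG_LOCK_ALLOC
    /* one lockdep class per tree level, so locks taken while walking
     * down the tree nest cleanly without spin_lock_nested() */
    static struct lock_class_key btrfs_eb_class[BTRFS_MAX_LEVEL + 1];

    void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level)
    {
            lockdep_set_class_and_name(&eb->lock, &btrfs_eb_class[level],
                                       "btrfs-tree");
    }
    #else
    void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level)
    {
    }
    #endif

Classifying the lock once per buffer also avoids calling btrfs_header_level(), which the removed comment warns is not free, on every single lock acquisition.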
+4 -1
fs/btrfs/super.c
··· 379 btrfs_start_delalloc_inodes(root); 380 btrfs_wait_ordered_extents(root, 0); 381 382 - btrfs_clean_old_snapshots(root); 383 trans = btrfs_start_transaction(root, 1); 384 ret = btrfs_commit_transaction(trans, root); 385 sb->s_dirt = 0; ··· 509 { 510 struct btrfs_root *root = btrfs_sb(sb); 511 int ret; 512 513 if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) 514 return 0;
··· 379 btrfs_start_delalloc_inodes(root); 380 btrfs_wait_ordered_extents(root, 0); 381 382 trans = btrfs_start_transaction(root, 1); 383 ret = btrfs_commit_transaction(trans, root); 384 sb->s_dirt = 0; ··· 510 { 511 struct btrfs_root *root = btrfs_sb(sb); 512 int ret; 513 + 514 + ret = btrfs_parse_options(root, data); 515 + if (ret) 516 + return -EINVAL; 517 518 if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) 519 return 0;
+2
fs/btrfs/transaction.c
··· 688 num_bytes -= btrfs_root_used(&dirty->root->root_item); 689 bytes_used = btrfs_root_used(&root->root_item); 690 if (num_bytes) { 691 btrfs_record_root_in_trans(root); 692 btrfs_set_root_used(&root->root_item, 693 bytes_used - num_bytes); 694 }
··· 688 num_bytes -= btrfs_root_used(&dirty->root->root_item); 689 bytes_used = btrfs_root_used(&root->root_item); 690 if (num_bytes) { 691 + mutex_lock(&root->fs_info->trans_mutex); 692 btrfs_record_root_in_trans(root); 693 + mutex_unlock(&root->fs_info->trans_mutex); 694 btrfs_set_root_used(&root->root_item, 695 bytes_used - num_bytes); 696 }
+2
fs/btrfs/tree-log.c
··· 2832 BUG_ON(!wc.replay_dest); 2833 2834 wc.replay_dest->log_root = log; 2835 btrfs_record_root_in_trans(wc.replay_dest); 2836 ret = walk_log_tree(trans, log, &wc); 2837 BUG_ON(ret); 2838
··· 2832 BUG_ON(!wc.replay_dest); 2833 2834 wc.replay_dest->log_root = log; 2835 + mutex_lock(&fs_info->trans_mutex); 2836 btrfs_record_root_in_trans(wc.replay_dest); 2837 + mutex_unlock(&fs_info->trans_mutex); 2838 ret = walk_log_tree(trans, log, &wc); 2839 BUG_ON(ret); 2840
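Together with the matching hunks in extent-tree.c and transaction.c above, this enforces one rule across the merge: btrfs_record_root_in_trans() updates the running transaction's root bookkeeping, so every caller must hold fs_info->trans_mutex around it. A hypothetical wrapper (not part of the patch) that would capture the invariant in one place:

    static void btrfs_record_root_in_trans_locked(struct btrfs_root *root)
    {
            struct btrfs_fs_info *fs_info = root->fs_info;

            mutex_lock(&fs_info->trans_mutex);
            btrfs_record_root_in_trans(root);
            mutex_unlock(&fs_info->trans_mutex);
    }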
+2 -4
fs/btrfs/volumes.c
··· 2894 free_extent_map(em); 2895 } 2896 2897 - map = kzalloc(sizeof(*map), GFP_NOFS); 2898 - if (!map) 2899 - return -ENOMEM; 2900 - 2901 em = alloc_extent_map(GFP_NOFS); 2902 if (!em) 2903 return -ENOMEM; ··· 3102 if (!sb) 3103 return -ENOMEM; 3104 btrfs_set_buffer_uptodate(sb); 3105 write_extent_buffer(sb, super_copy, 0, BTRFS_SUPER_INFO_SIZE); 3106 array_size = btrfs_super_sys_array_size(super_copy); 3107
··· 2894 free_extent_map(em); 2895 } 2896 2897 em = alloc_extent_map(GFP_NOFS); 2898 if (!em) 2899 return -ENOMEM; ··· 3106 if (!sb) 3107 return -ENOMEM; 3108 btrfs_set_buffer_uptodate(sb); 3109 + btrfs_set_buffer_lockdep_class(sb, 0); 3110 + 3111 write_extent_buffer(sb, super_copy, 0, BTRFS_SUPER_INFO_SIZE); 3112 array_size = btrfs_super_sys_array_size(super_copy); 3113
+2 -1
fs/buffer.c
··· 777 __inc_zone_page_state(page, NR_FILE_DIRTY); 778 __inc_bdi_stat(mapping->backing_dev_info, 779 BDI_RECLAIMABLE); 780 task_io_account_write(PAGE_CACHE_SIZE); 781 } 782 radix_tree_tag_set(&mapping->page_tree, ··· 3109 if (test_clear_buffer_dirty(bh)) { 3110 get_bh(bh); 3111 bh->b_end_io = end_buffer_write_sync; 3112 - ret = submit_bh(WRITE_SYNC, bh); 3113 wait_on_buffer(bh); 3114 if (buffer_eopnotsupp(bh)) { 3115 clear_buffer_eopnotsupp(bh);
··· 777 __inc_zone_page_state(page, NR_FILE_DIRTY); 778 __inc_bdi_stat(mapping->backing_dev_info, 779 BDI_RECLAIMABLE); 780 + task_dirty_inc(current); 781 task_io_account_write(PAGE_CACHE_SIZE); 782 } 783 radix_tree_tag_set(&mapping->page_tree, ··· 3108 if (test_clear_buffer_dirty(bh)) { 3109 get_bh(bh); 3110 bh->b_end_io = end_buffer_write_sync; 3111 + ret = submit_bh(WRITE, bh); 3112 wait_on_buffer(bh); 3113 if (buffer_eopnotsupp(bh)) { 3114 clear_buffer_eopnotsupp(bh);
+14 -1
fs/cifs/CHANGES
··· 1 Version 1.56 2 ------------ 3 Add "forcemandatorylock" mount option to allow user to use mandatory ··· 17 top of the share. Fix problem in 2.6.28 resolving DFS paths to 18 Samba servers (worked to Windows). Fix rmdir so that pending search 19 (readdir) requests do not get invalid results which include the now 20 - removed directory. 21 22 Version 1.55 23 ------------
··· 1 + Version 1.57 2 + ------------ 3 + Improve support for multiple security contexts to the same server. We 4 + used to use the same "vcnumber" for all connections, which could cause 5 + the server to treat subsequent connections, especially those that 6 + are authenticated as guest, as reconnections, invalidating the earlier 7 + user's smb session. This fix allows cifs to mount multiple times to the 8 + same server with different userids without risking invalidating earlier 9 + established security contexts. 10 + 11 Version 1.56 12 ------------ 13 Add "forcemandatorylock" mount option to allow user to use mandatory ··· 7 top of the share. Fix problem in 2.6.28 resolving DFS paths to 8 Samba servers (worked to Windows). Fix rmdir so that pending search 9 (readdir) requests do not get invalid results which include the now 10 + removed directory. Fix oops in cifs_dfs_ref.c when the prefixpath is not 11 + reachable while using DFS. Add better file create support for servers which 12 + support the CIFS POSIX protocol extensions (this adds support for new flags 13 + on create, and improves semantics for write of locked ranges). 14 15 Version 1.55 16 ------------
+1 -1
fs/cifs/cifsfs.h
··· 100 extern const struct export_operations cifs_export_ops; 101 #endif /* EXPERIMENTAL */ 102 103 - #define CIFS_VERSION "1.56" 104 #endif /* _CIFSFS_H */
··· 100 extern const struct export_operations cifs_export_ops; 101 #endif /* EXPERIMENTAL */ 102 103 + #define CIFS_VERSION "1.57" 104 #endif /* _CIFSFS_H */
+5 -1
fs/cifs/cifsglob.h
··· 164 /* multiplexed reads or writes */ 165 unsigned int maxBuf; /* maxBuf specifies the maximum */ 166 /* message size the server can send or receive for non-raw SMBs */ 167 - unsigned int maxRw; /* maxRw specifies the maximum */ 168 /* message size the server can send or receive for */ 169 /* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */ 170 char sessid[4]; /* unique token id for this session */ 171 /* (returned on Negotiate */ 172 int capabilities; /* allow selective disabling of caps by smb sess */ ··· 213 unsigned overrideSecFlg; /* if non-zero override global sec flags */ 214 __u16 ipc_tid; /* special tid for connection to IPC share */ 215 __u16 flags; 216 char *serverOS; /* name of operating system underlying server */ 217 char *serverNOS; /* name of network operating system of server */ 218 char *serverDomain; /* security realm of server */
··· 164 /* multiplexed reads or writes */ 165 unsigned int maxBuf; /* maxBuf specifies the maximum */ 166 /* message size the server can send or receive for non-raw SMBs */ 167 + unsigned int max_rw; /* max_rw specifies the maximum */ 168 /* message size the server can send or receive for */ 169 /* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */ 170 + unsigned int max_vcs; /* maximum number of smb sessions, at least 171 those that can be specified uniquely with 172 vcnumbers */ 173 char sessid[4]; /* unique token id for this session */ 174 /* (returned on Negotiate */ 175 int capabilities; /* allow selective disabling of caps by smb sess */ ··· 210 unsigned overrideSecFlg; /* if non-zero override global sec flags */ 211 __u16 ipc_tid; /* special tid for connection to IPC share */ 212 __u16 flags; 213 + __u16 vcnum; 214 char *serverOS; /* name of operating system underlying server */ 215 char *serverNOS; /* name of network operating system of server */ 216 char *serverDomain; /* security realm of server */
+4
fs/cifs/cifsproto.h
··· 42 #define GetXid() (int)_GetXid(); cFYI(1,("CIFS VFS: in %s as Xid: %d with uid: %d",__func__, xid,current_fsuid())); 43 #define FreeXid(curr_xid) {_FreeXid(curr_xid); cFYI(1,("CIFS VFS: leaving %s (xid = %d) rc = %d",__func__,curr_xid,(int)rc));} 44 extern char *build_path_from_dentry(struct dentry *); 45 extern char *build_wildcard_path_from_dentry(struct dentry *direntry); 46 /* extern void renew_parental_timestamps(struct dentry *direntry);*/ 47 extern int SendReceive(const unsigned int /* xid */ , struct cifsSesInfo *, ··· 92 extern __le64 cnvrtDosCifsTm(__u16 date, __u16 time); 93 extern struct timespec cnvrtDosUnixTm(__u16 date, __u16 time); 94 95 extern int cifs_get_inode_info(struct inode **pinode, 96 const unsigned char *search_path, 97 FILE_ALL_INFO *pfile_info,
··· 42 #define GetXid() (int)_GetXid(); cFYI(1,("CIFS VFS: in %s as Xid: %d with uid: %d",__func__, xid,current_fsuid())); 43 #define FreeXid(curr_xid) {_FreeXid(curr_xid); cFYI(1,("CIFS VFS: leaving %s (xid = %d) rc = %d",__func__,curr_xid,(int)rc));} 44 extern char *build_path_from_dentry(struct dentry *); 45 + extern char *cifs_build_path_to_root(struct cifs_sb_info *cifs_sb); 46 extern char *build_wildcard_path_from_dentry(struct dentry *direntry); 47 /* extern void renew_parental_timestamps(struct dentry *direntry);*/ 48 extern int SendReceive(const unsigned int /* xid */ , struct cifsSesInfo *, ··· 91 extern __le64 cnvrtDosCifsTm(__u16 date, __u16 time); 92 extern struct timespec cnvrtDosUnixTm(__u16 date, __u16 time); 93 94 + extern void posix_fill_in_inode(struct inode *tmp_inode, 95 + FILE_UNIX_BASIC_INFO *pData, int isNewInode); 96 + extern struct inode *cifs_new_inode(struct super_block *sb, __u64 *inum); 97 extern int cifs_get_inode_info(struct inode **pinode, 98 const unsigned char *search_path, 99 FILE_ALL_INFO *pfile_info,
+4 -3
fs/cifs/cifssmb.c
··· 528 server->maxReq = le16_to_cpu(rsp->MaxMpxCount); 529 server->maxBuf = min((__u32)le16_to_cpu(rsp->MaxBufSize), 530 (__u32)CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); 531 GETU32(server->sessid) = le32_to_cpu(rsp->SessionKey); 532 /* even though we do not use raw we might as well set this 533 accurately, in case we ever find a need for it */ 534 if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) { 535 - server->maxRw = 0xFF00; 536 server->capabilities = CAP_MPX_MODE | CAP_RAW_MODE; 537 } else { 538 - server->maxRw = 0;/* we do not need to use raw anyway */ 539 server->capabilities = CAP_MPX_MODE; 540 } 541 tmp = (__s16)le16_to_cpu(rsp->ServerTimeZone); ··· 639 /* probably no need to store and check maxvcs */ 640 server->maxBuf = min(le32_to_cpu(pSMBr->MaxBufferSize), 641 (__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); 642 - server->maxRw = le32_to_cpu(pSMBr->MaxRawSize); 643 cFYI(DBG2, ("Max buf = %d", ses->server->maxBuf)); 644 GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey); 645 server->capabilities = le32_to_cpu(pSMBr->Capabilities);
··· 528 server->maxReq = le16_to_cpu(rsp->MaxMpxCount); 529 server->maxBuf = min((__u32)le16_to_cpu(rsp->MaxBufSize), 530 (__u32)CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); 531 + server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs); 532 GETU32(server->sessid) = le32_to_cpu(rsp->SessionKey); 533 /* even though we do not use raw we might as well set this 534 accurately, in case we ever find a need for it */ 535 if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) { 536 + server->max_rw = 0xFF00; 537 server->capabilities = CAP_MPX_MODE | CAP_RAW_MODE; 538 } else { 539 + server->max_rw = 0;/* do not need to use raw anyway */ 540 server->capabilities = CAP_MPX_MODE; 541 } 542 tmp = (__s16)le16_to_cpu(rsp->ServerTimeZone); ··· 638 /* probably no need to store and check maxvcs */ 639 server->maxBuf = min(le32_to_cpu(pSMBr->MaxBufferSize), 640 (__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE); 641 + server->max_rw = le32_to_cpu(pSMBr->MaxRawSize); 642 cFYI(DBG2, ("Max buf = %d", ses->server->maxBuf)); 643 GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey); 644 server->capabilities = le32_to_cpu(pSMBr->Capabilities);
+48 -3
fs/cifs/connect.c
··· 23 #include <linux/string.h> 24 #include <linux/list.h> 25 #include <linux/wait.h> 26 - #include <linux/ipv6.h> 27 #include <linux/pagemap.h> 28 #include <linux/ctype.h> 29 #include <linux/utsname.h> ··· 34 #include <linux/freezer.h> 35 #include <asm/uaccess.h> 36 #include <asm/processor.h> 37 #include "cifspdu.h" 38 #include "cifsglob.h" 39 #include "cifsproto.h" ··· 1379 server->addr.sockAddr.sin_addr.s_addr)) 1380 continue; 1381 else if (addr->ss_family == AF_INET6 && 1382 - memcmp(&server->addr.sockAddr6.sin6_addr, 1383 - &addr6->sin6_addr, sizeof(addr6->sin6_addr))) 1384 continue; 1385 1386 ++server->srv_count; ··· 2180 "mount option supported")); 2181 } 2182 2183 int 2184 cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb, 2185 char *mount_data, const char *devname) ··· 2217 struct cifsSesInfo *pSesInfo = NULL; 2218 struct cifsTconInfo *tcon = NULL; 2219 struct TCP_Server_Info *srvTcp = NULL; 2220 2221 xid = GetXid(); 2222 ··· 2453 if (!(tcon->ses->capabilities & CAP_LARGE_READ_X)) 2454 cifs_sb->rsize = min(cifs_sb->rsize, 2455 (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE)); 2456 2457 /* volume_info->password is freed above when existing session found 2458 (in which case it is not needed anymore) but when new sesion is created
··· 23 #include <linux/string.h> 24 #include <linux/list.h> 25 #include <linux/wait.h> 26 #include <linux/pagemap.h> 27 #include <linux/ctype.h> 28 #include <linux/utsname.h> ··· 35 #include <linux/freezer.h> 36 #include <asm/uaccess.h> 37 #include <asm/processor.h> 38 + #include <net/ipv6.h> 39 #include "cifspdu.h" 40 #include "cifsglob.h" 41 #include "cifsproto.h" ··· 1379 server->addr.sockAddr.sin_addr.s_addr)) 1380 continue; 1381 else if (addr->ss_family == AF_INET6 && 1382 + !ipv6_addr_equal(&server->addr.sockAddr6.sin6_addr, 1383 + &addr6->sin6_addr)) 1384 continue; 1385 1386 ++server->srv_count; ··· 2180 "mount option supported")); 2181 } 2182 2183 + static int 2184 + is_path_accessible(int xid, struct cifsTconInfo *tcon, 2185 + struct cifs_sb_info *cifs_sb, const char *full_path) 2186 + { 2187 + int rc; 2188 + __u64 inode_num; 2189 + FILE_ALL_INFO *pfile_info; 2190 + 2191 + rc = CIFSGetSrvInodeNumber(xid, tcon, full_path, &inode_num, 2192 + cifs_sb->local_nls, 2193 + cifs_sb->mnt_cifs_flags & 2194 + CIFS_MOUNT_MAP_SPECIAL_CHR); 2195 + if (rc != -EOPNOTSUPP) 2196 + return rc; 2197 + 2198 + pfile_info = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL); 2199 + if (pfile_info == NULL) 2200 + return -ENOMEM; 2201 + 2202 + rc = CIFSSMBQPathInfo(xid, tcon, full_path, pfile_info, 2203 + 0 /* not legacy */, cifs_sb->local_nls, 2204 + cifs_sb->mnt_cifs_flags & 2205 + CIFS_MOUNT_MAP_SPECIAL_CHR); 2206 + kfree(pfile_info); 2207 + return rc; 2208 + } 2209 + 2210 int 2211 cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb, 2212 char *mount_data, const char *devname) ··· 2190 struct cifsSesInfo *pSesInfo = NULL; 2191 struct cifsTconInfo *tcon = NULL; 2192 struct TCP_Server_Info *srvTcp = NULL; 2193 + char *full_path; 2194 2195 xid = GetXid(); 2196 ··· 2425 if (!(tcon->ses->capabilities & CAP_LARGE_READ_X)) 2426 cifs_sb->rsize = min(cifs_sb->rsize, 2427 (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE)); 2428 + 2429 + if (!rc && cifs_sb->prepathlen) { 2430 + /* build_path_to_root works only when we have a valid tcon */ 2431 + full_path = cifs_build_path_to_root(cifs_sb); 2432 + if (full_path == NULL) { 2433 + rc = -ENOMEM; 2434 + goto mount_fail_check; 2435 + } 2436 + rc = is_path_accessible(xid, tcon, cifs_sb, full_path); 2437 + if (rc) { 2438 + cERROR(1, ("Path %s is not accessible: %d", 2439 + full_path, rc)); 2440 + kfree(full_path); 2441 + goto mount_fail_check; 2442 + } 2443 + kfree(full_path); 2444 + } 2445 2446 /* volume_info->password is freed above when existing session found 2447 (in which case it is not needed anymore) but when new sesion is created
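The new is_path_accessible() check exists because a mount may name a prefixpath inside the share that the server cannot resolve; probing it at mount time turns a later oops in the DFS referral code into a clean mount failure. The probe uses a cheap-call-first pattern: ask for the server inode number, and only if the server answers -EOPNOTSUPP fall back to the heavier QPathInfo query. A compact user-space sketch of that control flow (both probe functions are hypothetical stand-ins):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* stand-ins for CIFSGetSrvInodeNumber (cheap) and CIFSSMBQPathInfo
     * (heavier); this toy server supports only the heavier call */
    static int cheap_probe(const char *path)
    {
            (void)path;
            return -EOPNOTSUPP;
    }

    static int full_probe(const char *path)
    {
            return strcmp(path, "/share/missing") == 0 ? -ENOENT : 0;
    }

    /* same control flow as is_path_accessible(): only a "not supported"
     * answer from the cheap call triggers the expensive one */
    static int path_accessible(const char *path)
    {
            int rc = cheap_probe(path);

            if (rc != -EOPNOTSUPP)
                    return rc;
            return full_probe(path);
    }

    int main(void)
    {
            printf("%d %d\n", path_accessible("/share"),
                   path_accessible("/share/missing"));
            return 0;
    }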
+202 -99
fs/cifs/dir.c
··· 3 * 4 * vfs operations that deal with dentries 5 * 6 - * Copyright (C) International Business Machines Corp., 2002,2008 7 * Author(s): Steve French (sfrench@us.ibm.com) 8 * 9 * This library is free software; you can redistribute it and/or modify ··· 129 return full_path; 130 } 131 132 static void setup_cifs_dentry(struct cifsTconInfo *tcon, 133 struct dentry *direntry, 134 struct inode *newinode) ··· 222 int xid; 223 int create_options = CREATE_NOT_DIR; 224 int oplock = 0; 225 - /* BB below access is too much for the mknod to request */ 226 int desiredAccess = GENERIC_READ | GENERIC_WRITE; 227 __u16 fileHandle; 228 struct cifs_sb_info *cifs_sb; ··· 253 } 254 255 mode &= ~current->fs->umask; 256 257 if (nd && (nd->flags & LOOKUP_OPEN)) { 258 - int oflags = nd->intent.open.flags; 259 - 260 desiredAccess = 0; 261 if (oflags & FMODE_READ) 262 - desiredAccess |= GENERIC_READ; 263 if (oflags & FMODE_WRITE) { 264 desiredAccess |= GENERIC_WRITE; 265 if (!(oflags & FMODE_READ)) ··· 308 309 /* BB add processing to set equivalent of mode - e.g. via CreateX with 310 ACLs */ 311 - if (oplockEnabled) 312 - oplock = REQ_OPLOCK; 313 314 buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL); 315 if (buf == NULL) { ··· 340 } 341 if (rc) { 342 cFYI(1, ("cifs_create returned 0x%x", rc)); 343 - } else { 344 - /* If Open reported that we actually created a file 345 - then we now have to set the mode if possible */ 346 - if ((tcon->unix_ext) && (oplock & CIFS_CREATE_ACTION)) { 347 - struct cifs_unix_set_info_args args = { 348 .mode = mode, 349 .ctime = NO_CHANGE_64, 350 .atime = NO_CHANGE_64, 351 .mtime = NO_CHANGE_64, 352 .device = 0, 353 - }; 354 355 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) { 356 - args.uid = (__u64) current_fsuid(); 357 - if (inode->i_mode & S_ISGID) 358 - args.gid = (__u64) inode->i_gid; 359 - else 360 - args.gid = (__u64) current_fsgid(); 361 - } else { 362 - args.uid = NO_CHANGE_64; 363 - args.gid = NO_CHANGE_64; 364 - } 365 - CIFSSMBUnixSetInfo(xid, tcon, full_path, &args, 366 - cifs_sb->local_nls, 367 - cifs_sb->mnt_cifs_flags & 368 - CIFS_MOUNT_MAP_SPECIAL_CHR); 369 } else { 370 - /* BB implement mode setting via Windows security 371 - descriptors e.g. 
*/ 372 - /* CIFSSMBWinSetPerms(xid,tcon,path,mode,-1,-1,nls);*/ 373 - 374 - /* Could set r/o dos attribute if mode & 0222 == 0 */ 375 } 376 377 - /* server might mask mode so we have to query for it */ 378 - if (tcon->unix_ext) 379 - rc = cifs_get_inode_info_unix(&newinode, full_path, 380 - inode->i_sb, xid); 381 - else { 382 - rc = cifs_get_inode_info(&newinode, full_path, 383 - buf, inode->i_sb, xid, 384 - &fileHandle); 385 - if (newinode) { 386 - if (cifs_sb->mnt_cifs_flags & 387 - CIFS_MOUNT_DYNPERM) 388 - newinode->i_mode = mode; 389 - if ((oplock & CIFS_CREATE_ACTION) && 390 - (cifs_sb->mnt_cifs_flags & 391 - CIFS_MOUNT_SET_UID)) { 392 - newinode->i_uid = current_fsuid(); 393 - if (inode->i_mode & S_ISGID) 394 - newinode->i_gid = 395 - inode->i_gid; 396 - else 397 - newinode->i_gid = 398 - current_fsgid(); 399 - } 400 } 401 } 402 403 - if (rc != 0) { 404 - cFYI(1, ("Create worked, get_inode_info failed rc = %d", 405 - rc)); 406 - } else 407 - setup_cifs_dentry(tcon, direntry, newinode); 408 409 - if ((nd == NULL /* nfsd case - nfs srv does not set nd */) || 410 - (!(nd->flags & LOOKUP_OPEN))) { 411 - /* mknod case - do not leave file open */ 412 - CIFSSMBClose(xid, tcon, fileHandle); 413 - } else if (newinode) { 414 - struct cifsFileInfo *pCifsFile = 415 - kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL); 416 417 - if (pCifsFile == NULL) 418 - goto cifs_create_out; 419 - pCifsFile->netfid = fileHandle; 420 - pCifsFile->pid = current->tgid; 421 - pCifsFile->pInode = newinode; 422 - pCifsFile->invalidHandle = false; 423 - pCifsFile->closePend = false; 424 - init_MUTEX(&pCifsFile->fh_sem); 425 - mutex_init(&pCifsFile->lock_mutex); 426 - INIT_LIST_HEAD(&pCifsFile->llist); 427 - atomic_set(&pCifsFile->wrtPending, 0); 428 429 - /* set the following in open now 430 pCifsFile->pfile = file; */ 431 - write_lock(&GlobalSMBSeslock); 432 - list_add(&pCifsFile->tlist, &tcon->openFileList); 433 - pCifsInode = CIFS_I(newinode); 434 - if (pCifsInode) { 435 - /* if readable file instance put first in list*/ 436 - if (write_only) { 437 - list_add_tail(&pCifsFile->flist, 438 - &pCifsInode->openFileList); 439 - } else { 440 - list_add(&pCifsFile->flist, 441 - &pCifsInode->openFileList); 442 - } 443 - if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { 444 - pCifsInode->clientCanCacheAll = true; 445 - pCifsInode->clientCanCacheRead = true; 446 - cFYI(1, ("Exclusive Oplock inode %p", 447 - newinode)); 448 - } else if ((oplock & 0xF) == OPLOCK_READ) 449 - pCifsInode->clientCanCacheRead = true; 450 } 451 - write_unlock(&GlobalSMBSeslock); 452 } 453 } 454 cifs_create_out: 455 kfree(buf);
··· 3 * 4 * vfs operations that deal with dentries 5 * 6 + * Copyright (C) International Business Machines Corp., 2002,2009 7 * Author(s): Steve French (sfrench@us.ibm.com) 8 * 9 * This library is free software; you can redistribute it and/or modify ··· 129 return full_path; 130 } 131 132 + static int cifs_posix_open(char *full_path, struct inode **pinode, 133 + struct super_block *sb, int mode, int oflags, 134 + int *poplock, __u16 *pnetfid, int xid) 135 + { 136 + int rc; 137 + __u32 oplock; 138 + FILE_UNIX_BASIC_INFO *presp_data; 139 + __u32 posix_flags = 0; 140 + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 141 + 142 + cFYI(1, ("posix open %s", full_path)); 143 + 144 + presp_data = kzalloc(sizeof(FILE_UNIX_BASIC_INFO), GFP_KERNEL); 145 + if (presp_data == NULL) 146 + return -ENOMEM; 147 + 148 + /* So far cifs posix extensions can only map the following flags. 149 + There are other valid fmode oflags such as FMODE_LSEEK, FMODE_PREAD, but 150 + so far we do not seem to need them, and we can treat them as local only */ 151 + if ((oflags & (FMODE_READ | FMODE_WRITE)) == 152 + (FMODE_READ | FMODE_WRITE)) 153 + posix_flags = SMB_O_RDWR; 154 + else if (oflags & FMODE_READ) 155 + posix_flags = SMB_O_RDONLY; 156 + else if (oflags & FMODE_WRITE) 157 + posix_flags = SMB_O_WRONLY; 158 + if (oflags & O_CREAT) 159 + posix_flags |= SMB_O_CREAT; 160 + if (oflags & O_EXCL) 161 + posix_flags |= SMB_O_EXCL; 162 + if (oflags & O_TRUNC) 163 + posix_flags |= SMB_O_TRUNC; 164 + if (oflags & O_APPEND) 165 + posix_flags |= SMB_O_APPEND; 166 + if (oflags & O_SYNC) 167 + posix_flags |= SMB_O_SYNC; 168 + if (oflags & O_DIRECTORY) 169 + posix_flags |= SMB_O_DIRECTORY; 170 + if (oflags & O_NOFOLLOW) 171 + posix_flags |= SMB_O_NOFOLLOW; 172 + if (oflags & O_DIRECT) 173 + posix_flags |= SMB_O_DIRECT; 174 + 175 + 176 + rc = CIFSPOSIXCreate(xid, cifs_sb->tcon, posix_flags, mode, 177 + pnetfid, presp_data, &oplock, full_path, 178 + cifs_sb->local_nls, cifs_sb->mnt_cifs_flags & 179 + CIFS_MOUNT_MAP_SPECIAL_CHR); 180 + if (rc) 181 + goto posix_open_ret; 182 + 183 + if (presp_data->Type == cpu_to_le32(-1)) 184 + goto posix_open_ret; /* open ok, caller does qpathinfo */ 185 + 186 + /* get new inode and set it up */ 187 + if (!pinode) 188 + goto posix_open_ret; /* caller does not need info */ 189 + 190 + *pinode = cifs_new_inode(sb, &presp_data->UniqueId); 191 + 192 + /* We do not need to close the file if new_inode fails since 193 + the caller will retry qpathinfo as long as inode is null */ 194 + if (*pinode == NULL) 195 + goto posix_open_ret; 196 + 197 + posix_fill_in_inode(*pinode, presp_data, 1); 198 + 199 + posix_open_ret: 200 + kfree(presp_data); 201 + return rc; 202 + } 203 + 204 static void setup_cifs_dentry(struct cifsTconInfo *tcon, 205 struct dentry *direntry, 206 struct inode *newinode) ··· 150 int xid; 151 int create_options = CREATE_NOT_DIR; 152 int oplock = 0; 153 + int oflags; 154 + /* 155 + * BB below access is probably too much for mknod to request 156 + * but we have to do query and setpathinfo so requesting 157 + * less could fail (unless we want to request getatr and setatr 158 + * permissions (only). At least for POSIX we do not have to 159 + * request so much. 
160 + */ 161 int desiredAccess = GENERIC_READ | GENERIC_WRITE; 162 __u16 fileHandle; 163 struct cifs_sb_info *cifs_sb; ··· 174 } 175 176 mode &= ~current->fs->umask; 177 + if (oplockEnabled) 178 + oplock = REQ_OPLOCK; 179 + 180 + if (nd && (nd->flags & LOOKUP_OPEN)) 181 + oflags = nd->intent.open.flags; 182 + else 183 + oflags = FMODE_READ; 184 + 185 + if (tcon->unix_ext && (tcon->ses->capabilities & CAP_UNIX) && 186 + (CIFS_UNIX_POSIX_PATH_OPS_CAP & 187 + le64_to_cpu(tcon->fsUnixInfo.Capability))) { 188 + rc = cifs_posix_open(full_path, &newinode, inode->i_sb, 189 + mode, oflags, &oplock, &fileHandle, xid); 190 + /* EIO could indicate that (posix open) operation is not 191 + supported, despite what server claimed in capability 192 + negotation. EREMOTE indicates DFS junction, which is not 193 + handled in posix open */ 194 + 195 + if ((rc == 0) && (newinode == NULL)) 196 + goto cifs_create_get_file_info; /* query inode info */ 197 + else if (rc == 0) /* success, no need to query */ 198 + goto cifs_create_set_dentry; 199 + else if ((rc != -EIO) && (rc != -EREMOTE) && 200 + (rc != -EOPNOTSUPP)) /* path not found or net err */ 201 + goto cifs_create_out; 202 + /* else fallthrough to retry, using older open call, this is 203 + case where server does not support this SMB level, and 204 + falsely claims capability (also get here for DFS case 205 + which should be rare for path not covered on files) */ 206 + } 207 208 if (nd && (nd->flags & LOOKUP_OPEN)) { 209 + /* if the file is going to stay open, then we 210 + need to set the desired access properly */ 211 desiredAccess = 0; 212 if (oflags & FMODE_READ) 213 + desiredAccess |= GENERIC_READ; /* is this too little? */ 214 if (oflags & FMODE_WRITE) { 215 desiredAccess |= GENERIC_WRITE; 216 if (!(oflags & FMODE_READ)) ··· 199 200 /* BB add processing to set equivalent of mode - e.g. via CreateX with 201 ACLs */ 202 203 buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL); 204 if (buf == NULL) { ··· 233 } 234 if (rc) { 235 cFYI(1, ("cifs_create returned 0x%x", rc)); 236 + goto cifs_create_out; 237 + } 238 + 239 + /* If Open reported that we actually created a file 240 + then we now have to set the mode if possible */ 241 + if ((tcon->unix_ext) && (oplock & CIFS_CREATE_ACTION)) { 242 + struct cifs_unix_set_info_args args = { 243 .mode = mode, 244 .ctime = NO_CHANGE_64, 245 .atime = NO_CHANGE_64, 246 .mtime = NO_CHANGE_64, 247 .device = 0, 248 + }; 249 250 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) { 251 + args.uid = (__u64) current_fsuid(); 252 + if (inode->i_mode & S_ISGID) 253 + args.gid = (__u64) inode->i_gid; 254 + else 255 + args.gid = (__u64) current_fsgid(); 256 } else { 257 + args.uid = NO_CHANGE_64; 258 + args.gid = NO_CHANGE_64; 259 } 260 + CIFSSMBUnixSetInfo(xid, tcon, full_path, &args, 261 + cifs_sb->local_nls, 262 + cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR); 263 + } else { 264 + /* BB implement mode setting via Windows security 265 + descriptors e.g. 
*/ 266 + /* CIFSSMBWinSetPerms(xid,tcon,path,mode,-1,-1,nls);*/ 267 268 + /* Could set r/o dos attribute if mode & 0222 == 0 */ 269 + } 270 + 271 + cifs_create_get_file_info: 272 + /* server might mask mode so we have to query for it */ 273 + if (tcon->unix_ext) 274 + rc = cifs_get_inode_info_unix(&newinode, full_path, 275 + inode->i_sb, xid); 276 + else { 277 + rc = cifs_get_inode_info(&newinode, full_path, buf, 278 + inode->i_sb, xid, &fileHandle); 279 + if (newinode) { 280 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM) 281 + newinode->i_mode = mode; 282 + if ((oplock & CIFS_CREATE_ACTION) && 283 + (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID)) { 284 + newinode->i_uid = current_fsuid(); 285 + if (inode->i_mode & S_ISGID) 286 + newinode->i_gid = inode->i_gid; 287 + else 288 + newinode->i_gid = current_fsgid(); 289 } 290 } 291 + } 292 293 + cifs_create_set_dentry: 294 + if (rc == 0) 295 + setup_cifs_dentry(tcon, direntry, newinode); 296 + else 297 + cFYI(1, ("Create worked, get_inode_info failed rc = %d", rc)); 298 299 + /* nfsd case - nfs srv does not set nd */ 300 + if ((nd == NULL) || (!(nd->flags & LOOKUP_OPEN))) { 301 + /* mknod case - do not leave file open */ 302 + CIFSSMBClose(xid, tcon, fileHandle); 303 + } else if (newinode) { 304 + struct cifsFileInfo *pCifsFile = 305 + kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL); 306 307 + if (pCifsFile == NULL) 308 + goto cifs_create_out; 309 + pCifsFile->netfid = fileHandle; 310 + pCifsFile->pid = current->tgid; 311 + pCifsFile->pInode = newinode; 312 + pCifsFile->invalidHandle = false; 313 + pCifsFile->closePend = false; 314 + init_MUTEX(&pCifsFile->fh_sem); 315 + mutex_init(&pCifsFile->lock_mutex); 316 + INIT_LIST_HEAD(&pCifsFile->llist); 317 + atomic_set(&pCifsFile->wrtPending, 0); 318 319 + /* set the following in open now 320 pCifsFile->pfile = file; */ 321 + write_lock(&GlobalSMBSeslock); 322 + list_add(&pCifsFile->tlist, &tcon->openFileList); 323 + pCifsInode = CIFS_I(newinode); 324 + if (pCifsInode) { 325 + /* if readable file instance put first in list*/ 326 + if (write_only) { 327 + list_add_tail(&pCifsFile->flist, 328 + &pCifsInode->openFileList); 329 + } else { 330 + list_add(&pCifsFile->flist, 331 + &pCifsInode->openFileList); 332 } 333 + if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) { 334 + pCifsInode->clientCanCacheAll = true; 335 + pCifsInode->clientCanCacheRead = true; 336 + cFYI(1, ("Exclusive Oplock inode %p", 337 + newinode)); 338 + } else if ((oplock & 0xF) == OPLOCK_READ) 339 + pCifsInode->clientCanCacheRead = true; 340 } 341 + write_unlock(&GlobalSMBSeslock); 342 } 343 cifs_create_out: 344 kfree(buf);
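cifs_posix_open() above translates the VFS open flags into the SMB_O_* bits of the POSIX create request: the access mode is chosen exclusively (RDWR, else RDONLY, else WRONLY) and the creation and I/O flags are then ORed in. A self-contained user-space rendering of the mapping; the SMB_O_* values are reproduced from cifspdu.h of this era purely so the sketch compiles, and it keys off O_ACCMODE where the kernel code tests the FMODE_READ/FMODE_WRITE bits of the open intent:

    #include <fcntl.h>
    #include <stdio.h>

    /* values as in cifspdu.h at the time; reproduced for illustration */
    #define SMB_O_RDONLY    0x1
    #define SMB_O_WRONLY    0x2
    #define SMB_O_RDWR      0x4
    #define SMB_O_CREAT     0x10
    #define SMB_O_EXCL      0x20
    #define SMB_O_TRUNC     0x40
    #define SMB_O_APPEND    0x80

    /* truncated to the flags above; the kernel version also maps
     * O_SYNC, O_DIRECTORY, O_NOFOLLOW and O_DIRECT */
    static unsigned int to_posix_flags(int oflags)
    {
            unsigned int pf;

            if ((oflags & O_ACCMODE) == O_RDWR)
                    pf = SMB_O_RDWR;
            else if ((oflags & O_ACCMODE) == O_WRONLY)
                    pf = SMB_O_WRONLY;
            else
                    pf = SMB_O_RDONLY;
            if (oflags & O_CREAT)
                    pf |= SMB_O_CREAT;
            if (oflags & O_EXCL)
                    pf |= SMB_O_EXCL;
            if (oflags & O_TRUNC)
                    pf |= SMB_O_TRUNC;
            if (oflags & O_APPEND)
                    pf |= SMB_O_APPEND;
            return pf;
    }

    int main(void)
    {
            /* prints 0x32: SMB_O_WRONLY | SMB_O_CREAT | SMB_O_EXCL */
            printf("0x%x\n", to_posix_flags(O_WRONLY | O_CREAT | O_EXCL));
            return 0;
    }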
+64 -40
fs/cifs/inode.c
··· 199 pfnd_dat->Gid = cpu_to_le64(pinode->i_gid); 200 } 201 202 int cifs_get_inode_info_unix(struct inode **pinode, 203 const unsigned char *full_path, struct super_block *sb, int xid) 204 { ··· 276 277 /* get new inode */ 278 if (*pinode == NULL) { 279 - *pinode = new_inode(sb); 280 if (*pinode == NULL) { 281 rc = -ENOMEM; 282 goto cgiiu_exit; 283 } 284 - /* Is an i_ino of zero legal? */ 285 - /* note ino incremented to unique num in new_inode */ 286 - /* Are there sanity checks we can use to ensure that 287 - the server is really filling in that field? */ 288 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) 289 - (*pinode)->i_ino = (unsigned long)find_data.UniqueId; 290 - 291 - if (sb->s_flags & MS_NOATIME) 292 - (*pinode)->i_flags |= S_NOATIME | S_NOCMTIME; 293 - 294 - insert_inode_hash(*pinode); 295 } 296 297 inode = *pinode; ··· 497 498 /* get new inode */ 499 if (*pinode == NULL) { 500 - *pinode = new_inode(sb); 501 - if (*pinode == NULL) { 502 - rc = -ENOMEM; 503 - goto cgii_exit; 504 - } 505 /* Is an i_ino of zero legal? Can we use that to check 506 if the server supports returning inode numbers? Are 507 there other sanity checks we can use to ensure that ··· 516 517 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { 518 int rc1 = 0; 519 - __u64 inode_num; 520 521 rc1 = CIFSGetSrvInodeNumber(xid, pTcon, 522 - full_path, &inode_num, 523 cifs_sb->local_nls, 524 cifs_sb->mnt_cifs_flags & 525 CIFS_MOUNT_MAP_SPECIAL_CHR); 526 if (rc1) { 527 cFYI(1, ("GetSrvInodeNum rc %d", rc1)); 528 /* BB EOPNOSUPP disable SERVER_INUM? */ 529 - } else /* do we need cast or hash to ino? */ 530 - (*pinode)->i_ino = inode_num; 531 - } /* else ino incremented to unique num in new_inode*/ 532 - if (sb->s_flags & MS_NOATIME) 533 - (*pinode)->i_flags |= S_NOATIME | S_NOCMTIME; 534 - insert_inode_hash(*pinode); 535 } 536 inode = *pinode; 537 cifsInfo = CIFS_I(inode); ··· 655 .lookup = cifs_lookup, 656 }; 657 658 - static char *build_path_to_root(struct cifs_sb_info *cifs_sb) 659 { 660 int pplen = cifs_sb->prepathlen; 661 int dfsplen; ··· 712 return inode; 713 714 cifs_sb = CIFS_SB(inode->i_sb); 715 - full_path = build_path_to_root(cifs_sb); 716 if (full_path == NULL) 717 return ERR_PTR(-ENOMEM); 718 ··· 1051 return rc; 1052 } 1053 1054 - static void posix_fill_in_inode(struct inode *tmp_inode, 1055 FILE_UNIX_BASIC_INFO *pData, int isNewInode) 1056 { 1057 struct cifsInodeInfo *cifsInfo = CIFS_I(tmp_inode); ··· 1148 else 1149 direntry->d_op = &cifs_dentry_ops; 1150 1151 - newinode = new_inode(inode->i_sb); 1152 if (newinode == NULL) { 1153 kfree(pInfo); 1154 goto mkdir_get_info; 1155 } 1156 1157 - /* Is an i_ino of zero legal? */ 1158 - /* Are there sanity checks we can use to ensure that 1159 - the server is really filling in that field? */ 1160 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { 1161 - newinode->i_ino = 1162 - (unsigned long)pInfo->UniqueId; 1163 - } /* note ino incremented to unique num in new_inode */ 1164 - if (inode->i_sb->s_flags & MS_NOATIME) 1165 - newinode->i_flags |= S_NOATIME | S_NOCMTIME; 1166 newinode->i_nlink = 2; 1167 - 1168 - insert_inode_hash(newinode); 1169 d_instantiate(direntry, newinode); 1170 1171 /* we already checked in POSIXCreate whether
··· 199 pfnd_dat->Gid = cpu_to_le64(pinode->i_gid); 200 } 201 202 + /** 203 + * cifs_new_inode - create new inode, initialize, and hash it 204 + * @sb - pointer to superblock 205 + * @inum - if valid pointer and serverino is enabled, replace i_ino with val 206 + * 207 + * Create a new inode, initialize it for CIFS and hash it. Returns the new 208 + * inode or NULL if one couldn't be allocated. 209 + * 210 + * If the share isn't mounted with "serverino" or inum is a NULL pointer then 211 + * we'll just use the inode number assigned by new_inode(). Note that this can 212 + * mean i_ino collisions since the i_ino assigned by new_inode is not 213 + * guaranteed to be unique. 214 + */ 215 + struct inode * 216 + cifs_new_inode(struct super_block *sb, __u64 *inum) 217 + { 218 + struct inode *inode; 219 + 220 + inode = new_inode(sb); 221 + if (inode == NULL) 222 + return NULL; 223 + 224 + /* 225 + * BB: Is i_ino == 0 legal? Here, we assume that it is. If it isn't we 226 + * stop passing inum as ptr. Are there sanity checks we can use to 227 + * ensure that the server is really filling in that field? Also, 228 + * if serverino is disabled, perhaps we should be using iunique()? 229 + */ 230 + if (inum && (CIFS_SB(sb)->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) 231 + inode->i_ino = (unsigned long) *inum; 232 + 233 + /* 234 + * must set this here instead of cifs_alloc_inode since VFS will 235 + * clobber i_flags 236 + */ 237 + if (sb->s_flags & MS_NOATIME) 238 + inode->i_flags |= S_NOATIME | S_NOCMTIME; 239 + 240 + insert_inode_hash(inode); 241 + 242 + return inode; 243 + } 244 + 245 int cifs_get_inode_info_unix(struct inode **pinode, 246 const unsigned char *full_path, struct super_block *sb, int xid) 247 { ··· 233 234 /* get new inode */ 235 if (*pinode == NULL) { 236 + *pinode = cifs_new_inode(sb, &find_data.UniqueId); 237 if (*pinode == NULL) { 238 rc = -ENOMEM; 239 goto cgiiu_exit; 240 } 241 } 242 243 inode = *pinode; ··· 465 466 /* get new inode */ 467 if (*pinode == NULL) { 468 + __u64 inode_num; 469 + __u64 *pinum = &inode_num; 470 + 471 /* Is an i_ino of zero legal? Can we use that to check 472 if the server supports returning inode numbers? Are 473 there other sanity checks we can use to ensure that ··· 486 487 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) { 488 int rc1 = 0; 489 490 rc1 = CIFSGetSrvInodeNumber(xid, pTcon, 491 + full_path, pinum, 492 cifs_sb->local_nls, 493 cifs_sb->mnt_cifs_flags & 494 CIFS_MOUNT_MAP_SPECIAL_CHR); 495 if (rc1) { 496 cFYI(1, ("GetSrvInodeNum rc %d", rc1)); 497 + pinum = NULL; 498 /* BB EOPNOSUPP disable SERVER_INUM? 
*/ 499 + } 500 + } else { 501 + pinum = NULL; 502 + } 503 + 504 + *pinode = cifs_new_inode(sb, pinum); 505 + if (*pinode == NULL) { 506 + rc = -ENOMEM; 507 + goto cgii_exit; 508 + } 509 } 510 inode = *pinode; 511 cifsInfo = CIFS_I(inode); ··· 621 .lookup = cifs_lookup, 622 }; 623 624 + char *cifs_build_path_to_root(struct cifs_sb_info *cifs_sb) 625 { 626 int pplen = cifs_sb->prepathlen; 627 int dfsplen; ··· 678 return inode; 679 680 cifs_sb = CIFS_SB(inode->i_sb); 681 + full_path = cifs_build_path_to_root(cifs_sb); 682 if (full_path == NULL) 683 return ERR_PTR(-ENOMEM); 684 ··· 1017 return rc; 1018 } 1019 1020 + void posix_fill_in_inode(struct inode *tmp_inode, 1021 FILE_UNIX_BASIC_INFO *pData, int isNewInode) 1022 { 1023 struct cifsInodeInfo *cifsInfo = CIFS_I(tmp_inode); ··· 1114 else 1115 direntry->d_op = &cifs_dentry_ops; 1116 1117 + newinode = cifs_new_inode(inode->i_sb, 1118 + &pInfo->UniqueId); 1119 if (newinode == NULL) { 1120 kfree(pInfo); 1121 goto mkdir_get_info; 1122 } 1123 1124 newinode->i_nlink = 2; 1125 d_instantiate(direntry, newinode); 1126 1127 /* we already checked in POSIXCreate whether
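Every allocation site this patch touches (dir.c, inode.c, readdir.c) now funnels through cifs_new_inode() and picks one of two idioms: pass a pointer to the server-supplied UniqueId when the wire format carries one, or pass NULL to keep whatever number new_inode() assigned. A toy model of that contract (all names and values hypothetical):

    #include <stdio.h>

    static unsigned long next_local_ino = 100; /* stand-in for new_inode() */
    static int serverino = 1;                  /* "serverino" mount option */

    /* same contract as cifs_new_inode(): adopt the server's number only
     * when the caller has one and serverino is on; otherwise keep the
     * locally generated (and possibly colliding) i_ino */
    static unsigned long pick_ino(const unsigned long long *inum)
    {
            unsigned long ino = ++next_local_ino;

            if (inum && serverino)
                    ino = (unsigned long)*inum;
            return ino;
    }

    int main(void)
    {
            unsigned long long unique_id = 4660; /* UniqueId from a reply */

            printf("%lu\n", pick_ino(&unique_id)); /* 4660, server's number */
            printf("%lu\n", pick_ino(NULL));       /* 102, local fallback */
            return 0;
    }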
+26 -32
fs/cifs/readdir.c
··· 56 } 57 #endif /* DEBUG2 */ 58 59 - /* Returns one if new inode created (which therefore needs to be hashed) */ 60 /* Might check in the future if inode number changed so we can rehash inode */ 61 - static int construct_dentry(struct qstr *qstring, struct file *file, 62 - struct inode **ptmp_inode, struct dentry **pnew_dentry) 63 { 64 - struct dentry *tmp_dentry; 65 - struct cifs_sb_info *cifs_sb; 66 - struct cifsTconInfo *pTcon; 67 int rc = 0; 68 69 cFYI(1, ("For %s", qstring->name)); 70 - cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 71 - pTcon = cifs_sb->tcon; 72 73 qstring->hash = full_name_hash(qstring->name, qstring->len); 74 tmp_dentry = d_lookup(file->f_path.dentry, qstring); 75 if (tmp_dentry) { 76 cFYI(0, ("existing dentry with inode 0x%p", 77 tmp_dentry->d_inode)); 78 *ptmp_inode = tmp_dentry->d_inode; 79 - /* BB overwrite old name? i.e. tmp_dentry->d_name and tmp_dentry->d_name.len??*/ 80 if (*ptmp_inode == NULL) { 81 - *ptmp_inode = new_inode(file->f_path.dentry->d_sb); 82 if (*ptmp_inode == NULL) 83 return rc; 84 rc = 1; 85 } 86 - if (file->f_path.dentry->d_sb->s_flags & MS_NOATIME) 87 - (*ptmp_inode)->i_flags |= S_NOATIME | S_NOCMTIME; 88 } else { 89 tmp_dentry = d_alloc(file->f_path.dentry, qstring); 90 if (tmp_dentry == NULL) { ··· 92 return rc; 93 } 94 95 - *ptmp_inode = new_inode(file->f_path.dentry->d_sb); 96 - if (pTcon->nocase) 97 tmp_dentry->d_op = &cifs_ci_dentry_ops; 98 else 99 tmp_dentry->d_op = &cifs_dentry_ops; 100 if (*ptmp_inode == NULL) 101 return rc; 102 - if (file->f_path.dentry->d_sb->s_flags & MS_NOATIME) 103 - (*ptmp_inode)->i_flags |= S_NOATIME | S_NOCMTIME; 104 rc = 2; 105 } 106 ··· 820 /* inode num, inode type and filename returned */ 821 static int cifs_get_name_from_search_buf(struct qstr *pqst, 822 char *current_entry, __u16 level, unsigned int unicode, 823 - struct cifs_sb_info *cifs_sb, int max_len, ino_t *pinum) 824 { 825 int rc = 0; 826 unsigned int len = 0; ··· 840 len = strnlen(filename, PATH_MAX); 841 } 842 843 - /* BB fixme - hash low and high 32 bits if not 64 bit arch BB */ 844 - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) 845 - *pinum = pFindData->UniqueId; 846 } else if (level == SMB_FIND_FILE_DIRECTORY_INFO) { 847 FILE_DIRECTORY_INFO *pFindData = 848 (FILE_DIRECTORY_INFO *)current_entry; ··· 903 struct qstr qstring; 904 struct cifsFileInfo *pCifsF; 905 unsigned int obj_type; 906 - ino_t inum; 907 struct cifs_sb_info *cifs_sb; 908 struct inode *tmp_inode; 909 struct dentry *tmp_dentry; ··· 936 if (rc) 937 return rc; 938 939 - rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry); 940 if ((tmp_inode == NULL) || (tmp_dentry == NULL)) 941 return -ENOMEM; 942 - 943 - if (rc) { 944 - /* inode created, we need to hash it with right inode number */ 945 - if (inum != 0) { 946 - /* BB fixme - hash the 2 32 quantities bits together if 947 - * necessary BB */ 948 - tmp_inode->i_ino = inum; 949 - } 950 - insert_inode_hash(tmp_inode); 951 - } 952 953 /* we pass in rc below, indicating whether it is a new inode, 954 so we can figure out whether to invalidate the inode cached
··· 56 } 57 #endif /* DEBUG2 */ 58 59 + /* Returns 1 if new inode created, 2 if both dentry and inode were */ 60 /* Might check in the future if inode number changed so we can rehash inode */ 61 + static int 62 + construct_dentry(struct qstr *qstring, struct file *file, 63 + struct inode **ptmp_inode, struct dentry **pnew_dentry, 64 + __u64 *inum) 65 { 66 + struct dentry *tmp_dentry = NULL; 67 + struct super_block *sb = file->f_path.dentry->d_sb; 68 int rc = 0; 69 70 cFYI(1, ("For %s", qstring->name)); 71 72 qstring->hash = full_name_hash(qstring->name, qstring->len); 73 tmp_dentry = d_lookup(file->f_path.dentry, qstring); 74 if (tmp_dentry) { 75 + /* BB: overwrite old name? i.e. tmp_dentry->d_name and 76 + * tmp_dentry->d_name.len?? 77 + */ 78 cFYI(0, ("existing dentry with inode 0x%p", 79 tmp_dentry->d_inode)); 80 *ptmp_inode = tmp_dentry->d_inode; 81 if (*ptmp_inode == NULL) { 82 + *ptmp_inode = cifs_new_inode(sb, inum); 83 if (*ptmp_inode == NULL) 84 return rc; 85 rc = 1; 86 } 87 } else { 88 tmp_dentry = d_alloc(file->f_path.dentry, qstring); 89 if (tmp_dentry == NULL) { ··· 93 return rc; 94 } 95 96 + if (CIFS_SB(sb)->tcon->nocase) 97 tmp_dentry->d_op = &cifs_ci_dentry_ops; 98 else 99 tmp_dentry->d_op = &cifs_dentry_ops; 100 + 101 + *ptmp_inode = cifs_new_inode(sb, inum); 102 if (*ptmp_inode == NULL) 103 return rc; 104 rc = 2; 105 } 106 ··· 822 /* inode num, inode type and filename returned */ 823 static int cifs_get_name_from_search_buf(struct qstr *pqst, 824 char *current_entry, __u16 level, unsigned int unicode, 825 + struct cifs_sb_info *cifs_sb, int max_len, __u64 *pinum) 826 { 827 int rc = 0; 828 unsigned int len = 0; ··· 842 len = strnlen(filename, PATH_MAX); 843 } 844 845 + *pinum = pFindData->UniqueId; 846 } else if (level == SMB_FIND_FILE_DIRECTORY_INFO) { 847 FILE_DIRECTORY_INFO *pFindData = 848 (FILE_DIRECTORY_INFO *)current_entry; ··· 907 struct qstr qstring; 908 struct cifsFileInfo *pCifsF; 909 unsigned int obj_type; 910 + __u64 inum; 911 struct cifs_sb_info *cifs_sb; 912 struct inode *tmp_inode; 913 struct dentry *tmp_dentry; ··· 940 if (rc) 941 return rc; 942 943 + /* only these two infolevels return valid inode numbers */ 944 + if (pCifsF->srch_inf.info_level == SMB_FIND_FILE_UNIX || 945 + pCifsF->srch_inf.info_level == SMB_FIND_FILE_ID_FULL_DIR_INFO) 946 + rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry, 947 + &inum); 948 + else 949 + rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry, 950 + NULL); 951 + 952 if ((tmp_inode == NULL) || (tmp_dentry == NULL)) 953 return -ENOMEM; 954 955 /* we pass in rc below, indicating whether it is a new inode, 956 so we can figure out whether to invalidate the inode cached
+87 -4
fs/cifs/sess.c
··· 34 extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, 35 unsigned char *p24); 36 37 static __u32 cifs_ssetup_hdr(struct cifsSesInfo *ses, SESSION_SETUP_ANDX *pSMB) 38 { 39 __u32 capabilities = 0; 40 41 /* init fields common to all four types of SessSetup */ 42 - /* note that header is initialized to zero in header_assemble */ 43 pSMB->req.AndXCommand = 0xFF; 44 pSMB->req.MaxBufferSize = cpu_to_le16(ses->server->maxBuf); 45 pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq); 46 47 /* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */ 48 ··· 155 if (ses->capabilities & CAP_UNIX) 156 capabilities |= CAP_UNIX; 157 158 - /* BB check whether to init vcnum BB */ 159 return capabilities; 160 } 161 ··· 311 312 kfree(ses->serverOS); 313 /* UTF-8 string will not grow more than four times as big as UCS-16 */ 314 - ses->serverOS = kzalloc(4 * len, GFP_KERNEL); 315 if (ses->serverOS != NULL) 316 cifs_strfromUCS_le(ses->serverOS, (__le16 *)data, len, nls_cp); 317 data += 2 * (len + 1); ··· 324 return rc; 325 326 kfree(ses->serverNOS); 327 - ses->serverNOS = kzalloc(4 * len, GFP_KERNEL); /* BB this is wrong length FIXME BB */ 328 if (ses->serverNOS != NULL) { 329 cifs_strfromUCS_le(ses->serverNOS, (__le16 *)data, len, 330 nls_cp);
··· 34 extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, 35 unsigned char *p24); 36 37 + /* Checks if this is the first smb session to be reconnected after 38 + the socket has been reestablished (so we know whether to use vc 0). 39 + Called while holding the cifs_tcp_ses_lock, so do not block */ 40 + static bool is_first_ses_reconnect(struct cifsSesInfo *ses) 41 + { 42 + struct list_head *tmp; 43 + struct cifsSesInfo *tmp_ses; 44 + 45 + list_for_each(tmp, &ses->server->smb_ses_list) { 46 + tmp_ses = list_entry(tmp, struct cifsSesInfo, 47 + smb_ses_list); 48 + if (tmp_ses->need_reconnect == false) 49 + return false; 50 + } 51 + /* could not find a session that was already connected, 52 + this must be the first one we are reconnecting */ 53 + return true; 54 + } 55 + 56 + /* 57 + * vc number 0 is treated specially by some servers, and should be the 58 + * first one we request. After that we can use vcnumbers up to maxvcs, 59 + * one for each smb session (some Windows versions set maxvcs incorrectly 60 + * so maxvc=1 can be ignored). If we have too many vcs, we can reuse 61 + * any vc but zero (some servers reset the connection on vcnum zero) 62 + * 63 + */ 64 + static __le16 get_next_vcnum(struct cifsSesInfo *ses) 65 + { 66 + __u16 vcnum = 0; 67 + struct list_head *tmp; 68 + struct cifsSesInfo *tmp_ses; 69 + __u16 max_vcs = ses->server->max_vcs; 70 + __u16 i; 71 + int free_vc_found = 0; 72 + 73 + /* Quoting the MS-SMB specification: "Windows-based SMB servers set this 74 + field to one but do not enforce this limit, which allows an SMB client 75 + to establish more virtual circuits than allowed by this value ... but 76 + other server implementations can enforce this limit." */ 77 + if (max_vcs < 2) 78 + max_vcs = 0xFFFF; 79 + 80 + write_lock(&cifs_tcp_ses_lock); 81 + if ((ses->need_reconnect) && is_first_ses_reconnect(ses)) 82 + goto get_vc_num_exit; /* vcnum will be zero */ 83 + for (i = ses->server->srv_count - 1; i < max_vcs; i++) { 84 + if (i == 0) /* this is the only connection, use vc 0 */ 85 + break; 86 + 87 + free_vc_found = 1; 88 + 89 + list_for_each(tmp, &ses->server->smb_ses_list) { 90 + tmp_ses = list_entry(tmp, struct cifsSesInfo, 91 + smb_ses_list); 92 + if (tmp_ses->vcnum == i) { 93 + free_vc_found = 0; 94 + break; /* found duplicate, try next vcnum */ 95 + } 96 + } 97 + if (free_vc_found) 98 + break; /* we found a vcnumber that will work - use it */ 99 + } 100 + 101 + if (i == 0) 102 + vcnum = 0; /* for most common case, ie if one smb session, use 103 + vc zero. Also for case when no free vcnum, zero 104 + is safest to send (some clients only send zero) */ 105 + else if (free_vc_found == 0) 106 + vcnum = 1; /* we can not reuse vc=0 safely, since some servers 107 + reset all uids on that, but 1 is ok. 
*/ 108 + else 109 + vcnum = i; 110 + ses->vcnum = vcnum; 111 + get_vc_num_exit: 112 + write_unlock(&cifs_tcp_ses_lock); 113 + 114 + return cpu_to_le16(vcnum); 115 + } 116 + 117 static __u32 cifs_ssetup_hdr(struct cifsSesInfo *ses, SESSION_SETUP_ANDX *pSMB) 118 { 119 __u32 capabilities = 0; 120 121 /* init fields common to all four types of SessSetup */ 122 + /* Note that offsets for first seven fields in req struct are same */ 123 + /* in CIFS Specs so does not matter which of 3 forms of struct */ 124 + /* that we use in next few lines */ 125 + /* Note that header is initialized to zero in header_assemble */ 126 pSMB->req.AndXCommand = 0xFF; 127 pSMB->req.MaxBufferSize = cpu_to_le16(ses->server->maxBuf); 128 pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq); 129 + pSMB->req.VcNumber = get_next_vcnum(ses); 130 131 /* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */ 132 ··· 71 if (ses->capabilities & CAP_UNIX) 72 capabilities |= CAP_UNIX; 73 74 return capabilities; 75 } 76 ··· 228 229 kfree(ses->serverOS); 230 /* UTF-8 string will not grow more than four times as big as UCS-16 */ 231 + ses->serverOS = kzalloc((4 * len) + 2 /* trailing null */, GFP_KERNEL); 232 if (ses->serverOS != NULL) 233 cifs_strfromUCS_le(ses->serverOS, (__le16 *)data, len, nls_cp); 234 data += 2 * (len + 1); ··· 241 return rc; 242 243 kfree(ses->serverNOS); 244 + ses->serverNOS = kzalloc((4 * len) + 2 /* trailing null */, GFP_KERNEL); 245 if (ses->serverNOS != NULL) { 246 cifs_strfromUCS_le(ses->serverNOS, (__le16 *)data, len, 247 nls_cp);
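get_next_vcnum() implements the policy the 1.57 changelog entry describes: vc 0 goes only to the first session on a freshly (re)connected socket, later sessions get the lowest unused number below max_vcs, and 0 is never handed out again because some servers drop every session on a second vc 0. Worked example: with live sessions holding vcnums {0, 2} and a third session arriving, the search starts at srv_count - 1 = 2, finds 2 taken, and hands out 3. A toy user-space version of the search (the in-use table is hypothetical):

    #include <stdio.h>

    static unsigned short in_use[] = { 0, 2 };  /* vcnums of live sessions */
    #define NR_IN_USE 2

    /* probe upward from (sessions - 1); take the first unused number,
     * falling back to 1 (never 0) when everything is taken */
    static unsigned short next_vcnum(unsigned short start,
                                     unsigned short max_vcs)
    {
            for (unsigned short i = start; i < max_vcs; i++) {
                    int taken = 0;

                    if (i == 0)     /* only session: vc 0 is correct */
                            return 0;
                    for (int j = 0; j < NR_IN_USE; j++)
                            if (in_use[j] == i)
                                    taken = 1;
                    if (!taken)
                            return i;
            }
            return 1;       /* no free number: anything but 0 is safe */
    }

    int main(void)
    {
            printf("%u\n", next_vcnum(3 - 1, 0xFFFF));      /* prints 3 */
            return 0;
    }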
+2
fs/compat_ioctl.c
··· 1938 /* Big K */ 1939 COMPATIBLE_IOCTL(PIO_FONT) 1940 COMPATIBLE_IOCTL(GIO_FONT) 1941 ULONG_IOCTL(KDSIGACCEPT) 1942 COMPATIBLE_IOCTL(KDGETKEYCODE) 1943 COMPATIBLE_IOCTL(KDSETKEYCODE)
··· 1938 /* Big K */ 1939 COMPATIBLE_IOCTL(PIO_FONT) 1940 COMPATIBLE_IOCTL(GIO_FONT) 1941 + COMPATIBLE_IOCTL(PIO_CMAP) 1942 + COMPATIBLE_IOCTL(GIO_CMAP) 1943 ULONG_IOCTL(KDSIGACCEPT) 1944 COMPATIBLE_IOCTL(KDGETKEYCODE) 1945 COMPATIBLE_IOCTL(KDSETKEYCODE)
+1 -1
fs/ext4/ext4.h
··· 868 { 869 unsigned len = le16_to_cpu(dlen); 870 871 - if (len == EXT4_MAX_REC_LEN) 872 return 1 << 16; 873 return len; 874 }
··· 868 { 869 unsigned len = le16_to_cpu(dlen); 870 871 + if (len == EXT4_MAX_REC_LEN || len == 0) 872 return 1 << 16; 873 return len; 874 }
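
rec_len is a 16-bit on-disk field, so with 64KiB blocks a directory entry that spans the whole block cannot store its true length of 65536; EXT4_MAX_REC_LEN (65535) is the encode-side sentinel, and the fix above additionally accepts 0 as meaning a full block. A small userspace round trip of that convention (endianness conversion omitted; the helpers only mirror the kernel ones, they are not the kernel code):

#include <assert.h>
#include <stdint.h>

#define EXT4_MAX_REC_LEN ((1 << 16) - 1)	/* 65535 */

/* Decode: both magic 16-bit values mean "the whole 64KiB block". */
static unsigned rec_len_from_disk(uint16_t dlen)
{
	if (dlen == EXT4_MAX_REC_LEN || dlen == 0)
		return 1 << 16;
	return dlen;
}

/* Encode: a full-block entry is stored as the 65535 sentinel. */
static uint16_t rec_len_to_disk(unsigned len)
{
	assert(len <= (1 << 16));
	if (len == (1 << 16))
		return EXT4_MAX_REC_LEN;
	return (uint16_t)len;
}

int main(void)
{
	assert(rec_len_from_disk(rec_len_to_disk(1 << 16)) == (1 << 16));
	assert(rec_len_from_disk(0) == (1 << 16));	/* the new case */
	assert(rec_len_from_disk(12) == 12);
	return 0;
}
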
+23 -4
fs/ext4/inode.c
··· 47 static inline int ext4_begin_ordered_truncate(struct inode *inode, 48 loff_t new_size) 49 { 50 - return jbd2_journal_begin_ordered_truncate(&EXT4_I(inode)->jinode, 51 - new_size); 52 } 53 54 static void ext4_invalidatepage(struct page *page, unsigned long offset); ··· 2439 int no_nrwrite_index_update; 2440 int pages_written = 0; 2441 long pages_skipped; 2442 int needed_blocks, ret = 0, nr_to_writebump = 0; 2443 struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb); 2444 ··· 2491 if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) 2492 range_whole = 1; 2493 2494 - if (wbc->range_cyclic) 2495 index = mapping->writeback_index; 2496 - else 2497 index = wbc->range_start >> PAGE_CACHE_SHIFT; 2498 2499 mpd.wbc = wbc; ··· 2513 wbc->no_nrwrite_index_update = 1; 2514 pages_skipped = wbc->pages_skipped; 2515 2516 while (!ret && wbc->nr_to_write > 0) { 2517 2518 /* ··· 2556 pages_written += mpd.pages_written; 2557 wbc->pages_skipped = pages_skipped; 2558 ret = 0; 2559 } else if (wbc->nr_to_write) 2560 /* 2561 * There is no more writeout needed ··· 2565 */ 2566 break; 2567 } 2568 if (pages_skipped != wbc->pages_skipped) 2569 printk(KERN_EMERG "This should not happen leaving %s " 2570 "with nr_to_write = %ld ret = %d\n", ··· 2579 2580 /* Update index */ 2581 index += pages_written; 2582 if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0)) 2583 /* 2584 * set the writeback_index so that range_cyclic
··· 47 static inline int ext4_begin_ordered_truncate(struct inode *inode, 48 loff_t new_size) 49 { 50 + return jbd2_journal_begin_ordered_truncate( 51 + EXT4_SB(inode->i_sb)->s_journal, 52 + &EXT4_I(inode)->jinode, 53 + new_size); 54 } 55 56 static void ext4_invalidatepage(struct page *page, unsigned long offset); ··· 2437 int no_nrwrite_index_update; 2438 int pages_written = 0; 2439 long pages_skipped; 2440 + int range_cyclic, cycled = 1, io_done = 0; 2441 int needed_blocks, ret = 0, nr_to_writebump = 0; 2442 struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb); 2443 ··· 2488 if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) 2489 range_whole = 1; 2490 2491 + range_cyclic = wbc->range_cyclic; 2492 + if (wbc->range_cyclic) { 2493 index = mapping->writeback_index; 2494 + if (index) 2495 + cycled = 0; 2496 + wbc->range_start = index << PAGE_CACHE_SHIFT; 2497 + wbc->range_end = LLONG_MAX; 2498 + wbc->range_cyclic = 0; 2499 + } else 2500 index = wbc->range_start >> PAGE_CACHE_SHIFT; 2501 2502 mpd.wbc = wbc; ··· 2504 wbc->no_nrwrite_index_update = 1; 2505 pages_skipped = wbc->pages_skipped; 2506 2507 + retry: 2508 while (!ret && wbc->nr_to_write > 0) { 2509 2510 /* ··· 2546 pages_written += mpd.pages_written; 2547 wbc->pages_skipped = pages_skipped; 2548 ret = 0; 2549 + io_done = 1; 2550 } else if (wbc->nr_to_write) 2551 /* 2552 * There is no more writeout needed ··· 2554 */ 2555 break; 2556 } 2557 + if (!io_done && !cycled) { 2558 + cycled = 1; 2559 + index = 0; 2560 + wbc->range_start = index << PAGE_CACHE_SHIFT; 2561 + wbc->range_end = mapping->writeback_index - 1; 2562 + goto retry; 2563 + } 2564 if (pages_skipped != wbc->pages_skipped) 2565 printk(KERN_EMERG "This should not happen leaving %s " 2566 "with nr_to_write = %ld ret = %d\n", ··· 2561 2562 /* Update index */ 2563 index += pages_written; 2564 + wbc->range_cyclic = range_cyclic; 2565 if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0)) 2566 /* 2567 * set the writeback_index so that range_cyclic
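
The writepages change above replaces a single wrap-around (range_cyclic) scan with two linear passes: first from writeback_index to the end of the file, then, if nothing was written and the scan did not start at zero, from zero up to where the first pass began. A toy model of that control flow, with a small dirty-page array standing in for the address space (this is a schematic of the loop shape, not the ext4 code):

#include <stdio.h>

static int dirty[16];	/* toy "file": dirty[i] != 0 means page i needs IO */

static long write_range(long lo, long hi)
{
	long i, n = 0;

	for (i = lo; i <= hi && i < 16; i++)
		if (dirty[i]) {
			dirty[i] = 0;
			n++;
		}
	return n;
}

/* Two linear passes replace the old wrap-around scan. */
static long writeback_cyclic(long start)
{
	long written = 0;
	int cycled = (start == 0);	/* nothing behind us to wrap onto */
	long lo = start, hi = 15;

retry:
	written += write_range(lo, hi);
	if (written == 0 && !cycled) {
		cycled = 1;	/* second pass covers [0, start) */
		lo = 0;
		hi = start - 1;
		goto retry;
	}
	return written;
}

int main(void)
{
	dirty[2] = 1;	/* a dirty page behind the writeback cursor */
	printf("%ld\n", writeback_cyclic(8));	/* pass 1 misses it, pass 2 writes it */
	return 0;
}
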
+19 -13
fs/ext4/mballoc.c
··· 3693 pa->pa_free = pa->pa_len; 3694 atomic_set(&pa->pa_count, 1); 3695 spin_lock_init(&pa->pa_lock); 3696 pa->pa_deleted = 0; 3697 pa->pa_linear = 0; 3698 ··· 3757 atomic_set(&pa->pa_count, 1); 3758 spin_lock_init(&pa->pa_lock); 3759 INIT_LIST_HEAD(&pa->pa_inode_list); 3760 pa->pa_deleted = 0; 3761 pa->pa_linear = 1; 3762 ··· 4479 pa->pa_free -= ac->ac_b_ex.fe_len; 4480 pa->pa_len -= ac->ac_b_ex.fe_len; 4481 spin_unlock(&pa->pa_lock); 4482 - /* 4483 - * We want to add the pa to the right bucket. 4484 - * Remove it from the list and while adding 4485 - * make sure the list to which we are adding 4486 - * doesn't grow big. 4487 - */ 4488 - if (likely(pa->pa_free)) { 4489 - spin_lock(pa->pa_obj_lock); 4490 - list_del_rcu(&pa->pa_inode_list); 4491 - spin_unlock(pa->pa_obj_lock); 4492 - ext4_mb_add_n_trim(ac); 4493 - } 4494 } 4495 - ext4_mb_put_pa(ac, ac->ac_sb, pa); 4496 } 4497 if (ac->alloc_semp) 4498 up_read(ac->alloc_semp); 4499 if (ac->ac_bitmap_page) 4500 page_cache_release(ac->ac_bitmap_page); 4501 if (ac->ac_buddy_page)
··· 3693 pa->pa_free = pa->pa_len; 3694 atomic_set(&pa->pa_count, 1); 3695 spin_lock_init(&pa->pa_lock); 3696 + INIT_LIST_HEAD(&pa->pa_inode_list); 3697 + INIT_LIST_HEAD(&pa->pa_group_list); 3698 pa->pa_deleted = 0; 3699 pa->pa_linear = 0; 3700 ··· 3755 atomic_set(&pa->pa_count, 1); 3756 spin_lock_init(&pa->pa_lock); 3757 INIT_LIST_HEAD(&pa->pa_inode_list); 3758 + INIT_LIST_HEAD(&pa->pa_group_list); 3759 pa->pa_deleted = 0; 3760 pa->pa_linear = 1; 3761 ··· 4476 pa->pa_free -= ac->ac_b_ex.fe_len; 4477 pa->pa_len -= ac->ac_b_ex.fe_len; 4478 spin_unlock(&pa->pa_lock); 4479 } 4480 } 4481 if (ac->alloc_semp) 4482 up_read(ac->alloc_semp); 4483 + if (pa) { 4484 + /* 4485 + * We want to add the pa to the right bucket. 4486 + * Remove it from the list and while adding 4487 + * make sure the list to which we are adding 4488 + * doesn't grow big. We need to release 4489 + * alloc_semp before calling ext4_mb_add_n_trim() 4490 + */ 4491 + if (pa->pa_linear && likely(pa->pa_free)) { 4492 + spin_lock(pa->pa_obj_lock); 4493 + list_del_rcu(&pa->pa_inode_list); 4494 + spin_unlock(pa->pa_obj_lock); 4495 + ext4_mb_add_n_trim(ac); 4496 + } 4497 + ext4_mb_put_pa(ac, ac->ac_sb, pa); 4498 + } 4499 if (ac->ac_bitmap_page) 4500 page_cache_release(ac->ac_bitmap_page); 4501 if (ac->ac_buddy_page)
+3 -5
fs/ext4/migrate.c
··· 481 + 1); 482 if (IS_ERR(handle)) { 483 retval = PTR_ERR(handle); 484 - goto err_out; 485 } 486 tmp_inode = ext4_new_inode(handle, 487 inode->i_sb->s_root->d_inode, ··· 489 if (IS_ERR(tmp_inode)) { 490 retval = -ENOMEM; 491 ext4_journal_stop(handle); 492 - tmp_inode = NULL; 493 - goto err_out; 494 } 495 i_size_write(tmp_inode, i_size_read(inode)); 496 /* ··· 617 618 ext4_journal_stop(handle); 619 620 - if (tmp_inode) 621 - iput(tmp_inode); 622 623 return retval; 624 }
··· 481 + 1); 482 if (IS_ERR(handle)) { 483 retval = PTR_ERR(handle); 484 + return retval; 485 } 486 tmp_inode = ext4_new_inode(handle, 487 inode->i_sb->s_root->d_inode, ··· 489 if (IS_ERR(tmp_inode)) { 490 retval = -ENOMEM; 491 ext4_journal_stop(handle); 492 + return retval; 493 } 494 i_size_write(tmp_inode, i_size_read(inode)); 495 /* ··· 618 619 ext4_journal_stop(handle); 620 621 + iput(tmp_inode); 622 623 return retval; 624 }
+7 -4
fs/ext4/super.c
··· 3046 static int ext4_sync_fs(struct super_block *sb, int wait) 3047 { 3048 int ret = 0; 3049 3050 trace_mark(ext4_sync_fs, "dev %s wait %d", sb->s_id, wait); 3051 sb->s_dirt = 0; 3052 if (EXT4_SB(sb)->s_journal) { 3053 - if (wait) 3054 - ret = ext4_force_commit(sb); 3055 - else 3056 - jbd2_journal_start_commit(EXT4_SB(sb)->s_journal, NULL); 3057 } else { 3058 ext4_commit_super(sb, EXT4_SB(sb)->s_es, wait); 3059 }
··· 3046 static int ext4_sync_fs(struct super_block *sb, int wait) 3047 { 3048 int ret = 0; 3049 + tid_t target; 3050 3051 trace_mark(ext4_sync_fs, "dev %s wait %d", sb->s_id, wait); 3052 sb->s_dirt = 0; 3053 if (EXT4_SB(sb)->s_journal) { 3054 + if (jbd2_journal_start_commit(EXT4_SB(sb)->s_journal, 3055 + &target)) { 3056 + if (wait) 3057 + jbd2_log_wait_commit(EXT4_SB(sb)->s_journal, 3058 + target); 3059 + } 3060 } else { 3061 ext4_commit_super(sb, EXT4_SB(sb)->s_es, wait); 3062 }
+11 -6
fs/jbd2/journal.c
··· 450 } 451 452 /* 453 - * Called under j_state_lock. Returns true if a transaction was started. 454 */ 455 int __jbd2_log_start_commit(journal_t *journal, tid_t target) 456 { ··· 518 519 /* 520 * Start a commit of the current running transaction (if any). Returns true 521 - * if a transaction was started, and fills its tid in at *ptid 522 */ 523 int jbd2_journal_start_commit(journal_t *journal, tid_t *ptid) 524 { ··· 529 if (journal->j_running_transaction) { 530 tid_t tid = journal->j_running_transaction->t_tid; 531 532 - ret = __jbd2_log_start_commit(journal, tid); 533 - if (ret && ptid) 534 *ptid = tid; 535 - } else if (journal->j_committing_transaction && ptid) { 536 /* 537 * If ext3_write_super() recently started a commit, then we 538 * have to wait for completion of that transaction 539 */ 540 - *ptid = journal->j_committing_transaction->t_tid; 541 ret = 1; 542 } 543 spin_unlock(&journal->j_state_lock);
··· 450 }
451
452 /*
453 + * Called under j_state_lock. Returns true if a transaction commit was started.
454 */
455 int __jbd2_log_start_commit(journal_t *journal, tid_t target)
456 {
··· 518
519 /*
520 * Start a commit of the current running transaction (if any). Returns true
521 + * if a transaction is going to be committed (or is currently already
522 + * committing), and fills its tid in at *ptid
523 */
524 int jbd2_journal_start_commit(journal_t *journal, tid_t *ptid)
525 {
··· 528 if (journal->j_running_transaction) {
529 tid_t tid = journal->j_running_transaction->t_tid;
530
531 + __jbd2_log_start_commit(journal, tid);
532 + /* There's a running transaction and we've just made sure
533 + * its commit has been scheduled. */
534 + if (ptid)
535 *ptid = tid;
536 + ret = 1;
537 + } else if (journal->j_committing_transaction) {
538 /*
539 * If ext3_write_super() recently started a commit, then we
540 * have to wait for completion of that transaction
541 */
542 + if (ptid)
543 + *ptid = journal->j_committing_transaction->t_tid;
544 ret = 1;
545 }
546 spin_unlock(&journal->j_state_lock);
+31 -11
fs/jbd2/transaction.c
··· 2129 } 2130 2131 /* 2132 - * This function must be called when inode is journaled in ordered mode 2133 - * before truncation happens. It starts writeout of truncated part in 2134 - * case it is in the committing transaction so that we stand to ordered 2135 - * mode consistency guarantees. 2136 */ 2137 - int jbd2_journal_begin_ordered_truncate(struct jbd2_inode *inode, 2138 loff_t new_size) 2139 { 2140 - journal_t *journal; 2141 - transaction_t *commit_trans; 2142 int ret = 0; 2143 2144 - if (!inode->i_transaction && !inode->i_next_transaction) 2145 goto out; 2146 - journal = inode->i_transaction->t_journal; 2147 spin_lock(&journal->j_state_lock); 2148 commit_trans = journal->j_committing_transaction; 2149 spin_unlock(&journal->j_state_lock); 2150 - if (inode->i_transaction == commit_trans) { 2151 - ret = filemap_fdatawrite_range(inode->i_vfs_inode->i_mapping, 2152 new_size, LLONG_MAX); 2153 if (ret) 2154 jbd2_journal_abort(journal, ret);
··· 2129 }
2130
2131 /*
2132 + * File truncate and transaction commit interact with each other in a
2133 + * non-trivial way. If a transaction writing data block A is
2134 + * committing, we cannot discard the data by truncate until we have
2135 + * written them. Otherwise, if we crashed after the transaction with
2136 + * the write has committed but before the transaction with the truncate
2137 + * has committed, we could see stale data in block A. This function is a
2138 + * helper to solve this problem. It starts writeout of the truncated
2139 + * part in case it is in the committing transaction.
2140 + *
2141 + * Filesystem code must call this function when the inode is journaled in
2142 + * ordered mode before truncation happens and after the inode has been
2143 + * placed on the orphan list with the new inode size. The second condition
2144 + * avoids the race that someone writes new data and we start
2145 + * committing the transaction after this function has been called but
2146 + * before a transaction for the truncate is started (and furthermore it
2147 + * allows us to optimize the case where the addition to the orphan list
2148 + * happens in the same transaction as the write --- we don't have to write
2149 + * any data in such a case).
2150 + */
2151 + int jbd2_journal_begin_ordered_truncate(journal_t *journal,
2152 + struct jbd2_inode *jinode,
2153 loff_t new_size)
2154 {
2155 + transaction_t *inode_trans, *commit_trans;
2156 int ret = 0;
2157
2158 + /* This is a quick check to avoid locking if not necessary */
2159 + if (!jinode->i_transaction)
2160 goto out;
2161 + /* The locks are here just to force reading of recent values; it is
2162 + * enough that the transaction was not committing before we started
2163 + * a transaction adding the inode to the orphan list */
2164 spin_lock(&journal->j_state_lock);
2165 commit_trans = journal->j_committing_transaction;
2166 spin_unlock(&journal->j_state_lock);
2167 + spin_lock(&journal->j_list_lock);
2168 + inode_trans = jinode->i_transaction;
2169 + spin_unlock(&journal->j_list_lock);
2170 + if (inode_trans == commit_trans) {
2171 + ret = filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
2172 new_size, LLONG_MAX);
2173 if (ret)
2174 jbd2_journal_abort(journal, ret);
+4 -2
fs/namespace.c
··· 614 */ 615 for_each_possible_cpu(cpu) { 616 struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu); 617 - if (cpu_writer->mnt != mnt) 618 - continue; 619 spin_lock(&cpu_writer->lock); 620 atomic_add(cpu_writer->count, &mnt->__mnt_writers); 621 cpu_writer->count = 0; 622 /*
··· 614 */ 615 for_each_possible_cpu(cpu) { 616 struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu); 617 spin_lock(&cpu_writer->lock); 618 + if (cpu_writer->mnt != mnt) { 619 + spin_unlock(&cpu_writer->lock); 620 + continue; 621 + } 622 atomic_add(cpu_writer->count, &mnt->__mnt_writers); 623 cpu_writer->count = 0; 624 /*
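
The namespace.c hunk is a check-under-lock fix: the old loop compared cpu_writer->mnt before taking cpu_writer->lock, so another CPU could change the field between the test and the lock. The corrected shape — take the lock first, then decide whether the slot is relevant — in a small pthread illustration (all names are invented for the example):

#include <pthread.h>
#include <stddef.h>

struct writer_slot {
	pthread_mutex_t lock;
	void *mnt;	/* which mount this slot is counting for */
	int count;
};

/* Fold one per-CPU slot into 'total', deciding whether the slot
 * belongs to 'mnt' only while holding its lock, as the fix does. */
static void fold_slot(struct writer_slot *w, void *mnt, int *total)
{
	pthread_mutex_lock(&w->lock);
	if (w->mnt != mnt) {	/* checked under the lock */
		pthread_mutex_unlock(&w->lock);
		return;
	}
	*total += w->count;
	w->count = 0;
	pthread_mutex_unlock(&w->lock);
}

int main(void)
{
	struct writer_slot w = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
	int mnt;	/* stand-in mount object */
	int total = 0;

	w.mnt = &mnt;
	w.count = 3;
	fold_slot(&w, &mnt, &total);
	return total == 3 ? 0 : 1;
}
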
+1 -1
fs/notify/inotify/inotify.c
··· 156 int ret; 157 158 do { 159 - if (unlikely(!idr_pre_get(&ih->idr, GFP_KERNEL))) 160 return -ENOSPC; 161 ret = idr_get_new_above(&ih->idr, watch, ih->last_wd+1, &watch->wd); 162 } while (ret == -EAGAIN);
··· 156 int ret; 157 158 do { 159 + if (unlikely(!idr_pre_get(&ih->idr, GFP_NOFS))) 160 return -ENOSPC; 161 ret = idr_get_new_above(&ih->idr, watch, ih->last_wd+1, &watch->wd); 162 } while (ret == -EAGAIN);
+4 -2
fs/ocfs2/journal.h
··· 513 static inline int ocfs2_begin_ordered_truncate(struct inode *inode, 514 loff_t new_size) 515 { 516 - return jbd2_journal_begin_ordered_truncate(&OCFS2_I(inode)->ip_jinode, 517 - new_size); 518 } 519 520 #endif /* OCFS2_JOURNAL_H */
··· 513 static inline int ocfs2_begin_ordered_truncate(struct inode *inode, 514 loff_t new_size) 515 { 516 + return jbd2_journal_begin_ordered_truncate( 517 + OCFS2_SB(inode->i_sb)->journal->j_journal, 518 + &OCFS2_I(inode)->ip_jinode, 519 + new_size); 520 } 521 522 #endif /* OCFS2_JOURNAL_H */
+32 -4
fs/seq_file.c
··· 48 */ 49 file->f_version = 0; 50 51 - /* SEQ files support lseek, but not pread/pwrite */ 52 - file->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE); 53 return 0; 54 } 55 EXPORT_SYMBOL(seq_open); ··· 139 int err = 0; 140 141 mutex_lock(&m->lock); 142 /* 143 * seq_file->op->..m_start/m_stop/m_next may do special actions 144 * or optimisations based on the file->f_version, so we want to ··· 254 Done: 255 if (!copied) 256 copied = err; 257 - else 258 *ppos += copied; 259 file->f_version = m->version; 260 mutex_unlock(&m->lock); 261 return copied; ··· 292 if (offset < 0) 293 break; 294 retval = offset; 295 - if (offset != file->f_pos) { 296 while ((retval=traverse(m, offset)) == -EAGAIN) 297 ; 298 if (retval) { 299 /* with extreme prejudice... */ 300 file->f_pos = 0; 301 m->version = 0; 302 m->index = 0; 303 m->count = 0; 304 } else { 305 retval = file->f_pos = offset; 306 } 307 }
··· 48 */ 49 file->f_version = 0; 50 51 + /* 52 + * seq_files support lseek() and pread(). They do not implement 53 + * write() at all, but we clear FMODE_PWRITE here for historical 54 + * reasons. 55 + * 56 + * If a client of seq_files a) implements file.write() and b) wishes to 57 + * support pwrite() then that client will need to implement its own 58 + * file.open() which calls seq_open() and then sets FMODE_PWRITE. 59 + */ 60 + file->f_mode &= ~FMODE_PWRITE; 61 return 0; 62 } 63 EXPORT_SYMBOL(seq_open); ··· 131 int err = 0; 132 133 mutex_lock(&m->lock); 134 + 135 + /* Don't assume *ppos is where we left it */ 136 + if (unlikely(*ppos != m->read_pos)) { 137 + m->read_pos = *ppos; 138 + while ((err = traverse(m, *ppos)) == -EAGAIN) 139 + ; 140 + if (err) { 141 + /* With prejudice... */ 142 + m->read_pos = 0; 143 + m->version = 0; 144 + m->index = 0; 145 + m->count = 0; 146 + goto Done; 147 + } 148 + } 149 + 150 /* 151 * seq_file->op->..m_start/m_stop/m_next may do special actions 152 * or optimisations based on the file->f_version, so we want to ··· 230 Done: 231 if (!copied) 232 copied = err; 233 + else { 234 *ppos += copied; 235 + m->read_pos += copied; 236 + } 237 file->f_version = m->version; 238 mutex_unlock(&m->lock); 239 return copied; ··· 266 if (offset < 0) 267 break; 268 retval = offset; 269 + if (offset != m->read_pos) { 270 while ((retval=traverse(m, offset)) == -EAGAIN) 271 ; 272 if (retval) { 273 /* with extreme prejudice... */ 274 file->f_pos = 0; 275 + m->read_pos = 0; 276 m->version = 0; 277 m->index = 0; 278 m->count = 0; 279 } else { 280 + m->read_pos = offset; 281 retval = file->f_pos = offset; 282 } 283 }
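
With m->read_pos recorded, a read no longer has to begin where the previous one ended, which is what makes it safe to stop clearing FMODE_PREAD. A quick userspace check against a seq_file-backed proc file (assuming /proc/self/status, which is seq_file-based on kernels of this vintage); before this change the pread() here failed with ESPIPE:

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	ssize_t n;
	int fd = open("/proc/self/status", O_RDONLY);

	if (fd < 0)
		return 1;

	/* Read up to 63 bytes starting at offset 10, without moving f_pos. */
	n = pread(fd, buf, sizeof(buf) - 1, 10);
	if (n < 0) {
		perror("pread");	/* ESPIPE on kernels without read_pos */
		close(fd);
		return 1;
	}
	buf[n] = '\0';
	printf("%s\n", buf);
	close(fd);
	return 0;
}
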
+16 -1
fs/super.c
··· 82 * lock ordering than usbfs: 83 */ 84 lockdep_set_class(&s->s_lock, &type->s_lock_key); 85 - down_write(&s->s_umount); 86 s->s_count = S_BIAS; 87 atomic_set(&s->s_active, 1); 88 mutex_init(&s->s_vfs_rename_mutex);
··· 82 * lock ordering than usbfs:
83 */
84 lockdep_set_class(&s->s_lock, &type->s_lock_key);
85 + /*
86 + * sget() can have s_umount recursion.
87 + *
88 + * When it cannot find a suitable sb, it allocates a new
89 + * one (this one), and tries again to find a suitable old
90 + * one.
91 + *
92 + * If that succeeds, it will acquire the s_umount
93 + * lock of the old one. Since these are clearly distinct
94 + * locks, and this object isn't exposed yet, there's no
95 + * risk of deadlocks.
96 + *
97 + * Annotate this by putting this lock in a different
98 + * subclass.
99 + */
100 + down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING);
101 s->s_count = S_BIAS;
102 atomic_set(&s->s_active, 1);
103 mutex_init(&s->s_vfs_rename_mutex);
+6 -6
fs/timerfd.c
··· 186 BUILD_BUG_ON(TFD_CLOEXEC != O_CLOEXEC); 187 BUILD_BUG_ON(TFD_NONBLOCK != O_NONBLOCK); 188 189 - if (flags & ~(TFD_CLOEXEC | TFD_NONBLOCK)) 190 - return -EINVAL; 191 - if (clockid != CLOCK_MONOTONIC && 192 - clockid != CLOCK_REALTIME) 193 return -EINVAL; 194 195 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); ··· 200 hrtimer_init(&ctx->tmr, clockid, HRTIMER_MODE_ABS); 201 202 ufd = anon_inode_getfd("[timerfd]", &timerfd_fops, ctx, 203 - flags & (O_CLOEXEC | O_NONBLOCK)); 204 if (ufd < 0) 205 kfree(ctx); 206 ··· 218 if (copy_from_user(&ktmr, utmr, sizeof(ktmr))) 219 return -EFAULT; 220 221 - if (!timespec_valid(&ktmr.it_value) || 222 !timespec_valid(&ktmr.it_interval)) 223 return -EINVAL; 224
··· 186 BUILD_BUG_ON(TFD_CLOEXEC != O_CLOEXEC); 187 BUILD_BUG_ON(TFD_NONBLOCK != O_NONBLOCK); 188 189 + if ((flags & ~TFD_CREATE_FLAGS) || 190 + (clockid != CLOCK_MONOTONIC && 191 + clockid != CLOCK_REALTIME)) 192 return -EINVAL; 193 194 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); ··· 201 hrtimer_init(&ctx->tmr, clockid, HRTIMER_MODE_ABS); 202 203 ufd = anon_inode_getfd("[timerfd]", &timerfd_fops, ctx, 204 + flags & TFD_SHARED_FCNTL_FLAGS); 205 if (ufd < 0) 206 kfree(ctx); 207 ··· 219 if (copy_from_user(&ktmr, utmr, sizeof(ktmr))) 220 return -EFAULT; 221 222 + if ((flags & ~TFD_SETTIME_FLAGS) || 223 + !timespec_valid(&ktmr.it_value) || 224 !timespec_valid(&ktmr.it_interval)) 225 return -EINVAL; 226
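
After this tightening, undefined bits in either call's flags argument fail with EINVAL instead of being silently ignored. A short userspace exercise of both paths, using the glibc wrappers from sys/timerfd.h:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
	struct itimerspec its = { .it_value = { .tv_sec = 1 } };
	uint64_t expirations;
	int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);

	if (fd < 0)
		return 1;

	/* Garbage flag bits are now rejected up front. */
	if (timerfd_settime(fd, ~TFD_TIMER_ABSTIME, &its, NULL) < 0)
		printf("bogus flags: %s\n", strerror(errno)); /* EINVAL */

	/* Valid one-shot relative timer, one second from now. */
	if (timerfd_settime(fd, 0, &its, NULL) < 0)
		return 1;
	if (read(fd, &expirations, sizeof(expirations)) !=
	    (ssize_t)sizeof(expirations))
		return 1;	/* blocks for ~1s, then reports one expiry */
	printf("expired %llu time(s)\n", (unsigned long long)expirations);
	close(fd);
	return 0;
}
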
+76 -3
fs/xfs/linux-2.6/xfs_buf.c
··· 166 } 167 168 /* 169 * Internal xfs_buf_t object manipulation 170 */ 171 ··· 333 uint i; 334 335 if ((bp->b_flags & XBF_MAPPED) && (bp->b_page_count > 1)) 336 - vm_unmap_ram(bp->b_addr - bp->b_offset, bp->b_page_count); 337 338 for (i = 0; i < bp->b_page_count; i++) { 339 struct page *page = bp->b_pages[i]; ··· 455 bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset; 456 bp->b_flags |= XBF_MAPPED; 457 } else if (flags & XBF_MAPPED) { 458 - bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count, 459 - -1, PAGE_KERNEL); 460 if (unlikely(bp->b_addr == NULL)) 461 return -ENOMEM; 462 bp->b_addr += bp->b_offset; ··· 1743 count++; 1744 } 1745 1746 if (count) 1747 blk_run_address_space(target->bt_mapping); 1748
··· 166 } 167 168 /* 169 + * Mapping of multi-page buffers into contiguous virtual space 170 + */ 171 + 172 + typedef struct a_list { 173 + void *vm_addr; 174 + struct a_list *next; 175 + } a_list_t; 176 + 177 + static a_list_t *as_free_head; 178 + static int as_list_len; 179 + static DEFINE_SPINLOCK(as_lock); 180 + 181 + /* 182 + * Try to batch vunmaps because they are costly. 183 + */ 184 + STATIC void 185 + free_address( 186 + void *addr) 187 + { 188 + a_list_t *aentry; 189 + 190 + #ifdef CONFIG_XEN 191 + /* 192 + * Xen needs to be able to make sure it can get an exclusive 193 + * RO mapping of pages it wants to turn into a pagetable. If 194 + * a newly allocated page is also still being vmap()ed by xfs, 195 + * it will cause pagetable construction to fail. This is a 196 + * quick workaround to always eagerly unmap pages so that Xen 197 + * is happy. 198 + */ 199 + vunmap(addr); 200 + return; 201 + #endif 202 + 203 + aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT); 204 + if (likely(aentry)) { 205 + spin_lock(&as_lock); 206 + aentry->next = as_free_head; 207 + aentry->vm_addr = addr; 208 + as_free_head = aentry; 209 + as_list_len++; 210 + spin_unlock(&as_lock); 211 + } else { 212 + vunmap(addr); 213 + } 214 + } 215 + 216 + STATIC void 217 + purge_addresses(void) 218 + { 219 + a_list_t *aentry, *old; 220 + 221 + if (as_free_head == NULL) 222 + return; 223 + 224 + spin_lock(&as_lock); 225 + aentry = as_free_head; 226 + as_free_head = NULL; 227 + as_list_len = 0; 228 + spin_unlock(&as_lock); 229 + 230 + while ((old = aentry) != NULL) { 231 + vunmap(aentry->vm_addr); 232 + aentry = aentry->next; 233 + kfree(old); 234 + } 235 + } 236 + 237 + /* 238 * Internal xfs_buf_t object manipulation 239 */ 240 ··· 264 uint i; 265 266 if ((bp->b_flags & XBF_MAPPED) && (bp->b_page_count > 1)) 267 + free_address(bp->b_addr - bp->b_offset); 268 269 for (i = 0; i < bp->b_page_count; i++) { 270 struct page *page = bp->b_pages[i]; ··· 386 bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset; 387 bp->b_flags |= XBF_MAPPED; 388 } else if (flags & XBF_MAPPED) { 389 + if (as_list_len > 64) 390 + purge_addresses(); 391 + bp->b_addr = vmap(bp->b_pages, bp->b_page_count, 392 + VM_MAP, PAGE_KERNEL); 393 if (unlikely(bp->b_addr == NULL)) 394 return -ENOMEM; 395 bp->b_addr += bp->b_offset; ··· 1672 count++; 1673 } 1674 1675 + if (as_list_len > 0) 1676 + purge_addresses(); 1677 if (count) 1678 blk_run_address_space(target->bt_mapping); 1679
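
free_address()/purge_addresses() above reintroduce a deferred-free batch: each address is queued under a spinlock and the expensive vunmap() is paid once per batch, outside the lock. The general shape of the pattern in a userspace sketch, with a pthread mutex standing in for the spinlock and free() for vunmap():

#include <pthread.h>
#include <stdlib.h>

struct deferred {
	void *addr;
	struct deferred *next;
};

static struct deferred *free_head;
static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;

/* Queue one address; fall back to freeing inline if we cannot queue,
 * just as the kernel falls back to an immediate vunmap(). */
static void free_address(void *addr)
{
	struct deferred *d = malloc(sizeof(*d));

	if (!d) {
		free(addr);
		return;
	}
	pthread_mutex_lock(&free_lock);
	d->addr = addr;
	d->next = free_head;
	free_head = d;
	pthread_mutex_unlock(&free_lock);
}

/* Detach the whole batch under the lock; do the slow work outside it. */
static void purge_addresses(void)
{
	struct deferred *d, *old;

	pthread_mutex_lock(&free_lock);
	d = free_head;
	free_head = NULL;
	pthread_mutex_unlock(&free_lock);

	while ((old = d) != NULL) {
		d = d->next;
		free(old->addr);	/* the batched "vunmap" */
		free(old);
	}
}

int main(void)
{
	free_address(malloc(32));
	free_address(malloc(32));
	purge_addresses();	/* one pass releases both */
	return 0;
}
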
+1 -1
include/asm-frv/pgtable.h
··· 478 #define __swp_type(x) (((x).val >> 2) & 0x1f) 479 #define __swp_offset(x) ((x).val >> 8) 480 #define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 2) | ((offset) << 8) }) 481 - #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte }) 482 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 483 484 static inline int pte_file(pte_t pte)
··· 478 #define __swp_type(x) (((x).val >> 2) & 0x1f) 479 #define __swp_offset(x) ((x).val >> 8) 480 #define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 2) | ((offset) << 8) }) 481 + #define __pte_to_swp_entry(_pte) ((swp_entry_t) { (_pte).pte }) 482 #define __swp_entry_to_pte(x) ((pte_t) { (x).val }) 483 484 static inline int pte_file(pte_t pte)
+2
include/drm/drmP.h
··· 1321 struct drm_gem_object *drm_gem_object_alloc(struct drm_device *dev, 1322 size_t size); 1323 void drm_gem_object_handle_free(struct kref *kref); 1324 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma); 1325 1326 static inline void
··· 1321 struct drm_gem_object *drm_gem_object_alloc(struct drm_device *dev, 1322 size_t size); 1323 void drm_gem_object_handle_free(struct kref *kref); 1324 + void drm_gem_vm_open(struct vm_area_struct *vma); 1325 + void drm_gem_vm_close(struct vm_area_struct *vma); 1326 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma); 1327 1328 static inline void
+1 -1
include/drm/drm_crtc.h
··· 609 extern char *drm_get_dvi_i_select_name(int val); 610 extern char *drm_get_tv_subconnector_name(int val); 611 extern char *drm_get_tv_select_name(int val); 612 - extern void drm_fb_release(struct file *filp); 613 extern int drm_mode_group_init_legacy_group(struct drm_device *dev, struct drm_mode_group *group); 614 extern struct edid *drm_get_edid(struct drm_connector *connector, 615 struct i2c_adapter *adapter);
··· 609 extern char *drm_get_dvi_i_select_name(int val); 610 extern char *drm_get_tv_subconnector_name(int val); 611 extern char *drm_get_tv_select_name(int val); 612 + extern void drm_fb_release(struct drm_file *file_priv); 613 extern int drm_mode_group_init_legacy_group(struct drm_device *dev, struct drm_mode_group *group); 614 extern struct edid *drm_get_edid(struct drm_connector *connector, 615 struct i2c_adapter *adapter);
+5 -5
include/drm/drm_crtc_helper.h
··· 54 struct drm_display_mode *mode, 55 struct drm_display_mode *adjusted_mode); 56 /* Actually set the mode */ 57 - void (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode, 58 - struct drm_display_mode *adjusted_mode, int x, int y, 59 - struct drm_framebuffer *old_fb); 60 61 /* Move the crtc on the current fb to the given position *optional* */ 62 - void (*mode_set_base)(struct drm_crtc *crtc, int x, int y, 63 - struct drm_framebuffer *old_fb); 64 }; 65 66 struct drm_encoder_helper_funcs {
··· 54 struct drm_display_mode *mode, 55 struct drm_display_mode *adjusted_mode); 56 /* Actually set the mode */ 57 + int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode, 58 + struct drm_display_mode *adjusted_mode, int x, int y, 59 + struct drm_framebuffer *old_fb); 60 61 /* Move the crtc on the current fb to the given position *optional* */ 62 + int (*mode_set_base)(struct drm_crtc *crtc, int x, int y, 63 + struct drm_framebuffer *old_fb); 64 }; 65 66 struct drm_encoder_helper_funcs {
-2
include/linux/bio.h
··· 171 #define BIO_RW_FAILFAST_TRANSPORT 8 172 #define BIO_RW_FAILFAST_DRIVER 9 173 174 - #define BIO_RW_SYNC (BIO_RW_SYNCIO | BIO_RW_UNPLUG) 175 - 176 #define bio_rw_flagged(bio, flag) ((bio)->bi_rw & (1 << (flag))) 177 178 /*
··· 171 #define BIO_RW_FAILFAST_TRANSPORT 8 172 #define BIO_RW_FAILFAST_DRIVER 9 173 174 #define bio_rw_flagged(bio, flag) ((bio)->bi_rw & (1 << (flag))) 175 176 /*
+1
include/linux/blktrace_api.h
··· 15 BLK_TC_WRITE = 1 << 1, /* writes */ 16 BLK_TC_BARRIER = 1 << 2, /* barrier */ 17 BLK_TC_SYNC = 1 << 3, /* sync IO */ 18 BLK_TC_QUEUE = 1 << 4, /* queueing/merging */ 19 BLK_TC_REQUEUE = 1 << 5, /* requeueing */ 20 BLK_TC_ISSUE = 1 << 6, /* issue */
··· 15 BLK_TC_WRITE = 1 << 1, /* writes */ 16 BLK_TC_BARRIER = 1 << 2, /* barrier */ 17 BLK_TC_SYNC = 1 << 3, /* sync IO */ 18 + BLK_TC_SYNCIO = BLK_TC_SYNC, 19 BLK_TC_QUEUE = 1 << 4, /* queueing/merging */ 20 BLK_TC_REQUEUE = 1 << 5, /* requeueing */ 21 BLK_TC_ISSUE = 1 << 6, /* issue */
+2
include/linux/device.h
··· 147 extern struct device_driver *driver_find(const char *name, 148 struct bus_type *bus); 149 extern int driver_probe_done(void); 150 151 /* sysfs interface for exporting driver attributes */ 152
··· 147 extern struct device_driver *driver_find(const char *name, 148 struct bus_type *bus); 149 extern int driver_probe_done(void); 150 + extern int wait_for_device_probe(void); 151 + 152 153 /* sysfs interface for exporting driver attributes */ 154
+2
include/linux/dmaengine.h
··· 121 * @local: per-cpu pointer to a struct dma_chan_percpu
122 * @client_count: how many clients are using this channel
123 * @table_count: number of appearances in the mem-to-mem allocation table
124 */
125 struct dma_chan {
126 struct dma_device *device;
··· 135 struct dma_chan_percpu *local;
136 int client_count;
137 int table_count;
138 };
139
140 /**
··· 121 * @local: per-cpu pointer to a struct dma_chan_percpu
122 * @client_count: how many clients are using this channel
123 * @table_count: number of appearances in the mem-to-mem allocation table
124 + * @private: private data for certain client-channel associations
125 */
126 struct dma_chan {
127 struct dma_device *device;
··· 134 struct dma_chan_percpu *local;
135 int client_count;
136 int table_count;
137 + void *private;
138 };
139
140 /**
+1 -1
include/linux/firmware-map.h
··· 1 /* 2 * include/linux/firmware-map.h: 3 * Copyright (C) 2008 SUSE LINUX Products GmbH 4 - * by Bernhard Walle <bwalle@suse.de> 5 * 6 * This program is free software; you can redistribute it and/or modify 7 * it under the terms of the GNU General Public License v2.0 as published by
··· 1 /* 2 * include/linux/firmware-map.h: 3 * Copyright (C) 2008 SUSE LINUX Products GmbH 4 + * by Bernhard Walle <bernhard.walle@gmx.de> 5 * 6 * This program is free software; you can redistribute it and/or modify 7 * it under the terms of the GNU General Public License v2.0 as published by
+15 -9
include/linux/fs.h
··· 54 #define MAY_ACCESS 16
55 #define MAY_OPEN 32
56
57 /* file is open for reading */
58 #define FMODE_READ ((__force fmode_t)1)
59 /* file is open for writing */
60 #define FMODE_WRITE ((__force fmode_t)2)
61 /* file is seekable */
62 #define FMODE_LSEEK ((__force fmode_t)4)
63 - /* file can be accessed using pread/pwrite */
64 #define FMODE_PREAD ((__force fmode_t)8)
65 - #define FMODE_PWRITE FMODE_PREAD /* These go hand in hand */
66 /* File is opened for execution with sys_execve / sys_uselib */
67 - #define FMODE_EXEC ((__force fmode_t)16)
68 /* File is opened with O_NDELAY (only set for block devices) */
69 - #define FMODE_NDELAY ((__force fmode_t)32)
70 /* File is opened with O_EXCL (only set for block devices) */
71 - #define FMODE_EXCL ((__force fmode_t)64)
72 /* File is opened using open(.., 3, ..) and is writeable only for ioctls
73 (special hack for floppy.c) */
74 - #define FMODE_WRITE_IOCTL ((__force fmode_t)128)
75
76 /*
77 * Don't update ctime and mtime.
··· 93 #define WRITE 1
94 #define READA 2 /* read-ahead - don't block if no resources */
95 #define SWRITE 3 /* for ll_rw_block() - wait for buffer lock */
96 - #define READ_SYNC (READ | (1 << BIO_RW_SYNC))
97 #define READ_META (READ | (1 << BIO_RW_META))
98 - #define WRITE_SYNC (WRITE | (1 << BIO_RW_SYNC))
99 - #define SWRITE_SYNC (SWRITE | (1 << BIO_RW_SYNC))
100 #define WRITE_BARRIER (WRITE | (1 << BIO_RW_BARRIER))
101 #define DISCARD_NOBARRIER (1 << BIO_RW_DISCARD)
102 #define DISCARD_BARRIER ((1 << BIO_RW_DISCARD) | (1 << BIO_RW_BARRIER))
··· 54 #define MAY_ACCESS 16
55 #define MAY_OPEN 32
56
57 + /*
58 + * flags in file.f_mode. Note that FMODE_READ and FMODE_WRITE must correspond
59 + * to O_WRONLY and O_RDWR via the strange trick in __dentry_open()
60 + */
61 +
62 /* file is open for reading */
63 #define FMODE_READ ((__force fmode_t)1)
64 /* file is open for writing */
65 #define FMODE_WRITE ((__force fmode_t)2)
66 /* file is seekable */
67 #define FMODE_LSEEK ((__force fmode_t)4)
68 + /* file can be accessed using pread */
69 #define FMODE_PREAD ((__force fmode_t)8)
70 + /* file can be accessed using pwrite */
71 + #define FMODE_PWRITE ((__force fmode_t)16)
72 /* File is opened for execution with sys_execve / sys_uselib */
73 + #define FMODE_EXEC ((__force fmode_t)32)
74 /* File is opened with O_NDELAY (only set for block devices) */
75 + #define FMODE_NDELAY ((__force fmode_t)64)
76 /* File is opened with O_EXCL (only set for block devices) */
77 + #define FMODE_EXCL ((__force fmode_t)128)
78 /* File is opened using open(.., 3, ..) and is writeable only for ioctls
79 (special hack for floppy.c) */
80 + #define FMODE_WRITE_IOCTL ((__force fmode_t)256)
81
82 /*
83 * Don't update ctime and mtime.
··· 87 #define WRITE 1
88 #define READA 2 /* read-ahead - don't block if no resources */
89 #define SWRITE 3 /* for ll_rw_block() - wait for buffer lock */
90 + #define READ_SYNC (READ | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))
91 #define READ_META (READ | (1 << BIO_RW_META))
92 + #define WRITE_SYNC (WRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))
93 + #define SWRITE_SYNC (SWRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))
94 #define WRITE_BARRIER (WRITE | (1 << BIO_RW_BARRIER))
95 #define DISCARD_NOBARRIER (1 << BIO_RW_DISCARD)
96 #define DISCARD_BARRIER ((1 << BIO_RW_DISCARD) | (1 << BIO_RW_BARRIER))
+2 -1
include/linux/jbd2.h
··· 1150 extern int jbd2_journal_bmap(journal_t *, unsigned long, unsigned long long *); 1151 extern int jbd2_journal_force_commit(journal_t *); 1152 extern int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *inode); 1153 - extern int jbd2_journal_begin_ordered_truncate(struct jbd2_inode *inode, loff_t new_size); 1154 extern void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode); 1155 extern void jbd2_journal_release_jbd_inode(journal_t *journal, struct jbd2_inode *jinode); 1156
··· 1150 extern int jbd2_journal_bmap(journal_t *, unsigned long, unsigned long long *); 1151 extern int jbd2_journal_force_commit(journal_t *); 1152 extern int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *inode); 1153 + extern int jbd2_journal_begin_ordered_truncate(journal_t *journal, 1154 + struct jbd2_inode *inode, loff_t new_size); 1155 extern void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode); 1156 extern void jbd2_journal_release_jbd_inode(journal_t *journal, struct jbd2_inode *jinode); 1157
+5 -5
include/linux/kvm.h
··· 58 __u32 pad; 59 union { 60 char dummy[512]; /* reserving space */ 61 - #ifdef CONFIG_X86 62 struct kvm_pic_state pic; 63 #endif 64 - #if defined(CONFIG_X86) || defined(CONFIG_IA64) 65 struct kvm_ioapic_state ioapic; 66 #endif 67 } chip; ··· 384 #define KVM_CAP_MP_STATE 14 385 #define KVM_CAP_COALESCED_MMIO 15 386 #define KVM_CAP_SYNC_MMU 16 /* Changes to host mmap are reflected in guest */ 387 - #if defined(CONFIG_X86)||defined(CONFIG_IA64) 388 #define KVM_CAP_DEVICE_ASSIGNMENT 17 389 #endif 390 #define KVM_CAP_IOMMU 18 391 - #if defined(CONFIG_X86) 392 #define KVM_CAP_DEVICE_MSI 20 393 #endif 394 /* Bug in KVM_SET_USER_MEMORY_REGION fixed: */ 395 #define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21 396 - #if defined(CONFIG_X86) 397 #define KVM_CAP_USER_NMI 22 398 #endif 399
··· 58 __u32 pad; 59 union { 60 char dummy[512]; /* reserving space */ 61 + #ifdef __KVM_HAVE_PIT 62 struct kvm_pic_state pic; 63 #endif 64 + #ifdef __KVM_HAVE_IOAPIC 65 struct kvm_ioapic_state ioapic; 66 #endif 67 } chip; ··· 384 #define KVM_CAP_MP_STATE 14 385 #define KVM_CAP_COALESCED_MMIO 15 386 #define KVM_CAP_SYNC_MMU 16 /* Changes to host mmap are reflected in guest */ 387 + #ifdef __KVM_HAVE_DEVICE_ASSIGNMENT 388 #define KVM_CAP_DEVICE_ASSIGNMENT 17 389 #endif 390 #define KVM_CAP_IOMMU 18 391 + #ifdef __KVM_HAVE_MSI 392 #define KVM_CAP_DEVICE_MSI 20 393 #endif 394 /* Bug in KVM_SET_USER_MEMORY_REGION fixed: */ 395 #define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21 396 + #ifdef __KVM_HAVE_USER_NMI 397 #define KVM_CAP_USER_NMI 22 398 #endif 399
+1
include/linux/kvm_host.h
··· 285 struct kvm *kvm_arch_create_vm(void); 286 void kvm_arch_destroy_vm(struct kvm *kvm); 287 void kvm_free_all_assigned_devices(struct kvm *kvm); 288 289 int kvm_cpu_get_interrupt(struct kvm_vcpu *v); 290 int kvm_cpu_has_interrupt(struct kvm_vcpu *v);
··· 285 struct kvm *kvm_arch_create_vm(void); 286 void kvm_arch_destroy_vm(struct kvm *kvm); 287 void kvm_free_all_assigned_devices(struct kvm *kvm); 288 + void kvm_arch_sync_events(struct kvm *kvm); 289 290 int kvm_cpu_get_interrupt(struct kvm_vcpu *v); 291 int kvm_cpu_has_interrupt(struct kvm_vcpu *v);
+18 -3
include/linux/mm.h
··· 1041 typedef int (*work_fn_t)(unsigned long, unsigned long, void *); 1042 extern void work_with_active_regions(int nid, work_fn_t work_fn, void *data); 1043 extern void sparse_memory_present_with_active_regions(int nid); 1044 - #ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID 1045 - extern int early_pfn_to_nid(unsigned long pfn); 1046 - #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ 1047 #endif /* CONFIG_ARCH_POPULATES_NODE_MAP */ 1048 extern void set_dma_reserve(unsigned long new_dma_reserve); 1049 extern void memmap_init_zone(unsigned long, int, unsigned long, 1050 unsigned long, enum memmap_context); ··· 1172 1173 /* mm/page-writeback.c */ 1174 int write_one_page(struct page *page, int wait); 1175 1176 /* readahead.c */ 1177 #define VM_MAX_READAHEAD 128 /* kbytes */ ··· 1318 1319 extern void *alloc_locked_buffer(size_t size); 1320 extern void free_locked_buffer(void *buffer, size_t size); 1321 #endif /* __KERNEL__ */ 1322 #endif /* _LINUX_MM_H */
··· 1041 typedef int (*work_fn_t)(unsigned long, unsigned long, void *); 1042 extern void work_with_active_regions(int nid, work_fn_t work_fn, void *data); 1043 extern void sparse_memory_present_with_active_regions(int nid); 1044 #endif /* CONFIG_ARCH_POPULATES_NODE_MAP */ 1045 + 1046 + #if !defined(CONFIG_ARCH_POPULATES_NODE_MAP) && \ 1047 + !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) 1048 + static inline int __early_pfn_to_nid(unsigned long pfn) 1049 + { 1050 + return 0; 1051 + } 1052 + #else 1053 + /* please see mm/page_alloc.c */ 1054 + extern int __meminit early_pfn_to_nid(unsigned long pfn); 1055 + #ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID 1056 + /* there is a per-arch backend function. */ 1057 + extern int __meminit __early_pfn_to_nid(unsigned long pfn); 1058 + #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ 1059 + #endif 1060 + 1061 extern void set_dma_reserve(unsigned long new_dma_reserve); 1062 extern void memmap_init_zone(unsigned long, int, unsigned long, 1063 unsigned long, enum memmap_context); ··· 1159 1160 /* mm/page-writeback.c */ 1161 int write_one_page(struct page *page, int wait); 1162 + void task_dirty_inc(struct task_struct *tsk); 1163 1164 /* readahead.c */ 1165 #define VM_MAX_READAHEAD 128 /* kbytes */ ··· 1304 1305 extern void *alloc_locked_buffer(size_t size); 1306 extern void free_locked_buffer(void *buffer, size_t size); 1307 + extern void release_locked_buffer(void *buffer, size_t size); 1308 #endif /* __KERNEL__ */ 1309 #endif /* _LINUX_MM_H */
+1 -1
include/linux/mmzone.h
··· 1071 #endif /* CONFIG_SPARSEMEM */ 1072 1073 #ifdef CONFIG_NODES_SPAN_OTHER_NODES 1074 - #define early_pfn_in_nid(pfn, nid) (early_pfn_to_nid(pfn) == (nid)) 1075 #else 1076 #define early_pfn_in_nid(pfn, nid) (1) 1077 #endif
··· 1071 #endif /* CONFIG_SPARSEMEM */ 1072 1073 #ifdef CONFIG_NODES_SPAN_OTHER_NODES 1074 + bool early_pfn_in_nid(unsigned long pfn, int nid); 1075 #else 1076 #define early_pfn_in_nid(pfn, nid) (1) 1077 #endif
+5
include/linux/pci_ids.h
··· 1312 #define PCI_DEVICE_ID_VIA_VT3351 0x0351 1313 #define PCI_DEVICE_ID_VIA_VT3364 0x0364 1314 #define PCI_DEVICE_ID_VIA_8371_0 0x0391 1315 #define PCI_DEVICE_ID_VIA_8501_0 0x0501 1316 #define PCI_DEVICE_ID_VIA_82C561 0x0561 1317 #define PCI_DEVICE_ID_VIA_82C586_1 0x0571 ··· 1445 #define PCI_DEVICE_ID_DIGI_DF_M_E 0x0071 1446 #define PCI_DEVICE_ID_DIGI_DF_M_IOM2_A 0x0072 1447 #define PCI_DEVICE_ID_DIGI_DF_M_A 0x0073 1448 #define PCI_DEVICE_ID_NEO_2DB9 0x00C8 1449 #define PCI_DEVICE_ID_NEO_2DB9PRI 0x00C9 1450 #define PCI_DEVICE_ID_NEO_2RJ45 0x00CA ··· 2323 #define PCI_DEVICE_ID_INTEL_82378 0x0484 2324 #define PCI_DEVICE_ID_INTEL_I960 0x0960 2325 #define PCI_DEVICE_ID_INTEL_I960RM 0x0962 2326 #define PCI_DEVICE_ID_INTEL_82815_MC 0x1130 2327 #define PCI_DEVICE_ID_INTEL_82815_CGC 0x1132 2328 #define PCI_DEVICE_ID_INTEL_82092AA_0 0x1221
··· 1312 #define PCI_DEVICE_ID_VIA_VT3351 0x0351 1313 #define PCI_DEVICE_ID_VIA_VT3364 0x0364 1314 #define PCI_DEVICE_ID_VIA_8371_0 0x0391 1315 + #define PCI_DEVICE_ID_VIA_6415 0x0415 1316 #define PCI_DEVICE_ID_VIA_8501_0 0x0501 1317 #define PCI_DEVICE_ID_VIA_82C561 0x0561 1318 #define PCI_DEVICE_ID_VIA_82C586_1 0x0571 ··· 1444 #define PCI_DEVICE_ID_DIGI_DF_M_E 0x0071 1445 #define PCI_DEVICE_ID_DIGI_DF_M_IOM2_A 0x0072 1446 #define PCI_DEVICE_ID_DIGI_DF_M_A 0x0073 1447 + #define PCI_DEVICE_ID_DIGI_NEO_8 0x00B1 1448 #define PCI_DEVICE_ID_NEO_2DB9 0x00C8 1449 #define PCI_DEVICE_ID_NEO_2DB9PRI 0x00C9 1450 #define PCI_DEVICE_ID_NEO_2RJ45 0x00CA ··· 2321 #define PCI_DEVICE_ID_INTEL_82378 0x0484 2322 #define PCI_DEVICE_ID_INTEL_I960 0x0960 2323 #define PCI_DEVICE_ID_INTEL_I960RM 0x0962 2324 + #define PCI_DEVICE_ID_INTEL_8257X_SOL 0x1062 2325 + #define PCI_DEVICE_ID_INTEL_82573E_SOL 0x1085 2326 + #define PCI_DEVICE_ID_INTEL_82573L_SOL 0x108F 2327 #define PCI_DEVICE_ID_INTEL_82815_MC 0x1130 2328 #define PCI_DEVICE_ID_INTEL_82815_CGC 0x1132 2329 #define PCI_DEVICE_ID_INTEL_82092AA_0 0x1221
+1
include/linux/seq_file.h
··· 19 size_t from; 20 size_t count; 21 loff_t index; 22 u64 version; 23 struct mutex lock; 24 const struct seq_operations *op;
··· 19 size_t from; 20 size_t count; 21 loff_t index; 22 + loff_t read_pos; 23 u64 version; 24 struct mutex lock; 25 const struct seq_operations *op;
+1
include/linux/serial_core.h
··· 296 #define UPF_HARDPPS_CD ((__force upf_t) (1 << 11)) 297 #define UPF_LOW_LATENCY ((__force upf_t) (1 << 13)) 298 #define UPF_BUGGY_UART ((__force upf_t) (1 << 14)) 299 #define UPF_MAGIC_MULTIPLIER ((__force upf_t) (1 << 16)) 300 #define UPF_CONS_FLOW ((__force upf_t) (1 << 23)) 301 #define UPF_SHARE_IRQ ((__force upf_t) (1 << 24))
··· 296 #define UPF_HARDPPS_CD ((__force upf_t) (1 << 11)) 297 #define UPF_LOW_LATENCY ((__force upf_t) (1 << 13)) 298 #define UPF_BUGGY_UART ((__force upf_t) (1 << 14)) 299 + #define UPF_NO_TXEN_TEST ((__force upf_t) (1 << 15)) 300 #define UPF_MAGIC_MULTIPLIER ((__force upf_t) (1 << 16)) 301 #define UPF_CONS_FLOW ((__force upf_t) (1 << 23)) 302 #define UPF_SHARE_IRQ ((__force upf_t) (1 << 24))
+1
include/linux/slab.h
··· 127 void * __must_check __krealloc(const void *, size_t, gfp_t); 128 void * __must_check krealloc(const void *, size_t, gfp_t); 129 void kfree(const void *); 130 size_t ksize(const void *); 131 132 /*
··· 127 void * __must_check __krealloc(const void *, size_t, gfp_t); 128 void * __must_check krealloc(const void *, size_t, gfp_t); 129 void kfree(const void *); 130 + void kzfree(const void *); 131 size_t ksize(const void *); 132 133 /*
+7
include/linux/spi/spi_bitbang.h
··· 83 * int getmiso(struct spi_device *); 84 * void spidelay(unsigned); 85 * 86 * A non-inlined routine would call bitbang_txrx_*() routines. The 87 * main loop could easily compile down to a handful of instructions, 88 * especially if the delay is a NOP (to run at peak speed).
··· 83 * int getmiso(struct spi_device *); 84 * void spidelay(unsigned); 85 * 86 + * setsck()'s is_on parameter is a zero/nonzero boolean. 87 + * 88 + * setmosi()'s is_on parameter is a zero/nonzero boolean. 89 + * 90 + * getmiso() is required to return 0 or 1 only. Any other value is invalid 91 + * and will result in improper operation. 92 + * 93 * A non-inlined routine would call bitbang_txrx_*() routines. The 94 * main loop could easily compile down to a handful of instructions, 95 * especially if the delay is a NOP (to run at peak speed).
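
The getmiso() rule exists because the bitbang transfer loops shift the returned value straight into the word being assembled, so anything other than 0 or 1 corrupts the result. The usual fix is double negation; here is a runnable sketch with a plain variable standing in for the (hypothetical) GPIO input register:

#include <assert.h>
#include <stdint.h>

static uint32_t gpio_in;	/* stand-in for a port's input register */
#define MISO_BIT (1u << 5)	/* hypothetical pin position */

/* Wrong: returns 0 or 32, so "word = (word << 1) | bit" is corrupted. */
static int getmiso_buggy(void)
{
	return gpio_in & MISO_BIT;
}

/* Right: !! collapses any nonzero value to exactly 1. */
static int getmiso(void)
{
	return !!(gpio_in & MISO_BIT);
}

int main(void)
{
	gpio_in = MISO_BIT;
	assert(getmiso() == 1);
	assert(getmiso_buggy() == 32);	/* the failure mode the rule prevents */
	return 0;
}
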
+12 -4
include/linux/timerfd.h
··· 11 /* For O_CLOEXEC and O_NONBLOCK */ 12 #include <linux/fcntl.h> 13 14 - /* Flags for timerfd_settime. */ 15 #define TFD_TIMER_ABSTIME (1 << 0) 16 - 17 - /* Flags for timerfd_create. */ 18 #define TFD_CLOEXEC O_CLOEXEC 19 #define TFD_NONBLOCK O_NONBLOCK 20 21 22 #endif /* _LINUX_TIMERFD_H */ 23 -
··· 11 /* For O_CLOEXEC and O_NONBLOCK */
12 #include <linux/fcntl.h>
13
14 + /*
15 + * CAREFUL: Check include/asm-generic/fcntl.h when defining
16 + * new flags, since they might collide with O_* ones. We want
17 + * to re-use O_* flags that couldn't possibly have a meaning
18 + * for timerfd, in order to leave a free define-space for
19 + * shared O_* flags.
20 + */
21 #define TFD_TIMER_ABSTIME (1 << 0)
22 #define TFD_CLOEXEC O_CLOEXEC
23 #define TFD_NONBLOCK O_NONBLOCK
24
25 + #define TFD_SHARED_FCNTL_FLAGS (TFD_CLOEXEC | TFD_NONBLOCK)
26 + /* Flags for timerfd_create. */
27 + #define TFD_CREATE_FLAGS TFD_SHARED_FCNTL_FLAGS
28 + /* Flags for timerfd_settime. */
29 + #define TFD_SETTIME_FLAGS TFD_TIMER_ABSTIME
30
31 #endif /* _LINUX_TIMERFD_H */
+4
include/linux/vmalloc.h
··· 84 unsigned long flags, void *caller); 85 extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags, 86 unsigned long start, unsigned long end); 87 extern struct vm_struct *get_vm_area_node(unsigned long size, 88 unsigned long flags, int node, 89 gfp_t gfp_mask);
··· 84 unsigned long flags, void *caller); 85 extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags, 86 unsigned long start, unsigned long end); 87 + extern struct vm_struct *__get_vm_area_caller(unsigned long size, 88 + unsigned long flags, 89 + unsigned long start, unsigned long end, 90 + void *caller); 91 extern struct vm_struct *get_vm_area_node(unsigned long size, 92 unsigned long flags, int node, 93 gfp_t gfp_mask);
+9 -4
init/do_mounts.c
··· 370 ssleep(root_delay); 371 } 372 373 - /* wait for the known devices to complete their probing */ 374 - while (driver_probe_done() != 0) 375 - msleep(100); 376 - async_synchronize_full(); 377 378 md_run_setup(); 379 ··· 403 while (driver_probe_done() != 0 || 404 (ROOT_DEV = name_to_dev_t(saved_root_name)) == 0) 405 msleep(100); 406 } 407 408 is_floppy = MAJOR(ROOT_DEV) == FLOPPY_MAJOR;
··· 370 ssleep(root_delay); 371 } 372 373 + /* 374 + * wait for the known devices to complete their probing 375 + * 376 + * Note: this is a potential source of long boot delays. 377 + * For example, it is not atypical to wait 5 seconds here 378 + * for the touchpad of a laptop to initialize. 379 + */ 380 + wait_for_device_probe(); 381 382 md_run_setup(); 383 ··· 399 while (driver_probe_done() != 0 || 400 (ROOT_DEV = name_to_dev_t(saved_root_name)) == 0) 401 msleep(100); 402 + async_synchronize_full(); 403 } 404 405 is_floppy = MAJOR(ROOT_DEV) == FLOPPY_MAJOR;
+3 -2
init/do_mounts_md.c
··· 281 */ 282 printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n"); 283 printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n"); 284 - while (driver_probe_done() < 0) 285 - msleep(100); 286 fd = sys_open("/dev/md0", 0, 0); 287 if (fd >= 0) { 288 sys_ioctl(fd, RAID_AUTORUN, raid_autopart);
··· 281 */ 282 printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n"); 283 printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n"); 284 + 285 + wait_for_device_probe(); 286 + 287 fd = sys_open("/dev/md0", 0, 0); 288 if (fd >= 0) { 289 sys_ioctl(fd, RAID_AUTORUN, raid_autopart);
+1
kernel/Makefile
··· 51 obj-$(CONFIG_MODULES) += module.o 52 obj-$(CONFIG_KALLSYMS) += kallsyms.o 53 obj-$(CONFIG_PM) += power/ 54 obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o 55 obj-$(CONFIG_KEXEC) += kexec.o 56 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
··· 51 obj-$(CONFIG_MODULES) += module.o 52 obj-$(CONFIG_KALLSYMS) += kallsyms.o 53 obj-$(CONFIG_PM) += power/ 54 + obj-$(CONFIG_FREEZER) += power/ 55 obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o 56 obj-$(CONFIG_KEXEC) += kexec.o 57 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
+1 -1
kernel/cgroup.c
··· 1122 1123 mutex_unlock(&cgroup_mutex); 1124 1125 - kfree(root); 1126 kill_litter_super(sb); 1127 } 1128 1129 static struct file_system_type cgroup_fs_type = {
··· 1122 1123 mutex_unlock(&cgroup_mutex); 1124 1125 kill_litter_super(sb); 1126 + kfree(root); 1127 } 1128 1129 static struct file_system_type cgroup_fs_type = {
+27 -24
kernel/futex.c
··· 1165 u32 val, ktime_t *abs_time, u32 bitset, int clockrt) 1166 { 1167 struct task_struct *curr = current; 1168 DECLARE_WAITQUEUE(wait, curr); 1169 struct futex_hash_bucket *hb; 1170 struct futex_q q; ··· 1217 1218 if (!ret) 1219 goto retry; 1220 - return ret; 1221 } 1222 ret = -EWOULDBLOCK; 1223 - if (uval != val) 1224 - goto out_unlock_put_key; 1225 1226 /* Only actually queue if *uaddr contained val. */ 1227 queue_me(&q, hb); ··· 1287 */ 1288 1289 /* If we were woken (and unqueued), we succeeded, whatever. */ 1290 if (!unqueue_me(&q)) 1291 - return 0; 1292 if (rem) 1293 - return -ETIMEDOUT; 1294 1295 /* 1296 * We expect signal_pending(current), but another thread may 1297 * have handled it for us already. 1298 */ 1299 if (!abs_time) 1300 - return -ERESTARTSYS; 1301 - else { 1302 - struct restart_block *restart; 1303 - restart = &current_thread_info()->restart_block; 1304 - restart->fn = futex_wait_restart; 1305 - restart->futex.uaddr = (u32 *)uaddr; 1306 - restart->futex.val = val; 1307 - restart->futex.time = abs_time->tv64; 1308 - restart->futex.bitset = bitset; 1309 - restart->futex.flags = 0; 1310 1311 - if (fshared) 1312 - restart->futex.flags |= FLAGS_SHARED; 1313 - if (clockrt) 1314 - restart->futex.flags |= FLAGS_CLOCKRT; 1315 - return -ERESTART_RESTARTBLOCK; 1316 - } 1317 1318 - out_unlock_put_key: 1319 - queue_unlock(&q, hb); 1320 put_futex_key(fshared, &q.key); 1321 - 1322 out: 1323 return ret; 1324 }
··· 1165 u32 val, ktime_t *abs_time, u32 bitset, int clockrt) 1166 { 1167 struct task_struct *curr = current; 1168 + struct restart_block *restart; 1169 DECLARE_WAITQUEUE(wait, curr); 1170 struct futex_hash_bucket *hb; 1171 struct futex_q q; ··· 1216 1217 if (!ret) 1218 goto retry; 1219 + goto out; 1220 } 1221 ret = -EWOULDBLOCK; 1222 + if (unlikely(uval != val)) { 1223 + queue_unlock(&q, hb); 1224 + goto out_put_key; 1225 + } 1226 1227 /* Only actually queue if *uaddr contained val. */ 1228 queue_me(&q, hb); ··· 1284 */ 1285 1286 /* If we were woken (and unqueued), we succeeded, whatever. */ 1287 + ret = 0; 1288 if (!unqueue_me(&q)) 1289 + goto out_put_key; 1290 + ret = -ETIMEDOUT; 1291 if (rem) 1292 + goto out_put_key; 1293 1294 /* 1295 * We expect signal_pending(current), but another thread may 1296 * have handled it for us already. 1297 */ 1298 + ret = -ERESTARTSYS; 1299 if (!abs_time) 1300 + goto out_put_key; 1301 1302 + restart = &current_thread_info()->restart_block; 1303 + restart->fn = futex_wait_restart; 1304 + restart->futex.uaddr = (u32 *)uaddr; 1305 + restart->futex.val = val; 1306 + restart->futex.time = abs_time->tv64; 1307 + restart->futex.bitset = bitset; 1308 + restart->futex.flags = 0; 1309 1310 + if (fshared) 1311 + restart->futex.flags |= FLAGS_SHARED; 1312 + if (clockrt) 1313 + restart->futex.flags |= FLAGS_CLOCKRT; 1314 + 1315 + ret = -ERESTART_RESTARTBLOCK; 1316 + 1317 + out_put_key: 1318 put_futex_key(fshared, &q.key); 1319 out: 1320 return ret; 1321 }
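
The futex_wait() rework above is largely a conversion to the kernel's single-exit idiom: set ret, jump to a cleanup label, and drop the futex key reference in exactly one place instead of on every early return. A condensed, runnable illustration of that shape (the helpers are toy stand-ins, not futex code):

#include <stdio.h>

static int get_key(void)  { puts("get_key");  return 0; }
static void put_key(void) { puts("put_key"); }

static int do_wait(int fail_step)
{
	int ret;

	if (get_key() < 0)
		return -1;	/* nothing acquired yet: plain return is fine */

	ret = -1;
	if (fail_step == 1)
		goto out_put_key;	/* every later failure funnels here */

	ret = 0;	/* the success path falls through to the same exit */

out_put_key:
	put_key();	/* the reference is dropped exactly once, on all paths */
	return ret;
}

int main(void)
{
	printf("-> %d\n", do_wait(1));
	printf("-> %d\n", do_wait(0));
	return 0;
}
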
+30 -30
kernel/posix-cpu-timers.c
··· 681 } 682 683 /* 684 * Guts of sys_timer_settime for CPU timers. 685 * This is called with the timer locked and interrupts disabled. 686 * If we return TIMER_RETRY, it's necessary to release the timer's lock ··· 768 if (CPUCLOCK_PERTHREAD(timer->it_clock)) { 769 cpu_clock_sample(timer->it_clock, p, &val); 770 } else { 771 - cpu_clock_sample_group(timer->it_clock, p, &val); 772 } 773 774 if (old) { ··· 916 read_unlock(&tasklist_lock); 917 goto dead; 918 } else { 919 - cpu_clock_sample_group(timer->it_clock, p, &now); 920 clear_dead = (unlikely(p->exit_state) && 921 thread_group_empty(p)); 922 } ··· 1271 clear_dead_task(timer, now); 1272 goto out_unlock; 1273 } 1274 - cpu_clock_sample_group(timer->it_clock, p, &now); 1275 bump_cpu_timer(timer, now); 1276 /* Leave the tasklist_lock locked for the call below. */ 1277 } ··· 1433 } 1434 spin_unlock(&timer->it_lock); 1435 } 1436 - } 1437 - 1438 - /* 1439 - * Sample a process (thread group) timer for the given group_leader task. 1440 - * Must be called with tasklist_lock held for reading. 1441 - */ 1442 - static int cpu_timer_sample_group(const clockid_t which_clock, 1443 - struct task_struct *p, 1444 - union cpu_time_count *cpu) 1445 - { 1446 - struct task_cputime cputime; 1447 - 1448 - thread_group_cputimer(p, &cputime); 1449 - switch (CPUCLOCK_WHICH(which_clock)) { 1450 - default: 1451 - return -EINVAL; 1452 - case CPUCLOCK_PROF: 1453 - cpu->cpu = cputime_add(cputime.utime, cputime.stime); 1454 - break; 1455 - case CPUCLOCK_VIRT: 1456 - cpu->cpu = cputime.utime; 1457 - break; 1458 - case CPUCLOCK_SCHED: 1459 - cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p); 1460 - break; 1461 - } 1462 - return 0; 1463 } 1464 1465 /*
··· 681 } 682 683 /* 684 + * Sample a process (thread group) timer for the given group_leader task. 685 + * Must be called with tasklist_lock held for reading. 686 + */ 687 + static int cpu_timer_sample_group(const clockid_t which_clock, 688 + struct task_struct *p, 689 + union cpu_time_count *cpu) 690 + { 691 + struct task_cputime cputime; 692 + 693 + thread_group_cputimer(p, &cputime); 694 + switch (CPUCLOCK_WHICH(which_clock)) { 695 + default: 696 + return -EINVAL; 697 + case CPUCLOCK_PROF: 698 + cpu->cpu = cputime_add(cputime.utime, cputime.stime); 699 + break; 700 + case CPUCLOCK_VIRT: 701 + cpu->cpu = cputime.utime; 702 + break; 703 + case CPUCLOCK_SCHED: 704 + cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p); 705 + break; 706 + } 707 + return 0; 708 + } 709 + 710 + /* 711 * Guts of sys_timer_settime for CPU timers. 712 * This is called with the timer locked and interrupts disabled. 713 * If we return TIMER_RETRY, it's necessary to release the timer's lock ··· 741 if (CPUCLOCK_PERTHREAD(timer->it_clock)) { 742 cpu_clock_sample(timer->it_clock, p, &val); 743 } else { 744 + cpu_timer_sample_group(timer->it_clock, p, &val); 745 } 746 747 if (old) { ··· 889 read_unlock(&tasklist_lock); 890 goto dead; 891 } else { 892 + cpu_timer_sample_group(timer->it_clock, p, &now); 893 clear_dead = (unlikely(p->exit_state) && 894 thread_group_empty(p)); 895 } ··· 1244 clear_dead_task(timer, now); 1245 goto out_unlock; 1246 } 1247 + cpu_timer_sample_group(timer->it_clock, p, &now); 1248 bump_cpu_timer(timer, now); 1249 /* Leave the tasklist_lock locked for the call below. */ 1250 } ··· 1406 } 1407 spin_unlock(&timer->it_lock); 1408 } 1409 } 1410 1411 /*
+1 -1
kernel/power/Makefile
··· 3 EXTRA_CFLAGS += -DDEBUG 4 endif 5 6 - obj-y := main.o 7 obj-$(CONFIG_PM_SLEEP) += console.o 8 obj-$(CONFIG_FREEZER) += process.o 9 obj-$(CONFIG_HIBERNATION) += swsusp.o disk.o snapshot.o swap.o user.o
··· 3 EXTRA_CFLAGS += -DDEBUG 4 endif 5 6 + obj-$(CONFIG_PM) += main.o 7 obj-$(CONFIG_PM_SLEEP) += console.o 8 obj-$(CONFIG_FREEZER) += process.o 9 obj-$(CONFIG_HIBERNATION) += swsusp.o disk.o snapshot.o swap.o user.o
+6
kernel/power/console.c
··· 78 } 79 set_console(orig_fgconsole); 80 release_console_sem(); 81 kmsg_redirect = orig_kmsg; 82 } 83 #endif
··· 78 } 79 set_console(orig_fgconsole); 80 release_console_sem(); 81 + 82 + if (vt_waitactive(orig_fgconsole)) { 83 + pr_debug("Resume: Can't switch VCs."); 84 + return; 85 + } 86 + 87 kmsg_redirect = orig_kmsg; 88 } 89 #endif
+11
kernel/power/disk.c
··· 595 unsigned int flags; 596 597 /* 598 * name_to_dev_t() below takes a sysfs buffer mutex when sysfs 599 * is configured into the kernel. Since the regular hibernate 600 * trigger path is via sysfs which takes a buffer mutex before ··· 616 mutex_unlock(&pm_mutex); 617 return -ENOENT; 618 } 619 swsusp_resume_device = name_to_dev_t(resume_file); 620 pr_debug("PM: Resume from partition %s\n", resume_file); 621 } else {
··· 595 unsigned int flags;
596
597 /*
598 + * If the user said "noresume", bail out early.
599 + */
600 + if (noresume)
601 + return 0;
602 +
603 + /*
604 * name_to_dev_t() below takes a sysfs buffer mutex when sysfs
605 * is configured into the kernel. Since the regular hibernate
606 * trigger path is via sysfs which takes a buffer mutex before
··· 610 mutex_unlock(&pm_mutex);
611 return -ENOENT;
612 }
613 + /*
614 + * Some device discovery might still be in progress; we need
615 + * to wait for this to finish.
616 + */
617 + wait_for_device_probe();
618 swsusp_resume_device = name_to_dev_t(resume_file);
619 pr_debug("PM: Resume from partition %s\n", resume_file);
620 } else {
+3 -2
kernel/power/swap.c
··· 60 static int submit(int rw, pgoff_t page_off, struct page *page, 61 struct bio **bio_chain) 62 { 63 struct bio *bio; 64 65 bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1); ··· 81 bio_get(bio); 82 83 if (bio_chain == NULL) { 84 - submit_bio(rw | (1 << BIO_RW_SYNC), bio); 85 wait_on_page_locked(page); 86 if (rw == READ) 87 bio_set_pages_dirty(bio); ··· 91 get_page(page); /* These pages are freed later */ 92 bio->bi_private = *bio_chain; 93 *bio_chain = bio; 94 - submit_bio(rw | (1 << BIO_RW_SYNC), bio); 95 } 96 return 0; 97 }
··· 60 static int submit(int rw, pgoff_t page_off, struct page *page, 61 struct bio **bio_chain) 62 { 63 + const int bio_rw = rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 64 struct bio *bio; 65 66 bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1); ··· 80 bio_get(bio); 81 82 if (bio_chain == NULL) { 83 + submit_bio(bio_rw, bio); 84 wait_on_page_locked(page); 85 if (rw == READ) 86 bio_set_pages_dirty(bio); ··· 90 get_page(page); /* These pages are freed later */ 91 bio->bi_private = *bio_chain; 92 *bio_chain = bio; 93 + submit_bio(bio_rw, bio); 94 } 95 return 0; 96 }
+6 -6
kernel/power/user.c
··· 95 data->swap = swsusp_resume_device ? 96 swap_type_of(swsusp_resume_device, 0, NULL) : -1; 97 data->mode = O_RDONLY; 98 - error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 99 - if (error) 100 - pm_notifier_call_chain(PM_POST_RESTORE); 101 - } else { 102 - data->swap = -1; 103 - data->mode = O_WRONLY; 104 error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE); 105 if (error) 106 pm_notifier_call_chain(PM_POST_HIBERNATION); 107 } 108 if (error) 109 atomic_inc(&snapshot_device_available);
··· 95 data->swap = swsusp_resume_device ? 96 swap_type_of(swsusp_resume_device, 0, NULL) : -1; 97 data->mode = O_RDONLY; 98 error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE); 99 if (error) 100 pm_notifier_call_chain(PM_POST_HIBERNATION); 101 + } else { 102 + data->swap = -1; 103 + data->mode = O_WRONLY; 104 + error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 105 + if (error) 106 + pm_notifier_call_chain(PM_POST_RESTORE); 107 } 108 if (error) 109 atomic_inc(&snapshot_device_available);
+9 -6
kernel/printk.c
··· 73 * driver system. 74 */ 75 static DECLARE_MUTEX(console_sem); 76 - static DECLARE_MUTEX(secondary_console_sem); 77 struct console *console_drivers; 78 EXPORT_SYMBOL_GPL(console_drivers); 79 ··· 890 printk("Suspending console(s) (use no_console_suspend to debug)\n"); 891 acquire_console_sem(); 892 console_suspended = 1; 893 } 894 895 void resume_console(void) 896 { 897 if (!console_suspend_enabled) 898 return; 899 console_suspended = 0; 900 release_console_sem(); 901 } ··· 913 void acquire_console_sem(void) 914 { 915 BUG_ON(in_interrupt()); 916 - if (console_suspended) { 917 - down(&secondary_console_sem); 918 - return; 919 - } 920 down(&console_sem); 921 console_locked = 1; 922 console_may_schedule = 1; 923 } ··· 925 { 926 if (down_trylock(&console_sem)) 927 return -1; 928 console_locked = 1; 929 console_may_schedule = 0; 930 return 0; ··· 982 unsigned wake_klogd = 0; 983 984 if (console_suspended) { 985 - up(&secondary_console_sem); 986 return; 987 } 988
··· 73 * driver system. 74 */ 75 static DECLARE_MUTEX(console_sem); 76 struct console *console_drivers; 77 EXPORT_SYMBOL_GPL(console_drivers); 78 ··· 891 printk("Suspending console(s) (use no_console_suspend to debug)\n"); 892 acquire_console_sem(); 893 console_suspended = 1; 894 + up(&console_sem); 895 } 896 897 void resume_console(void) 898 { 899 if (!console_suspend_enabled) 900 return; 901 + down(&console_sem); 902 console_suspended = 0; 903 release_console_sem(); 904 } ··· 912 void acquire_console_sem(void) 913 { 914 BUG_ON(in_interrupt()); 915 down(&console_sem); 916 + if (console_suspended) 917 + return; 918 console_locked = 1; 919 console_may_schedule = 1; 920 } ··· 926 { 927 if (down_trylock(&console_sem)) 928 return -1; 929 + if (console_suspended) { 930 + up(&console_sem); 931 + return -1; 932 + } 933 console_locked = 1; 934 console_may_schedule = 0; 935 return 0; ··· 979 unsigned wake_klogd = 0; 980 981 if (console_suspended) { 982 + up(&console_sem); 983 return; 984 } 985
+12 -3
kernel/sched.c
··· 6944 6945 static void rq_attach_root(struct rq *rq, struct root_domain *rd) 6946 { 6947 unsigned long flags; 6948 6949 spin_lock_irqsave(&rq->lock, flags); 6950 6951 if (rq->rd) { 6952 - struct root_domain *old_rd = rq->rd; 6953 6954 if (cpumask_test_cpu(rq->cpu, old_rd->online)) 6955 set_rq_offline(rq); 6956 6957 cpumask_clear_cpu(rq->cpu, old_rd->span); 6958 6959 - if (atomic_dec_and_test(&old_rd->refcount)) 6960 - free_rootdomain(old_rd); 6961 } 6962 6963 atomic_inc(&rd->refcount); ··· 6974 set_rq_online(rq); 6975 6976 spin_unlock_irqrestore(&rq->lock, flags); 6977 } 6978 6979 static int __init_refok init_rootdomain(struct root_domain *rd, bool bootmem)
···   6944  
 6945   static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 6946   {
 6947  + struct root_domain *old_rd = NULL;
 6948   unsigned long flags;
 6949  
 6950   spin_lock_irqsave(&rq->lock, flags);
 6951  
 6952   if (rq->rd) {
 6953  + old_rd = rq->rd;
 6954  
 6955   if (cpumask_test_cpu(rq->cpu, old_rd->online))
 6956   set_rq_offline(rq);
 6957  
 6958   cpumask_clear_cpu(rq->cpu, old_rd->span);
 6959  
 6960  + /*
 6961  + * If we don't want to free the old_rd yet then
 6962  + * set old_rd to NULL to skip the freeing later
 6963  + * in this function:
 6964  + */
 6965  + if (!atomic_dec_and_test(&old_rd->refcount))
 6966  + old_rd = NULL;
 6967   }
 6968  
 6969   atomic_inc(&rd->refcount);
···   6968   set_rq_online(rq);
 6969  
 6970   spin_unlock_irqrestore(&rq->lock, flags);
 6971  + 
 6972  + if (old_rd)
 6973  + free_rootdomain(old_rd);
 6974   }
 6975  
 6976   static int __init_refok init_rootdomain(struct root_domain *rd, bool bootmem)
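The rewrite moves free_rootdomain() out from under rq->lock: the old root domain is only remembered under the lock and freed after it is dropped, since a spinlock held with interrupts disabled is a bad place for teardown work. The same defer-the-free shape in isolation, as a sketch with hypothetical types and names:

    #include <linux/spinlock.h>
    #include <linux/slab.h>
    #include <asm/atomic.h>

    struct my_obj {
            atomic_t refcount;
            /* ... payload ... */
    };

    /* Sketch: decide under the lock, free outside it. */
    static void detach_and_maybe_free(struct my_obj **slot, spinlock_t *lock)
    {
            struct my_obj *victim = NULL;
            unsigned long flags;

            spin_lock_irqsave(lock, flags);
            if (*slot && atomic_dec_and_test(&(*slot)->refcount))
                    victim = *slot;      /* last reference went away */
            *slot = NULL;
            spin_unlock_irqrestore(lock, flags);

            if (victim)
                    kfree(victim);       /* safe: lock dropped, irqs restored */
    }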
+25
kernel/trace/Kconfig
··· 52 depends on HAVE_FUNCTION_TRACER 53 depends on DEBUG_KERNEL 54 select FRAME_POINTER 55 select TRACING 56 select CONTEXT_SWITCH_TRACER 57 help ··· 239 depends on DEBUG_KERNEL 240 select FUNCTION_TRACER 241 select STACKTRACE 242 help 243 This special tracer records the maximum stack footprint of the 244 kernel and displays it in debugfs/tracing/stack_trace. ··· 303 a series of tests are made to verify that the tracer is 304 functioning properly. It will do tests on all the configured 305 tracers of ftrace. 306 307 endmenu
···   52   depends on HAVE_FUNCTION_TRACER
 53   depends on DEBUG_KERNEL
 54   select FRAME_POINTER
 55  + select KALLSYMS
 56   select TRACING
 57   select CONTEXT_SWITCH_TRACER
 58   help
···   238   depends on DEBUG_KERNEL
 239   select FUNCTION_TRACER
 240   select STACKTRACE
 241  + select KALLSYMS
 242   help
 243   This special tracer records the maximum stack footprint of the
 244   kernel and displays it in debugfs/tracing/stack_trace.
···   301   a series of tests are made to verify that the tracer is
 302   functioning properly. It will do tests on all the configured
 303   tracers of ftrace.
 304  + 
 305  + config MMIOTRACE
 306  + bool "Memory mapped IO tracing"
 307  + depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI
 308  + select TRACING
 309  + help
 310  + Mmiotrace traces Memory Mapped I/O access and is meant for
 311  + debugging and reverse engineering. It is called from the ioremap
 312  + implementation and works via page faults. Tracing is disabled by
 313  + default and can be enabled at run-time.
 314  + 
 315  + See Documentation/tracers/mmiotrace.txt.
 316  + If you are not helping to develop drivers, say N.
 317  + 
 318  + config MMIOTRACE_TEST
 319  + tristate "Test module for mmiotrace"
 320  + depends on MMIOTRACE && m
 321  + help
 322  + This is a dumb module for testing mmiotrace. It is very dangerous
 323  + as it will write garbage to IO memory starting at a given address.
 324  + However, it should be safe to use on e.g. an unused portion of VRAM.
 325  + 
 326  + Say N, unless you absolutely know what you are doing.
 327  
 328   endmenu
+5 -1
kernel/trace/ftrace.c
··· 2033 static int start_graph_tracing(void) 2034 { 2035 struct ftrace_ret_stack **ret_stack_list; 2036 - int ret; 2037 2038 ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE * 2039 sizeof(struct ftrace_ret_stack *), ··· 2041 2042 if (!ret_stack_list) 2043 return -ENOMEM; 2044 2045 do { 2046 ret = alloc_retstack_tasklist(ret_stack_list);
···   2033   static int start_graph_tracing(void)
 2034   {
 2035   struct ftrace_ret_stack **ret_stack_list;
 2036  + int ret, cpu;
 2037  
 2038   ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE *
 2039   sizeof(struct ftrace_ret_stack *),
···   2041  
 2042   if (!ret_stack_list)
 2043   return -ENOMEM;
 2044  + 
 2045  + /* The ret_stack of each online CPU's idle task is never freed */
 2046  + for_each_online_cpu(cpu)
 2047  + ftrace_graph_init_task(idle_task(cpu));
 2048  
 2049   do {
 2050   ret = alloc_retstack_tasklist(ret_stack_list);
+10 -4
kernel/trace/trace_mmiotrace.c
··· 9 #include <linux/kernel.h> 10 #include <linux/mmiotrace.h> 11 #include <linux/pci.h> 12 13 #include "trace.h" 14 ··· 20 static struct trace_array *mmio_trace_array; 21 static bool overrun_detected; 22 static unsigned long prev_overruns; 23 24 static void mmio_reset_data(struct trace_array *tr) 25 { ··· 123 124 static unsigned long count_overruns(struct trace_iterator *iter) 125 { 126 - unsigned long cnt = 0; 127 unsigned long over = ring_buffer_overruns(iter->tr->buffer); 128 129 if (over > prev_overruns) 130 - cnt = over - prev_overruns; 131 prev_overruns = over; 132 return cnt; 133 } ··· 312 313 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry), 314 &irq_flags); 315 - if (!event) 316 return; 317 entry = ring_buffer_event_data(event); 318 tracing_generic_entry_update(&entry->ent, 0, preempt_count()); 319 entry->ent.type = TRACE_MMIO_RW; ··· 342 343 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry), 344 &irq_flags); 345 - if (!event) 346 return; 347 entry = ring_buffer_event_data(event); 348 tracing_generic_entry_update(&entry->ent, 0, preempt_count()); 349 entry->ent.type = TRACE_MMIO_MAP;
··· 9 #include <linux/kernel.h> 10 #include <linux/mmiotrace.h> 11 #include <linux/pci.h> 12 + #include <asm/atomic.h> 13 14 #include "trace.h" 15 ··· 19 static struct trace_array *mmio_trace_array; 20 static bool overrun_detected; 21 static unsigned long prev_overruns; 22 + static atomic_t dropped_count; 23 24 static void mmio_reset_data(struct trace_array *tr) 25 { ··· 121 122 static unsigned long count_overruns(struct trace_iterator *iter) 123 { 124 + unsigned long cnt = atomic_xchg(&dropped_count, 0); 125 unsigned long over = ring_buffer_overruns(iter->tr->buffer); 126 127 if (over > prev_overruns) 128 + cnt += over - prev_overruns; 129 prev_overruns = over; 130 return cnt; 131 } ··· 310 311 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry), 312 &irq_flags); 313 + if (!event) { 314 + atomic_inc(&dropped_count); 315 return; 316 + } 317 entry = ring_buffer_event_data(event); 318 tracing_generic_entry_update(&entry->ent, 0, preempt_count()); 319 entry->ent.type = TRACE_MMIO_RW; ··· 338 339 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry), 340 &irq_flags); 341 + if (!event) { 342 + atomic_inc(&dropped_count); 343 return; 344 + } 345 entry = ring_buffer_event_data(event); 346 tracing_generic_entry_update(&entry->ent, 0, preempt_count()); 347 entry->ent.type = TRACE_MMIO_MAP;
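A failed ring_buffer_lock_reserve() now bumps dropped_count, and count_overruns() folds it into the report with atomic_xchg(), which reads and clears the counter in one atomic step so no drop is lost or double-counted. The read-and-reset idiom on its own, with hypothetical names:

    #include <asm/atomic.h>

    static atomic_t dropped = ATOMIC_INIT(0);

    /* producer: record an event we could not log */
    static void note_drop(void)
    {
            atomic_inc(&dropped);
    }

    /* reader: report drops since the last call, resetting atomically */
    static unsigned long take_drop_count(void)
    {
            return atomic_xchg(&dropped, 0);
    }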
+19
kernel/trace/trace_selftest.c
··· 23 { 24 struct ring_buffer_event *event; 25 struct trace_entry *entry; 26 27 while ((event = ring_buffer_consume(tr->buffer, cpu, NULL))) { 28 entry = ring_buffer_event_data(event); 29 30 if (!trace_valid_entry(entry)) { 31 printk(KERN_CONT ".. invalid entry %d ", 32 entry->type); ··· 67 68 cnt = ring_buffer_entries(tr->buffer); 69 70 for_each_possible_cpu(cpu) { 71 ret = trace_test_buffer_cpu(tr, cpu); 72 if (ret) 73 break; 74 } 75 __raw_spin_unlock(&ftrace_max_lock); 76 local_irq_restore(flags); 77
···   23   {
 24   struct ring_buffer_event *event;
 25   struct trace_entry *entry;
 26  + unsigned int loops = 0;
 27  
 28   while ((event = ring_buffer_consume(tr->buffer, cpu, NULL))) {
 29   entry = ring_buffer_event_data(event);
 30  
 31  + /*
 32  + * The ring buffer holds at most trace_buf_size entries;
 33  + * if we loop more times than that, something is wrong
 34  + * with the ring buffer.
 35  + */
 36  + if (loops++ > trace_buf_size) {
 37  + printk(KERN_CONT ".. bad ring buffer ");
 38  + goto failed;
 39  + }
 40   if (!trace_valid_entry(entry)) {
 41   printk(KERN_CONT ".. invalid entry %d ",
 42   entry->type);
···   57  
 58   cnt = ring_buffer_entries(tr->buffer);
 59  
 60  + /*
 61  + * trace_test_buffer_cpu() runs a while loop to consume all data.
 62  + * If the calling tracer is broken and constantly fills the
 63  + * buffer, this will run forever and hard-lock the box.
 64  + * We disable the ring buffer while we do this test to prevent
 65  + * a hard lockup.
 66  + */
 67  + tracing_off();
 68   for_each_possible_cpu(cpu) {
 69   ret = trace_test_buffer_cpu(tr, cpu);
 70   if (ret)
 71   break;
 72   }
 73  + tracing_on();
 74   __raw_spin_unlock(&ftrace_max_lock);
 75   local_irq_restore(flags);
 76  
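The selftest bounds its drain loop by trace_buf_size on the reasoning that a healthy buffer cannot return more entries than it can hold. The guard as a standalone sketch; the consume callback and its semantics are hypothetical:

    /* Sketch: bound a drain loop by buffer capacity so a broken
     * producer fails the test instead of spinning forever. */
    static int drain_checked(int (*consume_one)(void), unsigned long capacity)
    {
            unsigned long loops = 0;

            while (consume_one()) {
                    if (loops++ > capacity)
                            return -1;   /* more entries than fit: broken */
            }
            return 0;
    }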
+1 -1
lib/Kconfig.debug
··· 838 839 If unsure, say N. 840 841 - menuconfig BUILD_DOCSRC 842 bool "Build targets in Documentation/ tree" 843 depends on HEADERS_CHECK 844 help
··· 838 839 If unsure, say N. 840 841 + config BUILD_DOCSRC 842 bool "Build targets in Documentation/ tree" 843 depends on HEADERS_CHECK 844 help
+6 -1
mm/mlock.c
··· 660 return buffer; 661 } 662 663 - void free_locked_buffer(void *buffer, size_t size) 664 { 665 unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT; 666 ··· 670 current->mm->locked_vm -= pgsz; 671 672 up_write(&current->mm->mmap_sem); 673 674 kfree(buffer); 675 }
··· 660 return buffer; 661 } 662 663 + void release_locked_buffer(void *buffer, size_t size) 664 { 665 unsigned long pgsz = PAGE_ALIGN(size) >> PAGE_SHIFT; 666 ··· 670 current->mm->locked_vm -= pgsz; 671 672 up_write(&current->mm->mmap_sem); 673 + } 674 + 675 + void free_locked_buffer(void *buffer, size_t size) 676 + { 677 + release_locked_buffer(buffer, size); 678 679 kfree(buffer); 680 }
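Splitting free_locked_buffer() lets a caller undo the locked_vm accounting early via release_locked_buffer() while keeping the memory alive, then free it later itself. A hedged usage sketch; alloc_locked_buffer() is the counterpart allocator in the same file, and error handling is elided:

    #include <linux/slab.h>

    static void locked_buffer_example(size_t size)
    {
            void *buf = alloc_locked_buffer(size);

            if (!buf)
                    return;
            /* ... use buf while it counts against locked_vm ... */
            release_locked_buffer(buf, size);  /* drop the accounting only */
            /* ... buf remains valid, just no longer accounted ... */
            kfree(buf);                        /* what free_locked_buffer() adds */
    }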
+3 -10
mm/page-writeback.c
··· 240 } 241 EXPORT_SYMBOL_GPL(bdi_writeout_inc); 242 243 - static inline void task_dirty_inc(struct task_struct *tsk) 244 { 245 prop_inc_single(&vm_dirties, &tsk->dirties); 246 } ··· 1230 __inc_zone_page_state(page, NR_FILE_DIRTY); 1231 __inc_bdi_stat(mapping->backing_dev_info, 1232 BDI_RECLAIMABLE); 1233 task_io_account_write(PAGE_CACHE_SIZE); 1234 } 1235 radix_tree_tag_set(&mapping->page_tree, ··· 1263 * If the mapping doesn't provide a set_page_dirty a_op, then 1264 * just fall through and assume that it wants buffer_heads. 1265 */ 1266 - static int __set_page_dirty(struct page *page) 1267 { 1268 struct address_space *mapping = page_mapping(page); 1269 ··· 1280 return 1; 1281 } 1282 return 0; 1283 - } 1284 - 1285 - int set_page_dirty(struct page *page) 1286 - { 1287 - int ret = __set_page_dirty(page); 1288 - if (ret) 1289 - task_dirty_inc(current); 1290 - return ret; 1291 } 1292 EXPORT_SYMBOL(set_page_dirty); 1293
··· 240 } 241 EXPORT_SYMBOL_GPL(bdi_writeout_inc); 242 243 + void task_dirty_inc(struct task_struct *tsk) 244 { 245 prop_inc_single(&vm_dirties, &tsk->dirties); 246 } ··· 1230 __inc_zone_page_state(page, NR_FILE_DIRTY); 1231 __inc_bdi_stat(mapping->backing_dev_info, 1232 BDI_RECLAIMABLE); 1233 + task_dirty_inc(current); 1234 task_io_account_write(PAGE_CACHE_SIZE); 1235 } 1236 radix_tree_tag_set(&mapping->page_tree, ··· 1262 * If the mapping doesn't provide a set_page_dirty a_op, then 1263 * just fall through and assume that it wants buffer_heads. 1264 */ 1265 + int set_page_dirty(struct page *page) 1266 { 1267 struct address_space *mapping = page_mapping(page); 1268 ··· 1279 return 1; 1280 } 1281 return 0; 1282 } 1283 EXPORT_SYMBOL(set_page_dirty); 1284
+26 -3
mm/page_alloc.c
··· 2989 * was used and there are no special requirements, this is a convenient 2990 * alternative 2991 */ 2992 - int __meminit early_pfn_to_nid(unsigned long pfn) 2993 { 2994 int i; 2995 ··· 3000 if (start_pfn <= pfn && pfn < end_pfn) 3001 return early_node_map[i].nid; 3002 } 3003 - 3004 - return 0; 3005 } 3006 #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */ 3007 3008 /* Basic iterator support to walk early_node_map[] */ 3009 #define for_each_active_range_index_in_nid(i, nid) \
···   2989   * was used and there are no special requirements, this is a convenient
 2990   * alternative
 2991   */
 2992  + int __meminit __early_pfn_to_nid(unsigned long pfn)
 2993   {
 2994   int i;
 2995  
···   3000   if (start_pfn <= pfn && pfn < end_pfn)
 3001   return early_node_map[i].nid;
 3002   }
 3003  + /* This is a memory hole */
 3004  + return -1;
 3005   }
 3006   #endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
 3007  + 
 3008  + int __meminit early_pfn_to_nid(unsigned long pfn)
 3009  + {
 3010  + int nid;
 3011  + 
 3012  + nid = __early_pfn_to_nid(pfn);
 3013  + if (nid >= 0)
 3014  + return nid;
 3015  + /* fall back to node 0 for pfns in a hole */
 3016  + return 0;
 3017  + }
 3018  + 
 3019  + #ifdef CONFIG_NODES_SPAN_OTHER_NODES
 3020  + bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
 3021  + {
 3022  + int nid;
 3023  + 
 3024  + nid = __early_pfn_to_nid(pfn);
 3025  + if (nid >= 0 && nid != node)
 3026  + return false;
 3027  + return true;
 3028  + }
 3029  + #endif
 3030  
 3031   /* Basic iterator support to walk early_node_map[] */
 3032   #define for_each_active_range_index_in_nid(i, nid) \
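early_pfn_to_nid() keeps its old never-fails contract by falling back to node 0, while the new __early_pfn_to_nid() exposes a hole as -1 so callers like early_pfn_in_nid() can act on it. A sketch of a caller that wants to see the hole; the wrapper name and printk are illustrative only:

    #include <linux/kernel.h>

    static int nid_of_pfn_or_warn(unsigned long pfn)
    {
            int nid = __early_pfn_to_nid(pfn);

            if (nid < 0) {
                    printk(KERN_DEBUG "pfn %lu lies in a memory hole\n", pfn);
                    return 0;    /* same fallback early_pfn_to_nid() uses */
            }
            return nid;
    }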
+1 -1
mm/page_io.c
··· 111 goto out; 112 } 113 if (wbc->sync_mode == WB_SYNC_ALL) 114 - rw |= (1 << BIO_RW_SYNC); 115 count_vm_event(PSWPOUT); 116 set_page_writeback(page); 117 unlock_page(page);
··· 111 goto out; 112 } 113 if (wbc->sync_mode == WB_SYNC_ALL) 114 + rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG); 115 count_vm_event(PSWPOUT); 116 set_page_writeback(page); 117 unlock_page(page);
+2 -2
mm/swapfile.c
··· 635 636 if (!bdev) { 637 if (bdev_p) 638 - *bdev_p = sis->bdev; 639 640 spin_unlock(&swap_lock); 641 return i; ··· 647 struct swap_extent, list); 648 if (se->start_block == offset) { 649 if (bdev_p) 650 - *bdev_p = sis->bdev; 651 652 spin_unlock(&swap_lock); 653 bdput(bdev);
··· 635 636 if (!bdev) { 637 if (bdev_p) 638 + *bdev_p = bdget(sis->bdev->bd_dev); 639 640 spin_unlock(&swap_lock); 641 return i; ··· 647 struct swap_extent, list); 648 if (se->start_block == offset) { 649 if (bdev_p) 650 + *bdev_p = bdget(sis->bdev->bd_dev); 651 652 spin_unlock(&swap_lock); 653 bdput(bdev);
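Handing back bdget(sis->bdev->bd_dev) instead of the raw pointer means *bdev_p now carries its own reference: the caller owns it and must balance it with bdput(). A hedged sketch of the resulting caller obligation; the wrapper name is hypothetical:

    #include <linux/fs.h>

    /* Sketch: the caller of swap_type_of() now owns the returned reference. */
    static int find_swap_type(dev_t dev)
    {
            struct block_device *bdev;
            int type = swap_type_of(dev, 0, &bdev);

            if (type < 0)
                    return type;
            /* ... use bdev; the bdget() reference keeps it alive ... */
            bdput(bdev);
            return type;
    }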
+20
mm/util.c
··· 129 } 130 EXPORT_SYMBOL(krealloc); 131 132 /* 133 * strndup_user - duplicate an existing string from user space 134 * @s: The string to duplicate
···   129   }
 130   EXPORT_SYMBOL(krealloc);
 131  
 132  + /**
 133  + * kzfree - like kfree but zero memory
 134  + * @p: object to free memory of
 135  + *
 136  + * The memory the object @p points to is zeroed before it is freed.
 137  + * If @p is %NULL, kzfree() does nothing.
 138  + */
 139  + void kzfree(const void *p)
 140  + {
 141  + size_t ks;
 142  + void *mem = (void *)p;
 143  + 
 144  + if (unlikely(ZERO_OR_NULL_PTR(mem)))
 145  + return;
 146  + ks = ksize(mem);
 147  + memset(mem, 0, ks);
 148  + kfree(mem);
 149  + }
 150  + EXPORT_SYMBOL(kzfree);
 151  + 
 152   /*
 153   * strndup_user - duplicate an existing string from user space
 154   * @s: The string to duplicate
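kzfree() exists for allocations that hold secrets: a plain kfree() leaves the bytes in the slab for the next user, while kzfree() scrubs the whole allocation (ksize() bytes, not just the requested length) first. A hedged usage sketch; the key-handling names are hypothetical:

    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    static int use_secret(const void *src, size_t len)
    {
            char *key = kmalloc(len, GFP_KERNEL);

            if (!key)
                    return -ENOMEM;
            memcpy(key, src, len);
            /* ... use the key material ... */
            kzfree(key);    /* zeroes ksize(key) bytes, then kfree()s */
            return 0;
    }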
+10
mm/vmalloc.c
··· 1012 void unmap_kernel_range(unsigned long addr, unsigned long size) 1013 { 1014 unsigned long end = addr + size; 1015 vunmap_page_range(addr, end); 1016 flush_tlb_kernel_range(addr, end); 1017 } ··· 1107 __builtin_return_address(0)); 1108 } 1109 EXPORT_SYMBOL_GPL(__get_vm_area); 1110 1111 /** 1112 * get_vm_area - reserve a contiguous kernel virtual area
··· 1012 void unmap_kernel_range(unsigned long addr, unsigned long size) 1013 { 1014 unsigned long end = addr + size; 1015 + 1016 + flush_cache_vunmap(addr, end); 1017 vunmap_page_range(addr, end); 1018 flush_tlb_kernel_range(addr, end); 1019 } ··· 1105 __builtin_return_address(0)); 1106 } 1107 EXPORT_SYMBOL_GPL(__get_vm_area); 1108 + 1109 + struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags, 1110 + unsigned long start, unsigned long end, 1111 + void *caller) 1112 + { 1113 + return __get_vm_area_node(size, flags, start, end, -1, GFP_KERNEL, 1114 + caller); 1115 + } 1116 1117 /** 1118 * get_vm_area - reserve a contiguous kernel virtual area
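__get_vm_area_caller() is __get_vm_area() with an explicit caller, so a wrapper can attribute the reservation to its own call site rather than to itself. A sketch of such a wrapper; the name is hypothetical, and VMALLOC_START/VMALLOC_END come from the arch headers:

    #include <linux/vmalloc.h>

    static struct vm_struct *my_reserve(unsigned long size)
    {
            /* record *our* caller, not this wrapper, as the owner */
            return __get_vm_area_caller(size, VM_ALLOC,
                                        VMALLOC_START, VMALLOC_END,
                                        __builtin_return_address(0));
    }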
+12 -16
mm/vmscan.c
··· 2057 int pass, struct scan_control *sc) 2058 { 2059 struct zone *zone; 2060 - unsigned long nr_to_scan, ret = 0; 2061 - enum lru_list l; 2062 2063 for_each_zone(zone) { 2064 2065 if (!populated_zone(zone)) 2066 continue; 2067 - 2068 if (zone_is_all_unreclaimable(zone) && prio != DEF_PRIORITY) 2069 continue; 2070 2071 for_each_evictable_lru(l) { 2072 /* For pass = 0, we don't shrink the active list */ 2073 - if (pass == 0 && 2074 - (l == LRU_ACTIVE || l == LRU_ACTIVE_FILE)) 2075 continue; 2076 2077 - zone->lru[l].nr_scan += 2078 - (zone_page_state(zone, NR_LRU_BASE + l) 2079 - >> prio) + 1; 2080 if (zone->lru[l].nr_scan >= nr_pages || pass > 3) { 2081 zone->lru[l].nr_scan = 0; 2082 - nr_to_scan = min(nr_pages, 2083 - zone_page_state(zone, 2084 - NR_LRU_BASE + l)); 2085 ret += shrink_list(l, nr_to_scan, zone, 2086 sc, prio); 2087 if (ret >= nr_pages) ··· 2089 } 2090 } 2091 } 2092 - 2093 return ret; 2094 } 2095 ··· 2111 .may_swap = 0, 2112 .swap_cluster_max = nr_pages, 2113 .may_writepage = 1, 2114 - .swappiness = vm_swappiness, 2115 .isolate_pages = isolate_pages_global, 2116 }; 2117 ··· 2144 int prio; 2145 2146 /* Force reclaiming mapped pages in the passes #3 and #4 */ 2147 - if (pass > 2) { 2148 sc.may_swap = 1; 2149 - sc.swappiness = 100; 2150 - } 2151 2152 for (prio = DEF_PRIORITY; prio >= 0; prio--) { 2153 unsigned long nr_to_scan = nr_pages - ret;
··· 2057 int pass, struct scan_control *sc) 2058 { 2059 struct zone *zone; 2060 + unsigned long ret = 0; 2061 2062 for_each_zone(zone) { 2063 + enum lru_list l; 2064 2065 if (!populated_zone(zone)) 2066 continue; 2067 if (zone_is_all_unreclaimable(zone) && prio != DEF_PRIORITY) 2068 continue; 2069 2070 for_each_evictable_lru(l) { 2071 + enum zone_stat_item ls = NR_LRU_BASE + l; 2072 + unsigned long lru_pages = zone_page_state(zone, ls); 2073 + 2074 /* For pass = 0, we don't shrink the active list */ 2075 + if (pass == 0 && (l == LRU_ACTIVE_ANON || 2076 + l == LRU_ACTIVE_FILE)) 2077 continue; 2078 2079 + zone->lru[l].nr_scan += (lru_pages >> prio) + 1; 2080 if (zone->lru[l].nr_scan >= nr_pages || pass > 3) { 2081 + unsigned long nr_to_scan; 2082 + 2083 zone->lru[l].nr_scan = 0; 2084 + nr_to_scan = min(nr_pages, lru_pages); 2085 ret += shrink_list(l, nr_to_scan, zone, 2086 sc, prio); 2087 if (ret >= nr_pages) ··· 2089 } 2090 } 2091 } 2092 return ret; 2093 } 2094 ··· 2112 .may_swap = 0, 2113 .swap_cluster_max = nr_pages, 2114 .may_writepage = 1, 2115 .isolate_pages = isolate_pages_global, 2116 }; 2117 ··· 2146 int prio; 2147 2148 /* Force reclaiming mapped pages in the passes #3 and #4 */ 2149 + if (pass > 2) 2150 sc.may_swap = 1; 2151 2152 for (prio = DEF_PRIORITY; prio >= 0; prio--) { 2153 unsigned long nr_to_scan = nr_pages - ret;
+2 -2
scripts/bootgraph.pl
··· 51 52 while (<>) { 53 my $line = $_; 54 - if ($line =~ /([0-9\.]+)\] calling ([a-zA-Z0-9\_]+)\+/) { 55 my $func = $2; 56 if ($done == 0) { 57 $start{$func} = $1; ··· 87 $count = $count + 1; 88 } 89 90 - if ($line =~ /([0-9\.]+)\] initcall ([a-zA-Z0-9\_]+)\+.*returned/) { 91 if ($done == 0) { 92 $end{$2} = $1; 93 $maxtime = $1;
··· 51 52 while (<>) { 53 my $line = $_; 54 + if ($line =~ /([0-9\.]+)\] calling ([a-zA-Z0-9\_\.]+)\+/) { 55 my $func = $2; 56 if ($done == 0) { 57 $start{$func} = $1; ··· 87 $count = $count + 1; 88 } 89 90 + if ($line =~ /([0-9\.]+)\] initcall ([a-zA-Z0-9\_\.]+)\+.*returned/) { 91 if ($done == 0) { 92 $end{$2} = $1; 93 $maxtime = $1;
+158 -13
scripts/markup_oops.pl
··· 1 - #!/usr/bin/perl -w 2 3 use File::Basename; 4 ··· 29 my $target = "0"; 30 my $function; 31 my $module = ""; 32 - my $func_offset; 33 my $vmaoffset = 0; 34 35 while (<STDIN>) { 36 my $line = $_; 37 if ($line =~ /EIP: 0060:\[\<([a-z0-9]+)\>\]/) { 38 $target = $1; 39 } 40 if ($line =~ /EIP is at ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]/) { 41 $function = $1; 42 $func_offset = $2; 43 } ··· 166 if ($line =~ /EIP is at ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]+\W\[([a-zA-Z0-9\_\-]+)\]/) { 167 $module = $3; 168 } 169 } 170 171 my $decodestart = hex($target) - hex($func_offset); 172 - my $decodestop = $decodestart + 8192; 173 if ($target eq "0") { 174 print "No oops found!\n"; 175 print "Usage: \n"; ··· 208 my $state = 0; 209 my $center = 0; 210 my @lines; 211 212 sub InRange { 213 my ($address, $target) = @_; ··· 313 314 my $i; 315 316 - my $fulltext = ""; 317 - $i = $start; 318 - while ($i < $finish) { 319 - if ($i == $center) { 320 - $fulltext = $fulltext . "*$lines[$i] <----- faulting instruction\n"; 321 - } else { 322 - $fulltext = $fulltext . " $lines[$i]\n"; 323 - } 324 - $i = $i +1; 325 } 326 327 - print $fulltext; 328
···   1  + #!/usr/bin/perl
 2  
 3   use File::Basename;
 4  
···   29   my $target = "0";
 30   my $function;
 31   my $module = "";
 32  + my $func_offset = 0;
 33   my $vmaoffset = 0;
 34  
 35  + my %regs;
 36  + 
 37  + 
 38  + sub parse_x86_regs
 39  + {
 40  + my ($line) = @_;
 41  + if ($line =~ /EAX: ([0-9a-f]+) EBX: ([0-9a-f]+) ECX: ([0-9a-f]+) EDX: ([0-9a-f]+)/) {
 42  + $regs{"%eax"} = $1;
 43  + $regs{"%ebx"} = $2;
 44  + $regs{"%ecx"} = $3;
 45  + $regs{"%edx"} = $4;
 46  + }
 47  + if ($line =~ /ESI: ([0-9a-f]+) EDI: ([0-9a-f]+) EBP: ([0-9a-f]+) ESP: ([0-9a-f]+)/) {
 48  + $regs{"%esi"} = $1;
 49  + $regs{"%edi"} = $2;
 50  + $regs{"%esp"} = $4;
 51  + }
 52  + if ($line =~ /RAX: ([0-9a-f]+) RBX: ([0-9a-f]+) RCX: ([0-9a-f]+)/) {
 53  + $regs{"%eax"} = $1;
 54  + $regs{"%ebx"} = $2;
 55  + $regs{"%ecx"} = $3;
 56  + }
 57  + if ($line =~ /RDX: ([0-9a-f]+) RSI: ([0-9a-f]+) RDI: ([0-9a-f]+)/) {
 58  + $regs{"%edx"} = $1;
 59  + $regs{"%esi"} = $2;
 60  + $regs{"%edi"} = $3;
 61  + }
 62  + if ($line =~ /RBP: ([0-9a-f]+) R08: ([0-9a-f]+) R09: ([0-9a-f]+)/) {
 63  + $regs{"%r08"} = $2;
 64  + $regs{"%r09"} = $3;
 65  + }
 66  + if ($line =~ /R10: ([0-9a-f]+) R11: ([0-9a-f]+) R12: ([0-9a-f]+)/) {
 67  + $regs{"%r10"} = $1;
 68  + $regs{"%r11"} = $2;
 69  + $regs{"%r12"} = $3;
 70  + }
 71  + if ($line =~ /R13: ([0-9a-f]+) R14: ([0-9a-f]+) R15: ([0-9a-f]+)/) {
 72  + $regs{"%r13"} = $1;
 73  + $regs{"%r14"} = $2;
 74  + $regs{"%r15"} = $3;
 75  + }
 76  + }
 77  + 
 78  + sub reg_name
 79  + {
 80  + my ($reg) = @_;
 81  + $reg =~ s/r(.)x/e\1x/;
 82  + $reg =~ s/r(.)i/e\1i/;
 83  + $reg =~ s/r(.)p/e\1p/;
 84  + return $reg;
 85  + }
 86  + 
 87  + sub process_x86_regs
 88  + {
 89  + my ($line, $cntr) = @_;
 90  + my $str = "";
 91  + if (length($line) < 40) {
 92  + return ""; # not an asm instruction
 93  + }
 94  + 
 95  + # find the arguments to the instruction
 96  + if ($line =~ /([0-9a-zA-Z\,\%\(\)\-\+]+)$/) {
 97  + $lastword = $1;
 98  + } else {
 99  + return "";
 100  + }
 101  + 
 102  + # we need to find the registers that get clobbered,
 103  + # since their value is no longer relevant for previous
 104  + # instructions in the stream.
 105  + 
 106  + $clobber = $lastword;
 107  + # first, remove all memory operands, they're read only
 108  + $clobber =~ s/\([a-z0-9\%\,]+\)//g;
 109  + # then, remove everything before the comma, that's the read part
 110  + $clobber =~ s/.*\,//g;
 111  + 
 112  + # if this is the instruction that faulted, we haven't actually done
 113  + # the write yet... nothing is clobbered.
 114  + if ($cntr == 0) {
 115  + $clobber = "";
 116  + }
 117  + 
 118  + foreach $reg (keys(%regs)) {
 119  + my $clobberprime = reg_name($clobber);
 120  + my $lastwordprime = reg_name($lastword);
 121  + my $val = $regs{$reg};
 122  + if ($val =~ /^[0]+$/) {
 123  + $val = "0";
 124  + } else {
 125  + $val =~ s/^0*//;
 126  + }
 127  + 
 128  + # first check if we're clobbering this register; if we do
 129  + # we print it with a =>, and then delete its value
 130  + if ($clobber =~ /$reg/ || $clobberprime =~ /$reg/) {
 131  + if (length($val) > 0) {
 132  + $str = $str . " $reg => $val ";
 133  + }
 134  + $regs{$reg} = "";
 135  + $val = "";
 136  + }
 137  + # now check if we're reading this register
 138  + if ($lastword =~ /$reg/ || $lastwordprime =~ /$reg/) {
 139  + if (length($val) > 0) {
 140  + $str = $str . " $reg = $val ";
 141  + }
 142  + }
 143  + }
 144  + return $str;
 145  + }
 146  + 
 147  + # parse the oops
 148   while (<STDIN>) {
 149   my $line = $_;
 150   if ($line =~ /EIP: 0060:\[\<([a-z0-9]+)\>\]/) {
 151   $target = $1;
 152   }
 153  + if ($line =~ /RIP: 0010:\[\<([a-z0-9]+)\>\]/) {
 154  + $target = $1;
 155  + }
 156   if ($line =~ /EIP is at ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]/) {
 157  + $function = $1;
 158  + $func_offset = $2;
 159  + }
 160  + if ($line =~ /RIP: 0010:\[\<[0-9a-f]+\>\] \[\<[0-9a-f]+\>\] ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]/) {
 161   $function = $1;
 162   $func_offset = $2;
 163   }
···   46   if ($line =~ /EIP is at ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]+\W\[([a-zA-Z0-9\_\-]+)\]/) {
 47   $module = $3;
 48   }
 49  + if ($line =~ /RIP: 0010:\[\<[0-9a-f]+\>\] \[\<[0-9a-f]+\>\] ([a-zA-Z0-9\_]+)\+(0x[0-9a-f]+)\/0x[a-f0-9]+\W\[([a-zA-Z0-9\_\-]+)\]/) {
 50  + $module = $3;
 51  + }
 52  + parse_x86_regs($line);
 53   }
 54  
 55   my $decodestart = hex($target) - hex($func_offset);
 56  + my $decodestop = hex($target) + 8192;
 57   if ($target eq "0") {
 58   print "No oops found!\n";
 59   print "Usage: \n";
···   84   my $state = 0;
 85   my $center = 0;
 86   my @lines;
 87  + my @reglines;
 88  
 89   sub InRange {
 90   my ($address, $target) = @_;
···   188  
 189   my $i;
 190  
 191  + 
 192  + # start annotating the registers in the asm.
 193  + # this goes from the oopsing point back, so that the annotator
 194  + # can track (opportunistically) which registers got written and
 195  + # whose value is no longer relevant.
 196  + 
 197  + $i = $center;
 198  + while ($i >= $start) {
 199  + $reglines[$i] = process_x86_regs($lines[$i], $center - $i);
 200  + $i = $i - 1;
 201   }
 202  
 203  + $i = $start;
 204  + while ($i < $finish) {
 205  + my $line;
 206  + if ($i == $center) {
 207  + $line = "*$lines[$i] ";
 208  + } else {
 209  + $line = " $lines[$i] ";
 210  + }
 211  + print $line;
 212  + if (defined($reglines[$i]) && length($reglines[$i]) > 0) {
 213  + my $c = 60 - length($line);
 214  + while ($c > 0) { print " "; $c = $c - 1; };
 215  + print "| $reglines[$i]";
 216  + }
 217  + if ($i == $center) {
 218  + print "<--- faulting instruction";
 219  + }
 220  + print "\n";
 221  + $i = $i +1;
 222  + }
 223  
+1
scripts/mod/file2alias.c
··· 210 static int do_hid_entry(const char *filename, 211 struct hid_device_id *id, char *alias) 212 { 213 id->vendor = TO_NATIVE(id->vendor); 214 id->product = TO_NATIVE(id->product); 215
··· 210 static int do_hid_entry(const char *filename, 211 struct hid_device_id *id, char *alias) 212 { 213 + id->bus = TO_NATIVE(id->bus); 214 id->vendor = TO_NATIVE(id->vendor); 215 id->product = TO_NATIVE(id->product); 216
+8
scripts/package/mkspec
··· 86 echo 'cp System.map $RPM_BUILD_ROOT'"/boot/System.map-$KERNELRELEASE" 87 88 echo 'cp .config $RPM_BUILD_ROOT'"/boot/config-$KERNELRELEASE" 89 echo "" 90 echo "%clean" 91 echo '#echo -rf $RPM_BUILD_ROOT'
··· 86 echo 'cp System.map $RPM_BUILD_ROOT'"/boot/System.map-$KERNELRELEASE" 87 88 echo 'cp .config $RPM_BUILD_ROOT'"/boot/config-$KERNELRELEASE" 89 + 90 + echo "%ifnarch ppc64" 91 + echo 'cp vmlinux vmlinux.orig' 92 + echo 'bzip2 -9 vmlinux' 93 + echo 'mv vmlinux.bz2 $RPM_BUILD_ROOT'"/boot/vmlinux-$KERNELRELEASE.bz2" 94 + echo 'mv vmlinux.orig vmlinux' 95 + echo "%endif" 96 + 97 echo "" 98 echo "%clean" 99 echo '#echo -rf $RPM_BUILD_ROOT'
+1 -8
scripts/setlocalversion
··· 58 # Check for svn and a svn repo. 59 if rev=`svn info 2>/dev/null | grep '^Last Changed Rev'`; then 60 rev=`echo $rev | awk '{print $NF}'` 61 - changes=`svn status 2>/dev/null | grep '^[AMD]' | wc -l` 62 - 63 - # Are there uncommitted changes? 64 - if [ $changes != 0 ]; then 65 - printf -- '-svn%s%s' "$rev" -dirty 66 - else 67 - printf -- '-svn%s' "$rev" 68 - fi 69 70 # All done with svn 71 exit
··· 58 # Check for svn and a svn repo. 59 if rev=`svn info 2>/dev/null | grep '^Last Changed Rev'`; then 60 rev=`echo $rev | awk '{print $NF}'` 61 + printf -- '-svn%s' "$rev" 62 63 # All done with svn 64 exit
+9 -3
scripts/tags.sh
··· 76 77 all_kconfigs() 78 { 79 - find_sources $ALLSOURCE_ARCHS 'Kconfig*' 80 } 81 82 all_defconfigs() ··· 102 -I ____cacheline_internodealigned_in_smp \ 103 -I EXPORT_SYMBOL,EXPORT_SYMBOL_GPL \ 104 --extra=+f --c-kinds=+px \ 105 - --regex-asm='/^ENTRY\(([^)]*)\).*/\1/' 106 107 all_kconfigs | xargs $1 -a \ 108 --langdef=kconfig --language-force=kconfig \ ··· 121 122 emacs() 123 { 124 - all_sources | xargs $1 -a 125 126 all_kconfigs | xargs $1 -a \ 127 --regex='/^[ \t]*\(\(menu\)*config\)[ \t]+\([a-zA-Z0-9_]+\)/\3/'
··· 76 77 all_kconfigs() 78 { 79 + for arch in $ALLSOURCE_ARCHS; do 80 + find_sources $arch 'Kconfig*' 81 + done 82 + find_other_sources 'Kconfig*' 83 } 84 85 all_defconfigs() ··· 99 -I ____cacheline_internodealigned_in_smp \ 100 -I EXPORT_SYMBOL,EXPORT_SYMBOL_GPL \ 101 --extra=+f --c-kinds=+px \ 102 + --regex-asm='/^ENTRY\(([^)]*)\).*/\1/' \ 103 + --regex-c='/^SYSCALL_DEFINE[[:digit:]]?\(([^,)]*).*/sys_\1/' 104 105 all_kconfigs | xargs $1 -a \ 106 --langdef=kconfig --language-force=kconfig \ ··· 117 118 emacs() 119 { 120 + all_sources | xargs $1 -a \ 121 + --regex='/^ENTRY(\([^)]*\)).*/\1/' \ 122 + --regex='/^SYSCALL_DEFINE[0-9]?(\([^,)]*\).*/sys_\1/' 123 124 all_kconfigs | xargs $1 -a \ 125 --regex='/^[ \t]*\(\(menu\)*config\)[ \t]+\([a-zA-Z0-9_]+\)/\3/'
+1 -1
sound/core/jack.c
··· 47 int err; 48 49 snprintf(jack->name, sizeof(jack->name), "%s %s", 50 - card->longname, jack->id); 51 jack->input_dev->name = jack->name; 52 53 /* Default to the sound card device. */
··· 47 int err; 48 49 snprintf(jack->name, sizeof(jack->name), "%s %s", 50 + card->shortname, jack->id); 51 jack->input_dev->name = jack->name; 52 53 /* Default to the sound card device. */
+2 -6
sound/pci/hda/hda_intel.c
··· 1947 return 0; 1948 } 1949 1950 - static int azx_resume_early(struct pci_dev *pci) 1951 - { 1952 - return pci_restore_state(pci); 1953 - } 1954 - 1955 static int azx_resume(struct pci_dev *pci) 1956 { 1957 struct snd_card *card = pci_get_drvdata(pci); 1958 struct azx *chip = card->private_data; 1959 1960 if (pci_enable_device(pci) < 0) { 1961 printk(KERN_ERR "hda-intel: pci_enable_device failed, " 1962 "disabling device\n"); ··· 2465 .remove = __devexit_p(azx_remove), 2466 #ifdef CONFIG_PM 2467 .suspend = azx_suspend, 2468 - .resume_early = azx_resume_early, 2469 .resume = azx_resume, 2470 #endif 2471 };
··· 1947 return 0; 1948 } 1949 1950 static int azx_resume(struct pci_dev *pci) 1951 { 1952 struct snd_card *card = pci_get_drvdata(pci); 1953 struct azx *chip = card->private_data; 1954 1955 + pci_set_power_state(pci, PCI_D0); 1956 + pci_restore_state(pci); 1957 if (pci_enable_device(pci) < 0) { 1958 printk(KERN_ERR "hda-intel: pci_enable_device failed, " 1959 "disabling device\n"); ··· 2468 .remove = __devexit_p(azx_remove), 2469 #ifdef CONFIG_PM 2470 .suspend = azx_suspend, 2471 .resume = azx_resume, 2472 #endif 2473 };
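With resume_early gone, azx_resume() does the whole dance itself, in the order that matters: back to D0, restore config space, then enable the device and take bus mastership. A skeletal legacy-PM resume callback following that ordering; the function name is hypothetical:

    #include <linux/pci.h>
    #include <linux/errno.h>

    static int my_pci_resume(struct pci_dev *pci)
    {
            pci_set_power_state(pci, PCI_D0);   /* full power first */
            pci_restore_state(pci);             /* then config space */
            if (pci_enable_device(pci) < 0)     /* then re-enable */
                    return -EIO;
            pci_set_master(pci);
            return 0;
    }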
+4 -13
sound/pci/oxygen/virtuoso.c
··· 26 * SPI 0 -> 1st PCM1796 (front) 27 * SPI 1 -> 2nd PCM1796 (surround) 28 * SPI 2 -> 3rd PCM1796 (center/LFE) 29 - * SPI 4 -> 4th PCM1796 (back) and EEPROM self-destruct (do not use!) 30 * 31 * GPIO 2 -> M0 of CS5381 32 * GPIO 3 -> M1 of CS5381 ··· 207 static inline void pcm1796_write_spi(struct oxygen *chip, unsigned int codec, 208 u8 reg, u8 value) 209 { 210 - /* 211 - * We don't want to do writes on SPI 4 because the EEPROM, which shares 212 - * the same pin, might get confused and broken. We'd better take care 213 - * that the driver works with the default register values ... 214 - */ 215 - #if 0 216 /* maps ALSA channel pair number to SPI output */ 217 static const u8 codec_map[4] = { 218 0, 1, 2, 4 ··· 217 (codec_map[codec] << OXYGEN_SPI_CODEC_SHIFT) | 218 OXYGEN_SPI_CEN_LATCH_CLOCK_HI, 219 (reg << 8) | value); 220 - #endif 221 } 222 223 static inline void pcm1796_write_i2c(struct oxygen *chip, unsigned int codec, ··· 750 751 static int xonar_d2_control_filter(struct snd_kcontrol_new *template) 752 { 753 - if (!strncmp(template->name, "Master Playback ", 16)) 754 - /* disable volume/mute because they would require SPI writes */ 755 - return 1; 756 if (!strncmp(template->name, "CD Capture ", 11)) 757 /* CD in is actually connected to the video in pin */ 758 template->private_value ^= AC97_CD ^ AC97_VIDEO; ··· 840 .dac_volume_min = 0x0f, 841 .dac_volume_max = 0xff, 842 .misc_flags = OXYGEN_MISC_MIDI, 843 - .function_flags = OXYGEN_FUNCTION_SPI, 844 - .dac_i2s_format = OXYGEN_I2S_FORMAT_I2S, 845 .adc_i2s_format = OXYGEN_I2S_FORMAT_LJUST, 846 }; 847
··· 26 * SPI 0 -> 1st PCM1796 (front) 27 * SPI 1 -> 2nd PCM1796 (surround) 28 * SPI 2 -> 3rd PCM1796 (center/LFE) 29 + * SPI 4 -> 4th PCM1796 (back) 30 * 31 * GPIO 2 -> M0 of CS5381 32 * GPIO 3 -> M1 of CS5381 ··· 207 static inline void pcm1796_write_spi(struct oxygen *chip, unsigned int codec, 208 u8 reg, u8 value) 209 { 210 /* maps ALSA channel pair number to SPI output */ 211 static const u8 codec_map[4] = { 212 0, 1, 2, 4 ··· 223 (codec_map[codec] << OXYGEN_SPI_CODEC_SHIFT) | 224 OXYGEN_SPI_CEN_LATCH_CLOCK_HI, 225 (reg << 8) | value); 226 } 227 228 static inline void pcm1796_write_i2c(struct oxygen *chip, unsigned int codec, ··· 757 758 static int xonar_d2_control_filter(struct snd_kcontrol_new *template) 759 { 760 if (!strncmp(template->name, "CD Capture ", 11)) 761 /* CD in is actually connected to the video in pin */ 762 template->private_value ^= AC97_CD ^ AC97_VIDEO; ··· 850 .dac_volume_min = 0x0f, 851 .dac_volume_max = 0xff, 852 .misc_flags = OXYGEN_MISC_MIDI, 853 + .function_flags = OXYGEN_FUNCTION_SPI | 854 + OXYGEN_FUNCTION_ENABLE_SPI_4_5, 855 + .dac_i2s_format = OXYGEN_I2S_FORMAT_LJUST, 856 .adc_i2s_format = OXYGEN_I2S_FORMAT_LJUST, 857 }; 858
+11 -9
sound/usb/usbaudio.c
··· 2524 * build the rate table and bitmap flags 2525 */ 2526 int r, idx; 2527 - unsigned int nonzero_rates = 0; 2528 2529 fp->rate_table = kmalloc(sizeof(int) * nr_rates, GFP_KERNEL); 2530 if (fp->rate_table == NULL) { ··· 2531 return -1; 2532 } 2533 2534 - fp->nr_rates = nr_rates; 2535 - fp->rate_min = fp->rate_max = combine_triple(&fmt[8]); 2536 for (r = 0, idx = offset + 1; r < nr_rates; r++, idx += 3) { 2537 unsigned int rate = combine_triple(&fmt[idx]); 2538 /* C-Media CM6501 mislabels its 96 kHz altsetting */ 2539 if (rate == 48000 && nr_rates == 1 && 2540 - chip->usb_id == USB_ID(0x0d8c, 0x0201) && 2541 fp->altsetting == 5 && fp->maxpacksize == 392) 2542 rate = 96000; 2543 - fp->rate_table[r] = rate; 2544 - nonzero_rates |= rate; 2545 - if (rate < fp->rate_min) 2546 fp->rate_min = rate; 2547 - else if (rate > fp->rate_max) 2548 fp->rate_max = rate; 2549 fp->rates |= snd_pcm_rate_to_rate_bit(rate); 2550 } 2551 - if (!nonzero_rates) { 2552 hwc_debug("All rates were zero. Skipping format!\n"); 2553 return -1; 2554 }
··· 2524 * build the rate table and bitmap flags 2525 */ 2526 int r, idx; 2527 2528 fp->rate_table = kmalloc(sizeof(int) * nr_rates, GFP_KERNEL); 2529 if (fp->rate_table == NULL) { ··· 2532 return -1; 2533 } 2534 2535 + fp->nr_rates = 0; 2536 + fp->rate_min = fp->rate_max = 0; 2537 for (r = 0, idx = offset + 1; r < nr_rates; r++, idx += 3) { 2538 unsigned int rate = combine_triple(&fmt[idx]); 2539 + if (!rate) 2540 + continue; 2541 /* C-Media CM6501 mislabels its 96 kHz altsetting */ 2542 if (rate == 48000 && nr_rates == 1 && 2543 + (chip->usb_id == USB_ID(0x0d8c, 0x0201) || 2544 + chip->usb_id == USB_ID(0x0d8c, 0x0102)) && 2545 fp->altsetting == 5 && fp->maxpacksize == 392) 2546 rate = 96000; 2547 + fp->rate_table[fp->nr_rates] = rate; 2548 + if (!fp->rate_min || rate < fp->rate_min) 2549 fp->rate_min = rate; 2550 + if (!fp->rate_max || rate > fp->rate_max) 2551 fp->rate_max = rate; 2552 fp->rates |= snd_pcm_rate_to_rate_bit(rate); 2553 + fp->nr_rates++; 2554 } 2555 + if (!fp->nr_rates) { 2556 hwc_debug("All rates were zero. Skipping format!\n"); 2557 return -1; 2558 }
+1
sound/usb/usbmidi.c
··· 1625 } 1626 1627 ep_info.out_ep = get_endpoint(hostif, 2)->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK; 1628 ep_info.out_cables = endpoint->out_cables & 0x5555; 1629 err = snd_usbmidi_out_endpoint_create(umidi, &ep_info, &umidi->endpoints[0]); 1630 if (err < 0)
··· 1625 } 1626 1627 ep_info.out_ep = get_endpoint(hostif, 2)->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK; 1628 + ep_info.out_interval = 0; 1629 ep_info.out_cables = endpoint->out_cables & 0x5555; 1630 err = snd_usbmidi_out_endpoint_create(umidi, &ep_info, &umidi->endpoints[0]); 1631 if (err < 0)
+2 -4
virt/kvm/iommu.c
··· 73 { 74 int i, r = 0; 75 76 - down_read(&kvm->slots_lock); 77 for (i = 0; i < kvm->nmemslots; i++) { 78 r = kvm_iommu_map_pages(kvm, kvm->memslots[i].base_gfn, 79 kvm->memslots[i].npages); 80 if (r) 81 break; 82 } 83 - up_read(&kvm->slots_lock); 84 return r; 85 } 86 ··· 189 static int kvm_iommu_unmap_memslots(struct kvm *kvm) 190 { 191 int i; 192 - down_read(&kvm->slots_lock); 193 for (i = 0; i < kvm->nmemslots; i++) { 194 kvm_iommu_put_pages(kvm, kvm->memslots[i].base_gfn, 195 kvm->memslots[i].npages); 196 } 197 - up_read(&kvm->slots_lock); 198 199 return 0; 200 }
··· 73 { 74 int i, r = 0; 75 76 for (i = 0; i < kvm->nmemslots; i++) { 77 r = kvm_iommu_map_pages(kvm, kvm->memslots[i].base_gfn, 78 kvm->memslots[i].npages); 79 if (r) 80 break; 81 } 82 + 83 return r; 84 } 85 ··· 190 static int kvm_iommu_unmap_memslots(struct kvm *kvm) 191 { 192 int i; 193 + 194 for (i = 0; i < kvm->nmemslots; i++) { 195 kvm_iommu_put_pages(kvm, kvm->memslots[i].base_gfn, 196 kvm->memslots[i].npages); 197 } 198 199 return 0; 200 }
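The down_read()/up_read() pairs move out of these walkers: kvm_iommu_map_memslots() and kvm_iommu_unmap_memslots() now assume the caller already holds kvm->slots_lock, which the device-assignment ioctl in kvm_main.c (next hunk) takes before calling in. The resulting convention, sketched with a hypothetical caller:

    #include <linux/kvm_host.h>

    /* Sketch: the memslot walkers no longer lock; the caller must. */
    static int map_assigned_device_slots(struct kvm *kvm)
    {
            int r;

            down_read(&kvm->slots_lock);   /* pin the memslot array */
            r = kvm_iommu_map_memslots(kvm);
            up_read(&kvm->slots_lock);
            return r;
    }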
+33 -10
virt/kvm/kvm_main.c
··· 173 assigned_dev->host_irq_disabled = false; 174 } 175 mutex_unlock(&assigned_dev->kvm->lock); 176 - kvm_put_kvm(assigned_dev->kvm); 177 } 178 179 static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id) 180 { 181 struct kvm_assigned_dev_kernel *assigned_dev = 182 (struct kvm_assigned_dev_kernel *) dev_id; 183 - 184 - kvm_get_kvm(assigned_dev->kvm); 185 186 schedule_work(&assigned_dev->interrupt_work); 187 ··· 210 } 211 } 212 213 static void kvm_free_assigned_irq(struct kvm *kvm, 214 struct kvm_assigned_dev_kernel *assigned_dev) 215 { ··· 226 if (!assigned_dev->irq_requested_type) 227 return; 228 229 - if (cancel_work_sync(&assigned_dev->interrupt_work)) 230 - /* We had pending work. That means we will have to take 231 - * care of kvm_put_kvm. 232 - */ 233 - kvm_put_kvm(kvm); 234 235 free_irq(assigned_dev->host_irq, (void *)assigned_dev); 236 ··· 296 297 if (irqchip_in_kernel(kvm)) { 298 if (!msi2intx && 299 - adev->irq_requested_type & KVM_ASSIGNED_DEV_HOST_MSI) { 300 - free_irq(adev->host_irq, (void *)kvm); 301 pci_disable_msi(adev->dev); 302 } 303 ··· 466 struct kvm_assigned_dev_kernel *match; 467 struct pci_dev *dev; 468 469 mutex_lock(&kvm->lock); 470 471 match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head, ··· 528 529 out: 530 mutex_unlock(&kvm->lock); 531 return r; 532 out_list_del: 533 list_del(&match->list); ··· 540 out_free: 541 kfree(match); 542 mutex_unlock(&kvm->lock); 543 return r; 544 } 545 #endif ··· 803 return young; 804 } 805 806 static const struct mmu_notifier_ops kvm_mmu_notifier_ops = { 807 .invalidate_page = kvm_mmu_notifier_invalidate_page, 808 .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start, 809 .invalidate_range_end = kvm_mmu_notifier_invalidate_range_end, 810 .clear_flush_young = kvm_mmu_notifier_clear_flush_young, 811 }; 812 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ 813 ··· 905 { 906 struct mm_struct *mm = kvm->mm; 907 908 spin_lock(&kvm_lock); 909 list_del(&kvm->vm_list); 910 spin_unlock(&kvm_lock);
···   173   assigned_dev->host_irq_disabled = false;
 174   }
 175   mutex_unlock(&assigned_dev->kvm->lock);
 176   }
 177  
 178   static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id)
 179   {
 180   struct kvm_assigned_dev_kernel *assigned_dev =
 181   (struct kvm_assigned_dev_kernel *) dev_id;
 182  
 183   schedule_work(&assigned_dev->interrupt_work);
 184  
···   213   }
 214   }
 215  
 216  + /* This function implicitly holds the kvm->lock mutex due to cancel_work_sync() */
 217   static void kvm_free_assigned_irq(struct kvm *kvm,
 218   struct kvm_assigned_dev_kernel *assigned_dev)
 219   {
···   228   if (!assigned_dev->irq_requested_type)
 229   return;
 230  
 231  + /*
 232  + * In kvm_free_device_irq, cancel_work_sync() returns true if:
 233  + * 1. the work is scheduled, and then cancelled.
 234  + * 2. the work callback is executed.
 235  + *
 236  + * The first case ensures that the irq is disabled and no more
 237  + * events will happen. But in the second case, the irq may be
 238  + * enabled (e.g. for MSI). So we disable the irq here to prevent
 239  + * further events.
 240  + *
 241  + * Note this may result in a nested disable if the interrupt type
 242  + * is INTx, but that's OK since we are going to free it anyway.
 243  + *
 244  + * If this function is part of VM destruction, make sure the kvm
 245  + * state is still valid at this point, since we may also have to
 246  + * wait for interrupt_work to finish.
 247  + */
 248  + disable_irq_nosync(assigned_dev->host_irq);
 249  + cancel_work_sync(&assigned_dev->interrupt_work);
 250  
 251   free_irq(assigned_dev->host_irq, (void *)assigned_dev);
 252  
···   285  
 286   if (irqchip_in_kernel(kvm)) {
 287   if (!msi2intx &&
 288  + (adev->irq_requested_type & KVM_ASSIGNED_DEV_HOST_MSI)) {
 289  + free_irq(adev->host_irq, (void *)adev);
 290   pci_disable_msi(adev->dev);
 291   }
 292  
···   455   struct kvm_assigned_dev_kernel *match;
 456   struct pci_dev *dev;
 457  
 458  + down_read(&kvm->slots_lock);
 459   mutex_lock(&kvm->lock);
 460  
 461   match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
···   516  
 517   out:
 518   mutex_unlock(&kvm->lock);
 519  + up_read(&kvm->slots_lock);
 520   return r;
 521   out_list_del:
 522   list_del(&match->list);
···   527   out_free:
 528   kfree(match);
 529   mutex_unlock(&kvm->lock);
 530  + up_read(&kvm->slots_lock);
 531   return r;
 532   }
 533   #endif
···   789   return young;
 790   }
 791  
 792  + static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
 793  + struct mm_struct *mm)
 794  + {
 795  + struct kvm *kvm = mmu_notifier_to_kvm(mn);
 796  + kvm_arch_flush_shadow(kvm);
 797  + }
 798  + 
 799   static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
 800   .invalidate_page = kvm_mmu_notifier_invalidate_page,
 801   .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start,
 802   .invalidate_range_end = kvm_mmu_notifier_invalidate_range_end,
 803   .clear_flush_young = kvm_mmu_notifier_clear_flush_young,
 804  + .release = kvm_mmu_notifier_release,
 805   };
 806   #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
 807  
···   883   {
 884   struct mm_struct *mm = kvm->mm;
 885  
 886  + kvm_arch_sync_events(kvm);
 887   spin_lock(&kvm_lock);
 888   list_del(&kvm->vm_list);
 889   spin_unlock(&kvm_lock);
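The disable_irq_nosync() + cancel_work_sync() pairing above is a general quiesce pattern for IRQ-driven work: stop new interrupts from requeuing the work, wait out any instance already running, then free the irq. The shape in isolation, with hypothetical types and names:

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    struct my_dev {
            int irq;
            struct work_struct work;
    };

    /* Sketch: quiesce an irq-driven work item before freeing the irq. */
    static void my_dev_teardown(struct my_dev *d)
    {
            disable_irq_nosync(d->irq);        /* no new work gets queued */
            cancel_work_sync(&d->work);        /* wait for a running instance */
            free_irq(d->irq, d);               /* now safe to tear down */
    }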