 N: Pavel Machek
 E: pavel@ucw.cz
-E: pavel@suse.cz
 D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd
 D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB,
 D: work on suspend-to-ram/disk, killing duplicates from ioctl32
+1-1
Documentation/ABI/testing/sysfs-firmware-memmap
 What:		/sys/firmware/memmap/
 Date:		June 2008
-Contact:	Bernhard Walle <bwalle@suse.de>
+Contact:	Bernhard Walle <bernhard.walle@gmx.de>
 Description:
 	On all platforms, the firmware provides a memory map which the
 	kernel reads. The resources from that memory map are registered
+1-1
Documentation/PCI/PCIEBUS-HOWTO.txt
 int pcie_port_service_register(struct pcie_port_service_driver *new)
 
-This API replaces the Linux Driver Model's pci_module_init API. A
+This API replaces the Linux Driver Model's pci_register_driver API. A
 service driver should always calls pcie_port_service_register at
 module init. Note that after service driver being loaded, calls
 such as pci_enable_device(dev) and pci_set_master(dev) are no longer
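For readers updating a driver against this change, a minimal registration sketch follows; it assumes the pcie_port_service_driver structure from include/linux/pcieport_if.h and leaves the probe/remove callbacks and service-identification fields, whose exact form varies by kernel version, as comments:

#include <linux/module.h>
#include <linux/pcieport_if.h>

/* Hypothetical port service driver: only .name is filled in here; the
 * probe/remove callbacks and the port/service identification fields
 * must be supplied as described elsewhere in this HOWTO. */
static struct pcie_port_service_driver sample_service_driver = {
	.name = "sample_pcie_service",
};

static int __init sample_service_init(void)
{
	/* Register with the PCI Express Port Bus driver rather than
	 * calling pci_register_driver() directly. */
	return pcie_port_service_register(&sample_service_driver);
}

static void __exit sample_service_exit(void)
{
	pcie_port_service_unregister(&sample_service_driver);
}

module_init(sample_service_init);
module_exit(sample_service_exit);
MODULE_LICENSE("GPL");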
+2-4
Documentation/cgroups/cgroups.txt
 When a task is moved from one cgroup to another, it gets a new
 css_set pointer - if there's an already existing css_set with the
 desired collection of cgroups then that group is reused, else a new
-css_set is allocated. Note that the current implementation uses a
-linear search to locate an appropriate existing css_set, so isn't
-very efficient. A future version will use a hash table for better
-performance.
+css_set is allocated. The appropriate existing css_set is located by
+looking into a hash table.
 
 To allow access from a cgroup to the css_sets (and hence tasks)
 that comprise it, a set of cg_cgroup_link objects form a lattice;
+36-27
Documentation/cgroups/cpusets.txt
  - in fork and exit, to attach and detach a task from its cpuset.
  - in sched_setaffinity, to mask the requested CPUs by what's
    allowed in that tasks cpuset.
- - in sched.c migrate_all_tasks(), to keep migrating tasks within
+ - in sched.c migrate_live_tasks(), to keep migrating tasks within
    the CPUs allowed by their cpuset, if possible.
  - in the mbind and set_mempolicy system calls, to mask the requested
    Memory Nodes by what's allowed in that tasks cpuset.
...
  - mem_exclusive flag: is memory placement exclusive?
  - mem_hardwall flag: is memory allocation hardwalled
  - memory_pressure: measure of how much paging pressure in cpuset
+ - memory_spread_page flag: if set, spread page cache evenly on allowed nodes
+ - memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
+ - sched_load_balance flag: if set, load balance within CPUs on that cpuset
+ - sched_relax_domain_level: the searching range when migrating tasks
 
 In addition, the root cpuset only has the following file:
  - memory_pressure_enabled flag: compute memory_pressure?
...
 This is useful both on tightly managed systems running a wide mix of
 submitted jobs, which may choose to terminate or re-prioritize jobs that
-are trying to use more memory than allowed on the nodes assigned them,
+are trying to use more memory than allowed on the nodes assigned to them,
 and with tightly coupled, long running, massively parallel scientific
 computing jobs that will dramatically fail to meet required performance
 goals if they start to use more memory than allowed to them.
...
 The algorithmic cost of load balancing and its impact on key shared
 kernel data structures such as the task list increases more than
 linearly with the number of CPUs being balanced. So the scheduler
-has support to partition the systems CPUs into a number of sched
+has support to partition the systems CPUs into a number of sched
 domains such that it only load balances within each sched domain.
 Each sched domain covers some subset of the CPUs in the system;
 no two sched domains overlap; some CPUs might not be in any sched
...
 The internal kernel cpuset to scheduler interface passes from the
 cpuset code to the scheduler code a partition of the load balanced
 CPUs in the system. This partition is a set of subsets (represented
-as an array of cpumask_t) of CPUs, pairwise disjoint, that cover all
-the CPUs that must be load balanced.
+as an array of struct cpumask) of CPUs, pairwise disjoint, that cover
+all the CPUs that must be load balanced.
 
-Whenever the 'sched_load_balance' flag changes, or CPUs come or go
-from a cpuset with this flag enabled, or a cpuset with this flag
-enabled is removed, the cpuset code builds a new such partition and
-passes it to the scheduler sched domain setup code, to have the sched
-domains rebuilt as necessary.
+The cpuset code builds a new such partition and passes it to the
+scheduler sched domain setup code, to have the sched domains rebuilt
+as necessary, whenever:
+ - the 'sched_load_balance' flag of a cpuset with non-empty CPUs changes,
+ - or CPUs come or go from a cpuset with this flag enabled,
+ - or 'sched_relax_domain_level' value of a cpuset with non-empty CPUs
+   and with this flag enabled changes,
+ - or a cpuset with non-empty CPUs and with this flag enabled is removed,
+ - or a cpu is offlined/onlined.
 
 This partition exactly defines what sched domains the scheduler should
-setup - one sched domain for each element (cpumask_t) in the partition.
+setup - one sched domain for each element (struct cpumask) in the
+partition.
 
 The scheduler remembers the currently active sched domain partitions.
 When the scheduler routine partition_sched_domains() is invoked from
...
 requests 0 and others are -1 then 0 is used.
 
 Note that modifying this file will have both good and bad effects,
-and whether it is acceptable or not will be depend on your situation.
+and whether it is acceptable or not depends on your situation.
 Don't modify this file if you are not sure.
 
 If your situation is:
...
 If a cpuset has its 'cpus' modified, then each task in that cpuset
 will have its allowed CPU placement changed immediately. Similarly,
-if a tasks pid is written to a cpusets 'tasks' file, in either its
-current cpuset or another cpuset, then its allowed CPU placement is
-changed immediately. If such a task had been bound to some subset
-of its cpuset using the sched_setaffinity() call, the task will be
-allowed to run on any CPU allowed in its new cpuset, negating the
-affect of the prior sched_setaffinity() call.
+if a tasks pid is written to another cpusets 'tasks' file, then its
+allowed CPU placement is changed immediately. If such a task had been
+bound to some subset of its cpuset using the sched_setaffinity() call,
+the task will be allowed to run on any CPU allowed in its new cpuset,
+negating the effect of the prior sched_setaffinity() call.
 
 In summary, the memory placement of a task whose cpuset is changed is
 updated by the kernel, on the next allocation of a page for that task,
-but the processor placement is not updated, until that tasks pid is
-rewritten to the 'tasks' file of its cpuset. This is done to avoid
-impacting the scheduler code in the kernel with a check for changes
-in a tasks processor placement.
+and the processor placement is updated immediately.
 
 Normally, once a page is allocated (given a physical page
 of main memory) then that page stays on whatever node it
...
     # The next line should display '/Charlie'
     cat /proc/self/cpuset
 
-In the future, a C library interface to cpusets will likely be
-available. For now, the only way to query or modify cpusets is
-via the cpuset file system, using the various cd, mkdir, echo, cat,
-rmdir commands from the shell, or their equivalent from C.
+There are ways to query or modify cpusets:
+ - via the cpuset file system directly, using the various cd, mkdir, echo,
+   cat, rmdir commands from the shell, or their equivalent from C.
+ - via the C library libcpuset.
+ - via the C library libcgroup.
+   (http://sourceforge.net/projects/libcg/)
+ - via the python application cset.
+   (http://developer.novell.com/wiki/index.php/Cpuset)
 
 The sched_setaffinity calls can also be done at the shell prompt using
 SGI's runon or Robert Love's taskset. The mbind and set_mempolicy
...
 is equivalent to
 
-mount -t cgroup -ocpuset X /dev/cpuset
+mount -t cgroup -ocpuset,noprefix X /dev/cpuset
 echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent
 
 2.2 Adding/removing cpus
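A small userspace illustration of the 'tasks' file semantics above may help; this is only a sketch, and it assumes the cpuset filesystem is mounted at /dev/cpuset with a child cpuset named Charlie, as in the example session elsewhere in this document:

/* Sketch: move the calling process into the /Charlie cpuset by writing
 * its pid to that cpuset's 'tasks' file.  Per the text above, its CPU
 * placement changes immediately; its memory placement is updated on the
 * next page allocation. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	FILE *f = fopen("/dev/cpuset/Charlie/tasks", "w");

	if (!f) {
		perror("fopen /dev/cpuset/Charlie/tasks");
		return EXIT_FAILURE;
	}
	fprintf(f, "%d", (int)getpid());
	if (fclose(f) != 0) {		/* write errors surface on flush */
		perror("fclose");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}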
/* Disk protection for HP machines.
 *
 * Copyright 2008 Eric Piel
 * Copyright 2009 Pavel Machek <pavel@suse.cz>
 *
 * GPLv2.
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>
#include <signal.h>

void write_int(char *path, int i)
{
	char buf[1024];
	int fd = open(path, O_RDWR);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	sprintf(buf, "%d", i);
	if (write(fd, buf, strlen(buf)) != strlen(buf)) {
		perror("write");
		exit(1);
	}
	close(fd);
}

void set_led(int on)
{
	write_int("/sys/class/leds/hp::hddprotect/brightness", on);
}

void protect(int seconds)
{
	write_int("/sys/block/sda/device/unload_heads", seconds*1000);
}

int on_ac(void)
{
//	/sys/class/power_supply/AC0/online
}

int lid_open(void)
{
//	/proc/acpi/button/lid/LID/state
}

void ignore_me(void)
{
	protect(0);
	set_led(0);
}

int main(int argc, char *argv[])
{
	int fd, ret;

	fd = open("/dev/freefall", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	signal(SIGALRM, ignore_me);

	for (;;) {
		unsigned char count;

		ret = read(fd, &count, sizeof(count));
		alarm(0);
		if ((ret == -1) && (errno == EINTR)) {
			/* Alarm expired, time to unpark the heads */
			continue;
		}

		if (ret != sizeof(count)) {
			perror("read");
			break;
		}

		protect(21);
		set_led(1);
		if (1 || on_ac() || lid_open()) {
			alarm(2);
		} else {
			alarm(20);
		}
	}

	close(fd);
	return EXIT_SUCCESS;
}
+8
Documentation/hwmon/lis3lv02d
 This driver also provides an absolute input class device, allowing
 the laptop to act as a pinball machine-esque joystick.
 
+Another feature of the driver is misc device called "freefall" that
+acts similar to /dev/rtc and reacts on free-fall interrupts received
+from the device. It supports blocking operations, poll/select and
+fasync operation modes. You must read 1 bytes from the device. The
+result is number of free-fall interrupts since the last successful
+read (or 255 if number of interrupts would not fit).
+
+
 Axes orientation
 ----------------
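A minimal consumer sketch for the new "freefall" device; it relies only on the behaviour stated in the paragraph added above (blocking one-byte reads, 255 meaning the count did not fit), and hpfall.c elsewhere in this patch set is the fuller example:

/* Sketch: block on /dev/freefall and print how many free-fall
 * interrupts were seen since the previous successful read. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	unsigned char count;
	int fd = open("/dev/freefall", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/freefall");
		return 1;
	}
	while (read(fd, &count, 1) == 1)	/* the device is read one byte at a time */
		printf("%u free-fall interrupt(s)%s\n",
		       (unsigned int)count, count == 255 ? " or more" : "");
	close(fd);
	return 0;
}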
+2-4
Documentation/tracers/mmiotrace.txt
 events were lost, the trace is incomplete. You should enlarge the buffers and
 try again. Buffers are enlarged by first seeing how large the current buffers
 are:
-$ cat /debug/tracing/trace_entries
+$ cat /debug/tracing/buffer_size_kb
 gives you a number. Approximately double this number and write it back, for
 instance:
-$ echo 0 > /debug/tracing/tracing_enabled
-$ echo 128000 > /debug/tracing/trace_entries
-$ echo 1 > /debug/tracing/tracing_enabled
+$ echo 128000 > /debug/tracing/buffer_size_kb
 Then start again from the top.
 
 If you are doing a trace for a driver project, e.g. Nouveau, you should also
+18-11
MAINTAINERS
 L:	linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
 S:	Maintained
 
+ARM/NUVOTON W90X900 ARM ARCHITECTURE
+P:	Wan ZongShun
+M:	mcuos.com@gmail.com
+L:	linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
+W:	http://www.mcuos.com
+S:	Maintained
+
 ARPD SUPPORT
 P:	Jonathan Layes
 L:	netdev@vger.kernel.org
...
 S:	Maintained
 
 HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
-P:	Robert Love
-M:	rlove@rlove.org
-M:	linux-kernel@vger.kernel.org
-W:	http://www.kernel.org/pub/linux/kernel/people/rml/hdaps/
+P:	Frank Seidel
+M:	frank@f-seidel.de
+L:	lm-sensors@lm-sensors.org
+W:	http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/
 S:	Maintained
 
 GSPCA FINEPIX SUBDRIVER
...
 HIBERNATION (aka Software Suspend, aka swsusp)
 P:	Pavel Machek
-M:	pavel@suse.cz
+M:	pavel@ucw.cz
 P:	Rafael J. Wysocki
 M:	rjw@sisk.pl
 L:	linux-pm@lists.linux-foundation.org
...
 M:	jeremy@xensource.com
 P:	Chris Wright
 M:	chrisw@sous-sol.org
-P:	Zachary Amsden
-M:	zach@vmware.com
+P:	Alok Kataria
+M:	akataria@vmware.com
 P:	Rusty Russell
 M:	rusty@rustcorp.com.au
 L:	virtualization@lists.osdl.org
...
 P:	Len Brown
 M:	len.brown@intel.com
 P:	Pavel Machek
-M:	pavel@suse.cz
+M:	pavel@ucw.cz
 P:	Rafael J. Wysocki
 M:	rjw@sisk.pl
 L:	linux-pm@lists.linux-foundation.org
...
 S:	Maintained
 
 ZR36067 VIDEO FOR LINUX DRIVER
-P:	Ronald Bultje
-M:	rbultje@ronald.bitfreak.net
 L:	mjpeg-users@lists.sourceforge.net
+L:	linux-media@vger.kernel.org
 W:	http://mjpeg.sourceforge.net/driver-zoran/
-S:	Maintained
+T:	Mercurial http://linuxtv.org/hg/v4l-dvb
+S:	Odd Fixes
 
 ZS DECSTATION Z85C30 SERIAL DRIVER
 P:	Maciej W. Rozycki
+1-1
Makefile
 # output directory.
 outputmakefile:
 ifneq ($(KBUILD_SRC),)
+	$(Q)ln -fsn $(srctree) source
 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile \
 	    $(srctree) $(objtree) $(VERSION) $(PATCHLEVEL)
 endif
...
 		mkdir -p include2;				\
 		ln -fsn $(srctree)/include/asm-$(SRCARCH) include2/asm; \
 	fi
-	ln -fsn $(srctree) source
 endif
 
 # prepare2 creates a makefile if using a separate output directory
+1-1
README
    values to random values.
 
    You can find more information on using the Linux kernel config tools
-   in Documentation/kbuild/make-configs.txt.
+   in Documentation/kbuild/kconfig.txt.
 
    NOTES on "make config":
    - having unnecessary drivers will make the kernel bigger, and can
+4-4
arch/alpha/kernel/process.c
 	if (cpuid != boot_cpuid) {
 		flags |= 0x00040000UL; /* "remain halted" */
 		*pflags = flags;
-		cpu_clear(cpuid, cpu_present_map);
-		cpu_clear(cpuid, cpu_possible_map);
+		set_cpu_present(cpuid, false);
+		set_cpu_possible(cpuid, false);
 		halt();
 	}
 #endif
...
 #ifdef CONFIG_SMP
 	/* Wait for the secondaries to halt. */
-	cpu_clear(boot_cpuid, cpu_present_map);
-	cpu_clear(boot_cpuid, cpu_possible_map);
+	set_cpu_present(boot_cpuid, false);
+	set_cpu_possible(boot_cpuid, false);
 	while (cpus_weight(cpu_present_map))
 		barrier();
 #endif
+6-6
arch/alpha/kernel/smp.c
 smp_callin(void)
 {
 	int cpuid = hard_smp_processor_id();
-	cpumask_t mask = cpu_online_map;
 
-	if (cpu_test_and_set(cpuid, mask)) {
+	if (cpu_online(cpuid)) {
 		printk("??, cpu 0x%x already present??\n", cpuid);
 		BUG();
 	}
+	set_cpu_online(cpuid, true);
 
 	/* Turn on machine checks. */
 	wrmces(7);
...
 			((char *)cpubase + i*hwrpb->processor_size);
 		if ((cpu->flags & 0x1cc) == 0x1cc) {
 			smp_num_probed++;
-			cpu_set(i, cpu_possible_map);
-			cpu_set(i, cpu_present_map);
+			set_cpu_possible(i, true);
+			set_cpu_present(i, true);
 			cpu->pal_revision = boot_cpu_palrev;
 		}
...
 	/* Nothing to do on a UP box, or when told not to. */
 	if (smp_num_probed == 1 || max_cpus == 0) {
-		cpu_possible_map = cpumask_of_cpu(boot_cpuid);
-		cpu_present_map = cpumask_of_cpu(boot_cpuid);
+		init_cpu_possible(cpumask_of(boot_cpuid));
+		init_cpu_present(cpumask_of(boot_cpuid));
 		printk(KERN_INFO "SMP mode deactivated.\n");
 		return;
 	}
+1-1
arch/arm/configs/at91sam9260ek_defconfig
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # USB-based Watchdog Cards
+1-1
arch/arm/configs/at91sam9261ek_defconfig
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # USB-based Watchdog Cards
+1-1
arch/arm/configs/at91sam9263ek_defconfig
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # USB-based Watchdog Cards
+1-1
arch/arm/configs/at91sam9rlek_defconfig
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # Sonics Silicon Backplane
+1-1
arch/arm/configs/qil-a9260_defconfig
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-# CONFIG_AT91SAM9_WATCHDOG is not set
+# CONFIG_AT91SAM9X_WATCHDOG is not set
 
 #
 # USB-based Watchdog Cards
+2-2
arch/arm/kernel/elf.c
  */
 int arm_elf_read_implies_exec(const struct elf32_hdr *x, int executable_stack)
 {
-	if (executable_stack != EXSTACK_ENABLE_X)
+	if (executable_stack != EXSTACK_DISABLE_X)
 		return 1;
-	if (cpu_architecture() <= CPU_ARCH_ARMv6)
+	if (cpu_architecture() < CPU_ARCH_ARMv6)
 		return 1;
 	return 0;
 }
 /*--------------------------------------------------------------------------*/
 
-/* This lock class tells lockdep that GPIO irqs are in a different
+/*
+ * This lock class tells lockdep that GPIO irqs are in a different
  * category than their parents, so it won't report false recursion.
  */
 static struct lock_class_key gpio_lock_class;
...
 			prev = this, this++) {
 		unsigned id = this->id;
 		unsigned i;
-
-		/* enable PIO controller's clock */
-		clk_enable(this->clock);
 
 		__raw_writel(~0, this->regbase + PIO_IDR);
...
 		data->chipbase = PIN_BASE + i * 32;
 		data->regbase = data->offset + (void __iomem *)AT91_VA_BASE_SYS;
 
-		/* AT91SAM9263_ID_PIOCDE groups PIOC, PIOD, PIOE */
+		/* enable PIO controller's clock */
+		clk_enable(data->clock);
+
+		/*
+		 * Some processors share peripheral ID between multiple GPIO banks.
+		 *	SAM9263 (PIOC, PIOD, PIOE)
+		 *	CAP9 (PIOA, PIOB, PIOC, PIOD)
+		 */
 		if (last && last->id == data->id)
 			last->next = data;
 	}
+1
arch/arm/mach-at91/include/mach/board.h
 	u8	enable_pin;		/* chip enable */
 	u8	det_pin;		/* card detect */
 	u8	rdy_pin;		/* ready/busy */
+	u8	rdy_pin_active_low;	/* rdy_pin value is inverted */
 	u8	ale;			/* address line number connected to ALE */
 	u8	cle;			/* address line number connected to CLE */
 	u8	bus_width_16;		/* buswidth is 16 bit */
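A hedged illustration of how board code might use the new flag; the structure name atmel_nand_data and the surrounding board-file context are assumptions here, only the field names come from the hunk above:

#include <linux/init.h>
#include <mach/board.h>

/* Hypothetical board-support fragment: NAND whose ready/busy GPIO is
 * wired active-low.  Pin and address-line numbers are placeholders. */
static struct atmel_nand_data __initdata sample_nand_data = {
	.rdy_pin		= 13,	/* placeholder GPIO number */
	.rdy_pin_active_low	= 1,	/* new field: rdy_pin value is inverted */
	.ale			= 21,
	.cle			= 22,
	.bus_width_16		= 0,
};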
 	writel(0, GPIO_EDGE_CAUSE(32));
 
 	for (i = IRQ_KIRKWOOD_GPIO_START; i < NR_IRQS; i++) {
-		set_irq_chip(i, &orion_gpio_irq_level_chip);
+		set_irq_chip(i, &orion_gpio_irq_chip);
 		set_irq_handler(i, handle_level_irq);
 		irq_desc[i].status |= IRQ_LEVEL;
 		set_irq_flags(i, IRQF_VALID);
+1-1
arch/arm/mach-mv78xx0/irq.c
 	writel(0, GPIO_EDGE_CAUSE(0));
 
 	for (i = IRQ_MV78XX0_GPIO_START; i < NR_IRQS; i++) {
-		set_irq_chip(i, &orion_gpio_irq_level_chip);
+		set_irq_chip(i, &orion_gpio_irq_chip);
 		set_irq_handler(i, handle_level_irq);
 		irq_desc[i].status |= IRQ_LEVEL;
 		set_irq_flags(i, IRQF_VALID);
+8-8
arch/arm/mach-omap2/clock.c
  *
  * Given a struct clk of a rate-selectable clksel clock, and a clock divisor,
  * find the corresponding register field value. The return register value is
- * the value before left-shifting. Returns 0xffffffff on error
+ * the value before left-shifting. Returns ~0 on error
  */
 u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)
 {
...
 	clks = omap2_get_clksel_by_parent(clk, clk->parent);
 	if (clks == NULL)
-		return 0;
+		return ~0;
 
 	for (clkr = clks->rates; clkr->div; clkr++) {
 		if ((clkr->flags & cpu_mask) && (clkr->div == div))
...
 		printk(KERN_ERR "clock: Could not find divisor %d for "
 		       "clock %s parent %s\n", div, clk->name,
 		       clk->parent->name);
-		return 0;
+		return ~0;
 	}
 
 	return clkr->val;
...
 		return 0;
 
 	for (clkr = clks->rates; clkr->div; clkr++) {
-		if (clkr->flags & (cpu_mask | DEFAULT_RATE))
+		if (clkr->flags & cpu_mask && clkr->flags & DEFAULT_RATE)
 			break; /* Found the default rate for this platform */
 	}
...
 		return -EINVAL;
 
 	if (clk->usecount > 0)
-		_omap2_clk_disable(clk);
+		omap2_clk_disable(clk);
 
 	/* Set new source value (previous dividers if any in effect) */
 	reg_val = __raw_readl(src_addr) & ~field_mask;
...
 		wmb();
 	}
 
-	if (clk->usecount > 0)
-		_omap2_clk_enable(clk);
-
 	clk->parent = new_parent;
+
+	if (clk->usecount > 0)
+		omap2_clk_enable(clk);
 
 	/* CLKSEL clocks follow their parents' rates, divided by a divisor */
 	clk->rate = new_parent->rate;
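A short caller-side sketch of the new error convention; the helper below is hypothetical, only the omap2_divisor_to_clksel() signature and the ~0 error marker come from the hunk above:

/* Hypothetical caller: 0 is a legal clksel field value, so the error
 * marker had to move from 0 to ~0. */
static int sample_pick_clksel(struct clk *clk, u32 new_div)
{
	u32 field_val = omap2_divisor_to_clksel(clk, new_div);

	if (field_val == ~0)	/* divisor not supported for this clock */
		return -EINVAL;

	/* ... shift field_val into place and write the clksel register ... */
	return 0;
}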
+1-1
arch/arm/mach-orion5x/irq.c
 	 * User can use set_type() if he wants to use edge types handlers.
 	 */
 	for (i = IRQ_ORION5X_GPIO_START; i < NR_IRQS; i++) {
-		set_irq_chip(i, &orion_gpio_irq_level_chip);
+		set_irq_chip(i, &orion_gpio_irq_chip);
 		set_irq_handler(i, handle_level_irq);
 		irq_desc[i].status |= IRQ_LEVEL;
 		set_irq_flags(i, IRQF_VALID);
+2-1
arch/arm/mm/mmu.c
 		 * Check whether this memory bank would entirely overlap
 		 * the vmalloc area.
 		 */
-		if (__va(bank->start) >= VMALLOC_MIN) {
+		if (__va(bank->start) >= VMALLOC_MIN ||
+		    __va(bank->start) < PAGE_OFFSET) {
 			printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx "
 			       "(vmalloc region overlap).\n",
 			       bank->start, bank->start + bank->size - 1);
+26-49
arch/arm/plat-orion/gpio.c
  *	polarity	LEVEL	mask
  *
  ****************************************************************************/
-static void gpio_irq_edge_ack(u32 irq)
-{
-	int pin = irq_to_gpio(irq);
-
-	writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin));
-}
-
-static void gpio_irq_edge_mask(u32 irq)
-{
-	int pin = irq_to_gpio(irq);
-	u32 u;
-
-	u = readl(GPIO_EDGE_MASK(pin));
-	u &= ~(1 << (pin & 31));
-	writel(u, GPIO_EDGE_MASK(pin));
-}
-
-static void gpio_irq_edge_unmask(u32 irq)
-{
-	int pin = irq_to_gpio(irq);
-	u32 u;
-
-	u = readl(GPIO_EDGE_MASK(pin));
-	u |= 1 << (pin & 31);
-	writel(u, GPIO_EDGE_MASK(pin));
-}
-
-static void gpio_irq_level_mask(u32 irq)
-{
-	int pin = irq_to_gpio(irq);
-	u32 u;
-
-	u = readl(GPIO_LEVEL_MASK(pin));
-	u &= ~(1 << (pin & 31));
-	writel(u, GPIO_LEVEL_MASK(pin));
-}
-
-static void gpio_irq_level_unmask(u32 irq)
-{
-	int pin = irq_to_gpio(irq);
-	u32 u;
-
-	u = readl(GPIO_LEVEL_MASK(pin));
-	u |= 1 << (pin & 31);
-	writel(u, GPIO_LEVEL_MASK(pin));
-}
+static void gpio_irq_ack(u32 irq)
+{
+	int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
+	if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) {
+		int pin = irq_to_gpio(irq);
+		writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin));
+	}
+}
+
+static void gpio_irq_mask(u32 irq)
+{
+	int pin = irq_to_gpio(irq);
+	int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
+	u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ?
+		GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin);
+	u32 u = readl(reg);
+	u &= ~(1 << (pin & 31));
+	writel(u, reg);
+}
+
+static void gpio_irq_unmask(u32 irq)
+{
+	int pin = irq_to_gpio(irq);
+	int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
+	u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ?
+		GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin);
+	u32 u = readl(reg);
+	u |= 1 << (pin & 31);
+	writel(u, reg);
+}
 
 static int gpio_irq_set_type(u32 irq, u32 type)
...
 	 * Set edge/level type.
 	 */
 	if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) {
-		desc->chip = &orion_gpio_irq_edge_chip;
+		desc->handle_irq = handle_edge_irq;
 	} else if (type & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) {
-		desc->chip = &orion_gpio_irq_level_chip;
+		desc->handle_irq = handle_level_irq;
 	} else {
 		printk(KERN_ERR "failed to set irq=%d (type=%d)\n", irq, type);
 		return -EINVAL;
...
 	return 0;
 }
 
-struct irq_chip orion_gpio_irq_edge_chip = {
-	.name		= "orion_gpio_irq_edge",
-	.ack		= gpio_irq_edge_ack,
-	.mask		= gpio_irq_edge_mask,
-	.unmask		= gpio_irq_edge_unmask,
-	.set_type	= gpio_irq_set_type,
-};
-
-struct irq_chip orion_gpio_irq_level_chip = {
-	.name		= "orion_gpio_irq_level",
-	.mask		= gpio_irq_level_mask,
-	.mask_ack	= gpio_irq_level_mask,
-	.unmask		= gpio_irq_level_unmask,
+struct irq_chip orion_gpio_irq_chip = {
+	.name		= "orion_gpio",
+	.ack		= gpio_irq_ack,
+	.mask		= gpio_irq_mask,
+	.unmask		= gpio_irq_unmask,
 	.set_type	= gpio_irq_set_type,
 };
 	int	enable_pin;		/* chip enable */
 	int	det_pin;		/* card detect */
 	int	rdy_pin;		/* ready/busy */
+	u8	rdy_pin_active_low;	/* rdy_pin value is inverted */
 	u8	ale;			/* address line number connected to ALE */
 	u8	cle;			/* address line number connected to CLE */
 	u8	bus_width_16;		/* buswidth is 16 bit */
+5-2
arch/ia64/Kconfig
 
 config IA64_XEN_GUEST
 	bool "Xen guest"
+	select SWIOTLB
 	depends on XEN
+	help
+	  Build a kernel that runs on Xen guest domain. At this moment only
+	  16KB page size in supported.
 
 endchoice
...
 	default y if VIRTUAL_MEM_MAP
 
 config HAVE_ARCH_EARLY_PFN_TO_NID
-	def_bool y
-	depends on NEED_MULTIPLE_NODES
+	def_bool NUMA && SPARSEMEM
 
 config HAVE_ARCH_NODEDATA_EXTENSION
 	def_bool y
···1+#2+# Automatically generated make config: don't edit3+# Linux kernel version: 2.6.29-rc14+# Fri Jan 16 11:49:59 20095+#6+CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"7+8+#9+# General setup10+#11+CONFIG_EXPERIMENTAL=y12+CONFIG_LOCK_KERNEL=y13+CONFIG_INIT_ENV_ARG_LIMIT=3214+CONFIG_LOCALVERSION=""15+CONFIG_LOCALVERSION_AUTO=y16+CONFIG_SWAP=y17+CONFIG_SYSVIPC=y18+CONFIG_SYSVIPC_SYSCTL=y19+CONFIG_POSIX_MQUEUE=y20+# CONFIG_BSD_PROCESS_ACCT is not set21+# CONFIG_TASKSTATS is not set22+# CONFIG_AUDIT is not set23+CONFIG_IKCONFIG=y24+CONFIG_IKCONFIG_PROC=y25+CONFIG_LOG_BUF_SHIFT=2026+CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y27+# CONFIG_GROUP_SCHED is not set28+29+#30+# Control Group support31+#32+# CONFIG_CGROUPS is not set33+CONFIG_SYSFS_DEPRECATED=y34+CONFIG_SYSFS_DEPRECATED_V2=y35+# CONFIG_RELAY is not set36+CONFIG_NAMESPACES=y37+# CONFIG_UTS_NS is not set38+# CONFIG_IPC_NS is not set39+# CONFIG_USER_NS is not set40+# CONFIG_PID_NS is not set41+CONFIG_BLK_DEV_INITRD=y42+CONFIG_INITRAMFS_SOURCE=""43+CONFIG_CC_OPTIMIZE_FOR_SIZE=y44+CONFIG_SYSCTL=y45+# CONFIG_EMBEDDED is not set46+CONFIG_SYSCTL_SYSCALL=y47+CONFIG_KALLSYMS=y48+CONFIG_KALLSYMS_ALL=y49+CONFIG_KALLSYMS_STRIP_GENERATED=y50+# CONFIG_KALLSYMS_EXTRA_PASS is not set51+CONFIG_HOTPLUG=y52+CONFIG_PRINTK=y53+CONFIG_BUG=y54+CONFIG_ELF_CORE=y55+CONFIG_COMPAT_BRK=y56+CONFIG_BASE_FULL=y57+CONFIG_FUTEX=y58+CONFIG_ANON_INODES=y59+CONFIG_EPOLL=y60+CONFIG_SIGNALFD=y61+CONFIG_TIMERFD=y62+CONFIG_EVENTFD=y63+CONFIG_SHMEM=y64+CONFIG_AIO=y65+CONFIG_VM_EVENT_COUNTERS=y66+CONFIG_PCI_QUIRKS=y67+CONFIG_SLUB_DEBUG=y68+# CONFIG_SLAB is not set69+CONFIG_SLUB=y70+# CONFIG_SLOB is not set71+# CONFIG_PROFILING is not set72+CONFIG_HAVE_OPROFILE=y73+# CONFIG_KPROBES is not set74+CONFIG_HAVE_KPROBES=y75+CONFIG_HAVE_KRETPROBES=y76+CONFIG_HAVE_ARCH_TRACEHOOK=y77+CONFIG_HAVE_DMA_ATTRS=y78+CONFIG_USE_GENERIC_SMP_HELPERS=y79+# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set80+CONFIG_SLABINFO=y81+CONFIG_RT_MUTEXES=y82+CONFIG_BASE_SMALL=083+CONFIG_MODULES=y84+# CONFIG_MODULE_FORCE_LOAD is not set85+CONFIG_MODULE_UNLOAD=y86+# CONFIG_MODULE_FORCE_UNLOAD is not set87+CONFIG_MODVERSIONS=y88+CONFIG_MODULE_SRCVERSION_ALL=y89+CONFIG_STOP_MACHINE=y90+CONFIG_BLOCK=y91+# CONFIG_BLK_DEV_IO_TRACE is not set92+# CONFIG_BLK_DEV_BSG is not set93+# CONFIG_BLK_DEV_INTEGRITY is not set94+95+#96+# IO Schedulers97+#98+CONFIG_IOSCHED_NOOP=y99+CONFIG_IOSCHED_AS=y100+CONFIG_IOSCHED_DEADLINE=y101+CONFIG_IOSCHED_CFQ=y102+CONFIG_DEFAULT_AS=y103+# CONFIG_DEFAULT_DEADLINE is not set104+# CONFIG_DEFAULT_CFQ is not set105+# CONFIG_DEFAULT_NOOP is not set106+CONFIG_DEFAULT_IOSCHED="anticipatory"107+CONFIG_CLASSIC_RCU=y108+# CONFIG_TREE_RCU is not set109+# CONFIG_PREEMPT_RCU is not set110+# CONFIG_TREE_RCU_TRACE is not set111+# CONFIG_PREEMPT_RCU_TRACE is not set112+CONFIG_FREEZER=y113+114+#115+# Processor type and features116+#117+CONFIG_IA64=y118+CONFIG_64BIT=y119+CONFIG_ZONE_DMA=y120+CONFIG_QUICKLIST=y121+CONFIG_MMU=y122+CONFIG_SWIOTLB=y123+CONFIG_IOMMU_HELPER=y124+CONFIG_RWSEM_XCHGADD_ALGORITHM=y125+CONFIG_HUGETLB_PAGE_SIZE_VARIABLE=y126+CONFIG_GENERIC_FIND_NEXT_BIT=y127+CONFIG_GENERIC_CALIBRATE_DELAY=y128+CONFIG_GENERIC_TIME=y129+CONFIG_GENERIC_TIME_VSYSCALL=y130+CONFIG_HAVE_SETUP_PER_CPU_AREA=y131+CONFIG_DMI=y132+CONFIG_EFI=y133+CONFIG_GENERIC_IOMAP=y134+CONFIG_SCHED_OMIT_FRAME_POINTER=y135+CONFIG_AUDIT_ARCH=y136+CONFIG_PARAVIRT_GUEST=y137+CONFIG_PARAVIRT=y138+CONFIG_XEN=y139+CONFIG_XEN_XENCOMM=y140+CONFIG_NO_IDLE_HZ=y141+# CONFIG_IA64_GENERIC is not set142+# CONFIG_IA64_DIG is not set143+# 
CONFIG_IA64_DIG_VTD is not set144+# CONFIG_IA64_HP_ZX1 is not set145+# CONFIG_IA64_HP_ZX1_SWIOTLB is not set146+# CONFIG_IA64_SGI_SN2 is not set147+# CONFIG_IA64_SGI_UV is not set148+# CONFIG_IA64_HP_SIM is not set149+CONFIG_IA64_XEN_GUEST=y150+# CONFIG_ITANIUM is not set151+CONFIG_MCKINLEY=y152+# CONFIG_IA64_PAGE_SIZE_4KB is not set153+# CONFIG_IA64_PAGE_SIZE_8KB is not set154+CONFIG_IA64_PAGE_SIZE_16KB=y155+# CONFIG_IA64_PAGE_SIZE_64KB is not set156+CONFIG_PGTABLE_3=y157+# CONFIG_PGTABLE_4 is not set158+CONFIG_HZ=250159+# CONFIG_HZ_100 is not set160+CONFIG_HZ_250=y161+# CONFIG_HZ_300 is not set162+# CONFIG_HZ_1000 is not set163+# CONFIG_SCHED_HRTICK is not set164+CONFIG_IA64_L1_CACHE_SHIFT=7165+CONFIG_IA64_CYCLONE=y166+CONFIG_IOSAPIC=y167+CONFIG_FORCE_MAX_ZONEORDER=17168+# CONFIG_VIRT_CPU_ACCOUNTING is not set169+CONFIG_SMP=y170+CONFIG_NR_CPUS=16171+CONFIG_HOTPLUG_CPU=y172+CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y173+CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y174+# CONFIG_SCHED_SMT is not set175+CONFIG_PERMIT_BSP_REMOVE=y176+CONFIG_FORCE_CPEI_RETARGET=y177+CONFIG_PREEMPT_NONE=y178+# CONFIG_PREEMPT_VOLUNTARY is not set179+# CONFIG_PREEMPT is not set180+CONFIG_SELECT_MEMORY_MODEL=y181+CONFIG_FLATMEM_MANUAL=y182+# CONFIG_DISCONTIGMEM_MANUAL is not set183+# CONFIG_SPARSEMEM_MANUAL is not set184+CONFIG_FLATMEM=y185+CONFIG_FLAT_NODE_MEM_MAP=y186+CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y187+CONFIG_PAGEFLAGS_EXTENDED=y188+CONFIG_SPLIT_PTLOCK_CPUS=4189+CONFIG_MIGRATION=y190+CONFIG_PHYS_ADDR_T_64BIT=y191+CONFIG_ZONE_DMA_FLAG=1192+CONFIG_BOUNCE=y193+CONFIG_NR_QUICK=1194+CONFIG_VIRT_TO_BUS=y195+CONFIG_UNEVICTABLE_LRU=y196+CONFIG_ARCH_SELECT_MEMORY_MODEL=y197+CONFIG_ARCH_DISCONTIGMEM_ENABLE=y198+CONFIG_ARCH_FLATMEM_ENABLE=y199+CONFIG_ARCH_SPARSEMEM_ENABLE=y200+CONFIG_ARCH_POPULATES_NODE_MAP=y201+CONFIG_VIRTUAL_MEM_MAP=y202+CONFIG_HOLES_IN_ZONE=y203+# CONFIG_IA32_SUPPORT is not set204+# CONFIG_COMPAT_FOR_U64_ALIGNMENT is not set205+CONFIG_IA64_MCA_RECOVERY=y206+CONFIG_PERFMON=y207+CONFIG_IA64_PALINFO=y208+# CONFIG_IA64_MC_ERR_INJECT is not set209+# CONFIG_IA64_ESI is not set210+# CONFIG_IA64_HP_AML_NFW is not set211+CONFIG_KEXEC=y212+# CONFIG_CRASH_DUMP is not set213+214+#215+# Firmware Drivers216+#217+# CONFIG_FIRMWARE_MEMMAP is not set218+CONFIG_EFI_VARS=y219+CONFIG_EFI_PCDP=y220+CONFIG_DMIID=y221+CONFIG_BINFMT_ELF=y222+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set223+# CONFIG_HAVE_AOUT is not set224+CONFIG_BINFMT_MISC=m225+226+#227+# Power management and ACPI options228+#229+CONFIG_PM=y230+# CONFIG_PM_DEBUG is not set231+CONFIG_PM_SLEEP=y232+CONFIG_SUSPEND=y233+CONFIG_SUSPEND_FREEZER=y234+CONFIG_ACPI=y235+CONFIG_ACPI_SLEEP=y236+CONFIG_ACPI_PROCFS=y237+CONFIG_ACPI_PROCFS_POWER=y238+CONFIG_ACPI_SYSFS_POWER=y239+CONFIG_ACPI_PROC_EVENT=y240+CONFIG_ACPI_BUTTON=m241+CONFIG_ACPI_FAN=m242+# CONFIG_ACPI_DOCK is not set243+CONFIG_ACPI_PROCESSOR=m244+CONFIG_ACPI_HOTPLUG_CPU=y245+CONFIG_ACPI_THERMAL=m246+# CONFIG_ACPI_CUSTOM_DSDT is not set247+CONFIG_ACPI_BLACKLIST_YEAR=0248+# CONFIG_ACPI_DEBUG is not set249+# CONFIG_ACPI_PCI_SLOT is not set250+CONFIG_ACPI_SYSTEM=y251+CONFIG_ACPI_CONTAINER=m252+253+#254+# CPU Frequency scaling255+#256+# CONFIG_CPU_FREQ is not set257+258+#259+# Bus options (PCI, PCMCIA)260+#261+CONFIG_PCI=y262+CONFIG_PCI_DOMAINS=y263+CONFIG_PCI_SYSCALL=y264+# CONFIG_PCIEPORTBUS is not set265+CONFIG_ARCH_SUPPORTS_MSI=y266+# CONFIG_PCI_MSI is not set267+CONFIG_PCI_LEGACY=y268+# CONFIG_PCI_DEBUG is not set269+# CONFIG_PCI_STUB is not set270+CONFIG_HOTPLUG_PCI=m271+# CONFIG_HOTPLUG_PCI_FAKE is not 
set272+CONFIG_HOTPLUG_PCI_ACPI=m273+# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set274+# CONFIG_HOTPLUG_PCI_CPCI is not set275+# CONFIG_HOTPLUG_PCI_SHPC is not set276+# CONFIG_PCCARD is not set277+CONFIG_NET=y278+279+#280+# Networking options281+#282+# CONFIG_NET_NS is not set283+CONFIG_COMPAT_NET_DEV_OPS=y284+CONFIG_PACKET=y285+# CONFIG_PACKET_MMAP is not set286+CONFIG_UNIX=y287+CONFIG_XFRM=y288+# CONFIG_XFRM_USER is not set289+# CONFIG_XFRM_SUB_POLICY is not set290+# CONFIG_XFRM_MIGRATE is not set291+# CONFIG_XFRM_STATISTICS is not set292+# CONFIG_NET_KEY is not set293+CONFIG_INET=y294+CONFIG_IP_MULTICAST=y295+# CONFIG_IP_ADVANCED_ROUTER is not set296+CONFIG_IP_FIB_HASH=y297+# CONFIG_IP_PNP is not set298+# CONFIG_NET_IPIP is not set299+# CONFIG_NET_IPGRE is not set300+# CONFIG_IP_MROUTE is not set301+CONFIG_ARPD=y302+CONFIG_SYN_COOKIES=y303+# CONFIG_INET_AH is not set304+# CONFIG_INET_ESP is not set305+# CONFIG_INET_IPCOMP is not set306+# CONFIG_INET_XFRM_TUNNEL is not set307+# CONFIG_INET_TUNNEL is not set308+CONFIG_INET_XFRM_MODE_TRANSPORT=y309+CONFIG_INET_XFRM_MODE_TUNNEL=y310+CONFIG_INET_XFRM_MODE_BEET=y311+# CONFIG_INET_LRO is not set312+CONFIG_INET_DIAG=y313+CONFIG_INET_TCP_DIAG=y314+# CONFIG_TCP_CONG_ADVANCED is not set315+CONFIG_TCP_CONG_CUBIC=y316+CONFIG_DEFAULT_TCP_CONG="cubic"317+# CONFIG_TCP_MD5SIG is not set318+# CONFIG_IPV6 is not set319+# CONFIG_NETWORK_SECMARK is not set320+# CONFIG_NETFILTER is not set321+# CONFIG_IP_DCCP is not set322+# CONFIG_IP_SCTP is not set323+# CONFIG_TIPC is not set324+# CONFIG_ATM is not set325+# CONFIG_BRIDGE is not set326+# CONFIG_NET_DSA is not set327+# CONFIG_VLAN_8021Q is not set328+# CONFIG_DECNET is not set329+# CONFIG_LLC2 is not set330+# CONFIG_IPX is not set331+# CONFIG_ATALK is not set332+# CONFIG_X25 is not set333+# CONFIG_LAPB is not set334+# CONFIG_ECONET is not set335+# CONFIG_WAN_ROUTER is not set336+# CONFIG_NET_SCHED is not set337+# CONFIG_DCB is not set338+339+#340+# Network testing341+#342+# CONFIG_NET_PKTGEN is not set343+# CONFIG_HAMRADIO is not set344+# CONFIG_CAN is not set345+# CONFIG_IRDA is not set346+# CONFIG_BT is not set347+# CONFIG_AF_RXRPC is not set348+# CONFIG_PHONET is not set349+# CONFIG_WIRELESS is not set350+# CONFIG_WIMAX is not set351+# CONFIG_RFKILL is not set352+# CONFIG_NET_9P is not set353+354+#355+# Device Drivers356+#357+358+#359+# Generic Driver Options360+#361+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"362+CONFIG_STANDALONE=y363+CONFIG_PREVENT_FIRMWARE_BUILD=y364+CONFIG_FW_LOADER=y365+CONFIG_FIRMWARE_IN_KERNEL=y366+CONFIG_EXTRA_FIRMWARE=""367+# CONFIG_DEBUG_DRIVER is not set368+# CONFIG_DEBUG_DEVRES is not set369+# CONFIG_SYS_HYPERVISOR is not set370+# CONFIG_CONNECTOR is not set371+# CONFIG_MTD is not set372+# CONFIG_PARPORT is not set373+CONFIG_PNP=y374+CONFIG_PNP_DEBUG_MESSAGES=y375+376+#377+# Protocols378+#379+CONFIG_PNPACPI=y380+CONFIG_BLK_DEV=y381+# CONFIG_BLK_CPQ_DA is not set382+# CONFIG_BLK_CPQ_CISS_DA is not set383+# CONFIG_BLK_DEV_DAC960 is not set384+# CONFIG_BLK_DEV_UMEM is not set385+# CONFIG_BLK_DEV_COW_COMMON is not set386+CONFIG_BLK_DEV_LOOP=m387+CONFIG_BLK_DEV_CRYPTOLOOP=m388+CONFIG_BLK_DEV_NBD=m389+# CONFIG_BLK_DEV_SX8 is not set390+# CONFIG_BLK_DEV_UB is not set391+CONFIG_BLK_DEV_RAM=y392+CONFIG_BLK_DEV_RAM_COUNT=16393+CONFIG_BLK_DEV_RAM_SIZE=4096394+# CONFIG_BLK_DEV_XIP is not set395+# CONFIG_CDROM_PKTCDVD is not set396+# CONFIG_ATA_OVER_ETH is not set397+CONFIG_XEN_BLKDEV_FRONTEND=y398+# CONFIG_BLK_DEV_HD is not set399+CONFIG_MISC_DEVICES=y400+# CONFIG_PHANTOM is not set401+# 
CONFIG_EEPROM_93CX6 is not set402+# CONFIG_SGI_IOC4 is not set403+# CONFIG_TIFM_CORE is not set404+# CONFIG_ICS932S401 is not set405+# CONFIG_ENCLOSURE_SERVICES is not set406+# CONFIG_HP_ILO is not set407+# CONFIG_C2PORT is not set408+CONFIG_HAVE_IDE=y409+CONFIG_IDE=y410+411+#412+# Please see Documentation/ide/ide.txt for help/info on IDE drives413+#414+CONFIG_IDE_TIMINGS=y415+CONFIG_IDE_ATAPI=y416+# CONFIG_BLK_DEV_IDE_SATA is not set417+CONFIG_IDE_GD=y418+CONFIG_IDE_GD_ATA=y419+# CONFIG_IDE_GD_ATAPI is not set420+CONFIG_BLK_DEV_IDECD=y421+CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y422+# CONFIG_BLK_DEV_IDETAPE is not set423+# CONFIG_BLK_DEV_IDEACPI is not set424+# CONFIG_IDE_TASK_IOCTL is not set425+CONFIG_IDE_PROC_FS=y426+427+#428+# IDE chipset support/bugfixes429+#430+# CONFIG_IDE_GENERIC is not set431+# CONFIG_BLK_DEV_PLATFORM is not set432+# CONFIG_BLK_DEV_IDEPNP is not set433+CONFIG_BLK_DEV_IDEDMA_SFF=y434+435+#436+# PCI IDE chipsets support437+#438+CONFIG_BLK_DEV_IDEPCI=y439+CONFIG_IDEPCI_PCIBUS_ORDER=y440+# CONFIG_BLK_DEV_OFFBOARD is not set441+CONFIG_BLK_DEV_GENERIC=y442+# CONFIG_BLK_DEV_OPTI621 is not set443+CONFIG_BLK_DEV_IDEDMA_PCI=y444+# CONFIG_BLK_DEV_AEC62XX is not set445+# CONFIG_BLK_DEV_ALI15X3 is not set446+# CONFIG_BLK_DEV_AMD74XX is not set447+CONFIG_BLK_DEV_CMD64X=y448+# CONFIG_BLK_DEV_TRIFLEX is not set449+# CONFIG_BLK_DEV_CS5520 is not set450+# CONFIG_BLK_DEV_CS5530 is not set451+# CONFIG_BLK_DEV_HPT366 is not set452+# CONFIG_BLK_DEV_JMICRON is not set453+# CONFIG_BLK_DEV_SC1200 is not set454+CONFIG_BLK_DEV_PIIX=y455+# CONFIG_BLK_DEV_IT8172 is not set456+# CONFIG_BLK_DEV_IT8213 is not set457+# CONFIG_BLK_DEV_IT821X is not set458+# CONFIG_BLK_DEV_NS87415 is not set459+# CONFIG_BLK_DEV_PDC202XX_OLD is not set460+# CONFIG_BLK_DEV_PDC202XX_NEW is not set461+# CONFIG_BLK_DEV_SVWKS is not set462+# CONFIG_BLK_DEV_SIIMAGE is not set463+# CONFIG_BLK_DEV_SLC90E66 is not set464+# CONFIG_BLK_DEV_TRM290 is not set465+# CONFIG_BLK_DEV_VIA82CXXX is not set466+# CONFIG_BLK_DEV_TC86C001 is not set467+CONFIG_BLK_DEV_IDEDMA=y468+469+#470+# SCSI device support471+#472+# CONFIG_RAID_ATTRS is not set473+CONFIG_SCSI=y474+CONFIG_SCSI_DMA=y475+# CONFIG_SCSI_TGT is not set476+CONFIG_SCSI_NETLINK=y477+CONFIG_SCSI_PROC_FS=y478+479+#480+# SCSI support type (disk, tape, CD-ROM)481+#482+CONFIG_BLK_DEV_SD=y483+CONFIG_CHR_DEV_ST=m484+# CONFIG_CHR_DEV_OSST is not set485+CONFIG_BLK_DEV_SR=m486+# CONFIG_BLK_DEV_SR_VENDOR is not set487+CONFIG_CHR_DEV_SG=m488+# CONFIG_CHR_DEV_SCH is not set489+490+#491+# Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs492+#493+# CONFIG_SCSI_MULTI_LUN is not set494+# CONFIG_SCSI_CONSTANTS is not set495+# CONFIG_SCSI_LOGGING is not set496+# CONFIG_SCSI_SCAN_ASYNC is not set497+CONFIG_SCSI_WAIT_SCAN=m498+499+#500+# SCSI Transports501+#502+CONFIG_SCSI_SPI_ATTRS=y503+CONFIG_SCSI_FC_ATTRS=y504+# CONFIG_SCSI_ISCSI_ATTRS is not set505+# CONFIG_SCSI_SAS_LIBSAS is not set506+# CONFIG_SCSI_SRP_ATTRS is not set507+CONFIG_SCSI_LOWLEVEL=y508+# CONFIG_ISCSI_TCP is not set509+# CONFIG_SCSI_CXGB3_ISCSI is not set510+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set511+# CONFIG_SCSI_3W_9XXX is not set512+# CONFIG_SCSI_ACARD is not set513+# CONFIG_SCSI_AACRAID is not set514+# CONFIG_SCSI_AIC7XXX is not set515+# CONFIG_SCSI_AIC7XXX_OLD is not set516+# CONFIG_SCSI_AIC79XX is not set517+# CONFIG_SCSI_AIC94XX is not set518+# CONFIG_SCSI_DPT_I2O is not set519+# CONFIG_SCSI_ADVANSYS is not set520+# CONFIG_SCSI_ARCMSR is not set521+# CONFIG_MEGARAID_NEWGEN is not set522+# CONFIG_MEGARAID_LEGACY is not set523+# CONFIG_MEGARAID_SAS is not set524+# CONFIG_SCSI_HPTIOP is not set525+# CONFIG_LIBFC is not set526+# CONFIG_FCOE is not set527+# CONFIG_SCSI_DMX3191D is not set528+# CONFIG_SCSI_FUTURE_DOMAIN is not set529+# CONFIG_SCSI_IPS is not set530+# CONFIG_SCSI_INITIO is not set531+# CONFIG_SCSI_INIA100 is not set532+# CONFIG_SCSI_MVSAS is not set533+# CONFIG_SCSI_STEX is not set534+CONFIG_SCSI_SYM53C8XX_2=y535+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1536+CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16537+CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64538+CONFIG_SCSI_SYM53C8XX_MMIO=y539+CONFIG_SCSI_QLOGIC_1280=y540+# CONFIG_SCSI_QLA_FC is not set541+# CONFIG_SCSI_QLA_ISCSI is not set542+# CONFIG_SCSI_LPFC is not set543+# CONFIG_SCSI_DC395x is not set544+# CONFIG_SCSI_DC390T is not set545+# CONFIG_SCSI_DEBUG is not set546+# CONFIG_SCSI_SRP is not set547+# CONFIG_SCSI_DH is not set548+# CONFIG_ATA is not set549+CONFIG_MD=y550+CONFIG_BLK_DEV_MD=m551+CONFIG_MD_LINEAR=m552+CONFIG_MD_RAID0=m553+CONFIG_MD_RAID1=m554+# CONFIG_MD_RAID10 is not set555+# CONFIG_MD_RAID456 is not set556+CONFIG_MD_MULTIPATH=m557+# CONFIG_MD_FAULTY is not set558+CONFIG_BLK_DEV_DM=m559+# CONFIG_DM_DEBUG is not set560+CONFIG_DM_CRYPT=m561+CONFIG_DM_SNAPSHOT=m562+CONFIG_DM_MIRROR=m563+CONFIG_DM_ZERO=m564+# CONFIG_DM_MULTIPATH is not set565+# CONFIG_DM_DELAY is not set566+# CONFIG_DM_UEVENT is not set567+CONFIG_FUSION=y568+CONFIG_FUSION_SPI=y569+CONFIG_FUSION_FC=y570+# CONFIG_FUSION_SAS is not set571+CONFIG_FUSION_MAX_SGE=128572+CONFIG_FUSION_CTL=y573+# CONFIG_FUSION_LOGGING is not set574+575+#576+# IEEE 1394 (FireWire) support577+#578+579+#580+# Enable only one of the two stacks, unless you know what you are doing581+#582+# CONFIG_FIREWIRE is not set583+# CONFIG_IEEE1394 is not set584+# CONFIG_I2O is not set585+CONFIG_NETDEVICES=y586+CONFIG_DUMMY=m587+# CONFIG_BONDING is not set588+# CONFIG_MACVLAN is not set589+# CONFIG_EQUALIZER is not set590+# CONFIG_TUN is not set591+# CONFIG_VETH is not set592+# CONFIG_NET_SB1000 is not set593+# CONFIG_ARCNET is not set594+CONFIG_PHYLIB=y595+596+#597+# MII PHY device drivers598+#599+# CONFIG_MARVELL_PHY is not set600+# CONFIG_DAVICOM_PHY is not set601+# CONFIG_QSEMI_PHY is not set602+# CONFIG_LXT_PHY is not set603+# CONFIG_CICADA_PHY is not set604+# CONFIG_VITESSE_PHY is not set605+# CONFIG_SMSC_PHY is not set606+# CONFIG_BROADCOM_PHY is not set607+# CONFIG_ICPLUS_PHY is not set608+# CONFIG_REALTEK_PHY is not set609+# CONFIG_NATIONAL_PHY is not set610+# CONFIG_STE10XP is not set611+# CONFIG_LSI_ET1011C_PHY is not set612+# 
CONFIG_FIXED_PHY is not set613+# CONFIG_MDIO_BITBANG is not set614+CONFIG_NET_ETHERNET=y615+CONFIG_MII=m616+# CONFIG_HAPPYMEAL is not set617+# CONFIG_SUNGEM is not set618+# CONFIG_CASSINI is not set619+# CONFIG_NET_VENDOR_3COM is not set620+CONFIG_NET_TULIP=y621+# CONFIG_DE2104X is not set622+CONFIG_TULIP=m623+# CONFIG_TULIP_MWI is not set624+# CONFIG_TULIP_MMIO is not set625+# CONFIG_TULIP_NAPI is not set626+# CONFIG_DE4X5 is not set627+# CONFIG_WINBOND_840 is not set628+# CONFIG_DM9102 is not set629+# CONFIG_ULI526X is not set630+# CONFIG_HP100 is not set631+# CONFIG_IBM_NEW_EMAC_ZMII is not set632+# CONFIG_IBM_NEW_EMAC_RGMII is not set633+# CONFIG_IBM_NEW_EMAC_TAH is not set634+# CONFIG_IBM_NEW_EMAC_EMAC4 is not set635+# CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set636+# CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set637+# CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set638+CONFIG_NET_PCI=y639+# CONFIG_PCNET32 is not set640+# CONFIG_AMD8111_ETH is not set641+# CONFIG_ADAPTEC_STARFIRE is not set642+# CONFIG_B44 is not set643+# CONFIG_FORCEDETH is not set644+CONFIG_E100=m645+# CONFIG_FEALNX is not set646+# CONFIG_NATSEMI is not set647+# CONFIG_NE2K_PCI is not set648+# CONFIG_8139CP is not set649+# CONFIG_8139TOO is not set650+# CONFIG_R6040 is not set651+# CONFIG_SIS900 is not set652+# CONFIG_EPIC100 is not set653+# CONFIG_SMSC9420 is not set654+# CONFIG_SUNDANCE is not set655+# CONFIG_TLAN is not set656+# CONFIG_VIA_RHINE is not set657+# CONFIG_SC92031 is not set658+# CONFIG_ATL2 is not set659+CONFIG_NETDEV_1000=y660+# CONFIG_ACENIC is not set661+# CONFIG_DL2K is not set662+CONFIG_E1000=y663+# CONFIG_E1000E is not set664+# CONFIG_IP1000 is not set665+# CONFIG_IGB is not set666+# CONFIG_NS83820 is not set667+# CONFIG_HAMACHI is not set668+# CONFIG_YELLOWFIN is not set669+# CONFIG_R8169 is not set670+# CONFIG_SIS190 is not set671+# CONFIG_SKGE is not set672+# CONFIG_SKY2 is not set673+# CONFIG_VIA_VELOCITY is not set674+CONFIG_TIGON3=y675+# CONFIG_BNX2 is not set676+# CONFIG_QLA3XXX is not set677+# CONFIG_ATL1 is not set678+# CONFIG_ATL1E is not set679+# CONFIG_JME is not set680+CONFIG_NETDEV_10000=y681+# CONFIG_CHELSIO_T1 is not set682+CONFIG_CHELSIO_T3_DEPENDS=y683+# CONFIG_CHELSIO_T3 is not set684+# CONFIG_ENIC is not set685+# CONFIG_IXGBE is not set686+# CONFIG_IXGB is not set687+# CONFIG_S2IO is not set688+# CONFIG_MYRI10GE is not set689+# CONFIG_NETXEN_NIC is not set690+# CONFIG_NIU is not set691+# CONFIG_MLX4_EN is not set692+# CONFIG_MLX4_CORE is not set693+# CONFIG_TEHUTI is not set694+# CONFIG_BNX2X is not set695+# CONFIG_QLGE is not set696+# CONFIG_SFC is not set697+# CONFIG_TR is not set698+699+#700+# Wireless LAN701+#702+# CONFIG_WLAN_PRE80211 is not set703+# CONFIG_WLAN_80211 is not set704+# CONFIG_IWLWIFI_LEDS is not set705+706+#707+# Enable WiMAX (Networking options) to see the WiMAX drivers708+#709+710+#711+# USB Network Adapters712+#713+# CONFIG_USB_CATC is not set714+# CONFIG_USB_KAWETH is not set715+# CONFIG_USB_PEGASUS is not set716+# CONFIG_USB_RTL8150 is not set717+# CONFIG_USB_USBNET is not set718+# CONFIG_WAN is not set719+CONFIG_XEN_NETDEV_FRONTEND=y720+# CONFIG_FDDI is not set721+# CONFIG_HIPPI is not set722+# CONFIG_PPP is not set723+# CONFIG_SLIP is not set724+# CONFIG_NET_FC is not set725+CONFIG_NETCONSOLE=y726+# CONFIG_NETCONSOLE_DYNAMIC is not set727+CONFIG_NETPOLL=y728+# CONFIG_NETPOLL_TRAP is not set729+CONFIG_NET_POLL_CONTROLLER=y730+# CONFIG_ISDN is not set731+# CONFIG_PHONE is not set732+733+#734+# Input device support735+#736+CONFIG_INPUT=y737+# 
CONFIG_INPUT_FF_MEMLESS is not set738+# CONFIG_INPUT_POLLDEV is not set739+740+#741+# Userland interfaces742+#743+CONFIG_INPUT_MOUSEDEV=y744+CONFIG_INPUT_MOUSEDEV_PSAUX=y745+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024746+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768747+# CONFIG_INPUT_JOYDEV is not set748+# CONFIG_INPUT_EVDEV is not set749+# CONFIG_INPUT_EVBUG is not set750+751+#752+# Input Device Drivers753+#754+CONFIG_INPUT_KEYBOARD=y755+CONFIG_KEYBOARD_ATKBD=y756+# CONFIG_KEYBOARD_SUNKBD is not set757+# CONFIG_KEYBOARD_LKKBD is not set758+# CONFIG_KEYBOARD_XTKBD is not set759+# CONFIG_KEYBOARD_NEWTON is not set760+# CONFIG_KEYBOARD_STOWAWAY is not set761+CONFIG_INPUT_MOUSE=y762+CONFIG_MOUSE_PS2=y763+CONFIG_MOUSE_PS2_ALPS=y764+CONFIG_MOUSE_PS2_LOGIPS2PP=y765+CONFIG_MOUSE_PS2_SYNAPTICS=y766+CONFIG_MOUSE_PS2_LIFEBOOK=y767+CONFIG_MOUSE_PS2_TRACKPOINT=y768+# CONFIG_MOUSE_PS2_ELANTECH is not set769+# CONFIG_MOUSE_PS2_TOUCHKIT is not set770+# CONFIG_MOUSE_SERIAL is not set771+# CONFIG_MOUSE_APPLETOUCH is not set772+# CONFIG_MOUSE_BCM5974 is not set773+# CONFIG_MOUSE_VSXXXAA is not set774+# CONFIG_INPUT_JOYSTICK is not set775+# CONFIG_INPUT_TABLET is not set776+# CONFIG_INPUT_TOUCHSCREEN is not set777+# CONFIG_INPUT_MISC is not set778+779+#780+# Hardware I/O ports781+#782+CONFIG_SERIO=y783+CONFIG_SERIO_I8042=y784+# CONFIG_SERIO_SERPORT is not set785+# CONFIG_SERIO_PCIPS2 is not set786+CONFIG_SERIO_LIBPS2=y787+# CONFIG_SERIO_RAW is not set788+CONFIG_GAMEPORT=m789+# CONFIG_GAMEPORT_NS558 is not set790+# CONFIG_GAMEPORT_L4 is not set791+# CONFIG_GAMEPORT_EMU10K1 is not set792+# CONFIG_GAMEPORT_FM801 is not set793+794+#795+# Character devices796+#797+CONFIG_VT=y798+CONFIG_CONSOLE_TRANSLATIONS=y799+CONFIG_VT_CONSOLE=y800+CONFIG_HW_CONSOLE=y801+# CONFIG_VT_HW_CONSOLE_BINDING is not set802+CONFIG_DEVKMEM=y803+CONFIG_SERIAL_NONSTANDARD=y804+# CONFIG_COMPUTONE is not set805+# CONFIG_ROCKETPORT is not set806+# CONFIG_CYCLADES is not set807+# CONFIG_DIGIEPCA is not set808+# CONFIG_MOXA_INTELLIO is not set809+# CONFIG_MOXA_SMARTIO is not set810+# CONFIG_ISI is not set811+# CONFIG_SYNCLINKMP is not set812+# CONFIG_SYNCLINK_GT is not set813+# CONFIG_N_HDLC is not set814+# CONFIG_RISCOM8 is not set815+# CONFIG_SPECIALIX is not set816+# CONFIG_SX is not set817+# CONFIG_RIO is not set818+# CONFIG_STALDRV is not set819+# CONFIG_NOZOMI is not set820+821+#822+# Serial drivers823+#824+CONFIG_SERIAL_8250=y825+CONFIG_SERIAL_8250_CONSOLE=y826+CONFIG_SERIAL_8250_PCI=y827+CONFIG_SERIAL_8250_PNP=y828+CONFIG_SERIAL_8250_NR_UARTS=6829+CONFIG_SERIAL_8250_RUNTIME_UARTS=4830+CONFIG_SERIAL_8250_EXTENDED=y831+CONFIG_SERIAL_8250_SHARE_IRQ=y832+# CONFIG_SERIAL_8250_DETECT_IRQ is not set833+# CONFIG_SERIAL_8250_RSA is not set834+835+#836+# Non-8250 serial port support837+#838+CONFIG_SERIAL_CORE=y839+CONFIG_SERIAL_CORE_CONSOLE=y840+# CONFIG_SERIAL_JSM is not set841+CONFIG_UNIX98_PTYS=y842+# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set843+CONFIG_LEGACY_PTYS=y844+CONFIG_LEGACY_PTY_COUNT=256845+CONFIG_HVC_DRIVER=y846+CONFIG_HVC_IRQ=y847+CONFIG_HVC_XEN=y848+# CONFIG_IPMI_HANDLER is not set849+# CONFIG_HW_RANDOM is not set850+CONFIG_EFI_RTC=y851+# CONFIG_R3964 is not set852+# CONFIG_APPLICOM is not set853+CONFIG_RAW_DRIVER=m854+CONFIG_MAX_RAW_DEVS=256855+CONFIG_HPET=y856+CONFIG_HPET_MMAP=y857+# CONFIG_HANGCHECK_TIMER is not set858+# CONFIG_TCG_TPM is not set859+CONFIG_DEVPORT=y860+CONFIG_I2C=m861+CONFIG_I2C_BOARDINFO=y862+# CONFIG_I2C_CHARDEV is not set863+CONFIG_I2C_HELPER_AUTO=y864+CONFIG_I2C_ALGOBIT=m865+866+#867+# I2C Hardware Bus 
support868+#869+870+#871+# PC SMBus host controller drivers872+#873+# CONFIG_I2C_ALI1535 is not set874+# CONFIG_I2C_ALI1563 is not set875+# CONFIG_I2C_ALI15X3 is not set876+# CONFIG_I2C_AMD756 is not set877+# CONFIG_I2C_AMD8111 is not set878+# CONFIG_I2C_I801 is not set879+# CONFIG_I2C_ISCH is not set880+# CONFIG_I2C_PIIX4 is not set881+# CONFIG_I2C_NFORCE2 is not set882+# CONFIG_I2C_SIS5595 is not set883+# CONFIG_I2C_SIS630 is not set884+# CONFIG_I2C_SIS96X is not set885+# CONFIG_I2C_VIA is not set886+# CONFIG_I2C_VIAPRO is not set887+888+#889+# I2C system bus drivers (mostly embedded / system-on-chip)890+#891+# CONFIG_I2C_OCORES is not set892+# CONFIG_I2C_SIMTEC is not set893+894+#895+# External I2C/SMBus adapter drivers896+#897+# CONFIG_I2C_PARPORT_LIGHT is not set898+# CONFIG_I2C_TAOS_EVM is not set899+# CONFIG_I2C_TINY_USB is not set900+901+#902+# Graphics adapter I2C/DDC channel drivers903+#904+# CONFIG_I2C_VOODOO3 is not set905+906+#907+# Other I2C/SMBus bus drivers908+#909+# CONFIG_I2C_PCA_PLATFORM is not set910+# CONFIG_I2C_STUB is not set911+912+#913+# Miscellaneous I2C Chip support914+#915+# CONFIG_DS1682 is not set916+# CONFIG_AT24 is not set917+# CONFIG_SENSORS_EEPROM is not set918+# CONFIG_SENSORS_PCF8574 is not set919+# CONFIG_PCF8575 is not set920+# CONFIG_SENSORS_PCA9539 is not set921+# CONFIG_SENSORS_PCF8591 is not set922+# CONFIG_SENSORS_MAX6875 is not set923+# CONFIG_SENSORS_TSL2550 is not set924+# CONFIG_I2C_DEBUG_CORE is not set925+# CONFIG_I2C_DEBUG_ALGO is not set926+# CONFIG_I2C_DEBUG_BUS is not set927+# CONFIG_I2C_DEBUG_CHIP is not set928+# CONFIG_SPI is not set929+# CONFIG_W1 is not set930+CONFIG_POWER_SUPPLY=y931+# CONFIG_POWER_SUPPLY_DEBUG is not set932+# CONFIG_PDA_POWER is not set933+# CONFIG_BATTERY_DS2760 is not set934+# CONFIG_BATTERY_BQ27x00 is not set935+CONFIG_HWMON=y936+# CONFIG_HWMON_VID is not set937+# CONFIG_SENSORS_AD7414 is not set938+# CONFIG_SENSORS_AD7418 is not set939+# CONFIG_SENSORS_ADM1021 is not set940+# CONFIG_SENSORS_ADM1025 is not set941+# CONFIG_SENSORS_ADM1026 is not set942+# CONFIG_SENSORS_ADM1029 is not set943+# CONFIG_SENSORS_ADM1031 is not set944+# CONFIG_SENSORS_ADM9240 is not set945+# CONFIG_SENSORS_ADT7462 is not set946+# CONFIG_SENSORS_ADT7470 is not set947+# CONFIG_SENSORS_ADT7473 is not set948+# CONFIG_SENSORS_ATXP1 is not set949+# CONFIG_SENSORS_DS1621 is not set950+# CONFIG_SENSORS_I5K_AMB is not set951+# CONFIG_SENSORS_F71805F is not set952+# CONFIG_SENSORS_F71882FG is not set953+# CONFIG_SENSORS_F75375S is not set954+# CONFIG_SENSORS_GL518SM is not set955+# CONFIG_SENSORS_GL520SM is not set956+# CONFIG_SENSORS_IT87 is not set957+# CONFIG_SENSORS_LM63 is not set958+# CONFIG_SENSORS_LM75 is not set959+# CONFIG_SENSORS_LM77 is not set960+# CONFIG_SENSORS_LM78 is not set961+# CONFIG_SENSORS_LM80 is not set962+# CONFIG_SENSORS_LM83 is not set963+# CONFIG_SENSORS_LM85 is not set964+# CONFIG_SENSORS_LM87 is not set965+# CONFIG_SENSORS_LM90 is not set966+# CONFIG_SENSORS_LM92 is not set967+# CONFIG_SENSORS_LM93 is not set968+# CONFIG_SENSORS_LTC4245 is not set969+# CONFIG_SENSORS_MAX1619 is not set970+# CONFIG_SENSORS_MAX6650 is not set971+# CONFIG_SENSORS_PC87360 is not set972+# CONFIG_SENSORS_PC87427 is not set973+# CONFIG_SENSORS_SIS5595 is not set974+# CONFIG_SENSORS_DME1737 is not set975+# CONFIG_SENSORS_SMSC47M1 is not set976+# CONFIG_SENSORS_SMSC47M192 is not set977+# CONFIG_SENSORS_SMSC47B397 is not set978+# CONFIG_SENSORS_ADS7828 is not set979+# CONFIG_SENSORS_THMC50 is not set980+# CONFIG_SENSORS_VIA686A is not 
set981+# CONFIG_SENSORS_VT1211 is not set982+# CONFIG_SENSORS_VT8231 is not set983+# CONFIG_SENSORS_W83781D is not set984+# CONFIG_SENSORS_W83791D is not set985+# CONFIG_SENSORS_W83792D is not set986+# CONFIG_SENSORS_W83793 is not set987+# CONFIG_SENSORS_W83L785TS is not set988+# CONFIG_SENSORS_W83L786NG is not set989+# CONFIG_SENSORS_W83627HF is not set990+# CONFIG_SENSORS_W83627EHF is not set991+# CONFIG_SENSORS_LIS3LV02D is not set992+# CONFIG_HWMON_DEBUG_CHIP is not set993+CONFIG_THERMAL=m994+# CONFIG_THERMAL_HWMON is not set995+# CONFIG_WATCHDOG is not set996+CONFIG_SSB_POSSIBLE=y997+998+#999+# Sonics Silicon Backplane1000+#1001+# CONFIG_SSB is not set1002+1003+#1004+# Multifunction device drivers1005+#1006+# CONFIG_MFD_CORE is not set1007+# CONFIG_MFD_SM501 is not set1008+# CONFIG_HTC_PASIC3 is not set1009+# CONFIG_MFD_TMIO is not set1010+# CONFIG_MFD_WM8400 is not set1011+# CONFIG_MFD_WM8350_I2C is not set1012+# CONFIG_MFD_PCF50633 is not set1013+# CONFIG_REGULATOR is not set1014+1015+#1016+# Multimedia devices1017+#1018+1019+#1020+# Multimedia core support1021+#1022+# CONFIG_VIDEO_DEV is not set1023+# CONFIG_DVB_CORE is not set1024+# CONFIG_VIDEO_MEDIA is not set1025+1026+#1027+# Multimedia drivers1028+#1029+CONFIG_DAB=y1030+# CONFIG_USB_DABUSB is not set1031+1032+#1033+# Graphics support1034+#1035+CONFIG_AGP=m1036+CONFIG_DRM=m1037+CONFIG_DRM_TDFX=m1038+CONFIG_DRM_R128=m1039+CONFIG_DRM_RADEON=m1040+CONFIG_DRM_MGA=m1041+CONFIG_DRM_SIS=m1042+# CONFIG_DRM_VIA is not set1043+# CONFIG_DRM_SAVAGE is not set1044+# CONFIG_VGASTATE is not set1045+# CONFIG_VIDEO_OUTPUT_CONTROL is not set1046+# CONFIG_FB is not set1047+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set1048+1049+#1050+# Display device support1051+#1052+# CONFIG_DISPLAY_SUPPORT is not set1053+1054+#1055+# Console display driver support1056+#1057+CONFIG_VGA_CONSOLE=y1058+# CONFIG_VGACON_SOFT_SCROLLBACK is not set1059+CONFIG_DUMMY_CONSOLE=y1060+# CONFIG_SOUND is not set1061+CONFIG_HID_SUPPORT=y1062+CONFIG_HID=y1063+# CONFIG_HID_DEBUG is not set1064+# CONFIG_HIDRAW is not set1065+1066+#1067+# USB Input Devices1068+#1069+CONFIG_USB_HID=y1070+# CONFIG_HID_PID is not set1071+# CONFIG_USB_HIDDEV is not set1072+1073+#1074+# Special HID drivers1075+#1076+CONFIG_HID_COMPAT=y1077+CONFIG_HID_A4TECH=y1078+CONFIG_HID_APPLE=y1079+CONFIG_HID_BELKIN=y1080+CONFIG_HID_CHERRY=y1081+CONFIG_HID_CHICONY=y1082+CONFIG_HID_CYPRESS=y1083+CONFIG_HID_EZKEY=y1084+CONFIG_HID_GYRATION=y1085+CONFIG_HID_LOGITECH=y1086+# CONFIG_LOGITECH_FF is not set1087+# CONFIG_LOGIRUMBLEPAD2_FF is not set1088+CONFIG_HID_MICROSOFT=y1089+CONFIG_HID_MONTEREY=y1090+CONFIG_HID_NTRIG=y1091+CONFIG_HID_PANTHERLORD=y1092+# CONFIG_PANTHERLORD_FF is not set1093+CONFIG_HID_PETALYNX=y1094+CONFIG_HID_SAMSUNG=y1095+CONFIG_HID_SONY=y1096+CONFIG_HID_SUNPLUS=y1097+# CONFIG_GREENASIA_FF is not set1098+CONFIG_HID_TOPSEED=y1099+# CONFIG_THRUSTMASTER_FF is not set1100+# CONFIG_ZEROPLUS_FF is not set1101+CONFIG_USB_SUPPORT=y1102+CONFIG_USB_ARCH_HAS_HCD=y1103+CONFIG_USB_ARCH_HAS_OHCI=y1104+CONFIG_USB_ARCH_HAS_EHCI=y1105+CONFIG_USB=y1106+# CONFIG_USB_DEBUG is not set1107+# CONFIG_USB_ANNOUNCE_NEW_DEVICES is not set1108+1109+#1110+# Miscellaneous USB options1111+#1112+CONFIG_USB_DEVICEFS=y1113+CONFIG_USB_DEVICE_CLASS=y1114+# CONFIG_USB_DYNAMIC_MINORS is not set1115+# CONFIG_USB_SUSPEND is not set1116+# CONFIG_USB_OTG is not set1117+# CONFIG_USB_MON is not set1118+# CONFIG_USB_WUSB is not set1119+# CONFIG_USB_WUSB_CBAF is not set1120+1121+#1122+# USB Host Controller Drivers1123+#1124+# CONFIG_USB_C67X00_HCD is 
not set1125+CONFIG_USB_EHCI_HCD=m1126+# CONFIG_USB_EHCI_ROOT_HUB_TT is not set1127+# CONFIG_USB_EHCI_TT_NEWSCHED is not set1128+# CONFIG_USB_OXU210HP_HCD is not set1129+# CONFIG_USB_ISP116X_HCD is not set1130+# CONFIG_USB_ISP1760_HCD is not set1131+CONFIG_USB_OHCI_HCD=m1132+# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set1133+# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set1134+CONFIG_USB_OHCI_LITTLE_ENDIAN=y1135+CONFIG_USB_UHCI_HCD=y1136+# CONFIG_USB_SL811_HCD is not set1137+# CONFIG_USB_R8A66597_HCD is not set1138+# CONFIG_USB_WHCI_HCD is not set1139+# CONFIG_USB_HWA_HCD is not set1140+1141+#1142+# USB Device Class drivers1143+#1144+# CONFIG_USB_ACM is not set1145+# CONFIG_USB_PRINTER is not set1146+# CONFIG_USB_WDM is not set1147+# CONFIG_USB_TMC is not set1148+1149+#1150+# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may also be needed;1151+#1152+1153+#1154+# see USB_STORAGE Help for more information1155+#1156+CONFIG_USB_STORAGE=m1157+# CONFIG_USB_STORAGE_DEBUG is not set1158+# CONFIG_USB_STORAGE_DATAFAB is not set1159+# CONFIG_USB_STORAGE_FREECOM is not set1160+# CONFIG_USB_STORAGE_ISD200 is not set1161+# CONFIG_USB_STORAGE_USBAT is not set1162+# CONFIG_USB_STORAGE_SDDR09 is not set1163+# CONFIG_USB_STORAGE_SDDR55 is not set1164+# CONFIG_USB_STORAGE_JUMPSHOT is not set1165+# CONFIG_USB_STORAGE_ALAUDA is not set1166+# CONFIG_USB_STORAGE_ONETOUCH is not set1167+# CONFIG_USB_STORAGE_KARMA is not set1168+# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set1169+# CONFIG_USB_LIBUSUAL is not set1170+1171+#1172+# USB Imaging devices1173+#1174+# CONFIG_USB_MDC800 is not set1175+# CONFIG_USB_MICROTEK is not set1176+1177+#1178+# USB port drivers1179+#1180+# CONFIG_USB_SERIAL is not set1181+1182+#1183+# USB Miscellaneous drivers1184+#1185+# CONFIG_USB_EMI62 is not set1186+# CONFIG_USB_EMI26 is not set1187+# CONFIG_USB_ADUTUX is not set1188+# CONFIG_USB_SEVSEG is not set1189+# CONFIG_USB_RIO500 is not set1190+# CONFIG_USB_LEGOTOWER is not set1191+# CONFIG_USB_LCD is not set1192+# CONFIG_USB_BERRY_CHARGE is not set1193+# CONFIG_USB_LED is not set1194+# CONFIG_USB_CYPRESS_CY7C63 is not set1195+# CONFIG_USB_CYTHERM is not set1196+# CONFIG_USB_PHIDGET is not set1197+# CONFIG_USB_IDMOUSE is not set1198+# CONFIG_USB_FTDI_ELAN is not set1199+# CONFIG_USB_APPLEDISPLAY is not set1200+# CONFIG_USB_SISUSBVGA is not set1201+# CONFIG_USB_LD is not set1202+# CONFIG_USB_TRANCEVIBRATOR is not set1203+# CONFIG_USB_IOWARRIOR is not set1204+# CONFIG_USB_TEST is not set1205+# CONFIG_USB_ISIGHTFW is not set1206+# CONFIG_USB_VST is not set1207+# CONFIG_USB_GADGET is not set1208+1209+#1210+# OTG and related infrastructure1211+#1212+# CONFIG_UWB is not set1213+# CONFIG_MMC is not set1214+# CONFIG_MEMSTICK is not set1215+# CONFIG_NEW_LEDS is not set1216+# CONFIG_ACCESSIBILITY is not set1217+# CONFIG_INFINIBAND is not set1218+# CONFIG_RTC_CLASS is not set1219+# CONFIG_DMADEVICES is not set1220+# CONFIG_UIO is not set1221+CONFIG_XEN_BALLOON=y1222+CONFIG_XEN_SCRUB_PAGES=y1223+CONFIG_XENFS=y1224+CONFIG_XEN_COMPAT_XENFS=y1225+# CONFIG_STAGING is not set1226+# CONFIG_MSPEC is not set1227+1228+#1229+# File systems1230+#1231+CONFIG_EXT2_FS=y1232+CONFIG_EXT2_FS_XATTR=y1233+CONFIG_EXT2_FS_POSIX_ACL=y1234+CONFIG_EXT2_FS_SECURITY=y1235+# CONFIG_EXT2_FS_XIP is not set1236+CONFIG_EXT3_FS=y1237+CONFIG_EXT3_FS_XATTR=y1238+CONFIG_EXT3_FS_POSIX_ACL=y1239+CONFIG_EXT3_FS_SECURITY=y1240+# CONFIG_EXT4_FS is not set1241+CONFIG_JBD=y1242+CONFIG_FS_MBCACHE=y1243+CONFIG_REISERFS_FS=y1244+# CONFIG_REISERFS_CHECK is not set1245+# CONFIG_REISERFS_PROC_INFO is 
not set1246+CONFIG_REISERFS_FS_XATTR=y1247+CONFIG_REISERFS_FS_POSIX_ACL=y1248+CONFIG_REISERFS_FS_SECURITY=y1249+# CONFIG_JFS_FS is not set1250+CONFIG_FS_POSIX_ACL=y1251+CONFIG_FILE_LOCKING=y1252+CONFIG_XFS_FS=y1253+# CONFIG_XFS_QUOTA is not set1254+# CONFIG_XFS_POSIX_ACL is not set1255+# CONFIG_XFS_RT is not set1256+# CONFIG_XFS_DEBUG is not set1257+# CONFIG_GFS2_FS is not set1258+# CONFIG_OCFS2_FS is not set1259+# CONFIG_BTRFS_FS is not set1260+CONFIG_DNOTIFY=y1261+CONFIG_INOTIFY=y1262+CONFIG_INOTIFY_USER=y1263+# CONFIG_QUOTA is not set1264+CONFIG_AUTOFS_FS=y1265+CONFIG_AUTOFS4_FS=y1266+# CONFIG_FUSE_FS is not set1267+1268+#1269+# CD-ROM/DVD Filesystems1270+#1271+CONFIG_ISO9660_FS=m1272+CONFIG_JOLIET=y1273+# CONFIG_ZISOFS is not set1274+CONFIG_UDF_FS=m1275+CONFIG_UDF_NLS=y1276+1277+#1278+# DOS/FAT/NT Filesystems1279+#1280+CONFIG_FAT_FS=y1281+# CONFIG_MSDOS_FS is not set1282+CONFIG_VFAT_FS=y1283+CONFIG_FAT_DEFAULT_CODEPAGE=4371284+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"1285+CONFIG_NTFS_FS=m1286+# CONFIG_NTFS_DEBUG is not set1287+# CONFIG_NTFS_RW is not set1288+1289+#1290+# Pseudo filesystems1291+#1292+CONFIG_PROC_FS=y1293+CONFIG_PROC_KCORE=y1294+CONFIG_PROC_SYSCTL=y1295+CONFIG_PROC_PAGE_MONITOR=y1296+CONFIG_SYSFS=y1297+CONFIG_TMPFS=y1298+# CONFIG_TMPFS_POSIX_ACL is not set1299+CONFIG_HUGETLBFS=y1300+CONFIG_HUGETLB_PAGE=y1301+# CONFIG_CONFIGFS_FS is not set1302+CONFIG_MISC_FILESYSTEMS=y1303+# CONFIG_ADFS_FS is not set1304+# CONFIG_AFFS_FS is not set1305+# CONFIG_HFS_FS is not set1306+# CONFIG_HFSPLUS_FS is not set1307+# CONFIG_BEFS_FS is not set1308+# CONFIG_BFS_FS is not set1309+# CONFIG_EFS_FS is not set1310+# CONFIG_CRAMFS is not set1311+# CONFIG_SQUASHFS is not set1312+# CONFIG_VXFS_FS is not set1313+# CONFIG_MINIX_FS is not set1314+# CONFIG_OMFS_FS is not set1315+# CONFIG_HPFS_FS is not set1316+# CONFIG_QNX4FS_FS is not set1317+# CONFIG_ROMFS_FS is not set1318+# CONFIG_SYSV_FS is not set1319+# CONFIG_UFS_FS is not set1320+CONFIG_NETWORK_FILESYSTEMS=y1321+CONFIG_NFS_FS=m1322+CONFIG_NFS_V3=y1323+# CONFIG_NFS_V3_ACL is not set1324+CONFIG_NFS_V4=y1325+CONFIG_NFSD=m1326+CONFIG_NFSD_V3=y1327+# CONFIG_NFSD_V3_ACL is not set1328+CONFIG_NFSD_V4=y1329+CONFIG_LOCKD=m1330+CONFIG_LOCKD_V4=y1331+CONFIG_EXPORTFS=m1332+CONFIG_NFS_COMMON=y1333+CONFIG_SUNRPC=m1334+CONFIG_SUNRPC_GSS=m1335+# CONFIG_SUNRPC_REGISTER_V4 is not set1336+CONFIG_RPCSEC_GSS_KRB5=m1337+# CONFIG_RPCSEC_GSS_SPKM3 is not set1338+CONFIG_SMB_FS=m1339+CONFIG_SMB_NLS_DEFAULT=y1340+CONFIG_SMB_NLS_REMOTE="cp437"1341+CONFIG_CIFS=m1342+# CONFIG_CIFS_STATS is not set1343+# CONFIG_CIFS_WEAK_PW_HASH is not set1344+# CONFIG_CIFS_XATTR is not set1345+# CONFIG_CIFS_DEBUG2 is not set1346+# CONFIG_CIFS_EXPERIMENTAL is not set1347+# CONFIG_NCP_FS is not set1348+# CONFIG_CODA_FS is not set1349+# CONFIG_AFS_FS is not set1350+1351+#1352+# Partition Types1353+#1354+CONFIG_PARTITION_ADVANCED=y1355+# CONFIG_ACORN_PARTITION is not set1356+# CONFIG_OSF_PARTITION is not set1357+# CONFIG_AMIGA_PARTITION is not set1358+# CONFIG_ATARI_PARTITION is not set1359+# CONFIG_MAC_PARTITION is not set1360+CONFIG_MSDOS_PARTITION=y1361+# CONFIG_BSD_DISKLABEL is not set1362+# CONFIG_MINIX_SUBPARTITION is not set1363+# CONFIG_SOLARIS_X86_PARTITION is not set1364+# CONFIG_UNIXWARE_DISKLABEL is not set1365+# CONFIG_LDM_PARTITION is not set1366+CONFIG_SGI_PARTITION=y1367+# CONFIG_ULTRIX_PARTITION is not set1368+# CONFIG_SUN_PARTITION is not set1369+# CONFIG_KARMA_PARTITION is not set1370+CONFIG_EFI_PARTITION=y1371+# CONFIG_SYSV68_PARTITION is not 
set1372+CONFIG_NLS=y1373+CONFIG_NLS_DEFAULT="iso8859-1"1374+CONFIG_NLS_CODEPAGE_437=y1375+CONFIG_NLS_CODEPAGE_737=m1376+CONFIG_NLS_CODEPAGE_775=m1377+CONFIG_NLS_CODEPAGE_850=m1378+CONFIG_NLS_CODEPAGE_852=m1379+CONFIG_NLS_CODEPAGE_855=m1380+CONFIG_NLS_CODEPAGE_857=m1381+CONFIG_NLS_CODEPAGE_860=m1382+CONFIG_NLS_CODEPAGE_861=m1383+CONFIG_NLS_CODEPAGE_862=m1384+CONFIG_NLS_CODEPAGE_863=m1385+CONFIG_NLS_CODEPAGE_864=m1386+CONFIG_NLS_CODEPAGE_865=m1387+CONFIG_NLS_CODEPAGE_866=m1388+CONFIG_NLS_CODEPAGE_869=m1389+CONFIG_NLS_CODEPAGE_936=m1390+CONFIG_NLS_CODEPAGE_950=m1391+CONFIG_NLS_CODEPAGE_932=m1392+CONFIG_NLS_CODEPAGE_949=m1393+CONFIG_NLS_CODEPAGE_874=m1394+CONFIG_NLS_ISO8859_8=m1395+CONFIG_NLS_CODEPAGE_1250=m1396+CONFIG_NLS_CODEPAGE_1251=m1397+# CONFIG_NLS_ASCII is not set1398+CONFIG_NLS_ISO8859_1=y1399+CONFIG_NLS_ISO8859_2=m1400+CONFIG_NLS_ISO8859_3=m1401+CONFIG_NLS_ISO8859_4=m1402+CONFIG_NLS_ISO8859_5=m1403+CONFIG_NLS_ISO8859_6=m1404+CONFIG_NLS_ISO8859_7=m1405+CONFIG_NLS_ISO8859_9=m1406+CONFIG_NLS_ISO8859_13=m1407+CONFIG_NLS_ISO8859_14=m1408+CONFIG_NLS_ISO8859_15=m1409+CONFIG_NLS_KOI8_R=m1410+CONFIG_NLS_KOI8_U=m1411+CONFIG_NLS_UTF8=m1412+# CONFIG_DLM is not set1413+1414+#1415+# Kernel hacking1416+#1417+# CONFIG_PRINTK_TIME is not set1418+CONFIG_ENABLE_WARN_DEPRECATED=y1419+CONFIG_ENABLE_MUST_CHECK=y1420+CONFIG_FRAME_WARN=20481421+CONFIG_MAGIC_SYSRQ=y1422+# CONFIG_UNUSED_SYMBOLS is not set1423+# CONFIG_DEBUG_FS is not set1424+# CONFIG_HEADERS_CHECK is not set1425+CONFIG_DEBUG_KERNEL=y1426+# CONFIG_DEBUG_SHIRQ is not set1427+CONFIG_DETECT_SOFTLOCKUP=y1428+# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set1429+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=01430+CONFIG_SCHED_DEBUG=y1431+# CONFIG_SCHEDSTATS is not set1432+# CONFIG_TIMER_STATS is not set1433+# CONFIG_DEBUG_OBJECTS is not set1434+# CONFIG_SLUB_DEBUG_ON is not set1435+# CONFIG_SLUB_STATS is not set1436+# CONFIG_DEBUG_RT_MUTEXES is not set1437+# CONFIG_RT_MUTEX_TESTER is not set1438+# CONFIG_DEBUG_SPINLOCK is not set1439+CONFIG_DEBUG_MUTEXES=y1440+# CONFIG_DEBUG_SPINLOCK_SLEEP is not set1441+# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set1442+# CONFIG_DEBUG_KOBJECT is not set1443+# CONFIG_DEBUG_INFO is not set1444+# CONFIG_DEBUG_VM is not set1445+# CONFIG_DEBUG_WRITECOUNT is not set1446+CONFIG_DEBUG_MEMORY_INIT=y1447+# CONFIG_DEBUG_LIST is not set1448+# CONFIG_DEBUG_SG is not set1449+# CONFIG_DEBUG_NOTIFIERS is not set1450+# CONFIG_BOOT_PRINTK_DELAY is not set1451+# CONFIG_RCU_TORTURE_TEST is not set1452+# CONFIG_RCU_CPU_STALL_DETECTOR is not set1453+# CONFIG_BACKTRACE_SELF_TEST is not set1454+# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set1455+# CONFIG_FAULT_INJECTION is not set1456+# CONFIG_SYSCTL_SYSCALL_CHECK is not set1457+1458+#1459+# Tracers1460+#1461+# CONFIG_SCHED_TRACER is not set1462+# CONFIG_CONTEXT_SWITCH_TRACER is not set1463+# CONFIG_BOOT_TRACER is not set1464+# CONFIG_TRACE_BRANCH_PROFILING is not set1465+# CONFIG_DYNAMIC_PRINTK_DEBUG is not set1466+# CONFIG_SAMPLES is not set1467+CONFIG_IA64_GRANULE_16MB=y1468+# CONFIG_IA64_GRANULE_64MB is not set1469+# CONFIG_IA64_PRINT_HAZARDS is not set1470+# CONFIG_DISABLE_VHPT is not set1471+# CONFIG_IA64_DEBUG_CMPXCHG is not set1472+# CONFIG_IA64_DEBUG_IRQ is not set1473+1474+#1475+# Security options1476+#1477+# CONFIG_KEYS is not set1478+# CONFIG_SECURITY is not set1479+# CONFIG_SECURITYFS is not set1480+# CONFIG_SECURITY_FILE_CAPABILITIES is not set1481+CONFIG_CRYPTO=y1482+1483+#1484+# Crypto core or helper1485+#1486+# CONFIG_CRYPTO_FIPS is not 
set1487+CONFIG_CRYPTO_ALGAPI=y1488+CONFIG_CRYPTO_ALGAPI2=y1489+CONFIG_CRYPTO_AEAD2=y1490+CONFIG_CRYPTO_BLKCIPHER=m1491+CONFIG_CRYPTO_BLKCIPHER2=y1492+CONFIG_CRYPTO_HASH=y1493+CONFIG_CRYPTO_HASH2=y1494+CONFIG_CRYPTO_RNG2=y1495+CONFIG_CRYPTO_MANAGER=m1496+CONFIG_CRYPTO_MANAGER2=y1497+# CONFIG_CRYPTO_GF128MUL is not set1498+# CONFIG_CRYPTO_NULL is not set1499+# CONFIG_CRYPTO_CRYPTD is not set1500+# CONFIG_CRYPTO_AUTHENC is not set1501+# CONFIG_CRYPTO_TEST is not set1502+1503+#1504+# Authenticated Encryption with Associated Data1505+#1506+# CONFIG_CRYPTO_CCM is not set1507+# CONFIG_CRYPTO_GCM is not set1508+# CONFIG_CRYPTO_SEQIV is not set1509+1510+#1511+# Block modes1512+#1513+CONFIG_CRYPTO_CBC=m1514+# CONFIG_CRYPTO_CTR is not set1515+# CONFIG_CRYPTO_CTS is not set1516+CONFIG_CRYPTO_ECB=m1517+# CONFIG_CRYPTO_LRW is not set1518+CONFIG_CRYPTO_PCBC=m1519+# CONFIG_CRYPTO_XTS is not set1520+1521+#1522+# Hash modes1523+#1524+# CONFIG_CRYPTO_HMAC is not set1525+# CONFIG_CRYPTO_XCBC is not set1526+1527+#1528+# Digest1529+#1530+# CONFIG_CRYPTO_CRC32C is not set1531+# CONFIG_CRYPTO_MD4 is not set1532+CONFIG_CRYPTO_MD5=y1533+# CONFIG_CRYPTO_MICHAEL_MIC is not set1534+# CONFIG_CRYPTO_RMD128 is not set1535+# CONFIG_CRYPTO_RMD160 is not set1536+# CONFIG_CRYPTO_RMD256 is not set1537+# CONFIG_CRYPTO_RMD320 is not set1538+# CONFIG_CRYPTO_SHA1 is not set1539+# CONFIG_CRYPTO_SHA256 is not set1540+# CONFIG_CRYPTO_SHA512 is not set1541+# CONFIG_CRYPTO_TGR192 is not set1542+# CONFIG_CRYPTO_WP512 is not set1543+1544+#1545+# Ciphers1546+#1547+# CONFIG_CRYPTO_AES is not set1548+# CONFIG_CRYPTO_ANUBIS is not set1549+# CONFIG_CRYPTO_ARC4 is not set1550+# CONFIG_CRYPTO_BLOWFISH is not set1551+# CONFIG_CRYPTO_CAMELLIA is not set1552+# CONFIG_CRYPTO_CAST5 is not set1553+# CONFIG_CRYPTO_CAST6 is not set1554+CONFIG_CRYPTO_DES=m1555+# CONFIG_CRYPTO_FCRYPT is not set1556+# CONFIG_CRYPTO_KHAZAD is not set1557+# CONFIG_CRYPTO_SALSA20 is not set1558+# CONFIG_CRYPTO_SEED is not set1559+# CONFIG_CRYPTO_SERPENT is not set1560+# CONFIG_CRYPTO_TEA is not set1561+# CONFIG_CRYPTO_TWOFISH is not set1562+1563+#1564+# Compression1565+#1566+# CONFIG_CRYPTO_DEFLATE is not set1567+# CONFIG_CRYPTO_LZO is not set1568+1569+#1570+# Random Number Generation1571+#1572+# CONFIG_CRYPTO_ANSI_CPRNG is not set1573+CONFIG_CRYPTO_HW=y1574+# CONFIG_CRYPTO_DEV_HIFN_795X is not set1575+CONFIG_HAVE_KVM=y1576+CONFIG_VIRTUALIZATION=y1577+# CONFIG_KVM is not set1578+# CONFIG_VIRTIO_PCI is not set1579+# CONFIG_VIRTIO_BALLOON is not set1580+1581+#1582+# Library routines1583+#1584+CONFIG_BITREVERSE=y1585+CONFIG_GENERIC_FIND_LAST_BIT=y1586+# CONFIG_CRC_CCITT is not set1587+# CONFIG_CRC16 is not set1588+# CONFIG_CRC_T10DIF is not set1589+CONFIG_CRC_ITU_T=m1590+CONFIG_CRC32=y1591+# CONFIG_CRC7 is not set1592+# CONFIG_LIBCRC32C is not set1593+CONFIG_PLIST=y1594+CONFIG_HAS_IOMEM=y1595+CONFIG_HAS_IOPORT=y1596+CONFIG_HAS_DMA=y1597+CONFIG_GENERIC_HARDIRQS=y1598+CONFIG_GENERIC_IRQ_PROBE=y1599+CONFIG_GENERIC_PENDING_IRQ=y1600+CONFIG_IRQ_PER_CPU=y1601+# CONFIG_IOMMU_API is not set
+4
arch/ia64/include/asm/kvm.h
···2526#include <linux/ioctl.h>27000028/* Architectural interrupt line count. */29#define KVM_NR_INTERRUPTS 25630
···2526#include <linux/ioctl.h>2728+/* Select x86 specific features in <linux/kvm.h> */29+#define __KVM_HAVE_IOAPIC30+#define __KVM_HAVE_DEVICE_ASSIGNMENT31+32/* Architectural interrupt line count. */33#define KVM_NR_INTERRUPTS 25634
-4
arch/ia64/include/asm/mmzone.h
···31#endif32}3334-#ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID35-extern int early_pfn_to_nid(unsigned long pfn);36-#endif37-38#ifdef CONFIG_IA64_DIG /* DIG systems are small */39# define MAX_PHYSNODE_ID 840# define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
···31#endif32}33000034#ifdef CONFIG_IA64_DIG /* DIG systems are small */35# define MAX_PHYSNODE_ID 836# define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
+1-1
arch/ia64/include/asm/sn/bte.h
···39/* BTE status register only supports 16 bits for length field */40#define BTE_LEN_BITS (16)41#define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1)42-#define BTE_MAX_XFER ((1 << BTE_LEN_BITS) * L1_CACHE_BYTES)434445/* Define hardware */
···39/* BTE status register only supports 16 bits for length field */40#define BTE_LEN_BITS (16)41#define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1)42+#define BTE_MAX_XFER (BTE_LEN_MASK << L1_CACHE_SHIFT)434445/* Define hardware */
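The BTE_MAX_XFER change above is an off-by-one-cache-line fix: a 16-bit length field can describe at most BTE_LEN_MASK cache lines, so the old (1 << BTE_LEN_BITS) * L1_CACHE_BYTES value was one cache line larger than the hardware can express. A minimal stand-alone check of the arithmetic; the macro values are copied from the header above, and the 128-byte cache line (L1_CACHE_SHIFT = 7) is an assumption made just for this demo:

#include <stdio.h>

/* Values mirror the header above; the 128-byte L1 line is assumed. */
#define BTE_LEN_BITS   16
#define BTE_LEN_MASK   ((1 << BTE_LEN_BITS) - 1)
#define L1_CACHE_SHIFT 7
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)

int main(void)
{
        unsigned long old_max = (1UL << BTE_LEN_BITS) * L1_CACHE_BYTES;
        unsigned long new_max = (unsigned long)BTE_LEN_MASK << L1_CACHE_SHIFT;

        /* The old limit exceeds what the 16-bit length field can encode
         * by exactly one cache line. */
        printf("old BTE_MAX_XFER = %lu\n", old_max); /* 8388608 */
        printf("new BTE_MAX_XFER = %lu\n", new_max); /* 8388480 */
        return 0;
}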
···455 if (!vmm_fpswa_interface)456 return (fpswa_ret_t) {-1, 0, 0, 0};457458- /*459- * Just let fpswa driver to use hardware fp registers.460- * No fp register is valid in memory.461- */462 memset(&fp_state, 0, sizeof(fp_state_t));463464 /*000000000465 * unsigned long (*EFI_FPSWA) (466 * unsigned long trap_type,467 * void *Bundle,···550 status = vmm_handle_fpu_swa(0, regs, isr);551 if (!status)552 return ;553- else if (-EAGAIN == status) {554- vcpu_decrement_iip(vcpu);555- return ;556- }557 break;558 }559
···455 if (!vmm_fpswa_interface)456 return (fpswa_ret_t) {-1, 0, 0, 0};4570000458 memset(&fp_state, 0, sizeof(fp_state_t));459460 /*461+ * compute fp_state. only FP registers f6 - f11 are used by the462+ * vmm, so set those bits in the mask and set the low volatile463+ * pointer to point to these registers.464+ */465+ fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */466+467+ fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;468+469+ /*470 * unsigned long (*EFI_FPSWA) (471 * unsigned long trap_type,472 * void *Bundle,···545 status = vmm_handle_fpu_swa(0, regs, isr);546 if (!status)547 return ;0000548 break;549 }550
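The 0xfc0 constant in the hunk above is easy to verify: selecting FP registers f6 through f11 means setting bits 6 through 11 of the 64-bit mask. A tiny stand-alone check (nothing below comes from the kernel except that constant):

#include <stdio.h>

int main(void)
{
        unsigned long mask = 0;
        int bit;

        /* one bit per FP register f6..f11, as the comment above states */
        for (bit = 6; bit <= 11; bit++)
                mask |= 1UL << bit;

        printf("mask = %#lx\n", mask); /* prints 0xfc0 */
        return 0;
}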
+2-2
arch/ia64/mm/numa.c
···58 * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where59 * the section resides.60 */61-int early_pfn_to_nid(unsigned long pfn)62{63 int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;64···70 return node_memblk[i].nid;71 }7273- return 0;74}7576#ifdef CONFIG_MEMORY_HOTPLUG
···58 * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where59 * the section resides.60 */61+int __meminit __early_pfn_to_nid(unsigned long pfn)62{63 int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;64···70 return node_memblk[i].nid;71 }7273+ return -1;74}7576#ifdef CONFIG_MEMORY_HOTPLUG
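Returning -1 instead of node 0 lets a generic caller distinguish "not covered by any node memblk" from a real answer and apply its own fallback. The userspace sketch below mocks that split; the node layout and the node-0 fallback in the wrapper are assumptions for illustration, not code from this patch:

#include <stdio.h>

/* Stand-in for the arch hook above: in this mock, node 1 owns the first
 * 4096 pfns (16MB with 4KB pages) and nothing else is described. */
static int __early_pfn_to_nid(unsigned long pfn)
{
        return pfn < 0x1000 ? 1 : -1;
}

/* Assumed shape of a generic wrapper: fall back to node 0 whenever the
 * arch cannot resolve the pfn. */
static int early_pfn_to_nid(unsigned long pfn)
{
        int nid = __early_pfn_to_nid(pfn);

        return nid < 0 ? 0 : nid;
}

int main(void)
{
        printf("pfn 0x10     -> node %d\n", early_pfn_to_nid(0x10));
        printf("pfn 0x100000 -> node %d\n", early_pfn_to_nid(0x100000));
        return 0;
}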
+4-3
arch/ia64/sn/kernel/bte.c
···97 return BTE_SUCCESS;98 }99100- BUG_ON((len & L1_CACHE_MASK) ||101- (src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK));102- BUG_ON(!(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT)));0103104 /*105 * Start with interface corresponding to cpu number
···97 return BTE_SUCCESS;98 }99100+ BUG_ON(len & L1_CACHE_MASK);101+ BUG_ON(src & L1_CACHE_MASK);102+ BUG_ON(dest & L1_CACHE_MASK);103+ BUG_ON(len > BTE_MAX_XFER);104105 /*106 * Start with interface corresponding to cpu number
+1-2
arch/ia64/xen/Kconfig
···8 depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL9 select XEN_XENCOMM10 select NO_IDLE_HZ11-12- # those are required to save/restore.13 select ARCH_SUSPEND_POSSIBLE14 select SUSPEND15 select PM_SLEEP
···8 depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL9 select XEN_XENCOMM10 select NO_IDLE_HZ11+ # the following are required to save/restore.012 select ARCH_SUSPEND_POSSIBLE13 select SUSPEND14 select PM_SLEEP
···60/* It should be preserving the high 48 bits and then specifically */61/* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */62#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \63- _PAGE_HPTEFLAGS)6465/* Bits to mask out from a PMD to get to the PTE page */66#define PMD_MASKED_BITS 0
···60/* It should be preserving the high 48 bits and then specifically */61/* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */62#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \63+ _PAGE_HPTEFLAGS | _PAGE_SPECIAL)6465/* Bits to mask out from a PMD to get to the PTE page */66#define PMD_MASKED_BITS 0
+1-1
arch/powerpc/include/asm/pgtable-64k.h
···114 * pgprot changes115 */116#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \117- _PAGE_ACCESSED)118119/* Bits to mask out from a PMD to get to the PTE page */120#define PMD_MASKED_BITS 0x1ff
···114 * pgprot changes115 */116#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \117+ _PAGE_ACCESSED | _PAGE_SPECIAL)118119/* Bits to mask out from a PMD to get to the PTE page */120#define PMD_MASKED_BITS 0x1ff
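Both _PAGE_CHG_MASK hunks matter for pte_modify()-style updates, which keep only the bits in the mask and replace everything else with the new protection; if _PAGE_SPECIAL is not in the mask, a protection change silently drops the special-page marker. A userspace sketch with invented bit values (the real definitions live in the headers patched above):

#include <stdio.h>

/* Illustrative bit values only; the real masks are in the pgtable
 * headers above. */
#define _PAGE_ACCESSED 0x001UL
#define _PAGE_DIRTY    0x002UL
#define _PAGE_SPECIAL  0x004UL
#define _PAGE_RW       0x100UL
#define PAGE_MASK      (~0xfffUL)

/* pte_modify() keeps the bits in chg_mask and replaces everything else
 * with the new protection bits. */
static unsigned long pte_modify(unsigned long pte, unsigned long newprot,
                                unsigned long chg_mask)
{
        return (pte & chg_mask) | newprot;
}

int main(void)
{
        unsigned long pte = 0x12345000UL | _PAGE_SPECIAL | _PAGE_DIRTY;
        unsigned long old_mask = PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY;
        unsigned long new_mask = old_mask | _PAGE_SPECIAL;

        printf("old mask: special bit %s\n",
               pte_modify(pte, _PAGE_RW, old_mask) & _PAGE_SPECIAL ?
               "kept" : "lost");
        printf("new mask: special bit %s\n",
               pte_modify(pte, _PAGE_RW, new_mask) & _PAGE_SPECIAL ?
               "kept" : "lost");
        return 0;
}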
···646 unsigned int areg, struct pt_regs *regs,647 unsigned int flags, unsigned int length)648{649- char *ptr = (char *) &current->thread.TS_FPR(reg);650 int ret = 0;651652 flush_vsx_to_thread(current);00000653654 if (flags & ST)655 ret = __copy_to_user(addr, ptr, length);
···646 unsigned int areg, struct pt_regs *regs,647 unsigned int flags, unsigned int length)648{649+ char *ptr;650 int ret = 0;651652 flush_vsx_to_thread(current);653+654+ if (reg < 32)655+ ptr = (char *) &current->thread.TS_FPR(reg);656+ else657+ ptr = (char *) &current->thread.vr[reg - 32];658659 if (flags & ST)660 ret = __copy_to_user(addr, ptr, length);
···4344extern struct mem_chunk memory_chunk[];45extern unsigned long real_memory_size;004647void detect_memory_layout(struct mem_chunk chunk[]);48
···4344extern struct mem_chunk memory_chunk[];45extern unsigned long real_memory_size;46+extern int memory_end_set;47+extern unsigned long memory_end;4849void detect_memory_layout(struct mem_chunk chunk[]);50
+7-2
arch/s390/kernel/setup.c
···8283struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS];84volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */85-static unsigned long __initdata memory_end;008687/*88 * This is set up by the setup-routine at boot-time···283static int __init early_parse_mem(char *p)284{285 memory_end = memparse(p, &p);0286 return 0;287}288early_param("mem", early_parse_mem);···511 int i;512513#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)514- if (ipl_info.type == IPL_TYPE_FCP_DUMP)515 memory_end = ZFCPDUMP_HSA_SIZE;00516#endif517 memory_size = 0;518 memory_end &= PAGE_MASK;
···8283struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS];84volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */85+86+int __initdata memory_end_set;87+unsigned long __initdata memory_end;8889/*90 * This is set up by the setup-routine at boot-time···281static int __init early_parse_mem(char *p)282{283 memory_end = memparse(p, &p);284+ memory_end_set = 1;285 return 0;286}287early_param("mem", early_parse_mem);···508 int i;509510#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)511+ if (ipl_info.type == IPL_TYPE_FCP_DUMP) {512 memory_end = ZFCPDUMP_HSA_SIZE;513+ memory_end_set = 1;514+ }515#endif516 memory_size = 0;517 memory_end &= PAGE_MASK;
···174 Add a simple leak tracer to the IOMMU code. This is useful when you175 are debugging a buggy device driver that leaks IOMMU mappings.176177-config MMIOTRACE178- bool "Memory mapped IO tracing"179- depends on DEBUG_KERNEL && PCI180- select TRACING181- help182- Mmiotrace traces Memory Mapped I/O access and is meant for183- debugging and reverse engineering. It is called from the ioremap184- implementation and works via page faults. Tracing is disabled by185- default and can be enabled at run-time.186-187- See Documentation/tracers/mmiotrace.txt.188- If you are not helping to develop drivers, say N.189-190-config MMIOTRACE_TEST191- tristate "Test module for mmiotrace"192- depends on MMIOTRACE && m193- help194- This is a dumb module for testing mmiotrace. It is very dangerous195- as it will write garbage to IO memory starting at a given address.196- However, it should be safe to use on e.g. unused portion of VRAM.197-198- Say N, unless you absolutely know what you are doing.199200#201# IO delay types:
···174 Add a simple leak tracer to the IOMMU code. This is useful when you175 are debugging a buggy device driver that leaks IOMMU mappings.176177+config HAVE_MMIOTRACE_SUPPORT178+ def_bool y00000000000000000000179180#181# IO delay types:
+7
arch/x86/include/asm/kvm.h
···9#include <linux/types.h>10#include <linux/ioctl.h>11000000012/* Architectural interrupt line count. */13#define KVM_NR_INTERRUPTS 25614
···9#include <linux/types.h>10#include <linux/ioctl.h>1112+/* Select x86 specific features in <linux/kvm.h> */13+#define __KVM_HAVE_PIT14+#define __KVM_HAVE_IOAPIC15+#define __KVM_HAVE_DEVICE_ASSIGNMENT16+#define __KVM_HAVE_MSI17+#define __KVM_HAVE_USER_NMI18+19/* Architectural interrupt line count. */20#define KVM_NR_INTERRUPTS 25621
-2
arch/x86/include/asm/mmzone_32.h
···32 get_memcfg_numa_flat();33}3435-extern int early_pfn_to_nid(unsigned long pfn);36-37extern void resume_map_numa_kva(pgd_t *pgd);3839#else /* !CONFIG_NUMA */
···57typedef struct { pgprotval_t pgprot; } pgprot_t;5859extern int page_is_ram(unsigned long pagenr);60-extern int pagerange_is_ram(unsigned long start, unsigned long end);61extern int devmem_is_allowed(unsigned long pagenr);62extern void map_devmem(unsigned long pfn, unsigned long size,63 pgprot_t vma_prot);
···57typedef struct { pgprotval_t pgprot; } pgprot_t;5859extern int page_is_ram(unsigned long pagenr);060extern int devmem_is_allowed(unsigned long pagenr);61extern void map_devmem(unsigned long pfn, unsigned long size,62 pgprot_t vma_prot);
···13 * Hooray, we are in Long 64-bit mode (but still running in low memory)14 */15ENTRY(wakeup_long64)016 movq saved_magic, %rax17 movq $0x123456789abcdef0, %rdx18 cmpq %rdx, %rax···3435 movq saved_rip, %rax36 jmp *%rax37+ENDPROC(wakeup_long64)3839bogus_64_magic:40 jmp bogus_64_magic4142+ENTRY(do_suspend_lowlevel)0000043 subq $8, %rsp44 xorl %eax, %eax45 call save_processor_state···67 pushfq68 popq pt_regs_flags(%rax)6970+ movq $resume_point, saved_rip(%rip)7172 movq %rsp, saved_rsp73 movq %rbp, saved_rbp···78 addq $8, %rsp79 movl $3, %edi80 xorl %eax, %eax81+ call acpi_enter_sleep_state82+ /* in case something went wrong, restore the machine status and go on */83+ jmp resume_point00008485+ .align 486+resume_point:87 /* We don't restore %rax, it must be 0 anyway */88 movq $saved_context, %rax89 movq saved_context_cr4(%rax), %rbx···117 xorl %eax, %eax118 addq $8, %rsp119 jmp restore_processor_state120+ENDPROC(do_suspend_lowlevel)121+00122.data0123ENTRY(saved_rbp) .quad 0124ENTRY(saved_rsi) .quad 0125ENTRY(saved_rdi) .quad 0
+1-1
arch/x86/kernel/apic.c
···862 }863864 /* lets not touch this if we didn't frob it */865-#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(X86_MCE_INTEL)866 if (maxlvt >= 5) {867 v = apic_read(APIC_LVTTHMR);868 apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
···862 }863864 /* lets not touch this if we didn't frob it */865+#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(CONFIG_X86_MCE_INTEL)866 if (maxlvt >= 5) {867 v = apic_read(APIC_LVTTHMR);868 apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
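The apic.c fix is purely a preprocessor issue: X86_MCE_INTEL is a Kconfig symbol, so only the CONFIG_-prefixed macro is ever defined, and the unprefixed test was always false, compiling out the thermal-LVT masking on Intel MCE configurations. A small compile-and-run illustration, with the CONFIG_ macro defined by hand to stand in for Kconfig:

#include <stdio.h>

/* Pretend the kernel was configured with X86_MCE_INTEL=y; Kconfig would
 * then define the CONFIG_-prefixed macro, never the bare one. */
#define CONFIG_X86_MCE_INTEL 1

int main(void)
{
#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(X86_MCE_INTEL)
        puts("old test: thermal LVT masking compiled in");
#else
        puts("old test: thermal LVT masking silently dropped");
#endif

#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(CONFIG_X86_MCE_INTEL)
        puts("new test: thermal LVT masking compiled in");
#endif
        return 0;
}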
+7-5
arch/x86/kernel/cpu/cpufreq/powernow-k8.c
···1157 data->cpu = pol->cpu;1158 data->currpstate = HW_PSTATE_INVALID;11591160- rc = powernow_k8_cpu_init_acpi(data);1161- if (rc) {1162 /*1163 * Use the PSB BIOS structure. This is only availabe on1164 * an UP version, and is deprecated by AMD.···1175 "ACPI maintainers and complain to your BIOS "1176 "vendor.\n");1177#endif1178- goto err_out;01179 }1180 if (pol->cpu != 0) {1181 printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for "1182 "CPU other than CPU0. Complain to your BIOS "1183 "vendor.\n");1184- goto err_out;01185 }1186 rc = find_psb_table(data);1187 if (rc) {1188- goto err_out;01189 }1190 /* Take a crude guess here.1191 * That guess was in microseconds, so multiply with 1000 */
···1157 data->cpu = pol->cpu;1158 data->currpstate = HW_PSTATE_INVALID;11591160+ if (powernow_k8_cpu_init_acpi(data)) {01161 /*1162 * Use the PSB BIOS structure. This is only availabe on1163 * an UP version, and is deprecated by AMD.···1176 "ACPI maintainers and complain to your BIOS "1177 "vendor.\n");1178#endif1179+ kfree(data);1180+ return -ENODEV;1181 }1182 if (pol->cpu != 0) {1183 printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for "1184 "CPU other than CPU0. Complain to your BIOS "1185 "vendor.\n");1186+ kfree(data);1187+ return -ENODEV;1188 }1189 rc = find_psb_table(data);1190 if (rc) {1191+ kfree(data);1192+ return -ENODEV;1193 }1194 /* Take a crude guess here.1195 * That guess was in microseconds, so multiply with 1000 */
+4-3
arch/x86/kernel/cpu/mcheck/mce_64.c
···295 * If we know that the error was in user space, send a296 * SIGBUS. Otherwise, panic if tolerance is low.297 *298- * do_exit() takes an awful lot of locks and has a slight299 * risk of deadlocking.300 */301 if (user_space) {302- do_exit(SIGBUS);303 } else if (panic_on_oops || tolerant < 2) {304 mce_panic("Uncorrected machine check",305 &panicm, mcestart);···490491}492493-static void __cpuinit mce_cpu_features(struct cpuinfo_x86 *c)494{495 switch (c->x86_vendor) {496 case X86_VENDOR_INTEL:···734static int mce_resume(struct sys_device *dev)735{736 mce_init(NULL);0737 return 0;738}739
···295 * If we know that the error was in user space, send a296 * SIGBUS. Otherwise, panic if tolerance is low.297 *298+ * force_sig() takes an awful lot of locks and has a slight299 * risk of deadlocking.300 */301 if (user_space) {302+ force_sig(SIGBUS, current);303 } else if (panic_on_oops || tolerant < 2) {304 mce_panic("Uncorrected machine check",305 &panicm, mcestart);···490491}492493+static void mce_cpu_features(struct cpuinfo_x86 *c)494{495 switch (c->x86_vendor) {496 case X86_VENDOR_INTEL:···734static int mce_resume(struct sys_device *dev)735{736 mce_init(NULL);737+ mce_cpu_features(&current_cpu_data);738 return 0;739}740
+1-1
arch/x86/kernel/cpu/mcheck/mce_amd_64.c
···121}122123/* cpu init entry point, called from mce.c with preempt off */124-void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c)125{126 unsigned int bank, block;127 unsigned int cpu = smp_processor_id();
···121}122123/* cpu init entry point, called from mce.c with preempt off */124+void mce_amd_feature_init(struct cpuinfo_x86 *c)125{126 unsigned int bank, block;127 unsigned int cpu = smp_processor_id();
···269 now = hpet_readl(HPET_COUNTER);270 cmp = now + (unsigned long) delta;271 cfg = hpet_readl(HPET_Tn_CFG(timer));272+ /* Make sure we use edge triggered interrupts */273+ cfg &= ~HPET_TN_LEVEL;274 cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |275 HPET_TN_SETVAL | HPET_TN_32BIT;276 hpet_writel(cfg, HPET_Tn_CFG(timer));
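The two added hpet lines follow the usual clear-then-set pattern for a mode register: the level-trigger bit is cleared before the enable/periodic/setval bits are OR-ed in, so whatever the firmware left in the comparator cannot keep it level-triggered. A stand-alone demo of the bit manipulation; the bit values below are illustrative stand-ins for the <asm/hpet.h> definitions:

#include <stdio.h>

/* Illustrative stand-ins for the timer-configuration bits. */
#define HPET_TN_LEVEL    0x0002
#define HPET_TN_ENABLE   0x0004
#define HPET_TN_PERIODIC 0x0008
#define HPET_TN_SETVAL   0x0040
#define HPET_TN_32BIT    0x0100

int main(void)
{
        /* Pretend the firmware left the comparator level-triggered. */
        unsigned long cfg = HPET_TN_LEVEL | HPET_TN_32BIT;

        cfg &= ~HPET_TN_LEVEL;                  /* force edge triggering */
        cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
               HPET_TN_SETVAL | HPET_TN_32BIT;

        printf("cfg = %#lx, level bit %s\n", cfg,
               (cfg & HPET_TN_LEVEL) ? "still set" : "cleared");
        return 0;
}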
+1-1
arch/x86/kernel/olpc.c
···203static void __init platform_detect(void)204{205 /* stopgap until OFW support is added to the kernel */206- olpc_platform_info.boardrev = 0xc2;207}208#endif209
···203static void __init platform_detect(void)204{205 /* stopgap until OFW support is added to the kernel */206+ olpc_platform_info.boardrev = olpc_board(0xc2);207}208#endif209
···810811static void ptrace_bts_detach(struct task_struct *child)812{813+ /*814+ * Ptrace_detach() races with ptrace_untrace() in case815+ * the child dies and is reaped by another thread.816+ *817+ * We only do the memory accounting at this point and818+ * leave the buffer deallocation and the bts tracer819+ * release to ptrace_bts_untrace() which will be called820+ * later on with tasklist_lock held.821+ */822+ release_locked_buffer(child->bts_buffer, child->bts_size);823}824#else825static inline void ptrace_bts_fork(struct task_struct *tsk) {}
···1600 /* Okay, we can deliver the interrupt: grab it and update PIC state. */1601 intr_vector = kvm_cpu_get_interrupt(vcpu);1602 svm_inject_irq(svm, intr_vector);1603- kvm_timer_intr_post(vcpu, intr_vector);1604out:1605 update_cr8_intercept(vcpu);1606}
···1600 /* Okay, we can deliver the interrupt: grab it and update PIC state. */1601 intr_vector = kvm_cpu_get_interrupt(vcpu);1602 svm_inject_irq(svm, intr_vector);01603out:1604 update_cr8_intercept(vcpu);1605}
+2-3
arch/x86/kvm/vmx.c
···903 data = vmcs_readl(GUEST_SYSENTER_ESP);904 break;905 default:0906 msr = find_msr_entry(to_vmx(vcpu), msr_index);907 if (msr) {908 data = msr->data;···3286 }3287 if (vcpu->arch.interrupt.pending) {3288 vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);3289- kvm_timer_intr_post(vcpu, vcpu->arch.interrupt.nr);3290 if (kvm_cpu_has_interrupt(vcpu))3291 enable_irq_window(vcpu);3292 }···3687 if (vm_need_ept()) {3688 bypass_guest_pf = 0;3689 kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK |3690- VMX_EPT_WRITABLE_MASK |3691- VMX_EPT_IGMT_BIT);3692 kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull,3693 VMX_EPT_EXECUTABLE_MASK,3694 VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);
···903 data = vmcs_readl(GUEST_SYSENTER_ESP);904 break;905 default:906+ vmx_load_host_state(to_vmx(vcpu));907 msr = find_msr_entry(to_vmx(vcpu), msr_index);908 if (msr) {909 data = msr->data;···3285 }3286 if (vcpu->arch.interrupt.pending) {3287 vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);03288 if (kvm_cpu_has_interrupt(vcpu))3289 enable_irq_window(vcpu);3290 }···3687 if (vm_need_ept()) {3688 bypass_guest_pf = 0;3689 kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK |3690+ VMX_EPT_WRITABLE_MASK);03691 kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull,3692 VMX_EPT_EXECUTABLE_MASK,3693 VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);
+8-2
arch/x86/kvm/x86.c
···967 case KVM_CAP_MMU_SHADOW_CACHE_CONTROL:968 case KVM_CAP_SET_TSS_ADDR:969 case KVM_CAP_EXT_CPUID:970- case KVM_CAP_CLOCKSOURCE:971 case KVM_CAP_PIT:972 case KVM_CAP_NOP_IO_DELAY:973 case KVM_CAP_MP_STATE:···990 break;991 case KVM_CAP_IOMMU:992 r = iommu_found();000993 break;994 default:995 r = 0;···41294130}41314132-void kvm_arch_destroy_vm(struct kvm *kvm)4133{4134 kvm_free_all_assigned_devices(kvm);00004135 kvm_iommu_unmap_guest(kvm);4136 kvm_free_pit(kvm);4137 kfree(kvm->arch.vpic);
···967 case KVM_CAP_MMU_SHADOW_CACHE_CONTROL:968 case KVM_CAP_SET_TSS_ADDR:969 case KVM_CAP_EXT_CPUID:0970 case KVM_CAP_PIT:971 case KVM_CAP_NOP_IO_DELAY:972 case KVM_CAP_MP_STATE:···991 break;992 case KVM_CAP_IOMMU:993 r = iommu_found();994+ break;995+ case KVM_CAP_CLOCKSOURCE:996+ r = boot_cpu_has(X86_FEATURE_CONSTANT_TSC);997 break;998 default:999 r = 0;···41274128}41294130+void kvm_arch_sync_events(struct kvm *kvm)4131{4132 kvm_free_all_assigned_devices(kvm);4133+}4134+4135+void kvm_arch_destroy_vm(struct kvm *kvm)4136+{4137 kvm_iommu_unmap_guest(kvm);4138 kvm_free_pit(kvm);4139 kfree(kvm->arch.vpic);
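With KVM_CAP_CLOCKSOURCE now reported only when the host TSC is constant, userspace should probe the capability instead of assuming the kvmclock interface is always there. A minimal probe, assuming <linux/kvm.h> and a /dev/kvm node are available on the build host:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int ret;

        if (kvm < 0) {
                perror("open /dev/kvm");
                return 1;
        }

        /* Returns > 0 only when the capability is present; after the
         * change above that means the host has a constant TSC. */
        ret = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_CLOCKSOURCE);
        printf("KVM_CAP_CLOCKSOURCE: %s\n", ret > 0 ? "available" : "absent");
        return 0;
}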
-19
arch/x86/mm/ioremap.c
···134 return 0;135}136137-int pagerange_is_ram(unsigned long start, unsigned long end)138-{139- int ram_page = 0, not_rampage = 0;140- unsigned long page_nr;141-142- for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT);143- ++page_nr) {144- if (page_is_ram(page_nr))145- ram_page = 1;146- else147- not_rampage = 1;148-149- if (ram_page == not_rampage)150- return -1;151- }152-153- return ram_page;154-}155-156/*157 * Fix up the linear direct mapping of the kernel to avoid cache attribute158 * conflicts.
···134 return 0;135}1360000000000000000000137/*138 * Fix up the linear direct mapping of the kernel to avoid cache attribute139 * conflicts.
+1-1
arch/x86/mm/numa_64.c
···145 return shift;146}147148-int early_pfn_to_nid(unsigned long pfn)149{150 return phys_to_nid(pfn << PAGE_SHIFT);151}
···508#endif509510 /*511- * Install the new, split up pagetable. Important details here:512 *513- * On Intel the NX bit of all levels must be cleared to make a514- * page executable. See section 4.13.2 of Intel 64 and IA-32515- * Architectures Software Developer's Manual).516- *517- * Mark the entry present. The current mapping might be518- * set to not present, which we preserved above.519 */520- ref_prot = pte_pgprot(pte_mkexec(pte_clrhuge(*kpte)));521- pgprot_val(ref_prot) |= _PAGE_PRESENT;522- __set_pmd_pte(kpte, address, mk_pte(base, ref_prot));523 base = NULL;524525out_unlock:···570 address = cpa->vaddr[cpa->curpage];571 else572 address = *cpa->vaddr;573-574repeat:575 kpte = lookup_address(address, &level);576 if (!kpte)···806807 vm_unmap_aliases();8080000000809 cpa.vaddr = addr;810 cpa.numpages = numpages;811 cpa.mask_set = mask_set;···854 cpa_flush_range(*addr, numpages, cache);855 } else856 cpa_flush_all(cache);0000000857858out:859 return ret;
···508#endif509510 /*511+ * Install the new, split up pagetable.512 *513+ * We use the standard kernel pagetable protections for the new514+ * pagetable protections, the actual ptes set above control the515+ * primary protection behavior:000516 */517+ __set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE)));00518 base = NULL;519520out_unlock:···575 address = cpa->vaddr[cpa->curpage];576 else577 address = *cpa->vaddr;0578repeat:579 kpte = lookup_address(address, &level);580 if (!kpte)···812813 vm_unmap_aliases();814815+ /*816+ * If we're called with lazy mmu updates enabled, the817+ * in-memory pte state may be stale. Flush pending updates to818+ * bring them up to date.819+ */820+ arch_flush_lazy_mmu_mode();821+822 cpa.vaddr = addr;823 cpa.numpages = numpages;824 cpa.mask_set = mask_set;···853 cpa_flush_range(*addr, numpages, cache);854 } else855 cpa_flush_all(cache);856+857+ /*858+ * If we've been called with lazy mmu updates enabled, then859+ * make sure that everything gets flushed out before we860+ * return.861+ */862+ arch_flush_lazy_mmu_mode();863864out:865 return ret;
+45-38
arch/x86/mm/pat.c
···211static struct memtype *cached_entry;212static u64 cached_start;213000000000000000000000000000214/*215 * For RAM pages, mark the pages as non WB memory type using216 * PageNonWB (PG_arch_1). We allow only one set_memory_uc() or···363 if (new_type)364 *new_type = actual_type;365366- /*367- * For legacy reasons, some parts of the physical address range in the368- * legacy 1MB region is treated as non-RAM (even when listed as RAM in369- * the e820 tables). So we will track the memory attributes of this370- * legacy 1MB region using the linear memtype_list always.371- */372- if (end >= ISA_END_ADDRESS) {373- is_range_ram = pagerange_is_ram(start, end);374- if (is_range_ram == 1)375- return reserve_ram_pages_type(start, end, req_type,376- new_type);377- else if (is_range_ram < 0)378- return -EINVAL;379- }380381 new = kmalloc(sizeof(struct memtype), GFP_KERNEL);382 if (!new)···465 if (is_ISA_range(start, end - 1))466 return 0;467468- /*469- * For legacy reasons, some parts of the physical address range in the470- * legacy 1MB region is treated as non-RAM (even when listed as RAM in471- * the e820 tables). So we will track the memory attributes of this472- * legacy 1MB region using the linear memtype_list always.473- */474- if (end >= ISA_END_ADDRESS) {475- is_range_ram = pagerange_is_ram(start, end);476- if (is_range_ram == 1)477- return free_ram_pages_type(start, end);478- else if (is_range_ram < 0)479- return -EINVAL;480- }481482 spin_lock(&memtype_lock);483 list_for_each_entry(entry, &memtype_list, nd) {···637 unsigned long flags;638 unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);639640- is_ram = pagerange_is_ram(paddr, paddr + size);641642- if (is_ram != 0) {643- /*644- * For mapping RAM pages, drivers need to call645- * set_memory_[uc|wc|wb] directly, for reserve and free, before646- * setting up the PTE.647- */648- WARN_ON_ONCE(1);649- return 0;650- }651652 ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);653 if (ret)···700{701 int is_ram;702703- is_ram = pagerange_is_ram(paddr, paddr + size);704 if (is_ram == 0)705 free_memtype(paddr, paddr + size);706}
···211static struct memtype *cached_entry;212static u64 cached_start;213214+static int pat_pagerange_is_ram(unsigned long start, unsigned long end)215+{216+ int ram_page = 0, not_rampage = 0;217+ unsigned long page_nr;218+219+ for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT);220+ ++page_nr) {221+ /*222+ * For legacy reasons, physical address range in the legacy ISA223+ * region is tracked as non-RAM. This will allow users of224+ * /dev/mem to map portions of legacy ISA region, even when225+ * some of those portions are listed(or not even listed) with226+ * different e820 types(RAM/reserved/..)227+ */228+ if (page_nr >= (ISA_END_ADDRESS >> PAGE_SHIFT) &&229+ page_is_ram(page_nr))230+ ram_page = 1;231+ else232+ not_rampage = 1;233+234+ if (ram_page == not_rampage)235+ return -1;236+ }237+238+ return ram_page;239+}240+241/*242 * For RAM pages, mark the pages as non WB memory type using243 * PageNonWB (PG_arch_1). We allow only one set_memory_uc() or···336 if (new_type)337 *new_type = actual_type;338339+ is_range_ram = pat_pagerange_is_ram(start, end);340+ if (is_range_ram == 1)341+ return reserve_ram_pages_type(start, end, req_type,342+ new_type);343+ else if (is_range_ram < 0)344+ return -EINVAL;00000000345346 new = kmalloc(sizeof(struct memtype), GFP_KERNEL);347 if (!new)···446 if (is_ISA_range(start, end - 1))447 return 0;448449+ is_range_ram = pat_pagerange_is_ram(start, end);450+ if (is_range_ram == 1)451+ return free_ram_pages_type(start, end);452+ else if (is_range_ram < 0)453+ return -EINVAL;00000000454455 spin_lock(&memtype_lock);456 list_for_each_entry(entry, &memtype_list, nd) {···626 unsigned long flags;627 unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);628629+ is_ram = pat_pagerange_is_ram(paddr, paddr + size);630631+ /*632+ * reserve_pfn_range() doesn't support RAM pages.633+ */634+ if (is_ram != 0)635+ return -EINVAL;0000636637 ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);638 if (ret)···693{694 int is_ram;695696+ is_ram = pat_pagerange_is_ram(paddr, paddr + size);697 if (is_ram == 0)698 free_memtype(paddr, paddr + size);699}
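pat_pagerange_is_ram() above answers with three values: 1 when the whole range is RAM, 0 when none of it is (everything inside the legacy ISA window counts as non-RAM), and -1 for a mixed range, which the callers turn into -EINVAL. The mock below reproduces that contract in userspace; page_is_ram() and the memory layout are invented for the demo:

#include <stdio.h>

#define PAGE_SHIFT      12
#define ISA_END_ADDRESS 0x100000UL      /* end of the legacy ISA window */

/* Fake e820 view for the demo: RAM everywhere except a hole at 2MB-3MB. */
static int page_is_ram(unsigned long page_nr)
{
        unsigned long addr = page_nr << PAGE_SHIFT;

        return !(addr >= 0x200000UL && addr < 0x300000UL);
}

/* Same shape as pat_pagerange_is_ram() above: 1 = all RAM, 0 = no RAM,
 * -1 = mixed. */
static int pagerange_is_ram(unsigned long start, unsigned long end)
{
        int ram_page = 0, not_rampage = 0;
        unsigned long page_nr;

        for (page_nr = start >> PAGE_SHIFT; page_nr < (end >> PAGE_SHIFT);
             ++page_nr) {
                if (page_nr >= (ISA_END_ADDRESS >> PAGE_SHIFT) &&
                    page_is_ram(page_nr))
                        ram_page = 1;
                else
                        not_rampage = 1;

                if (ram_page == not_rampage)
                        return -1;
        }
        return ram_page;
}

int main(void)
{
        printf("0-1MB     : %d\n", pagerange_is_ram(0x000000, 0x100000)); /* 0  */
        printf("1-2MB     : %d\n", pagerange_is_ram(0x100000, 0x200000)); /* 1  */
        printf("1.5-2.5MB : %d\n", pagerange_is_ram(0x180000, 0x280000)); /* -1 */
        return 0;
}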
···209{210 unsigned long flags;211 struct request *rq, *tmp;212+ LIST_HEAD(list);213214 spin_lock_irqsave(q->queue_lock, flags);215216 elv_abort_queue(q);217218+ /*219+ * Splice entries to local list, to avoid deadlocking if entries220+ * get readded to the timeout list by error handling221+ */222+ list_splice_init(&q->timeout_list, &list);223+224+ list_for_each_entry_safe(rq, tmp, &list, timeout_list)225 blk_abort_request(rq);226227 spin_unlock_irqrestore(q->queue_lock, flags);
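The blk_abort_queue() change is the classic splice-to-a-private-list idiom: detach everything first, then walk the private copy, so that error handling re-adding requests to q->timeout_list cannot corrupt or extend the walk. A userspace miniature of the same idea; the request structure and all names are invented for illustration, not the kernel list API:

#include <stdio.h>
#include <stdlib.h>

struct req {
        int id;
        struct req *next;
};

static struct req *timeout_list;

static void requeue(int id)
{
        struct req *r = malloc(sizeof(*r));

        r->id = id;
        r->next = timeout_list;
        timeout_list = r;
}

static void abort_request(struct req *r)
{
        printf("aborting %d\n", r->id);
        if (r->id == 1)
                requeue(100);   /* error handling re-adds an entry */
}

int main(void)
{
        struct req *local;

        requeue(2);
        requeue(1);

        /* Splice to a local list first, as blk_abort_queue() now does. */
        local = timeout_list;
        timeout_list = NULL;

        while (local) {
                struct req *next = local->next;

                abort_request(local);
                free(local);
                local = next;
        }
        /* Anything re-added during the walk stays queued for the next
         * pass instead of being visited (or corrupted) mid-iteration. */
        printf("left on timeout_list: %s\n", timeout_list ? "yes" : "no");
        return 0;
}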
+1-1
block/blktrace.c
···142143 what |= ddir_act[rw & WRITE];144 what |= MASK_TC_BIT(rw, BARRIER);145- what |= MASK_TC_BIT(rw, SYNC);146 what |= MASK_TC_BIT(rw, AHEAD);147 what |= MASK_TC_BIT(rw, META);148 what |= MASK_TC_BIT(rw, DISCARD);
···142143 what |= ddir_act[rw & WRITE];144 what |= MASK_TC_BIT(rw, BARRIER);145+ what |= MASK_TC_BIT(rw, SYNCIO);146 what |= MASK_TC_BIT(rw, AHEAD);147 what |= MASK_TC_BIT(rw, META);148 what |= MASK_TC_BIT(rw, DISCARD);
+10-7
block/bsg.c
···244 * map sg_io_v4 to a request.245 */246static struct request *247-bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm)0248{249 struct request_queue *q = bd->queue;250 struct request *rq, *next_rq = NULL;···307 if (ret)308 goto out;309 }0000310 return rq;311out:312 if (rq->cmd != rq->__cmd)···353static void bsg_add_command(struct bsg_device *bd, struct request_queue *q,354 struct bsg_command *bc, struct request *rq)355{356- rq->sense = bc->sense;357- rq->sense_len = 0;358-359 /*360 * add bc command to busy queue and submit rq for io361 */···421{422 int ret = 0;423424- dprintk("rq %p bio %p %u\n", rq, bio, rq->errors);425 /*426 * fill in all the output members427 */···637 /*638 * get a request, fill in the blanks, and add to request queue639 */640- rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm);641 if (IS_ERR(rq)) {642 ret = PTR_ERR(rq);643 rq = NULL;···924 struct request *rq;925 struct bio *bio, *bidi_bio = NULL;926 struct sg_io_v4 hdr;0927928 if (copy_from_user(&hdr, uarg, sizeof(hdr)))929 return -EFAULT;930931- rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE);932 if (IS_ERR(rq))933 return PTR_ERR(rq);934
···244 * map sg_io_v4 to a request.245 */246static struct request *247+bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,248+ u8 *sense)249{250 struct request_queue *q = bd->queue;251 struct request *rq, *next_rq = NULL;···306 if (ret)307 goto out;308 }309+310+ rq->sense = sense;311+ rq->sense_len = 0;312+313 return rq;314out:315 if (rq->cmd != rq->__cmd)···348static void bsg_add_command(struct bsg_device *bd, struct request_queue *q,349 struct bsg_command *bc, struct request *rq)350{000351 /*352 * add bc command to busy queue and submit rq for io353 */···419{420 int ret = 0;421422+ dprintk("rq %p bio %p 0x%x\n", rq, bio, rq->errors);423 /*424 * fill in all the output members425 */···635 /*636 * get a request, fill in the blanks, and add to request queue637 */638+ rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm, bc->sense);639 if (IS_ERR(rq)) {640 ret = PTR_ERR(rq);641 rq = NULL;···922 struct request *rq;923 struct bio *bio, *bidi_bio = NULL;924 struct sg_io_v4 hdr;925+ u8 sense[SCSI_SENSE_BUFFERSIZE];926927 if (copy_from_user(&hdr, uarg, sizeof(hdr)))928 return -EFAULT;929930+ rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE, sense);931 if (IS_ERR(rq))932 return PTR_ERR(rq);933
+8
block/genhd.c
···1087 if (strcmp(dev_name(dev), name))1088 continue;1089000000001090 part = disk_get_part(disk, partno);1091 if (part) {1092 devt = part_devt(part);
···1087 if (strcmp(dev_name(dev), name))1088 continue;10891090+ if (partno < disk->minors) {1091+ /* We need to return the right devno, even1092+ * if the partition doesn't exist yet.1093+ */1094+ devt = MKDEV(MAJOR(dev->devt),1095+ MINOR(dev->devt) + partno);1096+ break;1097+ }1098 part = disk_get_part(disk, partno);1099 if (part) {1100 devt = part_devt(part);
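The genhd.c hunk relies on the fact that a partition inside the disk's preallocated minor range has a fixed devno: the disk's major, plus the disk's minor offset by the partition number. A worked example of that arithmetic using the kernel's 12/20-bit dev_t split; the 8:0 disk with 16 minors is just an illustration:

#include <stdio.h>

/* Kernel dev_t layout: 12-bit major above a 20-bit minor. */
#define MINORBITS       20
#define MKDEV(ma, mi)   (((ma) << MINORBITS) | (mi))
#define MAJOR(dev)      ((unsigned int)((dev) >> MINORBITS))
#define MINOR(dev)      ((unsigned int)((dev) & ((1U << MINORBITS) - 1)))

int main(void)
{
        /* Illustrative disk: block 8:0, registered with 16 minors, so
         * partition 3 maps to 8:3 even before the partition exists. */
        unsigned int disk_devt = MKDEV(8, 0);
        int partno = 3;
        unsigned int devt;

        devt = MKDEV(MAJOR(disk_devt), MINOR(disk_devt) + partno);
        printf("partition devt = %u:%u\n", MAJOR(devt), MINOR(devt));
        return 0;
}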
+7-1
crypto/lrw.c
···4546static inline void setbit128_bbe(void *b, int bit)47{48- __set_bit(bit ^ 0x78, b);00000049}5051static int setkey(struct crypto_tfm *parent, const u8 *key,
···254 help you correlate PCI bus addresses with the physical geography255 of your slots. If you are unsure, say N.256257-config ACPI_SYSTEM258- bool259- default y260- help261- This driver will enable your system to shut down using ACPI, and262- dump your ACPI DSDT table using /proc/acpi/dsdt.263-264config X86_PM_TIMER265 bool "Power Management Timer Support" if EMBEDDED266 depends on X86
···254 help you correlate PCI bus addresses with the physical geography255 of your slots. If you are unsure, say N.2560000000257config X86_PM_TIMER258 bool "Power Management Timer Support" if EMBEDDED259 depends on X86
···138139static int acpi_battery_get_state(struct acpi_battery *battery);14000000000000000000000000141static int acpi_battery_get_property(struct power_supply *psy,142 enum power_supply_property psp,143 union power_supply_propval *val)···178 val->intval = POWER_SUPPLY_STATUS_DISCHARGING;179 else if (battery->state & 0x02)180 val->intval = POWER_SUPPLY_STATUS_CHARGING;181- else if (battery->state == 0)182 val->intval = POWER_SUPPLY_STATUS_FULL;183 else184 val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
···138139static int acpi_battery_get_state(struct acpi_battery *battery);140141+static int acpi_battery_is_charged(struct acpi_battery *battery)142+{143+ /* either charging or discharging */144+ if (battery->state != 0)145+ return 0;146+147+ /* battery not reporting charge */148+ if (battery->capacity_now == ACPI_BATTERY_VALUE_UNKNOWN ||149+ battery->capacity_now == 0)150+ return 0;151+152+ /* good batteries update full_charge as the batteries degrade */153+ if (battery->full_charge_capacity == battery->capacity_now)154+ return 1;155+156+ /* fallback to using design values for broken batteries */157+ if (battery->design_capacity == battery->capacity_now)158+ return 1;159+160+ /* we don't do any sort of metric based on percentages */161+ return 0;162+}163+164static int acpi_battery_get_property(struct power_supply *psy,165 enum power_supply_property psp,166 union power_supply_propval *val)···155 val->intval = POWER_SUPPLY_STATUS_DISCHARGING;156 else if (battery->state & 0x02)157 val->intval = POWER_SUPPLY_STATUS_CHARGING;158+ else if (acpi_battery_is_charged(battery))159 val->intval = POWER_SUPPLY_STATUS_FULL;160 else161 val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
···773 else774 iowrite32_rep(data_addr, buf, words);775776+ /* Transfer trailing bytes, if any */777 if (unlikely(slop)) {778+ unsigned char pad[4];779+780+ /* Point buf to the tail of buffer */781+ buf += buflen - slop;782+783+ /*784+ * Use io*_rep() accessors here as well to avoid pointlessly785+ * swapping bytes to and fro on the big endian machines...786+ */787 if (rw == READ) {788+ if (slop < 3)789+ ioread16_rep(data_addr, pad, 1);790+ else791+ ioread32_rep(data_addr, pad, 1);792+ memcpy(buf, pad, slop);793 } else {794+ memcpy(pad, buf, slop);795+ if (slop < 3)796+ iowrite16_rep(data_addr, pad, 1);797+ else798+ iowrite32_rep(data_addr, pad, 1);799 }0800 }801+ return (buflen + 1) & ~1;802}803EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);804
···421 .hardreset = ATA_OP_NULL,422};423424-/* OSDL bz3352 reports that nf2/3 controllers can't determine device425- * signature reliably. Also, the following thread reports detection426- * failure on cold boot with the standard debouncing timing.00427 *428 * http://thread.gmane.org/gmane.linux.ide/34098429 *430- * Debounce with hotplug timing and request follow-up SRST.0431 */432static struct ata_port_operations nv_nf2_ops = {433- .inherits = &nv_common_ops,434 .freeze = nv_nf2_freeze,435 .thaw = nv_nf2_thaw,436- .hardreset = nv_noclassify_hardreset,437};438439/* For initial probing after boot and hot plugging, hardreset mostly
···421 .hardreset = ATA_OP_NULL,422};423424+/* nf2 is ripe with hardreset related problems.425+ *426+ * kernel bz#3352 reports nf2/3 controllers can't determine device427+ * signature reliably. The following thread reports detection failure428+ * on cold boot with the standard debouncing timing.429 *430 * http://thread.gmane.org/gmane.linux.ide/34098431 *432+ * And bz#12176 reports that hardreset simply doesn't work on nf2.433+ * Give up on it and just don't do hardreset.434 */435static struct ata_port_operations nv_nf2_ops = {436+ .inherits = &nv_generic_ops,437 .freeze = nv_nf2_freeze,438 .thaw = nv_nf2_thaw,0439};440441/* For initial probing after boot and hot plugging, hardreset mostly
···18 */1920#include <linux/device.h>21+#include <linux/delay.h>22#include <linux/module.h>23#include <linux/kthread.h>24#include <linux/wait.h>25+#include <linux/async.h>2627#include "base.h"28#include "power/power.h"···164 atomic_read(&probe_count));165 if (atomic_read(&probe_count))166 return -EBUSY;167+ return 0;168+}169+170+/**171+ * wait_for_device_probe172+ * Wait for device probing to be completed.173+ *174+ * Note: this function polls at 100 msec intervals.175+ */176+int wait_for_device_probe(void)177+{178+ /* wait for the known devices to complete their probing */179+ while (driver_probe_done() != 0)180+ msleep(100);181+ async_synchronize_full();182 return 0;183}184
Teach the cciss driver (drivers/block/cciss.c) to recover the controller when booting with reset_devices, i.e. in a kdump/kexec kernel:

- New cciss_message() sends a message CDB to the firmware: the command (list header, request block, error descriptor) is built in a coherent DMA buffer allocated below 4 GiB (the Inbound Post Queue only accepts 32-bit physical addresses), posted through SA5_REQUEST_PORT_OFFSET, and polled for on SA5_REPLY_PORT_OFFSET for up to ten seconds. On timeout the DMA buffer is deliberately leaked, since the controller may still complete the command later. cciss_soft_reset_controller() and cciss_noop() are thin wrappers (opcodes 1 and 3).
- cciss_reset_msi() clears any MSI/MSI-X enable bits the previous kernel left set in configuration space.
- cciss_hard_reset_controller() resets the board through PCI power management: save the first 32 words of config space, place the device in D3hot, bring it back to D0 (which causes a secondary PCI reset, per the Open CISS specification), then restore config space with the command register (offset 04h) written last and the status register (offset 06h) skipped. The canned pci_save_state()/pci_set_power_state()/pci_restore_state() helpers cannot be used here because they also touch the MSI/MSI-X state and do not honour that restore ordering; the sequence is sketched below.
- cciss_init_one(): when reset_devices is set, hard-reset the controller and clear the MSI state, pause 30 seconds (the Smart Array 5i needs it), then retry a no-op message up to twelve times before continuing with normal initialisation.
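The hard reset is driven entirely through configuration space. A condensed sketch of that save/D3hot/D0/restore cycle, adapted from the patch (pm_cycle_reset is an illustrative name, and the 500 ms settling delays use msleep() here where the patch sleeps with schedule_timeout_uninterruptible(HZ >> 1)):

    #include <linux/pci.h>
    #include <linux/delay.h>

    /* Cycle a device through D3hot/D0 via its PM capability, then restore
     * config space with the command register (offset 04h) written last and
     * the status register (offset 06h) skipped, as the Open CISS spec asks. */
    static int pm_cycle_reset(struct pci_dev *pdev)
    {
            u16 saved[32], pmcsr;
            int i, pm = pci_find_capability(pdev, PCI_CAP_ID_PM);

            if (!pm)
                    return -ENODEV;

            for (i = 0; i < 32; i++)                /* save 64 bytes of config space */
                    pci_read_config_word(pdev, 2 * i, &saved[i]);

            pci_read_config_word(pdev, pm + PCI_PM_CTRL, &pmcsr);
            pmcsr &= ~PCI_PM_CTRL_STATE_MASK;
            pci_write_config_word(pdev, pm + PCI_PM_CTRL, pmcsr | PCI_D3hot);
            msleep(500);
            pci_write_config_word(pdev, pm + PCI_PM_CTRL, pmcsr | PCI_D0);
            msleep(500);                            /* D3hot -> D0 raises the secondary reset */

            for (i = 0; i < 32; i++)
                    if (i != 2 && i != 3)           /* leave 04h for last, never restore 06h */
                            pci_write_config_word(pdev, 2 * i, saved[i]);
            wmb();
            pci_write_config_word(pdev, 4, saved[2]);       /* command register last */
            return 0;
    }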
+52-27
drivers/block/floppy.c
Consolidate the floppy driver's I/O port handling into a table of regions plus floppy_request_regions()/floppy_release_regions() helpers. The table claims base+2 (1 port), base+4 (2 ports) and base+7 (1 port), deliberately skipping base+3 (sometimes reserved by the PnP BIOS for the motherboard) and base+6 (reserved, and may be taken by IDE). floppy_grab_irq_and_dma(), its cleanup path, floppy_release_irq_and_dma() and the error paths in the FDC probe loop now use these helpers instead of open-coded request_region()/release_region() pairs on base+2..5 and base+7; the request/rollback pattern is sketched below.
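The port bookkeeping reduces to a table walk with rollback on failure. A minimal sketch of that pattern with generic names (base stands in for FDCS->address; the offsets and the skipped ports are the ones from the patch):

    #include <linux/ioport.h>
    #include <linux/kernel.h>

    static const struct { int offset, size; } io_regions[] = {
            { 2, 1 },       /* base+3 can be grabbed by the PnP BIOS */
            { 4, 2 },
            { 7, 1 },       /* base+6 may belong to IDE */
    };

    /* Request every region; on failure release the ones already taken. */
    static int request_fdc_regions(unsigned long base)
    {
            int i;

            for (i = 0; i < ARRAY_SIZE(io_regions); i++) {
                    if (!request_region(base + io_regions[i].offset,
                                        io_regions[i].size, "floppy")) {
                            while (--i >= 0)
                                    release_region(base + io_regions[i].offset,
                                                   io_regions[i].size);
                            return -EBUSY;
                    }
            }
            return 0;
    }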
+1-1
drivers/block/paride/pg.c
-		if (c != ' ' || c != l)
+		if (c != ' ' && c != l)

Fix the condition in the loop that squeezes the drive's identify strings: the intent is to copy a character only if it is neither a blank nor a repeat of the previous one, but with '||' the test was nearly always true, so blanks and repeated characters were copied anyway.
+3-2
drivers/char/sx.c
Two fixes in the sx firmware/control ioctl handler:

- SXIO_DO_RAMTEST: the "already initialized" guard was missing braces, so the break was unconditional and the RAM test could never run. With the braces added, -EPERM is returned only when the board is already initialized; otherwise the test proceeds.
- While downloading firmware, a failed copy_from_user() now bails out with "goto out" rather than "break", aborting the whole transfer instead of just the current chunk.
+1-1
drivers/firmware/memmap.c
-	 * by Bernhard Walle <bwalle@suse.de>
+	 * by Bernhard Walle <bernhard.walle@gmx.de>

Update Bernhard Walle's e-mail address in the copyright notice.
+6-7
drivers/gpu/drm/Kconfig
Move the "endchoice" that closes the Intel graphics choice block so that it follows the DRM_I915_KMS entry instead of preceding it, so the DRM_I915_KMS option lives inside the Intel driver choice block rather than dangling after it.
+1-2
drivers/gpu/drm/drm_crtc.c
Change drm_fb_release() to take the struct drm_file pointer directly instead of a struct file and digging it out of filp->private_data.
+16-5
drivers/gpu/drm/drm_crtc_helper.c
Stop ignoring modesetting failures in the CRTC helpers:

- drm_crtc_helper_set_mode() now captures the return values of the crtc's mode_set_base() and mode_set() callbacks and bails out, reporting failure, instead of assuming success.
- drm_crtc_helper_set_config() logs a DRM_ERROR and unwinds when drm_crtc_helper_set_mode() fails, checks the result of mode_set_base() when only the framebuffer/offset changes, and, while restoring state on failure, skips connectors that have no encoder attached.
+3
drivers/gpu/drm/drm_fops.c
In the file-release path, call drm_fb_release(file_priv) right after drm_gem_release() when the driver has DRIVER_MODESET set.
+56-25
drivers/gpu/drm/drm_gem.c
- drm_gem_init() error path: remove the mmap-offset hash table before freeing the drm_gem_mm structure (the two operations were in the wrong order).
- drm_gem_flink_ioctl(): restructure so that every exit drops the reference taken by the handle lookup; when a new global name is allocated, take an extra reference on behalf of the name table, and retry cleanly when the idr reports -EAGAIN.
- drm_gem_object_handle_free(): clear obj->name after removing it from the name idr.
- Add drm_gem_vm_open()/drm_gem_vm_close(): opening a VMA takes a reference on the GEM object, closing drops it under struct_mutex (shown condensed below).
- drm_gem_mmap(): take a reference on the object for the new mapping so the fault handler can safely dereference the pointer stashed in the mmap offset; the matching drop happens in vm_close, whether the VMA came from this mmap() or from a later vm_open due to mremap or a partial unmap.
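The new vm_open/vm_close pair encodes a single rule: every live mapping of a GEM object holds one reference, so the fault handler can always trust vma->vm_private_data. Condensed from the patch (the usual drmP.h driver context is assumed):

    /* One GEM object reference per mapping: taken in drm_gem_mmap() and in
     * vm_open for mappings duplicated by mremap/fork, dropped in vm_close. */
    void drm_gem_vm_open(struct vm_area_struct *vma)
    {
            struct drm_gem_object *obj = vma->vm_private_data;

            drm_gem_object_reference(obj);
    }

    void drm_gem_vm_close(struct vm_area_struct *vma)
    {
            struct drm_gem_object *obj = vma->vm_private_data;
            struct drm_device *dev = obj->dev;

            mutex_lock(&dev->struct_mutex);         /* unreference needs struct_mutex */
            drm_gem_object_unreference(obj);
            mutex_unlock(&dev->struct_mutex);
    }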
drivers/gpu/drm/i915/i915_drv.h

Add lvds_use_ssc:1 and lvds_ssc_freq to the driver-private structure, next to the other LVDS/VBT-derived fields.
+75-44
drivers/gpu/drm/i915/i915_gem.c
- i915_gem_fault(): drop the special -EBUSY case and its "can't insert pfn" error message; -EFAULT alone now maps to VM_FAULT_SIGBUS.
- Factor the mmap-offset teardown into i915_gem_free_mmap_offset() (which also clears obj_priv->mmap_offset) and call it from i915_gem_free_object() instead of the open-coded version.
- mmap_gtt ioctl: if creating the mmap offset fails, drop the object reference and struct_mutex before returning the error.
- Relocation checking: on an invalid read/write domain, unreference the target object and unpin the object before returning -EINVAL.
- execbuffer: when the GPU is wedged or the device is suspended, go through the pre_mutex_err cleanup path (with -EIO / -EBUSY) instead of returning with the temporary allocations still live, and copy the updated buffer offsets back to user space only after everything has succeeded, once struct_mutex has been dropped.
- i915_gem_pin_ioctl(): drop the object reference in the "already pinned by another file" error path.
- Consolidate hardware status page teardown into i915_gem_cleanup_hws() and use it from i915_gem_cleanup_ringbuffer() and the i915_gem_init_ringbuffer() error paths, which now also unpin the status page and ring objects as appropriate.
drivers/hid/hidraw.c

In the ioctl's default case (the type/direction check and the HIDIOCGRAWNAME/HIDIOCGRAWPHYS handling), set ret and break out of the switch instead of returning directly, so the function always reaches unlock_kernel() and never exits with the big kernel lock held; a minimal sketch of the resulting shape follows.
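The fix is purely structural: under lock_kernel(), an early return leaks the BKL, so every exit from the switch has to funnel through the common unlock. A minimal sketch of that shape (example_ioctl and its stripped-down cases are illustrative, not the hidraw code itself):

    #include <linux/fs.h>
    #include <linux/ioctl.h>
    #include <linux/smp_lock.h>

    static long example_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
    {
            long ret = 0;

            lock_kernel();
            switch (cmd) {
            /* ... real commands handled here ... */
            default:
                    if (_IOC_TYPE(cmd) != 'H' || _IOC_DIR(cmd) != _IOC_READ) {
                            ret = -EINVAL;
                            break;          /* not "return": the BKL is still held */
                    }
                    ret = -ENOTTY;
            }
            unlock_kernel();
            return ret;
    }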
+2-2
drivers/hwmon/f71882fg.c
- Report "Not a Fintek device" with pr_debug() instead of an info-level printk.
- In the platform device setup, a failure from acpi_check_resource_conflict() now jumps to exit_device_put, releasing the allocated platform device, instead of returning the error directly.
+81-4
drivers/hwmon/hp_accel.c
- Add an AXIS_DMI_MATCH2() macro that matches on two DMI fields, and use it to add axis mappings for the Intel-based HP Pavilion dv5 (board 3603, x_inverted) and the AMD-based one (board 3600, y_inverted).
- Add lis3lv02d_get_resource()/lis3lv02d_enum_resources(), which walk the device's ACPI _CRS for an extended-IRQ resource and store the IRQ number in adev.irq before lis3lv02d_init_device() runs.
- Add lis3lv02d_read_16() and lis3lv02d_read_8() register accessors (both shown condensed below). lis3lv02d_add() now stores the WHO_AM_I value in adev.whoami and installs the matching accessor and full-scale value: 2-byte sensors (LIS_DOUBLE_ID) get the 16-bit reader and mdps_max_val = 2048, 1-byte sensors (LIS_SINGLE_ID) get the 8-bit reader and mdps_max_val = 128, and unknown IDs are now rejected with -EINVAL instead of merely warned about.
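The two accessors differ only in sample width; WHO_AM_I decides which one is installed. Condensed from the patch (adev.read is the existing ACPI register-read hook; the 8-bit variant is written here with an unsigned read plus a cast, to sidestep a pointer-signedness mismatch):

    /* 2-byte sensors: the sample spans two output registers; the caller passes
     * the high-byte address (OUTX/OUTY/OUTZ), so the low byte is at reg - 1. */
    static s16 lis3lv02d_read_16(acpi_handle handle, int reg)
    {
            u8 lo, hi;

            adev.read(handle, reg - 1, &lo);
            adev.read(handle, reg, &hi);
            /* "12 bit right justified": the top bits already carry the sign */
            return (s16)((hi << 8) | lo);
    }

    /* 1-byte sensors: a single signed 8-bit output register */
    static s16 lis3lv02d_read_8(acpi_handle handle, int reg)
    {
            u8 v;

            adev.read(handle, reg, &v);
            return (s8)v;
    }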
+160-35
drivers/hwmon/lis3lv02d.c
- Update the copyright years for Pavel Machek (2008-2009) and add him to MODULE_AUTHOR.
- Initialize adev statically, including its misc_wait wait queue head. Axis samples now come through the per-chip adev.read_data() hook on OUTX/OUTY/OUTZ, and the joystick's absolute ranges use adev.mdps_max_val instead of the hard-coded signed 12-bit limit. lis3lv02d_poweroff() no longer writes CTRL_REG1 and lis3lv02d_poweron() no longer programs FF_WU_CFG/CTRL_REG2 itself; both just track is_on, with poweron calling the chip's init hook.
- Add a /dev/freefall misc device driven by the sensor's free-fall interrupt: lis302dl_interrupt() counts events, wakes any sleeper and sends SIGIO; open() requests the IRQ (free-fall only, IRQF_TRIGGER_RISING, not shared) and marks the device busy; read() blocks on the wait queue and returns the number of events since the last read as a single byte capped at 255, honouring O_NONBLOCK and pending signals (the read loop is sketched below); poll() and fasync() are supported; release() frees the IRQ. The misc device is registered from lis3lv02d_init_device() when ACPI supplied an IRQ, and deregistered together with the joystick.
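The /dev/freefall read is the standard wait-queue loop: join the queue, consume any pending events, and sleep only when there is nothing to report and blocking is allowed. Stripped to its skeleton from the patch (freefall_read is an illustrative name; adev.count and adev.misc_wait are the fields this change adds, and min_t() stands in for the patch's open-coded cap at 255):

    static ssize_t freefall_read(struct file *file, char __user *buf,
                                 size_t count, loff_t *pos)
    {
            DECLARE_WAITQUEUE(wait, current);
            u32 data;
            unsigned char byte_data;
            ssize_t retval = 1;

            if (count < 1)
                    return -EINVAL;

            add_wait_queue(&adev.misc_wait, &wait);
            while (true) {
                    set_current_state(TASK_INTERRUPTIBLE);
                    data = atomic_xchg(&adev.count, 0);     /* consume pending events */
                    if (data)
                            break;
                    if (file->f_flags & O_NONBLOCK) {
                            retval = -EAGAIN;
                            goto out;
                    }
                    if (signal_pending(current)) {
                            retval = -ERESTARTSYS;
                            goto out;
                    }
                    schedule();                             /* woken by the IRQ handler */
            }

            byte_data = min_t(u32, data, 255);              /* one byte: cap the count */
            set_current_state(TASK_RUNNING);                /* don't copy_to_user() while INTERRUPTIBLE */
            if (copy_to_user(buf, &byte_data, sizeof(byte_data)))
                    retval = -EFAULT;
    out:
            __set_current_state(TASK_RUNNING);
            remove_wait_queue(&adev.misc_wait, &wait);
            return retval;
    }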
+18-3
drivers/hwmon/lis3lv02d.h
- Reword the chip-family comment and replace the LIS3LV02DL_ID/LIS302DL_ID defines with LIS_DOUBLE_ID (0x3A, 2-byte output registers) and LIS_SINGLE_ID (0x3B, 1-byte output registers); add OUTX/OUTY/OUTZ aliases for the high-byte register addresses.
- Extend struct acpi_lis3lv02d with whoami, the read_data() accessor, mdps_max_val, the IRQ number, and the misc device state (async_queue, misc_wait, misc_opened).
+1-1
drivers/hwmon/vt1211.c
-		goto EXIT;
+		goto EXIT_DEV_PUT;

On an ACPI resource conflict, release the allocated platform device instead of leaking it.
+1-1
drivers/hwmon/w83627ehf.c
-		goto exit;
+		goto exit_device_put;

Same fix as in vt1211: release the platform device when acpi_check_resource_conflict() fails. The corrected error-path shape is sketched below.
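All three hwmon fixes here (f71882fg, vt1211, w83627ehf) close the same leak: once platform_device_alloc() has succeeded, an error from the ACPI resource check has to put the device, not just return. A generic sketch of the corrected shape (example_device_add and some_setup_that_may_fail are placeholders, not the drivers' actual symbols):

    #include <linux/platform_device.h>

    /* Stand-in for the checks the drivers do here (ACPI resource conflict,
     * platform_device_add_resources(), ...). */
    static int some_setup_that_may_fail(struct platform_device *pdev)
    {
            return 0;
    }

    static int example_device_add(void)
    {
            struct platform_device *pdev;
            int err;

            pdev = platform_device_alloc("example", 0);
            if (!pdev)
                    return -ENOMEM;

            err = some_setup_that_may_fail(pdev);
            if (err)
                    goto exit_device_put;           /* drop the allocated device */

            err = platform_device_add(pdev);
            if (err)
                    goto exit_device_put;

            return 0;

    exit_device_put:
            platform_device_put(pdev);
            return err;
    }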
+1-1
drivers/md/dm-io.c
-		rw |= (1 << BIO_RW_SYNC);
+		rw |= (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG);

Follow the block layer's split of BIO_RW_SYNC into BIO_RW_SYNCIO and BIO_RW_UNPLUG.
drivers/media/dvb/dvb-core/dvb_demux.c

dvb_dmx_swfilter_packets(), dvb_dmx_swfilter() and dvb_dmx_swfilter_204() now take demux->lock with plain spin_lock()/spin_unlock() instead of the irqsave/irqrestore variants: these entry points are not called from hard interrupt context, so there is no need to disable interrupts for the whole filtering pass.
+46-9
drivers/media/radio/radio-si470x.c
- Changelog: Bob Ross corrected the stereo detection/setting and the signal strength scaling; Rick Bronson and Tobias Lorenz added the LED status output (and the corresponding ToDo item is removed).
- New si470x_set_led_state() sends an LED report to the device; the LED is set to blink green on connect and blink orange on disconnect.
- VIDIOC_G_TUNER: the stereo indicator and the mono/stereo selector were compared against 1, although STATUSRSSI_ST and POWERCFG_MONO are not bit 0, so the tests could never match; they are now tested for zero/non-zero. The reported signal level scales the RSSI (roughly 0..75 dBµV, in 1 dB steps) onto 0..0xffff using a factor of about 873.8 (0xffff/75) computed in integer arithmetic; the arithmetic is worked through below.
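The signal scaling is plain integer arithmetic: the ideal factor is 0xffff/75 = 873.8, approximated as a multiply by 873 plus eight tenths. A worked check (the helper wrapper is illustrative; the patch computes this inline on tuner->signal):

    #include <linux/types.h>

    /* Map an RSSI reading (roughly 0..75 dBuV) onto the 0..0xffff V4L2 range.
     * 0xffff / 75 = 873.8, approximated as x*873 + x*8/10. */
    static u32 rssi_to_v4l2_signal(u32 rssi)
    {
            return rssi * 873 + (8 * rssi) / 10;    /* rssi = 75 -> 65475 + 60 = 65535 */
    }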
+5
drivers/media/video/gspca/gspca.c
Small additions in destroy_urbs() and in gspca_disconnect() (around clearing gspca_dev->present, before usb_set_intfdata()), five lines in total.
···55#define VS30 (1 << 25)56#define SDVS18 (0x5 << 9)57#define SDVS30 (0x6 << 9)058#define SDVSCLR 0xFFFFF1FF59#define SDVSDET 0x0000040060#define AUTOIDLE 0x1···376}377#endif /* CONFIG_MMC_DEBUG */37800000000000000000000000000379380/*381 * MMC controller IRQ handler···430 (status & CMD_CRC)) {431 if (host->cmd) {432 if (status & CMD_TIMEOUT) {433- OMAP_HSMMC_WRITE(host->base, SYSCTL,434- OMAP_HSMMC_READ(host->base,435- SYSCTL) | SRC);436- while (OMAP_HSMMC_READ(host->base,437- SYSCTL) & SRC)438- ;439-440 host->cmd->error = -ETIMEDOUT;441 } else {442 host->cmd->error = -EILSEQ;443 }444 end_cmd = 1;445 }446- if (host->data)447 mmc_dma_cleanup(host);00448 }449 if ((status & DATA_TIMEOUT) ||450 (status & DATA_CRC)) {···449 mmc_dma_cleanup(host);450 else451 host->data->error = -EILSEQ;452- OMAP_HSMMC_WRITE(host->base, SYSCTL,453- OMAP_HSMMC_READ(host->base,454- SYSCTL) | SRD);455- while (OMAP_HSMMC_READ(host->base,456- SYSCTL) & SRD)457- ;458 end_trans = 1;459 }460 }···474}475476/*477- * Switch MMC operating voltage0000478 */479static int omap_mmc_switch_opcond(struct mmc_omap_host *host, int vdd)480{481 u32 reg_val = 0;482 int ret;000483484 /* Disable the clocks */485 clk_disable(host->fclk);···510 OMAP_HSMMC_WRITE(host->base, HCTL,511 OMAP_HSMMC_READ(host->base, HCTL) & SDVSCLR);512 reg_val = OMAP_HSMMC_READ(host->base, HCTL);0513 /*514 * If a MMC dual voltage card is detected, the set_ios fn calls515 * this fn with VDD bit set for 1.8V. Upon card removal from the516 * slot, omap_mmc_set_ios sets the VDD back to 3V on MMC_POWER_OFF.517 *518- * Only MMC1 supports 3.0V. MMC2 will not function if SDVS30 is519- * set in HCTL.0000000520 */521- if (host->id == OMAP_MMC1_DEVID && (((1 << vdd) == MMC_VDD_32_33) ||522- ((1 << vdd) == MMC_VDD_33_34)))523- reg_val |= SDVS30;524- if ((1 << vdd) == MMC_VDD_165_195)525 reg_val |= SDVS18;00526527 OMAP_HSMMC_WRITE(host->base, HCTL, reg_val);528···549{550 struct mmc_omap_host *host = container_of(work, struct mmc_omap_host,551 mmc_carddetect_work);000552553 sysfs_notify(&host->mmc->class_dev.kobj, NULL, "cover_switch");554 if (host->carddetect) {555 mmc_detect_change(host->mmc, (HZ * 200) / 1000);556 } else {557- OMAP_HSMMC_WRITE(host->base, SYSCTL,558- OMAP_HSMMC_READ(host->base, SYSCTL) | SRD);559- while (OMAP_HSMMC_READ(host->base, SYSCTL) & SRD)560- ;561-562 mmc_detect_change(host->mmc, (HZ * 50) / 1000);563 }564}···569{570 struct mmc_omap_host *host = (struct mmc_omap_host *)dev_id;571572- host->carddetect = mmc_slot(host).card_detect(irq);573 schedule_work(&host->mmc_carddetect_work);574575 return IRQ_HANDLED;···787 case MMC_POWER_OFF:788 mmc_slot(host).set_power(host->dev, host->slot_id, 0, 0);789 /*790- * Reset bus voltage to 3V if it got set to 1.8V earlier.00791 * REVISIT: If we are able to detect cards after unplugging792 * a 1.8V card, this code should not be needed.793 */00794 if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) {795 int vdd = fls(host->mmc->ocr_avail) - 1;796 if (omap_mmc_switch_opcond(host, vdd) != 0)···818 }819820 if (host->id == OMAP_MMC1_DEVID) {821- /* Only MMC1 can operate at 3V/1.8V */00822 if ((OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET) &&823 (ios->vdd == DUAL_VOLT_OCR_BIT)) {824 /*···1173 " level suspend\n");1174 }11751176- if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) {001177 OMAP_HSMMC_WRITE(host->base, HCTL,1178 OMAP_HSMMC_READ(host->base, HCTL)1179 & SDVSCLR);
···55#define VS30 (1 << 25)56#define SDVS18 (0x5 << 9)57#define SDVS30 (0x6 << 9)58+#define SDVS33 (0x7 << 9)59#define SDVSCLR 0xFFFFF1FF60#define SDVSDET 0x0000040061#define AUTOIDLE 0x1···375}376#endif /* CONFIG_MMC_DEBUG */377378+/*379+ * MMC controller internal state machines reset380+ *381+ * Used to reset command or data internal state machines, using respectively382+ * SRC or SRD bit of SYSCTL register383+ * Can be called from interrupt context384+ */385+static inline void mmc_omap_reset_controller_fsm(struct mmc_omap_host *host,386+ unsigned long bit)387+{388+ unsigned long i = 0;389+ unsigned long limit = (loops_per_jiffy *390+ msecs_to_jiffies(MMC_TIMEOUT_MS));391+392+ OMAP_HSMMC_WRITE(host->base, SYSCTL,393+ OMAP_HSMMC_READ(host->base, SYSCTL) | bit);394+395+ while ((OMAP_HSMMC_READ(host->base, SYSCTL) & bit) &&396+ (i++ < limit))397+ cpu_relax();398+399+ if (OMAP_HSMMC_READ(host->base, SYSCTL) & bit)400+ dev_err(mmc_dev(host->mmc),401+ "Timeout waiting on controller reset in %s\n",402+ __func__);403+}404405/*406 * MMC controller IRQ handler···403 (status & CMD_CRC)) {404 if (host->cmd) {405 if (status & CMD_TIMEOUT) {406+ mmc_omap_reset_controller_fsm(host, SRC);000000407 host->cmd->error = -ETIMEDOUT;408 } else {409 host->cmd->error = -EILSEQ;410 }411 end_cmd = 1;412 }413+ if (host->data) {414 mmc_dma_cleanup(host);415+ mmc_omap_reset_controller_fsm(host, SRD);416+ }417 }418 if ((status & DATA_TIMEOUT) ||419 (status & DATA_CRC)) {···426 mmc_dma_cleanup(host);427 else428 host->data->error = -EILSEQ;429+ mmc_omap_reset_controller_fsm(host, SRD);00000430 end_trans = 1;431 }432 }···456}457458/*459+ * Switch MMC interface voltage ... only relevant for MMC1.460+ *461+ * MMC2 and MMC3 use fixed 1.8V levels, and maybe a transceiver.462+ * The MMC2 transceiver controls are used instead of DAT4..DAT7.463+ * Some chips, like eMMC ones, use internal transceivers.464 */465static int omap_mmc_switch_opcond(struct mmc_omap_host *host, int vdd)466{467 u32 reg_val = 0;468 int ret;469+470+ if (host->id != OMAP_MMC1_DEVID)471+ return 0;472473 /* Disable the clocks */474 clk_disable(host->fclk);···485 OMAP_HSMMC_WRITE(host->base, HCTL,486 OMAP_HSMMC_READ(host->base, HCTL) & SDVSCLR);487 reg_val = OMAP_HSMMC_READ(host->base, HCTL);488+489 /*490 * If a MMC dual voltage card is detected, the set_ios fn calls491 * this fn with VDD bit set for 1.8V. Upon card removal from the492 * slot, omap_mmc_set_ios sets the VDD back to 3V on MMC_POWER_OFF.493 *494+ * Cope with a bit of slop in the range ... 
per data sheets:495+ * - "1.8V" for vdds_mmc1/vdds_mmc1a can be up to 2.45V max,496+ * but recommended values are 1.71V to 1.89V497+ * - "3.0V" for vdds_mmc1/vdds_mmc1a can be up to 3.5V max,498+ * but recommended values are 2.7V to 3.3V499+ *500+ * Board setup code shouldn't permit anything very out-of-range.501+ * TWL4030-family VMMC1 and VSIM regulators are fine (avoiding the502+ * middle range) but VSIM can't power DAT4..DAT7 at more than 3V.503 */504+ if ((1 << vdd) <= MMC_VDD_23_24)000505 reg_val |= SDVS18;506+ else507+ reg_val |= SDVS30;508509 OMAP_HSMMC_WRITE(host->base, HCTL, reg_val);510···517{518 struct mmc_omap_host *host = container_of(work, struct mmc_omap_host,519 mmc_carddetect_work);520+ struct omap_mmc_slot_data *slot = &mmc_slot(host);521+522+ host->carddetect = slot->card_detect(slot->card_detect_irq);523524 sysfs_notify(&host->mmc->class_dev.kobj, NULL, "cover_switch");525 if (host->carddetect) {526 mmc_detect_change(host->mmc, (HZ * 200) / 1000);527 } else {528+ mmc_omap_reset_controller_fsm(host, SRD);0000529 mmc_detect_change(host->mmc, (HZ * 50) / 1000);530 }531}···538{539 struct mmc_omap_host *host = (struct mmc_omap_host *)dev_id;5400541 schedule_work(&host->mmc_carddetect_work);542543 return IRQ_HANDLED;···757 case MMC_POWER_OFF:758 mmc_slot(host).set_power(host->dev, host->slot_id, 0, 0);759 /*760+ * Reset interface voltage to 3V if it's 1.8V now;761+ * only relevant on MMC-1, the others always use 1.8V.762+ *763 * REVISIT: If we are able to detect cards after unplugging764 * a 1.8V card, this code should not be needed.765 */766+ if (host->id != OMAP_MMC1_DEVID)767+ break;768 if (!(OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET)) {769 int vdd = fls(host->mmc->ocr_avail) - 1;770 if (omap_mmc_switch_opcond(host, vdd) != 0)···784 }785786 if (host->id == OMAP_MMC1_DEVID) {787+ /* Only MMC1 can interface at 3V without some flavor788+ * of external transceiver; but they all handle 1.8V.789+ */790 if ((OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET) &&791 (ios->vdd == DUAL_VOLT_OCR_BIT)) {792 /*···1137 " level suspend\n");1138 }11391140+ if (host->id == OMAP_MMC1_DEVID1141+ && !(OMAP_HSMMC_READ(host->base, HCTL)1142+ & SDVSDET)) {1143 OMAP_HSMMC_WRITE(host->base, HCTL,1144 OMAP_HSMMC_READ(host->base, HCTL)1145 & SDVSCLR);
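The mmc_omap_reset_controller_fsm() helper added above replaces the open-ended busy-waits on SRC/SRD with a bounded poll that gives up and reports an error after MMC_TIMEOUT_MS. A rough user-space analogue of that bounded-poll pattern, assuming a stand-in read_status_bit() instead of a real register read:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Fake "hardware": pretend the self-clearing reset bit drops after a few polls. */
static bool read_status_bit(void)
{
	static int polls;
	return ++polls < 5;
}

/* Poll until the bit clears or a wall-clock deadline expires. */
static int wait_for_bit_clear(long timeout_ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	while (read_status_bit()) {
		clock_gettime(CLOCK_MONOTONIC, &now);
		if ((now.tv_sec - start.tv_sec) * 1000 +
		    (now.tv_nsec - start.tv_nsec) / 1000000 > timeout_ms) {
			fprintf(stderr, "timeout waiting for reset to complete\n");
			return -1;	/* caller decides how to recover */
		}
	}
	return 0;
}

int main(void)
{
	printf("wait_for_bit_clear: %d\n", wait_for_bit_clear(100));
	return 0;
}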
+1-1
drivers/mmc/host/s3cmci.c
···
 
 	to_ptr = host->base + host->sdidata;
 
-	while ((fifo = fifo_free(host))) {
+	while ((fifo = fifo_free(host)) > 3) {
 		if (!host->pio_bytes) {
 			res = get_data_buffer(host, &host->pio_bytes,
 					      &host->pio_ptr);
···208#define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12)209/* Controller has an issue with buffer bits for small transfers */210#define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13)211-/* Controller supports high speed but doesn't have the caps bit set */212-#define SDHCI_QUIRK_FORCE_HIGHSPEED (1<<14)213214 int irq; /* Device IRQ */215 void __iomem * ioaddr; /* Mapped address */···220221#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)222 struct led_classdev led; /* LED control */0223#endif224225 spinlock_t lock; /* Mutex */
···208#define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12)209/* Controller has an issue with buffer bits for small transfers */210#define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13)00211212 int irq; /* Device IRQ */213 void __iomem * ioaddr; /* Mapped address */···222223#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)224 struct led_classdev led; /* LED control */225+ char led_name[32];226#endif227228 spinlock_t lock; /* Mutex */
···61/* global iommu list, set NULL for ignored DMAR units */62static struct intel_iommu **g_iommus;630064/*65 * 0: Present66 * 1-11: Reserved···787 u32 val;788 unsigned long flag;789790- if (!cap_rwbf(iommu->cap))791 return;792 val = iommu->gcmd | DMA_GCMD_WBF;793···3139 .unmap = intel_iommu_unmap_range,3140 .iova_to_phys = intel_iommu_iova_to_phys,3141};000000000000
···61/* global iommu list, set NULL for ignored DMAR units */62static struct intel_iommu **g_iommus;6364+static int rwbf_quirk;65+66/*67 * 0: Present68 * 1-11: Reserved···785 u32 val;786 unsigned long flag;787788+ if (!rwbf_quirk && !cap_rwbf(iommu->cap))789 return;790 val = iommu->gcmd | DMA_GCMD_WBF;791···3137 .unmap = intel_iommu_unmap_range,3138 .iova_to_phys = intel_iommu_iova_to_phys,3139};3140+3141+static void __devinit quirk_iommu_rwbf(struct pci_dev *dev)3142+{3143+ /*3144+ * Mobile 4 Series Chipset neglects to set RWBF capability,3145+ * but needs it:3146+ */3147+ printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n");3148+ rwbf_quirk = 1;3149+}3150+3151+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
+4-6
drivers/pci/msi.c
···
 	}
 }
 
-/*
- * Essentially, this is ((1 << (1 << x)) - 1), but without the
- * undefinedness of a << 32.
- */
 static inline __attribute_const__ u32 msi_mask(unsigned x)
 {
-	static const u32 mask[] = { 1, 2, 4, 0xf, 0xff, 0xffff, 0xffffffff };
-	return mask[x];
+	/* Don't shift by >= width of type */
+	if (x >= 5)
+		return 0xffffffff;
+	return (1 << (1 << x)) - 1;
 }
 
 static void msix_flush_writes(struct irq_desc *desc)
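The rewritten msi_mask() keeps the closed form (1 << (1 << x)) - 1 but guards the x >= 5 case explicitly, since shifting a 32-bit value by 32 or more is undefined behaviour in C. A standalone sketch of the same helper with a few spot checks (the test values are chosen here only for illustration):

#include <assert.h>
#include <stdint.h>

/* Same guarded computation as the hunk above. */
static uint32_t msi_mask(unsigned int x)
{
	if (x >= 5)		/* don't shift by >= width of type */
		return 0xffffffff;
	return (1u << (1u << x)) - 1;
}

int main(void)
{
	assert(msi_mask(0) == 0x1);
	assert(msi_mask(3) == 0xff);
	assert(msi_mask(4) == 0xffff);
	assert(msi_mask(5) == 0xffffffff);
	return 0;
}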
+9-4
drivers/pci/pci.c
···1540}15411542/**1543- * pci_request_region - Reserved PCI I/O and memory resource1544 * @pdev: PCI device whose resources are to be reserved1545 * @bar: BAR to be reserved1546 * @res_name: Name to be associated with resource.01547 *1548 * Mark the PCI region associated with PCI device @pdev BR @bar as1549 * being reserved by owner @res_name. Do not access any1550 * address inside the PCI regions unless this call returns1551 * successfully.00001552 *1553 * Returns 0 on success, or %EBUSY on error. A warning1554 * message is also printed on failure.···1593}15941595/**1596- * pci_request_region - Reserved PCI I/O and memory resource1597 * @pdev: PCI device whose resources are to be reserved1598 * @bar: BAR to be reserved1599- * @res_name: Name to be associated with resource.1600 *1601- * Mark the PCI region associated with PCI device @pdev BR @bar as1602 * being reserved by owner @res_name. Do not access any1603 * address inside the PCI regions unless this call returns1604 * successfully.
···1540}15411542/**1543+ * __pci_request_region - Reserved PCI I/O and memory resource1544 * @pdev: PCI device whose resources are to be reserved1545 * @bar: BAR to be reserved1546 * @res_name: Name to be associated with resource.1547+ * @exclusive: whether the region access is exclusive or not1548 *1549 * Mark the PCI region associated with PCI device @pdev BR @bar as1550 * being reserved by owner @res_name. Do not access any1551 * address inside the PCI regions unless this call returns1552 * successfully.1553+ *1554+ * If @exclusive is set, then the region is marked so that userspace1555+ * is explicitly not allowed to map the resource via /dev/mem or1556+ * sysfs MMIO access.1557 *1558 * Returns 0 on success, or %EBUSY on error. A warning1559 * message is also printed on failure.···1588}15891590/**1591+ * pci_request_region - Reserve PCI I/O and memory resource1592 * @pdev: PCI device whose resources are to be reserved1593 * @bar: BAR to be reserved1594+ * @res_name: Name to be associated with resource1595 *1596+ * Mark the PCI region associated with PCI device @pdev BAR @bar as1597 * being reserved by owner @res_name. Do not access any1598 * address inside the PCI regions unless this call returns1599 * successfully.
+10-10
drivers/pci/pci.h
···16#endif1718/**19- * Firmware PM callbacks20 *21- * @is_manageable - returns 'true' if given device is power manageable by the22- * platform firmware23 *24- * @set_state - invokes the platform firmware to set the device's power state25 *26- * @choose_state - returns PCI power state of given device preferred by the27- * platform; to be used during system-wide transitions from a28- * sleeping state to the working state and vice versa29 *30- * @can_wakeup - returns 'true' if given device is capable of waking up the31- * system from a sleeping state32 *33- * @sleep_wake - enables/disables the system wake up capability of given device34 *35 * If given platform is generally capable of power managing PCI devices, all of36 * these callbacks are mandatory.
···16#endif1718/**19+ * struct pci_platform_pm_ops - Firmware PM callbacks20 *21+ * @is_manageable: returns 'true' if given device is power manageable by the22+ * platform firmware23 *24+ * @set_state: invokes the platform firmware to set the device's power state25 *26+ * @choose_state: returns PCI power state of given device preferred by the27+ * platform; to be used during system-wide transitions from a28+ * sleeping state to the working state and vice versa29 *30+ * @can_wakeup: returns 'true' if given device is capable of waking up the31+ * system from a sleeping state32 *33+ * @sleep_wake: enables/disables the system wake up capability of given device34 *35 * If given platform is generally capable of power managing PCI devices, all of36 * these callbacks are mandatory.
+1
drivers/pci/rom.c
···
 
 /**
  * pci_get_rom_size - obtain the actual size of the ROM image
+ * @pdev: target PCI device
  * @rom: kernel virtual pointer to image of ROM
  * @size: size of PCI window
  * return: size of actual ROM image
+2
drivers/platform/x86/Kconfig
···62 depends on EXPERIMENTAL63 depends on BACKLIGHT_CLASS_DEVICE64 depends on RFKILL065 default n66 ---help---67 This driver adds support for rfkill and backlight control to Dell···302config EEEPC_LAPTOP303 tristate "Eee PC Hotkey Driver (EXPERIMENTAL)"304 depends on ACPI0305 depends on EXPERIMENTAL306 select BACKLIGHT_CLASS_DEVICE307 select HWMON
···62 depends on EXPERIMENTAL63 depends on BACKLIGHT_CLASS_DEVICE64 depends on RFKILL65+ depends on POWER_SUPPLY66 default n67 ---help---68 This driver adds support for rfkill and backlight control to Dell···301config EEEPC_LAPTOP302 tristate "Eee PC Hotkey Driver (EXPERIMENTAL)"303 depends on ACPI304+ depends on INPUT305 depends on EXPERIMENTAL306 select BACKLIGHT_CLASS_DEVICE307 select HWMON
+18-7
drivers/platform/x86/fujitsu-laptop.c
···166 struct platform_device *pf_device;167 struct kfifo *fifo;168 spinlock_t fifo_lock;0169 int rfkill_state;170 int logolamp_registered;171 int kblamps_registered;···527show_lid_state(struct device *dev,528 struct device_attribute *attr, char *buf)529{530- if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD)531 return sprintf(buf, "unknown\n");532 if (fujitsu_hotkey->rfkill_state & 0x100)533 return sprintf(buf, "open\n");···539show_dock_state(struct device *dev,540 struct device_attribute *attr, char *buf)541{542- if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD)543 return sprintf(buf, "unknown\n");544 if (fujitsu_hotkey->rfkill_state & 0x200)545 return sprintf(buf, "docked\n");···551show_radios_state(struct device *dev,552 struct device_attribute *attr, char *buf)553{554- if (fujitsu_hotkey->rfkill_state == UNSUPPORTED_CMD)555 return sprintf(buf, "unknown\n");556 if (fujitsu_hotkey->rfkill_state & 0x20)557 return sprintf(buf, "on\n");···929 ; /* No action, result is discarded */930 vdbg_printk(FUJLAPTOP_DBG_INFO, "Discarded %i ringbuffer entries\n", i);931932- fujitsu_hotkey->rfkill_state =933- call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0);000000000934935 /* Suspect this is a keymap of the application panel, print it */936 printk(KERN_INFO "fujitsu-laptop: BTNI: [0x%x]\n",···10151016 input = fujitsu_hotkey->input;10171018- fujitsu_hotkey->rfkill_state =1019- call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0);010201021 switch (event) {1022 case ACPI_FUJITSU_NOTIFY_CODE1:
···166 struct platform_device *pf_device;167 struct kfifo *fifo;168 spinlock_t fifo_lock;169+ int rfkill_supported;170 int rfkill_state;171 int logolamp_registered;172 int kblamps_registered;···526show_lid_state(struct device *dev,527 struct device_attribute *attr, char *buf)528{529+ if (!(fujitsu_hotkey->rfkill_supported & 0x100))530 return sprintf(buf, "unknown\n");531 if (fujitsu_hotkey->rfkill_state & 0x100)532 return sprintf(buf, "open\n");···538show_dock_state(struct device *dev,539 struct device_attribute *attr, char *buf)540{541+ if (!(fujitsu_hotkey->rfkill_supported & 0x200))542 return sprintf(buf, "unknown\n");543 if (fujitsu_hotkey->rfkill_state & 0x200)544 return sprintf(buf, "docked\n");···550show_radios_state(struct device *dev,551 struct device_attribute *attr, char *buf)552{553+ if (!(fujitsu_hotkey->rfkill_supported & 0x20))554 return sprintf(buf, "unknown\n");555 if (fujitsu_hotkey->rfkill_state & 0x20)556 return sprintf(buf, "on\n");···928 ; /* No action, result is discarded */929 vdbg_printk(FUJLAPTOP_DBG_INFO, "Discarded %i ringbuffer entries\n", i);930931+ fujitsu_hotkey->rfkill_supported =932+ call_fext_func(FUNC_RFKILL, 0x0, 0x0, 0x0);933+934+ /* Make sure our bitmask of supported functions is cleared if the935+ RFKILL function block is not implemented, like on the S7020. */936+ if (fujitsu_hotkey->rfkill_supported == UNSUPPORTED_CMD)937+ fujitsu_hotkey->rfkill_supported = 0;938+939+ if (fujitsu_hotkey->rfkill_supported)940+ fujitsu_hotkey->rfkill_state =941+ call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0);942943 /* Suspect this is a keymap of the application panel, print it */944 printk(KERN_INFO "fujitsu-laptop: BTNI: [0x%x]\n",···10051006 input = fujitsu_hotkey->input;10071008+ if (fujitsu_hotkey->rfkill_supported)1009+ fujitsu_hotkey->rfkill_state =1010+ call_fext_func(FUNC_RFKILL, 0x4, 0x0, 0x0);10111012 switch (event) {1013 case ACPI_FUJITSU_NOTIFY_CODE1:
···
 	case BLKTRACESETUP:
 		return blk_trace_setup(sdp->device->request_queue,
 				       sdp->disk->disk_name,
-				       sdp->device->sdev_gendev.devt,
+				       MKDEV(SCSI_GENERIC_MAJOR, sdp->index),
 				       (char *)arg);
 	case BLKTRACESTART:
 		return blk_trace_startstop(sdp->device->request_queue, 1);
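The fix above hands blktrace the dev_t of the sg character device itself, composed from SCSI_GENERIC_MAJOR and the sg index, rather than the underlying disk's devt. The same major/minor composition can be illustrated from user space with makedev(); the numbers below are made up for the example:

#include <stdio.h>
#include <sys/sysmacros.h>	/* makedev(), major(), minor() */

int main(void)
{
	/* Illustrative values: char major 21 is the scsi generic major, minor 3 ~ "sg3". */
	dev_t dev = makedev(21, 3);

	printf("dev_t %llu -> major %u, minor %u\n",
	       (unsigned long long)dev, major(dev), minor(dev));
	return 0;
}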
+15
drivers/serial/8250.c
···20832084 serial8250_set_mctrl(&up->port, up->port.mctrl);2085000000000000002086 /*2087 * Do a quick test to see if we receive an2088 * interrupt when we enable the TX irq.···2116 up->bugs &= ~UART_BUG_TXEN;2117 }211802119 spin_unlock_irqrestore(&up->port.lock, flags);21202121 /*
···20832084 serial8250_set_mctrl(&up->port, up->port.mctrl);20852086+ /* Serial over Lan (SoL) hack:2087+ Intel 8257x Gigabit ethernet chips have a2088+ 16550 emulation, to be used for Serial Over Lan.2089+ Those chips take a longer time than a normal2090+ serial device to signalize that a transmission2091+ data was queued. Due to that, the above test generally2092+ fails. One solution would be to delay the reading of2093+ iir. However, this is not reliable, since the timeout2094+ is variable. So, let's just don't test if we receive2095+ TX irq. This way, we'll never enable UART_BUG_TXEN.2096+ */2097+ if (up->port.flags & UPF_NO_TXEN_TEST)2098+ goto dont_test_tx_en;2099+2100 /*2101 * Do a quick test to see if we receive an2102 * interrupt when we enable the TX irq.···2102 up->bugs &= ~UART_BUG_TXEN;2103 }21042105+dont_test_tx_en:2106 spin_unlock_irqrestore(&up->port.lock, flags);21072108 /*
+36
drivers/serial/8250_pci.c
···798 return setup_port(priv, port, bar, offset, board->reg_shift);799}800000000000000000801/* This should be in linux/pci_ids.h */802#define PCI_VENDOR_ID_SBSMODULARIO 0x124B803#define PCI_SUBVENDOR_ID_SBSMODULARIO 0x124B···878 .subdevice = PCI_ANY_ID,879 .init = pci_inteli960ni_init,880 .setup = pci_default_setup,000000000000000000000881 },882 /*883 * ITE
···
 		}
 	}
 
+	/* Save current CSR for comparison in atmel_tasklet_func() */
+	atmel_port->irq_status_prev = UART_GET_CSR(port);
+	atmel_port->irq_status = atmel_port->irq_status_prev;
+
 	/*
 	 * Finally, enable the serial port
 	 */
···
 
 static inline int getmiso(const struct spi_device *spi)
 {
-	return gpio_get_value(SPI_MISO_GPIO);
+	return !!gpio_get_value(SPI_MISO_GPIO);
 }
 
 #undef pdata
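gpio_get_value() is only guaranteed to return zero or non-zero, not 0 or 1, so the !! above normalizes the MISO level before callers shift it into the word being assembled. A tiny illustration with a made-up raw port value:

#include <assert.h>

int main(void)
{
	int raw = 0x40;			/* e.g. the raw port bit, not 0/1 */
	int normalized = !!raw;		/* collapses any non-zero value to 1 */

	/* Building a bitstream: only the normalized form lands in bit 3. */
	assert((normalized << 3) == 0x08);
	assert((raw << 3) != 0x08);	/* 0x200 would corrupt the assembled word */
	return 0;
}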
+2-13
drivers/usb/core/hcd-pci.c
···298EXPORT_SYMBOL_GPL(usb_hcd_pci_suspend);299300/**301- * usb_hcd_pci_resume_early - resume a PCI-based HCD before IRQs are enabled302- * @dev: USB Host Controller being resumed303- *304- * Store this function in the HCD's struct pci_driver as .resume_early.305- */306-int usb_hcd_pci_resume_early(struct pci_dev *dev)307-{308- pci_restore_state(dev);309- return 0;310-}311-EXPORT_SYMBOL_GPL(usb_hcd_pci_resume_early);312-313-/**314 * usb_hcd_pci_resume - power management resume of a PCI-based HCD315 * @dev: USB Host Controller being resumed316 *···319 of_node, 0, 1);320 }321#endif00322323 hcd = pci_get_drvdata(dev);324 if (hcd->state != HC_STATE_SUSPENDED) {
···298EXPORT_SYMBOL_GPL(usb_hcd_pci_suspend);299300/**0000000000000301 * usb_hcd_pci_resume - power management resume of a PCI-based HCD302 * @dev: USB Host Controller being resumed303 *···332 of_node, 0, 1);333 }334#endif335+336+ pci_restore_state(dev);337338 hcd = pci_get_drvdata(dev);339 if (hcd->state != HC_STATE_SUSPENDED) {
-1
drivers/usb/core/hcd.h
···
 
 #ifdef CONFIG_PM
 extern int usb_hcd_pci_suspend(struct pci_dev *dev, pm_message_t msg);
-extern int usb_hcd_pci_resume_early(struct pci_dev *dev);
 extern int usb_hcd_pci_resume(struct pci_dev *dev);
 #endif /* CONFIG_PM */
 
+2-2
drivers/usb/gadget/pxa25x_udc.c
···
 
 	/* most IN status is the same, but ISO can't stall */
 	*ep->reg_udccs = UDCCS_BI_TPC|UDCCS_BI_FTF|UDCCS_BI_TUR
-		| (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC)
-		? 0 : UDCCS_BI_SST;
+		| (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC
+			? 0 : UDCCS_BI_SST);
 }
 
 
···
 	 * Now that the ASL is updated, complete the removal of any
 	 * removed qsets.
 	 */
-	spin_lock(&whc->lock);
+	spin_lock_irq(&whc->lock);
 
 	list_for_each_entry_safe(qset, t, &whc->async_removed_list, list_node) {
 		qset_remove_complete(whc, qset);
 	}
 
-	spin_unlock(&whc->lock);
+	spin_unlock_irq(&whc->lock);
 }
 
 /**
+2-2
drivers/usb/host/whci/pzl.c
···
 	 * Now that the PZL is updated, complete the removal of any
 	 * removed qsets.
 	 */
-	spin_lock(&whc->lock);
+	spin_lock_irq(&whc->lock);
 
 	list_for_each_entry_safe(qset, t, &whc->periodic_removed_list, list_node) {
 		qset_remove_complete(whc, qset);
 	}
 
-	spin_unlock(&whc->lock);
+	spin_unlock_irq(&whc->lock);
 }
 
 /**
···10541055config FB_I8101056 tristate "Intel 810/815 support (EXPERIMENTAL)"1057+ depends on EXPERIMENTAL && FB && PCI && X86_32 && AGP_INTEL0001058 select FB_MODE_HELPERS1059 select FB_CFB_FILLRECT1060 select FB_CFB_COPYAREA···11201121config FB_INTEL1122 tristate "Intel 830M/845G/852GM/855GM/865G/915G/945G/945GM/965G/965GM support (EXPERIMENTAL)"1123+ depends on EXPERIMENTAL && FB && PCI && X86 && AGP_INTEL0001124 select FB_MODE_HELPERS1125 select FB_CFB_FILLRECT1126 select FB_CFB_COPYAREA
-1
drivers/video/aty/aty128fb.c
···
 static void aty128_set_suspend(struct aty128fb_par *par, int suspend)
 {
 	u32 pmgt;
-	u16 pwr_command;
 	struct pci_dev *pdev = par->pdev;
 
 	if (!par->pm_reg)
+1-1
drivers/watchdog/Kconfig
···
 	---help---
 	  Hardware driver for the intel TCO timer based watchdog devices.
 	  These drivers are included in the Intel 82801 I/O Controller
-	  Hub family (from ICH0 up to ICH8) and in the Intel 6300ESB
+	  Hub family (from ICH0 up to ICH10) and in the Intel 63xxESB
 	  controller hub.
 
 	  The TCO (Total Cost of Ownership) timer is a watchdog timer
+2-2
drivers/watchdog/at91rm9200_wdt.c
···
 static int at91_wdt_settimeout(int new_time)
 {
 	/*
-	 * All counting occurs at SLOW_CLOCK / 128 = 0.256 Hz
+	 * All counting occurs at SLOW_CLOCK / 128 = 256 Hz
 	 *
 	 * Since WDV is a 16-bit counter, the maximum period is
-	 * 65536 / 0.256 = 256 seconds.
+	 * 65536 / 256 = 256 seconds.
 	 */
 	if ((new_time <= 0) || (new_time > WDT_MAX_TIME))
 		return -EINVAL;
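With the comment corrected above, the arithmetic reads: the watchdog ticks at SLOW_CLOCK / 128 = 32768 / 128 = 256 Hz, so the 16-bit WDV field caps the period at 65536 / 256 = 256 seconds. A quick sketch of that conversion (the helper is made up for illustration; the actual register write is not part of this hunk):

#include <assert.h>

#define WDT_HZ		256	/* 32.768 kHz slow clock / 128 */
#define WDT_MAX_TIME	256	/* 65536 / 256 seconds */

/* Illustrative helper: seconds -> counter ticks for the 16-bit WDV field. */
static unsigned int wdt_seconds_to_ticks(unsigned int seconds)
{
	return seconds * WDT_HZ;
}

int main(void)
{
	assert(wdt_seconds_to_ticks(WDT_MAX_TIME) == 65536);
	assert(wdt_seconds_to_ticks(10) == 2560);
	return 0;
}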
···1/*2 * intel TCO vendor specific watchdog driver support3 *4- * (c) Copyright 2006-2008 Wim Van Sebroeck <wim@iguana.be>.5 *6 * This program is free software; you can redistribute it and/or7 * modify it under the terms of the GNU General Public License···1920/* Module and version information */21#define DRV_NAME "iTCO_vendor_support"22-#define DRV_VERSION "1.02"23#define PFX DRV_NAME ": "2425/* Includes */···76 * time is about 40 seconds, and the minimum hang time is about77 * 20.6 seconds.78 */000000000000000000007980static void supermicro_old_pre_keepalive(unsigned long acpibase)81{···248void iTCO_vendor_pre_start(unsigned long acpibase,249 unsigned int heartbeat)250{251- if (vendorsupport == SUPERMICRO_NEW_BOARD)00252 supermicro_new_pre_start(heartbeat);253}254EXPORT_SYMBOL(iTCO_vendor_pre_start);255256void iTCO_vendor_pre_stop(unsigned long acpibase)257{258- if (vendorsupport == SUPERMICRO_NEW_BOARD)00259 supermicro_new_pre_stop();260}261EXPORT_SYMBOL(iTCO_vendor_pre_stop);
···1/*2 * intel TCO vendor specific watchdog driver support3 *4+ * (c) Copyright 2006-2009 Wim Van Sebroeck <wim@iguana.be>.5 *6 * This program is free software; you can redistribute it and/or7 * modify it under the terms of the GNU General Public License···1920/* Module and version information */21#define DRV_NAME "iTCO_vendor_support"22+#define DRV_VERSION "1.03"23#define PFX DRV_NAME ": "2425/* Includes */···76 * time is about 40 seconds, and the minimum hang time is about77 * 20.6 seconds.78 */79+80+static void supermicro_old_pre_start(unsigned long acpibase)81+{82+ unsigned long val32;83+84+ /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */85+ val32 = inl(SMI_EN);86+ val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */87+ outl(val32, SMI_EN); /* Needed to activate watchdog */88+}89+90+static void supermicro_old_pre_stop(unsigned long acpibase)91+{92+ unsigned long val32;93+94+ /* Bit 13: TCO_EN -> 1 = Enables the TCO logic to generate SMI# */95+ val32 = inl(SMI_EN);96+ val32 |= 0x00002000; /* Turn on SMI clearing watchdog */97+ outl(val32, SMI_EN); /* Needed to deactivate watchdog */98+}99100static void supermicro_old_pre_keepalive(unsigned long acpibase)101{···228void iTCO_vendor_pre_start(unsigned long acpibase,229 unsigned int heartbeat)230{231+ if (vendorsupport == SUPERMICRO_OLD_BOARD)232+ supermicro_old_pre_start(acpibase);233+ else if (vendorsupport == SUPERMICRO_NEW_BOARD)234 supermicro_new_pre_start(heartbeat);235}236EXPORT_SYMBOL(iTCO_vendor_pre_start);237238void iTCO_vendor_pre_stop(unsigned long acpibase)239{240+ if (vendorsupport == SUPERMICRO_OLD_BOARD)241+ supermicro_old_pre_stop(acpibase);242+ else if (vendorsupport == SUPERMICRO_NEW_BOARD)243 supermicro_new_pre_stop();244}245EXPORT_SYMBOL(iTCO_vendor_pre_stop);
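Both helpers added above toggle bit 13 (TCO_EN) of SMI_EN: the pre-start path clears it with the mask 0xffffdfff, the pre-stop path sets it with 0x00002000. Those constants are simply ~(1 << 13) and (1 << 13); a small self-contained check, with no port I/O:

#include <assert.h>
#include <stdint.h>

#define TCO_EN		(1u << 13)	/* bit 13 of SMI_EN */

int main(void)
{
	uint32_t val = 0xffffffff;

	assert((uint32_t)~TCO_EN == 0xffffdfff);
	assert(TCO_EN == 0x00002000);

	val &= ~TCO_EN;			/* disable SMI clearing of the watchdog */
	assert((val & TCO_EN) == 0);
	val |= TCO_EN;			/* re-enable it */
	assert(val & TCO_EN);
	return 0;
}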
+14-21
drivers/watchdog/iTCO_wdt.c
···1/*2- * intel TCO Watchdog Driver (Used in i82801 and i6300ESB chipsets)3 *4- * (c) Copyright 2006-2008 Wim Van Sebroeck <wim@iguana.be>.5 *6 * This program is free software; you can redistribute it and/or7 * modify it under the terms of the GNU General Public License···6364/* Module and version information */65#define DRV_NAME "iTCO_wdt"66-#define DRV_VERSION "1.04"67#define PFX DRV_NAME ": "6869/* Includes */···236237/* Address definitions for the TCO */238/* TCO base address */239-#define TCOBASE iTCO_wdt_private.ACPIBASE + 0x60240/* SMI Control and Enable Register */241-#define SMI_EN iTCO_wdt_private.ACPIBASE + 0x30242243#define TCO_RLD TCOBASE + 0x00 /* TCO Timer Reload and Curr. Value */244#define TCOv1_TMR TCOBASE + 0x01 /* TCOv1 Timer Initial Value */245-#define TCO_DAT_IN TCOBASE + 0x02 /* TCO Data In Register */246-#define TCO_DAT_OUT TCOBASE + 0x03 /* TCO Data Out Register */247-#define TCO1_STS TCOBASE + 0x04 /* TCO1 Status Register */248-#define TCO2_STS TCOBASE + 0x06 /* TCO2 Status Register */249#define TCO1_CNT TCOBASE + 0x08 /* TCO1 Control Register */250#define TCO2_CNT TCOBASE + 0x0a /* TCO2 Control Register */251#define TCOv2_TMR TCOBASE + 0x12 /* TCOv2 Timer Initial Value */···338static int iTCO_wdt_start(void)339{340 unsigned int val;341- unsigned long val32;342343 spin_lock(&iTCO_wdt_private.io_lock);344···349 printk(KERN_ERR PFX "failed to reset NO_REBOOT flag, reboot disabled by hardware\n");350 return -EIO;351 }352-353- /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */354- val32 = inl(SMI_EN);355- val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */356- outl(val32, SMI_EN);357358 /* Force the timer to its reload value by writing to the TCO_RLD359 register */···372static int iTCO_wdt_stop(void)373{374 unsigned int val;375- unsigned long val32;376377 spin_lock(&iTCO_wdt_private.io_lock);378···382 val |= 0x0800;383 outw(val, TCO1_CNT);384 val = inw(TCO1_CNT);385-386- /* Bit 13: TCO_EN -> 1 = Enables the TCO logic to generate SMI# */387- val32 = inl(SMI_EN);388- val32 |= 0x00002000;389- outl(val32, SMI_EN);390391 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */392 iTCO_wdt_set_NO_REBOOT_bit();···637 int ret;638 u32 base_address;639 unsigned long RCBA;0640641 /*642 * Find the ACPI/PM base I/O address which is the base···684 ret = -EIO;685 goto out;686 }0000687688 /* The TCO I/O registers reside in a 32-byte range pointed to689 by the TCOBASE value */
···1/*2+ * intel TCO Watchdog Driver (Used in i82801 and i63xxESB chipsets)3 *4+ * (c) Copyright 2006-2009 Wim Van Sebroeck <wim@iguana.be>.5 *6 * This program is free software; you can redistribute it and/or7 * modify it under the terms of the GNU General Public License···6364/* Module and version information */65#define DRV_NAME "iTCO_wdt"66+#define DRV_VERSION "1.05"67#define PFX DRV_NAME ": "6869/* Includes */···236237/* Address definitions for the TCO */238/* TCO base address */239+#define TCOBASE iTCO_wdt_private.ACPIBASE + 0x60240/* SMI Control and Enable Register */241+#define SMI_EN iTCO_wdt_private.ACPIBASE + 0x30242243#define TCO_RLD TCOBASE + 0x00 /* TCO Timer Reload and Curr. Value */244#define TCOv1_TMR TCOBASE + 0x01 /* TCOv1 Timer Initial Value */245+#define TCO_DAT_IN TCOBASE + 0x02 /* TCO Data In Register */246+#define TCO_DAT_OUT TCOBASE + 0x03 /* TCO Data Out Register */247+#define TCO1_STS TCOBASE + 0x04 /* TCO1 Status Register */248+#define TCO2_STS TCOBASE + 0x06 /* TCO2 Status Register */249#define TCO1_CNT TCOBASE + 0x08 /* TCO1 Control Register */250#define TCO2_CNT TCOBASE + 0x0a /* TCO2 Control Register */251#define TCOv2_TMR TCOBASE + 0x12 /* TCOv2 Timer Initial Value */···338static int iTCO_wdt_start(void)339{340 unsigned int val;0341342 spin_lock(&iTCO_wdt_private.io_lock);343···350 printk(KERN_ERR PFX "failed to reset NO_REBOOT flag, reboot disabled by hardware\n");351 return -EIO;352 }00000353354 /* Force the timer to its reload value by writing to the TCO_RLD355 register */···378static int iTCO_wdt_stop(void)379{380 unsigned int val;0381382 spin_lock(&iTCO_wdt_private.io_lock);383···389 val |= 0x0800;390 outw(val, TCO1_CNT);391 val = inw(TCO1_CNT);00000392393 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */394 iTCO_wdt_set_NO_REBOOT_bit();···649 int ret;650 u32 base_address;651 unsigned long RCBA;652+ unsigned long val32;653654 /*655 * Find the ACPI/PM base I/O address which is the base···695 ret = -EIO;696 goto out;697 }698+ /* Bit 13: TCO_EN -> 0 = Disables TCO logic generating an SMI# */699+ val32 = inl(SMI_EN);700+ val32 &= 0xffffdfff; /* Turn off SMI clearing watchdog */701+ outl(val32, SMI_EN);702703 /* The TCO I/O registers reside in a 32-byte range pointed to704 by the TCOBASE value */
+3-2
fs/bio.c
···302struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)303{304 struct bio *bio = NULL;0305306 if (bs) {307- void *p = mempool_alloc(bs->bio_pool, gfp_mask);308309 if (p)310 bio = p + bs->front_pad;···330 }331 if (unlikely(!bvl)) {332 if (bs)333- mempool_free(bio, bs->bio_pool);334 else335 kfree(bio);336 bio = NULL;
···302struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)303{304 struct bio *bio = NULL;305+ void *p;306307 if (bs) {308+ p = mempool_alloc(bs->bio_pool, gfp_mask);309310 if (p)311 bio = p + bs->front_pad;···329 }330 if (unlikely(!bvl)) {331 if (bs)332+ mempool_free(p, bs->bio_pool);333 else334 kfree(bio);335 bio = NULL;
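The bio fix above matters because bio_alloc_bioset() returns a pointer offset by bs->front_pad into the mempool allocation, so the error path must release the original allocation p, not the offset bio pointer. A malloc-based sketch of the same interior-pointer rule (sizes and names are illustrative):

#include <stdlib.h>
#include <string.h>

#define FRONT_PAD 64	/* illustrative per-object front padding */

struct fake_bio { char payload[128]; };

int main(void)
{
	/* One allocation carries the pad plus the object itself. */
	void *p = malloc(FRONT_PAD + sizeof(struct fake_bio));
	if (!p)
		return 1;

	struct fake_bio *bio = (struct fake_bio *)((char *)p + FRONT_PAD);
	memset(bio, 0, sizeof(*bio));

	/* On failure or teardown, free the original pointer, never the offset one. */
	free(p);	/* free(bio) here would be undefined behaviour */
	return 0;
}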
+37-21
fs/btrfs/ctree.c
···38static int del_ptr(struct btrfs_trans_handle *trans, struct btrfs_root *root,39 struct btrfs_path *path, int level, int slot);4041-inline void btrfs_init_path(struct btrfs_path *p)42-{43- memset(p, 0, sizeof(*p));44-}45-46struct btrfs_path *btrfs_alloc_path(void)47{48 struct btrfs_path *path;49- path = kmem_cache_alloc(btrfs_path_cachep, GFP_NOFS);50- if (path) {51- btrfs_init_path(path);52 path->reada = 1;53- }54 return path;55}56···6263/*64 * reset all the locked nodes in the patch to spinning locks.0000065 */66-noinline void btrfs_clear_path_blocking(struct btrfs_path *p)067{68 int i;69- for (i = 0; i < BTRFS_MAX_LEVEL; i++) {000000000000070 if (p->nodes[i] && p->locks[i])71 btrfs_clear_lock_blocking(p->nodes[i]);72 }0000073}7475/* this also releases the path */···303 trans->transid, level, &ins);304 BUG_ON(ret);305 cow = btrfs_init_new_buffer(trans, root, prealloc_dest,306- buf->len);307 } else {308 cow = btrfs_alloc_free_block(trans, root, buf->len,309 parent_start,···934935 /* promote the child to a root */936 child = read_node_slot(root, mid, 0);0937 btrfs_tree_lock(child);938 btrfs_set_lock_blocking(child);939- BUG_ON(!child);940 ret = btrfs_cow_block(trans, root, child, mid, 0, &child, 0);941 BUG_ON(ret);942···1583 if (!p->skip_locking)1584 p->locks[level] = 1;15851586- btrfs_clear_path_blocking(p);15871588 /*1589 * we have a lock on b and as long as we aren't changing···16221623 btrfs_set_path_blocking(p);1624 sret = split_node(trans, root, p, level);1625- btrfs_clear_path_blocking(p);16261627 BUG_ON(sret > 0);1628 if (sret) {···16421643 btrfs_set_path_blocking(p);1644 sret = balance_level(trans, root, p, level);1645- btrfs_clear_path_blocking(p);16461647 if (sret) {1648 ret = sret;···1705 if (!p->skip_locking) {1706 int lret;17071708- btrfs_clear_path_blocking(p);1709 lret = btrfs_try_spin_lock(b);17101711 if (!lret) {1712 btrfs_set_path_blocking(p);1713 btrfs_tree_lock(b);1714- btrfs_clear_path_blocking(p);1715 }1716 }1717 } else {···1723 btrfs_set_path_blocking(p);1724 sret = split_leaf(trans, root, key,1725 p, ins_len, ret == 0);1726- btrfs_clear_path_blocking(p);17271728 BUG_ON(sret > 0);1729 if (sret) {···3943 btrfs_release_path(root, path);3944 goto again;3945 } else {3946- btrfs_clear_path_blocking(path);3947 goto out;3948 }3949 }···3962 path->locks[level - 1] = 1;3963 path->nodes[level - 1] = cur;3964 unlock_up(path, level, 1);3965- btrfs_clear_path_blocking(path);3966 }3967out:3968 if (ret == 0)
···38static int del_ptr(struct btrfs_trans_handle *trans, struct btrfs_root *root,39 struct btrfs_path *path, int level, int slot);400000041struct btrfs_path *btrfs_alloc_path(void)42{43 struct btrfs_path *path;44+ path = kmem_cache_zalloc(btrfs_path_cachep, GFP_NOFS);45+ if (path)046 path->reada = 1;047 return path;48}49···6970/*71 * reset all the locked nodes in the patch to spinning locks.72+ *73+ * held is used to keep lockdep happy, when lockdep is enabled74+ * we set held to a blocking lock before we go around and75+ * retake all the spinlocks in the path. You can safely use NULL76+ * for held77 */78+noinline void btrfs_clear_path_blocking(struct btrfs_path *p,79+ struct extent_buffer *held)80{81 int i;82+83+#ifdef CONFIG_DEBUG_LOCK_ALLOC84+ /* lockdep really cares that we take all of these spinlocks85+ * in the right order. If any of the locks in the path are not86+ * currently blocking, it is going to complain. So, make really87+ * really sure by forcing the path to blocking before we clear88+ * the path blocking.89+ */90+ if (held)91+ btrfs_set_lock_blocking(held);92+ btrfs_set_path_blocking(p);93+#endif94+95+ for (i = BTRFS_MAX_LEVEL - 1; i >= 0; i--) {96 if (p->nodes[i] && p->locks[i])97 btrfs_clear_lock_blocking(p->nodes[i]);98 }99+100+#ifdef CONFIG_DEBUG_LOCK_ALLOC101+ if (held)102+ btrfs_clear_lock_blocking(held);103+#endif104}105106/* this also releases the path */···286 trans->transid, level, &ins);287 BUG_ON(ret);288 cow = btrfs_init_new_buffer(trans, root, prealloc_dest,289+ buf->len, level);290 } else {291 cow = btrfs_alloc_free_block(trans, root, buf->len,292 parent_start,···917918 /* promote the child to a root */919 child = read_node_slot(root, mid, 0);920+ BUG_ON(!child);921 btrfs_tree_lock(child);922 btrfs_set_lock_blocking(child);0923 ret = btrfs_cow_block(trans, root, child, mid, 0, &child, 0);924 BUG_ON(ret);925···1566 if (!p->skip_locking)1567 p->locks[level] = 1;15681569+ btrfs_clear_path_blocking(p, NULL);15701571 /*1572 * we have a lock on b and as long as we aren't changing···16051606 btrfs_set_path_blocking(p);1607 sret = split_node(trans, root, p, level);1608+ btrfs_clear_path_blocking(p, NULL);16091610 BUG_ON(sret > 0);1611 if (sret) {···16251626 btrfs_set_path_blocking(p);1627 sret = balance_level(trans, root, p, level);1628+ btrfs_clear_path_blocking(p, NULL);16291630 if (sret) {1631 ret = sret;···1688 if (!p->skip_locking) {1689 int lret;16901691+ btrfs_clear_path_blocking(p, NULL);1692 lret = btrfs_try_spin_lock(b);16931694 if (!lret) {1695 btrfs_set_path_blocking(p);1696 btrfs_tree_lock(b);1697+ btrfs_clear_path_blocking(p, b);1698 }1699 }1700 } else {···1706 btrfs_set_path_blocking(p);1707 sret = split_leaf(trans, root, key,1708 p, ins_len, ret == 0);1709+ btrfs_clear_path_blocking(p, NULL);17101711 BUG_ON(sret > 0);1712 if (sret) {···3926 btrfs_release_path(root, path);3927 goto again;3928 } else {03929 goto out;3930 }3931 }···3946 path->locks[level - 1] = 1;3947 path->nodes[level - 1] = cur;3948 unlock_up(path, level, 1);3949+ btrfs_clear_path_blocking(path, NULL);3950 }3951out:3952 if (ret == 0)
···4344#define BTRFS_ACL_NOT_CACHED ((void *)-1)4546+#define BTRFS_MAX_LEVEL 800004748/* holds pointers to all of the tree roots */49#define BTRFS_ROOT_TREE_OBJECTID 1ULL···1715 u64 empty_size);1716struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans,1717 struct btrfs_root *root,1718+ u64 bytenr, u32 blocksize,1719+ int level);1720int btrfs_alloc_extent(struct btrfs_trans_handle *trans,1721 struct btrfs_root *root,1722 u64 num_bytes, u64 parent, u64 min_bytes,···1834void btrfs_release_path(struct btrfs_root *root, struct btrfs_path *p);1835struct btrfs_path *btrfs_alloc_path(void);1836void btrfs_free_path(struct btrfs_path *p);01837void btrfs_set_path_blocking(struct btrfs_path *p);01838void btrfs_unlock_up_safe(struct btrfs_path *p, int level);18391840int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+45-1
fs/btrfs/disk-io.c
···75 struct btrfs_work work;76};77000000000000000000000000000000000078/*79 * extents on the btree inode are pretty simple, there's one extent80 * that covers the entire device···381 return ret;382}383000000000384static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end,385 struct extent_state *state)386{···434 goto err;435 }436 found_level = btrfs_header_level(eb);00437438 ret = csum_tree_block(root, eb, 1);439 if (ret)···1822 ret = find_and_setup_root(tree_root, fs_info,1823 BTRFS_DEV_TREE_OBJECTID, dev_root);1824 dev_root->track_dirty = 1;1825-1826 if (ret)1827 goto fail_extent_root;1828
···75 struct btrfs_work work;76};7778+/* These are used to set the lockdep class on the extent buffer locks.79+ * The class is set by the readpage_end_io_hook after the buffer has80+ * passed csum validation but before the pages are unlocked.81+ *82+ * The lockdep class is also set by btrfs_init_new_buffer on freshly83+ * allocated blocks.84+ *85+ * The class is based on the level in the tree block, which allows lockdep86+ * to know that lower nodes nest inside the locks of higher nodes.87+ *88+ * We also add a check to make sure the highest level of the tree is89+ * the same as our lockdep setup here. If BTRFS_MAX_LEVEL changes, this90+ * code needs update as well.91+ */92+#ifdef CONFIG_DEBUG_LOCK_ALLOC93+# if BTRFS_MAX_LEVEL != 894+# error95+# endif96+static struct lock_class_key btrfs_eb_class[BTRFS_MAX_LEVEL + 1];97+static const char *btrfs_eb_name[BTRFS_MAX_LEVEL + 1] = {98+ /* leaf */99+ "btrfs-extent-00",100+ "btrfs-extent-01",101+ "btrfs-extent-02",102+ "btrfs-extent-03",103+ "btrfs-extent-04",104+ "btrfs-extent-05",105+ "btrfs-extent-06",106+ "btrfs-extent-07",107+ /* highest possible level */108+ "btrfs-extent-08",109+};110+#endif111+112/*113 * extents on the btree inode are pretty simple, there's one extent114 * that covers the entire device···347 return ret;348}349350+#ifdef CONFIG_DEBUG_LOCK_ALLOC351+void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb, int level)352+{353+ lockdep_set_class_and_name(&eb->lock,354+ &btrfs_eb_class[level],355+ btrfs_eb_name[level]);356+}357+#endif358+359static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end,360 struct extent_state *state)361{···391 goto err;392 }393 found_level = btrfs_header_level(eb);394+395+ btrfs_set_buffer_lockdep_class(eb, found_level);396397 ret = csum_tree_block(root, eb, 1);398 if (ret)···1777 ret = find_and_setup_root(tree_root, fs_info,1778 BTRFS_DEV_TREE_OBJECTID, dev_root);1779 dev_root->track_dirty = 1;01780 if (ret)1781 goto fail_extent_root;1782
···1323int btrfs_extent_post_op(struct btrfs_trans_handle *trans,1324 struct btrfs_root *root)1325{1326- finish_current_insert(trans, root->fs_info->extent_root, 1);1327- del_pending_extents(trans, root->fs_info->extent_root, 1);000000000000000001328 return 0;1329}1330···2228 u64 end;2229 u64 priv;2230 u64 search = 0;2231- u64 skipped = 0;2232 struct btrfs_fs_info *info = extent_root->fs_info;2233 struct btrfs_path *path;2234 struct pending_extent_op *extent_op, *tmp;2235 struct list_head insert_list, update_list;2236 int ret;2237- int num_inserts = 0, max_inserts;22382239 path = btrfs_alloc_path();2240 INIT_LIST_HEAD(&insert_list);···2249 ret = find_first_extent_bit(&info->extent_ins, search, &start,2250 &end, EXTENT_WRITEBACK);2251 if (ret) {2252- if (skipped && all && !num_inserts &&2253 list_empty(&update_list)) {2254- skipped = 0;2255 search = 0;2256 continue;2257 }2258- mutex_unlock(&info->extent_ins_mutex);2259 break;2260 }22612262 ret = try_lock_extent(&info->extent_ins, start, end, GFP_NOFS);2263 if (!ret) {2264- skipped = 1;02265 search = end + 1;2266 if (need_resched()) {2267 mutex_unlock(&info->extent_ins_mutex);···2280 list_add_tail(&extent_op->list, &insert_list);2281 search = end + 1;2282 if (num_inserts == max_inserts) {2283- mutex_unlock(&info->extent_ins_mutex);2284 break;2285 }2286 } else if (extent_op->type == PENDING_BACKREF_UPDATE) {···2296 * somebody marked this thing for deletion then just unlock it and be2297 * done, the free_extents will handle it2298 */2299- mutex_lock(&info->extent_ins_mutex);2300 list_for_each_entry_safe(extent_op, tmp, &update_list, list) {2301 clear_extent_bits(&info->extent_ins, extent_op->bytenr,2302 extent_op->bytenr + extent_op->num_bytes - 1,···2317 if (!list_empty(&update_list)) {2318 ret = update_backrefs(trans, extent_root, path, &update_list);2319 BUG_ON(ret);00002320 }23212322 /*···2328 * need to make sure everything is cleaned then reset everything and2329 * go back to the beginning2330 */2331- if (!num_inserts && all && skipped) {2332 search = 0;2333- skipped = 0;2334 INIT_LIST_HEAD(&update_list);2335 INIT_LIST_HEAD(&insert_list);2336 goto again;···2387 BUG_ON(ret);23882389 /*2390- * if we broke out of the loop in order to insert stuff because we hit2391- * the maximum number of inserts at a time we can handle, then loop2392- * back and pick up where we left off00002393 */2394- if (num_inserts == max_inserts) {2395- INIT_LIST_HEAD(&insert_list);2396- INIT_LIST_HEAD(&update_list);2397- num_inserts = 0;2398- goto again;2399- }2400-2401- /*2402- * again, if we need to make absolutely sure there are no more pending2403- * extent operations left and we know that we skipped some, go back to2404- * the beginning and do it all again2405- */2406- if (all && skipped) {2407 INIT_LIST_HEAD(&insert_list);2408 INIT_LIST_HEAD(&update_list);2409 search = 0;2410- skipped = 0;2411 num_inserts = 0;2412 goto again;2413 }···2720 goto again;2721 }2722002723 return err;2724}2725···28722873 if (data & BTRFS_BLOCK_GROUP_METADATA) {2874 last_ptr = &root->fs_info->last_alloc;2875- empty_cluster = 64 * 1024;02876 }28772878 if ((data & BTRFS_BLOCK_GROUP_DATA) && btrfs_test_opt(root, SSD))···34163417struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans,3418 struct btrfs_root *root,3419- u64 bytenr, u32 blocksize)03420{3421 struct extent_buffer *buf;3422···3425 if (!buf)3426 return ERR_PTR(-ENOMEM);3427 btrfs_set_header_generation(buf, trans->transid);03428 btrfs_tree_lock(buf);3429 clean_tree_block(trans, root, buf);3430···3469 return 
ERR_PTR(ret);3470 }34713472- buf = btrfs_init_new_buffer(trans, root, ins.objectid, blocksize);03473 return buf;3474}3475···5658 prev_block = block_start;5659 }566005661 btrfs_record_root_in_trans(found_root);05662 if (ref_path->owner_objectid >= BTRFS_FIRST_FREE_OBJECTID) {5663 /*5664 * try to update data extent references while
···1323int btrfs_extent_post_op(struct btrfs_trans_handle *trans,1324 struct btrfs_root *root)1325{1326+ u64 start;1327+ u64 end;1328+ int ret;1329+1330+ while(1) {1331+ finish_current_insert(trans, root->fs_info->extent_root, 1);1332+ del_pending_extents(trans, root->fs_info->extent_root, 1);1333+1334+ /* is there more work to do? */1335+ ret = find_first_extent_bit(&root->fs_info->pending_del,1336+ 0, &start, &end, EXTENT_WRITEBACK);1337+ if (!ret)1338+ continue;1339+ ret = find_first_extent_bit(&root->fs_info->extent_ins,1340+ 0, &start, &end, EXTENT_WRITEBACK);1341+ if (!ret)1342+ continue;1343+ break;1344+ }1345 return 0;1346}1347···2211 u64 end;2212 u64 priv;2213 u64 search = 0;02214 struct btrfs_fs_info *info = extent_root->fs_info;2215 struct btrfs_path *path;2216 struct pending_extent_op *extent_op, *tmp;2217 struct list_head insert_list, update_list;2218 int ret;2219+ int num_inserts = 0, max_inserts, restart = 0;22202221 path = btrfs_alloc_path();2222 INIT_LIST_HEAD(&insert_list);···2233 ret = find_first_extent_bit(&info->extent_ins, search, &start,2234 &end, EXTENT_WRITEBACK);2235 if (ret) {2236+ if (restart && !num_inserts &&2237 list_empty(&update_list)) {2238+ restart = 0;2239 search = 0;2240 continue;2241 }02242 break;2243 }22442245 ret = try_lock_extent(&info->extent_ins, start, end, GFP_NOFS);2246 if (!ret) {2247+ if (all)2248+ restart = 1;2249 search = end + 1;2250 if (need_resched()) {2251 mutex_unlock(&info->extent_ins_mutex);···2264 list_add_tail(&extent_op->list, &insert_list);2265 search = end + 1;2266 if (num_inserts == max_inserts) {2267+ restart = 1;2268 break;2269 }2270 } else if (extent_op->type == PENDING_BACKREF_UPDATE) {···2280 * somebody marked this thing for deletion then just unlock it and be2281 * done, the free_extents will handle it2282 */02283 list_for_each_entry_safe(extent_op, tmp, &update_list, list) {2284 clear_extent_bits(&info->extent_ins, extent_op->bytenr,2285 extent_op->bytenr + extent_op->num_bytes - 1,···2302 if (!list_empty(&update_list)) {2303 ret = update_backrefs(trans, extent_root, path, &update_list);2304 BUG_ON(ret);2305+2306+ /* we may have COW'ed new blocks, so lets start over */2307+ if (all)2308+ restart = 1;2309 }23102311 /*···2309 * need to make sure everything is cleaned then reset everything and2310 * go back to the beginning2311 */2312+ if (!num_inserts && restart) {2313 search = 0;2314+ restart = 0;2315 INIT_LIST_HEAD(&update_list);2316 INIT_LIST_HEAD(&insert_list);2317 goto again;···2368 BUG_ON(ret);23692370 /*2371+ * if restart is set for whatever reason we need to go back and start2372+ * searching through the pending list again.2373+ *2374+ * We just inserted some extents, which could have resulted in new2375+ * blocks being allocated, which would result in new blocks needing2376+ * updates, so if all is set we _must_ restart to get the updated2377+ * blocks.2378 */2379+ if (restart || all) {0000000000002380 INIT_LIST_HEAD(&insert_list);2381 INIT_LIST_HEAD(&update_list);2382 search = 0;2383+ restart = 0;2384 num_inserts = 0;2385 goto again;2386 }···2709 goto again;2710 }27112712+ if (!err)2713+ finish_current_insert(trans, extent_root, 0);2714 return err;2715}2716···28592860 if (data & BTRFS_BLOCK_GROUP_METADATA) {2861 last_ptr = &root->fs_info->last_alloc;2862+ if (!btrfs_test_opt(root, SSD))2863+ empty_cluster = 64 * 1024;2864 }28652866 if ((data & BTRFS_BLOCK_GROUP_DATA) && btrfs_test_opt(root, SSD))···34023403struct extent_buffer *btrfs_init_new_buffer(struct btrfs_trans_handle *trans,3404 struct btrfs_root 
*root,3405+ u64 bytenr, u32 blocksize,3406+ int level)3407{3408 struct extent_buffer *buf;3409···3410 if (!buf)3411 return ERR_PTR(-ENOMEM);3412 btrfs_set_header_generation(buf, trans->transid);3413+ btrfs_set_buffer_lockdep_class(buf, level);3414 btrfs_tree_lock(buf);3415 clean_tree_block(trans, root, buf);3416···3453 return ERR_PTR(ret);3454 }34553456+ buf = btrfs_init_new_buffer(trans, root, ins.objectid,3457+ blocksize, level);3458 return buf;3459}3460···5641 prev_block = block_start;5642 }56435644+ mutex_lock(&extent_root->fs_info->trans_mutex);5645 btrfs_record_root_in_trans(found_root);5646+ mutex_unlock(&extent_root->fs_info->trans_mutex);5647 if (ref_path->owner_objectid >= BTRFS_FIRST_FREE_OBJECTID) {5648 /*5649 * try to update data extent references while
···1222 /*1223 * ok we haven't committed the transaction yet, lets do a commit1224 */1225- if (file->private_data)1226 btrfs_ioctl_trans_end(file);12271228 trans = btrfs_start_transaction(root, 1);···1231 goto out;1232 }12331234- ret = btrfs_log_dentry_safe(trans, root, file->f_dentry);1235 if (ret < 0)1236 goto out;1237···1245 * file again, but that will end up using the synchronization1246 * inside btrfs_sync_log to keep things safe.1247 */1248- mutex_unlock(&file->f_dentry->d_inode->i_mutex);12491250 if (ret > 0) {1251 ret = btrfs_commit_transaction(trans, root);···1253 btrfs_sync_log(trans, root);1254 ret = btrfs_end_transaction(trans, root);1255 }1256- mutex_lock(&file->f_dentry->d_inode->i_mutex);1257out:1258 return ret > 0 ? EIO : ret;1259}
···1222 /*1223 * ok we haven't committed the transaction yet, lets do a commit1224 */1225+ if (file && file->private_data)1226 btrfs_ioctl_trans_end(file);12271228 trans = btrfs_start_transaction(root, 1);···1231 goto out;1232 }12331234+ ret = btrfs_log_dentry_safe(trans, root, dentry);1235 if (ret < 0)1236 goto out;1237···1245 * file again, but that will end up using the synchronization1246 * inside btrfs_sync_log to keep things safe.1247 */1248+ mutex_unlock(&dentry->d_inode->i_mutex);12491250 if (ret > 0) {1251 ret = btrfs_commit_transaction(trans, root);···1253 btrfs_sync_log(trans, root);1254 ret = btrfs_end_transaction(trans, root);1255 }1256+ mutex_lock(&dentry->d_inode->i_mutex);1257out:1258 return ret > 0 ? EIO : ret;1259}
···2531 key.offset = (u64)-1;2532 key.type = (u8)-1;25332534- btrfs_init_path(path);2535-2536search_again:2537 ret = btrfs_search_slot(trans, root, &key, path, -1, 1);2538 if (ret < 0)···4261{4262 if (PageWriteback(page) || PageDirty(page))4263 return 0;4264- return __btrfs_releasepage(page, gfp_flags);4265}42664267static void btrfs_invalidatepage(struct page *page, unsigned long offset)
···2531 key.offset = (u64)-1;2532 key.type = (u8)-1;2533002534search_again:2535 ret = btrfs_search_slot(trans, root, &key, path, -1, 1);2536 if (ret < 0)···4263{4264 if (PageWriteback(page) || PageDirty(page))4265 return 0;4266+ return __btrfs_releasepage(page, gfp_flags & GFP_NOFS);4267}42684269static void btrfs_invalidatepage(struct page *page, unsigned long offset)
-11
fs/btrfs/locking.c
···25#include "extent_io.h"26#include "locking.h"2728-/*29- * btrfs_header_level() isn't free, so don't call it when lockdep isn't30- * on31- */32-#ifdef CONFIG_DEBUG_LOCK_ALLOC33-static inline void spin_nested(struct extent_buffer *eb)34-{35- spin_lock_nested(&eb->lock, BTRFS_MAX_LEVEL - btrfs_header_level(eb));36-}37-#else38static inline void spin_nested(struct extent_buffer *eb)39{40 spin_lock(&eb->lock);41}42-#endif4344/*45 * Setting a lock to blocking will drop the spinlock and set the
···25#include "extent_io.h"26#include "locking.h"27000000000028static inline void spin_nested(struct extent_buffer *eb)29{30 spin_lock(&eb->lock);31}03233/*34 * Setting a lock to blocking will drop the spinlock and set the
+4-1
fs/btrfs/super.c
···379 btrfs_start_delalloc_inodes(root);380 btrfs_wait_ordered_extents(root, 0);381382- btrfs_clean_old_snapshots(root);383 trans = btrfs_start_transaction(root, 1);384 ret = btrfs_commit_transaction(trans, root);385 sb->s_dirt = 0;···509{510 struct btrfs_root *root = btrfs_sb(sb);511 int ret;0000512513 if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY))514 return 0;
···379 btrfs_start_delalloc_inodes(root);380 btrfs_wait_ordered_extents(root, 0);3810382 trans = btrfs_start_transaction(root, 1);383 ret = btrfs_commit_transaction(trans, root);384 sb->s_dirt = 0;···510{511 struct btrfs_root *root = btrfs_sb(sb);512 int ret;513+514+ ret = btrfs_parse_options(root, data);515+ if (ret)516+ return -EINVAL;517518 if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY))519 return 0;
···2894 free_extent_map(em);2895 }28962897- map = kzalloc(sizeof(*map), GFP_NOFS);2898- if (!map)2899- return -ENOMEM;2900-2901 em = alloc_extent_map(GFP_NOFS);2902 if (!em)2903 return -ENOMEM;···3102 if (!sb)3103 return -ENOMEM;3104 btrfs_set_buffer_uptodate(sb);003105 write_extent_buffer(sb, super_copy, 0, BTRFS_SUPER_INFO_SIZE);3106 array_size = btrfs_super_sys_array_size(super_copy);3107
···2894 free_extent_map(em);2895 }289600002897 em = alloc_extent_map(GFP_NOFS);2898 if (!em)2899 return -ENOMEM;···3106 if (!sb)3107 return -ENOMEM;3108 btrfs_set_buffer_uptodate(sb);3109+ btrfs_set_buffer_lockdep_class(sb, 0);3110+3111 write_extent_buffer(sb, super_copy, 0, BTRFS_SUPER_INFO_SIZE);3112 array_size = btrfs_super_sys_array_size(super_copy);3113
+2-1
fs/buffer.c
···777 __inc_zone_page_state(page, NR_FILE_DIRTY);778 __inc_bdi_stat(mapping->backing_dev_info,779 BDI_RECLAIMABLE);0780 task_io_account_write(PAGE_CACHE_SIZE);781 }782 radix_tree_tag_set(&mapping->page_tree,···3109 if (test_clear_buffer_dirty(bh)) {3110 get_bh(bh);3111 bh->b_end_io = end_buffer_write_sync;3112- ret = submit_bh(WRITE_SYNC, bh);3113 wait_on_buffer(bh);3114 if (buffer_eopnotsupp(bh)) {3115 clear_buffer_eopnotsupp(bh);
···777 __inc_zone_page_state(page, NR_FILE_DIRTY);778 __inc_bdi_stat(mapping->backing_dev_info,779 BDI_RECLAIMABLE);780+ task_dirty_inc(current);781 task_io_account_write(PAGE_CACHE_SIZE);782 }783 radix_tree_tag_set(&mapping->page_tree,···3108 if (test_clear_buffer_dirty(bh)) {3109 get_bh(bh);3110 bh->b_end_io = end_buffer_write_sync;3111+ ret = submit_bh(WRITE, bh);3112 wait_on_buffer(bh);3113 if (buffer_eopnotsupp(bh)) {3114 clear_buffer_eopnotsupp(bh);
+14-1
fs/cifs/CHANGES
···00000000001Version 1.562------------3Add "forcemandatorylock" mount option to allow user to use mandatory···17top of the share. Fix problem in 2.6.28 resolving DFS paths to18Samba servers (worked to Windows). Fix rmdir so that pending search19(readdir) requests do not get invalid results which include the now20-removed directory.0002122Version 1.5523------------
···1+Version 1.572+------------3+Improve support for multiple security contexts to the same server. We4+used to use the same "vcnumber" for all connections which could cause5+the server to treat subsequent connections, especially those that6+are authenticated as guest, as reconnections, invalidating the earlier7+user's smb session. This fix allows cifs to mount multiple times to the8+same server with different userids without risking invalidating earlier9+established security contexts.10+11Version 1.5612------------13Add "forcemandatorylock" mount option to allow user to use mandatory···7top of the share. Fix problem in 2.6.28 resolving DFS paths to8Samba servers (worked to Windows). Fix rmdir so that pending search9(readdir) requests do not get invalid results which include the now10+removed directory. Fix oops in cifs_dfs_ref.c when prefixpath is not reachable11+when using DFS. Add better file create support to servers which support12+the CIFS POSIX protocol extensions (this adds support for new flags13+on create, and improves semantics for write of locked ranges).1415Version 1.5516------------
fs/cifs/cifsglob.h
···164 /* multiplexed reads or writes */165 unsigned int maxBuf; /* maxBuf specifies the maximum */166 /* message size the server can send or receive for non-raw SMBs */167- unsigned int maxRw; /* maxRw specifies the maximum */168 /* message size the server can send or receive for */169 /* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */000170 char sessid[4]; /* unique token id for this session */171 /* (returned on Negotiate */172 int capabilities; /* allow selective disabling of caps by smb sess */···213 unsigned overrideSecFlg; /* if non-zero override global sec flags */214 __u16 ipc_tid; /* special tid for connection to IPC share */215 __u16 flags;0216 char *serverOS; /* name of operating system underlying server */217 char *serverNOS; /* name of network operating system of server */218 char *serverDomain; /* security realm of server */
···164 /* multiplexed reads or writes */165 unsigned int maxBuf; /* maxBuf specifies the maximum */166 /* message size the server can send or receive for non-raw SMBs */167+ unsigned int max_rw; /* maxRw specifies the maximum */168 /* message size the server can send or receive for */169 /* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */170+ unsigned int max_vcs; /* maximum number of smb sessions, at least171+ those that can be specified uniquely with172+ vcnumbers */173 char sessid[4]; /* unique token id for this session */174 /* (returned on Negotiate */175 int capabilities; /* allow selective disabling of caps by smb sess */···210 unsigned overrideSecFlg; /* if non-zero override global sec flags */211 __u16 ipc_tid; /* special tid for connection to IPC share */212 __u16 flags;213+ __u16 vcnum;214 char *serverOS; /* name of operating system underlying server */215 char *serverNOS; /* name of network operating system of server */216 char *serverDomain; /* security realm of server */
fs/cifs/cifssmb.c
···528 server->maxReq = le16_to_cpu(rsp->MaxMpxCount);529 server->maxBuf = min((__u32)le16_to_cpu(rsp->MaxBufSize),530 (__u32)CIFSMaxBufSize + MAX_CIFS_HDR_SIZE);0531 GETU32(server->sessid) = le32_to_cpu(rsp->SessionKey);532 /* even though we do not use raw we might as well set this533 accurately, in case we ever find a need for it */534 if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) {535- server->maxRw = 0xFF00;536 server->capabilities = CAP_MPX_MODE | CAP_RAW_MODE;537 } else {538- server->maxRw = 0;/* we do not need to use raw anyway */539 server->capabilities = CAP_MPX_MODE;540 }541 tmp = (__s16)le16_to_cpu(rsp->ServerTimeZone);···639 /* probably no need to store and check maxvcs */640 server->maxBuf = min(le32_to_cpu(pSMBr->MaxBufferSize),641 (__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE);642- server->maxRw = le32_to_cpu(pSMBr->MaxRawSize);643 cFYI(DBG2, ("Max buf = %d", ses->server->maxBuf));644 GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey);645 server->capabilities = le32_to_cpu(pSMBr->Capabilities);
···528 server->maxReq = le16_to_cpu(rsp->MaxMpxCount);529 server->maxBuf = min((__u32)le16_to_cpu(rsp->MaxBufSize),530 (__u32)CIFSMaxBufSize + MAX_CIFS_HDR_SIZE);531+ server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs);532 GETU32(server->sessid) = le32_to_cpu(rsp->SessionKey);533 /* even though we do not use raw we might as well set this534 accurately, in case we ever find a need for it */535 if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) {536+ server->max_rw = 0xFF00;537 server->capabilities = CAP_MPX_MODE | CAP_RAW_MODE;538 } else {539+ server->max_rw = 0;/* do not need to use raw anyway */540 server->capabilities = CAP_MPX_MODE;541 }542 tmp = (__s16)le16_to_cpu(rsp->ServerTimeZone);···638 /* probably no need to store and check maxvcs */639 server->maxBuf = min(le32_to_cpu(pSMBr->MaxBufferSize),640 (__u32) CIFSMaxBufSize + MAX_CIFS_HDR_SIZE);641+ server->max_rw = le32_to_cpu(pSMBr->MaxRawSize);642 cFYI(DBG2, ("Max buf = %d", ses->server->maxBuf));643 GETU32(ses->server->sessid) = le32_to_cpu(pSMBr->SessionKey);644 server->capabilities = le32_to_cpu(pSMBr->Capabilities);
+48-3
fs/cifs/connect.c
···23#include <linux/string.h>24#include <linux/list.h>25#include <linux/wait.h>26-#include <linux/ipv6.h>27#include <linux/pagemap.h>28#include <linux/ctype.h>29#include <linux/utsname.h>···34#include <linux/freezer.h>35#include <asm/uaccess.h>36#include <asm/processor.h>037#include "cifspdu.h"38#include "cifsglob.h"39#include "cifsproto.h"···1379 server->addr.sockAddr.sin_addr.s_addr))1380 continue;1381 else if (addr->ss_family == AF_INET6 &&1382- memcmp(&server->addr.sockAddr6.sin6_addr,1383- &addr6->sin6_addr, sizeof(addr6->sin6_addr)))1384 continue;13851386 ++server->srv_count;···2180 "mount option supported"));2181}21820000000000000000000000000002183int2184cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb,2185 char *mount_data, const char *devname)···2217 struct cifsSesInfo *pSesInfo = NULL;2218 struct cifsTconInfo *tcon = NULL;2219 struct TCP_Server_Info *srvTcp = NULL;022202221 xid = GetXid();2222···2453 if (!(tcon->ses->capabilities & CAP_LARGE_READ_X))2454 cifs_sb->rsize = min(cifs_sb->rsize,2455 (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE));0000000000000000024562457 /* volume_info->password is freed above when existing session found2458 (in which case it is not needed anymore) but when new sesion is created
···23#include <linux/string.h>24#include <linux/list.h>25#include <linux/wait.h>026#include <linux/pagemap.h>27#include <linux/ctype.h>28#include <linux/utsname.h>···35#include <linux/freezer.h>36#include <asm/uaccess.h>37#include <asm/processor.h>38+#include <net/ipv6.h>39#include "cifspdu.h"40#include "cifsglob.h"41#include "cifsproto.h"···1379 server->addr.sockAddr.sin_addr.s_addr))1380 continue;1381 else if (addr->ss_family == AF_INET6 &&1382+ !ipv6_addr_equal(&server->addr.sockAddr6.sin6_addr,1383+ &addr6->sin6_addr))1384 continue;13851386 ++server->srv_count;···2180 "mount option supported"));2181}21822183+static int2184+is_path_accessible(int xid, struct cifsTconInfo *tcon,2185+ struct cifs_sb_info *cifs_sb, const char *full_path)2186+{2187+ int rc;2188+ __u64 inode_num;2189+ FILE_ALL_INFO *pfile_info;2190+2191+ rc = CIFSGetSrvInodeNumber(xid, tcon, full_path, &inode_num,2192+ cifs_sb->local_nls,2193+ cifs_sb->mnt_cifs_flags &2194+ CIFS_MOUNT_MAP_SPECIAL_CHR);2195+ if (rc != -EOPNOTSUPP)2196+ return rc;2197+2198+ pfile_info = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);2199+ if (pfile_info == NULL)2200+ return -ENOMEM;2201+2202+ rc = CIFSSMBQPathInfo(xid, tcon, full_path, pfile_info,2203+ 0 /* not legacy */, cifs_sb->local_nls,2204+ cifs_sb->mnt_cifs_flags &2205+ CIFS_MOUNT_MAP_SPECIAL_CHR);2206+ kfree(pfile_info);2207+ return rc;2208+}2209+2210int2211cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb,2212 char *mount_data, const char *devname)···2190 struct cifsSesInfo *pSesInfo = NULL;2191 struct cifsTconInfo *tcon = NULL;2192 struct TCP_Server_Info *srvTcp = NULL;2193+ char *full_path;21942195 xid = GetXid();2196···2425 if (!(tcon->ses->capabilities & CAP_LARGE_READ_X))2426 cifs_sb->rsize = min(cifs_sb->rsize,2427 (tcon->ses->server->maxBuf - MAX_CIFS_HDR_SIZE));2428+2429+ if (!rc && cifs_sb->prepathlen) {2430+ /* build_path_to_root works only when we have a valid tcon */2431+ full_path = cifs_build_path_to_root(cifs_sb);2432+ if (full_path == NULL) {2433+ rc = -ENOMEM;2434+ goto mount_fail_check;2435+ }2436+ rc = is_path_accessible(xid, tcon, cifs_sb, full_path);2437+ if (rc) {2438+ cERROR(1, ("Path %s in not accessible: %d",2439+ full_path, rc));2440+ kfree(full_path);2441+ goto mount_fail_check;2442+ }2443+ kfree(full_path);2444+ }24452446 /* volume_info->password is freed above when existing session found2447 (in which case it is not needed anymore) but when new sesion is created
+202-99
fs/cifs/dir.c
···3 *4 * vfs operations that deal with dentries5 *6- * Copyright (C) International Business Machines Corp., 2002,20087 * Author(s): Steve French (sfrench@us.ibm.com)8 *9 * This library is free software; you can redistribute it and/or modify···129 return full_path;130}131000000000000000000000000000000000000000000000000000000000000000000000000132static void setup_cifs_dentry(struct cifsTconInfo *tcon,133 struct dentry *direntry,134 struct inode *newinode)···222 int xid;223 int create_options = CREATE_NOT_DIR;224 int oplock = 0;225- /* BB below access is too much for the mknod to request */0000000226 int desiredAccess = GENERIC_READ | GENERIC_WRITE;227 __u16 fileHandle;228 struct cifs_sb_info *cifs_sb;···253 }254255 mode &= ~current->fs->umask;000000000000000000000000000000256257 if (nd && (nd->flags & LOOKUP_OPEN)) {258- int oflags = nd->intent.open.flags;259-260 desiredAccess = 0;261 if (oflags & FMODE_READ)262- desiredAccess |= GENERIC_READ;263 if (oflags & FMODE_WRITE) {264 desiredAccess |= GENERIC_WRITE;265 if (!(oflags & FMODE_READ))···308309 /* BB add processing to set equivalent of mode - e.g. via CreateX with310 ACLs */311- if (oplockEnabled)312- oplock = REQ_OPLOCK;313314 buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);315 if (buf == NULL) {···340 }341 if (rc) {342 cFYI(1, ("cifs_create returned 0x%x", rc));343- } else {344- /* If Open reported that we actually created a file345- then we now have to set the mode if possible */346- if ((tcon->unix_ext) && (oplock & CIFS_CREATE_ACTION)) {347- struct cifs_unix_set_info_args args = {00348 .mode = mode,349 .ctime = NO_CHANGE_64,350 .atime = NO_CHANGE_64,351 .mtime = NO_CHANGE_64,352 .device = 0,353- };354355- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {356- args.uid = (__u64) current_fsuid();357- if (inode->i_mode & S_ISGID)358- args.gid = (__u64) inode->i_gid;359- else360- args.gid = (__u64) current_fsgid();361- } else {362- args.uid = NO_CHANGE_64;363- args.gid = NO_CHANGE_64;364- }365- CIFSSMBUnixSetInfo(xid, tcon, full_path, &args,366- cifs_sb->local_nls,367- cifs_sb->mnt_cifs_flags &368- CIFS_MOUNT_MAP_SPECIAL_CHR);369 } else {370- /* BB implement mode setting via Windows security371- descriptors e.g. 
*/372- /* CIFSSMBWinSetPerms(xid,tcon,path,mode,-1,-1,nls);*/373-374- /* Could set r/o dos attribute if mode & 0222 == 0 */375 }0000000376377- /* server might mask mode so we have to query for it */378- if (tcon->unix_ext)379- rc = cifs_get_inode_info_unix(&newinode, full_path,380- inode->i_sb, xid);381- else {382- rc = cifs_get_inode_info(&newinode, full_path,383- buf, inode->i_sb, xid,384- &fileHandle);385- if (newinode) {386- if (cifs_sb->mnt_cifs_flags &387- CIFS_MOUNT_DYNPERM)388- newinode->i_mode = mode;389- if ((oplock & CIFS_CREATE_ACTION) &&390- (cifs_sb->mnt_cifs_flags &391- CIFS_MOUNT_SET_UID)) {392- newinode->i_uid = current_fsuid();393- if (inode->i_mode & S_ISGID)394- newinode->i_gid =395- inode->i_gid;396- else397- newinode->i_gid =398- current_fsgid();399- }400 }401 }0402403- if (rc != 0) {404- cFYI(1, ("Create worked, get_inode_info failed rc = %d",405- rc));406- } else407- setup_cifs_dentry(tcon, direntry, newinode);408409- if ((nd == NULL /* nfsd case - nfs srv does not set nd */) ||410- (!(nd->flags & LOOKUP_OPEN))) {411- /* mknod case - do not leave file open */412- CIFSSMBClose(xid, tcon, fileHandle);413- } else if (newinode) {414- struct cifsFileInfo *pCifsFile =415- kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL);416417- if (pCifsFile == NULL)418- goto cifs_create_out;419- pCifsFile->netfid = fileHandle;420- pCifsFile->pid = current->tgid;421- pCifsFile->pInode = newinode;422- pCifsFile->invalidHandle = false;423- pCifsFile->closePend = false;424- init_MUTEX(&pCifsFile->fh_sem);425- mutex_init(&pCifsFile->lock_mutex);426- INIT_LIST_HEAD(&pCifsFile->llist);427- atomic_set(&pCifsFile->wrtPending, 0);428429- /* set the following in open now430 pCifsFile->pfile = file; */431- write_lock(&GlobalSMBSeslock);432- list_add(&pCifsFile->tlist, &tcon->openFileList);433- pCifsInode = CIFS_I(newinode);434- if (pCifsInode) {435- /* if readable file instance put first in list*/436- if (write_only) {437- list_add_tail(&pCifsFile->flist,438- &pCifsInode->openFileList);439- } else {440- list_add(&pCifsFile->flist,441- &pCifsInode->openFileList);442- }443- if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) {444- pCifsInode->clientCanCacheAll = true;445- pCifsInode->clientCanCacheRead = true;446- cFYI(1, ("Exclusive Oplock inode %p",447- newinode));448- } else if ((oplock & 0xF) == OPLOCK_READ)449- pCifsInode->clientCanCacheRead = true;450 }451- write_unlock(&GlobalSMBSeslock);000000452 }0453 }454cifs_create_out:455 kfree(buf);
···3 *4 * vfs operations that deal with dentries5 *6+ * Copyright (C) International Business Machines Corp., 2002,20097 * Author(s): Steve French (sfrench@us.ibm.com)8 *9 * This library is free software; you can redistribute it and/or modify···129 return full_path;130}131132+static int cifs_posix_open(char *full_path, struct inode **pinode,133+ struct super_block *sb, int mode, int oflags,134+ int *poplock, __u16 *pnetfid, int xid)135+{136+ int rc;137+ __u32 oplock;138+ FILE_UNIX_BASIC_INFO *presp_data;139+ __u32 posix_flags = 0;140+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);141+142+ cFYI(1, ("posix open %s", full_path));143+144+ presp_data = kzalloc(sizeof(FILE_UNIX_BASIC_INFO), GFP_KERNEL);145+ if (presp_data == NULL)146+ return -ENOMEM;147+148+/* So far cifs posix extensions can only map the following flags.149+ There are other valid fmode oflags such as FMODE_LSEEK, FMODE_PREAD, but150+ so far we do not seem to need them, and we can treat them as local only */151+ if ((oflags & (FMODE_READ | FMODE_WRITE)) ==152+ (FMODE_READ | FMODE_WRITE))153+ posix_flags = SMB_O_RDWR;154+ else if (oflags & FMODE_READ)155+ posix_flags = SMB_O_RDONLY;156+ else if (oflags & FMODE_WRITE)157+ posix_flags = SMB_O_WRONLY;158+ if (oflags & O_CREAT)159+ posix_flags |= SMB_O_CREAT;160+ if (oflags & O_EXCL)161+ posix_flags |= SMB_O_EXCL;162+ if (oflags & O_TRUNC)163+ posix_flags |= SMB_O_TRUNC;164+ if (oflags & O_APPEND)165+ posix_flags |= SMB_O_APPEND;166+ if (oflags & O_SYNC)167+ posix_flags |= SMB_O_SYNC;168+ if (oflags & O_DIRECTORY)169+ posix_flags |= SMB_O_DIRECTORY;170+ if (oflags & O_NOFOLLOW)171+ posix_flags |= SMB_O_NOFOLLOW;172+ if (oflags & O_DIRECT)173+ posix_flags |= SMB_O_DIRECT;174+175+176+ rc = CIFSPOSIXCreate(xid, cifs_sb->tcon, posix_flags, mode,177+ pnetfid, presp_data, &oplock, full_path,178+ cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &179+ CIFS_MOUNT_MAP_SPECIAL_CHR);180+ if (rc)181+ goto posix_open_ret;182+183+ if (presp_data->Type == cpu_to_le32(-1))184+ goto posix_open_ret; /* open ok, caller does qpathinfo */185+186+ /* get new inode and set it up */187+ if (!pinode)188+ goto posix_open_ret; /* caller does not need info */189+190+ *pinode = cifs_new_inode(sb, &presp_data->UniqueId);191+192+ /* We do not need to close the file if new_inode fails since193+ the caller will retry qpathinfo as long as inode is null */194+ if (*pinode == NULL)195+ goto posix_open_ret;196+197+ posix_fill_in_inode(*pinode, presp_data, 1);198+199+posix_open_ret:200+ kfree(presp_data);201+ return rc;202+}203+204static void setup_cifs_dentry(struct cifsTconInfo *tcon,205 struct dentry *direntry,206 struct inode *newinode)···150 int xid;151 int create_options = CREATE_NOT_DIR;152 int oplock = 0;153+ int oflags;154+ /*155+ * BB below access is probably too much for mknod to request156+ * but we have to do query and setpathinfo so requesting157+ * less could fail (unless we want to request getatr and setatr158+ * permissions (only). 
At least for POSIX we do not have to159+ * request so much.160+ */161 int desiredAccess = GENERIC_READ | GENERIC_WRITE;162 __u16 fileHandle;163 struct cifs_sb_info *cifs_sb;···174 }175176 mode &= ~current->fs->umask;177+ if (oplockEnabled)178+ oplock = REQ_OPLOCK;179+180+ if (nd && (nd->flags & LOOKUP_OPEN))181+ oflags = nd->intent.open.flags;182+ else183+ oflags = FMODE_READ;184+185+ if (tcon->unix_ext && (tcon->ses->capabilities & CAP_UNIX) &&186+ (CIFS_UNIX_POSIX_PATH_OPS_CAP &187+ le64_to_cpu(tcon->fsUnixInfo.Capability))) {188+ rc = cifs_posix_open(full_path, &newinode, inode->i_sb,189+ mode, oflags, &oplock, &fileHandle, xid);190+ /* EIO could indicate that (posix open) operation is not191+ supported, despite what server claimed in capability192+ negotation. EREMOTE indicates DFS junction, which is not193+ handled in posix open */194+195+ if ((rc == 0) && (newinode == NULL))196+ goto cifs_create_get_file_info; /* query inode info */197+ else if (rc == 0) /* success, no need to query */198+ goto cifs_create_set_dentry;199+ else if ((rc != -EIO) && (rc != -EREMOTE) &&200+ (rc != -EOPNOTSUPP)) /* path not found or net err */201+ goto cifs_create_out;202+ /* else fallthrough to retry, using older open call, this is203+ case where server does not support this SMB level, and204+ falsely claims capability (also get here for DFS case205+ which should be rare for path not covered on files) */206+ }207208 if (nd && (nd->flags & LOOKUP_OPEN)) {209+ /* if the file is going to stay open, then we210+ need to set the desired access properly */211 desiredAccess = 0;212 if (oflags & FMODE_READ)213+ desiredAccess |= GENERIC_READ; /* is this too little? */214 if (oflags & FMODE_WRITE) {215 desiredAccess |= GENERIC_WRITE;216 if (!(oflags & FMODE_READ))···199200 /* BB add processing to set equivalent of mode - e.g. via CreateX with201 ACLs */00202203 buf = kmalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);204 if (buf == NULL) {···233 }234 if (rc) {235 cFYI(1, ("cifs_create returned 0x%x", rc));236+ goto cifs_create_out;237+ }238+239+ /* If Open reported that we actually created a file240+ then we now have to set the mode if possible */241+ if ((tcon->unix_ext) && (oplock & CIFS_CREATE_ACTION)) {242+ struct cifs_unix_set_info_args args = {243 .mode = mode,244 .ctime = NO_CHANGE_64,245 .atime = NO_CHANGE_64,246 .mtime = NO_CHANGE_64,247 .device = 0,248+ };249250+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {251+ args.uid = (__u64) current_fsuid();252+ if (inode->i_mode & S_ISGID)253+ args.gid = (__u64) inode->i_gid;254+ else255+ args.gid = (__u64) current_fsgid();00000000256 } else {257+ args.uid = NO_CHANGE_64;258+ args.gid = NO_CHANGE_64;000259 }260+ CIFSSMBUnixSetInfo(xid, tcon, full_path, &args,261+ cifs_sb->local_nls,262+ cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR);263+ } else {264+ /* BB implement mode setting via Windows security265+ descriptors e.g. 
*/266+ /* CIFSSMBWinSetPerms(xid,tcon,path,mode,-1,-1,nls);*/267268+ /* Could set r/o dos attribute if mode & 0222 == 0 */269+ }270+271+cifs_create_get_file_info:272+ /* server might mask mode so we have to query for it */273+ if (tcon->unix_ext)274+ rc = cifs_get_inode_info_unix(&newinode, full_path,275+ inode->i_sb, xid);276+ else {277+ rc = cifs_get_inode_info(&newinode, full_path, buf,278+ inode->i_sb, xid, &fileHandle);279+ if (newinode) {280+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM)281+ newinode->i_mode = mode;282+ if ((oplock & CIFS_CREATE_ACTION) &&283+ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID)) {284+ newinode->i_uid = current_fsuid();285+ if (inode->i_mode & S_ISGID)286+ newinode->i_gid = inode->i_gid;287+ else288+ newinode->i_gid = current_fsgid();00289 }290 }291+ }292293+cifs_create_set_dentry:294+ if (rc == 0)295+ setup_cifs_dentry(tcon, direntry, newinode);296+ else297+ cFYI(1, ("Create worked, get_inode_info failed rc = %d", rc));298299+ /* nfsd case - nfs srv does not set nd */300+ if ((nd == NULL) || (!(nd->flags & LOOKUP_OPEN))) {301+ /* mknod case - do not leave file open */302+ CIFSSMBClose(xid, tcon, fileHandle);303+ } else if (newinode) {304+ struct cifsFileInfo *pCifsFile =305+ kzalloc(sizeof(struct cifsFileInfo), GFP_KERNEL);306307+ if (pCifsFile == NULL)308+ goto cifs_create_out;309+ pCifsFile->netfid = fileHandle;310+ pCifsFile->pid = current->tgid;311+ pCifsFile->pInode = newinode;312+ pCifsFile->invalidHandle = false;313+ pCifsFile->closePend = false;314+ init_MUTEX(&pCifsFile->fh_sem);315+ mutex_init(&pCifsFile->lock_mutex);316+ INIT_LIST_HEAD(&pCifsFile->llist);317+ atomic_set(&pCifsFile->wrtPending, 0);318319+ /* set the following in open now320 pCifsFile->pfile = file; */321+ write_lock(&GlobalSMBSeslock);322+ list_add(&pCifsFile->tlist, &tcon->openFileList);323+ pCifsInode = CIFS_I(newinode);324+ if (pCifsInode) {325+ /* if readable file instance put first in list*/326+ if (write_only) {327+ list_add_tail(&pCifsFile->flist,328+ &pCifsInode->openFileList);329+ } else {330+ list_add(&pCifsFile->flist,331+ &pCifsInode->openFileList);00000000332 }333+ if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) {334+ pCifsInode->clientCanCacheAll = true;335+ pCifsInode->clientCanCacheRead = true;336+ cFYI(1, ("Exclusive Oplock inode %p",337+ newinode));338+ } else if ((oplock & 0xF) == OPLOCK_READ)339+ pCifsInode->clientCanCacheRead = true;340 }341+ write_unlock(&GlobalSMBSeslock);342 }343cifs_create_out:344 kfree(buf);
+64-40
fs/cifs/inode.c
···199 pfnd_dat->Gid = cpu_to_le64(pinode->i_gid);200}2010000000000000000000000000000000000000000000202int cifs_get_inode_info_unix(struct inode **pinode,203 const unsigned char *full_path, struct super_block *sb, int xid)204{···276277 /* get new inode */278 if (*pinode == NULL) {279- *pinode = new_inode(sb);280 if (*pinode == NULL) {281 rc = -ENOMEM;282 goto cgiiu_exit;283 }284- /* Is an i_ino of zero legal? */285- /* note ino incremented to unique num in new_inode */286- /* Are there sanity checks we can use to ensure that287- the server is really filling in that field? */288- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)289- (*pinode)->i_ino = (unsigned long)find_data.UniqueId;290-291- if (sb->s_flags & MS_NOATIME)292- (*pinode)->i_flags |= S_NOATIME | S_NOCMTIME;293-294- insert_inode_hash(*pinode);295 }296297 inode = *pinode;···497498 /* get new inode */499 if (*pinode == NULL) {500- *pinode = new_inode(sb);501- if (*pinode == NULL) {502- rc = -ENOMEM;503- goto cgii_exit;504- }505 /* Is an i_ino of zero legal? Can we use that to check506 if the server supports returning inode numbers? Are507 there other sanity checks we can use to ensure that···516517 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {518 int rc1 = 0;519- __u64 inode_num;520521 rc1 = CIFSGetSrvInodeNumber(xid, pTcon,522- full_path, &inode_num,523 cifs_sb->local_nls,524 cifs_sb->mnt_cifs_flags &525 CIFS_MOUNT_MAP_SPECIAL_CHR);526 if (rc1) {527 cFYI(1, ("GetSrvInodeNum rc %d", rc1));0528 /* BB EOPNOSUPP disable SERVER_INUM? */529- } else /* do we need cast or hash to ino? */530- (*pinode)->i_ino = inode_num;531- } /* else ino incremented to unique num in new_inode*/532- if (sb->s_flags & MS_NOATIME)533- (*pinode)->i_flags |= S_NOATIME | S_NOCMTIME;534- insert_inode_hash(*pinode);0000535 }536 inode = *pinode;537 cifsInfo = CIFS_I(inode);···655 .lookup = cifs_lookup,656};657658-static char *build_path_to_root(struct cifs_sb_info *cifs_sb)659{660 int pplen = cifs_sb->prepathlen;661 int dfsplen;···712 return inode;713714 cifs_sb = CIFS_SB(inode->i_sb);715- full_path = build_path_to_root(cifs_sb);716 if (full_path == NULL)717 return ERR_PTR(-ENOMEM);718···1051 return rc;1052}10531054-static void posix_fill_in_inode(struct inode *tmp_inode,1055 FILE_UNIX_BASIC_INFO *pData, int isNewInode)1056{1057 struct cifsInodeInfo *cifsInfo = CIFS_I(tmp_inode);···1148 else1149 direntry->d_op = &cifs_dentry_ops;11501151- newinode = new_inode(inode->i_sb);01152 if (newinode == NULL) {1153 kfree(pInfo);1154 goto mkdir_get_info;1155 }11561157- /* Is an i_ino of zero legal? */1158- /* Are there sanity checks we can use to ensure that1159- the server is really filling in that field? */1160- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {1161- newinode->i_ino =1162- (unsigned long)pInfo->UniqueId;1163- } /* note ino incremented to unique num in new_inode */1164- if (inode->i_sb->s_flags & MS_NOATIME)1165- newinode->i_flags |= S_NOATIME | S_NOCMTIME;1166 newinode->i_nlink = 2;1167-1168- insert_inode_hash(newinode);1169 d_instantiate(direntry, newinode);11701171 /* we already checked in POSIXCreate whether
···199 pfnd_dat->Gid = cpu_to_le64(pinode->i_gid);200}201202+/**203+ * cifs_new inode - create new inode, initialize, and hash it204+ * @sb - pointer to superblock205+ * @inum - if valid pointer and serverino is enabled, replace i_ino with val206+ *207+ * Create a new inode, initialize it for CIFS and hash it. Returns the new208+ * inode or NULL if one couldn't be allocated.209+ *210+ * If the share isn't mounted with "serverino" or inum is a NULL pointer then211+ * we'll just use the inode number assigned by new_inode(). Note that this can212+ * mean i_ino collisions since the i_ino assigned by new_inode is not213+ * guaranteed to be unique.214+ */215+struct inode *216+cifs_new_inode(struct super_block *sb, __u64 *inum)217+{218+ struct inode *inode;219+220+ inode = new_inode(sb);221+ if (inode == NULL)222+ return NULL;223+224+ /*225+ * BB: Is i_ino == 0 legal? Here, we assume that it is. If it isn't we226+ * stop passing inum as ptr. Are there sanity checks we can use to227+ * ensure that the server is really filling in that field? Also,228+ * if serverino is disabled, perhaps we should be using iunique()?229+ */230+ if (inum && (CIFS_SB(sb)->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM))231+ inode->i_ino = (unsigned long) *inum;232+233+ /*234+ * must set this here instead of cifs_alloc_inode since VFS will235+ * clobber i_flags236+ */237+ if (sb->s_flags & MS_NOATIME)238+ inode->i_flags |= S_NOATIME | S_NOCMTIME;239+240+ insert_inode_hash(inode);241+242+ return inode;243+}244+245int cifs_get_inode_info_unix(struct inode **pinode,246 const unsigned char *full_path, struct super_block *sb, int xid)247{···233234 /* get new inode */235 if (*pinode == NULL) {236+ *pinode = cifs_new_inode(sb, &find_data.UniqueId);237 if (*pinode == NULL) {238 rc = -ENOMEM;239 goto cgiiu_exit;240 }00000000000241 }242243 inode = *pinode;···465466 /* get new inode */467 if (*pinode == NULL) {468+ __u64 inode_num;469+ __u64 *pinum = &inode_num;470+00471 /* Is an i_ino of zero legal? Can we use that to check472 if the server supports returning inode numbers? Are473 there other sanity checks we can use to ensure that···486487 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {488 int rc1 = 0;0489490 rc1 = CIFSGetSrvInodeNumber(xid, pTcon,491+ full_path, pinum,492 cifs_sb->local_nls,493 cifs_sb->mnt_cifs_flags &494 CIFS_MOUNT_MAP_SPECIAL_CHR);495 if (rc1) {496 cFYI(1, ("GetSrvInodeNum rc %d", rc1));497+ pinum = NULL;498 /* BB EOPNOSUPP disable SERVER_INUM? */499+ }500+ } else {501+ pinum = NULL;502+ }503+504+ *pinode = cifs_new_inode(sb, pinum);505+ if (*pinode == NULL) {506+ rc = -ENOMEM;507+ goto cgii_exit;508+ }509 }510 inode = *pinode;511 cifsInfo = CIFS_I(inode);···621 .lookup = cifs_lookup,622};623624+char *cifs_build_path_to_root(struct cifs_sb_info *cifs_sb)625{626 int pplen = cifs_sb->prepathlen;627 int dfsplen;···678 return inode;679680 cifs_sb = CIFS_SB(inode->i_sb);681+ full_path = cifs_build_path_to_root(cifs_sb);682 if (full_path == NULL)683 return ERR_PTR(-ENOMEM);684···1017 return rc;1018}10191020+void posix_fill_in_inode(struct inode *tmp_inode,1021 FILE_UNIX_BASIC_INFO *pData, int isNewInode)1022{1023 struct cifsInodeInfo *cifsInfo = CIFS_I(tmp_inode);···1114 else1115 direntry->d_op = &cifs_dentry_ops;11161117+ newinode = cifs_new_inode(inode->i_sb,1118+ &pInfo->UniqueId);1119 if (newinode == NULL) {1120 kfree(pInfo);1121 goto mkdir_get_info;1122 }11230000000001124 newinode->i_nlink = 2;001125 d_instantiate(direntry, newinode);11261127 /* we already checked in POSIXCreate whether
+26-32
fs/cifs/readdir.c
···56}57#endif /* DEBUG2 */5859-/* Returns one if new inode created (which therefore needs to be hashed) */60/* Might check in the future if inode number changed so we can rehash inode */61-static int construct_dentry(struct qstr *qstring, struct file *file,62- struct inode **ptmp_inode, struct dentry **pnew_dentry)0063{64- struct dentry *tmp_dentry;65- struct cifs_sb_info *cifs_sb;66- struct cifsTconInfo *pTcon;67 int rc = 0;6869 cFYI(1, ("For %s", qstring->name));70- cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);71- pTcon = cifs_sb->tcon;7273 qstring->hash = full_name_hash(qstring->name, qstring->len);74 tmp_dentry = d_lookup(file->f_path.dentry, qstring);75 if (tmp_dentry) {00076 cFYI(0, ("existing dentry with inode 0x%p",77 tmp_dentry->d_inode));78 *ptmp_inode = tmp_dentry->d_inode;79-/* BB overwrite old name? i.e. tmp_dentry->d_name and tmp_dentry->d_name.len??*/80 if (*ptmp_inode == NULL) {81- *ptmp_inode = new_inode(file->f_path.dentry->d_sb);82 if (*ptmp_inode == NULL)83 return rc;84 rc = 1;85 }86- if (file->f_path.dentry->d_sb->s_flags & MS_NOATIME)87- (*ptmp_inode)->i_flags |= S_NOATIME | S_NOCMTIME;88 } else {89 tmp_dentry = d_alloc(file->f_path.dentry, qstring);90 if (tmp_dentry == NULL) {···92 return rc;93 }9495- *ptmp_inode = new_inode(file->f_path.dentry->d_sb);96- if (pTcon->nocase)97 tmp_dentry->d_op = &cifs_ci_dentry_ops;98 else99 tmp_dentry->d_op = &cifs_dentry_ops;00100 if (*ptmp_inode == NULL)101 return rc;102- if (file->f_path.dentry->d_sb->s_flags & MS_NOATIME)103- (*ptmp_inode)->i_flags |= S_NOATIME | S_NOCMTIME;104 rc = 2;105 }106···820/* inode num, inode type and filename returned */821static int cifs_get_name_from_search_buf(struct qstr *pqst,822 char *current_entry, __u16 level, unsigned int unicode,823- struct cifs_sb_info *cifs_sb, int max_len, ino_t *pinum)824{825 int rc = 0;826 unsigned int len = 0;···840 len = strnlen(filename, PATH_MAX);841 }842843- /* BB fixme - hash low and high 32 bits if not 64 bit arch BB */844- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)845- *pinum = pFindData->UniqueId;846 } else if (level == SMB_FIND_FILE_DIRECTORY_INFO) {847 FILE_DIRECTORY_INFO *pFindData =848 (FILE_DIRECTORY_INFO *)current_entry;···903 struct qstr qstring;904 struct cifsFileInfo *pCifsF;905 unsigned int obj_type;906- ino_t inum;907 struct cifs_sb_info *cifs_sb;908 struct inode *tmp_inode;909 struct dentry *tmp_dentry;···936 if (rc)937 return rc;938939- rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry);00000000940 if ((tmp_inode == NULL) || (tmp_dentry == NULL))941 return -ENOMEM;942-943- if (rc) {944- /* inode created, we need to hash it with right inode number */945- if (inum != 0) {946- /* BB fixme - hash the 2 32 quantities bits together if947- * necessary BB */948- tmp_inode->i_ino = inum;949- }950- insert_inode_hash(tmp_inode);951- }952953 /* we pass in rc below, indicating whether it is a new inode,954 so we can figure out whether to invalidate the inode cached
···56}57#endif /* DEBUG2 */5859+/* Returns 1 if new inode created, 2 if both dentry and inode were */60/* Might check in the future if inode number changed so we can rehash inode */61+static int62+construct_dentry(struct qstr *qstring, struct file *file,63+ struct inode **ptmp_inode, struct dentry **pnew_dentry,64+ __u64 *inum)65{66+ struct dentry *tmp_dentry = NULL;67+ struct super_block *sb = file->f_path.dentry->d_sb;068 int rc = 0;6970 cFYI(1, ("For %s", qstring->name));007172 qstring->hash = full_name_hash(qstring->name, qstring->len);73 tmp_dentry = d_lookup(file->f_path.dentry, qstring);74 if (tmp_dentry) {75+ /* BB: overwrite old name? i.e. tmp_dentry->d_name and76+ * tmp_dentry->d_name.len??77+ */78 cFYI(0, ("existing dentry with inode 0x%p",79 tmp_dentry->d_inode));80 *ptmp_inode = tmp_dentry->d_inode;081 if (*ptmp_inode == NULL) {82+ *ptmp_inode = cifs_new_inode(sb, inum);83 if (*ptmp_inode == NULL)84 return rc;85 rc = 1;86 }0087 } else {88 tmp_dentry = d_alloc(file->f_path.dentry, qstring);89 if (tmp_dentry == NULL) {···93 return rc;94 }9596+ if (CIFS_SB(sb)->tcon->nocase)097 tmp_dentry->d_op = &cifs_ci_dentry_ops;98 else99 tmp_dentry->d_op = &cifs_dentry_ops;100+101+ *ptmp_inode = cifs_new_inode(sb, inum);102 if (*ptmp_inode == NULL)103 return rc;00104 rc = 2;105 }106···822/* inode num, inode type and filename returned */823static int cifs_get_name_from_search_buf(struct qstr *pqst,824 char *current_entry, __u16 level, unsigned int unicode,825+ struct cifs_sb_info *cifs_sb, int max_len, __u64 *pinum)826{827 int rc = 0;828 unsigned int len = 0;···842 len = strnlen(filename, PATH_MAX);843 }844845+ *pinum = pFindData->UniqueId;00846 } else if (level == SMB_FIND_FILE_DIRECTORY_INFO) {847 FILE_DIRECTORY_INFO *pFindData =848 (FILE_DIRECTORY_INFO *)current_entry;···907 struct qstr qstring;908 struct cifsFileInfo *pCifsF;909 unsigned int obj_type;910+ __u64 inum;911 struct cifs_sb_info *cifs_sb;912 struct inode *tmp_inode;913 struct dentry *tmp_dentry;···940 if (rc)941 return rc;942943+ /* only these two infolevels return valid inode numbers */944+ if (pCifsF->srch_inf.info_level == SMB_FIND_FILE_UNIX ||945+ pCifsF->srch_inf.info_level == SMB_FIND_FILE_ID_FULL_DIR_INFO)946+ rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry,947+ &inum);948+ else949+ rc = construct_dentry(&qstring, file, &tmp_inode, &tmp_dentry,950+ NULL);951+952 if ((tmp_inode == NULL) || (tmp_dentry == NULL))953 return -ENOMEM;0000000000954955 /* we pass in rc below, indicating whether it is a new inode,956 so we can figure out whether to invalidate the inode cached
+87-4
fs/cifs/sess.c
···34extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8,35 unsigned char *p24);360000000000000000000000000000000000000000000000000000000000000000000000000000000037static __u32 cifs_ssetup_hdr(struct cifsSesInfo *ses, SESSION_SETUP_ANDX *pSMB)38{39 __u32 capabilities = 0;4041 /* init fields common to all four types of SessSetup */42- /* note that header is initialized to zero in header_assemble */00043 pSMB->req.AndXCommand = 0xFF;44 pSMB->req.MaxBufferSize = cpu_to_le16(ses->server->maxBuf);45 pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq);04647 /* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */48···155 if (ses->capabilities & CAP_UNIX)156 capabilities |= CAP_UNIX;157158- /* BB check whether to init vcnum BB */159 return capabilities;160}161···311312 kfree(ses->serverOS);313 /* UTF-8 string will not grow more than four times as big as UCS-16 */314- ses->serverOS = kzalloc(4 * len, GFP_KERNEL);315 if (ses->serverOS != NULL)316 cifs_strfromUCS_le(ses->serverOS, (__le16 *)data, len, nls_cp);317 data += 2 * (len + 1);···324 return rc;325326 kfree(ses->serverNOS);327- ses->serverNOS = kzalloc(4 * len, GFP_KERNEL); /* BB this is wrong length FIXME BB */328 if (ses->serverNOS != NULL) {329 cifs_strfromUCS_le(ses->serverNOS, (__le16 *)data, len,330 nls_cp);
···34extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8,35 unsigned char *p24);3637+/* Checks if this is the first smb session to be reconnected after38+ the socket has been reestablished (so we know whether to use vc 0).39+ Called while holding the cifs_tcp_ses_lock, so do not block */40+static bool is_first_ses_reconnect(struct cifsSesInfo *ses)41+{42+ struct list_head *tmp;43+ struct cifsSesInfo *tmp_ses;44+45+ list_for_each(tmp, &ses->server->smb_ses_list) {46+ tmp_ses = list_entry(tmp, struct cifsSesInfo,47+ smb_ses_list);48+ if (tmp_ses->need_reconnect == false)49+ return false;50+ }51+ /* could not find a session that was already connected,52+ this must be the first one we are reconnecting */53+ return true;54+}55+56+/*57+ * vc number 0 is treated specially by some servers, and should be the58+ * first one we request. After that we can use vcnumbers up to maxvcs,59+ * one for each smb session (some Windows versions set maxvcs incorrectly60+ * so maxvc=1 can be ignored). If we have too many vcs, we can reuse61+ * any vc but zero (some servers reset the connection on vcnum zero)62+ *63+ */64+static __le16 get_next_vcnum(struct cifsSesInfo *ses)65+{66+ __u16 vcnum = 0;67+ struct list_head *tmp;68+ struct cifsSesInfo *tmp_ses;69+ __u16 max_vcs = ses->server->max_vcs;70+ __u16 i;71+ int free_vc_found = 0;72+73+ /* Quoting the MS-SMB specification: "Windows-based SMB servers set this74+ field to one but do not enforce this limit, which allows an SMB client75+ to establish more virtual circuits than allowed by this value ... but76+ other server implementations can enforce this limit." */77+ if (max_vcs < 2)78+ max_vcs = 0xFFFF;79+80+ write_lock(&cifs_tcp_ses_lock);81+ if ((ses->need_reconnect) && is_first_ses_reconnect(ses))82+ goto get_vc_num_exit; /* vcnum will be zero */83+ for (i = ses->server->srv_count - 1; i < max_vcs; i++) {84+ if (i == 0) /* this is the only connection, use vc 0 */85+ break;86+87+ free_vc_found = 1;88+89+ list_for_each(tmp, &ses->server->smb_ses_list) {90+ tmp_ses = list_entry(tmp, struct cifsSesInfo,91+ smb_ses_list);92+ if (tmp_ses->vcnum == i) {93+ free_vc_found = 0;94+ break; /* found duplicate, try next vcnum */95+ }96+ }97+ if (free_vc_found)98+ break; /* we found a vcnumber that will work - use it */99+ }100+101+ if (i == 0)102+ vcnum = 0; /* for most common case, ie if one smb session, use103+ vc zero. Also for case when no free vcnum, zero104+ is safest to send (some clients only send zero) */105+ else if (free_vc_found == 0)106+ vcnum = 1; /* we can not reuse vc=0 safely, since some servers107+ reset all uids on that, but 1 is ok. 
*/108+ else109+ vcnum = i;110+ ses->vcnum = vcnum;111+get_vc_num_exit:112+ write_unlock(&cifs_tcp_ses_lock);113+114+ return le16_to_cpu(vcnum);115+}116+117static __u32 cifs_ssetup_hdr(struct cifsSesInfo *ses, SESSION_SETUP_ANDX *pSMB)118{119 __u32 capabilities = 0;120121 /* init fields common to all four types of SessSetup */122+ /* Note that offsets for first seven fields in req struct are same */123+ /* in CIFS Specs so does not matter which of 3 forms of struct */124+ /* that we use in next few lines */125+ /* Note that header is initialized to zero in header_assemble */126 pSMB->req.AndXCommand = 0xFF;127 pSMB->req.MaxBufferSize = cpu_to_le16(ses->server->maxBuf);128 pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq);129+ pSMB->req.VcNumber = get_next_vcnum(ses);130131 /* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */132···71 if (ses->capabilities & CAP_UNIX)72 capabilities |= CAP_UNIX;73074 return capabilities;75}76···228229 kfree(ses->serverOS);230 /* UTF-8 string will not grow more than four times as big as UCS-16 */231+ ses->serverOS = kzalloc((4 * len) + 2 /* trailing null */, GFP_KERNEL);232 if (ses->serverOS != NULL)233 cifs_strfromUCS_le(ses->serverOS, (__le16 *)data, len, nls_cp);234 data += 2 * (len + 1);···241 return rc;242243 kfree(ses->serverNOS);244+ ses->serverNOS = kzalloc((4 * len) + 2 /* trailing null */, GFP_KERNEL);245 if (ses->serverNOS != NULL) {246 cifs_strfromUCS_le(ses->serverNOS, (__le16 *)data, len,247 nls_cp);
+2
fs/compat_ioctl.c
···1938/* Big K */1939COMPATIBLE_IOCTL(PIO_FONT)1940COMPATIBLE_IOCTL(GIO_FONT)001941ULONG_IOCTL(KDSIGACCEPT)1942COMPATIBLE_IOCTL(KDGETKEYCODE)1943COMPATIBLE_IOCTL(KDSETKEYCODE)
···1938/* Big K */1939COMPATIBLE_IOCTL(PIO_FONT)1940COMPATIBLE_IOCTL(GIO_FONT)1941+COMPATIBLE_IOCTL(PIO_CMAP)1942+COMPATIBLE_IOCTL(GIO_CMAP)1943ULONG_IOCTL(KDSIGACCEPT)1944COMPATIBLE_IOCTL(KDGETKEYCODE)1945COMPATIBLE_IOCTL(KDSETKEYCODE)
+1-1
fs/ext4/ext4.h
···868{869 unsigned len = le16_to_cpu(dlen);870871- if (len == EXT4_MAX_REC_LEN)872 return 1 << 16;873 return len;874}
···868{869 unsigned len = le16_to_cpu(dlen);870871+ if (len == EXT4_MAX_REC_LEN || len == 0)872 return 1 << 16;873 return len;874}
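The ext4.h hunk above changes how the on-disk directory rec_len field is decoded: rec_len is only 16 bits, so an entry covering a full 64KB block cannot store its real length of 65536, and both the sentinel EXT4_MAX_REC_LEN and a raw 0 now decode to 1 << 16. A minimal user-space sketch of that round trip, assuming only the EXT4_MAX_REC_LEN value; the encode helper here is illustrative, not the kernel's:

#include <stdio.h>
#include <stdint.h>

#define EXT4_MAX_REC_LEN ((1 << 16) - 1)	/* 0xFFFF, as in fs/ext4/ext4.h */

/* Decode: mirrors the patched ext4_rec_len_from_disk() logic
 * (host-endian sketch, le16_to_cpu() omitted). */
static unsigned rec_len_from_disk(uint16_t dlen)
{
	unsigned len = dlen;

	if (len == EXT4_MAX_REC_LEN || len == 0)
		return 1 << 16;		/* entry fills a whole 64KB block */
	return len;
}

/* Encode: hypothetical counterpart for this demo only. */
static uint16_t rec_len_to_disk(unsigned len)
{
	return (len == (1 << 16)) ? EXT4_MAX_REC_LEN : (uint16_t)len;
}

int main(void)
{
	unsigned len = 1 << 16;		/* 65536-byte directory entry */

	printf("%u -> 0x%04x -> %u\n", len,
	       (unsigned)rec_len_to_disk(len),
	       rec_len_from_disk(rec_len_to_disk(len)));
	return 0;
}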
+23-4
fs/ext4/inode.c
···47static inline int ext4_begin_ordered_truncate(struct inode *inode,48 loff_t new_size)49{50- return jbd2_journal_begin_ordered_truncate(&EXT4_I(inode)->jinode,51- new_size);0052}5354static void ext4_invalidatepage(struct page *page, unsigned long offset);···2439 int no_nrwrite_index_update;2440 int pages_written = 0;2441 long pages_skipped;02442 int needed_blocks, ret = 0, nr_to_writebump = 0;2443 struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);2444···2491 if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)2492 range_whole = 1;24932494- if (wbc->range_cyclic)02495 index = mapping->writeback_index;2496- else000002497 index = wbc->range_start >> PAGE_CACHE_SHIFT;24982499 mpd.wbc = wbc;···2513 wbc->no_nrwrite_index_update = 1;2514 pages_skipped = wbc->pages_skipped;251502516 while (!ret && wbc->nr_to_write > 0) {25172518 /*···2556 pages_written += mpd.pages_written;2557 wbc->pages_skipped = pages_skipped;2558 ret = 0;02559 } else if (wbc->nr_to_write)2560 /*2561 * There is no more writeout needed···2565 */2566 break;2567 }00000002568 if (pages_skipped != wbc->pages_skipped)2569 printk(KERN_EMERG "This should not happen leaving %s "2570 "with nr_to_write = %ld ret = %d\n",···25792580 /* Update index */2581 index += pages_written;02582 if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))2583 /*2584 * set the writeback_index so that range_cyclic
···47static inline int ext4_begin_ordered_truncate(struct inode *inode,48 loff_t new_size)49{50+ return jbd2_journal_begin_ordered_truncate(51+ EXT4_SB(inode->i_sb)->s_journal,52+ &EXT4_I(inode)->jinode,53+ new_size);54}5556static void ext4_invalidatepage(struct page *page, unsigned long offset);···2437 int no_nrwrite_index_update;2438 int pages_written = 0;2439 long pages_skipped;2440+ int range_cyclic, cycled = 1, io_done = 0;2441 int needed_blocks, ret = 0, nr_to_writebump = 0;2442 struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);2443···2488 if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)2489 range_whole = 1;24902491+ range_cyclic = wbc->range_cyclic;2492+ if (wbc->range_cyclic) {2493 index = mapping->writeback_index;2494+ if (index)2495+ cycled = 0;2496+ wbc->range_start = index << PAGE_CACHE_SHIFT;2497+ wbc->range_end = LLONG_MAX;2498+ wbc->range_cyclic = 0;2499+ } else2500 index = wbc->range_start >> PAGE_CACHE_SHIFT;25012502 mpd.wbc = wbc;···2504 wbc->no_nrwrite_index_update = 1;2505 pages_skipped = wbc->pages_skipped;25062507+retry:2508 while (!ret && wbc->nr_to_write > 0) {25092510 /*···2546 pages_written += mpd.pages_written;2547 wbc->pages_skipped = pages_skipped;2548 ret = 0;2549+ io_done = 1;2550 } else if (wbc->nr_to_write)2551 /*2552 * There is no more writeout needed···2554 */2555 break;2556 }2557+ if (!io_done && !cycled) {2558+ cycled = 1;2559+ index = 0;2560+ wbc->range_start = index << PAGE_CACHE_SHIFT;2561+ wbc->range_end = mapping->writeback_index - 1;2562+ goto retry;2563+ }2564 if (pages_skipped != wbc->pages_skipped)2565 printk(KERN_EMERG "This should not happen leaving %s "2566 "with nr_to_write = %ld ret = %d\n",···25612562 /* Update index */2563 index += pages_written;2564+ wbc->range_cyclic = range_cyclic;2565 if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))2566 /*2567 * set the writeback_index so that range_cyclic
+19-13
fs/ext4/mballoc.c
···3693 pa->pa_free = pa->pa_len;3694 atomic_set(&pa->pa_count, 1);3695 spin_lock_init(&pa->pa_lock);003696 pa->pa_deleted = 0;3697 pa->pa_linear = 0;3698···3757 atomic_set(&pa->pa_count, 1);3758 spin_lock_init(&pa->pa_lock);3759 INIT_LIST_HEAD(&pa->pa_inode_list);03760 pa->pa_deleted = 0;3761 pa->pa_linear = 1;3762···4479 pa->pa_free -= ac->ac_b_ex.fe_len;4480 pa->pa_len -= ac->ac_b_ex.fe_len;4481 spin_unlock(&pa->pa_lock);4482- /*4483- * We want to add the pa to the right bucket.4484- * Remove it from the list and while adding4485- * make sure the list to which we are adding4486- * doesn't grow big.4487- */4488- if (likely(pa->pa_free)) {4489- spin_lock(pa->pa_obj_lock);4490- list_del_rcu(&pa->pa_inode_list);4491- spin_unlock(pa->pa_obj_lock);4492- ext4_mb_add_n_trim(ac);4493- }4494 }4495- ext4_mb_put_pa(ac, ac->ac_sb, pa);4496 }4497 if (ac->alloc_semp)4498 up_read(ac->alloc_semp);00000000000000004499 if (ac->ac_bitmap_page)4500 page_cache_release(ac->ac_bitmap_page);4501 if (ac->ac_buddy_page)
···3693 pa->pa_free = pa->pa_len;3694 atomic_set(&pa->pa_count, 1);3695 spin_lock_init(&pa->pa_lock);3696+ INIT_LIST_HEAD(&pa->pa_inode_list);3697+ INIT_LIST_HEAD(&pa->pa_group_list);3698 pa->pa_deleted = 0;3699 pa->pa_linear = 0;3700···3755 atomic_set(&pa->pa_count, 1);3756 spin_lock_init(&pa->pa_lock);3757 INIT_LIST_HEAD(&pa->pa_inode_list);3758+ INIT_LIST_HEAD(&pa->pa_group_list);3759 pa->pa_deleted = 0;3760 pa->pa_linear = 1;3761···4476 pa->pa_free -= ac->ac_b_ex.fe_len;4477 pa->pa_len -= ac->ac_b_ex.fe_len;4478 spin_unlock(&pa->pa_lock);0000000000004479 }04480 }4481 if (ac->alloc_semp)4482 up_read(ac->alloc_semp);4483+ if (pa) {4484+ /*4485+ * We want to add the pa to the right bucket.4486+ * Remove it from the list and while adding4487+ * make sure the list to which we are adding4488+ * doesn't grow big. We need to release4489+ * alloc_semp before calling ext4_mb_add_n_trim()4490+ */4491+ if (pa->pa_linear && likely(pa->pa_free)) {4492+ spin_lock(pa->pa_obj_lock);4493+ list_del_rcu(&pa->pa_inode_list);4494+ spin_unlock(pa->pa_obj_lock);4495+ ext4_mb_add_n_trim(ac);4496+ }4497+ ext4_mb_put_pa(ac, ac->ac_sb, pa);4498+ }4499 if (ac->ac_bitmap_page)4500 page_cache_release(ac->ac_bitmap_page);4501 if (ac->ac_buddy_page)
fs/ext4/super.c
···3046static int ext4_sync_fs(struct super_block *sb, int wait)3047{3048 int ret = 0;030493050 trace_mark(ext4_sync_fs, "dev %s wait %d", sb->s_id, wait);3051 sb->s_dirt = 0;3052 if (EXT4_SB(sb)->s_journal) {3053- if (wait)3054- ret = ext4_force_commit(sb);3055- else3056- jbd2_journal_start_commit(EXT4_SB(sb)->s_journal, NULL);003057 } else {3058 ext4_commit_super(sb, EXT4_SB(sb)->s_es, wait);3059 }
···3046static int ext4_sync_fs(struct super_block *sb, int wait)3047{3048 int ret = 0;3049+ tid_t target;30503051 trace_mark(ext4_sync_fs, "dev %s wait %d", sb->s_id, wait);3052 sb->s_dirt = 0;3053 if (EXT4_SB(sb)->s_journal) {3054+ if (jbd2_journal_start_commit(EXT4_SB(sb)->s_journal,3055+ &target)) {3056+ if (wait)3057+ jbd2_log_wait_commit(EXT4_SB(sb)->s_journal,3058+ target);3059+ }3060 } else {3061 ext4_commit_super(sb, EXT4_SB(sb)->s_es, wait);3062 }
+11-6
fs/jbd2/journal.c
···450}451452/*453- * Called under j_state_lock. Returns true if a transaction was started.454 */455int __jbd2_log_start_commit(journal_t *journal, tid_t target)456{···518519/*520 * Start a commit of the current running transaction (if any). Returns true521- * if a transaction was started, and fills its tid in at *ptid0522 */523int jbd2_journal_start_commit(journal_t *journal, tid_t *ptid)524{···529 if (journal->j_running_transaction) {530 tid_t tid = journal->j_running_transaction->t_tid;531532- ret = __jbd2_log_start_commit(journal, tid);533- if (ret && ptid)00534 *ptid = tid;535- } else if (journal->j_committing_transaction && ptid) {0536 /*537 * If ext3_write_super() recently started a commit, then we538 * have to wait for completion of that transaction539 */540- *ptid = journal->j_committing_transaction->t_tid;0541 ret = 1;542 }543 spin_unlock(&journal->j_state_lock);
···450}451452/*453+ * Called under j_state_lock. Returns true if a transaction commit was started.454 */455int __jbd2_log_start_commit(journal_t *journal, tid_t target)456{···518519/*520 * Start a commit of the current running transaction (if any). Returns true521+ * if a transaction is going to be committed (or is currently already522+ * committing), and fills its tid in at *ptid523 */524int jbd2_journal_start_commit(journal_t *journal, tid_t *ptid)525{···528 if (journal->j_running_transaction) {529 tid_t tid = journal->j_running_transaction->t_tid;530531+ __jbd2_log_start_commit(journal, tid);532+ /* There's a running transaction and we've just made sure533+ * it's commit has been scheduled. */534+ if (ptid)535 *ptid = tid;536+ ret = 1;537+ } else if (journal->j_committing_transaction) {538 /*539 * If ext3_write_super() recently started a commit, then we540 * have to wait for completion of that transaction541 */542+ if (ptid)543+ *ptid = journal->j_committing_transaction->t_tid;544 ret = 1;545 }546 spin_unlock(&journal->j_state_lock);
+31-11
fs/jbd2/transaction.c
···2129}21302131/*2132- * This function must be called when inode is journaled in ordered mode2133- * before truncation happens. It starts writeout of truncated part in2134- * case it is in the committing transaction so that we stand to ordered2135- * mode consistency guarantees.000000000000002136 */2137-int jbd2_journal_begin_ordered_truncate(struct jbd2_inode *inode,02138 loff_t new_size)2139{2140- journal_t *journal;2141- transaction_t *commit_trans;2142 int ret = 0;21432144- if (!inode->i_transaction && !inode->i_next_transaction)02145 goto out;2146- journal = inode->i_transaction->t_journal;002147 spin_lock(&journal->j_state_lock);2148 commit_trans = journal->j_committing_transaction;2149 spin_unlock(&journal->j_state_lock);2150- if (inode->i_transaction == commit_trans) {2151- ret = filemap_fdatawrite_range(inode->i_vfs_inode->i_mapping,0002152 new_size, LLONG_MAX);2153 if (ret)2154 jbd2_journal_abort(journal, ret);
···2129}21302131/*2132+ * File truncate and transaction commit interact with each other in a2133+ * non-trivial way. If a transaction writing data block A is2134+ * committing, we cannot discard the data by truncate until we have2135+ * written them. Otherwise if we crashed after the transaction with2136+ * write has committed but before the transaction with truncate has2137+ * committed, we could see stale data in block A. This function is a2138+ * helper to solve this problem. It starts writeout of the truncated2139+ * part in case it is in the committing transaction.2140+ *2141+ * Filesystem code must call this function when inode is journaled in2142+ * ordered mode before truncation happens and after the inode has been2143+ * placed on orphan list with the new inode size. The second condition2144+ * avoids the race that someone writes new data and we start2145+ * committing the transaction after this function has been called but2146+ * before a transaction for truncate is started (and furthermore it2147+ * allows us to optimize the case where the addition to orphan list2148+ * happens in the same transaction as write --- we don't have to write2149+ * any data in such case).2150 */2151+int jbd2_journal_begin_ordered_truncate(journal_t *journal,2152+ struct jbd2_inode *jinode,2153 loff_t new_size)2154{2155+ transaction_t *inode_trans, *commit_trans;02156 int ret = 0;21572158+ /* This is a quick check to avoid locking if not necessary */2159+ if (!jinode->i_transaction)2160 goto out;2161+ /* Locks are here just to force reading of recent values, it is2162+ * enough that the transaction was not committing before we started2163+ * a transaction adding the inode to orphan list */2164 spin_lock(&journal->j_state_lock);2165 commit_trans = journal->j_committing_transaction;2166 spin_unlock(&journal->j_state_lock);2167+ spin_lock(&journal->j_list_lock);2168+ inode_trans = jinode->i_transaction;2169+ spin_unlock(&journal->j_list_lock);2170+ if (inode_trans == commit_trans) {2171+ ret = filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,2172 new_size, LLONG_MAX);2173 if (ret)2174 jbd2_journal_abort(journal, ret);
fs/notify/inotify/inotify.c
···156 int ret;157158 do {159- if (unlikely(!idr_pre_get(&ih->idr, GFP_KERNEL)))160 return -ENOSPC;161 ret = idr_get_new_above(&ih->idr, watch, ih->last_wd+1, &watch->wd);162 } while (ret == -EAGAIN);
···156 int ret;157158 do {159+ if (unlikely(!idr_pre_get(&ih->idr, GFP_NOFS)))160 return -ENOSPC;161 ret = idr_get_new_above(&ih->idr, watch, ih->last_wd+1, &watch->wd);162 } while (ret == -EAGAIN);
fs/seq_file.c
···48 */49 file->f_version = 0;5051- /* SEQ files support lseek, but not pread/pwrite */52- file->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE);0000000053 return 0;54}55EXPORT_SYMBOL(seq_open);···139 int err = 0;140141 mutex_lock(&m->lock);0000000000000000142 /*143 * seq_file->op->..m_start/m_stop/m_next may do special actions144 * or optimisations based on the file->f_version, so we want to···254Done:255 if (!copied)256 copied = err;257- else258 *ppos += copied;00259 file->f_version = m->version;260 mutex_unlock(&m->lock);261 return copied;···292 if (offset < 0)293 break;294 retval = offset;295- if (offset != file->f_pos) {296 while ((retval=traverse(m, offset)) == -EAGAIN)297 ;298 if (retval) {299 /* with extreme prejudice... */300 file->f_pos = 0;0301 m->version = 0;302 m->index = 0;303 m->count = 0;304 } else {0305 retval = file->f_pos = offset;306 }307 }
···48 */49 file->f_version = 0;5051+ /*52+ * seq_files support lseek() and pread(). They do not implement53+ * write() at all, but we clear FMODE_PWRITE here for historical54+ * reasons.55+ *56+ * If a client of seq_files a) implements file.write() and b) wishes to57+ * support pwrite() then that client will need to implement its own58+ * file.open() which calls seq_open() and then sets FMODE_PWRITE.59+ */60+ file->f_mode &= ~FMODE_PWRITE;61 return 0;62}63EXPORT_SYMBOL(seq_open);···131 int err = 0;132133 mutex_lock(&m->lock);134+135+ /* Don't assume *ppos is where we left it */136+ if (unlikely(*ppos != m->read_pos)) {137+ m->read_pos = *ppos;138+ while ((err = traverse(m, *ppos)) == -EAGAIN)139+ ;140+ if (err) {141+ /* With prejudice... */142+ m->read_pos = 0;143+ m->version = 0;144+ m->index = 0;145+ m->count = 0;146+ goto Done;147+ }148+ }149+150 /*151 * seq_file->op->..m_start/m_stop/m_next may do special actions152 * or optimisations based on the file->f_version, so we want to···230Done:231 if (!copied)232 copied = err;233+ else {234 *ppos += copied;235+ m->read_pos += copied;236+ }237 file->f_version = m->version;238 mutex_unlock(&m->lock);239 return copied;···266 if (offset < 0)267 break;268 retval = offset;269+ if (offset != m->read_pos) {270 while ((retval=traverse(m, offset)) == -EAGAIN)271 ;272 if (retval) {273 /* with extreme prejudice... */274 file->f_pos = 0;275+ m->read_pos = 0;276 m->version = 0;277 m->index = 0;278 m->count = 0;279 } else {280+ m->read_pos = offset;281 retval = file->f_pos = offset;282 }283 }
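The comment added to seq_open() above documents the contract: seq_files get pread() for free, but a client that also implements write() and wants pwrite() must set FMODE_PWRITE back in its own open(). A hedged sketch of such an open(), where my_open and my_seq_ops are hypothetical names, not part of this patch:

#include <linux/fs.h>
#include <linux/seq_file.h>

/* Hypothetical seq_operations; a real client's start/next/stop/show
 * callbacks would be filled in here. */
static const struct seq_operations my_seq_ops;

/* Sketch: open() for a seq_file client that also implements write()
 * and wants pwrite() to work, per the comment added to seq_open(). */
static int my_open(struct inode *inode, struct file *file)
{
	int ret = seq_open(file, &my_seq_ops);

	if (!ret)
		file->f_mode |= FMODE_PWRITE;	/* opt back in to pwrite() */
	return ret;
}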
···82 * lock ordering than usbfs:83 */84 lockdep_set_class(&s->s_lock, &type->s_lock_key);85+ /*86+ * sget() can have s_umount recursion.87+ *88+ * When it cannot find a suitable sb, it allocates a new89+ * one (this one), and tries again to find a suitable old90+ * one.91+ *92+ * In case that succeeds, it will acquire the s_umount93+ * lock of the old one. Since these are clearly distrinct94+ * locks, and this object isn't exposed yet, there's no95+ * risk of deadlocks.96+ *97+ * Annotate this by putting this lock in a different98+ * subclass.99+ */100+ down_write_nested(&s->s_umount, SINGLE_DEPTH_NESTING);101 s->s_count = S_BIAS;102 atomic_set(&s->s_active, 1);103 mutex_init(&s->s_vfs_rename_mutex);
fs/xfs/linux-2.6/xfs_buf.c
···166}167168/*000000000000000000000000000000000000000000000000000000000000000000000169 * Internal xfs_buf_t object manipulation170 */171···333 uint i;334335 if ((bp->b_flags & XBF_MAPPED) && (bp->b_page_count > 1))336- vm_unmap_ram(bp->b_addr - bp->b_offset, bp->b_page_count);337338 for (i = 0; i < bp->b_page_count; i++) {339 struct page *page = bp->b_pages[i];···455 bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;456 bp->b_flags |= XBF_MAPPED;457 } else if (flags & XBF_MAPPED) {458- bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,459- -1, PAGE_KERNEL);00460 if (unlikely(bp->b_addr == NULL))461 return -ENOMEM;462 bp->b_addr += bp->b_offset;···1743 count++;1744 }1745001746 if (count)1747 blk_run_address_space(target->bt_mapping);1748
···166}167168/*169+ * Mapping of multi-page buffers into contiguous virtual space170+ */171+172+typedef struct a_list {173+ void *vm_addr;174+ struct a_list *next;175+} a_list_t;176+177+static a_list_t *as_free_head;178+static int as_list_len;179+static DEFINE_SPINLOCK(as_lock);180+181+/*182+ * Try to batch vunmaps because they are costly.183+ */184+STATIC void185+free_address(186+ void *addr)187+{188+ a_list_t *aentry;189+190+#ifdef CONFIG_XEN191+ /*192+ * Xen needs to be able to make sure it can get an exclusive193+ * RO mapping of pages it wants to turn into a pagetable. If194+ * a newly allocated page is also still being vmap()ed by xfs,195+ * it will cause pagetable construction to fail. This is a196+ * quick workaround to always eagerly unmap pages so that Xen197+ * is happy.198+ */199+ vunmap(addr);200+ return;201+#endif202+203+ aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);204+ if (likely(aentry)) {205+ spin_lock(&as_lock);206+ aentry->next = as_free_head;207+ aentry->vm_addr = addr;208+ as_free_head = aentry;209+ as_list_len++;210+ spin_unlock(&as_lock);211+ } else {212+ vunmap(addr);213+ }214+}215+216+STATIC void217+purge_addresses(void)218+{219+ a_list_t *aentry, *old;220+221+ if (as_free_head == NULL)222+ return;223+224+ spin_lock(&as_lock);225+ aentry = as_free_head;226+ as_free_head = NULL;227+ as_list_len = 0;228+ spin_unlock(&as_lock);229+230+ while ((old = aentry) != NULL) {231+ vunmap(aentry->vm_addr);232+ aentry = aentry->next;233+ kfree(old);234+ }235+}236+237+/*238 * Internal xfs_buf_t object manipulation239 */240···264 uint i;265266 if ((bp->b_flags & XBF_MAPPED) && (bp->b_page_count > 1))267+ free_address(bp->b_addr - bp->b_offset);268269 for (i = 0; i < bp->b_page_count; i++) {270 struct page *page = bp->b_pages[i];···386 bp->b_addr = page_address(bp->b_pages[0]) + bp->b_offset;387 bp->b_flags |= XBF_MAPPED;388 } else if (flags & XBF_MAPPED) {389+ if (as_list_len > 64)390+ purge_addresses();391+ bp->b_addr = vmap(bp->b_pages, bp->b_page_count,392+ VM_MAP, PAGE_KERNEL);393 if (unlikely(bp->b_addr == NULL))394 return -ENOMEM;395 bp->b_addr += bp->b_offset;···1672 count++;1673 }16741675+ if (as_list_len > 0)1676+ purge_addresses();1677 if (count)1678 blk_run_address_space(target->bt_mapping);1679
···54 struct drm_display_mode *mode,55 struct drm_display_mode *adjusted_mode);56 /* Actually set the mode */57- void (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,58- struct drm_display_mode *adjusted_mode, int x, int y,59- struct drm_framebuffer *old_fb);6061 /* Move the crtc on the current fb to the given position *optional* */62- void (*mode_set_base)(struct drm_crtc *crtc, int x, int y,63- struct drm_framebuffer *old_fb);64};6566struct drm_encoder_helper_funcs {
···54 struct drm_display_mode *mode,55 struct drm_display_mode *adjusted_mode);56 /* Actually set the mode */57+ int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,58+ struct drm_display_mode *adjusted_mode, int x, int y,59+ struct drm_framebuffer *old_fb);6061 /* Move the crtc on the current fb to the given position *optional* */62+ int (*mode_set_base)(struct drm_crtc *crtc, int x, int y,63+ struct drm_framebuffer *old_fb);64};6566struct drm_encoder_helper_funcs {
···147extern struct device_driver *driver_find(const char *name,148 struct bus_type *bus);149extern int driver_probe_done(void);00150151/* sysfs interface for exporting driver attributes */152
···147extern struct device_driver *driver_find(const char *name,148 struct bus_type *bus);149extern int driver_probe_done(void);150+extern int wait_for_device_probe(void);151+152153/* sysfs interface for exporting driver attributes */154
+2
include/linux/dmaengine.h
···121 * @local: per-cpu pointer to a struct dma_chan_percpu122 * @client-count: how many clients are using this channel123 * @table_count: number of appearances in the mem-to-mem allocation table0124 */125struct dma_chan {126 struct dma_device *device;···135 struct dma_chan_percpu *local;136 int client_count;137 int table_count;0138};139140/**
···121 * @local: per-cpu pointer to a struct dma_chan_percpu122 * @client-count: how many clients are using this channel123 * @table_count: number of appearances in the mem-to-mem allocation table124+ * @private: private data for certain client-channel associations125 */126struct dma_chan {127 struct dma_device *device;···134 struct dma_chan_percpu *local;135 int client_count;136 int table_count;137+ void *private;138};139140/**
+1-1
include/linux/firmware-map.h
···1/*2 * include/linux/firmware-map.h:3 * Copyright (C) 2008 SUSE LINUX Products GmbH4- * by Bernhard Walle <bwalle@suse.de>5 *6 * This program is free software; you can redistribute it and/or modify7 * it under the terms of the GNU General Public License v2.0 as published by
···1/*2 * include/linux/firmware-map.h:3 * Copyright (C) 2008 SUSE LINUX Products GmbH4+ * by Bernhard Walle <bernhard.walle@gmx.de>5 *6 * This program is free software; you can redistribute it and/or modify7 * it under the terms of the GNU General Public License v2.0 as published by
+15-9
include/linux/fs.h
···54#define MAY_ACCESS 1655#define MAY_OPEN 32560000057/* file is open for reading */58#define FMODE_READ ((__force fmode_t)1)59/* file is open for writing */60#define FMODE_WRITE ((__force fmode_t)2)61/* file is seekable */62#define FMODE_LSEEK ((__force fmode_t)4)63-/* file can be accessed using pread/pwrite */64#define FMODE_PREAD ((__force fmode_t)8)65-#define FMODE_PWRITE FMODE_PREAD /* These go hand in hand */066/* File is opened for execution with sys_execve / sys_uselib */67-#define FMODE_EXEC ((__force fmode_t)16)68/* File is opened with O_NDELAY (only set for block devices) */69-#define FMODE_NDELAY ((__force fmode_t)32)70/* File is opened with O_EXCL (only set for block devices) */71-#define FMODE_EXCL ((__force fmode_t)64)72/* File is opened using open(.., 3, ..) and is writeable only for ioctls73 (specialy hack for floppy.c) */74-#define FMODE_WRITE_IOCTL ((__force fmode_t)128)7576/*77 * Don't update ctime and mtime.···93#define WRITE 194#define READA 2 /* read-ahead - don't block if no resources */95#define SWRITE 3 /* for ll_rw_block() - wait for buffer lock */96-#define READ_SYNC (READ | (1 << BIO_RW_SYNC))97#define READ_META (READ | (1 << BIO_RW_META))98-#define WRITE_SYNC (WRITE | (1 << BIO_RW_SYNC))99-#define SWRITE_SYNC (SWRITE | (1 << BIO_RW_SYNC))100#define WRITE_BARRIER (WRITE | (1 << BIO_RW_BARRIER))101#define DISCARD_NOBARRIER (1 << BIO_RW_DISCARD)102#define DISCARD_BARRIER ((1 << BIO_RW_DISCARD) | (1 << BIO_RW_BARRIER))
···54#define MAY_ACCESS 1655#define MAY_OPEN 325657+/*58+ * flags in file.f_mode. Note that FMODE_READ and FMODE_WRITE must correspond59+ * to O_WRONLY and O_RDWR via the strange trick in __dentry_open()60+ */61+62/* file is open for reading */63#define FMODE_READ ((__force fmode_t)1)64/* file is open for writing */65#define FMODE_WRITE ((__force fmode_t)2)66/* file is seekable */67#define FMODE_LSEEK ((__force fmode_t)4)68+/* file can be accessed using pread */69#define FMODE_PREAD ((__force fmode_t)8)70+/* file can be accessed using pwrite */71+#define FMODE_PWRITE ((__force fmode_t)16)72/* File is opened for execution with sys_execve / sys_uselib */73+#define FMODE_EXEC ((__force fmode_t)32)74/* File is opened with O_NDELAY (only set for block devices) */75+#define FMODE_NDELAY ((__force fmode_t)64)76/* File is opened with O_EXCL (only set for block devices) */77+#define FMODE_EXCL ((__force fmode_t)128)78/* File is opened using open(.., 3, ..) and is writeable only for ioctls79 (specialy hack for floppy.c) */80+#define FMODE_WRITE_IOCTL ((__force fmode_t)256)8182/*83 * Don't update ctime and mtime.···87#define WRITE 188#define READA 2 /* read-ahead - don't block if no resources */89#define SWRITE 3 /* for ll_rw_block() - wait for buffer lock */90+#define READ_SYNC (READ | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))91#define READ_META (READ | (1 << BIO_RW_META))92+#define WRITE_SYNC (WRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))93+#define SWRITE_SYNC (SWRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG))94#define WRITE_BARRIER (WRITE | (1 << BIO_RW_BARRIER))95#define DISCARD_NOBARRIER (1 << BIO_RW_DISCARD)96#define DISCARD_BARRIER ((1 << BIO_RW_DISCARD) | (1 << BIO_RW_BARRIER))
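To illustrate the correspondence the new comment points at (this is only a sketch of the idea, not the actual __dentry_open() code), adding one to the O_ACCMODE bits maps the open(2) access mode onto the matching FMODE_* bits:

static inline fmode_t example_accmode_to_fmode(int flags)
{
	/* O_RDONLY (0) -> FMODE_READ (1), O_WRONLY (1) -> FMODE_WRITE (2),
	 * O_RDWR (2) -> FMODE_READ | FMODE_WRITE (3) */
	return (__force fmode_t)((flags + 1) & O_ACCMODE);
}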
+2-1
include/linux/jbd2.h
···1150extern int jbd2_journal_bmap(journal_t *, unsigned long, unsigned long long *);1151extern int jbd2_journal_force_commit(journal_t *);1152extern int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *inode);1153-extern int jbd2_journal_begin_ordered_truncate(struct jbd2_inode *inode, loff_t new_size);01154extern void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode);1155extern void jbd2_journal_release_jbd_inode(journal_t *journal, struct jbd2_inode *jinode);1156
···1150extern int jbd2_journal_bmap(journal_t *, unsigned long, unsigned long long *);1151extern int jbd2_journal_force_commit(journal_t *);1152extern int jbd2_journal_file_inode(handle_t *handle, struct jbd2_inode *inode);1153+extern int jbd2_journal_begin_ordered_truncate(journal_t *journal,1154+ struct jbd2_inode *inode, loff_t new_size);1155extern void jbd2_journal_init_jbd_inode(struct jbd2_inode *jinode, struct inode *inode);1156extern void jbd2_journal_release_jbd_inode(journal_t *journal, struct jbd2_inode *jinode);1157
+5-5
include/linux/kvm.h
···58 __u32 pad;59 union {60 char dummy[512]; /* reserving space */61-#ifdef CONFIG_X8662 struct kvm_pic_state pic;63#endif64-#if defined(CONFIG_X86) || defined(CONFIG_IA64)65 struct kvm_ioapic_state ioapic;66#endif67 } chip;···384#define KVM_CAP_MP_STATE 14385#define KVM_CAP_COALESCED_MMIO 15386#define KVM_CAP_SYNC_MMU 16 /* Changes to host mmap are reflected in guest */387-#if defined(CONFIG_X86)||defined(CONFIG_IA64)388#define KVM_CAP_DEVICE_ASSIGNMENT 17389#endif390#define KVM_CAP_IOMMU 18391-#if defined(CONFIG_X86)392#define KVM_CAP_DEVICE_MSI 20393#endif394/* Bug in KVM_SET_USER_MEMORY_REGION fixed: */395#define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21396-#if defined(CONFIG_X86)397#define KVM_CAP_USER_NMI 22398#endif399
···58 __u32 pad;59 union {60 char dummy[512]; /* reserving space */61+#ifdef __KVM_HAVE_PIT62 struct kvm_pic_state pic;63#endif64+#ifdef __KVM_HAVE_IOAPIC65 struct kvm_ioapic_state ioapic;66#endif67 } chip;···384#define KVM_CAP_MP_STATE 14385#define KVM_CAP_COALESCED_MMIO 15386#define KVM_CAP_SYNC_MMU 16 /* Changes to host mmap are reflected in guest */387+#ifdef __KVM_HAVE_DEVICE_ASSIGNMENT388#define KVM_CAP_DEVICE_ASSIGNMENT 17389#endif390#define KVM_CAP_IOMMU 18391+#ifdef __KVM_HAVE_MSI392#define KVM_CAP_DEVICE_MSI 20393#endif394/* Bug in KVM_SET_USER_MEMORY_REGION fixed: */395#define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21396+#ifdef __KVM_HAVE_USER_NMI397#define KVM_CAP_USER_NMI 22398#endif399
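For context, the KVM_CAP_* values above are what userspace probes with the KVM_CHECK_EXTENSION ioctl; a rough userspace sketch (the helper name is made up) looks like this:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int example_has_cap(int cap)
{
	int fd = open("/dev/kvm", O_RDWR);
	int ret;

	if (fd < 0)
		return 0;
	ret = ioctl(fd, KVM_CHECK_EXTENSION, cap);	/* > 0 means supported */
	close(fd);
	return ret > 0;
}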
···83 * int getmiso(struct spi_device *);84 * void spidelay(unsigned);85 *000000086 * A non-inlined routine would call bitbang_txrx_*() routines. The87 * main loop could easily compile down to a handful of instructions,88 * especially if the delay is a NOP (to run at peak speed).
···83 * int getmiso(struct spi_device *);84 * void spidelay(unsigned);85 *86+ * setsck()'s is_on parameter is a zero/nonzero boolean.87+ *88+ * setmosi()'s is_on parameter is a zero/nonzero boolean.89+ *90+ * getmiso() is required to return 0 or 1 only. Any other value is invalid91+ * and will result in improper operation.92+ *93 * A non-inlined routine would call bitbang_txrx_*() routines. The94 * main loop could easily compile down to a handful of instructions,95 * especially if the delay is a NOP (to run at peak speed).
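A hypothetical board file following that contract might provide the inlines like this (the EXAMPLE_GPIO_* numbers are placeholders and the gpiolib accessors are just one possible implementation):

static inline void setsck(struct spi_device *dev, int is_on)
{
	gpio_set_value(EXAMPLE_GPIO_SCK, is_on != 0);
}

static inline void setmosi(struct spi_device *dev, int is_on)
{
	gpio_set_value(EXAMPLE_GPIO_MOSI, is_on != 0);
}

static inline int getmiso(struct spi_device *dev)
{
	return !!gpio_get_value(EXAMPLE_GPIO_MISO);	/* strictly 0 or 1 */
}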
+12-4
include/linux/timerfd.h
···11/* For O_CLOEXEC and O_NONBLOCK */12#include <linux/fcntl.h>1314-/* Flags for timerfd_settime. */00000015#define TFD_TIMER_ABSTIME (1 << 0)16-17-/* Flags for timerfd_create. */18#define TFD_CLOEXEC O_CLOEXEC19#define TFD_NONBLOCK O_NONBLOCK20000002122#endif /* _LINUX_TIMERFD_H */23-
···11/* For O_CLOEXEC and O_NONBLOCK */12#include <linux/fcntl.h>1314+/*15+ * CAREFUL: Check include/asm-generic/fcntl.h when defining16+ * new flags, since they might collide with O_* ones. We want17+ * to re-use O_* flags that couldn't possibly have a meaning18+ * from timerfd, in order to leave a free define-space for19+ * shared O_* flags.20+ */21#define TFD_TIMER_ABSTIME (1 << 0)0022#define TFD_CLOEXEC O_CLOEXEC23#define TFD_NONBLOCK O_NONBLOCK2425+#define TFD_SHARED_FCNTL_FLAGS (TFD_CLOEXEC | TFD_NONBLOCK)26+/* Flags for timerfd_create. */27+#define TFD_CREATE_FLAGS TFD_SHARED_FCNTL_FLAGS28+/* Flags for timerfd_settime. */29+#define TFD_SETTIME_FLAGS TFD_TIMER_ABSTIME3031#endif /* _LINUX_TIMERFD_H */0
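To show where these flags end up, here is an illustrative userspace fragment (names and the five-second expiry are made up):

#include <sys/timerfd.h>
#include <unistd.h>

static int example_arm_timer(void)
{
	struct itimerspec its = { .it_value = { .tv_sec = 5 } };
	int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC | TFD_NONBLOCK);

	if (fd < 0)
		return -1;
	/* pass TFD_TIMER_ABSTIME instead of 0 for an absolute expiry */
	if (timerfd_settime(fd, 0, &its, NULL) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}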
+4
include/linux/vmalloc.h
···84 unsigned long flags, void *caller);85extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,86 unsigned long start, unsigned long end);000087extern struct vm_struct *get_vm_area_node(unsigned long size,88 unsigned long flags, int node,89 gfp_t gfp_mask);
···84 unsigned long flags, void *caller);85extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,86 unsigned long start, unsigned long end);87+extern struct vm_struct *__get_vm_area_caller(unsigned long size,88+ unsigned long flags,89+ unsigned long start, unsigned long end,90+ void *caller);91extern struct vm_struct *get_vm_area_node(unsigned long size,92 unsigned long flags, int node,93 gfp_t gfp_mask);
+9-4
init/do_mounts.c
···370 ssleep(root_delay);371 }372373- /* wait for the known devices to complete their probing */374- while (driver_probe_done() != 0)375- msleep(100);376- async_synchronize_full();0000377378 md_run_setup();379···403 while (driver_probe_done() != 0 ||404 (ROOT_DEV = name_to_dev_t(saved_root_name)) == 0)405 msleep(100);0406 }407408 is_floppy = MAJOR(ROOT_DEV) == FLOPPY_MAJOR;
···370 ssleep(root_delay);371 }372373+ /*374+ * wait for the known devices to complete their probing375+ *376+ * Note: this is a potential source of long boot delays.377+ * For example, it is not atypical to wait 5 seconds here378+ * for the touchpad of a laptop to initialize.379+ */380+ wait_for_device_probe();381382 md_run_setup();383···399 while (driver_probe_done() != 0 ||400 (ROOT_DEV = name_to_dev_t(saved_root_name)) == 0)401 msleep(100);402+ async_synchronize_full();403 }404405 is_floppy = MAJOR(ROOT_DEV) == FLOPPY_MAJOR;
+3-2
init/do_mounts_md.c
···281 */282 printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n");283 printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n");284- while (driver_probe_done() < 0)285- msleep(100);0286 fd = sys_open("/dev/md0", 0, 0);287 if (fd >= 0) {288 sys_ioctl(fd, RAID_AUTORUN, raid_autopart);
···281 */282 printk(KERN_INFO "md: Waiting for all devices to be available before autodetect\n");283 printk(KERN_INFO "md: If you don't use raid, use raid=noautodetect\n");284+285+ wait_for_device_probe();286+287 fd = sys_open("/dev/md0", 0, 0);288 if (fd >= 0) {289 sys_ioctl(fd, RAID_AUTORUN, raid_autopart);
···1165 u32 val, ktime_t *abs_time, u32 bitset, int clockrt)1166{1167 struct task_struct *curr = current;01168 DECLARE_WAITQUEUE(wait, curr);1169 struct futex_hash_bucket *hb;1170 struct futex_q q;···12171218 if (!ret)1219 goto retry;1220- return ret;1221 }1222 ret = -EWOULDBLOCK;1223- if (uval != val)1224- goto out_unlock_put_key;0012251226 /* Only actually queue if *uaddr contained val. */1227 queue_me(&q, hb);···1287 */12881289 /* If we were woken (and unqueued), we succeeded, whatever. */01290 if (!unqueue_me(&q))1291- return 0;01292 if (rem)1293- return -ETIMEDOUT;12941295 /*1296 * We expect signal_pending(current), but another thread may1297 * have handled it for us already.1298 */01299 if (!abs_time)1300- return -ERESTARTSYS;1301- else {1302- struct restart_block *restart;1303- restart = &current_thread_info()->restart_block;1304- restart->fn = futex_wait_restart;1305- restart->futex.uaddr = (u32 *)uaddr;1306- restart->futex.val = val;1307- restart->futex.time = abs_time->tv64;1308- restart->futex.bitset = bitset;1309- restart->futex.flags = 0;13101311- if (fshared)1312- restart->futex.flags |= FLAGS_SHARED;1313- if (clockrt)1314- restart->futex.flags |= FLAGS_CLOCKRT;1315- return -ERESTART_RESTARTBLOCK;1316- }013171318-out_unlock_put_key:1319- queue_unlock(&q, hb);0000001320 put_futex_key(fshared, &q.key);1321-1322out:1323 return ret;1324}
···1165 u32 val, ktime_t *abs_time, u32 bitset, int clockrt)1166{1167 struct task_struct *curr = current;1168+ struct restart_block *restart;1169 DECLARE_WAITQUEUE(wait, curr);1170 struct futex_hash_bucket *hb;1171 struct futex_q q;···12161217 if (!ret)1218 goto retry;1219+ goto out;1220 }1221 ret = -EWOULDBLOCK;1222+ if (unlikely(uval != val)) {1223+ queue_unlock(&q, hb);1224+ goto out_put_key;1225+ }12261227 /* Only actually queue if *uaddr contained val. */1228 queue_me(&q, hb);···1284 */12851286 /* If we were woken (and unqueued), we succeeded, whatever. */1287+ ret = 0;1288 if (!unqueue_me(&q))1289+ goto out_put_key;1290+ ret = -ETIMEDOUT;1291 if (rem)1292+ goto out_put_key;12931294 /*1295 * We expect signal_pending(current), but another thread may1296 * have handled it for us already.1297 */1298+ ret = -ERESTARTSYS;1299 if (!abs_time)1300+ goto out_put_key;00000000013011302+ restart = &current_thread_info()->restart_block;1303+ restart->fn = futex_wait_restart;1304+ restart->futex.uaddr = (u32 *)uaddr;1305+ restart->futex.val = val;1306+ restart->futex.time = abs_time->tv64;1307+ restart->futex.bitset = bitset;1308+ restart->futex.flags = 0;13091310+ if (fshared)1311+ restart->futex.flags |= FLAGS_SHARED;1312+ if (clockrt)1313+ restart->futex.flags |= FLAGS_CLOCKRT;1314+1315+ ret = -ERESTART_RESTARTBLOCK;1316+1317+out_put_key:1318 put_futex_key(fshared, &q.key);01319out:1320 return ret;1321}
+30-30
kernel/posix-cpu-timers.c
···681}682683/*000000000000000000000000000684 * Guts of sys_timer_settime for CPU timers.685 * This is called with the timer locked and interrupts disabled.686 * If we return TIMER_RETRY, it's necessary to release the timer's lock···768 if (CPUCLOCK_PERTHREAD(timer->it_clock)) {769 cpu_clock_sample(timer->it_clock, p, &val);770 } else {771- cpu_clock_sample_group(timer->it_clock, p, &val);772 }773774 if (old) {···916 read_unlock(&tasklist_lock);917 goto dead;918 } else {919- cpu_clock_sample_group(timer->it_clock, p, &now);920 clear_dead = (unlikely(p->exit_state) &&921 thread_group_empty(p));922 }···1271 clear_dead_task(timer, now);1272 goto out_unlock;1273 }1274- cpu_clock_sample_group(timer->it_clock, p, &now);1275 bump_cpu_timer(timer, now);1276 /* Leave the tasklist_lock locked for the call below. */1277 }···1433 }1434 spin_unlock(&timer->it_lock);1435 }1436-}1437-1438-/*1439- * Sample a process (thread group) timer for the given group_leader task.1440- * Must be called with tasklist_lock held for reading.1441- */1442-static int cpu_timer_sample_group(const clockid_t which_clock,1443- struct task_struct *p,1444- union cpu_time_count *cpu)1445-{1446- struct task_cputime cputime;1447-1448- thread_group_cputimer(p, &cputime);1449- switch (CPUCLOCK_WHICH(which_clock)) {1450- default:1451- return -EINVAL;1452- case CPUCLOCK_PROF:1453- cpu->cpu = cputime_add(cputime.utime, cputime.stime);1454- break;1455- case CPUCLOCK_VIRT:1456- cpu->cpu = cputime.utime;1457- break;1458- case CPUCLOCK_SCHED:1459- cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p);1460- break;1461- }1462- return 0;1463}14641465/*
···681}682683/*684+ * Sample a process (thread group) timer for the given group_leader task.685+ * Must be called with tasklist_lock held for reading.686+ */687+static int cpu_timer_sample_group(const clockid_t which_clock,688+ struct task_struct *p,689+ union cpu_time_count *cpu)690+{691+ struct task_cputime cputime;692+693+ thread_group_cputimer(p, &cputime);694+ switch (CPUCLOCK_WHICH(which_clock)) {695+ default:696+ return -EINVAL;697+ case CPUCLOCK_PROF:698+ cpu->cpu = cputime_add(cputime.utime, cputime.stime);699+ break;700+ case CPUCLOCK_VIRT:701+ cpu->cpu = cputime.utime;702+ break;703+ case CPUCLOCK_SCHED:704+ cpu->sched = cputime.sum_exec_runtime + task_delta_exec(p);705+ break;706+ }707+ return 0;708+}709+710+/*711 * Guts of sys_timer_settime for CPU timers.712 * This is called with the timer locked and interrupts disabled.713 * If we return TIMER_RETRY, it's necessary to release the timer's lock···741 if (CPUCLOCK_PERTHREAD(timer->it_clock)) {742 cpu_clock_sample(timer->it_clock, p, &val);743 } else {744+ cpu_timer_sample_group(timer->it_clock, p, &val);745 }746747 if (old) {···889 read_unlock(&tasklist_lock);890 goto dead;891 } else {892+ cpu_timer_sample_group(timer->it_clock, p, &now);893 clear_dead = (unlikely(p->exit_state) &&894 thread_group_empty(p));895 }···1244 clear_dead_task(timer, now);1245 goto out_unlock;1246 }1247+ cpu_timer_sample_group(timer->it_clock, p, &now);1248 bump_cpu_timer(timer, now);1249 /* Leave the tasklist_lock locked for the call below. */1250 }···1406 }1407 spin_unlock(&timer->it_lock);1408 }0000000000000000000000000001409}14101411/*
···595 unsigned int flags;596597 /*000000598 * name_to_dev_t() below takes a sysfs buffer mutex when sysfs599 * is configured into the kernel. Since the regular hibernate600 * trigger path is via sysfs which takes a buffer mutex before···616 mutex_unlock(&pm_mutex);617 return -ENOENT;618 }00000619 swsusp_resume_device = name_to_dev_t(resume_file);620 pr_debug("PM: Resume from partition %s\n", resume_file);621 } else {
···595 unsigned int flags;596597 /*598+ * If the user said "noresume".. bail out early.599+ */600+ if (noresume)601+ return 0;602+603+ /*604 * name_to_dev_t() below takes a sysfs buffer mutex when sysfs605 * is configured into the kernel. Since the regular hibernate606 * trigger path is via sysfs which takes a buffer mutex before···610 mutex_unlock(&pm_mutex);611 return -ENOENT;612 }613+ /*614+ * Some device discovery might still be in progress; we need615+ * to wait for this to finish.616+ */617+ wait_for_device_probe();618 swsusp_resume_device = name_to_dev_t(resume_file);619 pr_debug("PM: Resume from partition %s\n", resume_file);620 } else {
+3-2
kernel/power/swap.c
···60static int submit(int rw, pgoff_t page_off, struct page *page,61 struct bio **bio_chain)62{063 struct bio *bio;6465 bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1);···81 bio_get(bio);8283 if (bio_chain == NULL) {84- submit_bio(rw | (1 << BIO_RW_SYNC), bio);85 wait_on_page_locked(page);86 if (rw == READ)87 bio_set_pages_dirty(bio);···91 get_page(page); /* These pages are freed later */92 bio->bi_private = *bio_chain;93 *bio_chain = bio;94- submit_bio(rw | (1 << BIO_RW_SYNC), bio);95 }96 return 0;97}
···60static int submit(int rw, pgoff_t page_off, struct page *page,61 struct bio **bio_chain)62{63+ const int bio_rw = rw | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_UNPLUG);64 struct bio *bio;6566 bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1);···80 bio_get(bio);8182 if (bio_chain == NULL) {83+ submit_bio(bio_rw, bio);84 wait_on_page_locked(page);85 if (rw == READ)86 bio_set_pages_dirty(bio);···90 get_page(page); /* These pages are freed later */91 bio->bi_private = *bio_chain;92 *bio_chain = bio;93+ submit_bio(bio_rw, bio);94 }95 return 0;96}
···69446945static void rq_attach_root(struct rq *rq, struct root_domain *rd)6946{06947 unsigned long flags;69486949 spin_lock_irqsave(&rq->lock, flags);69506951 if (rq->rd) {6952- struct root_domain *old_rd = rq->rd;69536954 if (cpumask_test_cpu(rq->cpu, old_rd->online))6955 set_rq_offline(rq);69566957 cpumask_clear_cpu(rq->cpu, old_rd->span);69586959- if (atomic_dec_and_test(&old_rd->refcount))6960- free_rootdomain(old_rd);000006961 }69626963 atomic_inc(&rd->refcount);···6974 set_rq_online(rq);69756976 spin_unlock_irqrestore(&rq->lock, flags);0006977}69786979static int __init_refok init_rootdomain(struct root_domain *rd, bool bootmem)
···69446945static void rq_attach_root(struct rq *rq, struct root_domain *rd)6946{6947+ struct root_domain *old_rd = NULL;6948 unsigned long flags;69496950 spin_lock_irqsave(&rq->lock, flags);69516952 if (rq->rd) {6953+ old_rd = rq->rd;69546955 if (cpumask_test_cpu(rq->cpu, old_rd->online))6956 set_rq_offline(rq);69576958 cpumask_clear_cpu(rq->cpu, old_rd->span);69596960+ /*6961+ * If we don't want to free the old_rd yet then6962+ * set old_rd to NULL to skip the freeing later6963+ * in this function:6964+ */6965+ if (!atomic_dec_and_test(&old_rd->refcount))6966+ old_rd = NULL;6967 }69686969 atomic_inc(&rd->refcount);···6968 set_rq_online(rq);69696970 spin_unlock_irqrestore(&rq->lock, flags);6971+6972+ if (old_rd)6973+ free_rootdomain(old_rd);6974}69756976static int __init_refok init_rootdomain(struct root_domain *rd, bool bootmem)
+25
kernel/trace/Kconfig
···52 depends on HAVE_FUNCTION_TRACER53 depends on DEBUG_KERNEL54 select FRAME_POINTER055 select TRACING56 select CONTEXT_SWITCH_TRACER57 help···239 depends on DEBUG_KERNEL240 select FUNCTION_TRACER241 select STACKTRACE0242 help243 This special tracer records the maximum stack footprint of the244 kernel and displays it in debugfs/tracing/stack_trace.···303 a series of tests are made to verify that the tracer is304 functioning properly. It will do tests on all the configured305 tracers of ftrace.00000000000000000000000306307endmenu
···52 depends on HAVE_FUNCTION_TRACER53 depends on DEBUG_KERNEL54 select FRAME_POINTER55+ select KALLSYMS56 select TRACING57 select CONTEXT_SWITCH_TRACER58 help···238 depends on DEBUG_KERNEL239 select FUNCTION_TRACER240 select STACKTRACE241+ select KALLSYMS242 help243 This special tracer records the maximum stack footprint of the244 kernel and displays it in debugfs/tracing/stack_trace.···301 a series of tests are made to verify that the tracer is302 functioning properly. It will do tests on all the configured303 tracers of ftrace.304+305+config MMIOTRACE306+ bool "Memory mapped IO tracing"307+ depends on HAVE_MMIOTRACE_SUPPORT && DEBUG_KERNEL && PCI308+ select TRACING309+ help310+ Mmiotrace traces Memory Mapped I/O access and is meant for311+ debugging and reverse engineering. It is called from the ioremap312+ implementation and works via page faults. Tracing is disabled by313+ default and can be enabled at run-time.314+315+ See Documentation/tracers/mmiotrace.txt.316+ If you are not helping to develop drivers, say N.317+318+config MMIOTRACE_TEST319+ tristate "Test module for mmiotrace"320+ depends on MMIOTRACE && m321+ help322+ This is a dumb module for testing mmiotrace. It is very dangerous323+ as it will write garbage to IO memory starting at a given address.324+ However, it should be safe to use on e.g. unused portion of VRAM.325+326+ Say N, unless you absolutely know what you are doing.327328endmenu
+5-1
kernel/trace/ftrace.c
···2033static int start_graph_tracing(void)2034{2035 struct ftrace_ret_stack **ret_stack_list;2036- int ret;20372038 ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE *2039 sizeof(struct ftrace_ret_stack *),···20412042 if (!ret_stack_list)2043 return -ENOMEM;000020442045 do {2046 ret = alloc_retstack_tasklist(ret_stack_list);
···2033static int start_graph_tracing(void)2034{2035 struct ftrace_ret_stack **ret_stack_list;2036+ int ret, cpu;20372038 ret_stack_list = kmalloc(FTRACE_RETSTACK_ALLOC_SIZE *2039 sizeof(struct ftrace_ret_stack *),···20412042 if (!ret_stack_list)2043 return -ENOMEM;2044+2045+ /* The cpu_boot init_task->ret_stack will never be freed */2046+ for_each_online_cpu(cpu)2047+ ftrace_graph_init_task(idle_task(cpu));20482049 do {2050 ret = alloc_retstack_tasklist(ret_stack_list);
+10-4
kernel/trace/trace_mmiotrace.c
···9#include <linux/kernel.h>10#include <linux/mmiotrace.h>11#include <linux/pci.h>01213#include "trace.h"14···20static struct trace_array *mmio_trace_array;21static bool overrun_detected;22static unsigned long prev_overruns;02324static void mmio_reset_data(struct trace_array *tr)25{···123124static unsigned long count_overruns(struct trace_iterator *iter)125{126- unsigned long cnt = 0;127 unsigned long over = ring_buffer_overruns(iter->tr->buffer);128129 if (over > prev_overruns)130- cnt = over - prev_overruns;131 prev_overruns = over;132 return cnt;133}···312313 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry),314 &irq_flags);315- if (!event)0316 return;0317 entry = ring_buffer_event_data(event);318 tracing_generic_entry_update(&entry->ent, 0, preempt_count());319 entry->ent.type = TRACE_MMIO_RW;···342343 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry),344 &irq_flags);345- if (!event)0346 return;0347 entry = ring_buffer_event_data(event);348 tracing_generic_entry_update(&entry->ent, 0, preempt_count());349 entry->ent.type = TRACE_MMIO_MAP;
···9#include <linux/kernel.h>10#include <linux/mmiotrace.h>11#include <linux/pci.h>12+#include <asm/atomic.h>1314#include "trace.h"15···19static struct trace_array *mmio_trace_array;20static bool overrun_detected;21static unsigned long prev_overruns;22+static atomic_t dropped_count;2324static void mmio_reset_data(struct trace_array *tr)25{···121122static unsigned long count_overruns(struct trace_iterator *iter)123{124+ unsigned long cnt = atomic_xchg(&dropped_count, 0);125 unsigned long over = ring_buffer_overruns(iter->tr->buffer);126127 if (over > prev_overruns)128+ cnt += over - prev_overruns;129 prev_overruns = over;130 return cnt;131}···310311 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry),312 &irq_flags);313+ if (!event) {314+ atomic_inc(&dropped_count);315 return;316+ }317 entry = ring_buffer_event_data(event);318 tracing_generic_entry_update(&entry->ent, 0, preempt_count());319 entry->ent.type = TRACE_MMIO_RW;···338339 event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry),340 &irq_flags);341+ if (!event) {342+ atomic_inc(&dropped_count);343 return;344+ }345 entry = ring_buffer_event_data(event);346 tracing_generic_entry_update(&entry->ent, 0, preempt_count());347 entry->ent.type = TRACE_MMIO_MAP;
···23{24 struct ring_buffer_event *event;25 struct trace_entry *entry;26+ unsigned int loops = 0;2728 while ((event = ring_buffer_consume(tr->buffer, cpu, NULL))) {29 entry = ring_buffer_event_data(event);3031+ /*32+ * The ring buffer holds trace_buf_size entries; if33+ * we loop more than that, something is wrong34+ * with the ring buffer.35+ */36+ if (loops++ > trace_buf_size) {37+ printk(KERN_CONT ".. bad ring buffer ");38+ goto failed;39+ }40 if (!trace_valid_entry(entry)) {41 printk(KERN_CONT ".. invalid entry %d ",42 entry->type);···5758 cnt = ring_buffer_entries(tr->buffer);5960+ /*61+ * The trace_test_buffer_cpu runs a while loop to consume all data.62+ * If the calling tracer is broken, and is constantly filling63+ * the buffer, this will run forever, and hard lock the box.64+ * We disable the ring buffer while we do this test to prevent65+ * a hard lockup.66+ */67+ tracing_off();68 for_each_possible_cpu(cpu) {69 ret = trace_test_buffer_cpu(tr, cpu);70 if (ret)71 break;72 }73+ tracing_on();74 __raw_spin_unlock(&ftrace_max_lock);75 local_irq_restore(flags);76
+1-1
lib/Kconfig.debug
···838839 If unsure, say N.840841-menuconfig BUILD_DOCSRC842 bool "Build targets in Documentation/ tree"843 depends on HEADERS_CHECK844 help
···838839 If unsure, say N.840841+config BUILD_DOCSRC842 bool "Build targets in Documentation/ tree"843 depends on HEADERS_CHECK844 help
···240}241EXPORT_SYMBOL_GPL(bdi_writeout_inc);242243-static inline void task_dirty_inc(struct task_struct *tsk)244{245 prop_inc_single(&vm_dirties, &tsk->dirties);246}···1230 __inc_zone_page_state(page, NR_FILE_DIRTY);1231 __inc_bdi_stat(mapping->backing_dev_info,1232 BDI_RECLAIMABLE);01233 task_io_account_write(PAGE_CACHE_SIZE);1234 }1235 radix_tree_tag_set(&mapping->page_tree,···1263 * If the mapping doesn't provide a set_page_dirty a_op, then1264 * just fall through and assume that it wants buffer_heads.1265 */1266-static int __set_page_dirty(struct page *page)1267{1268 struct address_space *mapping = page_mapping(page);1269···1280 return 1;1281 }1282 return 0;1283-}1284-1285-int set_page_dirty(struct page *page)1286-{1287- int ret = __set_page_dirty(page);1288- if (ret)1289- task_dirty_inc(current);1290- return ret;1291}1292EXPORT_SYMBOL(set_page_dirty);1293
···240}241EXPORT_SYMBOL_GPL(bdi_writeout_inc);242243+void task_dirty_inc(struct task_struct *tsk)244{245 prop_inc_single(&vm_dirties, &tsk->dirties);246}···1230 __inc_zone_page_state(page, NR_FILE_DIRTY);1231 __inc_bdi_stat(mapping->backing_dev_info,1232 BDI_RECLAIMABLE);1233+ task_dirty_inc(current);1234 task_io_account_write(PAGE_CACHE_SIZE);1235 }1236 radix_tree_tag_set(&mapping->page_tree,···1262 * If the mapping doesn't provide a set_page_dirty a_op, then1263 * just fall through and assume that it wants buffer_heads.1264 */1265+int set_page_dirty(struct page *page)1266{1267 struct address_space *mapping = page_mapping(page);1268···1279 return 1;1280 }1281 return 0;000000001282}1283EXPORT_SYMBOL(set_page_dirty);1284
+26-3
mm/page_alloc.c
···2989 * was used and there are no special requirements, this is a convenient2990 * alternative2991 */2992-int __meminit early_pfn_to_nid(unsigned long pfn)2993{2994 int i;2995···3000 if (start_pfn <= pfn && pfn < end_pfn)3001 return early_node_map[i].nid;3002 }3003-3004- return 0;3005}3006#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */0000000000000000000000030073008/* Basic iterator support to walk early_node_map[] */3009#define for_each_active_range_index_in_nid(i, nid) \
···2989 * was used and there are no special requirements, this is a convenient2990 * alternative2991 */2992+int __meminit __early_pfn_to_nid(unsigned long pfn)2993{2994 int i;2995···3000 if (start_pfn <= pfn && pfn < end_pfn)3001 return early_node_map[i].nid;3002 }3003+ /* This is a memory hole */3004+ return -1;3005}3006#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */3007+3008+int __meminit early_pfn_to_nid(unsigned long pfn)3009+{3010+ int nid;3011+3012+ nid = __early_pfn_to_nid(pfn);3013+ if (nid >= 0)3014+ return nid;3015+ /* just returns 0 */3016+ return 0;3017+}3018+3019+#ifdef CONFIG_NODES_SPAN_OTHER_NODES3020+bool __meminit early_pfn_in_nid(unsigned long pfn, int node)3021+{3022+ int nid;3023+3024+ nid = __early_pfn_to_nid(pfn);3025+ if (nid >= 0 && nid != node)3026+ return false;3027+ return true;3028+}3029+#endif30303031/* Basic iterator support to walk early_node_map[] */3032#define for_each_active_range_index_in_nid(i, nid) \
···635636 if (!bdev) {637 if (bdev_p)638- *bdev_p = sis->bdev;639640 spin_unlock(&swap_lock);641 return i;···647 struct swap_extent, list);648 if (se->start_block == offset) {649 if (bdev_p)650- *bdev_p = sis->bdev;651652 spin_unlock(&swap_lock);653 bdput(bdev);
···635636 if (!bdev) {637 if (bdev_p)638+ *bdev_p = bdget(sis->bdev->bd_dev);639640 spin_unlock(&swap_lock);641 return i;···647 struct swap_extent, list);648 if (se->start_block == offset) {649 if (bdev_p)650+ *bdev_p = bdget(sis->bdev->bd_dev);651652 spin_unlock(&swap_lock);653 bdput(bdev);
+20
mm/util.c
···129}130EXPORT_SYMBOL(krealloc);13100000000000000000000132/*133 * strndup_user - duplicate an existing string from user space134 * @s: The string to duplicate
···129}130EXPORT_SYMBOL(krealloc);131132+/**133+ * kzfree - like kfree but zero memory134+ * @p: object to free memory of135+ *136+ * The memory of the object @p points to is zeroed before freed.137+ * If @p is %NULL, kzfree() does nothing.138+ */139+void kzfree(const void *p)140+{141+ size_t ks;142+ void *mem = (void *)p;143+144+ if (unlikely(ZERO_OR_NULL_PTR(mem)))145+ return;146+ ks = ksize(mem);147+ memset(mem, 0, ks);148+ kfree(mem);149+}150+EXPORT_SYMBOL(kzfree);151+152/*153 * strndup_user - duplicate an existing string from user space154 * @s: The string to duplicate
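A typical (hypothetical) caller pattern for the new helper: copy in some key material, use it, and scrub it before returning the buffer to the allocator:

static int example_use_secret(const void *src, size_t len)
{
	u8 *secret = kmemdup(src, len, GFP_KERNEL);

	if (!secret)
		return -ENOMEM;
	/* ... use the key material ... */
	kzfree(secret);		/* zeroes ksize(secret) bytes, then kfree()s */
	return 0;
}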
+10
mm/vmalloc.c
···1012void unmap_kernel_range(unsigned long addr, unsigned long size)1013{1014 unsigned long end = addr + size;001015 vunmap_page_range(addr, end);1016 flush_tlb_kernel_range(addr, end);1017}···1107 __builtin_return_address(0));1108}1109EXPORT_SYMBOL_GPL(__get_vm_area);0000000011101111/**1112 * get_vm_area - reserve a contiguous kernel virtual area
···1012void unmap_kernel_range(unsigned long addr, unsigned long size)1013{1014 unsigned long end = addr + size;1015+1016+ flush_cache_vunmap(addr, end);1017 vunmap_page_range(addr, end);1018 flush_tlb_kernel_range(addr, end);1019}···1105 __builtin_return_address(0));1106}1107EXPORT_SYMBOL_GPL(__get_vm_area);1108+1109+struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,1110+ unsigned long start, unsigned long end,1111+ void *caller)1112+{1113+ return __get_vm_area_node(size, flags, start, end, -1, GFP_KERNEL,1114+ caller);1115+}11161117/**1118 * get_vm_area - reserve a contiguous kernel virtual area
+12-16
mm/vmscan.c
···2057 int pass, struct scan_control *sc)2058{2059 struct zone *zone;2060- unsigned long nr_to_scan, ret = 0;2061- enum lru_list l;20622063 for_each_zone(zone) {020642065 if (!populated_zone(zone))2066 continue;2067-2068 if (zone_is_all_unreclaimable(zone) && prio != DEF_PRIORITY)2069 continue;20702071 for_each_evictable_lru(l) {0002072 /* For pass = 0, we don't shrink the active list */2073- if (pass == 0 &&2074- (l == LRU_ACTIVE || l == LRU_ACTIVE_FILE))2075 continue;20762077- zone->lru[l].nr_scan +=2078- (zone_page_state(zone, NR_LRU_BASE + l)2079- >> prio) + 1;2080 if (zone->lru[l].nr_scan >= nr_pages || pass > 3) {002081 zone->lru[l].nr_scan = 0;2082- nr_to_scan = min(nr_pages,2083- zone_page_state(zone,2084- NR_LRU_BASE + l));2085 ret += shrink_list(l, nr_to_scan, zone,2086 sc, prio);2087 if (ret >= nr_pages)···2089 }2090 }2091 }2092-2093 return ret;2094}2095···2111 .may_swap = 0,2112 .swap_cluster_max = nr_pages,2113 .may_writepage = 1,2114- .swappiness = vm_swappiness,2115 .isolate_pages = isolate_pages_global,2116 };2117···2144 int prio;21452146 /* Force reclaiming mapped pages in the passes #3 and #4 */2147- if (pass > 2) {2148 sc.may_swap = 1;2149- sc.swappiness = 100;2150- }21512152 for (prio = DEF_PRIORITY; prio >= 0; prio--) {2153 unsigned long nr_to_scan = nr_pages - ret;
···2057 int pass, struct scan_control *sc)2058{2059 struct zone *zone;2060+ unsigned long ret = 0;020612062 for_each_zone(zone) {2063+ enum lru_list l;20642065 if (!populated_zone(zone))2066 continue;02067 if (zone_is_all_unreclaimable(zone) && prio != DEF_PRIORITY)2068 continue;20692070 for_each_evictable_lru(l) {2071+ enum zone_stat_item ls = NR_LRU_BASE + l;2072+ unsigned long lru_pages = zone_page_state(zone, ls);2073+2074 /* For pass = 0, we don't shrink the active list */2075+ if (pass == 0 && (l == LRU_ACTIVE_ANON ||2076+ l == LRU_ACTIVE_FILE))2077 continue;20782079+ zone->lru[l].nr_scan += (lru_pages >> prio) + 1;002080 if (zone->lru[l].nr_scan >= nr_pages || pass > 3) {2081+ unsigned long nr_to_scan;2082+2083 zone->lru[l].nr_scan = 0;2084+ nr_to_scan = min(nr_pages, lru_pages);002085 ret += shrink_list(l, nr_to_scan, zone,2086 sc, prio);2087 if (ret >= nr_pages)···2089 }2090 }2091 }02092 return ret;2093}2094···2112 .may_swap = 0,2113 .swap_cluster_max = nr_pages,2114 .may_writepage = 1,02115 .isolate_pages = isolate_pages_global,2116 };2117···2146 int prio;21472148 /* Force reclaiming mapped pages in the passes #3 and #4 */2149+ if (pass > 2)2150 sc.may_swap = 1;0021512152 for (prio = DEF_PRIORITY; prio >= 0; prio--) {2153 unsigned long nr_to_scan = nr_pages - ret;
+2-2
scripts/bootgraph.pl
···5152while (<>) {53 my $line = $_;54- if ($line =~ /([0-9\.]+)\] calling ([a-zA-Z0-9\_]+)\+/) {55 my $func = $2;56 if ($done == 0) {57 $start{$func} = $1;···87 $count = $count + 1;88 }8990- if ($line =~ /([0-9\.]+)\] initcall ([a-zA-Z0-9\_]+)\+.*returned/) {91 if ($done == 0) {92 $end{$2} = $1;93 $maxtime = $1;
···5152while (<>) {53 my $line = $_;54+ if ($line =~ /([0-9\.]+)\] calling ([a-zA-Z0-9\_\.]+)\+/) {55 my $func = $2;56 if ($done == 0) {57 $start{$func} = $1;···87 $count = $count + 1;88 }8990+ if ($line =~ /([0-9\.]+)\] initcall ([a-zA-Z0-9\_\.]+)\+.*returned/) {91 if ($done == 0) {92 $end{$2} = $1;93 $maxtime = $1;
···73{74 int i, r = 0;7576- down_read(&kvm->slots_lock);77 for (i = 0; i < kvm->nmemslots; i++) {78 r = kvm_iommu_map_pages(kvm, kvm->memslots[i].base_gfn,79 kvm->memslots[i].npages);80 if (r)81 break;82 }83- up_read(&kvm->slots_lock);84 return r;85}86···189static int kvm_iommu_unmap_memslots(struct kvm *kvm)190{191 int i;192- down_read(&kvm->slots_lock);193 for (i = 0; i < kvm->nmemslots; i++) {194 kvm_iommu_put_pages(kvm, kvm->memslots[i].base_gfn,195 kvm->memslots[i].npages);196 }197- up_read(&kvm->slots_lock);198199 return 0;200}
···73{74 int i, r = 0;75076 for (i = 0; i < kvm->nmemslots; i++) {77 r = kvm_iommu_map_pages(kvm, kvm->memslots[i].base_gfn,78 kvm->memslots[i].npages);79 if (r)80 break;81 }82+83 return r;84}85···190static int kvm_iommu_unmap_memslots(struct kvm *kvm)191{192 int i;193+194 for (i = 0; i < kvm->nmemslots; i++) {195 kvm_iommu_put_pages(kvm, kvm->memslots[i].base_gfn,196 kvm->memslots[i].npages);197 }0198199 return 0;200}
+33-10
virt/kvm/kvm_main.c
···173 assigned_dev->host_irq_disabled = false;174 }175 mutex_unlock(&assigned_dev->kvm->lock);176- kvm_put_kvm(assigned_dev->kvm);177}178179static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id)180{181 struct kvm_assigned_dev_kernel *assigned_dev =182 (struct kvm_assigned_dev_kernel *) dev_id;183-184- kvm_get_kvm(assigned_dev->kvm);185186 schedule_work(&assigned_dev->interrupt_work);187···210 }211}2120213static void kvm_free_assigned_irq(struct kvm *kvm,214 struct kvm_assigned_dev_kernel *assigned_dev)215{···226 if (!assigned_dev->irq_requested_type)227 return;228229- if (cancel_work_sync(&assigned_dev->interrupt_work))230- /* We had pending work. That means we will have to take231- * care of kvm_put_kvm.232- */233- kvm_put_kvm(kvm);0000000000000234235 free_irq(assigned_dev->host_irq, (void *)assigned_dev);236···296297 if (irqchip_in_kernel(kvm)) {298 if (!msi2intx &&299- adev->irq_requested_type & KVM_ASSIGNED_DEV_HOST_MSI) {300- free_irq(adev->host_irq, (void *)kvm);301 pci_disable_msi(adev->dev);302 }303···466 struct kvm_assigned_dev_kernel *match;467 struct pci_dev *dev;4680469 mutex_lock(&kvm->lock);470471 match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,···528529out:530 mutex_unlock(&kvm->lock);0531 return r;532out_list_del:533 list_del(&match->list);···540out_free:541 kfree(match);542 mutex_unlock(&kvm->lock);0543 return r;544}545#endif···803 return young;804}8050000000806static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {807 .invalidate_page = kvm_mmu_notifier_invalidate_page,808 .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start,809 .invalidate_range_end = kvm_mmu_notifier_invalidate_range_end,810 .clear_flush_young = kvm_mmu_notifier_clear_flush_young,0811};812#endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */813···905{906 struct mm_struct *mm = kvm->mm;9070908 spin_lock(&kvm_lock);909 list_del(&kvm->vm_list);910 spin_unlock(&kvm_lock);
···173 assigned_dev->host_irq_disabled = false;174 }175 mutex_unlock(&assigned_dev->kvm->lock);0176}177178static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id)179{180 struct kvm_assigned_dev_kernel *assigned_dev =181 (struct kvm_assigned_dev_kernel *) dev_id;00182183 schedule_work(&assigned_dev->interrupt_work);184···213 }214}215216+/* This function implicitly holds the kvm->lock mutex via cancel_work_sync() */217static void kvm_free_assigned_irq(struct kvm *kvm,218 struct kvm_assigned_dev_kernel *assigned_dev)219{···228 if (!assigned_dev->irq_requested_type)229 return;230231+ /*232+ * In kvm_free_assigned_irq, cancel_work_sync() returns true if:233+ * 1. work is scheduled, and then cancelled.234+ * 2. work callback is executed.235+ *236+ * The first case ensures that the irq is disabled and no more events237+ * would happen. But in the second case, the irq may be enabled (e.g.238+ * for MSI). So we disable the irq here to prevent further events.239+ *240+ * Note this may result in a nested disable if the interrupt type is241+ * INTx, but that's OK since we are going to free it.242+ *243+ * If this function is part of VM destroy, ensure that the kvm state244+ * is still valid at this point, since we may also have to wait for245+ * interrupt_work to finish.246+ */247+ disable_irq_nosync(assigned_dev->host_irq);248+ cancel_work_sync(&assigned_dev->interrupt_work);249250 free_irq(assigned_dev->host_irq, (void *)assigned_dev);251···285286 if (irqchip_in_kernel(kvm)) {287 if (!msi2intx &&288+ (adev->irq_requested_type & KVM_ASSIGNED_DEV_HOST_MSI)) {289+ free_irq(adev->host_irq, (void *)adev);290 pci_disable_msi(adev->dev);291 }292···455 struct kvm_assigned_dev_kernel *match;456 struct pci_dev *dev;457458+ down_read(&kvm->slots_lock);459 mutex_lock(&kvm->lock);460461 match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,···516517out:518 mutex_unlock(&kvm->lock);519+ up_read(&kvm->slots_lock);520 return r;521out_list_del:522 list_del(&match->list);···527out_free:528 kfree(match);529 mutex_unlock(&kvm->lock);530+ up_read(&kvm->slots_lock);531 return r;532}533#endif···789 return young;790}791792+static void kvm_mmu_notifier_release(struct mmu_notifier *mn,793+ struct mm_struct *mm)794+{795+ struct kvm *kvm = mmu_notifier_to_kvm(mn);796+ kvm_arch_flush_shadow(kvm);797+}798+799static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {800 .invalidate_page = kvm_mmu_notifier_invalidate_page,801 .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start,802 .invalidate_range_end = kvm_mmu_notifier_invalidate_range_end,803 .clear_flush_young = kvm_mmu_notifier_clear_flush_young,804+ .release = kvm_mmu_notifier_release,805};806#endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */807···883{884 struct mm_struct *mm = kvm->mm;885886+ kvm_arch_sync_events(kvm);887 spin_lock(&kvm_lock);888 list_del(&kvm->vm_list);889 spin_unlock(&kvm_lock);