···
 S: Spain
 
 N: Linus Torvalds
-E: torvalds@osdl.org
+E: torvalds@linux-foundation.org
 D: Original kernel hacker
 S: 12725 SW Millikan Way, Suite 400
 S: Beaverton, Oregon 97005
+4
Documentation/SubmitChecklist
···
 
    If the new code is substantial, addition of subsystem-specific fault
    injection might be appropriate.
+
+22: Newly-added code has been compiled with `gcc -W'. This will generate
+    lots of noise, but is good for finding bugs like "warning: comparison
+    between signed and unsigned".
+3-3
Documentation/SubmittingPatches
···
 
 
 Linus Torvalds is the final arbiter of all changes accepted into the
-Linux kernel.  His e-mail address is <torvalds@osdl.org>.  He gets
-a lot of e-mail, so typically you should do your best to -avoid- sending
-him e-mail.
+Linux kernel.  His e-mail address is <torvalds@linux-foundation.org>.
+He gets a lot of e-mail, so typically you should do your best to -avoid-
+sending him e-mail.
 
 Patches which are bug fixes, are "obvious" changes, or similarly
 require little discussion should be sent or CC'd to Linus.  Patches
+7
Documentation/feature-removal-schedule.txt
···
 Who:	Len Brown <len.brown@intel.com>
 
 ---------------------------
+
+What:	JFFS (version 1)
+When:	2.6.21
+Why:	Unmaintained for years, superseded by JFFS2 for years.
+Who:	Jeff Garzik <jeff@garzik.org>
+
+---------------------------
+17-3
Documentation/filesystems/9p.txt
···
 RESOURCES
 =========
 
-The Linux version of the 9p server is now maintained under the npfs project
-on sourceforge (http://sourceforge.net/projects/npfs).
+Our current recommendation is to use Inferno (http://www.vitanuova.com/inferno)
+as the 9p server.  You can start a 9p server under Inferno by issuing the
+following command:
+   ; styxlisten -A tcp!*!564 export '#U*'
+
+The -A specifies an unauthenticated export.  The 564 is the port # (you may
+have to choose a higher port number if running as a normal user).  The '#U*'
+specifies exporting the root of the Linux name space.  You may specify a
+subset of the namespace by extending the path: '#U*'/tmp would just export
+/tmp.  For more information, see the Inferno manual pages covering styxlisten
+and export.
+
+A Linux version of the 9p server is now maintained under the npfs project
+on sourceforge (http://sourceforge.net/projects/npfs).  There is also a
+more stable single-threaded version of the server (named spfs) available from
+the same CVS repository.
 
 There are user and developer mailing lists available through the v9fs project
 on sourceforge (http://sourceforge.net/projects/v9fs).
···
 The 2.6 kernel support is working on PPC and x86.
 
-PLEASE USE THE SOURCEFORGE BUG-TRACKER TO REPORT PROBLEMS.
+PLEASE USE THE KERNEL BUGZILLA TO REPORT PROBLEMS. (http://bugzilla.kernel.org)
+2-1
Documentation/i386/boot.txt
···
  ----------------------------
 
 		H. Peter Anvin <hpa@zytor.com>
-		Last update 2006-11-17
+		Last update 2007-01-26
 
 On the i386 platform, the Linux kernel uses a rather complicated boot
 convention.  This has evolved partially due to historical aspects, as
···
 	7  GRuB
 	8  U-BOOT
 	9  Xen
+	A  Gujin
 
 	Please contact <hpa@zytor.com> if you need a bootloader ID
 	value assigned.
+38-11
Documentation/kdump/kdump.txt
···
 memory image to a dump file on the local disk, or across the network to
 a remote system.
 
-Kdump and kexec are currently supported on the x86, x86_64, ppc64 and IA64
+Kdump and kexec are currently supported on the x86, x86_64, ppc64 and ia64
 architectures.
 
 When the system kernel boots, it reserves a small section of memory for
···
 
 2) Download the kexec-tools user-space package from the following URL:
 
-http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing-20061214.tar.gz
+http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing.tar.gz
+
+This is a symlink to the latest version, which at the time of writing is
+20061214, the only release of kexec-tools-testing so far.  As other versions
+are released, the older ones will remain available at
+http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/
 
 Note: Latest kexec-tools-testing git tree is available at
···
 
 3) Unpack the tarball with the tar command, as follows:
 
-   tar xvpzf kexec-tools-testing-20061214.tar.gz
+   tar xvpzf kexec-tools-testing.tar.gz
 
-4) Change to the kexec-tools-1.101 directory, as follows:
+4) Change to the kexec-tools directory, as follows:
 
-   cd kexec-tools-testing-20061214
+   cd kexec-tools-testing-VERSION
 
 5) Configure the package, as follows:
···
 
 Dump-capture kernel config options (Arch Dependent, ia64)
 ----------------------------------------------------------
-(To be filled)
+
+- No specific options are required to create a dump-capture kernel
+  for ia64, other than those specified in the arch independent section
+  above.  This means that it is possible to use the system kernel
+  as a dump-capture kernel if desired.
+
+  The crashkernel region can be automatically placed by the system
+  kernel at run time.  This is done by specifying the base address as 0,
+  or omitting it altogether.
+
+  crashkernel=256M@0
+    or
+  crashkernel=256M
+
+  If the start address is specified, note that the start address of the
+  kernel will be aligned to 64Mb, so if the start address is not then
+  any space below the alignment point will be wasted.
 
 
 Boot into System Kernel
···
   On x86 and x86_64, use "crashkernel=64M@16M".
 
   On ppc64, use "crashkernel=128M@32M".
+
+  On ia64, 256M@256M is a generous value that typically works.
+  The region may be automatically placed on ia64, see the
+  dump-capture kernel config option notes above.
 
 Load the Dump-capture Kernel
 ============================
···
 For ppc64:
 	- Use vmlinux
 For ia64:
-	(To be filled)
+	- Use vmlinux or vmlinuz.gz
+
 
 If you are using an uncompressed vmlinux image then use following command
 to load dump-capture kernel.
···
    --initrd=<initrd-for-dump-capture-kernel> \
    --append="root=<root-dev> <arch-specific-options>"
 
+Please note that --args-linux does not need to be specified for ia64.
+It is planned to make this a no-op on that architecture, but for now
+it should be omitted.
+
 Following are the arch specific command line options to be used while
 loading dump-capture kernel.
 
-For i386 and x86_64:
+For i386, x86_64 and ia64:
 	"init 1 irqpoll maxcpus=1"
 
 For ppc64:
 	"init 1 maxcpus=1 noirqdistrib"
-
-For IA64
-	(To be filled)
 
 
 Notes on loading the dump-capture kernel:
+35-31
Documentation/sysrq.txt
···
 Linux Magic System Request Key Hacks
-Documentation for sysrq.c version 1.15
-Last update: $Date: 2001/01/28 10:15:59 $
+Documentation for sysrq.c
+Last update: 2007-JAN-06
 
 * What is the magic SysRq key?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
 
 Note that the value of /proc/sys/kernel/sysrq influences only the invocation
 via a keyboard. Invocation of any operation via /proc/sysrq-trigger is always
-allowed.
+allowed (by a user with admin privileges).
 
 * How do I use the magic SysRq key?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
 On other - If you know of the key combos for other architectures, please
            let me know so I can add them to this section.
 
-On all -  write a character to /proc/sysrq-trigger.  eg:
+On all -  write a character to /proc/sysrq-trigger.  e.g.:
 
 		echo t > /proc/sysrq-trigger
 
···
 
 'c'     - Will perform a kexec reboot in order to take a crashdump.
 
+'d'     - Shows all locks that are held.
+
 'o'     - Will shut your system off (if configured and supported).
 
 's'     - Will attempt to sync all mounted filesystems.
···
 
 'm'     - Will dump current memory info to your console.
 
+'n'     - Used to make RT tasks nice-able
+
 'v'     - Dumps Voyager SMP processor info to your console.
+
+'w'     - Dumps tasks that are in uninterruptible (blocked) state.
+
+'x'     - Used by xmon interface on ppc/powerpc platforms.
 
 '0'-'9' - Sets the console log level, controlling which kernel messages
           will be printed to your console. ('0', for example would make
           it so that only emergency messages like PANICs or OOPSes would
           make it to your console.)
 
-'f'	- Will call oom_kill to kill a memory hog process
+'f'	- Will call oom_kill to kill a memory hog process.
 
 'e'     - Send a SIGTERM to all processes, except for init.
 
+'g'     - Used by kgdb on ppc platforms.
+
 'i'     - Send a SIGKILL to all processes, except for init.
 
-'l'     - Send a SIGKILL to all processes, INCLUDING init. (Your system
-          will be non-functional after this.)
-
-'h'     - Will display help ( actually any other key than those listed
+'h'     - Will display help (actually any other key than those listed
           above will display help. but 'h' is easy to remember :-)
 
 * Okay, so what can I use them for?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Well, un'R'aw is very handy when your X server or a svgalib program crashes.
 
-sa'K' (Secure Access Key) is useful when you want to be sure there are no
-trojan program is running at console and which could grab your password
-when you would try to login. It will kill all programs on given console
-and thus letting you make sure that the login prompt you see is actually
+sa'K' (Secure Access Key) is useful when you want to be sure there is no
+trojan program running at console which could grab your password
+when you would try to login. It will kill all programs on given console,
+thus letting you make sure that the login prompt you see is actually
 the one from init, not some trojan program.
 IMPORTANT: In its true form it is not a true SAK like the one in a  :IMPORTANT
 IMPORTANT: c2 compliant system, and it should not be mistaken as     :IMPORTANT
 IMPORTANT: such.                                                     :IMPORTANT
-       It seems other find it useful as (System Attention Key) which is
+       It seems others find it useful as (System Attention Key) which is
 useful when you want to exit a program that will not let you switch consoles.
 (For example, X or a svgalib program.)
···
 Again, the unmount (remount read-only) hasn't taken place until you see the
 "OK" and "Done" message appear on the screen.
 
-The loglevel'0'-'9' is useful when your console is being flooded with
-kernel messages you do not want to see. Setting '0' will prevent all but
+The loglevels '0'-'9' are useful when your console is being flooded with
+kernel messages you do not want to see. Selecting '0' will prevent all but
 the most urgent kernel messages from reaching your console. (They will
 still be logged if syslogd/klogd are alive, though.)
···
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 That happens to me, also. I've found that tapping shift, alt, and control
 on both sides of the keyboard, and hitting an invalid sysrq sequence again
-will fix the problem. (ie, something like alt-sysrq-z). Switching to another
+will fix the problem. (i.e., something like alt-sysrq-z). Switching to another
 virtual console (ALT+Fn) and then back again should also help.
 
 * I hit SysRq, but nothing seems to happen, what's wrong?
···
 prints help, and C) an action_msg string, that will print right before your
 handler is called. Your handler must conform to the prototype in 'sysrq.h'.
 
-After the sysrq_key_op is created, you can call the macro 
-register_sysrq_key(int key, struct sysrq_key_op *op_p) that is defined in
-sysrq.h, this will register the operation pointed to by 'op_p' at table
-key 'key', if that slot in the table is blank. At module unload time, you must
-call the macro unregister_sysrq_key(int key, struct sysrq_key_op *op_p), which
+After the sysrq_key_op is created, you can call the kernel function
+register_sysrq_key(int key, struct sysrq_key_op *op_p); this will
+register the operation pointed to by 'op_p' at table key 'key',
+if that slot in the table is blank. At module unload time, you must call
+the function unregister_sysrq_key(int key, struct sysrq_key_op *op_p), which
 will remove the key op pointed to by 'op_p' from the key 'key', if and only if
 it is currently registered in that slot. This is in case the slot has been
 overwritten since you registered it.
···
 The Magic SysRQ system works by registering key operations against a key op
 lookup table, which is defined in 'drivers/char/sysrq.c'. This key table has
 a number of operations registered into it at compile time, but is mutable,
-and 4 functions are exported for interface to it: __sysrq_lock_table,
-__sysrq_unlock_table, __sysrq_get_key_op, and __sysrq_put_key_op. The
-functions __sysrq_swap_key_ops and __sysrq_swap_key_ops_nolock are defined
-in the header itself, and the REGISTER and UNREGISTER macros are built from
-these. More complex (and dangerous!) manipulations of the table are possible
-using these functions, but you must be careful to always lock the table before
-you read or write from it, and to unlock it again when you are done. (And of
-course, to never ever leave an invalid pointer in the table). Null pointers in
-the table are always safe :)
+and 2 functions are exported for interface to it:
+	register_sysrq_key and unregister_sysrq_key.
+Of course, never ever leave an invalid pointer in the table. I.e., when
+your module that called register_sysrq_key() exits, it must call
+unregister_sysrq_key() to clean up the sysrq key table entry that it used.
+Null pointers in the table are always safe. :)
 
 If for some reason you feel the need to call the handle_sysrq function from
 within a function called by handle_sysrq, you must be aware that you are in
+1-1
Documentation/usb/CREDITS
···
 	Bill Ryder <bryder@sgi.com>
 	Thomas Sailer <sailer@ife.ee.ethz.ch>
 	Gregory P. Smith <greg@electricrain.com>
-	Linus Torvalds <torvalds@osdl.org>
+	Linus Torvalds <torvalds@linux-foundation.org>
 	Roman Weissgaerber <weissg@vienna.at>
 	<Kazuki.Yasumatsu@fujixerox.co.jp>
Makefile
···
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 20
-EXTRAVERSION =-rc5
+EXTRAVERSION =
 NAME = Homicidal Dwarf Hamster
 
 # *DOCUMENTATION*
···
 	@echo  '  cscope	  - Generate cscope index'
 	@echo  '  kernelrelease	  - Output the release version string'
 	@echo  '  kernelversion	  - Output the version stored in Makefile'
-	@if [ -r include/asm-$(ARCH)/Kbuild ]; then \
+	@if [ -r $(srctree)/include/asm-$(ARCH)/Kbuild ]; then \
 	 echo  '  headers_install - Install sanitised kernel headers to INSTALL_HDR_PATH'; \
+	 echo  '                    (default: $(INSTALL_HDR_PATH))'; \
 	 fi
-	@echo  '  (default: $(INSTALL_HDR_PATH))'
 	@echo  ''
 	@echo  'Static analysers'
 	@echo  '  checkstack      - Generate a list of stack hogs'
 	@echo  '  namespacecheck  - Name space analysis on compiled kernel'
-	@if [ -r include/asm-$(ARCH)/Kbuild ]; then \
+	@if [ -r $(srctree)/include/asm-$(ARCH)/Kbuild ]; then \
 	 echo  '  headers_check   - Sanity check on exported headers'; \
 	 fi
 	@echo  ''
+2-2
README
···
 	the file MAINTAINERS to see if there is a particular person associated
 	with the part of the kernel that you are having trouble with. If there
 	isn't anyone listed there, then the second best thing is to mail
-	them to me (torvalds@osdl.org), and possibly to any other relevant
-	mailing-list or to the newsgroup.
+	them to me (torvalds@linux-foundation.org), and possibly to any other
+	relevant mailing-list or to the newsgroup.
 
  - In all bug-reports, *please* tell what kernel you are talking about,
 	how to duplicate the problem, and what your setup is (use your common
+1
arch/alpha/kernel/process.c
···
  * Power off function, if any
  */
 void (*pm_power_off)(void) = machine_power_off;
+EXPORT_SYMBOL(pm_power_off);
 
 void
 cpu_idle(void)
-1
arch/arm/configs/at91sam9260ek_defconfig
···
 # CONFIG_HEADERS_CHECK is not set
 # CONFIG_RCU_TORTURE_TEST is not set
 CONFIG_DEBUG_USER=y
-# CONFIG_DEBUG_WAITQ is not set
 # CONFIG_DEBUG_ERRORS is not set
 CONFIG_DEBUG_LL=y
 # CONFIG_DEBUG_ICEDCC is not set
-1
arch/arm/configs/at91sam9261ek_defconfig
···
 # CONFIG_HEADERS_CHECK is not set
 # CONFIG_RCU_TORTURE_TEST is not set
 CONFIG_DEBUG_USER=y
-# CONFIG_DEBUG_WAITQ is not set
 # CONFIG_DEBUG_ERRORS is not set
 CONFIG_DEBUG_LL=y
 # CONFIG_DEBUG_ICEDCC is not set
+6-1
arch/arm/kernel/head.S
···
 #include <asm/thread_info.h>
 #include <asm/system.h>
 
+#if (PHYS_OFFSET & 0x001fffff)
+#error "PHYS_OFFSET must be at an even 2MiB boundary!"
+#endif
+
 #define KERNEL_RAM_VADDR	(PAGE_OFFSET + TEXT_OFFSET)
 #define KERNEL_RAM_PADDR	(PHYS_OFFSET + TEXT_OFFSET)
···
 	 * Then map first 1MB of ram in case it contains our boot params.
 	 */
 	add	r0, r4, #PAGE_OFFSET >> 18
-	orr	r6, r7, #PHYS_OFFSET
+	orr	r6, r7, #(PHYS_OFFSET & 0xff000000)
+	orr	r6, r6, #(PHYS_OFFSET & 0x00e00000)
 	str	r6, [r0]
 
 #ifdef CONFIG_XIP_KERNEL
···
 #include <asm/io.h>
 #include <asm/hardware.h>
 #include <asm/arch/at91_pio.h>
-#include <asm/arch/at91_pmc.h>
 #include <asm/arch/gpio.h>
 
 #include "generic.h"
···
 static int gpio_irq_set_wake(unsigned pin, unsigned state)
 {
 	unsigned mask = pin_to_mask(pin);
+	unsigned bank = (pin - PIN_BASE) / 32;
 
-	pin -= PIN_BASE;
-	pin /= 32;
-
-	if (unlikely(pin >= MAX_GPIO_BANKS))
+	if (unlikely(bank >= MAX_GPIO_BANKS))
 		return -EINVAL;
 
 	if (state)
-		wakeups[pin] |= mask;
+		wakeups[bank] |= mask;
 	else
-		wakeups[pin] &= ~mask;
+		wakeups[bank] &= ~mask;
+
+	set_irq_wake(gpio[bank].id, state);
 
 	return 0;
 }
···
 	for (i = 0; i < gpio_banks; i++) {
 		u32 pio = gpio[i].offset;
 
-		/*
-		 * Note: drivers should have disabled GPIO interrupts that
-		 * aren't supposed to be wakeup sources.
-		 * But that is not much good on ARM..... disable_irq() does
-		 * not update the hardware immediately, so the hardware mask
-		 * (IMR) has the wrong value (not current, too much is
-		 * permitted).
-		 *
-		 * Our workaround is to disable all non-wakeup IRQs ...
-		 * which is exactly what correct drivers asked for in the
-		 * first place!
-		 */
 		backups[i] = at91_sys_read(pio + PIO_IMR);
 		at91_sys_write(pio + PIO_IDR, backups[i]);
 		at91_sys_write(pio + PIO_IER, wakeups[i]);
 
-		if (!wakeups[i]) {
-			disable_irq_wake(gpio[i].id);
-			at91_sys_write(AT91_PMC_PCDR, 1 << gpio[i].id);
-		} else {
-			enable_irq_wake(gpio[i].id);
+		if (!wakeups[i])
+			clk_disable(gpio[i].clock);
+		else {
 #ifdef CONFIG_PM_DEBUG
-			printk(KERN_DEBUG "GPIO-%c may wake for %08x\n", "ABCD"[i], wakeups[i]);
+			printk(KERN_DEBUG "GPIO-%c may wake for %08x\n", 'A'+i, wakeups[i]);
 #endif
 		}
 	}
···
 	for (i = 0; i < gpio_banks; i++) {
 		u32 pio = gpio[i].offset;
 
+		if (!wakeups[i])
+			clk_enable(gpio[i].clock);
+
 		at91_sys_write(pio + PIO_IDR, wakeups[i]);
 		at91_sys_write(pio + PIO_IER, backups[i]);
-		at91_sys_write(AT91_PMC_PCER, 1 << gpio[i].id);
 	}
 }
+13-1
arch/arm/mach-imx/cpufreq.c
···
 	long sysclk;
 	unsigned int bclk_div = 1;
 
+	/*
+	 * Some governors do not respect CPU and policy lower limits,
+	 * which leads to bad things (division by zero etc.); ensure
+	 * that such things do not happen.
+	 */
+	if (target_freq < policy->cpuinfo.min_freq)
+		target_freq = policy->cpuinfo.min_freq;
+
+	if (target_freq < policy->min)
+		target_freq = policy->min;
+
 	freq = target_freq * 1000;
 
 	pr_debug(KERN_DEBUG "imx: requested frequency %ld Hz, mpctl0 at boot 0x%08x\n",
···
 	policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
 	policy->cpuinfo.min_freq = 8000;
 	policy->cpuinfo.max_freq = 200000;
-	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+	/* Manual states that the PLL stabilizes in two CLK32 periods */
+	policy->cpuinfo.transition_latency = 4 * 1000000000LL / CLK32;
 	return 0;
 }
+8-4
arch/arm/mach-s3c2410/gpio.c
···
 	case S3C2410_GPIO_SFN2:
 	case S3C2410_GPIO_SFN3:
 		if (pin < S3C2410_GPIO_BANKB) {
+			function -= 1;
 			function &= 1;
 			function <<= S3C2410_GPIO_OFFSET(pin);
 		} else {
···
 unsigned int s3c2410_gpio_getcfg(unsigned int pin)
 {
 	void __iomem *base = S3C24XX_GPIO_BASE(pin);
-	unsigned long mask;
+	unsigned long val = __raw_readl(base);
 
 	if (pin < S3C2410_GPIO_BANKB) {
-		mask = 1 << S3C2410_GPIO_OFFSET(pin);
+		val >>= S3C2410_GPIO_OFFSET(pin);
+		val &= 1;
+		val += 1;
 	} else {
-		mask = 3 << S3C2410_GPIO_OFFSET(pin)*2;
+		val >>= S3C2410_GPIO_OFFSET(pin)*2;
+		val &= 3;
 	}
 
-	return __raw_readl(base) & mask;
+	return val | S3C2410_GPIO_INPUT;
 }
 
 EXPORT_SYMBOL(s3c2410_gpio_getcfg);
···
 	u32 (* const fn)(int dd, int dn, int dm, u32 fpscr);
 	u32 flags;
 };
+
+#ifdef CONFIG_SMP
+extern void vfp_save_state(void *location, u32 fpexc);
+#endif
+24-2
arch/arm/vfp/vfphw.S
···
 @  r2  = faulted PC+4
 @  r9  = successful return
 @  r10 = vfp_state union
+@  r11 = CPU number
 @  lr  = failure return
 
 	.globl	vfp_support_entry
···
 	DBGSTR1	"enable %x", r10
 	ldr	r3, last_VFP_context_address
 	orr	r1, r1, #FPEXC_ENABLE	@ user FPEXC has the enable bit set
-	ldr	r4, [r3]		@ last_VFP_context pointer
+	ldr	r4, [r3, r11, lsl #2]	@ last_VFP_context pointer
 	bic	r5, r1, #FPEXC_EXCEPTION @ make sure exceptions are disabled
 	cmp	r4, r10
 	beq	check_for_exception	@ we are returning to the same
···
 					@ exceptions, so we can get at the
 					@ rest of it
 
+#ifndef CONFIG_SMP
 	@ Save out the current registers to the old thread state
+	@ No need for SMP since this is not done lazily
 
 	DBGSTR1	"save old state %p", r4
 	cmp	r4, #0
···
 	stmia	r4, {r1, r5, r6, r8}	@ save FPEXC, FPSCR, FPINST, FPINST2
 					@ and point r4 at the word at the
 					@ start of the register dump
+#endif
 
 no_old_VFP_process:
 	DBGSTR1	"load state %p", r10
-	str	r10, [r3]		@ update the last_VFP_context pointer
+	str	r10, [r3, r11, lsl #2]	@ update the last_VFP_context pointer
 					@ Load the saved state back into the VFP
 	VFPFLDMIA r10			@ reload the working registers while
 					@ FPEXC is in a safe state
···
 					@ code will raise an exception if
 					@ required.  If not, the user code will
 					@ retry the faulted instruction
+
+#ifdef CONFIG_SMP
+	.globl	vfp_save_state
+	.type	vfp_save_state, %function
+vfp_save_state:
+	@ Save the current VFP state
+	@ r0 - save location
+	@ r1 - FPEXC
+	DBGSTR1	"save VFP state %p", r0
+	VFPFMRX	r2, FPSCR		@ current status
+	VFPFMRX	r3, FPINST		@ FPINST (always there, rev0 onwards)
+	tst	r1, #FPEXC_FPV2		@ is there an FPINST2 to read?
+	VFPFMRX	r12, FPINST2, NE	@ FPINST2 if needed - avoids reading
+					@ nonexistent reg on rev0
+	VFPFSTMIA r0			@ save the working registers
+	stmia	r0, {r1, r2, r3, r12}	@ save FPEXC, FPSCR, FPINST, FPINST2
+	mov	pc, lr
+#endif
 
 last_VFP_context_address:
 	.word	last_VFP_context
+26-4
arch/arm/vfp/vfpmodule.c
···
 void vfp_support_entry(void);
 
 void (*vfp_vector)(void) = vfp_testing_entry;
-union vfp_state *last_VFP_context;
+union vfp_state *last_VFP_context[NR_CPUS];
 
 /*
  * Dual-use variable.
···
 {
 	struct thread_info *thread = v;
 	union vfp_state *vfp;
+	__u32 cpu = thread->cpu;
 
 	if (likely(cmd == THREAD_NOTIFY_SWITCH)) {
+		u32 fpexc = fmrx(FPEXC);
+
+#ifdef CONFIG_SMP
+		/*
+		 * On SMP, if VFP is enabled, save the old state in
+		 * case the thread migrates to a different CPU. The
+		 * restoring is done lazily.
+		 */
+		if ((fpexc & FPEXC_ENABLE) && last_VFP_context[cpu]) {
+			vfp_save_state(last_VFP_context[cpu], fpexc);
+			last_VFP_context[cpu]->hard.cpu = cpu;
+		}
+		/*
+		 * Thread migration, just force the reloading of the
+		 * state on the new CPU in case the VFP registers
+		 * contain stale data.
+		 */
+		if (thread->vfpstate.hard.cpu != cpu)
+			last_VFP_context[cpu] = NULL;
+#endif
+
 		/*
 		 * Always disable VFP so we can lazily save/restore the
 		 * old state.
 		 */
-		fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_ENABLE);
+		fmxr(FPEXC, fpexc & ~FPEXC_ENABLE);
 		return NOTIFY_DONE;
 	}
···
 	}
 
 	/* flush and release case: Per-thread VFP cleanup. */
-	if (last_VFP_context == vfp)
-		last_VFP_context = NULL;
+	if (last_VFP_context[cpu] == vfp)
+		last_VFP_context[cpu] = NULL;
 
 	return NOTIFY_DONE;
 }
+27-12
arch/avr32/configs/atstk1002_defconfig
···
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.19-rc2
-# Fri Oct 20 11:52:37 2006
+# Linux kernel version: 2.6.20-rc6
+# Fri Jan 26 13:12:59 2007
 #
 CONFIG_AVR32=y
 CONFIG_GENERIC_HARDIRQS=y
···
 CONFIG_GENERIC_IRQ_PROBE=y
 CONFIG_RWSEM_GENERIC_SPINLOCK=y
 CONFIG_GENERIC_TIME=y
+# CONFIG_ARCH_HAS_ILOG2_U32 is not set
+# CONFIG_ARCH_HAS_ILOG2_U64 is not set
 CONFIG_GENERIC_HWEIGHT=y
 CONFIG_GENERIC_CALIBRATE_DELAY=y
 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
···
 # CONFIG_UTS_NS is not set
 CONFIG_AUDIT=y
 # CONFIG_IKCONFIG is not set
+CONFIG_SYSFS_DEPRECATED=y
 CONFIG_RELAY=y
 CONFIG_INITRAMFS_SOURCE=""
 CONFIG_CC_OPTIMIZE_FOR_SIZE=y
···
 #
 # Block layer
 #
 CONFIG_BLOCK=y
+# CONFIG_LBD is not set
 # CONFIG_BLK_DEV_IO_TRACE is not set
+# CONFIG_LSF is not set
 
 #
 # IO Schedulers
···
 # CONFIG_OWNERSHIP_TRACE is not set
 # CONFIG_HZ_100 is not set
 CONFIG_HZ_250=y
+# CONFIG_HZ_300 is not set
 # CONFIG_HZ_1000 is not set
 CONFIG_HZ=250
 CONFIG_CMDLINE=""
···
 # CONFIG_TCP_CONG_ADVANCED is not set
 CONFIG_TCP_CONG_CUBIC=y
 CONFIG_DEFAULT_TCP_CONG="cubic"
+# CONFIG_TCP_MD5SIG is not set
 # CONFIG_IPV6 is not set
 # CONFIG_INET6_XFRM_TUNNEL is not set
 # CONFIG_INET6_TUNNEL is not set
···
 #
 # User Modules And Translation Layers
 #
 CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLKDEVS=y
 CONFIG_MTD_BLOCK=y
 # CONFIG_FTL is not set
 # CONFIG_NFTL is not set
···
 #
 # Misc devices
 #
-# CONFIG_SGI_IOC4 is not set
 # CONFIG_TIFM_CORE is not set
 
 #
···
 #
 # PHY device support
 #
+# CONFIG_PHYLIB is not set
 
 #
 # Ethernet (10 or 100Mbit)
 #
-# CONFIG_NET_ETHERNET is not set
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+CONFIG_MACB=y
 
 #
 # Ethernet (1000 Mbit)
···
 # CONFIG_GEN_RTC is not set
 # CONFIG_DTLK is not set
 # CONFIG_R3964 is not set
-
-#
-# Ftape, the floppy tape device driver
-#
 # CONFIG_RAW_DRIVER is not set
 
 #
···
 #
 
 #
+# Virtualization
+#
+
+#
 # File systems
 #
 CONFIG_EXT2_FS=m
···
 # CONFIG_BEFS_FS is not set
 # CONFIG_BFS_FS is not set
 # CONFIG_EFS_FS is not set
-# CONFIG_JFFS_FS is not set
 CONFIG_JFFS2_FS=y
 CONFIG_JFFS2_FS_DEBUG=0
 CONFIG_JFFS2_FS_WRITEBUFFER=y
···
 CONFIG_NLS_UTF8=m
 
 #
+# Distributed Lock Manager
+#
+# CONFIG_DLM is not set
+
+#
 # Kernel hacking
 #
 CONFIG_TRACE_IRQFLAGS_SUPPORT=y
···
 CONFIG_ENABLE_MUST_CHECK=y
 CONFIG_MAGIC_SYSRQ=y
 # CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_FS=y
+# CONFIG_HEADERS_CHECK is not set
 CONFIG_DEBUG_KERNEL=y
 CONFIG_LOG_BUF_SHIFT=14
 CONFIG_DETECT_SOFTLOCKUP=y
···
 # CONFIG_DEBUG_KOBJECT is not set
 CONFIG_DEBUG_BUGVERBOSE=y
 # CONFIG_DEBUG_INFO is not set
-CONFIG_DEBUG_FS=y
 # CONFIG_DEBUG_VM is not set
 # CONFIG_DEBUG_LIST is not set
 CONFIG_FRAME_POINTER=y
-# CONFIG_UNWIND_INFO is not set
 CONFIG_FORCED_INLINING=y
-# CONFIG_HEADERS_CHECK is not set
 # CONFIG_RCU_TORTURE_TEST is not set
 # CONFIG_KPROBES is not set
···
 #
 # Library routines
 #
+CONFIG_BITREVERSE=y
 CONFIG_CRC_CCITT=m
 # CONFIG_CRC16 is not set
 CONFIG_CRC32=y
···
 CONFIG_ZLIB_INFLATE=y
 CONFIG_ZLIB_DEFLATE=y
 CONFIG_PLIST=y
+CONFIG_IOMAP_COPY=y
···
 }
 
 /*
+ * Wrap all the virtual calls in a way that forces the parameters on the stack.
+ */
+
+#define efi_call_virt(f, args...) \
+	((efi_##f##_t __attribute__((regparm(0)))*)efi.systab->runtime->f)(args)
+
+static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+{
+	return efi_call_virt(get_time, tm, tc);
+}
+
+static efi_status_t virt_efi_set_time(efi_time_t *tm)
+{
+	return efi_call_virt(set_time, tm);
+}
+
+static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled,
+					     efi_bool_t *pending,
+					     efi_time_t *tm)
+{
+	return efi_call_virt(get_wakeup_time, enabled, pending, tm);
+}
+
+static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled,
+					     efi_time_t *tm)
+{
+	return efi_call_virt(set_wakeup_time, enabled, tm);
+}
+
+static efi_status_t virt_efi_get_variable(efi_char16_t *name,
+					  efi_guid_t *vendor, u32 *attr,
+					  unsigned long *data_size, void *data)
+{
+	return efi_call_virt(get_variable, name, vendor, attr, data_size, data);
+}
+
+static efi_status_t virt_efi_get_next_variable(unsigned long *name_size,
+					       efi_char16_t *name,
+					       efi_guid_t *vendor)
+{
+	return efi_call_virt(get_next_variable, name_size, name, vendor);
+}
+
+static efi_status_t virt_efi_set_variable(efi_char16_t *name,
+					  efi_guid_t *vendor,
+					  unsigned long attr,
+					  unsigned long data_size, void *data)
+{
+	return efi_call_virt(set_variable, name, vendor, attr, data_size, data);
+}
+
+static efi_status_t virt_efi_get_next_high_mono_count(u32 *count)
+{
+	return efi_call_virt(get_next_high_mono_count, count);
+}
+
+static void virt_efi_reset_system(int reset_type, efi_status_t status,
+				  unsigned long data_size,
+				  efi_char16_t *data)
+{
+	efi_call_virt(reset_system, reset_type, status, data_size, data);
+}
+
+/*
  * This function will switch the EFI runtime services to virtual mode.
  * Essentially, look through the EFI memmap and map every region that
  * has the runtime attribute bit set in its memory descriptor and update
···
 	 * pointers in the runtime service table to the new virtual addresses.
 	 */
 
-	efi.get_time = (efi_get_time_t *) efi.systab->runtime->get_time;
-	efi.set_time = (efi_set_time_t *) efi.systab->runtime->set_time;
-	efi.get_wakeup_time = (efi_get_wakeup_time_t *)
-					efi.systab->runtime->get_wakeup_time;
-	efi.set_wakeup_time = (efi_set_wakeup_time_t *)
-					efi.systab->runtime->set_wakeup_time;
-	efi.get_variable = (efi_get_variable_t *)
-					efi.systab->runtime->get_variable;
-	efi.get_next_variable = (efi_get_next_variable_t *)
-					efi.systab->runtime->get_next_variable;
-	efi.set_variable = (efi_set_variable_t *)
-					efi.systab->runtime->set_variable;
-	efi.get_next_high_mono_count = (efi_get_next_high_mono_count_t *)
-					efi.systab->runtime->get_next_high_mono_count;
-	efi.reset_system = (efi_reset_system_t *)
-					efi.systab->runtime->reset_system;
+	efi.get_time = virt_efi_get_time;
+	efi.set_time = virt_efi_set_time;
+	efi.get_wakeup_time = virt_efi_get_wakeup_time;
+	efi.set_wakeup_time = virt_efi_set_wakeup_time;
+	efi.get_variable = virt_efi_get_variable;
+	efi.get_next_variable = virt_efi_get_next_variable;
+	efi.set_variable = virt_efi_set_variable;
+	efi.get_next_high_mono_count = virt_efi_get_next_high_mono_count;
+	efi.reset_system = virt_efi_reset_system;
 }
 
 void __init
+4
arch/i386/kernel/entry.S
···302302 pushl $(__USER_CS)303303 CFI_ADJUST_CFA_OFFSET 4304304 /*CFI_REL_OFFSET cs, 0*/305305+#ifndef CONFIG_COMPAT_VDSO305306 /*306307 * Push current_thread_info()->sysenter_return to the stack.307308 * A tiny bit of offset fixup is necessary - 4*4 means the 4 words308309 * pushed above; +8 corresponds to copy_thread's esp0 setting.309310 */310311 pushl (TI_sysenter_return-THREAD_SIZE+8+4*4)(%esp)312312+#else313313+ pushl $SYSENTER_RETURN314314+#endif311315 CFI_ADJUST_CFA_OFFSET 4312316 CFI_REL_OFFSET eip, 0313317
+19-13
arch/i386/kernel/io_apic.c
···1227122712281228static int __assign_irq_vector(int irq)12291229{12301230- static int current_vector = FIRST_DEVICE_VECTOR, offset = 0;12311231- int vector;12301230+ static int current_vector = FIRST_DEVICE_VECTOR, current_offset = 0;12311231+ int vector, offset, i;1232123212331233 BUG_ON((unsigned)irq >= NR_IRQ_VECTORS);1234123412351235 if (irq_vector[irq] > 0)12361236 return irq_vector[irq];1237123712381238- current_vector += 8;12391239- if (current_vector == SYSCALL_VECTOR)12401240- current_vector += 8;12411241-12421242- if (current_vector >= FIRST_SYSTEM_VECTOR) {12431243- offset++;12441244- if (!(offset % 8))12451245- return -ENOSPC;12461246- current_vector = FIRST_DEVICE_VECTOR + offset;12471247- }12481248-12491238 vector = current_vector;12391239+ offset = current_offset;12401240+next:12411241+ vector += 8;12421242+ if (vector >= FIRST_SYSTEM_VECTOR) {12431243+ offset = (offset + 1) % 8;12441244+ vector = FIRST_DEVICE_VECTOR + offset;12451245+ }12461246+ if (vector == current_vector)12471247+ return -ENOSPC;12481248+ if (vector == SYSCALL_VECTOR)12491249+ goto next;12501250+ for (i = 0; i < NR_IRQ_VECTORS; i++)12511251+ if (irq_vector[i] == vector)12521252+ goto next;12531253+12541254+ current_vector = vector;12551255+ current_offset = offset;12501256 irq_vector[irq] = vector;1251125712521258 return vector;
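The rewritten __assign_irq_vector() above replaces the old bump-only allocator with a search that rotates through priority offsets on wraparound, skips the syscall vector, refuses to hand out an in-use vector, and detects a full wrap. A reduced standalone model of the same search (constants and the -1 error return are illustrative stand-ins, not the kernel's):

```c
/* Reduced model of the new round-robin vector search: advance in steps
 * of 8, rotate the priority offset 0..7 on wraparound, skip the syscall
 * vector and any already-assigned vector, and fail once the search has
 * wrapped back to its starting point.  Constants are illustrative. */
#define FIRST_DEVICE_VECTOR 0x31
#define FIRST_SYSTEM_VECTOR 0xef
#define SYSCALL_VECTOR      0x80
#define NR_IRQ_VECTORS      256

static int irq_vector[NR_IRQ_VECTORS];
static int current_vector = FIRST_DEVICE_VECTOR, current_offset;

static int assign_irq_vector(int irq)
{
	int vector = current_vector, offset = current_offset, i;

	if (irq_vector[irq] > 0)
		return irq_vector[irq];         /* already assigned */
next:
	vector += 8;
	if (vector >= FIRST_SYSTEM_VECTOR) {
		offset = (offset + 1) % 8;
		vector = FIRST_DEVICE_VECTOR + offset;
	}
	if (vector == current_vector)
		return -1;                      /* wrapped: -ENOSPC in the kernel */
	if (vector == SYSCALL_VECTOR)
		goto next;
	for (i = 0; i < NR_IRQ_VECTORS; i++)
		if (irq_vector[i] == vector)
			goto next;              /* never hand out a vector twice */

	current_vector = vector;
	current_offset = offset;
	irq_vector[irq] = vector;
	return vector;
}
```

The duplicate scan is what the old code lacked: after an offset rotation, the bumping pass could previously land on a vector that was already assigned.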
+1-7
arch/i386/kernel/nmi.c
···310310311311 if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE))312312 return 0;313313- /*314314- * If any other x86 CPU has a local APIC, then315315- * please test the NMI stuff there and send me the316316- * missing bits. Right now Intel P6/P4 and AMD K7 only.317317- */318318- if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0))319319- return 0; /* no lapic support */313313+320314 nmi_watchdog = nmi;321315 return 1;322316}
+8-1
arch/i386/kernel/paravirt.c
···566566 .irq_enable_sysexit = native_irq_enable_sysexit,567567 .iret = native_iret,568568};569569-EXPORT_SYMBOL(paravirt_ops);569569+570570+/*571571+ * NOTE: CONFIG_PARAVIRT is experimental and the paravirt_ops572572+ * semantics are subject to change. Hence we only do this573573+ * internal-only export of this, until it gets sorted out and574574+ * all lowlevel CPU ops used by modules are separately exported.575575+ */576576+EXPORT_SYMBOL_GPL(paravirt_ops);
+9-5
arch/i386/kernel/sysenter.c
···7979#ifdef CONFIG_COMPAT_VDSO8080 __set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY);8181 printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO));8282-#else8383- /*8484- * In the non-compat case the ELF coredumping code needs the fixmap:8585- */8686- __set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_KERNEL_RO);8782#endif88838984 if (!boot_cpu_has(X86_FEATURE_SEP)) {···95100 return 0;96101}97102103103+#ifndef CONFIG_COMPAT_VDSO98104static struct page *syscall_nopage(struct vm_area_struct *vma,99105 unsigned long adr, int *type)100106{···142146 vma->vm_end = addr + PAGE_SIZE;143147 /* MAYWRITE to allow gdb to COW and set breakpoints */144148 vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE;149149+ /*150150+ * Make sure the vDSO gets into every core dump.151151+ * Dumping its contents makes post-mortem fully interpretable later152152+ * without matching up the same kernel and hardware config to see153153+ * what PC values meant.154154+ */155155+ vma->vm_flags |= VM_ALWAYSDUMP;145156 vma->vm_flags |= mm->def_flags;146157 vma->vm_page_prot = protection_map[vma->vm_flags & 7];147158 vma->vm_ops = &syscall_vm_ops;···190187{191188 return 0;192189}190190+#endif
+1-1
arch/i386/mach-default/setup.c
···102102 * along the MCA bus. Use this to hook into that chain if you will need103103 * it.104104 **/105105-void __init mca_nmi_hook(void)105105+void mca_nmi_hook(void)106106{107107 /* If I recall correctly, there's a whole bunch of other things that108108 * we can do to check for NMI problems, but that's all I know about
+3
arch/ia64/kernel/acpi.c
···609609610610void acpi_unregister_gsi(u32 gsi)611611{612612+ if (acpi_irq_model == ACPI_IRQ_MODEL_PLATFORM)613613+ return;614614+612615 iosapic_unregister_intr(gsi);613616}614617
+3
arch/ia64/kernel/irq.c
···122122 for (irq=0; irq < NR_IRQS; irq++) {123123 desc = irq_desc + irq;124124125125+ if (desc->status == IRQ_DISABLED)126126+ continue;127127+125128 /*126129 * No handling for now.127130 * TBD: Implement a disable function so we can now
+14
arch/mips/Kconfig
···15681568 depends on MIPS_MT15691569 default y1570157015711571+config MIPS_MT_SMTC_INSTANT_REPLAY15721572+ bool "Low-latency Dispatch of Deferred SMTC IPIs"15731573+ depends on MIPS_MT_SMTC15741574+ default y15751575+ help15761576+ SMTC pseudo-interrupts between TCs are deferred and queued15771577+ if the target TC is interrupt-inhibited (IXMT). In the first15781578+ SMTC prototypes, these queued IPIs were serviced on return15791579+ to user mode, or on entry into the kernel idle loop. The15801580+ INSTANT_REPLAY option dispatches them as part of local_irq_restore()15811581+ processing, which adds runtime overhead (hence the option to turn15821582+ it off), but ensures that IPIs are handled promptly even under15831583+ heavy I/O interrupt load.15841584+15711585config MIPS_VPE_LOADER_TOM15721586 bool "Load VPE program into memory hidden from linux"15731587 depends on MIPS_VPE_LOADER
···44#include <linux/sched.h>55#include <linux/cpumask.h>66#include <linux/interrupt.h>77+#include <linux/module.h>7889#include <asm/cpu.h>910#include <asm/processor.h>···271270 * of their initialization in smtc_cpu_setup().272271 */273272274274- tlbsiz = tlbsiz & 0x3f; /* MIPS32 limits TLB indices to 64 */275275- cpu_data[0].tlbsize = tlbsiz;273273+ /* MIPS32 limits TLB indices to 64 */274274+ if (tlbsiz > 64)275275+ tlbsiz = 64;276276+ cpu_data[0].tlbsize = current_cpu_data.tlbsize = tlbsiz;276277 smtc_status |= SMTC_TLB_SHARED;278278+ local_flush_tlb_all();277279278280 printk("TLB of %d entry pairs shared by %d VPEs\n",279281 tlbsiz, vpes);···10211017 * SMTC-specific hacks invoked from elsewhere in the kernel.10221018 */1023101910201020+void smtc_ipi_replay(void)10211021+{10221022+ /*10231023+ * To the extent that we've ever turned interrupts off,10241024+ * we may have accumulated deferred IPIs. This is subtle.10251025+ * If we use the smtc_ipi_qdepth() macro, we'll get an10261026+ * exact number - but we'll also disable interrupts10271027+ * and create a window of failure where a new IPI gets10281028+ * queued after we test the depth but before we re-enable10291029+ * interrupts. So long as IXMT never gets set, however,10301030+ * we should be OK: If we pick up something and dispatch10311031+ * it here, that's great. 
If we see nothing, but concurrent10321032+ * with this operation, another TC sends us an IPI, IXMT10331033+ * is clear, and we'll handle it as a real pseudo-interrupt10341034+ * and not a pseudo-pseudo interrupt.10351035+ */10361036+ if (IPIQ[smp_processor_id()].depth > 0) {10371037+ struct smtc_ipi *pipi;10381038+ extern void self_ipi(struct smtc_ipi *);10391039+10401040+ while ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()]))) {10411041+ self_ipi(pipi);10421042+ smtc_cpu_stats[smp_processor_id()].selfipis++;10431043+ }10441044+ }10451045+}10461046+10471047+EXPORT_SYMBOL(smtc_ipi_replay);10481048+10241049void smtc_idle_loop_hook(void)10251050{10261051#ifdef SMTC_IDLE_HOOK_DEBUG···11461113 if (pdb_msg != &id_ho_db_msg[0])11471114 printk("CPU%d: %s", smp_processor_id(), id_ho_db_msg);11481115#endif /* SMTC_IDLE_HOOK_DEBUG */11491149- /*11501150- * To the extent that we've ever turned interrupts off,11511151- * we may have accumulated deferred IPIs. This is subtle.11521152- * If we use the smtc_ipi_qdepth() macro, we'll get an11531153- * exact number - but we'll also disable interrupts11541154- * and create a window of failure where a new IPI gets11551155- * queued after we test the depth but before we re-enable11561156- * interrupts. So long as IXMT never gets set, however,11571157- * we should be OK: If we pick up something and dispatch11581158- * it here, that's great. If we see nothing, but concurrent11591159- * with this operation, another TC sends us an IPI, IXMT11601160- * is clear, and we'll handle it as a real pseudo-interrupt11611161- * and not a pseudo-pseudo interrupt.11621162- */11631163- if (IPIQ[smp_processor_id()].depth > 0) {11641164- struct smtc_ipi *pipi;11651165- extern void self_ipi(struct smtc_ipi *);1166111611671167- if ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()])) != NULL) {11681168- self_ipi(pipi);11691169- smtc_cpu_stats[smp_processor_id()].selfipis++;11701170- }11711171- }11171117+ /*11181118+ * Replay any accumulated deferred IPIs. 
If "Instant Replay"11191119+ * is in use, there should never be any.11201120+ */11211121+#ifndef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY11221122+ smtc_ipi_replay();11231123+#endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */11721124}1173112511741126void smtc_soft_dump(void)
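Beyond relocating the deferred-IPI logic into smtc_ipi_replay(), the patch changes one detail: the queue is drained in a while loop, where the old idle-hook code dispatched at most one deferred IPI per pass. A toy model of the drain (the queue type and the self_ipi() stand-in are invented for illustration):

```c
#include <stddef.h>

/* Toy model of smtc_ipi_replay()'s drain loop.  The queue below stands
 * in for the kernel's per-CPU IPIQ[]; delivering an IPI is modeled as a
 * counter bump where the kernel calls self_ipi().  The while loop (not
 * a single if, as in the old idle-hook code) drains the whole backlog. */
struct ipi {
	struct ipi *next;
};

struct ipi_queue {
	struct ipi *head;
	int depth;
};

static struct ipi *ipi_dq(struct ipi_queue *q)
{
	struct ipi *p = q->head;

	if (p) {
		q->head = p->next;
		q->depth--;
	}
	return p;
}

static int ipi_replay(struct ipi_queue *q)
{
	int selfipis = 0;
	struct ipi *p;

	if (q->depth > 0)
		while ((p = ipi_dq(q)) != NULL)
			selfipis++;             /* self_ipi(p) in the kernel */
	return selfipis;
}

/* enqueue n nodes, then drain; returns the number delivered */
static int demo_drain(int n)
{
	static struct ipi nodes[16];
	struct ipi_queue q = { NULL, 0 };
	int i;

	for (i = 0; i < n && i < 16; i++) {
		nodes[i].next = q.head;
		q.head = &nodes[i];
		q.depth++;
	}
	return ipi_replay(&q);
}
```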
···2828extern unsigned long marvell_base;2929extern unsigned long bus_clock;30303131-#ifdef CONFIG_GALILLEO_GT64240_ETH3131+#ifdef CONFIG_GALILEO_GT64240_ETH3232extern unsigned char prom_mac_addr_base[6];3333#endif3434···6161 mips_machgroup = MACH_GROUP_MOMENCO;6262 mips_machtype = MACH_MOMENCO_OCELOT_G;63636464-#ifdef CONFIG_GALILLEO_GT64240_ETH6464+#ifdef CONFIG_GALILEO_GT64240_ETH6565 /* get the base MAC address for on-board ethernet ports */6666 memcpy(prom_mac_addr_base, (void*)0xfc807cf2, 6);6767#endif
+2-2
arch/mips/momentum/ocelot_g/setup.c
···64646565#include "ocelot_pld.h"66666767-#ifdef CONFIG_GALILLEO_GT64240_ETH6767+#ifdef CONFIG_GALILEO_GT64240_ETH6868extern unsigned char prom_mac_addr_base[6];6969#endif7070···185185 /* do handoff reconfiguration */186186 PMON_v2_setup();187187188188-#ifdef CONFIG_GALILLEO_GT64240_ETH188188+#ifdef CONFIG_GALILEO_GT64240_ETH189189 /* get the mac addr */190190 memcpy(prom_mac_addr_base, (void*)0xfc807cf2, 6);191191#endif
+9-3
arch/mips/vr41xx/common/irq.c
···11/*22 * Interrupt handing routines for NEC VR4100 series.33 *44- * Copyright (C) 2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>44+ * Copyright (C) 2005-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>55 *66 * This program is free software; you can redistribute it and/or modify77 * it under the terms of the GNU General Public License as published by···7373 if (cascade->get_irq != NULL) {7474 unsigned int source_irq = irq;7575 desc = irq_desc + source_irq;7676- desc->chip->ack(source_irq);7676+ if (desc->chip->mask_ack)7777+ desc->chip->mask_ack(source_irq);7878+ else {7979+ desc->chip->mask(source_irq);8080+ desc->chip->ack(source_irq);8181+ }7782 irq = cascade->get_irq(irq);7883 if (irq < 0)7984 atomic_inc(&irq_err_count);8085 else8186 irq_dispatch(irq);8282- desc->chip->end(source_irq);8787+ if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask)8888+ desc->chip->unmask(source_irq);8389 } else8490 do_IRQ(irq);8591}
+6-2
arch/powerpc/Kconfig
···492492 select PPC_NATIVE493493 select PPC_RTAS494494 select MMIO_NVRAM495495+ select ATA_NONSTANDARD if ATA495496 default n496497 help497498 This option enables support for the Maple 970FX Evaluation Board.···534533 select UDBG_RTAS_CONSOLE535534536535config PPC_PS3537537- bool "Sony PS3"536536+ bool "Sony PS3 (incomplete)"538537 depends on PPC_MULTIPLATFORM && PPC64539538 select PPC_CELL540539 help541540 This option enables support for the Sony PS3 game console542541 and other platforms using the PS3 hypervisor.542542+ Support for this platform is not yet complete, so543543+ enabling this will not result in a bootable kernel on a544544+ PS3 system.543545544546config PPC_CELLEB545547 bool "Toshiba's Cell Reference Set 'Celleb' Architecture"···1206120212071203config KPROBES12081204 bool "Kprobes (EXPERIMENTAL)"12091209- depends on PPC64 && KALLSYMS && EXPERIMENTAL && MODULES12051205+ depends on !BOOKE && !4xx && KALLSYMS && EXPERIMENTAL && MODULES12101206 help12111207 Kprobes allows you to trap at almost any kernel address and12121208 execute a callback function. register_kprobe() establishes
+6-2
arch/powerpc/kernel/kprobes.c
···4646 if ((unsigned long)p->addr & 0x03) {4747 printk("Attempt to register kprobe at an unaligned address\n");4848 ret = -EINVAL;4949- } else if (IS_MTMSRD(insn) || IS_RFID(insn)) {5050- printk("Cannot register a kprobe on rfid or mtmsrd\n");4949+ } else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) {5050+ printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n");5151 ret = -EINVAL;5252 }5353···483483 memcpy(&kcb->jprobe_saved_regs, regs, sizeof(struct pt_regs));484484485485 /* setup return addr to the jprobe handler routine */486486+#ifdef CONFIG_PPC64486487 regs->nip = (unsigned long)(((func_descr_t *)jp->entry)->entry);487488 regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);489489+#else490490+ regs->nip = (unsigned long)jp->entry;491491+#endif488492489493 return 1;490494}
+1-1
arch/powerpc/kernel/pci_64.c
···1429142914301430 for (ln = pci_root_buses.next; ln != &pci_root_buses; ln = ln->next) {14311431 bus = pci_bus_b(ln);14321432- if (in_bus >= bus->number && in_bus < (bus->number + bus->subordinate))14321432+ if (in_bus >= bus->number && in_bus <= bus->subordinate)14331433 break;14341434 bus = NULL;14351435 }
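The one-character pci_64 fix above is easy to misread: a bridge's bus range is [number, subordinate], inclusive at both ends, while the old code compared against their sum. The corrected containment test in isolation:

```c
/* A bus's range runs from its primary number through its subordinate
 * number, inclusive, so containment must use <= on both ends.  The old
 * test compared in_bus against (number + subordinate), a sum that is
 * not a bound at all. */
static int bus_range_contains(unsigned char number,
			      unsigned char subordinate,
			      unsigned char in_bus)
{
	return in_bus >= number && in_bus <= subordinate;
}
```

The inclusive upper bound matters for a leaf bus, whose subordinate equals its own number.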
+74-35
arch/powerpc/kernel/traps.c
···535535 }536536}537537538538-static void parse_fpe(struct pt_regs *regs)538538+static inline int __parse_fpscr(unsigned long fpscr)539539{540540- int code = 0;541541- unsigned long fpscr;542542-543543- flush_fp_to_thread(current);544544-545545- fpscr = current->thread.fpscr.val;540540+ int ret = 0;546541547542 /* Invalid operation */548543 if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX))549549- code = FPE_FLTINV;544544+ ret = FPE_FLTINV;550545551546 /* Overflow */552547 else if ((fpscr & FPSCR_OE) && (fpscr & FPSCR_OX))553553- code = FPE_FLTOVF;548548+ ret = FPE_FLTOVF;554549555550 /* Underflow */556551 else if ((fpscr & FPSCR_UE) && (fpscr & FPSCR_UX))557557- code = FPE_FLTUND;552552+ ret = FPE_FLTUND;558553559554 /* Divide by zero */560555 else if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX))561561- code = FPE_FLTDIV;556556+ ret = FPE_FLTDIV;562557563558 /* Inexact result */564559 else if ((fpscr & FPSCR_XE) && (fpscr & FPSCR_XX))565565- code = FPE_FLTRES;560560+ ret = FPE_FLTRES;561561+562562+ return ret;563563+}564564+565565+static void parse_fpe(struct pt_regs *regs)566566+{567567+ int code = 0;568568+569569+ flush_fp_to_thread(current);570570+571571+ code = __parse_fpscr(current->thread.fpscr.val);566572567573 _exception(SIGFPE, regs, code, regs->nip);568574}···745739 extern int do_mathemu(struct pt_regs *regs);746740747741 /* We can now get here via a FP Unavailable exception if the core748748- * has no FPU, in that case no reason flags will be set */749749-#ifdef CONFIG_MATH_EMULATION750750- /* (reason & REASON_ILLEGAL) would be the obvious thing here,751751- * but there seems to be a hardware bug on the 405GP (RevD)752752- * that means ESR is sometimes set incorrectly - either to753753- * ESR_DST (!?) or 0. In the process of chasing this with the754754- * hardware people - not sure if it can happen on any illegal755755- * instruction or only on FP instructions, whether there is a756756- * pattern to occurences etc. 
-dgibson 31/Mar/2003 */757757- if (!(reason & REASON_TRAP) && do_mathemu(regs) == 0) {758758- emulate_single_step(regs);759759- return;760760- }761761-#endif /* CONFIG_MATH_EMULATION */742742+ * has no FPU, in that case the reason flags will be 0 */762743763744 if (reason & REASON_FP) {764745 /* IEEE FP exception */···770777 }771778772779 local_irq_enable();780780+781781+#ifdef CONFIG_MATH_EMULATION782782+ /* (reason & REASON_ILLEGAL) would be the obvious thing here,783783+ * but there seems to be a hardware bug on the 405GP (RevD)784784+ * that means ESR is sometimes set incorrectly - either to785785+ * ESR_DST (!?) or 0. In the process of chasing this with the786786+ * hardware people - not sure if it can happen on any illegal787787+ * instruction or only on FP instructions, whether there is a788788+ * pattern to occurences etc. -dgibson 31/Mar/2003 */789789+ switch (do_mathemu(regs)) {790790+ case 0:791791+ emulate_single_step(regs);792792+ return;793793+ case 1: {794794+ int code = 0;795795+ code = __parse_fpscr(current->thread.fpscr.val);796796+ _exception(SIGFPE, regs, code, regs->nip);797797+ return;798798+ }799799+ case -EFAULT:800800+ _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);801801+ return;802802+ }803803+ /* fall through on any other errors */804804+#endif /* CONFIG_MATH_EMULATION */773805774806 /* Try to emulate it if we should. 
*/775807 if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) {···909891910892#ifdef CONFIG_MATH_EMULATION911893 errcode = do_mathemu(regs);894894+895895+ switch (errcode) {896896+ case 0:897897+ emulate_single_step(regs);898898+ return;899899+ case 1: {900900+ int code = 0;901901+ code = __parse_fpscr(current->thread.fpscr.val);902902+ _exception(SIGFPE, regs, code, regs->nip);903903+ return;904904+ }905905+ case -EFAULT:906906+ _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);907907+ return;908908+ default:909909+ _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);910910+ return;911911+ }912912+912913#else913914 errcode = Soft_emulate_8xx(regs);914914-#endif915915- if (errcode) {916916- if (errcode > 0)917917- _exception(SIGFPE, regs, 0, 0);918918- else if (errcode == -EFAULT)919919- _exception(SIGSEGV, regs, 0, 0);920920- else921921- _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);922922- } else915915+ switch (errcode) {916916+ case 0:923917 emulate_single_step(regs);918918+ return;919919+ case 1:920920+ _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);921921+ return;922922+ case -EFAULT:923923+ _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);924924+ return;925925+ }926926+#endif924927}925928#endif /* CONFIG_8xx */926929
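Both the parse_fpe() path and the new math-emulation error paths above now funnel through the shared __parse_fpscr() helper, which classifies a trap by the first cause whose enable bit and exception bit are both set, in priority order. A reduced model with illustrative bit positions and code values (not the real FPSCR layout or FPE_* constants):

```c
/* Reduced model of __parse_fpscr(): each IEEE cause is reported only
 * when both its enable bit and its sticky exception bit are set, and
 * the first match in priority order wins.  Bit positions and the code
 * values are illustrative placeholders, not the kernel's definitions. */
enum fpe_code { FPE_NONE, FPE_FLTINV, FPE_FLTOVF, FPE_FLTUND,
		FPE_FLTDIV, FPE_FLTRES };

#define FPSCR_VE 0x200u		/* invalid operation: enable */
#define FPSCR_VX 0x100u		/*                    exception */
#define FPSCR_OE 0x080u		/* overflow */
#define FPSCR_OX 0x040u
#define FPSCR_UE 0x020u		/* underflow */
#define FPSCR_UX 0x010u
#define FPSCR_ZE 0x008u		/* divide by zero */
#define FPSCR_ZX 0x004u
#define FPSCR_XE 0x002u		/* inexact result */
#define FPSCR_XX 0x001u

static int parse_fpscr(unsigned long fpscr)
{
	int ret = FPE_NONE;

	if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX))
		ret = FPE_FLTINV;
	else if ((fpscr & FPSCR_OE) && (fpscr & FPSCR_OX))
		ret = FPE_FLTOVF;
	else if ((fpscr & FPSCR_UE) && (fpscr & FPSCR_UX))
		ret = FPE_FLTUND;
	else if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX))
		ret = FPE_FLTDIV;
	else if ((fpscr & FPSCR_XE) && (fpscr & FPSCR_XX))
		ret = FPE_FLTRES;
	return ret;
}
```

Note that an exception bit alone classifies as nothing: an untrapped cause (enable bit clear) never produces a SIGFPE code.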
+7
arch/powerpc/kernel/vdso.c
···284284 * pages though285285 */286286 vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC;287287+ /*288288+ * Make sure the vDSO gets into every core dump.289289+ * Dumping its contents makes post-mortem fully interpretable later290290+ * without matching up the same kernel and hardware config to see291291+ * what PC values meant.292292+ */293293+ vma->vm_flags |= VM_ALWAYSDUMP;287294 vma->vm_flags |= mm->def_flags;288295 vma->vm_page_prot = protection_map[vma->vm_flags & 0x7];289296 vma->vm_ops = &vdso_vmops;
+1-1
arch/powerpc/lib/Makefile
···1616 strcase.o1717obj-$(CONFIG_QUICC_ENGINE) += rheap.o1818obj-$(CONFIG_XMON) += sstep.o1919+obj-$(CONFIG_KPROBES) += sstep.o1920obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o20212122ifeq ($(CONFIG_PPC64),y)2223obj-$(CONFIG_SMP) += locks.o2323-obj-$(CONFIG_DEBUG_KERNEL) += sstep.o2424endif25252626# Temporary hack until we have migrated to asm-powerpc
+1
arch/sparc/kernel/process.c
···5454 * handler when auxio is not present-- unused for now...5555 */5656void (*pm_power_off)(void) = machine_power_off;5757+EXPORT_SYMBOL(pm_power_off);57585859/*5960 * sysctl - toggle power-off restriction for serial console
+4-4
arch/sparc/kernel/smp.c
···292292293293void __init smp_prepare_cpus(unsigned int max_cpus)294294{295295- extern void smp4m_boot_cpus(void);296296- extern void smp4d_boot_cpus(void);295295+ extern void __init smp4m_boot_cpus(void);296296+ extern void __init smp4d_boot_cpus(void);297297 int i, cpuid, extra;298298299299 printk("Entering SMP Mode...\n");···375375376376int __cpuinit __cpu_up(unsigned int cpu)377377{378378- extern int smp4m_boot_one_cpu(int);379379- extern int smp4d_boot_one_cpu(int);378378+ extern int __cpuinit smp4m_boot_one_cpu(int);379379+ extern int __cpuinit smp4d_boot_one_cpu(int);380380 int ret=0;381381382382 switch(sparc_cpu_model) {
+1-1
arch/sparc/kernel/sun4d_smp.c
···164164 local_flush_cache_all();165165}166166167167-int smp4d_boot_one_cpu(int i)167167+int __cpuinit smp4d_boot_one_cpu(int i)168168{169169 extern unsigned long sun4d_cpu_startup;170170 unsigned long *entry = &sun4d_cpu_startup;
···1919choice2020 prompt "Host memory split"2121 default HOST_VMSPLIT_3G2222- ---help---2323- This is needed when the host kernel on which you run has a non-default2424- (like 2G/2G) memory split, instead of the customary 3G/1G. If you did2525- not recompile your own kernel but use the default distro's one, you can2626- safely accept the "Default split" option.2222+ help2323+ This is needed when the host kernel on which you run has a non-default2424+ (like 2G/2G) memory split, instead of the customary 3G/1G. If you did2525+ not recompile your own kernel but use the default distro's one, you can2626+ safely accept the "Default split" option.27272828- It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via2929- CONFIG_VM_SPLIT_*, or on previous kernels with special patches (-ck3030- patchset by Con Kolivas, or other ones) - option names match closely the3131- host CONFIG_VM_SPLIT_* ones.2828+ It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via2929+ CONFIG_VM_SPLIT_*, or on previous kernels with special patches (-ck3030+ patchset by Con Kolivas, or other ones) - option names match closely the3131+ host CONFIG_VM_SPLIT_* ones.32323333- A lower setting (where 1G/3G is lowest and 3G/1G is higher) will3434- tolerate even more "normal" host kernels, but an higher setting will be3535- stricter.3333+ A lower setting (where 1G/3G is lowest and 3G/1G is higher) will3434+ tolerate even more "normal" host kernels, but an higher setting will be3535+ stricter.36363737- So, if you do not know what to do here, say 'Default split'.3737+ So, if you do not know what to do here, say 'Default split'.38383939- config HOST_VMSPLIT_3G4040- bool "Default split (3G/1G user/kernel host split)"4141- config HOST_VMSPLIT_3G_OPT4242- bool "3G/1G user/kernel host split (for full 1G low memory)"4343- config HOST_VMSPLIT_2G4444- bool "2G/2G user/kernel host split"4545- config HOST_VMSPLIT_1G4646- bool "1G/3G user/kernel host split"3939+config HOST_VMSPLIT_3G4040+ bool "Default split (3G/1G user/kernel host split)"4141+config HOST_VMSPLIT_3G_OPT4242+ bool "3G/1G user/kernel host split (for full 1G low memory)"4343+config HOST_VMSPLIT_2G4444+ bool "2G/2G user/kernel host split"4545+config HOST_VMSPLIT_1G4646+ bool "1G/3G user/kernel host split"4747endchoice48484949config TOP_ADDR···67676868config STUB_CODE6969 hex7070- default 0xbfffe000 if !HOST_2G_2G7171- default 0x7fffe000 if HOST_2G_2G7070+ default 0xbfffe000 if !HOST_VMSPLIT_2G7171+ default 0x7fffe000 if HOST_VMSPLIT_2G72727373config STUB_DATA7474 hex7575- default 0xbffff000 if !HOST_2G_2G7676- default 0x7ffff000 if HOST_2G_2G7575+ default 0xbffff000 if !HOST_VMSPLIT_2G7676+ default 0x7ffff000 if HOST_VMSPLIT_2G77777878config STUB_START7979 hex
+2-1
arch/um/sys-i386/signal.c
···219219 unsigned long save_sp = PT_REGS_SP(regs);220220 int err = 0;221221222222- stack_top &= -8UL;222222+ /* This is the same calculation as i386 - ((sp + 4) & 15) == 0 */223223+ stack_top = ((stack_top + 4) & -16UL) - 4;223224 frame = (struct sigframe __user *) stack_top - 1;224225 if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))225226 return 1;
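The replacement calculation mirrors native i386: after the 4-byte return-address slot is accounted for, the stack must be 16-byte aligned, i.e. ((sp + 4) & 15) == 0, where the old code only rounded down to 8 bytes. The arithmetic in isolation:

```c
/* i386 signal-stack placement rule from the patch: pick the highest
 * sp at or below stack_top such that ((sp + 4) & 15) == 0, i.e. the
 * stack is 16-byte aligned once the 4-byte return address is pushed. */
static unsigned long align_i386_sig_sp(unsigned long stack_top)
{
	return ((stack_top + 4) & -16UL) - 4;
}
```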
+3-2
arch/um/sys-x86_64/signal.c
···191191 struct task_struct *me = current;192192193193 frame = (struct rt_sigframe __user *)194194- round_down(stack_top - sizeof(struct rt_sigframe), 16) - 8;195195- frame = (struct rt_sigframe __user *) ((unsigned long) frame - 128);194194+ round_down(stack_top - sizeof(struct rt_sigframe), 16);195195+ /* Subtract 128 for a red zone and 8 for proper alignment */196196+ frame = (struct rt_sigframe __user *) ((unsigned long) frame - 128 - 8);196197197198 if (!access_ok(VERIFY_WRITE, fp, sizeof(struct _fpstate)))198199 goto out;
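The x86-64 counterpart encodes the ABI rule that (sp + 8) is 16-byte aligned at function entry, after the 128-byte red zone below the interrupted stack pointer has been skipped. A standalone sketch with a local round_down() helper:

```c
/* x86-64 rt_sigframe placement from the patch: align the frame to 16
 * bytes, then subtract 128 bytes for the ABI red zone plus 8 more so
 * that (sp + 8) is 16-byte aligned -- the state the ABI guarantees
 * right after a call pushes its 8-byte return address. */
static unsigned long round_down(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);
}

static unsigned long place_rt_sigframe(unsigned long stack_top,
				       unsigned long frame_size)
{
	return round_down(stack_top - frame_size, 16) - 128 - 8;
}
```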
-49
arch/x86_64/ia32/ia32_binfmt.c
···6464#define ELF_NGREG (sizeof (struct user_regs_struct32) / sizeof(elf_greg_t))6565typedef elf_greg_t elf_gregset_t[ELF_NGREG];66666767-/*6868- * These macros parameterize elf_core_dump in fs/binfmt_elf.c to write out6969- * extra segments containing the vsyscall DSO contents. Dumping its7070- * contents makes post-mortem fully interpretable later without matching up7171- * the same kernel and hardware config to see what PC values meant.7272- * Dumping its extra ELF program headers includes all the other information7373- * a debugger needs to easily find how the vsyscall DSO was being used.7474- */7575-#define ELF_CORE_EXTRA_PHDRS (find_vma(current->mm, VSYSCALL32_BASE) ? \7676- (VSYSCALL32_EHDR->e_phnum) : 0)7777-#define ELF_CORE_WRITE_EXTRA_PHDRS \7878-do { \7979- if (find_vma(current->mm, VSYSCALL32_BASE)) { \8080- const struct elf32_phdr *const vsyscall_phdrs = \8181- (const struct elf32_phdr *) (VSYSCALL32_BASE \8282- + VSYSCALL32_EHDR->e_phoff);\8383- int i; \8484- Elf32_Off ofs = 0; \8585- for (i = 0; i < VSYSCALL32_EHDR->e_phnum; ++i) { \8686- struct elf32_phdr phdr = vsyscall_phdrs[i]; \8787- if (phdr.p_type == PT_LOAD) { \8888- BUG_ON(ofs != 0); \8989- ofs = phdr.p_offset = offset; \9090- phdr.p_memsz = PAGE_ALIGN(phdr.p_memsz); \9191- phdr.p_filesz = phdr.p_memsz; \9292- offset += phdr.p_filesz; \9393- } \9494- else \9595- phdr.p_offset += ofs; \9696- phdr.p_paddr = 0; /* match other core phdrs */ \9797- DUMP_WRITE(&phdr, sizeof(phdr)); \9898- } \9999- } \100100-} while (0)101101-#define ELF_CORE_WRITE_EXTRA_DATA \102102-do { \103103- if (find_vma(current->mm, VSYSCALL32_BASE)) { \104104- const struct elf32_phdr *const vsyscall_phdrs = \105105- (const struct elf32_phdr *) (VSYSCALL32_BASE \106106- + VSYSCALL32_EHDR->e_phoff); \107107- int i; \108108- for (i = 0; i < VSYSCALL32_EHDR->e_phnum; ++i) { \109109- if (vsyscall_phdrs[i].p_type == PT_LOAD) \110110- DUMP_WRITE((void *) (u64) vsyscall_phdrs[i].p_vaddr,\111111- PAGE_ALIGN(vsyscall_phdrs[i].p_memsz)); \112112- } \113113- } \114114-} while (0)115115-11667struct elf_siginfo11768{11869 int si_signo; /* signal number */
+15
arch/x86_64/ia32/syscall32.c
···5959 vma->vm_end = VSYSCALL32_END;6060 /* MAYWRITE to allow gdb to COW and set breakpoints */6161 vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE;6262+ /*6363+ * Make sure the vDSO gets into every core dump.6464+ * Dumping its contents makes post-mortem fully interpretable later6565+ * without matching up the same kernel and hardware config to see6666+ * what PC values meant.6767+ */6868+ vma->vm_flags |= VM_ALWAYSDUMP;6269 vma->vm_flags |= mm->def_flags;6370 vma->vm_page_prot = protection_map[vma->vm_flags & 7];6471 vma->vm_ops = &syscall32_vm_ops;···8073 mm->total_vm += npages;8174 up_write(&mm->mmap_sem);8275 return 0;7676+}7777+7878+const char *arch_vma_name(struct vm_area_struct *vma)7979+{8080+ if (vma->vm_start == VSYSCALL32_BASE &&8181+ vma->vm_mm && vma->vm_mm->task_size == IA32_PAGE_OFFSET)8282+ return "[vdso]";8383+ return NULL;8384}84858586static int __init init_syscall32(void)
-2
arch/x86_64/kernel/nmi.c
···302302 if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE))303303 return 0;304304305305- if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0))306306- return 0; /* no lapic support */307305 nmi_watchdog = nmi;308306 return 1;309307}
+6-5
block/elevator.c
···590590 */591591 rq->cmd_flags |= REQ_SOFTBARRIER;592592593593+ /*594594+ * Most requeues happen because of a busy condition,595595+ * don't force unplug of the queue for that case.596596+ */597597+ unplug_it = 0;598598+593599 if (q->ordseq == 0) {594600 list_add(&rq->queuelist, &q->queue_head);595601 break;···610604 }611605612606 list_add_tail(&rq->queuelist, pos);613613- /*614614- * most requeues happen because of a busy condition, don't615615- * force unplug of the queue for that case.616616- */617617- unplug_it = 0;618607 break;619608620609 default:
+3-2
block/scsi_ioctl.c
···223223static int sg_io(struct file *file, request_queue_t *q,224224 struct gendisk *bd_disk, struct sg_io_hdr *hdr)225225{226226- unsigned long start_time;226226+ unsigned long start_time, timeout;227227 int writing = 0, ret = 0;228228 struct request *rq;229229 char sense[SCSI_SENSE_BUFFERSIZE];···271271272272 rq->cmd_type = REQ_TYPE_BLOCK_PC;273273274274- rq->timeout = jiffies_to_msecs(hdr->timeout);274274+ timeout = msecs_to_jiffies(hdr->timeout);275275+ rq->timeout = (timeout < INT_MAX) ? timeout : INT_MAX;275276 if (!rq->timeout)276277 rq->timeout = q->sg_timeout;277278 if (!rq->timeout)
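The sg_io() fix above is two-fold: the conversion ran in the wrong direction (jiffies_to_msecs instead of msecs_to_jiffies), and the result is now clamped to INT_MAX so a huge userspace timeout cannot wrap when stored. A standalone model, with HZ and the naive conversion as illustrative stand-ins for the kernel's msecs_to_jiffies():

```c
#include <limits.h>

/* Model of the corrected sg_io timeout handling: convert the user's
 * millisecond timeout to jiffies, clamp to INT_MAX, and fall back to a
 * default when zero.  HZ and the conversion are illustrative stand-ins
 * for the kernel's msecs_to_jiffies(). */
#define HZ 250UL

static unsigned long sg_io_timeout(unsigned long timeout_ms,
				   unsigned long default_timeout)
{
	unsigned long timeout = timeout_ms * HZ / 1000;	/* msecs_to_jiffies */

	if (timeout > INT_MAX)
		timeout = INT_MAX;	/* avoid wrapping the stored value */
	if (!timeout)
		timeout = default_timeout;
	return timeout;
}
```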
···19192020if ATA21212222+config ATA_NONSTANDARD2323+ bool2424+ default n2525+2226config SATA_AHCI2327 tristate "AHCI SATA support"2428 depends on PCI
+64-39
drivers/ata/ahci.c
drivers/ata/ahci.c
···
 	AHCI_CMD_CLR_BUSY	= (1 << 10),
 
 	RX_FIS_D2H_REG		= 0x40,	/* offset of D2H Register FIS data */
+	RX_FIS_SDB		= 0x58, /* offset of SDB FIS data */
 	RX_FIS_UNK		= 0x60, /* offset of Unknown FIS data */
 
 	board_ahci		= 0,
···
 	dma_addr_t	cmd_tbl_dma;
 	void		*rx_fis;
 	dma_addr_t	rx_fis_dma;
+	/* for NCQ spurious interrupt analysis */
+	int		ncq_saw_spurious_sdb_cnt;
+	unsigned int	ncq_saw_d2h:1;
+	unsigned int	ncq_saw_dmas:1;
 };
 
 static u32 ahci_scr_read (struct ata_port *ap, unsigned int sc_reg);
···
 	{ PCI_VDEVICE(INTEL, 0x27c1), board_ahci }, /* ICH7 */
 	{ PCI_VDEVICE(INTEL, 0x27c5), board_ahci }, /* ICH7M */
 	{ PCI_VDEVICE(INTEL, 0x27c3), board_ahci }, /* ICH7R */
-	{ PCI_VDEVICE(AL, 0x5288), board_ahci }, /* ULi M5288 */
+	{ PCI_VDEVICE(AL, 0x5288), board_ahci_ign_iferr }, /* ULi M5288 */
 	{ PCI_VDEVICE(INTEL, 0x2681), board_ahci }, /* ESB2 */
 	{ PCI_VDEVICE(INTEL, 0x2682), board_ahci }, /* ESB2 */
 	{ PCI_VDEVICE(INTEL, 0x2683), board_ahci }, /* ESB2 */
···
 {
 	u32 cmd, scontrol;
 
+	if (!(cap & HOST_CAP_SSS))
+		return;
+
+	/* put device into listen mode, first set PxSCTL.DET to 0 */
+	scontrol = readl(port_mmio + PORT_SCR_CTL);
+	scontrol &= ~0xf;
+	writel(scontrol, port_mmio + PORT_SCR_CTL);
+
+	/* then set PxCMD.SUD to 0 */
 	cmd = readl(port_mmio + PORT_CMD) & ~PORT_CMD_ICC_MASK;
-
-	if (cap & HOST_CAP_SSC) {
-		/* enable transitions to slumber mode */
-		scontrol = readl(port_mmio + PORT_SCR_CTL);
-		if ((scontrol & 0x0f00) > 0x100) {
-			scontrol &= ~0xf00;
-			writel(scontrol, port_mmio + PORT_SCR_CTL);
-		}
-
-		/* put device into slumber mode */
-		writel(cmd | PORT_CMD_ICC_SLUMBER, port_mmio + PORT_CMD);
-
-		/* wait for the transition to complete */
-		ata_wait_register(port_mmio + PORT_CMD, PORT_CMD_ICC_SLUMBER,
-				  PORT_CMD_ICC_SLUMBER, 1, 50);
-	}
-
-	/* put device into listen mode */
-	if (cap & HOST_CAP_SSS) {
-		/* first set PxSCTL.DET to 0 */
-		scontrol = readl(port_mmio + PORT_SCR_CTL);
-		scontrol &= ~0xf;
-		writel(scontrol, port_mmio + PORT_SCR_CTL);
-
-		/* then set PxCMD.SUD to 0 */
-		cmd &= ~PORT_CMD_SPIN_UP;
-		writel(cmd, port_mmio + PORT_CMD);
-	}
+	cmd &= ~PORT_CMD_SPIN_UP;
+	writel(cmd, port_mmio + PORT_CMD);
 }
 
 static void ahci_init_port(void __iomem *port_mmio, u32 cap,
···
 
 	/* clear D2H reception area to properly wait for D2H FIS */
 	ata_tf_init(ap->device, &tf);
-	tf.command = 0xff;
+	tf.command = 0x80;
 	ata_tf_to_fis(&tf, d2h_fis, 0);
 
 	rc = sata_std_hardreset(ap, class);
···
 	void __iomem *mmio = ap->host->mmio_base;
 	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
 	struct ata_eh_info *ehi = &ap->eh_info;
+	struct ahci_port_priv *pp = ap->private_data;
 	u32 status, qc_active;
-	int rc;
+	int rc, known_irq = 0;
 
 	status = readl(port_mmio + PORT_IRQ_STAT);
 	writel(status, port_mmio + PORT_IRQ_STAT);
···
 
 	/* hmmm... a spurious interupt */
 
-	/* some devices send D2H reg with I bit set during NCQ command phase */
-	if (ap->sactive && (status & PORT_IRQ_D2H_REG_FIS))
+	/* if !NCQ, ignore.  No modern ATA device has broken HSM
+	 * implementation for non-NCQ commands.
+	 */
+	if (!ap->sactive)
 		return;
 
-	/* ignore interim PIO setup fis interrupts */
-	if (ata_tag_valid(ap->active_tag) && (status & PORT_IRQ_PIOS_FIS))
-		return;
+	if (status & PORT_IRQ_D2H_REG_FIS) {
+		if (!pp->ncq_saw_d2h)
+			ata_port_printk(ap, KERN_INFO,
+				"D2H reg with I during NCQ, "
+				"this message won't be printed again\n");
+		pp->ncq_saw_d2h = 1;
+		known_irq = 1;
+	}
 
-	if (ata_ratelimit())
+	if (status & PORT_IRQ_DMAS_FIS) {
+		if (!pp->ncq_saw_dmas)
+			ata_port_printk(ap, KERN_INFO,
+				"DMAS FIS during NCQ, "
+				"this message won't be printed again\n");
+		pp->ncq_saw_dmas = 1;
+		known_irq = 1;
+	}
+
+	if (status & PORT_IRQ_SDB_FIS &&
+	    pp->ncq_saw_spurious_sdb_cnt < 10) {
+		/* SDB FIS containing spurious completions might be
+		 * dangerous, we need to know more about them.  Print
+		 * more of it.
+		 */
+		const u32 *f = pp->rx_fis + RX_FIS_SDB;
+
+		ata_port_printk(ap, KERN_INFO, "Spurious SDB FIS during NCQ "
+				"issue=0x%x SAct=0x%x FIS=%08x:%08x%s\n",
+				readl(port_mmio + PORT_CMD_ISSUE),
+				readl(port_mmio + PORT_SCR_ACT),
+				le32_to_cpu(f[0]), le32_to_cpu(f[1]),
+				pp->ncq_saw_spurious_sdb_cnt < 10 ?
+				"" : ", shutting up");
+
+		pp->ncq_saw_spurious_sdb_cnt++;
+		known_irq = 1;
+	}
+
+	if (!known_irq)
 		ata_port_printk(ap, KERN_INFO, "spurious interrupt "
-			"(irq_stat 0x%x active_tag %d sactive 0x%x)\n",
+			"(irq_stat 0x%x active_tag 0x%x sactive 0x%x)\n",
 			status, ap->active_tag, ap->sactive);
 }
···
 	/* clear IRQ */
 	tmp = readl(port_mmio + PORT_IRQ_STAT);
 	writel(tmp, port_mmio + PORT_IRQ_STAT);
-	writel(1 << ap->id, mmio + HOST_IRQ_STAT);
+	writel(1 << ap->port_no, mmio + HOST_IRQ_STAT);
 
 	/* turn IRQ back on */
 	writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK);
+4-2
drivers/ata/ata_generic.c
···
 /**
  * generic_set_mode - mode setting
  * @ap: interface to set up
+ * @unused: returned device on error
  *
  * Use a non standard set_mode function. We don't want to be tuned.
  * The BIOS configured everything. Our job is not to fiddle. We
···
  * and respect them.
  */
 
-static void generic_set_mode(struct ata_port *ap)
+static int generic_set_mode(struct ata_port *ap, struct ata_device **unused)
 {
 	int dma_enabled = 0;
 	int i;
···
 	for (i = 0; i < ATA_MAX_DEVICES; i++) {
 		struct ata_device *dev = &ap->device[i];
-		if (ata_dev_enabled(dev)) {
+		if (ata_dev_ready(dev)) {
 			/* We don't really care */
 			dev->pio_mode = XFER_PIO_0;
 			dev->dma_mode = XFER_MW_DMA_0;
···
 			}
 		}
 	}
+	return 0;
 }
 
 static struct scsi_host_template generic_sht = {
+4-13
drivers/ata/libata-core.c
···
 	 * the PIO timing number for the maximum. Turn it into
 	 * a mask.
 	 */
-	u8 mode = id[ATA_ID_OLD_PIO_MODES] & 0xFF;
+	u8 mode = (id[ATA_ID_OLD_PIO_MODES] >> 8) & 0xFF;
 	if (mode < 5)	/* Valid PIO range */
 		pio_mask = (2 << mode) - 1;
 	else
···
 
 		ata_sg_init(qc, sg, n_elem);
 		qc->nsect = buflen / ATA_SECT_SIZE;
+		qc->nbytes = buflen;
 	}
 
 	qc->private_data = &wait;
···
 	int i, rc = 0, used_dma = 0, found = 0;
 
 	/* has private set_mode? */
-	if (ap->ops->set_mode) {
-		/* FIXME: make ->set_mode handle no device case and
-		 * return error code and failing device on failure.
-		 */
-		for (i = 0; i < ATA_MAX_DEVICES; i++) {
-			if (ata_dev_ready(&ap->device[i])) {
-				ap->ops->set_mode(ap);
-				break;
-			}
-		}
-		return 0;
-	}
+	if (ap->ops->set_mode)
+		return ap->ops->set_mode(ap, r_failed_dev);
 
 	/* step 1: calculate xfer_mask */
 	for (i = 0; i < ATA_MAX_DEVICES; i++) {
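The hunks above (together with the per-driver conversions that follow) change `->set_mode` from a void hook into one that returns an error code and reports the failing device through a pointer argument. A minimal userspace sketch of that calling convention; the struct layouts, `demo_set_mode`, and `do_set_mode` here are simplified stand-ins, not the real libata types:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for struct ata_device / struct ata_port. */
struct ata_device { int id; int ok; };

struct ata_port {
	struct ata_device devs[2];
	/* New-style hook: returns 0 or -errno, sets *r_failed on error. */
	int (*set_mode)(struct ata_port *ap, struct ata_device **r_failed);
};

/* A driver hook in the new style: report which device failed. */
static int demo_set_mode(struct ata_port *ap, struct ata_device **r_failed)
{
	for (int i = 0; i < 2; i++) {
		if (!ap->devs[i].ok) {
			*r_failed = &ap->devs[i];
			return -22; /* stand-in for -EINVAL */
		}
	}
	return 0;
}

/* Core dispatcher, mirroring the simplified branch in ata_set_mode():
 * a private hook now short-circuits the generic timing code entirely. */
static int do_set_mode(struct ata_port *ap, struct ata_device **r_failed)
{
	if (ap->set_mode)
		return ap->set_mode(ap, r_failed);
	return 0; /* would fall through to the generic xfer_mask steps */
}
```

The design point of the series is visible here: the core no longer has to guess which device a void hook failed on, so the FIXME and the ata_dev_ready() probe loop in ata_set_mode() can be deleted.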
···
 #include <linux/libata.h>
 
 #define DRV_NAME	"pata_hpt3x2n"
-#define DRV_VERSION	"0.3"
+#define DRV_VERSION	"0.3.2"
 
 enum {
 	HPT_PCI_FAST	= (1 << 31),
···
 	return 0;
 }
 
-static int hpt3x2n_use_dpll(struct ata_port *ap, int reading)
+static int hpt3x2n_use_dpll(struct ata_port *ap, int writing)
 {
 	long flags = (long)ap->host->private_data;
 	/* See if we should use the DPLL */
-	if (reading == 0)
+	if (writing)
 		return USE_DPLL;	/* Needed for write */
 	if (flags & PCI66)
 		return USE_DPLL;	/* Needed at 66Mhz */
+3-1
drivers/ata/pata_it821x.c
···
 /**
  * it821x_smart_set_mode - mode setting
  * @ap: interface to set up
+ * @unused: device that failed (error only)
  *
  * Use a non standard set_mode function. We don't want to be tuned.
  * The BIOS configured everything. Our job is not to fiddle. We
···
  * and respect them.
  */
 
-static void it821x_smart_set_mode(struct ata_port *ap)
+static int it821x_smart_set_mode(struct ata_port *ap, struct ata_device **unused)
 {
 	int dma_enabled = 0;
 	int i;
···
 			}
 		}
 	}
+	return 0;
 }
 
 /**
···
 
 	u32 reg;
 
-	if (id->driver_data != 368) {
-		/* Put the controller into AHCI mode in case the AHCI driver
-		   has not yet been loaded. This can be done with either
-		   function present */
+	/* PATA controller is fn 1, AHCI is fn 0 */
+	if (id->driver_data != 368 && PCI_FUNC(pdev->devfn) != 1)
+		return -ENODEV;
 
-		/* FIXME: We may want a way to override this in future */
-		pci_write_config_byte(pdev, 0x41, 0xa1);
-
-		/* PATA controller is fn 1, AHCI is fn 0 */
-		if (PCI_FUNC(pdev->devfn) != 1)
-			return -ENODEV;
-	}
-	if ( id->driver_data == 365 || id->driver_data == 366) {
-		/* The 365/66 have two PATA channels, redirect the second */
+	/* The 365/66 have two PATA channels, redirect the second */
+	if (id->driver_data == 365 || id->driver_data == 366) {
 		pci_read_config_dword(pdev, 0x80, &reg);
 		reg |= (1 << 24);	/* IDE1 to PATA IDE secondary */
 		pci_write_config_dword(pdev, 0x80, reg);
+3-1
drivers/ata/pata_legacy.c
···
 /**
  * legacy_set_mode - mode setting
  * @ap: IDE interface
+ * @unused: Device that failed when error is returned
  *
  * Use a non standard set_mode function. We don't want to be tuned.
  *
···
  * expand on this as per hdparm in the base kernel.
  */
 
-static void legacy_set_mode(struct ata_port *ap)
+static int legacy_set_mode(struct ata_port *ap, struct ata_device **unused)
 {
 	int i;
 
···
 			dev->flags |= ATA_DFLAG_PIO;
 		}
 	}
+	return 0;
 }
 
 static struct scsi_host_template legacy_sht = {
+2-1
drivers/ata/pata_platform.c
···
 /*
  * Provide our own set_mode() as we don't want to change anything that has
  * already been configured..
  */
-static void pata_platform_set_mode(struct ata_port *ap)
+static int pata_platform_set_mode(struct ata_port *ap, struct ata_device **unused)
 {
 	int i;
 
···
 			dev->flags |= ATA_DFLAG_PIO;
 		}
 	}
+	return 0;
 }
 
 static void pata_platform_host_stop(struct ata_host *host)
+4-2
drivers/ata/pata_rz1000.c
···
 /**
  * rz1000_set_mode - mode setting function
  * @ap: ATA interface
+ * @unused: returned device on set_mode failure
  *
  * Use a non standard set_mode function. We don't want to be tuned. We
  * would prefer to be BIOS generic but for the fact our hardware is
  * whacked out.
  */
 
-static void rz1000_set_mode(struct ata_port *ap)
+static int rz1000_set_mode(struct ata_port *ap, struct ata_device **unused)
 {
 	int i;
 
 	for (i = 0; i < ATA_MAX_DEVICES; i++) {
 		struct ata_device *dev = &ap->device[i];
-		if (ata_dev_enabled(dev)) {
+		if (ata_dev_ready(dev)) {
 			/* We don't really care */
 			dev->pio_mode = XFER_PIO_0;
 			dev->xfer_mode = XFER_PIO_0;
···
 			dev->flags |= ATA_DFLAG_PIO;
 		}
 	}
+	return 0;
 }
 
 
···4141};424243434444-typedef struct _ati_page_map {4444+struct ati_page_map {4545 unsigned long *real;4646 unsigned long __iomem *remapped;4747-} ati_page_map;4747+};48484949static struct _ati_generic_private {5050 volatile u8 __iomem *registers;5151- ati_page_map **gatt_pages;5151+ struct ati_page_map **gatt_pages;5252 int num_tables;5353} ati_generic_private;54545555-static int ati_create_page_map(ati_page_map *page_map)5555+static int ati_create_page_map(struct ati_page_map *page_map)5656{5757 int i, err = 0;5858···8282}838384848585-static void ati_free_page_map(ati_page_map *page_map)8585+static void ati_free_page_map(struct ati_page_map *page_map)8686{8787 unmap_page_from_agp(virt_to_page(page_map->real));8888 iounmap(page_map->remapped);···9494static void ati_free_gatt_pages(void)9595{9696 int i;9797- ati_page_map **tables;9898- ati_page_map *entry;9797+ struct ati_page_map **tables;9898+ struct ati_page_map *entry;9999100100 tables = ati_generic_private.gatt_pages;101101 for (i = 0; i < ati_generic_private.num_tables; i++) {···112112113113static int ati_create_gatt_pages(int nr_tables)114114{115115- ati_page_map **tables;116116- ati_page_map *entry;115115+ struct ati_page_map **tables;116116+ struct ati_page_map *entry;117117 int retval = 0;118118 int i;119119120120- tables = kzalloc((nr_tables + 1) * sizeof(ati_page_map *),GFP_KERNEL);120120+ tables = kzalloc((nr_tables + 1) * sizeof(struct ati_page_map *),GFP_KERNEL);121121 if (tables == NULL)122122 return -ENOMEM;123123124124 for (i = 0; i < nr_tables; i++) {125125- entry = kzalloc(sizeof(ati_page_map), GFP_KERNEL);125125+ entry = kzalloc(sizeof(struct ati_page_map), GFP_KERNEL);126126 if (entry == NULL) {127127- while (i>0) {128128- kfree (tables[i-1]);127127+ while (i > 0) {128128+ kfree(tables[i-1]);129129 i--;130130 }131131- kfree (tables);132132- tables = NULL;131131+ kfree(tables);133132 retval = -ENOMEM;134133 break;135134 }136135 tables[i] = entry;137136 retval = 
ati_create_page_map(entry);138138- if (retval != 0) break;137137+ if (retval != 0)138138+ break;139139 }140140 ati_generic_private.num_tables = nr_tables;141141 ati_generic_private.gatt_pages = tables;···340340static int ati_create_gatt_table(struct agp_bridge_data *bridge)341341{342342 struct aper_size_info_lvl2 *value;343343- ati_page_map page_dir;343343+ struct ati_page_map page_dir;344344 unsigned long addr;345345 int retval;346346 u32 temp;···400400401401static int ati_free_gatt_table(struct agp_bridge_data *bridge)402402{403403- ati_page_map page_dir;403403+ struct ati_page_map page_dir;404404405405 page_dir.real = (unsigned long *)agp_bridge->gatt_table_real;406406 page_dir.remapped = (unsigned long __iomem *)agp_bridge->gatt_table;
+9
drivers/char/agp/intel-agp.c
···
 
 	pci_restore_state(pdev);
 
+	/* We should restore our graphics device's config space,
+	 * as host bridge (00:00) resumes before graphics device (02:00),
+	 * then our access to its pci space can work right.
+	 */
+	if (intel_i810_private.i810_dev)
+		pci_restore_state(intel_i810_private.i810_dev);
+	if (intel_i830_private.i830_dev)
+		pci_restore_state(intel_i830_private.i830_dev);
+
 	if (bridge->driver == &intel_generic_driver)
 		intel_configure();
 	else if (bridge->driver == &intel_850_driver)
···
 	unsigned long flags;
 	int i;
 
-	INIT_LIST_HEAD(&timeouts);
-
 	rcu_read_lock();
 	list_for_each_entry_rcu(intf, &ipmi_interfaces, link) {
 		/* See if any waiting messages need to be processed. */
···
 		/* Go through the seq table and find any messages that
 		   have timed out, putting them in the timeouts
 		   list. */
+		INIT_LIST_HEAD(&timeouts);
 		spin_lock_irqsave(&intf->seq_lock, flags);
 		for (i = 0; i < IPMI_IPMB_NUM_SEQ; i++)
 			check_msg_timeout(intf, &(intf->seq_table[i]),
+11-9
drivers/char/sysrq.c
···
 }
 static struct sysrq_key_op sysrq_showstate_blocked_op = {
 	.handler	= sysrq_handle_showstate_blocked,
-	.help_msg	= "showBlockedTasks",
+	.help_msg	= "shoW-blocked-tasks",
 	.action_msg	= "Show Blocked State",
 	.enable_mask	= SYSRQ_ENABLE_DUMP,
 };
···
 	&sysrq_loglevel_op,		/* 9 */
 
 	/*
-	 * Don't use for system provided sysrqs, it is handled specially on
-	 * sparc and will never arrive
+	 * a: Don't use for system provided sysrqs, it is handled specially on
+	 * sparc and will never arrive.
 	 */
 	NULL,				/* a */
 	&sysrq_reboot_op,		/* b */
-	&sysrq_crashdump_op,		/* c */
+	&sysrq_crashdump_op,		/* c & ibm_emac driver debug */
 	&sysrq_showlocks_op,		/* d */
 	&sysrq_term_op,			/* e */
 	&sysrq_moom_op,			/* f */
+	/* g: May be registered by ppc for kgdb */
 	NULL,				/* g */
 	NULL,				/* h */
 	&sysrq_kill_op,			/* i */
···
 	NULL,				/* l */
 	&sysrq_showmem_op,		/* m */
 	&sysrq_unrt_op,			/* n */
-	/* This will often be registered as 'Off' at init time */
+	/* o: This will often be registered as 'Off' at init time */
 	NULL,				/* o */
 	&sysrq_showregs_op,		/* p */
 	NULL,				/* q */
-	&sysrq_unraw_op,		/* r */
+	&sysrq_unraw_op,		/* r */
 	&sysrq_sync_op,			/* s */
 	&sysrq_showstate_op,		/* t */
 	&sysrq_mountro_op,		/* u */
-	/* May be assigned at init time by SMP VOYAGER */
+	/* v: May be registered at init time by SMP VOYAGER */
 	NULL,				/* v */
-	NULL,				/* w */
-	&sysrq_showstate_blocked_op,	/* x */
+	&sysrq_showstate_blocked_op,	/* w */
+	/* x: May be registered on ppc/powerpc for xmon */
+	NULL,				/* x */
 	NULL,				/* y */
 	NULL				/* z */
 };
+28-15
drivers/char/tlclk.c
···186186static void switchover_timeout(unsigned long data);187187static struct timer_list switchover_timer =188188 TIMER_INITIALIZER(switchover_timeout , 0, 0);189189+static unsigned long tlclk_timer_data;189190190191static struct tlclk_alarms *alarm_events;191192···198197199198static DECLARE_WAIT_QUEUE_HEAD(wq);200199200200+static unsigned long useflags;201201+static DEFINE_MUTEX(tlclk_mutex);202202+201203static int tlclk_open(struct inode *inode, struct file *filp)202204{203205 int result;206206+207207+ if (test_and_set_bit(0, &useflags))208208+ return -EBUSY;209209+ /* this legacy device is always one per system and it doesn't210210+ * know how to handle multiple concurrent clients.211211+ */204212205213 /* Make sure there is no interrupt pending while206214 * initialising interrupt handler */···231221static int tlclk_release(struct inode *inode, struct file *filp)232222{233223 free_irq(telclk_interrupt, tlclk_interrupt);224224+ clear_bit(0, &useflags);234225235226 return 0;236227}···241230{242231 if (count < sizeof(struct tlclk_alarms))243232 return -EIO;233233+ if (mutex_lock_interruptible(&tlclk_mutex))234234+ return -EINTR;235235+244236245237 wait_event_interruptible(wq, got_event);246246- if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms)))238238+ if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms))) {239239+ mutex_unlock(&tlclk_mutex);247240 return -EFAULT;241241+ }248242249243 memset(alarm_events, 0, sizeof(struct tlclk_alarms));250244 got_event = 0;251245246246+ mutex_unlock(&tlclk_mutex);252247 return sizeof(struct tlclk_alarms);253253-}254254-255255-static ssize_t tlclk_write(struct file *filp, const char __user *buf, size_t count,256256- loff_t *f_pos)257257-{258258- return 0;259248}260249261250static const struct file_operations tlclk_fops = {262251 .read = tlclk_read,263263- .write = tlclk_write,264252 .open = tlclk_open,265253 .release = tlclk_release,266254···550540 SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7);551541 switch (val) 
{552542 case CLK_8_592MHz:553553- SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);543543+ SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);554544 break;555545 case CLK_11_184MHz:556546 SET_PORT_BITS(TLCLK_REG0, 0xfc, 0);···559549 SET_PORT_BITS(TLCLK_REG0, 0xfc, 3);560550 break;561551 case CLK_44_736MHz:562562- SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);552552+ SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);563553 break;564554 }565555 } else···849839850840static void switchover_timeout(unsigned long data)851841{852852- if ((data & 1)) {853853- if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08))842842+ unsigned long flags = *(unsigned long *) data;843843+844844+ if ((flags & 1)) {845845+ if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08))854846 alarm_events->switchover_primary++;855847 } else {856856- if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08))848848+ if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08))857849 alarm_events->switchover_secondary++;858850 }859851···913901914902 /* TIMEOUT in ~10ms */915903 switchover_timer.expires = jiffies + msecs_to_jiffies(10);916916- switchover_timer.data = inb(TLCLK_REG1);917917- add_timer(&switchover_timer);904904+ tlclk_timer_data = inb(TLCLK_REG1);905905+ switchover_timer.data = (unsigned long) &tlclk_timer_data;906906+ mod_timer(&switchover_timer, switchover_timer.expires);918907 } else {919908 got_event = 1;920909 wake_up(&wq);
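The tlclk hunks above stop stuffing a raw `inb()` snapshot into `timer_list.data` and instead store it in a static variable, passing that variable's address cast to `unsigned long` and casting back inside the handler. The round-trip they rely on can be sketched in isolation; `demo_timeout`, `arm_demo_timer`, and the counters are hypothetical stand-ins for `switchover_timeout()` and the alarm counters, and the timer core's deferred call is simulated by a direct invocation:

```c
#include <assert.h>

static unsigned long saved_flags;	/* plays the role of tlclk_timer_data */
static int primary_events, secondary_events;

/* Old-style timer callback: 'data' is an unsigned long that here
 * carries a pointer, as in the patched switchover_timeout(). */
static void demo_timeout(unsigned long data)
{
	unsigned long flags = *(unsigned long *)data;

	if (flags & 1)
		primary_events++;
	else
		secondary_events++;
}

static void arm_demo_timer(unsigned long reg_snapshot)
{
	saved_flags = reg_snapshot;
	/* In the driver: timer.data = (unsigned long)&tlclk_timer_data;
	 * the timer core later calls the handler with that value. */
	demo_timeout((unsigned long)&saved_flags);
}
```

Passing a pointer-through-ulong keeps the snapshot readable at expiry time without widening the legacy `data` field; the pointer/ulong cast is well-defined on the flat-memory targets this driver supports.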
+52-62
drivers/char/vr41xx_giu.c
···33 *44 * Copyright (C) 2002 MontaVista Software Inc.55 * Author: Yoichi Yuasa <yyuasa@mvista.com or source@mvista.com>66- * Copyright (C) 2003-2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>66+ * Copyright (C) 2003-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>77 *88 * This program is free software; you can redistribute it and/or modify99 * it under the terms of the GNU General Public License as published by···125125 return data;126126}127127128128-static unsigned int startup_giuint_low_irq(unsigned int irq)128128+static void ack_giuint_low(unsigned int irq)129129{130130- unsigned int pin;131131-132132- pin = GPIO_PIN_OF_IRQ(irq);133133- giu_write(GIUINTSTATL, 1 << pin);134134- giu_set(GIUINTENL, 1 << pin);135135-136136- return 0;130130+ giu_write(GIUINTSTATL, 1 << GPIO_PIN_OF_IRQ(irq));137131}138132139139-static void shutdown_giuint_low_irq(unsigned int irq)133133+static void mask_giuint_low(unsigned int irq)140134{141135 giu_clear(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));142136}143137144144-static void enable_giuint_low_irq(unsigned int irq)145145-{146146- giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));147147-}148148-149149-#define disable_giuint_low_irq shutdown_giuint_low_irq150150-151151-static void ack_giuint_low_irq(unsigned int irq)138138+static void mask_ack_giuint_low(unsigned int irq)152139{153140 unsigned int pin;154141···144157 giu_write(GIUINTSTATL, 1 << pin);145158}146159147147-static void end_giuint_low_irq(unsigned int irq)160160+static void unmask_giuint_low(unsigned int irq)148161{149149- if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))150150- giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));162162+ giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));151163}152164153153-static struct hw_interrupt_type giuint_low_irq_type = {154154- .typename = "GIUINTL",155155- .startup = startup_giuint_low_irq,156156- .shutdown = shutdown_giuint_low_irq,157157- .enable = enable_giuint_low_irq,158158- .disable = disable_giuint_low_irq,159159- .ack = 
ack_giuint_low_irq,160160- .end = end_giuint_low_irq,165165+static struct irq_chip giuint_low_irq_chip = {166166+ .name = "GIUINTL",167167+ .ack = ack_giuint_low,168168+ .mask = mask_giuint_low,169169+ .mask_ack = mask_ack_giuint_low,170170+ .unmask = unmask_giuint_low,161171};162172163163-static unsigned int startup_giuint_high_irq(unsigned int irq)173173+static void ack_giuint_high(unsigned int irq)164174{165165- unsigned int pin;166166-167167- pin = GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET;168168- giu_write(GIUINTSTATH, 1 << pin);169169- giu_set(GIUINTENH, 1 << pin);170170-171171- return 0;175175+ giu_write(GIUINTSTATH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));172176}173177174174-static void shutdown_giuint_high_irq(unsigned int irq)178178+static void mask_giuint_high(unsigned int irq)175179{176180 giu_clear(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));177181}178182179179-static void enable_giuint_high_irq(unsigned int irq)180180-{181181- giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));182182-}183183-184184-#define disable_giuint_high_irq shutdown_giuint_high_irq185185-186186-static void ack_giuint_high_irq(unsigned int irq)183183+static void mask_ack_giuint_high(unsigned int irq)187184{188185 unsigned int pin;189186···176205 giu_write(GIUINTSTATH, 1 << pin);177206}178207179179-static void end_giuint_high_irq(unsigned int irq)208208+static void unmask_giuint_high(unsigned int irq)180209{181181- if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))182182- giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));210210+ giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));183211}184212185185-static struct hw_interrupt_type giuint_high_irq_type = {186186- .typename = "GIUINTH",187187- .startup = startup_giuint_high_irq,188188- .shutdown = shutdown_giuint_high_irq,189189- .enable = enable_giuint_high_irq,190190- .disable = disable_giuint_high_irq,191191- .ack = 
ack_giuint_high_irq,192192- .end = end_giuint_high_irq,213213+static struct irq_chip giuint_high_irq_chip = {214214+ .name = "GIUINTH",215215+ .ack = ack_giuint_high,216216+ .mask = mask_giuint_high,217217+ .mask_ack = mask_ack_giuint_high,218218+ .unmask = unmask_giuint_high,193219};194220195221static int giu_get_irq(unsigned int irq)···250282 break;251283 }252284 }285285+ set_irq_chip_and_handler(GIU_IRQ(pin),286286+ &giuint_low_irq_chip,287287+ handle_edge_irq);253288 } else {254289 giu_clear(GIUINTTYPL, mask);255290 giu_clear(GIUINTHTSELL, mask);291291+ set_irq_chip_and_handler(GIU_IRQ(pin),292292+ &giuint_low_irq_chip,293293+ handle_level_irq);256294 }257295 giu_write(GIUINTSTATL, mask);258296 } else if (pin < GIUINT_HIGH_MAX) {···285311 break;286312 }287313 }314314+ set_irq_chip_and_handler(GIU_IRQ(pin),315315+ &giuint_high_irq_chip,316316+ handle_edge_irq);288317 } else {289318 giu_clear(GIUINTTYPH, mask);290319 giu_clear(GIUINTHTSELH, mask);320320+ set_irq_chip_and_handler(GIU_IRQ(pin),321321+ &giuint_high_irq_chip,322322+ handle_level_irq);291323 }292324 giu_write(GIUINTSTATH, mask);293325 }···597617static int __devinit giu_probe(struct platform_device *dev)598618{599619 unsigned long start, size, flags = 0;600600- unsigned int nr_pins = 0;620620+ unsigned int nr_pins = 0, trigger, i, pin;601621 struct resource *res1, *res2 = NULL;602622 void *base;603603- int retval, i;623623+ struct irq_chip *chip;624624+ int retval;604625605626 switch (current_cpu_data.cputype) {606627 case CPU_VR4111:···669688 giu_write(GIUINTENL, 0);670689 giu_write(GIUINTENH, 0);671690691691+ trigger = giu_read(GIUINTTYPH) << 16;692692+ trigger |= giu_read(GIUINTTYPL);672693 for (i = GIU_IRQ_BASE; i <= GIU_IRQ_LAST; i++) {673673- if (i < GIU_IRQ(GIUINT_HIGH_OFFSET))674674- irq_desc[i].chip = &giuint_low_irq_type;694694+ pin = GPIO_PIN_OF_IRQ(i);695695+ if (pin < GIUINT_HIGH_OFFSET)696696+ chip = &giuint_low_irq_chip;675697 else676676- irq_desc[i].chip = &giuint_high_irq_type;698698+ 
chip = &giuint_high_irq_chip;699699+700700+ if (trigger & (1 << pin))701701+ set_irq_chip_and_handler(i, chip, handle_edge_irq);702702+ else703703+ set_irq_chip_and_handler(i, chip, handle_level_irq);704704+677705 }678706679707 return cascade_irq(GIUINT_IRQ, giu_get_irq);
+13-4
drivers/cpufreq/cpufreq.c
···722722 spin_unlock_irqrestore(&cpufreq_driver_lock, flags);723723724724 dprintk("CPU already managed, adding link\n");725725- sysfs_create_link(&sys_dev->kobj,726726- &managed_policy->kobj, "cpufreq");725725+ ret = sysfs_create_link(&sys_dev->kobj,726726+ &managed_policy->kobj,727727+ "cpufreq");728728+ if (ret) {729729+ mutex_unlock(&policy->lock);730730+ goto err_out_driver_exit;731731+ }727732728733 cpufreq_debug_enable_ratelimit();729734 mutex_unlock(&policy->lock);···775770 dprintk("CPU %u already managed, adding link\n", j);776771 cpufreq_cpu_get(cpu);777772 cpu_sys_dev = get_cpu_sysdev(j);778778- sysfs_create_link(&cpu_sys_dev->kobj, &policy->kobj,779779- "cpufreq");773773+ ret = sysfs_create_link(&cpu_sys_dev->kobj, &policy->kobj,774774+ "cpufreq");775775+ if (ret) {776776+ mutex_unlock(&policy->lock);777777+ goto err_out_unregister;778778+ }780779 }781780782781 policy->governor = NULL; /* to assure that the starting sequence is
+12-17
drivers/firmware/efivars.c
···122122 struct kobject kobj;123123};124124125125-#define get_efivar_entry(n) list_entry(n, struct efivar_entry, list)126126-127125struct efivar_attribute {128126 struct attribute attr;129127 ssize_t (*show) (struct efivar_entry *entry, char *buf);···384386static void efivar_release(struct kobject *kobj)385387{386388 struct efivar_entry *var = container_of(kobj, struct efivar_entry, kobj);387387- spin_lock(&efivars_lock);388388- list_del(&var->list);389389- spin_unlock(&efivars_lock);390389 kfree(var);391390}392391···425430efivar_create(struct subsystem *sub, const char *buf, size_t count)426431{427432 struct efi_variable *new_var = (struct efi_variable *)buf;428428- struct efivar_entry *search_efivar = NULL;433433+ struct efivar_entry *search_efivar, *n;429434 unsigned long strsize1, strsize2;430430- struct list_head *pos, *n;431435 efi_status_t status = EFI_NOT_FOUND;432436 int found = 0;433437···438444 /*439445 * Does this variable already exist?440446 */441441- list_for_each_safe(pos, n, &efivar_list) {442442- search_efivar = get_efivar_entry(pos);447447+ list_for_each_entry_safe(search_efivar, n, &efivar_list, list) {443448 strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024);444449 strsize2 = utf8_strsize(new_var->VariableName, 1024);445450 if (strsize1 == strsize2 &&···483490efivar_delete(struct subsystem *sub, const char *buf, size_t count)484491{485492 struct efi_variable *del_var = (struct efi_variable *)buf;486486- struct efivar_entry *search_efivar = NULL;493493+ struct efivar_entry *search_efivar, *n;487494 unsigned long strsize1, strsize2;488488- struct list_head *pos, *n;489495 efi_status_t status = EFI_NOT_FOUND;490496 int found = 0;491497···496504 /*497505 * Does this variable already exist?498506 */499499- list_for_each_safe(pos, n, &efivar_list) {500500- search_efivar = get_efivar_entry(pos);507507+ list_for_each_entry_safe(search_efivar, n, &efivar_list, list) {501508 strsize1 = utf8_strsize(search_efivar->var.VariableName, 
1024);502509 strsize2 = utf8_strsize(del_var->VariableName, 1024);503510 if (strsize1 == strsize2 &&···528537 spin_unlock(&efivars_lock);529538 return -EIO;530539 }540540+ list_del(&search_efivar->list);531541 /* We need to release this lock before unregistering. */532542 spin_unlock(&efivars_lock);533533-534543 efivar_unregister(search_efivar);535544536545 /* It's dead Jim.... */···759768static void __exit760769efivars_exit(void)761770{762762- struct list_head *pos, *n;771771+ struct efivar_entry *entry, *n;763772764764- list_for_each_safe(pos, n, &efivar_list)765765- efivar_unregister(get_efivar_entry(pos));773773+ list_for_each_entry_safe(entry, n, &efivar_list, list) {774774+ spin_lock(&efivars_lock);775775+ list_del(&entry->list);776776+ spin_unlock(&efivars_lock);777777+ efivar_unregister(entry);778778+ }766779767780 subsystem_unregister(&vars_subsys);768781 firmware_unregister(&efi_subsys);
···
 	.probe = ns87415_init_one,
 };
 
-static int ns87415_ide_init(void)
+static int __init ns87415_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-1
drivers/ide/pci/opti621.c
···
 	.probe = opti621_init_one,
 };
 
-static int opti621_ide_init(void)
+static int __init opti621_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-1
drivers/ide/pci/pdc202xx_new.c
···
 	.probe = pdc202new_init_one,
 };
 
-static int pdc202new_ide_init(void)
+static int __init pdc202new_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-1
drivers/ide/pci/pdc202xx_old.c
···
 	.probe = pdc202xx_init_one,
 };
 
-static int pdc202xx_ide_init(void)
+static int __init pdc202xx_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-1
drivers/ide/pci/rz1000.c
···
 	.probe = rz1000_init_one,
 };
 
-static int rz1000_ide_init(void)
+static int __init rz1000_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-1
drivers/ide/pci/sc1200.c
···
 #endif
 };
 
-static int sc1200_ide_init(void)
+static int __init sc1200_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-1
drivers/ide/pci/serverworks.c
···
 	.probe = svwks_init_one,
 };
 
-static int svwks_ide_init(void)
+static int __init svwks_ide_init(void)
 {
 	return ide_pci_register_driver(&driver);
 }
+1-2
drivers/ide/pci/sgiioc4.c
···
 /* .is_remove	= ioc4_ide_remove_one, */
 };
 
-static int __devinit
-ioc4_ide_init(void)
+static int __init ioc4_ide_init(void)
 {
 	return ioc4_register_submodule(&ioc4_ide_submodule);
 }
···
 	int err = -EINVAL;
 
 	/* page 0 is the superblock, read it... */
-	if (bitmap->file)
-		bitmap->sb_page = read_page(bitmap->file, 0, bitmap, PAGE_SIZE);
-	else {
+	if (bitmap->file) {
+		loff_t isize = i_size_read(bitmap->file->f_mapping->host);
+		int bytes = isize > PAGE_SIZE ? PAGE_SIZE : isize;
+
+		bitmap->sb_page = read_page(bitmap->file, 0, bitmap, bytes);
+	} else {
 		bitmap->sb_page = read_sb_page(bitmap->mddev, bitmap->offset, 0);
 	}
 	if (IS_ERR(bitmap->sb_page)) {
···
 		int count;
 		/* unmap the old page, we're done with it */
 		if (index == num_pages-1)
-			count = bytes - index * PAGE_SIZE;
+			count = bytes + sizeof(bitmap_super_t)
+				- index * PAGE_SIZE;
 		else
 			count = PAGE_SIZE;
 		if (index == 0) {
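The second bitmap.c hunk fixes the byte count for the final page to include the superblock that precedes the bitmap data in the file. The arithmetic can be checked in isolation; `DEMO_PAGE_SIZE` and `DEMO_SB_SIZE` are illustrative constants standing in for `PAGE_SIZE` and `sizeof(bitmap_super_t)`:

```c
#include <assert.h>

#define DEMO_PAGE_SIZE	4096
#define DEMO_SB_SIZE	256	/* stand-in for sizeof(bitmap_super_t) */

/* Bytes to read for page 'index' of a bitmap file that stores a
 * superblock followed by 'bytes' bytes of bitmap data, mirroring the
 * corrected calculation in the hunk above. */
static long page_count(long bytes, int index, int num_pages)
{
	if (index == num_pages - 1)
		return bytes + DEMO_SB_SIZE - (long)index * DEMO_PAGE_SIZE;
	return DEMO_PAGE_SIZE;
}
```

Without the superblock term, the last page's count came up short by the superblock's size, truncating the tail of the bitmap; the first hunk applies the matching clamp (`min(i_size, PAGE_SIZE)`) when the file is smaller than one page.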
+19-8
drivers/md/dm.c
···11161116 if (size != get_capacity(md->disk))11171117 memset(&md->geometry, 0, sizeof(md->geometry));1118111811191119- __set_size(md, size);11191119+ if (md->suspended_bdev)11201120+ __set_size(md, size);11201121 if (size == 0)11211122 return 0;11221123···12651264 if (!dm_suspended(md))12661265 goto out;1267126612671267+ /* without bdev, the device size cannot be changed */12681268+ if (!md->suspended_bdev)12691269+ if (get_capacity(md->disk) != dm_table_get_size(table))12701270+ goto out;12711271+12681272 __unbind(md);12691273 r = __bind(md, table);12701274···13471341 /* This does not get reverted if there's an error later. */13481342 dm_table_presuspend_targets(map);1349134313501350- md->suspended_bdev = bdget_disk(md->disk, 0);13511351- if (!md->suspended_bdev) {13521352- DMWARN("bdget failed in dm_suspend");13531353- r = -ENOMEM;13541354- goto flush_and_out;13441344+ /* bdget() can stall if the pending I/Os are not flushed */13451345+ if (!noflush) {13461346+ md->suspended_bdev = bdget_disk(md->disk, 0);13471347+ if (!md->suspended_bdev) {13481348+ DMWARN("bdget failed in dm_suspend");13491349+ r = -ENOMEM;13501350+ goto flush_and_out;13511351+ }13551352 }1356135313571354 /*···1482147314831474 unlock_fs(md);1484147514851485- bdput(md->suspended_bdev);14861486- md->suspended_bdev = NULL;14761476+ if (md->suspended_bdev) {14771477+ bdput(md->suspended_bdev);14781478+ md->suspended_bdev = NULL;14791479+ }1487148014881481 clear_bit(DMF_SUSPENDED, &md->flags);14891482
+31-1
drivers/md/md.c
···16331633 * and 'events' is odd, we can roll back to the previous clean state */16341634 if (nospares16351635 && (mddev->in_sync && mddev->recovery_cp == MaxSector)16361636- && (mddev->events & 1))16361636+ && (mddev->events & 1)16371637+ && mddev->events != 1)16371638 mddev->events--;16381639 else {16391640 /* otherwise we have to go forward and ... */···35643563 char *ptr, *buf = NULL;35653564 int err = -ENOMEM;3566356535663566+ md_allow_write(mddev);35673567+35673568 file = kmalloc(sizeof(*file), GFP_KERNEL);35683569 if (!file)35693570 goto out;···50335030 mod_timer(&mddev->safemode_timer, jiffies + mddev->safemode_delay);50345031 }50355032}50335033+50345034+/* md_allow_write(mddev)50355035+ * Calling this ensures that the array is marked 'active' so that writes50365036+ * may proceed without blocking. It is important to call this before50375037+ * attempting a GFP_KERNEL allocation while holding the mddev lock.50385038+ * Must be called with mddev_lock held.50395039+ */50405040+void md_allow_write(mddev_t *mddev)50415041+{50425042+ if (!mddev->pers)50435043+ return;50445044+ if (mddev->ro)50455045+ return;50465046+50475047+ spin_lock_irq(&mddev->write_lock);50485048+ if (mddev->in_sync) {50495049+ mddev->in_sync = 0;50505050+ set_bit(MD_CHANGE_CLEAN, &mddev->flags);50515051+ if (mddev->safemode_delay &&50525052+ mddev->safemode == 0)50535053+ mddev->safemode = 1;50545054+ spin_unlock_irq(&mddev->write_lock);50555055+ md_update_sb(mddev, 0);50565056+ } else50575057+ spin_unlock_irq(&mddev->write_lock);50585058+}50595059+EXPORT_SYMBOL_GPL(md_allow_write);5036506050375061static DECLARE_WAIT_QUEUE_HEAD(resync_wait);50385062
···113113114114config BAYCOM_SER_FDX115115 tristate "BAYCOM ser12 fullduplex driver for AX.25"116116- depends on AX25116116+ depends on AX25 && !S390117117 select CRC_CCITT118118 ---help---119119 This is one of two drivers for Baycom style simple amateur radio···133133134134config BAYCOM_SER_HDX135135 tristate "BAYCOM ser12 halfduplex driver for AX.25"136136- depends on AX25136136+ depends on AX25 && !S390137137 select CRC_CCITT138138 ---help---139139 This is one of two drivers for Baycom style simple amateur radio···181181182182config YAM183183 tristate "YAM driver for AX.25"184184- depends on AX25184184+ depends on AX25 && !S390185185 help186186 The YAM is a modem for packet radio which connects to the serial187187 port and includes some of the functions of a Terminal Node
+20-25
drivers/net/irda/irda-usb.c
···441441 goto drop;442442 }443443444444- /* Make sure there is room for IrDA-USB header. The actual445445- * allocation will be done lower in skb_push().446446- * Also, we don't use directly skb_cow(), because it require447447- * headroom >= 16, which force unnecessary copies - Jean II */448448- if (skb_headroom(skb) < self->header_length) {449449- IRDA_DEBUG(0, "%s(), Insuficient skb headroom.\n", __FUNCTION__);450450- if (skb_cow(skb, self->header_length)) {451451- IRDA_WARNING("%s(), failed skb_cow() !!!\n", __FUNCTION__);452452- goto drop;453453- }454454- }444444+ memcpy(self->tx_buff + self->header_length, skb->data, skb->len);455445456446 /* Change setting for next frame */457457-458447 if (self->capability & IUC_STIR421X) {459448 __u8 turnaround_time;460460- __u8* frame;449449+ __u8* frame = self->tx_buff;461450 turnaround_time = get_turnaround_time( skb );462462- frame= skb_push(skb, self->header_length);463451 irda_usb_build_header(self, frame, 0);464452 frame[2] = turnaround_time;465453 if ((skb->len != 0) &&···460472 frame[1] = 0;461473 }462474 } else {463463- irda_usb_build_header(self, skb_push(skb, self->header_length), 0);475475+ irda_usb_build_header(self, self->tx_buff, 0);464476 }465477466478 /* FIXME: Make macro out of this one */467479 ((struct irda_skb_cb *)skb->cb)->context = self;468480469469- usb_fill_bulk_urb(urb, self->usbdev, 481481+ usb_fill_bulk_urb(urb, self->usbdev,470482 usb_sndbulkpipe(self->usbdev, self->bulk_out_ep),471471- skb->data, IRDA_SKB_MAX_MTU,483483+ self->tx_buff, skb->len + self->header_length,472484 write_bulk_callback, skb);473473- urb->transfer_buffer_length = skb->len;485485+474486 /* This flag (URB_ZERO_PACKET) indicates that what we send is not475487 * a continuous stream of data but separate packets.476488 * In this case, the USB layer will insert an empty USB frame (TD)···14431455 /* Remove the speed buffer */14441456 kfree(self->speed_buff);14451457 self->speed_buff = NULL;14581458+14591459+ kfree(self->tx_buff);14601460+ self->tx_buff = NULL;14461461}1447146214481463/********************** USB CONFIG SUBROUTINES **********************/···1515152415161525 IRDA_DEBUG(0, "%s(), And our endpoints are : in=%02X, out=%02X (%d), int=%02X\n",15171526 __FUNCTION__, self->bulk_in_ep, self->bulk_out_ep, self->bulk_out_mtu, self->bulk_int_ep);15181518- /* Should be 8, 16, 32 or 64 bytes */15191519- IRDA_ASSERT(self->bulk_out_mtu == 64, ;);1520152715211528 return((self->bulk_in_ep != 0) && (self->bulk_out_ep != 0));15221529}···1742175317431754 memset(self->speed_buff, 0, IRDA_USB_SPEED_MTU);1744175517561756+ self->tx_buff = kzalloc(IRDA_SKB_MAX_MTU + self->header_length,17571757+ GFP_KERNEL);17581758+ if (self->tx_buff == NULL)17591759+ goto err_out_4;17601760+17451761 ret = irda_usb_open(self);17461762 if (ret) 17471747- goto err_out_4;17631763+ goto err_out_5;1748176417491765 IRDA_MESSAGE("IrDA: Registered device %s\n", net->name);17501766 usb_set_intfdata(intf, self);···17601766 self->needspatch = (ret < 0);17611767 if (self->needspatch) {17621768 IRDA_ERROR("STIR421X: Couldn't upload patch\n");17631763- goto err_out_5;17691769+ goto err_out_6;17641770 }1765177117661772 /* replace IrDA class descriptor with what patched device is now reporting */17671773 irda_desc = irda_usb_find_class_desc (self->usbintf);17681774 if (irda_desc == NULL) {17691775 ret = -ENODEV;17701770- goto err_out_5;17761776+ goto err_out_6;17711777 }17721778 if (self->irda_desc)17731779 kfree (self->irda_desc);···1776178217771783 return 0;17781784-17791779-err_out_5:17851785+err_out_6:17811786 unregister_netdev(self->netdev);17871787+err_out_5:17881788+ kfree(self->tx_buff);17821789err_out_4:17831790 kfree(self->speed_buff);17841791err_out_3:
+1
drivers/net/irda/irda-usb.h
···156156 struct irlap_cb *irlap; /* The link layer we are binded to */157157 struct qos_info qos;158158 char *speed_buff; /* Buffer for speed changes */159159+ char *tx_buff;159160160161 struct timeval stamp;161162 struct timeval now;
···4141#define PCI_CLASS_SUBCLASS_MASK 0xffff4242#endif43434444-/* in recent 2.5 interrupt handlers have non-void return value */4545-#ifndef IRQ_RETVAL4646-typedef void irqreturn_t;4747-#define IRQ_NONE4848-#define IRQ_HANDLED4949-#define IRQ_RETVAL(x)5050-#endif5151-5252-/* some stuff need to check kernelversion. Not all 2.5 stuff was present5353- * in early 2.5.x - the test is merely to separate 2.4 from 2.55454- */5555-#include <linux/version.h>5656-5757-#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)5858-5959-/* PDE() introduced in 2.5.4 */6060-#ifdef CONFIG_PROC_FS6161-#define PDE(inode) ((inode)->i_private)6262-#endif6363-6464-/* irda crc16 calculation exported in 2.5.42 */6565-#define irda_calc_crc16(fcs,buf,len) (GOOD_FCS)6666-6767-/* we use this for unified pci device name access */6868-#define PCIDEV_NAME(pdev) ((pdev)->name)6969-7070-#else /* 2.5 or later */7171-7272-/* whatever we get from the associated struct device - bus:slot:dev.fn id */7373-#define PCIDEV_NAME(pdev) (pci_name(pdev))7474-7575-#endif7676-7744/* ================================================================ */78457946/* non-standard PCI registers */
+9-2
drivers/net/mv643xx_eth.c
···314314315315 while (mp->tx_desc_count > 0) {316316 spin_lock_irqsave(&mp->lock, flags);317317+318318+ /* tx_desc_count might have changed before acquiring the lock */319319+ if (mp->tx_desc_count <= 0) {320320+ spin_unlock_irqrestore(&mp->lock, flags);321321+ return released;322322+ }323323+317324 tx_index = mp->tx_used_desc_q;318325 desc = &mp->p_tx_desc_area[tx_index];319326 cmd_sts = desc->cmd_sts;···339332 if (skb)340333 mp->tx_skb[tx_index] = NULL;341334342342- spin_unlock_irqrestore(&mp->lock, flags);343343-344335 if (cmd_sts & ETH_ERROR_SUMMARY) {345336 printk("%s: Error in TX\n", dev->name);346337 mp->stats.tx_errors++;347338 }339339+340340+ spin_unlock_irqrestore(&mp->lock, flags);348341349342 if (cmd_sts & ETH_TX_FIRST_DESC)350343 dma_unmap_single(NULL, addr, count, DMA_TO_DEVICE);
···286286287287 return 0;288288}289289+EXPORT_SYMBOL(phy_ethtool_sset);289290290291int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)291292{···303302304303 return 0;305304}306306-305305+EXPORT_SYMBOL(phy_ethtool_gset);307306308307/* Note that this function is currently incompatible with the309308 * PHYCONTROL layer. It changes registers without regard to
+1-2
drivers/net/s2io.c
···556556 }557557 }558558559559- nic->ufo_in_band_v = kmalloc((sizeof(u64) * size), GFP_KERNEL);559559+ nic->ufo_in_band_v = kcalloc(size, sizeof(u64), GFP_KERNEL);560560 if (!nic->ufo_in_band_v)561561 return -ENOMEM;562562- memset(nic->ufo_in_band_v, 0, size);563562564563 /* Allocation and initialization of RXDs in Rings */565564 size = 0;
···968968 * We should not be called if phy_type is zero.969969 */970970 if (lp->phy_type == 0)971971- goto smc911x_phy_configure_exit;971971+ goto smc911x_phy_configure_exit_nolock;972972973973 if (smc911x_phy_reset(dev, phyaddr)) {974974 printk("%s: PHY reset timed out\n", dev->name);975975- goto smc911x_phy_configure_exit;975975+ goto smc911x_phy_configure_exit_nolock;976976 }977977 spin_lock_irqsave(&lp->lock, flags);978978···1041104110421042smc911x_phy_configure_exit:10431043 spin_unlock_irqrestore(&lp->lock, flags);10441044+smc911x_phy_configure_exit_nolock:10441045 lp->work_pending = 0;10451046}10461047
···654654 * VIA bridges which have VLink655655 */656656657657-static const struct pci_device_id via_vlink_fixup_tbl[] = {658658- /* Internal devices need IRQ line routing, pre VLink */659659- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_82C686), 0 },660660- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8231), 17 },661661- /* Devices with VLink */662662- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8233_0), 17},663663- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8233A), 17 },664664- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8233C_0), 17 },665665- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8235), 16 },666666- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8237), 15 },667667- { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8237A), 15 },668668- { 0, },669669-};657657+static int via_vlink_dev_lo = -1, via_vlink_dev_hi = 18;658658+659659+static void quirk_via_bridge(struct pci_dev *dev)660660+{661661+ /* See what bridge we have and find the device ranges */662662+ switch (dev->device) {663663+ case PCI_DEVICE_ID_VIA_82C686:664664+ /* The VT82C686 is special, it attaches to PCI and can have665665+ any device number. All its subdevices are functions of666666+ that single device. */667667+ via_vlink_dev_lo = PCI_SLOT(dev->devfn);668668+ via_vlink_dev_hi = PCI_SLOT(dev->devfn);669669+ break;670670+ case PCI_DEVICE_ID_VIA_8237:671671+ case PCI_DEVICE_ID_VIA_8237A:672672+ via_vlink_dev_lo = 15;673673+ break;674674+ case PCI_DEVICE_ID_VIA_8235:675675+ via_vlink_dev_lo = 16;676676+ break;677677+ case PCI_DEVICE_ID_VIA_8231:678678+ case PCI_DEVICE_ID_VIA_8233_0:679679+ case PCI_DEVICE_ID_VIA_8233A:680680+ case PCI_DEVICE_ID_VIA_8233C_0:681681+ via_vlink_dev_lo = 17;682682+ break;683683+ }684684+}685685+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_bridge);686686+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8231, quirk_via_bridge);687687+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8233_0, quirk_via_bridge);688688+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8233A, quirk_via_bridge);689689+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8233C_0, quirk_via_bridge);690690+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235, quirk_via_bridge);691691+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, quirk_via_bridge);692692+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237A, quirk_via_bridge);670693671694/**672695 * quirk_via_vlink - VIA VLink IRQ number update···698675 * If the device we are dealing with is on a PIC IRQ we need to699676 * ensure that the IRQ line register which usually is not relevant700677 * for PCI cards, is actually written so that interrupts get sent701701- * to the right place678678+ * to the right place.679679+ * We only do this on systems where a VIA south bridge was detected,680680+ * and only for VIA devices on the motherboard (see quirk_via_bridge681681+ * above).702682 */703683704684static void quirk_via_vlink(struct pci_dev *dev)705685{706706- const struct pci_device_id *via_vlink_fixup;707707- static int dev_lo = -1, dev_hi = 18;708686 u8 irq, new_irq;709687710710- /* Check if we have VLink and cache the result */711711-712712- /* Checked already - no */713713- if (dev_lo == -2)688688+ /* Check if we have VLink at all */689689+ if (via_vlink_dev_lo == -1)714690 return;715691716716- /* Not checked - see what bridge we have and find the device717717- ranges */718718-719719- if (dev_lo == -1) {720720- via_vlink_fixup = pci_find_present(via_vlink_fixup_tbl);721721- if (via_vlink_fixup == NULL) {722722- dev_lo = -2;723723- return;724724- }725725- dev_lo = via_vlink_fixup->driver_data;726726- /* 82C686 is special - 0/0 */727727- if (dev_lo == 0)728728- dev_hi = 0;729729- }730692 new_irq = dev->irq;731693732694 /* Don't quirk interrupts outside the legacy IRQ range */···719711 return;720712721713 /* Internal device ? */722722- if (dev->bus->number != 0 || PCI_SLOT(dev->devfn) > dev_hi ||723723- PCI_SLOT(dev->devfn) < dev_lo)714714+ if (dev->bus->number != 0 || PCI_SLOT(dev->devfn) > via_vlink_dev_hi ||715715+ PCI_SLOT(dev->devfn) < via_vlink_dev_lo)724716 return;725717726718 /* This is an internal VLink device on a PIC interrupt. The BIOS···12621254 pci_read_config_dword(pdev, 0x40, &conf);12631255 /* Enable dual function mode, AHCI on fn 0, IDE fn1 */12641256 /* Set the class codes correctly and then direct IDE 0 */12651265- conf &= ~0x000F0200; /* Clear bit 9 and 16-19 */12661266- conf |= 0x00C20002; /* Set bit 1, 17, 22, 23 */12571257+ conf &= ~0x000FF200; /* Clear bit 9 and 12-19 */12581258+ conf |= 0x00C2A102; /* Set 1, 8, 13, 15, 17, 22, 23 */12671259 pci_write_config_dword(pdev, 0x40, conf);1268126012691261 /* Reconfigure so that the PCI scanner discovers the
+2-8
drivers/pci/search.c
···200200 * can cause some machines to crash. So here we detect and flag that201201 * situation and bail out early.202202 */203203- if (unlikely(list_empty(&pci_devices))) {204204- printk(KERN_INFO "pci_find_subsys() called while pci_devices "205205- "is still empty\n");203203+ if (unlikely(list_empty(&pci_devices)))206204 return NULL;207207- }208205 down_read(&pci_bus_sem);209206 n = from ? from->global_list.next : pci_devices.next;210207···275278 * can cause some machines to crash. So here we detect and flag that276279 * situation and bail out early.277280 */278278- if (unlikely(list_empty(&pci_devices))) {279279- printk(KERN_NOTICE "pci_get_subsys() called while pci_devices "280280- "is still empty\n");281281+ if (unlikely(list_empty(&pci_devices)))281282 return NULL;282282- }283283 down_read(&pci_bus_sem);284284 n = from ? from->global_list.next : pci_devices.next;285285
···7676extern int ql4xextended_error_logging;7777extern int ql4xdiscoverywait;7878extern int ql4xdontresethba;7979+extern int ql4_mod_unload;7980#endif /* _QLA4x_GBL_H */
···433433 readl(&ha->reg->mailbox[i]);434434435435 set_bit(AF_MBOX_COMMAND_DONE, &ha->flags);436436- wake_up(&ha->mailbox_wait_queue);437436 }438437 } else if (mbox_status >> 12 == MBOX_ASYNC_EVENT_STATUS) {439438 /* Immediately process the AENs that don't require much work.···685686 &ha->reg->ctrl_status);686687 readl(&ha->reg->ctrl_status);687688688688- set_bit(DPC_RESET_HA_INTR, &ha->dpc_flags);689689+ if (!ql4_mod_unload)690690+ set_bit(DPC_RESET_HA_INTR, &ha->dpc_flags);689691690692 break;691693 } else if (intr_status & INTR_PENDING) {
+21-14
drivers/scsi/qla4xxx/ql4_mbx.c
···2929 u_long wait_count;3030 uint32_t intr_status;3131 unsigned long flags = 0;3232- DECLARE_WAITQUEUE(wait, current);3333-3434- mutex_lock(&ha->mbox_sem);3535-3636- /* Mailbox code active */3737- set_bit(AF_MBOX_COMMAND, &ha->flags);38323933 /* Make sure that pointers are valid */4034 if (!mbx_cmd || !mbx_sts) {4135 DEBUG2(printk("scsi%ld: %s: Invalid mbx_cmd or mbx_sts "4236 "pointer\n", ha->host_no, __func__));4343- goto mbox_exit;3737+ return status;3838+ }3939+ /* Mailbox code active */4040+ wait_count = MBOX_TOV * 100;4141+4242+ while (wait_count--) {4343+ mutex_lock(&ha->mbox_sem);4444+ if (!test_bit(AF_MBOX_COMMAND, &ha->flags)) {4545+ set_bit(AF_MBOX_COMMAND, &ha->flags);4646+ mutex_unlock(&ha->mbox_sem);4747+ break;4848+ }4949+ mutex_unlock(&ha->mbox_sem);5050+ if (!wait_count) {5151+ DEBUG2(printk("scsi%ld: %s: mbox_sem failed\n",5252+ ha->host_no, __func__));5353+ return status;5454+ }5555+ msleep(10);4456 }45574658 /* To prevent overwriting mailbox registers for a command that has···8573 spin_unlock_irqrestore(&ha->hardware_lock, flags);86748775 /* Wait for completion */8888- set_current_state(TASK_UNINTERRUPTIBLE);8989- add_wait_queue(&ha->mailbox_wait_queue, &wait);90769177 /*9278 * If we don't want status, don't wait for the mailbox command to···9383 */9484 if (outCount == 0) {9585 status = QLA_SUCCESS;9696- set_current_state(TASK_RUNNING);9797- remove_wait_queue(&ha->mailbox_wait_queue, &wait);9886 goto mbox_exit;9987 }10088 /* Wait for command to complete */···116108 spin_unlock_irqrestore(&ha->hardware_lock, flags);117109 msleep(10);118110 }119119- set_current_state(TASK_RUNNING);120120- remove_wait_queue(&ha->mailbox_wait_queue, &wait);121111122112 /* Check for mailbox timeout. */123113 if (!test_bit(AF_MBOX_COMMAND_DONE, &ha->flags)) {···161155 spin_unlock_irqrestore(&ha->hardware_lock, flags);162156163157mbox_exit:158158+ mutex_lock(&ha->mbox_sem);164159 clear_bit(AF_MBOX_COMMAND, &ha->flags);165165- clear_bit(AF_MBOX_COMMAND_DONE, &ha->flags);166160 mutex_unlock(&ha->mbox_sem);161161+ clear_bit(AF_MBOX_COMMAND_DONE, &ha->flags);167162168163 return status;169164}
+39-25
drivers/scsi/qla4xxx/ql4_os.c
···4040 "Option to enable extended error logging, "4141 "Default is 0 - no logging, 1 - debug logging");42424343+int ql4_mod_unload = 0;4444+4345/*4446 * SCSI host template entry points4547 */···424422 goto qc_host_busy;425423 }426424425425+ if (test_bit(DPC_RESET_HA_INTR, &ha->dpc_flags))426426+ goto qc_host_busy;427427+427428 spin_unlock_irq(ha->host->host_lock);428429429430 srb = qla4xxx_get_new_srb(ha, ddb_entry, cmd, done);···712707 return stat;713708}714709715715-/**716716- * qla4xxx_soft_reset - performs soft reset.717717- * @ha: Pointer to host adapter structure.718718- **/719719-int qla4xxx_soft_reset(struct scsi_qla_host *ha)710710+static void qla4xxx_hw_reset(struct scsi_qla_host *ha)720711{721721- uint32_t max_wait_time;722722- unsigned long flags = 0;723723- int status = QLA_ERROR;724712 uint32_t ctrl_status;713713+ unsigned long flags = 0;714714+715715+ DEBUG2(printk(KERN_ERR "scsi%ld: %s\n", ha->host_no, __func__));725716726717 spin_lock_irqsave(&ha->hardware_lock, flags);727718···734733 readl(&ha->reg->ctrl_status);735734736735 spin_unlock_irqrestore(&ha->hardware_lock, flags);736736+}737737+738738+/**739739+ * qla4xxx_soft_reset - performs soft reset.740740+ * @ha: Pointer to host adapter structure.741741+ **/742742+int qla4xxx_soft_reset(struct scsi_qla_host *ha)743743+{744744+ uint32_t max_wait_time;745745+ unsigned long flags = 0;746746+ int status = QLA_ERROR;747747+ uint32_t ctrl_status;748748+749749+ qla4xxx_hw_reset(ha);737750738751 /* Wait until the Network Reset Intr bit is cleared */739752 max_wait_time = RESET_INTR_TOV;···981966 struct scsi_qla_host *ha =982967 container_of(work, struct scsi_qla_host, dpc_work);983968 struct ddb_entry *ddb_entry, *dtemp;969969+ int status = QLA_ERROR;984970985971 DEBUG2(printk("scsi%ld: %s: DPC handler waking up."986986- "flags = 0x%08lx, dpc_flags = 0x%08lx\n",987987- ha->host_no, __func__, ha->flags, ha->dpc_flags));972972+ "flags = 0x%08lx, dpc_flags = 0x%08lx ctrl_stat = 0x%08x\n",973973+ ha->host_no, __func__, ha->flags, ha->dpc_flags,974974+ readw(&ha->reg->ctrl_status)));988975989976 /* Initialization not yet finished. Don't do anything yet. */990977 if (!test_bit(AF_INIT_DONE, &ha->flags))···1000983 test_bit(DPC_RESET_HA, &ha->dpc_flags))1001984 qla4xxx_recover_adapter(ha, PRESERVE_DDB_LIST);100298510031003- if (test_and_clear_bit(DPC_RESET_HA_INTR, &ha->dpc_flags)) {986986+ if (test_bit(DPC_RESET_HA_INTR, &ha->dpc_flags)) {1004987 uint8_t wait_time = RESET_INTR_TOV;10051005- unsigned long flags = 0;100698810071007- qla4xxx_flush_active_srbs(ha);10081008-10091009- spin_lock_irqsave(&ha->hardware_lock, flags);1010989 while ((readw(&ha->reg->ctrl_status) &1011990 (CSR_SOFT_RESET | CSR_FORCE_SOFT_RESET)) != 0) {1012991 if (--wait_time == 0)1013992 break;10141014-10151015- spin_unlock_irqrestore(&ha->hardware_lock,10161016- flags);10171017-1018993 msleep(1000);10191019-10201020- spin_lock_irqsave(&ha->hardware_lock, flags);1021994 }10221022- spin_unlock_irqrestore(&ha->hardware_lock, flags);10231023-1024995 if (wait_time == 0)1025996 DEBUG2(printk("scsi%ld: %s: SR|FSR "1026997 "bit not cleared-- resetting\n",1027998 ha->host_no, __func__));999999+ qla4xxx_flush_active_srbs(ha);10001000+ if (ql4xxx_lock_drvr_wait(ha) == QLA_SUCCESS) {10011001+ qla4xxx_process_aen(ha, FLUSH_DDB_CHANGED_AENS);10021002+ status = qla4xxx_initialize_adapter(ha,10031003+ PRESERVE_DDB_LIST);10041004+ }10051005+ clear_bit(DPC_RESET_HA_INTR, &ha->dpc_flags);10061006+ if (status == QLA_SUCCESS)10071007+ qla4xxx_enable_intrs(ha);10281008 }10291009 }10301010···1076106210771063 /* Issue Soft Reset to put firmware in unknown state */10781064 if (ql4xxx_lock_drvr_wait(ha) == QLA_SUCCESS)10791079- qla4xxx_soft_reset(ha);10651065+ qla4xxx_hw_reset(ha);1080106610811067 /* Remove timer thread, if present */10821068 if (ha->timer_active)···12121198 INIT_LIST_HEAD(&ha->free_srb_q);1213119912141200 mutex_init(&ha->mbox_sem);12151215- init_waitqueue_head(&ha->mailbox_wait_queue);1216120112171202 spin_lock_init(&ha->hardware_lock);12181203···1678166516791666static void __exit qla4xxx_module_exit(void)16801667{16681668+ ql4_mod_unload = 1;16811669 pci_unregister_driver(&qla4xxx_pci_driver);16821670 iscsi_unregister_transport(&qla4xxx_iscsi_transport);16831671 kmem_cache_destroy(srb_cachep);
+1-1
drivers/scsi/qla4xxx/ql4_version.h
···55 * See LICENSE.qla4xxx for copyright and licensing details.66 */7788-#define QLA4XXX_DRIVER_VERSION "5.00.07-k"88+#define QLA4XXX_DRIVER_VERSION "5.00.07-k1"
+6
drivers/scsi/scsi_scan.c
···14531453 struct device *parent = &shost->shost_gendev;14541454 struct scsi_target *starget;1455145514561456+ if (strncmp(scsi_scan_type, "none", 4) == 0)14571457+ return ERR_PTR(-ENODEV);14581458+14591459+ if (!shost->async_scan)14601460+ scsi_complete_async_scans();14611461+14561462 starget = scsi_alloc_target(parent, channel, id);14571463 if (!starget)14581464 return ERR_PTR(-ENOMEM);
···589589 */590590 if (co->index >= UART_NR)591591 co->index = 0;592592+ if (!amba_ports[co->index])593593+ return -ENODEV;592594 port = &amba_ports[co->index]->port;593595594596 if (options)
+2
drivers/serial/amba-pl011.c
···661661 if (co->index >= UART_NR)662662 co->index = 0;663663 uap = amba_ports[co->index];664664+ if (!uap)665665+ return -ENODEV;664666665667 uap->port.uartclk = clk_get_rate(uap->clk);666668
···2727static int funsoft_ioctl(struct usb_serial_port *port, struct file *file,2828 unsigned int cmd, unsigned long arg)2929{3030- struct termios t;3030+ struct ktermios t;31313232 dbg("%s - port %d, cmd 0x%04x", __FUNCTION__, port->number, cmd);3333
+1
fs/9p/error.c
···83838484 if (errno == 0) {8585 /* TODO: if error isn't found, add it dynamically */8686+ errstr[len] = 0;8687 printk(KERN_ERR "%s: errstr :%s: not found\n", __FUNCTION__,8788 errstr);8889 errno = 1;
+66-3
fs/9p/fid.c
···2525#include <linux/fs.h>2626#include <linux/sched.h>2727#include <linux/idr.h>2828+#include <asm/semaphore.h>28292930#include "debug.h"3031#include "v9fs.h"···8584 new->iounit = 0;8685 new->rdir_pos = 0;8786 new->rdir_fcall = NULL;8787+ init_MUTEX(&new->lock);8888+ INIT_LIST_HEAD(&new->list);89899090 return new;···104102}105103106104/**107107- * v9fs_fid_lookup - retrieve the right fid from a particular dentry105105+ * v9fs_fid_lookup - return a locked fid from a dentry108106 * @dentry: dentry to look for fid in109109- * @type: intent of lookup (operation or traversal)110107 *111111- * find a fid in the dentry108108+ * find a fid in the dentry, obtain its semaphore and return a reference to it.109109+ * code calling lookup is responsible for releasing lock112110 *113111 * TODO: only match fids that have the same uid as current user114112 *···126124127125 if (!return_fid) {128126 dprintk(DEBUG_ERROR, "Couldn't find a fid in dentry\n");127127+ return_fid = ERR_PTR(-EBADF);129128 }130129130130+ if(down_interruptible(&return_fid->lock))131131+ return ERR_PTR(-EINTR);132132+131133 return return_fid;134134+}135135+136136+/**137137+ * v9fs_fid_clone - lookup the fid for a dentry, clone a private copy and release it138138+ * @dentry: dentry to look for fid in139139+ *140140+ * find a fid in the dentry and then clone to a new private fid141141+ *142142+ * TODO: only match fids that have the same uid as current user143143+ *144144+ */145145+146146+struct v9fs_fid *v9fs_fid_clone(struct dentry *dentry)147147+{148148+ struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode);149149+ struct v9fs_fid *base_fid, *new_fid = ERR_PTR(-EBADF);150150+ struct v9fs_fcall *fcall = NULL;151151+ int fid, err;152152+153153+ base_fid = v9fs_fid_lookup(dentry);154154+155155+ if(IS_ERR(base_fid))156156+ return base_fid;157157+158158+ if(base_fid) { /* clone fid */159159+ fid = v9fs_get_idpool(&v9ses->fidpool);160160+ if (fid < 0) {161161+ eprintk(KERN_WARNING, "newfid fails!\n");162162+ new_fid = ERR_PTR(-ENOSPC);163163+ goto Release_Fid;164164+ }165165+166166+ err = v9fs_t_walk(v9ses, base_fid->fid, fid, NULL, &fcall);167167+ if (err < 0) {168168+ dprintk(DEBUG_ERROR, "clone walk didn't work\n");169169+ v9fs_put_idpool(fid, &v9ses->fidpool);170170+ new_fid = ERR_PTR(err);171171+ goto Free_Fcall;172172+ }173173+ new_fid = v9fs_fid_create(v9ses, fid);174174+ if (new_fid == NULL) {175175+ dprintk(DEBUG_ERROR, "out of memory\n");176176+ new_fid = ERR_PTR(-ENOMEM);177177+ }178178+Free_Fcall:179179+ kfree(fcall);180180+ }181181+182182+Release_Fid:183183+ up(&base_fid->lock);184184+ return new_fid;185185+}186186+187187+void v9fs_fid_clunk(struct v9fs_session_info *v9ses, struct v9fs_fid *fid)188188+{189189+ v9fs_t_clunk(v9ses, fid->fid);190190+ v9fs_fid_destroy(fid);132191}
+5
fs/9p/fid.h
···3030 struct list_head list; /* list of fids associated with a dentry */3131 struct list_head active; /* XXX - debug */32323333+ struct semaphore lock;3434+3335 u32 fid;3436 unsigned char fidopen; /* set when fid is opened */3537 unsigned char fidclunked; /* set when fid has already been clunked */···5755void v9fs_fid_destroy(struct v9fs_fid *fid);5856struct v9fs_fid *v9fs_fid_create(struct v9fs_session_info *, int fid);5957int v9fs_fid_insert(struct v9fs_fid *fid, struct dentry *dentry);5858+struct v9fs_fid *v9fs_fid_clone(struct dentry *dentry);5959+void v9fs_fid_clunk(struct v9fs_session_info *v9ses, struct v9fs_fid *fid);6060+
···298298 struct task_struct *tsk = current;299299 DECLARE_WAITQUEUE(wait, tsk);300300301301+ spin_lock_irq(&ctx->ctx_lock);301302 if (!ctx->reqs_active)302302- return;303303+ goto out;303304304305 add_wait_queue(&ctx->wait, &wait);305306 set_task_state(tsk, TASK_UNINTERRUPTIBLE);306307 while (ctx->reqs_active) {308308+ spin_unlock_irq(&ctx->ctx_lock);307309 schedule();308310 set_task_state(tsk, TASK_UNINTERRUPTIBLE);311311+ spin_lock_irq(&ctx->ctx_lock);309312 }310313 __set_task_state(tsk, TASK_RUNNING);311314 remove_wait_queue(&ctx->wait, &wait);315315+316316+out:317317+ spin_unlock_irq(&ctx->ctx_lock);312318}313319314320/* wait_on_sync_kiocb:···430424 ring = kmap_atomic(ctx->ring_info.ring_pages[0], KM_USER0);431425 if (ctx->reqs_active < aio_ring_avail(&ctx->ring_info, ring)) {432426 list_add(&req->ki_list, &ctx->active_reqs);433433- get_ioctx(ctx);434427 ctx->reqs_active++;435428 okay = 1;436429 }···541536 spin_lock_irq(&ctx->ctx_lock);542537 ret = __aio_put_req(ctx, req);543538 spin_unlock_irq(&ctx->ctx_lock);544544- if (ret)545545- put_ioctx(ctx);546539 return ret;547540}548541···782779 */783780 iocb->ki_users++; /* grab extra reference */784781 aio_run_iocb(iocb);785785- if (__aio_put_req(ctx, iocb)) /* drop extra ref */786786- put_ioctx(ctx);782782+ __aio_put_req(ctx, iocb);787783 }788784 if (!list_empty(&ctx->run_list))789785 return 1;···999997 /* everything turned out well, dispose of the aiocb. */1000998 ret = __aio_put_req(ctx, iocb);100199910021002- spin_unlock_irqrestore(&ctx->ctx_lock, flags);10031003-10041000 if (waitqueue_active(&ctx->wait))10051001 wake_up(&ctx->wait);1006100210071007- if (ret)10081008- put_ioctx(ctx);10091009-10031003+ spin_unlock_irqrestore(&ctx->ctx_lock, flags);10101004 return ret;10111005}10121006
+48-3
fs/binfmt_elf.c
···682682 retval = PTR_ERR(interpreter);683683 if (IS_ERR(interpreter))684684 goto out_free_interp;685685+686686+ /*687687+ * If the binary is not readable then enforce688688+ * mm->dumpable = 0 regardless of the interpreter's689689+ * permissions.690690+ */691691+ if (file_permission(interpreter, MAY_READ) < 0)692692+ bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;693693+685694 retval = kernel_read(interpreter, 0, bprm->buf,686695 BINPRM_BUF_SIZE);687696 if (retval != BINPRM_BUF_SIZE) {···11871178 */11881179static int maydump(struct vm_area_struct *vma)11891180{11811181+ /* The vma can be set up to tell us the answer directly. */11821182+ if (vma->vm_flags & VM_ALWAYSDUMP)11831183+ return 1;11841184+11901185 /* Do not dump I/O mapped devices or special mappings */11911186 if (vma->vm_flags & (VM_IO | VM_RESERVED))11921187 return 0;···14371424 return sz;14381425}1439142614271427+static struct vm_area_struct *first_vma(struct task_struct *tsk,14281428+ struct vm_area_struct *gate_vma)14291429+{14301430+ struct vm_area_struct *ret = tsk->mm->mmap;14311431+14321432+ if (ret)14331433+ return ret;14341434+ return gate_vma;14351435+}14361436+/*14371437+ * Helper function for iterating across a vma list. It ensures that the caller14381438+ * will visit `gate_vma' prior to terminating the search.14391439+ */14401440+static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,14411441+ struct vm_area_struct *gate_vma)14421442+{14431443+ struct vm_area_struct *ret;14441444+14451445+ ret = this_vma->vm_next;14461446+ if (ret)14471447+ return ret;14481448+ if (this_vma == gate_vma)14491449+ return NULL;14501450+ return gate_vma;14511451+}14521452+14401453/*14411454 * Actual dumper14421455 *···14781439 int segs;14791440 size_t size = 0;14801441 int i;14811481- struct vm_area_struct *vma;14421442+ struct vm_area_struct *vma, *gate_vma;14821443 struct elfhdr *elf = NULL;14831444 loff_t offset = 0, dataoff, foffset;14841445 unsigned long limit = current->signal->rlim[RLIMIT_CORE].rlim_cur;···15641525 segs += ELF_CORE_EXTRA_PHDRS;15651526#endif1566152715281528+ gate_vma = get_gate_vma(current);15291529+ if (gate_vma != NULL)15301530+ segs++;15311531+15671532 /* Set up header */15681533 fill_elf_header(elf, segs + 1); /* including notes section */15691534···16351592 dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);1636159316371594 /* Write program headers for segments dump */16381638- for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) {15951595+ for (vma = first_vma(current, gate_vma); vma != NULL;15961596+ vma = next_vma(vma, gate_vma)) {16391597 struct elf_phdr phdr;16401598 size_t sz;16411599···16851641 /* Align to page */16861642 DUMP_SEEK(dataoff - foffset);1687164316881688- for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) {16441644+ for (vma = first_vma(current, gate_vma); vma != NULL;16451645+ vma = next_vma(vma, gate_vma)) {16891646 unsigned long addr;1690164716911648 if (!maydump(vma))
+8
fs/binfmt_elf_fdpic.c
···234234 goto error;235235 }236236237237+ /*238238+ * If the binary is not readable then enforce239239+ * mm->dumpable = 0 regardless of the interpreter's240240+ * permissions.241241+ */242242+ if (file_permission(interpreter, MAY_READ) < 0)243243+ bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;244244+237245 retval = kernel_read(interpreter, 0, bprm->buf,238246 BINPRM_BUF_SIZE);239247 if (retval < 0)
+50-1
fs/block_dev.c
···129129 return 0;130130}131131132132+static int133133+blkdev_get_blocks(struct inode *inode, sector_t iblock,134134+ struct buffer_head *bh, int create)135135+{136136+ sector_t end_block = max_block(I_BDEV(inode));137137+ unsigned long max_blocks = bh->b_size >> inode->i_blkbits;138138+139139+ if ((iblock + max_blocks) > end_block) {140140+ max_blocks = end_block - iblock;141141+ if ((long)max_blocks <= 0) {142142+ if (create)143143+ return -EIO; /* write fully beyond EOF */144144+ /*145145+ * It is a read which is fully beyond EOF. We return146146+ * a !buffer_mapped buffer147147+ */148148+ max_blocks = 0;149149+ }150150+ }151151+152152+ bh->b_bdev = I_BDEV(inode);153153+ bh->b_blocknr = iblock;154154+ bh->b_size = max_blocks << inode->i_blkbits;155155+ if (max_blocks)156156+ set_buffer_mapped(bh);157157+ return 0;158158+}159159+160160+static ssize_t161161+blkdev_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,162162+ loff_t offset, unsigned long nr_segs)163163+{164164+ struct file *file = iocb->ki_filp;165165+ struct inode *inode = file->f_mapping->host;166166+167167+ return blockdev_direct_IO_no_locking(rw, iocb, inode, I_BDEV(inode),168168+ iov, offset, nr_segs, blkdev_get_blocks, NULL);169169+}170170+171171+#if 0132172static int blk_end_aio(struct bio *bio, unsigned int bytes_done, int error)133173{134174 struct kiocb *iocb = bio->bi_private;···186146 iocb->ki_nbytes = -EIO;187147188148 if (atomic_dec_and_test(bio_count)) {189189- if (iocb->ki_nbytes < 0)149149+ if ((long)iocb->ki_nbytes < 0)190150 aio_complete(iocb, iocb->ki_nbytes, 0);191151 else192152 aio_complete(iocb, iocb->ki_left, 0);···228188 pvec->idx = 0;229189 }230190 return pvec->page[pvec->idx++];191191+}192192+193193+/* return a page back to pvec array */194194+static void blk_unget_page(struct page *page, struct pvec *pvec)195195+{196196+ pvec->page[--pvec->idx] = page;231197}232198233199static ssize_t···324278 count = min(count, nbytes);325279 goto same_bio;326280 }281281+ } else {282282+ blk_unget_page(page, &pvec);327283 }328284329285 /* bio is ready, submit it */···363315 return PTR_ERR(page);364316 goto completion;365317}318318+#endif366319367320static int blkdev_writepage(struct page *page, struct writeback_control *wbc)368321{
+18-1
fs/buffer.c
···
 	int ret = 0;

 	BUG_ON(!PageLocked(page));
-	if (PageDirty(page) || PageWriteback(page))
+	if (PageWriteback(page))
 		return 0;

 	if (mapping == NULL) {	/* can this still happen? */
···
 
 	spin_lock(&mapping->private_lock);
 	ret = drop_buffers(page, &buffers_to_free);
+
+	/*
+	 * If the filesystem writes its buffers by hand (eg ext3)
+	 * then we can have clean buffers against a dirty page.  We
+	 * clean the page here; otherwise the VM will never notice
+	 * that the filesystem did any IO at all.
+	 *
+	 * Also, during truncate, discard_buffer will have marked all
+	 * the page's buffers clean.  We discover that here and clean
+	 * the page also.
+	 *
+	 * private_lock must be held over this entire operation in order
+	 * to synchronise against __set_page_dirty_buffers and prevent the
+	 * dirty bit from being lost.
+	 */
+	if (ret)
+		cancel_dirty_page(page, PAGE_CACHE_SIZE);
 	spin_unlock(&mapping->private_lock);
 out:
 	if (buffers_to_free) {
+4
fs/cifs/CHANGES
···
+Version 1.47
+------------
+Fix oops in list_del during mount caused by unaligned string.
+
 Version 1.46
 ------------
 Support deep tree mounts.  Better support OS/2, Win9x (DOS) time stamps.
+2-2
fs/cifs/cifs_debug.c
···
 		ses = list_entry(tmp, struct cifsSesInfo, cifsSessionList);
 		if((ses->serverDomain == NULL) || (ses->serverOS == NULL) ||
 		   (ses->serverNOS == NULL)) {
-			buf += sprintf("\nentry for %s not fully displayed\n\t",
-					ses->serverName);
+			buf += sprintf(buf, "\nentry for %s not fully "
+					"displayed\n\t", ses->serverName);

 		} else {
 			length =
···
 	cFYI(1,("bleft %d",bleft));


-	/* word align, if bytes remaining is not even */
-	if(bleft % 2) {
-		bleft--;
-		data++;
-	}
+	/* SMB header is unaligned, so cifs servers word align start of
+	   Unicode strings */
+	data++;
+	bleft--;	/* Windows servers do not always double null terminate
+			   their final Unicode string - in which case we
+			   now will not attempt to decode the byte of junk
+			   which follows it */
+
 	words_left = bleft / 2;

 	/* save off server operating system */
+12-1
fs/fs-writeback.c
···
 	WARN_ON(inode->i_state & I_WILL_FREE);

 	if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_LOCK)) {
+		struct address_space *mapping = inode->i_mapping;
+		int ret;
+
 		list_move(&inode->i_list, &inode->i_sb->s_dirty);
-		return 0;
+
+		/*
+		 * Even if we don't actually write the inode itself here,
+		 * we can at least start some of the data writeout..
+		 */
+		spin_unlock(&inode_lock);
+		ret = do_writepages(mapping, wbc);
+		spin_lock(&inode_lock);
+		return ret;
 	}

 	/*
···
 extern int unlink_file(const char *file);
 extern int do_mkdir(const char *file, int mode);
 extern int do_rmdir(const char *file);
-extern int do_mknod(const char *file, int mode, int dev);
+extern int do_mknod(const char *file, int mode, unsigned int major, unsigned int minor);
 extern int link_file(const char *from, const char *to);
 extern int do_readlink(char *file, char *buf, int size);
 extern int rename_file(char *from, char *to);
···
 	return(0);
 }

-int do_mknod(const char *file, int mode, int dev)
+int do_mknod(const char *file, int mode, unsigned int major, unsigned int minor)
 {
 	int err;

-	err = mknod(file, mode, dev);
+	err = mknod(file, mode, makedev(major, minor));
 	if(err) return(-errno);
 	return(0);
 }
+2-2
fs/lockd/clntlock.c
···
 	lock_kernel();
 	lockd_up(0); /* note: this cannot fail as lockd is already running */

-	dprintk("lockd: reclaiming locks for host %s", host->h_name);
+	dprintk("lockd: reclaiming locks for host %s\n", host->h_name);

 restart:
 	nsmstate = host->h_nsmstate;
···
 
 	host->h_reclaiming = 0;
 	up_write(&host->h_rwsem);
-	dprintk("NLM: done reclaiming locks for host %s", host->h_name);
+	dprintk("NLM: done reclaiming locks for host %s\n", host->h_name);

 	/* Now, wake up all processes that sleep on a blocked lock */
 	list_for_each_entry(block, &nlm_blocked, b_list) {
+1-1
fs/nfs/dir.c
···

 	lock_kernel();

-	res = nfs_revalidate_mapping(inode, filp->f_mapping);
+	res = nfs_revalidate_mapping_nolock(inode, filp->f_mapping);
 	if (res < 0) {
 		unlock_kernel();
 		return res;
+3-2
fs/nfs/file.c
···
 			BUG();
 	}
 	if (res < 0)
-		printk(KERN_WARNING "%s: VFS is out of sync with lock manager!\n",
-				__FUNCTION__);
+		dprintk(KERN_WARNING "%s: VFS is out of sync with lock manager"
+			" - error %d!\n",
+				__FUNCTION__, res);
 	return res;
 }
+67-30
fs/nfs/inode.c
···
 	return __nfs_revalidate_inode(server, inode);
 }

+static int nfs_invalidate_mapping_nolock(struct inode *inode, struct address_space *mapping)
+{
+	struct nfs_inode *nfsi = NFS_I(inode);
+
+	if (mapping->nrpages != 0) {
+		int ret = invalidate_inode_pages2(mapping);
+		if (ret < 0)
+			return ret;
+	}
+	spin_lock(&inode->i_lock);
+	nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
+	if (S_ISDIR(inode->i_mode)) {
+		memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf));
+		/* This ensures we revalidate child dentries */
+		nfsi->cache_change_attribute = jiffies;
+	}
+	spin_unlock(&inode->i_lock);
+	nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE);
+	dfprintk(PAGECACHE, "NFS: (%s/%Ld) data cache invalidated\n",
+			inode->i_sb->s_id, (long long)NFS_FILEID(inode));
+	return 0;
+}
+
+static int nfs_invalidate_mapping(struct inode *inode, struct address_space *mapping)
+{
+	int ret = 0;
+
+	mutex_lock(&inode->i_mutex);
+	if (NFS_I(inode)->cache_validity & NFS_INO_INVALID_DATA) {
+		ret = nfs_sync_mapping(mapping);
+		if (ret == 0)
+			ret = nfs_invalidate_mapping_nolock(inode, mapping);
+	}
+	mutex_unlock(&inode->i_mutex);
+	return ret;
+}
+
+/**
+ * nfs_revalidate_mapping_nolock - Revalidate the pagecache
+ * @inode - pointer to host inode
+ * @mapping - pointer to mapping
+ */
+int nfs_revalidate_mapping_nolock(struct inode *inode, struct address_space *mapping)
+{
+	struct nfs_inode *nfsi = NFS_I(inode);
+	int ret = 0;
+
+	if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE)
+			|| nfs_attribute_timeout(inode) || NFS_STALE(inode)) {
+		ret = __nfs_revalidate_inode(NFS_SERVER(inode), inode);
+		if (ret < 0)
+			goto out;
+	}
+	if (nfsi->cache_validity & NFS_INO_INVALID_DATA)
+		ret = nfs_invalidate_mapping_nolock(inode, mapping);
+out:
+	return ret;
+}
+
 /**
  * nfs_revalidate_mapping - Revalidate the pagecache
  * @inode - pointer to host inode
  * @mapping - pointer to mapping
+ *
+ * This version of the function will take the inode->i_mutex and attempt to
+ * flush out all dirty data if it needs to invalidate the page cache.
  */
 int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
 {
 	struct nfs_inode *nfsi = NFS_I(inode);
 	int ret = 0;

-	if (NFS_STALE(inode))
-		ret = -ESTALE;
 	if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE)
-			|| nfs_attribute_timeout(inode))
+			|| nfs_attribute_timeout(inode) || NFS_STALE(inode)) {
 		ret = __nfs_revalidate_inode(NFS_SERVER(inode), inode);
-	if (ret < 0)
-		goto out;
-
-	if (nfsi->cache_validity & NFS_INO_INVALID_DATA) {
-		if (mapping->nrpages != 0) {
-			if (S_ISREG(inode->i_mode)) {
-				ret = nfs_sync_mapping(mapping);
-				if (ret < 0)
-					goto out;
-			}
-			ret = invalidate_inode_pages2(mapping);
-			if (ret < 0)
-				goto out;
-		}
-		spin_lock(&inode->i_lock);
-		nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
-		if (S_ISDIR(inode->i_mode)) {
-			memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf));
-			/* This ensures we revalidate child dentries */
-			nfsi->cache_change_attribute = jiffies;
-		}
-		spin_unlock(&inode->i_lock);
-
-		nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE);
-		dfprintk(PAGECACHE, "NFS: (%s/%Ld) data cache invalidated\n",
-				inode->i_sb->s_id,
-				(long long)NFS_FILEID(inode));
+		if (ret < 0)
+			goto out;
 	}
+	if (nfsi->cache_validity & NFS_INO_INVALID_DATA)
+		ret = nfs_invalidate_mapping(inode, mapping);
 out:
 	return ret;
 }
···
 }

 int
-nfssvc_encode_entry(struct readdir_cd *ccd, const char *name,
-					int namlen, loff_t offset, ino_t ino, unsigned int d_type)
+nfssvc_encode_entry(void *ccdv, const char *name,
+		    int namlen, loff_t offset, u64 ino, unsigned int d_type)
 {
+	struct readdir_cd *ccd = ccdv;
 	struct nfsd_readdirres *cd = container_of(ccd, struct nfsd_readdirres, common);
 	__be32 *p = cd->buffer;
 	int buflen, slen;
+9-20
fs/nfsd/vfs.c
···
 #include <asm/uaccess.h>

 #define NFSDDBG_FACILITY		NFSDDBG_FILEOP
-#define NFSD_PARANOIA


 /* We must ignore files (but only files) which might have mandatory
···
 		rqstp->rq_res.page_len = size;
 	} else if (page != pp[-1]) {
 		get_page(page);
-		put_page(*pp);
+		if (*pp)
+			put_page(*pp);
 		*pp = page;
 		rqstp->rq_resused++;
 		rqstp->rq_res.page_len += size;
···
 	__be32		err;
 	int		host_err;
 	__u32		v_mtime=0, v_atime=0;
-	int		v_mode=0;

 	err = nfserr_perm;
 	if (!flen)
···
 		goto out;

 	if (createmode == NFS3_CREATE_EXCLUSIVE) {
-		/* while the verifier would fit in mtime+atime,
-		 * solaris7 gets confused (bugid 4218508) if these have
-		 * the high bit set, so we use the mode as well
+		/* solaris7 gets confused (bugid 4218508) if these have
+		 * the high bit set, so just clear the high bits.
 		 */
 		v_mtime = verifier[0]&0x7fffffff;
 		v_atime = verifier[1]&0x7fffffff;
-		v_mode  = S_IFREG
-			| ((verifier[0]&0x80000000) >> (32-7)) /* u+x */
-			| ((verifier[1]&0x80000000) >> (32-9)) /* u+r */
-			;
 	}

 	if (dchild->d_inode) {
···
 		case NFS3_CREATE_EXCLUSIVE:
 			if (   dchild->d_inode->i_mtime.tv_sec == v_mtime
 			    && dchild->d_inode->i_atime.tv_sec == v_atime
-			    && dchild->d_inode->i_mode  == v_mode
 			    && dchild->d_inode->i_size  == 0 )
 				break;
 			/* fallthru */
···
 	}

 	if (createmode == NFS3_CREATE_EXCLUSIVE) {
-		/* Cram the verifier into atime/mtime/mode */
+		/* Cram the verifier into atime/mtime */
 		iap->ia_valid = ATTR_MTIME|ATTR_ATIME
-			| ATTR_MTIME_SET|ATTR_ATIME_SET
-			| ATTR_MODE;
+			| ATTR_MTIME_SET|ATTR_ATIME_SET;
 		/* XXX someone who knows this better please fix it for nsec */
 		iap->ia_mtime.tv_sec = v_mtime;
 		iap->ia_atime.tv_sec = v_atime;
 		iap->ia_mtime.tv_nsec = 0;
 		iap->ia_atime.tv_nsec = 0;
-		iap->ia_mode  = v_mode;
 	}

 	/* Set file attributes.
-	 * Mode has already been set but we might need to reset it
-	 * for CREATE_EXCLUSIVE
 	 * Irix appears to send along the gid when it tries to
 	 * implement setgid directories via NFS. Clear out all that cruft.
 	 */
 set_attr:
-	if ((iap->ia_valid &= ~(ATTR_UID|ATTR_GID)) != 0) {
+	if ((iap->ia_valid &= ~(ATTR_UID|ATTR_GID|ATTR_MODE)) != 0) {
 		__be32 err2 = nfsd_setattr(rqstp, resfhp, iap, 0, (time_t)0);
 		if (err2)
 			err = err2;
···
  */
 __be32
 nfsd_readdir(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t *offsetp,
-	     struct readdir_cd *cdp, encode_dent_fn func)
+	     struct readdir_cd *cdp, filldir_t func)
 {
 	__be32		err;
 	int 		host_err;
···

 	do {
 		cdp->err = nfserr_eof; /* will be cleared on successful read */
-		host_err = vfs_readdir(file, (filldir_t) func, cdp);
+		host_err = vfs_readdir(file, func, cdp);
 	} while (host_err >=0 && cdp->err == nfs_ok);
 	if (host_err)
 		err = nfserrno(host_err);
+4
fs/ntfs/aops.c
···
 			ofs = 0;
 			if (file_ofs < init_size)
 				ofs = init_size - file_ofs;
+			local_irq_save(flags);
 			kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ);
 			memset(kaddr + bh_offset(bh) + ofs, 0,
 					bh->b_size - ofs);
 			kunmap_atomic(kaddr, KM_BIO_SRC_IRQ);
+			local_irq_restore(flags);
 			flush_dcache_page(page);
 		}
 	} else {
···
 		recs = PAGE_CACHE_SIZE / rec_size;
 		/* Should have been verified before we got here... */
 		BUG_ON(!recs);
+		local_irq_save(flags);
 		kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ);
 		for (i = 0; i < recs; i++)
 			post_read_mst_fixup((NTFS_RECORD*)(kaddr +
 					i * rec_size), rec_size);
 		kunmap_atomic(kaddr, KM_BIO_SRC_IRQ);
+		local_irq_restore(flags);
 		flush_dcache_page(page);
 		if (likely(page_uptodate && !PageError(page)))
 			SetPageUptodate(page);
···
 /* general configuration options */

 #define S3C2410_GPIO_LEAVE   (0xFFFFFFFF)
-#define S3C2410_GPIO_INPUT   (0xFFFFFFF0)
+#define S3C2410_GPIO_INPUT   (0xFFFFFFF0)	/* not available on A */
 #define S3C2410_GPIO_OUTPUT  (0xFFFFFFF1)
 #define S3C2410_GPIO_IRQ     (0xFFFFFFF2)	/* not available for all */
-#define S3C2410_GPIO_SFN2    (0xFFFFFFF2)	/* not available on A */
+#define S3C2410_GPIO_SFN2    (0xFFFFFFF2)	/* bank A => addr/cs/nand */
 #define S3C2410_GPIO_SFN3    (0xFFFFFFF3)	/* not available on A */

 /* register address for the GPIO registers.
+7-7
include/asm-arm/arch-s3c2410/regs-mem.h
···
 #define S3C2410_BANKCON_SDRAM		(0x3 << 15)

 /* next bits only for EDO DRAM in 6,7 */
-#define S3C2400_BANKCON_EDO_Trdc1	(0x00 << 4)
-#define S3C2400_BANKCON_EDO_Trdc2	(0x01 << 4)
-#define S3C2400_BANKCON_EDO_Trdc3	(0x02 << 4)
-#define S3C2400_BANKCON_EDO_Trdc4	(0x03 << 4)
+#define S3C2400_BANKCON_EDO_Trcd1	(0x00 << 4)
+#define S3C2400_BANKCON_EDO_Trcd2	(0x01 << 4)
+#define S3C2400_BANKCON_EDO_Trcd3	(0x02 << 4)
+#define S3C2400_BANKCON_EDO_Trcd4	(0x03 << 4)

 /* CAS pulse width */
 #define S3C2400_BANKCON_EDO_PULSE1	(0x00 << 3)
···
 #define S3C2400_BANKCON_EDO_SCANb11	(0x03 << 0)

 /* next bits only for SDRAM in 6,7 */
-#define S3C2410_BANKCON_Trdc2		(0x00 << 2)
-#define S3C2410_BANKCON_Trdc3		(0x01 << 2)
-#define S3C2410_BANKCON_Trdc4		(0x02 << 2)
+#define S3C2410_BANKCON_Trcd2		(0x00 << 2)
+#define S3C2410_BANKCON_Trcd3		(0x01 << 2)
+#define S3C2410_BANKCON_Trcd4		(0x02 << 2)

 /* control column address select */
 #define S3C2410_BANKCON_SCANb8		(0x00 << 0)
···
 #define TIF_USEDFPU		16	/* FPU was used by this task this quantum (SMP) */
 #define TIF_POLLING_NRFLAG	17	/* true if poll_idle() is polling TIF_NEED_RESCHED */
 #define TIF_MEMDIE		18
+#define TIF_FREEZE		19
 #define TIF_SYSCALL_TRACE	31	/* syscall trace active */

 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
···
 #define _TIF_RESTORE_SIGMASK	(1<<TIF_RESTORE_SIGMASK)
 #define _TIF_USEDFPU		(1<<TIF_USEDFPU)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
+#define _TIF_FREEZE		(1<<TIF_FREEZE)

 /* work to do on interrupt/exception return */
 #define _TIF_WORK_MASK		(0x0000ffef & ~_TIF_SECCOMP)
+6-6
include/asm-powerpc/dma-mapping.h
···
  */

 #define __dma_alloc_coherent(gfp, size, handle)	NULL
-#define __dma_free_coherent(size, addr)		do { } while (0)
-#define __dma_sync(addr, size, rw)		do { } while (0)
-#define __dma_sync_page(pg, off, sz, rw)	do { } while (0)
+#define __dma_free_coherent(size, addr)		((void)0)
+#define __dma_sync(addr, size, rw)		((void)0)
+#define __dma_sync_page(pg, off, sz, rw)	((void)0)

 #endif /* ! CONFIG_NOT_COHERENT_CACHE */
···
 }

 /* We do nothing. */
-#define dma_unmap_single(dev, addr, size, dir)	do { } while (0)
+#define dma_unmap_single(dev, addr, size, dir)	((void)0)

 static inline dma_addr_t
 dma_map_page(struct device *dev, struct page *page,
···
 }

 /* We do nothing. */
-#define dma_unmap_page(dev, handle, size, dir)	do { } while (0)
+#define dma_unmap_page(dev, handle, size, dir)	((void)0)

 static inline int
 dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
···
 }

 /* We don't do anything here. */
-#define dma_unmap_sg(dev, sg, nents, dir)	do { } while (0)
+#define dma_unmap_sg(dev, sg, nents, dir)	((void)0)

 #endif /* CONFIG_PPC64 */

···

 #include <asm-generic/pgtable-nopud.h>

+#ifdef CONFIG_HIGHMEM
+/* Clear a kernel PTE and flush it from the TLB */
+#define kpte_clear_flush(ptep, vaddr)		\
+do {						\
+	pte_clear(&init_mm, vaddr, ptep);	\
+	__flush_tlb_one(vaddr);			\
+} while (0)
+#endif
+
 #endif
 #endif
···
 	case 1: __put_user_asm(x,ptr,retval,"b","b","iq",-EFAULT); break;\
 	case 2: __put_user_asm(x,ptr,retval,"w","w","ir",-EFAULT); break;\
 	case 4: __put_user_asm(x,ptr,retval,"l","k","ir",-EFAULT); break;\
-	case 8: __put_user_asm(x,ptr,retval,"q","","ir",-EFAULT); break;\
+	case 8: __put_user_asm(x,ptr,retval,"q","","Zr",-EFAULT); break;\
 	default: __put_user_bad(); \
 	} \
 } while (0)
···
 extern void efi_initialize_iomem_resources(struct resource *code_resource,
 					struct resource *data_resource);
 extern unsigned long efi_get_time(void);
-extern int __init efi_set_rtc_mmss(unsigned long nowtime);
+extern int efi_set_rtc_mmss(unsigned long nowtime);
 extern int is_available_memory(efi_memory_desc_t * md);
 extern struct efi_memory_map memmap;

···
 					    * Register FIS clearing BSY */
 	ATA_FLAG_DEBUGMSG	= (1 << 13),
 	ATA_FLAG_SETXFER_POLLING= (1 << 14), /* use polling for SETXFER */
+	ATA_FLAG_IGN_SIMPLEX	= (1 << 15), /* ignore SIMPLEX */

 	/* The following flag belongs to ap->pflags but is kept in
 	 * ap->flags because it's referenced in many LLDs and will be
···
 	void (*dev_select)(struct ata_port *ap, unsigned int device);

 	void (*phy_reset) (struct ata_port *ap); /* obsolete */
-	void (*set_mode) (struct ata_port *ap);
+	int  (*set_mode) (struct ata_port *ap, struct ata_device **r_failed_dev);

 	void (*post_set_mode) (struct ata_port *ap);

-	int (*check_atapi_dma) (struct ata_queued_cmd *qc);
+	int  (*check_atapi_dma) (struct ata_queued_cmd *qc);

 	void (*bmdma_setup) (struct ata_queued_cmd *qc);
 	void (*bmdma_start) (struct ata_queued_cmd *qc);
···
 /**
  *	ata_busy_wait - Wait for a port status register
  *	@ap: Port to wait for.
+ *	@bits: bits that must be clear
+ *	@max: number of 10uS waits to perform
  *
  *	Waits up to max*10 microseconds for the selected bits in the port's
  *	status register to be cleared.
···
 	qc->cursect = qc->cursg = qc->cursg_ofs = 0;
 	qc->nsect = 0;
 	qc->nbytes = qc->curbytes = 0;
+	qc->n_elem = 0;
 	qc->err_mask = 0;
+	qc->pad_len = 0;

 	ata_tf_init(qc->dev, &qc->tf);

+5-5
include/linux/list.h
···
 	INIT_LIST_HEAD(old);
 }

-/*
+/**
  * list_replace_rcu - replace old entry by new one
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The old entry will be replaced with the new entry atomically.
- * Note: 'old' should not be empty.
+ * The @old entry will be replaced with the @new entry atomically.
+ * Note: @old should not be empty.
  */
 static inline void list_replace_rcu(struct list_head *old,
 				struct list_head *new)
···
 	}
 }

-/*
+/**
  * hlist_replace_rcu - replace old entry by new one
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The old entry will be replaced with the new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically.
  */
 static inline void hlist_replace_rcu(struct hlist_node *old,
 					struct hlist_node *new)
+1
include/linux/mm.h
···
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
+#define VM_ALWAYSDUMP	0x04000000	/* Always include in core dumps */

 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
-146
include/linux/mtio.h
···

 #include <linux/types.h>
 #include <linux/ioctl.h>
-#include <linux/qic117.h>

 /*
  * Structures and definitions for mag tape io control commands
···
 #define MT_ISFTAPE_UNKNOWN	0x800000 /* obsolete */
 #define MT_ISFTAPE_FLAG	0x800000

-struct mt_tape_info {
-	long t_type;		/* device type id (mt_type) */
-	char *t_name;		/* descriptive name */
-};
-
-#define MT_TAPE_INFO	{ \
-	{MT_ISUNKNOWN,		"Unknown type of tape device"}, \
-	{MT_ISQIC02,		"Generic QIC-02 tape streamer"}, \
-	{MT_ISWT5150,		"Wangtek 5150, QIC-150"}, \
-	{MT_ISARCHIVE_5945L2,	"Archive 5945L-2"}, \
-	{MT_ISCMSJ500,		"CMS Jumbo 500"}, \
-	{MT_ISTDC3610,		"Tandberg TDC 3610, QIC-24"}, \
-	{MT_ISARCHIVE_VP60I,	"Archive VP60i, QIC-02"}, \
-	{MT_ISARCHIVE_2150L,	"Archive Viper 2150L"}, \
-	{MT_ISARCHIVE_2060L,	"Archive Viper 2060L"}, \
-	{MT_ISARCHIVESC499,	"Archive SC-499 QIC-36 controller"}, \
-	{MT_ISQIC02_ALL_FEATURES, "Generic QIC-02 tape, all features"}, \
-	{MT_ISWT5099EEN24,	"Wangtek 5099-een24, 60MB"}, \
-	{MT_ISTEAC_MT2ST,	"Teac MT-2ST 155mb data cassette drive"}, \
-	{MT_ISEVEREX_FT40A,	"Everex FT40A, QIC-40"}, \
-	{MT_ISONSTREAM_SC,	"OnStream SC-, DI-, DP-, or USB tape drive"}, \
-	{MT_ISSCSI1,		"Generic SCSI-1 tape"}, \
-	{MT_ISSCSI2,		"Generic SCSI-2 tape"}, \
-	{0, NULL} \
-}
-

 /* structure for MTIOCPOS - mag tape get position command */
···
 };


-/* structure for MTIOCVOLINFO, query information about the volume
- * currently positioned at (zftape)
- */
-struct mtvolinfo {
-	unsigned int mt_volno;   /* vol-number */
-	unsigned int mt_blksz;   /* blocksize used when recording */
-	unsigned int mt_rawsize; /* raw tape space consumed, in kb */
-	unsigned int mt_size;    /* volume size after decompression, in kb */
-	unsigned int mt_cmpr:1;  /* this volume has been compressed */
-};
-
-/* raw access to a floppy drive, read and write an arbitrary segment.
- * For ftape/zftape to support formatting etc.
- */
-#define MT_FT_RD_SINGLE	0
-#define MT_FT_RD_AHEAD	1
-#define MT_FT_WR_ASYNC	0 /* start tape only when all buffers are full */
-#define MT_FT_WR_MULTI	1 /* start tape, continue until buffers are empty */
-#define MT_FT_WR_SINGLE	2 /* write a single segment and stop afterwards */
-#define MT_FT_WR_DELETE	3 /* write deleted data marks, one segment at time */
-
-struct mtftseg
-{
-	unsigned mt_segno;	/* the segment to read or write */
-	unsigned mt_mode;	/* modes for read/write (sync/async etc.) */
-	int mt_result;		/* result of r/w request, not of the ioctl */
-	void __user *mt_data;	/* User space buffer: must be 29kb */
-};
-
-/* get tape capacity (ftape/zftape)
- */
-struct mttapesize {
-	unsigned long mt_capacity;	/* entire, uncompressed capacity
-					 * of a cartridge
-					 */
-	unsigned long mt_used;		/* what has been used so far, raw
-					 * uncompressed amount
-					 */
-};
-
-/* possible values of the ftfmt_op field
- */
-#define FTFMT_SET_PARMS		1	/* set software parms */
-#define FTFMT_GET_PARMS		2	/* get software parms */
-#define FTFMT_FORMAT_TRACK	3	/* start formatting a tape track */
-#define FTFMT_STATUS		4	/* monitor formatting a tape track */
-#define FTFMT_VERIFY		5	/* verify the given segment */
-
-struct ftfmtparms {
-	unsigned char ft_qicstd;	/* QIC-40/QIC-80/QIC-3010/QIC-3020 */
-	unsigned char ft_fmtcode;	/* Refer to the QIC specs */
-	unsigned char ft_fhm;		/* floppy head max */
-	unsigned char ft_ftm;		/* floppy track max */
-	unsigned short ft_spt;		/* segments per track */
-	unsigned short ft_tpc;		/* tracks per cartridge */
-};
-
-struct ftfmttrack {
-	unsigned int ft_track;		/* track to format */
-	unsigned char ft_gap3;		/* size of gap3, for FORMAT_TRK */
-};
-
-struct ftfmtstatus {
-	unsigned int ft_segment;	/* segment currently being formatted */
-};
-
-struct ftfmtverify {
-	unsigned int ft_segment;	/* segment to verify */
-	unsigned long ft_bsm;		/* bsm as result of VERIFY cmd */
-};
-
-struct mtftformat {
-	unsigned int fmt_op;		/* operation to perform */
-	union fmt_arg {
-		struct ftfmtparms fmt_parms;	/* format parameters */
-		struct ftfmttrack fmt_track;	/* ctrl while formatting */
-		struct ftfmtstatus fmt_status;
-		struct ftfmtverify fmt_verify;	/* for verifying */
-	} fmt_arg;
-};
-
-struct mtftcmd {
-	unsigned int ft_wait_before;	/* timeout to wait for drive to get ready
-					 * before command is sent. Milliseconds
-					 */
-	qic117_cmd_t ft_cmd;		/* command to send */
-	unsigned char ft_parm_cnt;	/* zero: no parm is sent. */
-	unsigned char ft_parms[3];	/* parameter(s) to send to
-					 * the drive. The parms are nibbles
-					 * driver sends cmd + 2 step pulses */
-	unsigned int ft_result_bits;	/* if non zero, number of bits
-					 * returned by the tape drive
-					 */
-	unsigned int ft_result;		/* the result returned by the tape drive*/
-	unsigned int ft_wait_after;	/* timeout to wait for drive to get ready
-					 * after command is sent. 0: don't wait */
-	int ft_status;			/* status returned by ready wait
-					 * undefined if timeout was 0.
-					 */
-	int ft_error;			/* error code if error status was set by
-					 * command
-					 */
-};
-
 /* mag tape io control commands */
 #define	MTIOCTOP	_IOW('m', 1, struct mtop)	/* do a mag tape op */
 #define	MTIOCGET	_IOR('m', 2, struct mtget)	/* get tape status */
 #define	MTIOCPOS	_IOR('m', 3, struct mtpos)	/* get tape position */

-/* The next two are used by the QIC-02 driver for runtime reconfiguration.
- * See tpqic02.h for struct mtconfiginfo.
- */
-#define	MTIOCGETCONFIG	_IOR('m', 4, struct mtconfiginfo) /* get tape config */
-#define	MTIOCSETCONFIG	_IOW('m', 5, struct mtconfiginfo) /* set tape config */
-
-/* the next six are used by the floppy ftape drivers and its frontends
- * sorry, but MTIOCTOP commands are write only.
- */
-#define	MTIOCRDFTSEG	_IOWR('m', 6, struct mtftseg)	/* read a segment */
-#define	MTIOCWRFTSEG	_IOWR('m', 7, struct mtftseg)	/* write a segment */
-#define	MTIOCVOLINFO	_IOR('m', 8, struct mtvolinfo)	/* info about volume */
-#define	MTIOCGETSIZE	_IOR('m', 9, struct mttapesize)	/* get cartridge size*/
-#define	MTIOCFTFORMAT	_IOWR('m', 10, struct mtftformat) /* format ftape */
-#define	MTIOCFTCMD	_IOWR('m', 11, struct mtftcmd)	/* send QIC-117 cmd */

 /* Generic Mag Tape (device independent) status macros for examining
  * mt_gstat -- HP-UX compatible
+1-1
include/linux/mutex.h
···
 extern void __mutex_init(struct mutex *lock, const char *name,
 			 struct lock_class_key *key);

-/***
+/**
  * mutex_is_locked - is the mutex locked
  * @lock: the mutex to be queried
  *
···
 #include <linux/plist.h>
 #include <linux/spinlock_types.h>

-/*
+/**
  * The rt_mutex structure
  *
  * @wait_lock:	spinlock to protect the structure
···
 #define DEFINE_RT_MUTEX(mutexname) \
 	struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)

-/***
+/**
  * rt_mutex_is_locked - is the mutex locked
  * @lock: the mutex to be queried
  *
···
 *
 * Each request/reply pair can have at most one "payload", plus two pages,
 * one for the request, and one for the reply.
+ * When using ->sendfile to return read data, we might need one extra page
+ * if the request is not page-aligned.  So add another '1'.
 */
-#define RPCSVC_MAXPAGES		((RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE + 2)
+#define RPCSVC_MAXPAGES		((RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE \
+				+ 2 + 1)

 static inline u32 svc_getnl(struct kvec *iov)
 {
+2-2
include/linux/timer.h
···
 	init_timer(timer);
 }

-/***
+/**
  * timer_pending - is a timer pending?
  * @timer: the timer in question
  *
···

 extern unsigned long next_timer_interrupt(void);

-/***
+/**
  * add_timer - start a timer
  * @timer: the timer to be added
  *
+1-1
include/net/inet6_connection_sock.h
···

 extern void inet6_csk_addr2sockaddr(struct sock *sk, struct sockaddr *uaddr);

-extern int inet6_csk_xmit(struct sk_buff *skb, struct sock *sk, int ipfragok);
+extern int inet6_csk_xmit(struct sk_buff *skb, int ipfragok);
 #endif /* _INET6_CONNECTION_SOCK_H */
+1-2
include/net/inet_connection_sock.h
···
  * (i.e. things that depend on the address family)
  */
 struct inet_connection_sock_af_ops {
-	int	    (*queue_xmit)(struct sk_buff *skb, struct sock *sk,
-				  int ipfragok);
+	int	    (*queue_xmit)(struct sk_buff *skb, int ipfragok);
 	void	    (*send_check)(struct sock *sk, int len,
 				  struct sk_buff *skb);
 	int	    (*rebuild_header)(struct sock *sk);
+1-1
include/net/ip.h
···
 extern int		ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *));
 extern int		ip_do_nat(struct sk_buff *skb);
 extern void		ip_send_check(struct iphdr *ip);
-extern int		ip_queue_xmit(struct sk_buff *skb, struct sock *sk, int ipfragok);
+extern int		ip_queue_xmit(struct sk_buff *skb, int ipfragok);
 extern void		ip_init(void);
 extern int		ip_append_data(struct sock *sk,
 				   int getfrag(void *from, char *to, int offset, int len,
···
 int blocking_notifier_call_chain(struct blocking_notifier_head *nh,
 		unsigned long val, void *v)
 {
-	int ret;
+	int ret = NOTIFY_DONE;

-	down_read(&nh->rwsem);
-	ret = notifier_call_chain(&nh->head, val, v);
-	up_read(&nh->rwsem);
+	/*
+	 * We check the head outside the lock, but if this access is
+	 * racy then it does not matter what the result of the test
+	 * is, we re-check the list after having taken the lock anyway:
+	 */
+	if (rcu_dereference(nh->head)) {
+		down_read(&nh->rwsem);
+		ret = notifier_call_chain(&nh->head, val, v);
+		up_read(&nh->rwsem);
+	}
 	return ret;
 }

+2-2
mm/filemap_xip.c
···183183 address = vma->vm_start +184184 ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);185185 BUG_ON(address < vma->vm_start || address >= vma->vm_end);186186- page = ZERO_PAGE(address);186186+ page = ZERO_PAGE(0);187187 pte = page_check_address(page, mm, address, &ptl);188188 if (pte) {189189 /* Nuke the page table entry. */···246246 __xip_unmap(mapping, pgoff);247247 } else {248248 /* not shared and writable, use ZERO_PAGE() */249249- page = ZERO_PAGE(address);249249+ page = ZERO_PAGE(0);250250 }251251252252out:
+9-2
mm/memory.c
···26062606 gate_vma.vm_mm = NULL;26072607 gate_vma.vm_start = FIXADDR_USER_START;26082608 gate_vma.vm_end = FIXADDR_USER_END;26092609- gate_vma.vm_page_prot = PAGE_READONLY;26102610- gate_vma.vm_flags = 0;26092609+ gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;26102610+ gate_vma.vm_page_prot = __P101;26112611+ /*26122612+ * Make sure the vDSO gets into every core dump.26132613+ * Dumping its contents makes post-mortem fully interpretable later26142614+ * without matching up the same kernel and hardware config to see26152615+ * what PC values meant.26162616+ */26172617+ gate_vma.vm_flags |= VM_ALWAYSDUMP;26112618 return 0;26122619}26132620__initcall(gate_vma_init);
+4
mm/mempolicy.c
···884884 err = get_nodes(&nodes, nmask, maxnode);885885 if (err)886886 return err;887887+#ifdef CONFIG_CPUSETS888888+ /* Restrict the nodes to the allowed nodes in the cpuset */889889+ nodes_and(nodes, nodes, current->mems_allowed);890890+#endif887891 return do_mbind(start, len, mode, &nodes, flags);888892}889893
+7
mm/mmap.c
···14771477{14781478 struct mm_struct *mm = vma->vm_mm;14791479 struct rlimit *rlim = current->signal->rlim;14801480+ unsigned long new_start;1480148114811482 /* address space limit tests */14821483 if (!may_expand_vm(mm, grow))···14961495 if (locked > limit && !capable(CAP_IPC_LOCK))14971496 return -ENOMEM;14981497 }14981498+14991499+ /* Check to ensure the stack will not grow into a hugetlb-only region */15001500+ new_start = (vma->vm_flags & VM_GROWSUP) ? vma->vm_start :15011501+ vma->vm_end - size;15021502+ if (is_hugepage_only_range(vma->vm_mm, new_start, size))15031503+ return -EFAULT;1499150415001505 /*15011506 * Overcommit.. This must be the final test, as it will
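The new check computes where the VMA would begin after growing, which depends on the growth direction, and then tests that range against hugepage-only regions. A tiny sketch of just the address computation (the flag value and names are illustrative, not the kernel's):

```c
/* Illustrative: where does the mapping start after growing by `size`?
 * Growing up extends past vm_end, so the start stays put; growing
 * down moves the start to vm_end - size. */
#define VM_GROWSUP 0x1 /* placeholder value, not the kernel constant */

static unsigned long stack_new_start(unsigned long vm_start,
				     unsigned long vm_end,
				     unsigned long vm_flags,
				     unsigned long size)
{
	return (vm_flags & VM_GROWSUP) ? vm_start : vm_end - size;
}
```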
-1
mm/mremap.c
···105105 if (pte_none(*old_pte))106106 continue;107107 pte = ptep_clear_flush(vma, old_addr, old_pte);108108- /* ZERO_PAGE can be dependant on virtual addr */109108 pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);110109 set_pte_at(mm, new_addr, new_pte, pte);111110 }
+18-23
mm/page-writeback.c
···133133134134#ifdef CONFIG_HIGHMEM135135 /*136136- * If this mapping can only allocate from low memory,137137- * we exclude high memory from our count.136136+ * We always exclude high memory from our count.138137 */139139- if (mapping && !(mapping_gfp_mask(mapping) & __GFP_HIGHMEM))140140- available_memory -= totalhigh_pages;138138+ available_memory -= totalhigh_pages;141139#endif142140143141···524526};525527526528/*527527- * If the machine has a large highmem:lowmem ratio then scale back the default528528- * dirty memory thresholds: allowing too much dirty highmem pins an excessive529529- * number of buffer_heads.529529+ * Called early on to tune the page writeback dirty limits.530530+ *531531+ * We used to scale dirty pages according to how total memory532532+ * related to pages that could be allocated for buffers (by533533+ * comparing nr_free_buffer_pages() to vm_total_pages).534534+ *535535+ * However, that was when we used "dirty_ratio" to scale with536536+ * all memory, and we don't do that any more. "dirty_ratio"537537+ * is now applied to total non-HIGHPAGE memory (by subtracting538538+ * totalhigh_pages from vm_total_pages), and as such we can't539539+ * get into the old insane situation any more where we had540540+ * large amounts of dirty pages compared to a small amount of541541+ * non-HIGHMEM memory.542542+ *543543+ * But we might still want to scale the dirty_ratio by how544544+ * much memory the box has..530545 */531546void __init page_writeback_init(void)532547{533533- long buffer_pages = nr_free_buffer_pages();534534- long correction;535535-536536- correction = (100 * 4 * buffer_pages) / vm_total_pages;537537-538538- if (correction < 100) {539539- dirty_background_ratio *= correction;540540- dirty_background_ratio /= 100;541541- vm_dirty_ratio *= correction;542542- vm_dirty_ratio /= 100;543543-544544- if (dirty_background_ratio <= 0)545545- dirty_background_ratio = 1;546546- if (vm_dirty_ratio <= 0)547547- vm_dirty_ratio = 1;548548- }549548 mod_timer(&wb_timer, jiffies + dirty_writeback_interval);550549 writeback_set_ratelimit();551550 register_cpu_notifier(&ratelimit_nb);
+1-2
mm/page_alloc.c
···989989 int classzone_idx, int alloc_flags)990990{991991 /* free_pages my go negative - that's OK */992992- unsigned long min = mark;993993- long free_pages = z->free_pages - (1 << order) + 1;992992+ long min = mark, free_pages = z->free_pages - (1 << order) + 1;994993 int o;995994996995 if (alloc_flags & ALLOC_HIGH)
+14-8
mm/truncate.c
···5151 do_invalidatepage(page, partial);5252}53535454+/*5555+ * This cancels just the dirty bit on the kernel page itself, it5656+ * does NOT actually remove dirty bits on any mmap's that may be5757+ * around. It also leaves the page tagged dirty, so any sync5858+ * activity will still find it on the dirty lists, and in particular,5959+ * clear_page_dirty_for_io() will still look at the dirty bits in6060+ * the VM.6161+ *6262+ * Doing this should *normally* only ever be done when a page6363+ * is truncated, and is not actually mapped anywhere at all. However,6464+ * fs/buffer.c does this when it notices that somebody has cleaned6565+ * out all the buffers on a page without actually doing it through6666+ * the VM. Can you say "ext3 is horribly ugly"? Thought you could.6767+ */5468void cancel_dirty_page(struct page *page, unsigned int account_size)5569{5656- /* If we're cancelling the page, it had better not be mapped any more */5757- if (page_mapped(page)) {5858- static unsigned int warncount;5959-6060- WARN_ON(++warncount < 5);6161- }6262-6370 if (TestClearPageDirty(page)) {6471 struct address_space *mapping = page->mapping;6572 if (mapping && mapping_cap_account_dirty(mapping)) {···429422 pagevec_release(&pvec);430423 cond_resched();431424 }432432- WARN_ON_ONCE(ret);433425 return ret;434426}435427EXPORT_SYMBOL_GPL(invalidate_inode_pages2_range);
···283283{284284 int s = *shift;285285286286- for (; dptr <= limit && *dptr != '@'; dptr++)286286+ /* Search for @, but stop at the end of the line.287287+ * We are inside a sip: URI, so we don't need to worry about288288+ * continuation lines. */289289+ while (dptr <= limit &&290290+ *dptr != '@' && *dptr != '\r' && *dptr != '\n') {287291 (*shift)++;292292+ dptr++;293293+ }288294289289- if (*dptr == '@') {295295+ if (dptr <= limit && *dptr == '@') {290296 dptr++;291297 (*shift)++;292298 } else
···10111011 for (j = 0; j < i; j++){10121012 if (after(ntohl(sp[j].start_seq),10131013 ntohl(sp[j+1].start_seq))){10141014- sp[j].start_seq = htonl(tp->recv_sack_cache[j+1].start_seq);10151015- sp[j].end_seq = htonl(tp->recv_sack_cache[j+1].end_seq);10161016- sp[j+1].start_seq = htonl(tp->recv_sack_cache[j].start_seq);10171017- sp[j+1].end_seq = htonl(tp->recv_sack_cache[j].end_seq);10141014+ struct tcp_sack_block_wire tmp;10151015+10161016+ tmp = sp[j];10171017+ sp[j] = sp[j+1];10181018+ sp[j+1] = tmp;10181019 }1019102010201021 }···44214420 * But, this leaves one open to an easy denial of44224421 * service attack, and SYN cookies can't defend44234422 * against this problem. So, we drop the data44244424- * in the interest of security over speed.44234423+ * in the interest of security over speed unless44244424+ * it's still in use.44254425 */44264426- goto discard;44264426+ kfree_skb(skb);44274427+ return 0;44274428 }44284429 goto discard;44294430
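The first hunk above replaces four field-by-field copies, which converted byte order on the wrong side and pulled values from the wrong array, with a plain whole-struct swap. A standalone sketch of the corrected sort step (struct and function names are illustrative; the kernel compares with after() on ntohl()-converted sequence numbers):

```c
#include <stdint.h>

/* Illustrative wire-format SACK block; in the kernel the fields are in
 * network byte order, here plain host-order integers keep it simple. */
struct sack_block_wire {
	uint32_t start_seq;
	uint32_t end_seq;
};

static void sort_sack_blocks(struct sack_block_wire *sp, int n)
{
	for (int i = n - 1; i > 0; i--)
		for (int j = 0; j < i; j++)
			if (sp[j].start_seq > sp[j + 1].start_seq) {
				/* One struct assignment swaps both fields
				 * without any byte-order juggling. */
				struct sack_block_wire tmp = sp[j];
				sp[j] = sp[j + 1];
				sp[j + 1] = tmp;
			}
}
```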
···165165166166config NF_CONNTRACK_H323167167 tristate "H.323 protocol support (EXPERIMENTAL)"168168- depends on EXPERIMENTAL && NF_CONNTRACK168168+ depends on EXPERIMENTAL && NF_CONNTRACK && (IPV6 || IPV6=n)169169 help170170 H.323 is a VoIP signalling protocol from ITU-T. As one of the most171171 important VoIP protocols, it is widely used by voice hardware and···628628629629config NETFILTER_XT_MATCH_HASHLIMIT630630 tristate '"hashlimit" match support'631631- depends on NETFILTER_XTABLES631631+ depends on NETFILTER_XTABLES && (IP6_NF_IPTABLES || IP6_NF_IPTABLES=n)632632 help633633 This option adds a `hashlimit' match.634634
···303303{304304 int s = *shift;305305306306- for (; dptr <= limit && *dptr != '@'; dptr++)306306+ /* Search for @, but stop at the end of the line.307307+ * We are inside a sip: URI, so we don't need to worry about308308+ * continuation lines. */309309+ while (dptr <= limit &&310310+ *dptr != '@' && *dptr != '\r' && *dptr != '\n') {307311 (*shift)++;312312+ dptr++;313313+ }308314309309- if (*dptr == '@') {315315+ if (dptr <= limit && *dptr == '@') {310316 dptr++;311317 (*shift)++;312318 } else
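The loop above bounds the '@' search to the current line so the parser cannot walk past a sip: URI that lacks one. A hedged userspace sketch of the same scan (the signature is illustrative; the kernel version updates *shift in place rather than returning it):

```c
/* Scan [dptr, limit] (limit inclusive, as in the kernel code) for '@',
 * stopping at CR/LF so we never run past the end of the request line.
 * Returns the number of characters consumed including the '@', or -1
 * if the line ended first. */
static int skip_to_at(const char *dptr, const char *limit)
{
	int shift = 0;

	while (dptr <= limit &&
	       *dptr != '@' && *dptr != '\r' && *dptr != '\n') {
		shift++;
		dptr++;
	}
	if (dptr <= limit && *dptr == '@')
		return shift + 1;	/* step over the '@' too */
	return -1;
}
```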
+12-17
net/netfilter/xt_connbytes.c
···5252{5353 const struct xt_connbytes_info *sinfo = matchinfo;5454 u_int64_t what = 0; /* initialize to make gcc happy */5555+ u_int64_t bytes = 0;5656+ u_int64_t pkts = 0;5557 const struct ip_conntrack_counter *counters;56585759 if (!(counters = nf_ct_get_counters(skb)))···9189 case XT_CONNBYTES_AVGPKT:9290 switch (sinfo->direction) {9391 case XT_CONNBYTES_DIR_ORIGINAL:9494- what = div64_64(counters[IP_CT_DIR_ORIGINAL].bytes,9595- counters[IP_CT_DIR_ORIGINAL].packets);9292+ bytes = counters[IP_CT_DIR_ORIGINAL].bytes;9393+ pkts = counters[IP_CT_DIR_ORIGINAL].packets;9694 break;9795 case XT_CONNBYTES_DIR_REPLY:9898- what = div64_64(counters[IP_CT_DIR_REPLY].bytes,9999- counters[IP_CT_DIR_REPLY].packets);9696+ bytes = counters[IP_CT_DIR_REPLY].bytes;9797+ pkts = counters[IP_CT_DIR_REPLY].packets;10098 break;10199 case XT_CONNBYTES_DIR_BOTH:102102- {103103- u_int64_t bytes;104104- u_int64_t pkts;105105- bytes = counters[IP_CT_DIR_ORIGINAL].bytes +106106- counters[IP_CT_DIR_REPLY].bytes;107107- pkts = counters[IP_CT_DIR_ORIGINAL].packets+108108- counters[IP_CT_DIR_REPLY].packets;109109-110110- /* FIXME_THEORETICAL: what to do if sum111111- * overflows ? */112112-113113- what = div64_64(bytes, pkts);114114- }100100+ bytes = counters[IP_CT_DIR_ORIGINAL].bytes +101101+ counters[IP_CT_DIR_REPLY].bytes;102102+ pkts = counters[IP_CT_DIR_ORIGINAL].packets +103103+ counters[IP_CT_DIR_REPLY].packets;115104 break;116105 }106106+ if (pkts != 0)107107+ what = div64_64(bytes, pkts);117108 break;118109 }119110
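The rewrite accumulates bytes and packets per direction and divides exactly once at the end, with a zero-packet guard replacing the old FIXME about overflow in the BOTH case. A minimal sketch of the resulting AVGPKT computation (plain 64-bit division stands in for the kernel's div64_64(); the signature is illustrative):

```c
#include <stdint.h>

/* Average packet size across one or both conntrack directions:
 * sum first, divide once, and never divide by a zero packet count. */
static uint64_t avg_pkt_size(uint64_t orig_bytes, uint64_t orig_pkts,
			     uint64_t repl_bytes, uint64_t repl_pkts)
{
	uint64_t bytes = orig_bytes + repl_bytes;
	uint64_t pkts  = orig_pkts + repl_pkts;
	uint64_t what  = 0;	/* mirrors "initialize to make gcc happy" */

	if (pkts != 0)
		what = bytes / pkts;	/* div64_64() in the kernel */
	return what;
}
```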
+23-23
net/packet/af_packet.c
···359359 if (dev == NULL)360360 goto out_unlock;361361362362+ err = -ENETDOWN;363363+ if (!(dev->flags & IFF_UP))364364+ goto out_unlock;365365+362366 /*363367 * You may not queue a frame bigger than the mtu. This is the lowest level364368 * raw protocol and you must do your own fragmentation at this level.···411407 if (err)412408 goto out_free;413409414414- err = -ENETDOWN;415415- if (!(dev->flags & IFF_UP))416416- goto out_free;417417-418410 /*419411 * Now send it420412 */···428428}429429#endif430430431431-static inline int run_filter(struct sk_buff *skb, struct sock *sk,432432- unsigned *snaplen)431431+static inline unsigned int run_filter(struct sk_buff *skb, struct sock *sk,432432+ unsigned int res)433433{434434 struct sk_filter *filter;435435- int err = 0;436435437436 rcu_read_lock_bh();438437 filter = rcu_dereference(sk->sk_filter);439439- if (filter != NULL) {440440- err = sk_run_filter(skb, filter->insns, filter->len);441441- if (!err)442442- err = -EPERM;443443- else if (*snaplen > err)444444- *snaplen = err;445445- }438438+ if (filter != NULL)439439+ res = sk_run_filter(skb, filter->insns, filter->len);446440 rcu_read_unlock_bh();447441448448- return err;442442+ return res;449443}450444451445/*···461467 struct packet_sock *po;462468 u8 * skb_head = skb->data;463469 int skb_len = skb->len;464464- unsigned snaplen;470470+ unsigned int snaplen, res;465471466472 if (skb->pkt_type == PACKET_LOOPBACK)467473 goto drop;···489495490496 snaplen = skb->len;491497492492- if (run_filter(skb, sk, &snaplen) < 0)498498+ res = run_filter(skb, sk, snaplen);499499+ if (!res)493500 goto drop_n_restore;501501+ if (snaplen > res)502502+ snaplen = res;494503495504 if (atomic_read(&sk->sk_rmem_alloc) + skb->truesize >=496505 (unsigned)sk->sk_rcvbuf)···565568 struct tpacket_hdr *h;566569 u8 * skb_head = skb->data;567570 int skb_len = skb->len;568568- unsigned snaplen;571571+ unsigned int snaplen, res;569572 unsigned long status = TP_STATUS_LOSING|TP_STATUS_USER;570573 unsigned short macoff, netoff;571574 struct sk_buff *copy_skb = NULL;···589592590593 snaplen = skb->len;591594592592- if (run_filter(skb, sk, &snaplen) < 0)595595+ res = run_filter(skb, sk, snaplen);596596+ if (!res)593597 goto drop_n_restore;598598+ if (snaplen > res)599599+ snaplen = res;594600595601 if (sk->sk_type == SOCK_DGRAM) {596602 macoff = netoff = TPACKET_ALIGN(TPACKET_HDRLEN) + 16;···738738 if (sock->type == SOCK_RAW)739739 reserve = dev->hard_header_len;740740741741+ err = -ENETDOWN;742742+ if (!(dev->flags & IFF_UP))743743+ goto out_unlock;744744+741745 err = -EMSGSIZE;742746 if (len > dev->mtu+reserve)743747 goto out_unlock;···773769 skb->protocol = proto;774770 skb->dev = dev;775771 skb->priority = sk->sk_priority;776776-777777- err = -ENETDOWN;778778- if (!(dev->flags & IFF_UP))779779- goto out_free;780772781773 /*782774 * Now send it
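Under the new contract, run_filter() simply returns how many bytes the socket filter will accept (0 meaning drop), and each caller clamps snaplen itself instead of the filter mutating it through a pointer while overloading the return value as an error code. A sketch of the caller-side logic (names are illustrative):

```c
/* What packet_rcv()/tpacket_rcv() now do with the filter verdict:
 * filter_res == 0 drops the packet, otherwise it caps the snapshot. */
enum { DROP = -1 };

static int receive_len(unsigned int skb_len, unsigned int filter_res)
{
	unsigned int snaplen = skb_len;

	if (!filter_res)
		return DROP;		/* filter rejected the packet */
	if (snaplen > filter_res)
		snaplen = filter_res;	/* truncate to what the filter allows */
	return (int)snaplen;
}
```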
+5-3
net/sched/act_ipt.c
···5555 struct ipt_target *target;5656 int ret = 0;57575858- target = xt_find_target(AF_INET, t->u.user.name, t->u.user.revision);5858+ target = xt_request_find_target(AF_INET, t->u.user.name,5959+ t->u.user.revision);5960 if (!target)6061 return -ENOENT;6162···64636564 ret = xt_check_target(target, AF_INET, t->u.target_size - sizeof(*t),6665 table, hook, 0, 0);6767- if (ret)6666+ if (ret) {6767+ module_put(t->u.kernel.target->me);6868 return ret;6969-6969+ }7070 if (t->u.kernel.target->checkentry7171 && !t->u.kernel.target->checkentry(table, NULL,7272 t->u.kernel.target, t->data,
···217217218218 asoc->peer.sack_needed = 0;219219220220- error = sctp_outq_tail(&asoc->outqueue, sack);220220+ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(sack));221221222222 /* Stop the SACK timer. */223223 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,···621621 /* The receiver of the HEARTBEAT ACK should also perform an622622 * RTT measurement for that destination transport address623623 * using the time value carried in the HEARTBEAT ACK chunk.624624+ * If the transport's rto_pending variable has been cleared,625625+ * it was most likely due to a retransmit. However, we want626626+ * to re-enable it to properly update the rto.624627 */628628+ if (t->rto_pending == 0)629629+ t->rto_pending = 1;630630+625631 hbinfo = (sctp_sender_hb_info_t *) chunk->skb->data;626632 sctp_transport_update_rto(t, (jiffies - hbinfo->sent_at));627633
+22-22
net/sctp/sm_statefuns.c
···440440{441441 struct sctp_chunk *chunk = arg;442442 sctp_init_chunk_t *initchunk;443443- __u32 init_tag;444443 struct sctp_chunk *err_chunk;445444 struct sctp_packet *packet;446445 sctp_error_t error;···460461461462 /* Grab the INIT header. */462463 chunk->subh.init_hdr = (sctp_inithdr_t *) chunk->skb->data;463463-464464- init_tag = ntohl(chunk->subh.init_hdr->init_tag);465465-466466- /* Verification Tag: 3.3.3467467- * If the value of the Initiate Tag in a received INIT ACK468468- * chunk is found to be 0, the receiver MUST treat it as an469469- * error and close the association by transmitting an ABORT.470470- */471471- if (!init_tag) {472472- struct sctp_chunk *reply = sctp_make_abort(asoc, chunk, 0);473473- if (!reply)474474- goto nomem;475475-476476- sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(reply));477477- return sctp_stop_t1_and_abort(commands, SCTP_ERROR_INV_PARAM,478478- ECONNREFUSED, asoc,479479- chunk->transport);480480- }481464482465 /* Verify the INIT chunk before processing it. */483466 err_chunk = NULL;···531550 SCTP_CHUNK(err_chunk));532551533552 return SCTP_DISPOSITION_CONSUME;534534-535535-nomem:536536- return SCTP_DISPOSITION_NOMEM;537553}538554539555/*···15311553}153215541533155515561556+/*15571557+ * Unexpected INIT-ACK handler.15581558+ *15591559+ * Section 5.2.315601560+ * If an INIT ACK received by an endpoint in any state other than the15611561+ * COOKIE-WAIT state, the endpoint should discard the INIT ACK chunk.15621562+ * An unexpected INIT ACK usually indicates the processing of an old or15631563+ * duplicated INIT chunk.15641564+*/15651565+sctp_disposition_t sctp_sf_do_5_2_3_initack(const struct sctp_endpoint *ep,15661566+ const struct sctp_association *asoc,15671567+ const sctp_subtype_t type,15681568+ void *arg, sctp_cmd_seq_t *commands)15691569+{15701570+ /* Per the above section, we'll discard the chunk if we have an15711571+ * endpoint. If this is an OOTB INIT-ACK, treat it as such.15721572+ */15731573+ if (ep == sctp_sk((sctp_get_ctl_sock()))->ep)15741574+ return sctp_sf_ootb(ep, asoc, type, arg, commands);15751575+ else15761576+ return sctp_sf_discard_chunk(ep, asoc, type, arg, commands);15771577+}1534157815351579/* Unexpected COOKIE-ECHO handler for peer restart (Table 2, action 'A')15361580 *
···490490491491 /* Set up the call info struct and execute the task */492492 status = task->tk_status;493493- if (status != 0) {494494- rpc_release_task(task);493493+ if (status != 0)495494 goto out;496496- }497495 atomic_inc(&task->tk_count);498496 status = rpc_execute(task);499497 if (status == 0)500498 status = task->tk_status;501501- rpc_put_task(task);502499out:500500+ rpc_put_task(task);503501 rpc_restore_sigmask(&oldset);504502 return status;505503}···535537 if (status == 0)536538 rpc_execute(task);537539 else538538- rpc_release_task(task);540540+ rpc_put_task(task);539541540542 rpc_restore_sigmask(&oldset); 541543 return status;
+2-1
net/sunrpc/sched.c
···4242static void __rpc_default_timer(struct rpc_task *task);4343static void rpciod_killall(void);4444static void rpc_async_schedule(struct work_struct *);4545+static void rpc_release_task(struct rpc_task *task);45464647/*4748 * RPC tasks sit here while waiting for conditions to improve.···897896}898897EXPORT_SYMBOL(rpc_put_task);899898900900-void rpc_release_task(struct rpc_task *task)899899+static void rpc_release_task(struct rpc_task *task)901900{902901#ifdef RPC_DEBUG903902 BUG_ON(task->tk_magic != RPC_TASK_MAGIC_ID);
+16-16
net/sunrpc/svc.c
···2626#include <linux/sunrpc/clnt.h>27272828#define RPCDBG_FACILITY RPCDBG_SVCDSP2929-#define RPC_PARANOIA 130293130/*3231 * Mode for mapping cpus to pools.···871872 return 0;872873873874err_short_len:874874-#ifdef RPC_PARANOIA875875- printk("svc: short len %Zd, dropping request\n", argv->iov_len);876876-#endif875875+ if (net_ratelimit())876876+ printk("svc: short len %Zd, dropping request\n", argv->iov_len);877877+877878 goto dropit; /* drop request */878879879880err_bad_dir:880880-#ifdef RPC_PARANOIA881881- printk("svc: bad direction %d, dropping request\n", dir);882882-#endif881881+ if (net_ratelimit())882882+ printk("svc: bad direction %d, dropping request\n", dir);883883+883884 serv->sv_stats->rpcbadfmt++;884885 goto dropit; /* drop request */885886···908909 goto sendit;909910910911err_bad_vers:911911-#ifdef RPC_PARANOIA912912- printk("svc: unknown version (%d)\n", vers);913913-#endif912912+ if (net_ratelimit())913913+ printk("svc: unknown version (%d for prog %d, %s)\n",914914+ vers, prog, progp->pg_name);915915+914916 serv->sv_stats->rpcbadfmt++;915917 svc_putnl(resv, RPC_PROG_MISMATCH);916918 svc_putnl(resv, progp->pg_lovers);···919919 goto sendit;920920921921err_bad_proc:922922-#ifdef RPC_PARANOIA923923- printk("svc: unknown procedure (%d)\n", proc);924924-#endif922922+ if (net_ratelimit())923923+ printk("svc: unknown procedure (%d)\n", proc);924924+925925 serv->sv_stats->rpcbadfmt++;926926 svc_putnl(resv, RPC_PROC_UNAVAIL);927927 goto sendit;928928929929err_garbage:930930-#ifdef RPC_PARANOIA931931- printk("svc: failed to decode args\n");932932-#endif930930+ if (net_ratelimit())931931+ printk("svc: failed to decode args\n");932932+933933 rpc_stat = rpc_garbage_args;934934err_bad:935935 serv->sv_stats->rpcbadfmt++;
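These hunks swap the compile-time RPC_PARANOIA printks for runtime net_ratelimit() gating, so malformed requests cannot flood the log. A toy stand-in for that kind of limiter, driven by an explicit tick counter so it stays testable (the kernel's net_ratelimit() uses jiffies and shared state; this is only a sketch of the idea):

```c
/* Allow at most `burst` messages per `interval` ticks. */
struct ratelimit {
	unsigned long interval;		/* ticks per window */
	unsigned int burst;		/* messages allowed per window */
	unsigned long window_start;
	unsigned int printed;
};

static int ratelimit_ok(struct ratelimit *rl, unsigned long now)
{
	if (now - rl->window_start >= rl->interval) {
		rl->window_start = now;	/* open a fresh window */
		rl->printed = 0;
	}
	if (rl->printed >= rl->burst)
		return 0;		/* suppress this message */
	rl->printed++;
	return 1;			/* ok to log */
}
```

Callers wrap each printk-equivalent in `if (ratelimit_ok(...))`, exactly as the patched code wraps its printks in `if (net_ratelimit())`.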
+10-4
net/sunrpc/svcsock.c
···10621062 * bit set in the fragment length header.10631063 * But apparently no known nfs clients send fragmented10641064 * records. */10651065- printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx (non-terminal)\n",10661066- (unsigned long) svsk->sk_reclen);10651065+ if (net_ratelimit())10661066+ printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx"10671067+ " (non-terminal)\n",10681068+ (unsigned long) svsk->sk_reclen);10671069 goto err_delete;10681070 }10691071 svsk->sk_reclen &= 0x7fffffff;10701072 dprintk("svc: TCP record, %d bytes\n", svsk->sk_reclen);10711073 if (svsk->sk_reclen > serv->sv_max_mesg) {10721072- printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx (large)\n",10731073- (unsigned long) svsk->sk_reclen);10741074+ if (net_ratelimit())10751075+ printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx"10761076+ " (large)\n",10771077+ (unsigned long) svsk->sk_reclen);10741078 goto err_delete;10751079 }10761080 }···12821278 schedule_timeout_uninterruptible(msecs_to_jiffies(500));12831279 rqstp->rq_pages[i] = p;12841280 }12811281+ rqstp->rq_pages[i++] = NULL; /* this might be seen in nfs_read_actor */12821282+ BUG_ON(pages >= RPCSVC_MAXPAGES);1285128312861284 /* Make arg->head point to first page and arg->pages point to rest */12871285 arg = &rqstp->rq_arg;