Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for_paulus' of master.kernel.org:/pub/scm/linux/kernel/git/galak/powerpc into for-2.6.21

+3176 -2573
+1 -1
CREDITS
··· 3279 3279 S: Spain 3280 3280 3281 3281 N: Linus Torvalds 3282 - E: torvalds@osdl.org 3282 + E: torvalds@linux-foundation.org 3283 3283 D: Original kernel hacker 3284 3284 S: 12725 SW Millikan Way, Suite 400 3285 3285 S: Beaverton, Oregon 97005
+4
Documentation/SubmitChecklist
··· 72 72 73 73 If the new code is substantial, addition of subsystem-specific fault 74 74 injection might be appropriate. 75 + 76 + 22: Newly-added code has been compiled with `gcc -W'. This will generate 77 + lots of noise, but is good for finding bugs like "warning: comparison 78 + between signed and unsigned".
+3 -3
Documentation/SubmittingPatches
··· 134 134 135 135 136 136 Linus Torvalds is the final arbiter of all changes accepted into the 137 - Linux kernel. His e-mail address is <torvalds@osdl.org>. He gets 138 - a lot of e-mail, so typically you should do your best to -avoid- sending 139 - him e-mail. 137 + Linux kernel. His e-mail address is <torvalds@linux-foundation.org>. 138 + He gets a lot of e-mail, so typically you should do your best to -avoid- 139 + sending him e-mail. 140 140 141 141 Patches which are bug fixes, are "obvious" changes, or similarly 142 142 require little discussion should be sent or CC'd to Linus. Patches
+7
Documentation/feature-removal-schedule.txt
··· 318 318 Who: Len Brown <len.brown@intel.com> 319 319 320 320 --------------------------- 321 + 322 + What: JFFS (version 1) 323 + When: 2.6.21 324 + Why: Unmaintained for years, superseded by JFFS2 for years. 325 + Who: Jeff Garzik <jeff@garzik.org> 326 + 327 + ---------------------------
+17 -3
Documentation/filesystems/9p.txt
··· 73 73 RESOURCES 74 74 ========= 75 75 76 - The Linux version of the 9p server is now maintained under the npfs project 77 - on sourceforge (http://sourceforge.net/projects/npfs). 76 + Our current recommendation is to use Inferno (http://www.vitanuova.com/inferno) 77 + as the 9p server. You can start a 9p server under Inferno by issuing the 78 + following command: 79 + ; styxlisten -A tcp!*!564 export '#U*' 80 + 81 + The -A specifies an unauthenticated export. The 564 is the port # (you may 82 + have to choose a higher port number if running as a normal user). The '#U*' 83 + specifies exporting the root of the Linux name space. You may specify a 84 + subset of the namespace by extending the path: '#U*'/tmp would just export 85 + /tmp. For more information, see the Inferno manual pages covering styxlisten 86 + and export. 87 + 88 + A Linux version of the 9p server is now maintained under the npfs project 89 + on sourceforge (http://sourceforge.net/projects/npfs). There is also a 90 + more stable single-threaded version of the server (named spfs) available from 91 + the same CVS repository. 78 92 79 93 There are user and developer mailing lists available through the v9fs project 80 94 on sourceforge (http://sourceforge.net/projects/v9fs). ··· 110 96 111 97 The 2.6 kernel support is working on PPC and x86. 112 98 113 - PLEASE USE THE SOURCEFORGE BUG-TRACKER TO REPORT PROBLEMS. 99 + PLEASE USE THE KERNEL BUGZILLA TO REPORT PROBLEMS. (http://bugzilla.kernel.org) 114 100
+2 -1
Documentation/i386/boot.txt
··· 2 2 ---------------------------- 3 3 4 4 H. Peter Anvin <hpa@zytor.com> 5 - Last update 2006-11-17 5 + Last update 2007-01-26 6 6 7 7 On the i386 platform, the Linux kernel uses a rather complicated boot 8 8 convention. This has evolved partially due to historical aspects, as ··· 186 186 7 GRuB 187 187 8 U-BOOT 188 188 9 Xen 189 + A Gujin 189 190 190 191 Please contact <hpa@zytor.com> if you need a bootloader ID 191 192 value assigned.
+38 -11
Documentation/kdump/kdump.txt
··· 17 17 memory image to a dump file on the local disk, or across the network to 18 18 a remote system. 19 19 20 - Kdump and kexec are currently supported on the x86, x86_64, ppc64 and IA64 20 + Kdump and kexec are currently supported on the x86, x86_64, ppc64 and ia64 21 21 architectures. 22 22 23 23 When the system kernel boots, it reserves a small section of memory for ··· 61 61 62 62 2) Download the kexec-tools user-space package from the following URL: 63 63 64 - http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing-20061214.tar.gz 64 + http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing.tar.gz 65 + 66 + This is a symlink to the latest version, which at the time of writing is 67 + 20061214, the only release of kexec-tools-testing so far. As other versions 68 + are released, the older ones will remain available at 69 + http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/ 65 70 66 71 Note: Latest kexec-tools-testing git tree is available at 67 72 ··· 76 71 77 72 3) Unpack the tarball with the tar command, as follows: 78 73 79 - tar xvpzf kexec-tools-testing-20061214.tar.gz 74 + tar xvpzf kexec-tools-testing.tar.gz 80 75 81 - 4) Change to the kexec-tools-1.101 directory, as follows: 76 + 4) Change to the kexec-tools directory, as follows: 82 77 83 - cd kexec-tools-testing-20061214 78 + cd kexec-tools-testing-VERSION 84 79 85 80 5) Configure the package, as follows: 86 81 ··· 229 224 230 225 Dump-capture kernel config options (Arch Dependent, ia64) 231 226 ---------------------------------------------------------- 232 - (To be filled) 227 + 228 + - No specific options are required to create a dump-capture kernel 229 + for ia64, other than those specified in the arch independent section 230 + above. This means that it is possible to use the system kernel 231 + as a dump-capture kernel if desired. 
232 + 233 + The crashkernel region can be automatically placed by the system 234 + kernel at run time. This is done by specifying the base address as 0, 235 + or omitting it altogether. 236 + 237 + crashkernel=256M@0 238 + or 239 + crashkernel=256M 240 + 241 + If the start address is specified, note that the start address of the 242 + kernel will be aligned to 64Mb, so if the start address is not aligned 243 + then any space below the alignment point will be wasted. 233 244 234 245 235 246 Boot into System Kernel ··· 263 242 On x86 and x86_64, use "crashkernel=64M@16M". 264 243 265 244 On ppc64, use "crashkernel=128M@32M". 245 + 246 + On ia64, 256M@256M is a generous value that typically works. 247 + The region may be automatically placed on ia64, see the 248 + dump-capture kernel config option notes above. 266 249 267 250 Load the Dump-capture Kernel 268 251 ============================ ··· 286 261 For ppc64: 287 262 - Use vmlinux 288 263 For ia64: 289 - (To be filled) 264 + - Use vmlinux or vmlinuz.gz 265 + 290 266 291 267 If you are using an uncompressed vmlinux image then use the following command 292 268 to load the dump-capture kernel. ··· 303 277 --initrd=<initrd-for-dump-capture-kernel> \ 304 278 --append="root=<root-dev> <arch-specific-options>" 305 279 280 + Please note that --args-linux does not need to be specified for ia64. 281 + It is planned to make this a no-op on that architecture, but for now 282 + it should be omitted. 283 + 306 284 Following are the arch specific command line options to be used while 307 285 loading dump-capture kernel. 308 286 309 - For i386 and x86_64: 287 + For i386, x86_64 and ia64: 310 288 "init 1 irqpoll maxcpus=1" 311 289 312 290 For ppc64: 313 291 "init 1 maxcpus=1 noirqdistrib" 314 - 315 - For IA64 316 - (To be filled) 317 292 318 293 319 294 Notes on loading the dump-capture kernel:
+35 -31
Documentation/sysrq.txt
··· 1 1 Linux Magic System Request Key Hacks 2 - Documentation for sysrq.c version 1.15 3 - Last update: $Date: 2001/01/28 10:15:59 $ 2 + Documentation for sysrq.c 3 + Last update: 2007-JAN-06 4 4 5 5 * What is the magic SysRq key? 6 6 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ··· 35 35 36 36 Note that the value of /proc/sys/kernel/sysrq influences only the invocation 37 37 via a keyboard. Invocation of any operation via /proc/sysrq-trigger is always 38 - allowed. 38 + allowed (by a user with admin privileges). 39 39 40 40 * How do I use the magic SysRq key? 41 41 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ··· 58 58 On other - If you know of the key combos for other architectures, please 59 59 let me know so I can add them to this section. 60 60 61 - On all - write a character to /proc/sysrq-trigger. eg: 61 + On all - write a character to /proc/sysrq-trigger. e.g.: 62 62 63 63 echo t > /proc/sysrq-trigger 64 64 ··· 74 74 75 75 'c' - Will perform a kexec reboot in order to take a crashdump. 76 76 77 + 'd' - Shows all locks that are held. 78 + 77 79 'o' - Will shut your system off (if configured and supported). 78 80 79 81 's' - Will attempt to sync all mounted filesystems. ··· 89 87 90 88 'm' - Will dump current memory info to your console. 91 89 90 + 'n' - Used to make RT tasks nice-able 91 + 92 92 'v' - Dumps Voyager SMP processor info to your console. 93 + 94 + 'w' - Dumps tasks that are in uninterruptible (blocked) state. 95 + 96 + 'x' - Used by xmon interface on ppc/powerpc platforms. 93 97 94 98 '0'-'9' - Sets the console log level, controlling which kernel messages 95 99 will be printed to your console. ('0', for example would make 96 100 it so that only emergency messages like PANICs or OOPSes would 97 101 make it to your console.) 98 102 99 - 'f' - Will call oom_kill to kill a memory hog process 103 + 'f' - Will call oom_kill to kill a memory hog process. 100 104 101 105 'e' - Send a SIGTERM to all processes, except for init. 
102 106 107 + 'g' - Used by kgdb on ppc platforms. 108 + 103 109 'i' - Send a SIGKILL to all processes, except for init. 104 110 105 - 'l' - Send a SIGKILL to all processes, INCLUDING init. (Your system 106 - will be non-functional after this.) 107 - 108 - 'h' - Will display help ( actually any other key than those listed 111 + 'h' - Will display help (actually any other key than those listed 109 112 above will display help. but 'h' is easy to remember :-) 110 113 111 114 * Okay, so what can I use them for? 112 115 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 113 116 Well, un'R'aw is very handy when your X server or a svgalib program crashes. 114 117 115 - sa'K' (Secure Access Key) is useful when you want to be sure there are no 116 - trojan program is running at console and which could grab your password 117 - when you would try to login. It will kill all programs on given console 118 - and thus letting you make sure that the login prompt you see is actually 118 + sa'K' (Secure Access Key) is useful when you want to be sure there is no 119 + trojan program running at console which could grab your password 120 + when you would try to login. It will kill all programs on given console, 121 + thus letting you make sure that the login prompt you see is actually 119 122 the one from init, not some trojan program. 120 123 IMPORTANT: In its true form it is not a true SAK like the one in a :IMPORTANT 121 124 IMPORTANT: c2 compliant system, and it should not be mistaken as :IMPORTANT 122 125 IMPORTANT: such. :IMPORTANT 123 - It seems other find it useful as (System Attention Key) which is 126 + It seems others find it useful as (System Attention Key) which is 124 127 useful when you want to exit a program that will not let you switch consoles. 125 128 (For example, X or a svgalib program.) 126 129 ··· 146 139 Again, the unmount (remount read-only) hasn't taken place until you see the 147 140 "OK" and "Done" message appear on the screen. 
148 141 149 - The loglevel'0'-'9' is useful when your console is being flooded with 150 - kernel messages you do not want to see. Setting '0' will prevent all but 142 + The loglevels '0'-'9' are useful when your console is being flooded with 143 + kernel messages you do not want to see. Selecting '0' will prevent all but 151 144 the most urgent kernel messages from reaching your console. (They will 152 145 still be logged if syslogd/klogd are alive, though.) 153 146 ··· 159 152 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 160 153 That happens to me, also. I've found that tapping shift, alt, and control 161 154 on both sides of the keyboard, and hitting an invalid sysrq sequence again 162 - will fix the problem. (ie, something like alt-sysrq-z). Switching to another 155 + will fix the problem. (i.e., something like alt-sysrq-z). Switching to another 163 156 virtual console (ALT+Fn) and then back again should also help. 164 157 165 158 * I hit SysRq, but nothing seems to happen, what's wrong? ··· 181 174 prints help, and C) an action_msg string, that will print right before your 182 175 handler is called. Your handler must conform to the prototype in 'sysrq.h'. 183 176 184 - After the sysrq_key_op is created, you can call the macro 185 - register_sysrq_key(int key, struct sysrq_key_op *op_p) that is defined in 186 - sysrq.h, this will register the operation pointed to by 'op_p' at table 187 - key 'key', if that slot in the table is blank. At module unload time, you must 188 - call the macro unregister_sysrq_key(int key, struct sysrq_key_op *op_p), which 177 + After the sysrq_key_op is created, you can call the kernel function 178 + register_sysrq_key(int key, struct sysrq_key_op *op_p); this will 179 + register the operation pointed to by 'op_p' at table key 'key', 180 + if that slot in the table is blank. 
At module unload time, you must call 181 + the function unregister_sysrq_key(int key, struct sysrq_key_op *op_p), which 189 182 will remove the key op pointed to by 'op_p' from the key 'key', if and only if 190 183 it is currently registered in that slot. This is in case the slot has been 191 184 overwritten since you registered it. ··· 193 186 The Magic SysRQ system works by registering key operations against a key op 194 187 lookup table, which is defined in 'drivers/char/sysrq.c'. This key table has 195 188 a number of operations registered into it at compile time, but is mutable, 196 - and 4 functions are exported for interface to it: __sysrq_lock_table, 197 - __sysrq_unlock_table, __sysrq_get_key_op, and __sysrq_put_key_op. The 198 - functions __sysrq_swap_key_ops and __sysrq_swap_key_ops_nolock are defined 199 - in the header itself, and the REGISTER and UNREGISTER macros are built from 200 - these. More complex (and dangerous!) manipulations of the table are possible 201 - using these functions, but you must be careful to always lock the table before 202 - you read or write from it, and to unlock it again when you are done. (And of 203 - course, to never ever leave an invalid pointer in the table). Null pointers in 204 - the table are always safe :) 189 + and 2 functions are exported for interface to it: 190 + register_sysrq_key and unregister_sysrq_key. 191 + Of course, never ever leave an invalid pointer in the table. I.e., when 192 + your module that called register_sysrq_key() exits, it must call 193 + unregister_sysrq_key() to clean up the sysrq key table entry that it used. 194 + Null pointers in the table are always safe. :) 205 195 206 196 If for some reason you feel the need to call the handle_sysrq function from 207 197 within a function called by handle_sysrq, you must be aware that you are in
+1 -1
Documentation/usb/CREDITS
··· 21 21 Bill Ryder <bryder@sgi.com> 22 22 Thomas Sailer <sailer@ife.ee.ethz.ch> 23 23 Gregory P. Smith <greg@electricrain.com> 24 - Linus Torvalds <torvalds@osdl.org> 24 + Linus Torvalds <torvalds@linux-foundation.org> 25 25 Roman Weissgaerber <weissg@vienna.at> 26 26 <Kazuki.Yasumatsu@fujixerox.co.jp> 27 27
+26 -22
MAINTAINERS
··· 598 598 S: Maintained 599 599 600 600 ATMEL MACB ETHERNET DRIVER 601 - P: Atmel AVR32 Support Team 602 - M: avr32@atmel.com 603 601 P: Haavard Skinnemoen 604 602 M: hskinnemoen@atmel.com 605 603 S: Supported ··· 618 620 S: Maintained 619 621 620 622 AVR32 ARCHITECTURE 621 - P: Atmel AVR32 Support Team 622 - M: avr32@atmel.com 623 623 P: Haavard Skinnemoen 624 624 M: hskinnemoen@atmel.com 625 625 W: http://www.atmel.com/products/AVR32/ ··· 626 630 S: Supported 627 631 628 632 AVR32/AT32AP MACHINE SUPPORT 629 - P: Atmel AVR32 Support Team 630 - M: avr32@atmel.com 631 633 P: Haavard Skinnemoen 632 634 M: hskinnemoen@atmel.com 633 635 S: Supported ··· 1131 1137 S: Maintained 1132 1138 1133 1139 DSCC4 DRIVER 1134 - P: François Romieu 1135 - M: romieu@cogenit.fr 1136 - M: romieu@ensta.fr 1140 + P: Francois Romieu 1141 + M: romieu@fr.zoreil.com 1142 + L: netdev@vger.kernel.org 1137 1143 S: Maintained 1138 1144 1139 1145 DVB SUBSYSTEM AND DRIVERS ··· 1248 1254 1249 1255 ETHERNET BRIDGE 1250 1256 P: Stephen Hemminger 1251 - M: shemminger@osdl.org 1257 + M: shemminger@linux-foundation.org 1252 1258 L: bridge@osdl.org 1253 1259 W: http://bridge.sourceforge.net/ 1254 1260 S: Maintained ··· 1592 1598 W: http://www.developer.ibm.com/welcome/netfinity/serveraid.html 1593 1599 S: Supported 1594 1600 1595 - IDE DRIVER [GENERAL] 1601 + IDE SUBSYSTEM 1596 1602 P: Bartlomiej Zolnierkiewicz 1597 - M: B.Zolnierkiewicz@elka.pw.edu.pl 1598 - L: linux-kernel@vger.kernel.org 1603 + M: bzolnier@gmail.com 1599 1604 L: linux-ide@vger.kernel.org 1600 - T: git kernel.org:/pub/scm/linux/kernel/git/bart/ide-2.6.git 1605 + T: quilt kernel.org/pub/linux/kernel/people/bart/pata-2.6/ 1601 1606 S: Maintained 1602 1607 1603 1608 IDE/ATAPI CDROM DRIVER ··· 1921 1928 1922 1929 KERNEL NFSD 1923 1930 P: Neil Brown 1924 - M: neilb@cse.unsw.edu.au 1931 + M: neilb@suse.de 1925 1932 L: nfs@lists.sourceforge.net 1926 1933 W: http://nfs.sourceforge.net/ 1927 - W: 
http://www.cse.unsw.edu.au/~neilb/patches/linux-devel/ 1928 - S: Maintained 1934 + S: Supported 1929 1935 1930 1936 KERNEL VIRTUAL MACHINE (KVM) 1931 1937 P: Avi Kivity ··· 2269 2277 2270 2278 NETEM NETWORK EMULATOR 2271 2279 P: Stephen Hemminger 2272 - M: shemminger@osdl.org 2280 + M: shemminger@linux-foundation.org 2273 2281 L: netem@osdl.org 2274 2282 S: Maintained 2275 2283 ··· 2282 2290 P: Patrick McHardy 2283 2291 M: kaber@trash.net 2284 2292 L: netfilter-devel@lists.netfilter.org 2285 - L: netfilter@lists.netfilter.org 2293 + L: netfilter@lists.netfilter.org (subscribers-only) 2286 2294 L: coreteam@netfilter.org 2287 2295 W: http://www.netfilter.org/ 2288 2296 W: http://www.iptables.org/ ··· 2985 2993 P: Ingo Molnar 2986 2994 M: mingo@redhat.com 2987 2995 P: Neil Brown 2988 - M: neilb@cse.unsw.edu.au 2996 + M: neilb@suse.de 2989 2997 L: linux-raid@vger.kernel.org 2990 - S: Maintained 2998 + S: Supported 2991 2999 2992 3000 SOFTWARE SUSPEND: 2993 3001 P: Pavel Machek ··· 3073 3081 3074 3082 SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS 3075 3083 P: Stephen Hemminger 3076 - M: shemminger@osdl.org 3084 + M: shemminger@linux-foundation.org 3077 3085 L: netdev@vger.kernel.org 3078 3086 S: Maintained 3079 3087 ··· 3567 3575 L: i2c@lm-sensors.org 3568 3576 S: Maintained 3569 3577 3578 + VIA VELOCITY NETWORK DRIVER 3579 + P: Francois Romieu 3580 + M: romieu@fr.zoreil.com 3581 + L: netdev@vger.kernel.org 3582 + S: Maintained 3583 + 3570 3584 UCLINUX (AND M68KNOMMU) 3571 3585 P: Greg Ungerer 3572 3586 M: gerg@uclinux.org ··· 3592 3594 M: ysato@users.sourceforge.jp 3593 3595 W: http://uclinux-h8.sourceforge.jp/ 3594 3596 S: Supported 3597 + 3598 + UFS FILESYSTEM 3599 + P: Evgeniy Dushistov 3600 + M: dushistov@mail.ru 3601 + L: linux-kernel@vger.kernel.org 3602 + S: Maintained 3595 3603 3596 3604 USB DIAMOND RIO500 DRIVER 3597 3605 P: Cesar Miquel
+4 -4
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 20 4 - EXTRAVERSION =-rc5 4 + EXTRAVERSION = 5 5 NAME = Homicidal Dwarf Hamster 6 6 7 7 # *DOCUMENTATION* ··· 1116 1116 @echo ' cscope - Generate cscope index' 1117 1117 @echo ' kernelrelease - Output the release version string' 1118 1118 @echo ' kernelversion - Output the version stored in Makefile' 1119 - @if [ -r include/asm-$(ARCH)/Kbuild ]; then \ 1119 + @if [ -r $(srctree)/include/asm-$(ARCH)/Kbuild ]; then \ 1120 1120 echo ' headers_install - Install sanitised kernel headers to INSTALL_HDR_PATH'; \ 1121 + echo ' (default: $(INSTALL_HDR_PATH))'; \ 1121 1122 fi 1122 - @echo ' (default: $(INSTALL_HDR_PATH))' 1123 1123 @echo '' 1124 1124 @echo 'Static analysers' 1125 1125 @echo ' checkstack - Generate a list of stack hogs' 1126 1126 @echo ' namespacecheck - Name space analysis on compiled kernel' 1127 - @if [ -r include/asm-$(ARCH)/Kbuild ]; then \ 1127 + @if [ -r $(srctree)/include/asm-$(ARCH)/Kbuild ]; then \ 1128 1128 echo ' headers_check - Sanity check on exported headers'; \ 1129 1129 fi 1130 1130 @echo ''
+2 -2
README
··· 278 278 the file MAINTAINERS to see if there is a particular person associated 279 279 with the part of the kernel that you are having trouble with. If there 280 280 isn't anyone listed there, then the second best thing is to mail 281 - them to me (torvalds@osdl.org), and possibly to any other relevant 282 - mailing-list or to the newsgroup. 281 + them to me (torvalds@linux-foundation.org), and possibly to any other 282 + relevant mailing-list or to the newsgroup. 283 283 284 284 - In all bug-reports, *please* tell what kernel you are talking about, 285 285 how to duplicate the problem, and what your setup is (use your common
+1
arch/alpha/kernel/process.c
··· 47 47 * Power off function, if any 48 48 */ 49 49 void (*pm_power_off)(void) = machine_power_off; 50 + EXPORT_SYMBOL(pm_power_off); 50 51 51 52 void 52 53 cpu_idle(void)
-1
arch/arm/configs/at91sam9260ek_defconfig
··· 923 923 # CONFIG_HEADERS_CHECK is not set 924 924 # CONFIG_RCU_TORTURE_TEST is not set 925 925 CONFIG_DEBUG_USER=y 926 - # CONFIG_DEBUG_WAITQ is not set 927 926 # CONFIG_DEBUG_ERRORS is not set 928 927 CONFIG_DEBUG_LL=y 929 928 # CONFIG_DEBUG_ICEDCC is not set
-1
arch/arm/configs/at91sam9261ek_defconfig
··· 1079 1079 # CONFIG_HEADERS_CHECK is not set 1080 1080 # CONFIG_RCU_TORTURE_TEST is not set 1081 1081 CONFIG_DEBUG_USER=y 1082 - # CONFIG_DEBUG_WAITQ is not set 1083 1082 # CONFIG_DEBUG_ERRORS is not set 1084 1083 CONFIG_DEBUG_LL=y 1085 1084 # CONFIG_DEBUG_ICEDCC is not set
+6 -1
arch/arm/kernel/head.S
··· 22 22 #include <asm/thread_info.h> 23 23 #include <asm/system.h> 24 24 25 + #if (PHYS_OFFSET & 0x001fffff) 26 + #error "PHYS_OFFSET must be at an even 2MiB boundary!" 27 + #endif 28 + 25 29 #define KERNEL_RAM_VADDR (PAGE_OFFSET + TEXT_OFFSET) 26 30 #define KERNEL_RAM_PADDR (PHYS_OFFSET + TEXT_OFFSET) 27 31 ··· 255 251 * Then map first 1MB of ram in case it contains our boot params. 256 252 */ 257 253 add r0, r4, #PAGE_OFFSET >> 18 258 - orr r6, r7, #PHYS_OFFSET 254 + orr r6, r7, #(PHYS_OFFSET & 0xff000000) 255 + orr r6, r6, #(PHYS_OFFSET & 0x00e00000) 259 256 str r6, [r0] 260 257 261 258 #ifdef CONFIG_XIP_KERNEL
+1 -1
arch/arm/mach-at91rm9200/at91rm9200_devices.c
··· 272 272 at91_set_A_periph(AT91_PIN_PC12, 0); /* NCS6/CFCE2 */ 273 273 274 274 /* nWAIT is _not_ a default setting */ 275 - at91_set_A_periph(AT91_PIN_PC6, 1); /* nWAIT */ 275 + at91_set_A_periph(AT91_PIN_PC6, 1); /* nWAIT */ 276 276 277 277 cf_data = *data; 278 278 platform_device_register(&at91rm9200_cf_device);
+2 -1
arch/arm/mach-at91rm9200/at91sam9260.c
··· 16 16 #include <asm/mach/map.h> 17 17 #include <asm/arch/at91sam9260.h> 18 18 #include <asm/arch/at91_pmc.h> 19 + #include <asm/arch/at91_rstc.h> 19 20 20 21 #include "generic.h" 21 22 #include "clock.h" ··· 213 212 214 213 static void at91sam9260_reset(void) 215 214 { 216 - #warning "Implement CPU reset" 215 + at91_sys_write(AT91_RSTC_CR, (0xA5 << 24) | AT91_RSTC_PROCRST | AT91_RSTC_PERRST); 217 216 } 218 217 219 218
+2 -1
arch/arm/mach-at91rm9200/at91sam9261.c
··· 16 16 #include <asm/mach/map.h> 17 17 #include <asm/arch/at91sam9261.h> 18 18 #include <asm/arch/at91_pmc.h> 19 + #include <asm/arch/at91_rstc.h> 19 20 20 21 #include "generic.h" 21 22 #include "clock.h" ··· 208 207 209 208 static void at91sam9261_reset(void) 210 209 { 211 - #warning "Implement CPU reset" 210 + at91_sys_write(AT91_RSTC_CR, (0xA5 << 24) | AT91_RSTC_PROCRST | AT91_RSTC_PERRST); 212 211 } 213 212 214 213
+13 -26
arch/arm/mach-at91rm9200/gpio.c
··· 20 20 #include <asm/io.h> 21 21 #include <asm/hardware.h> 22 22 #include <asm/arch/at91_pio.h> 23 - #include <asm/arch/at91_pmc.h> 24 23 #include <asm/arch/gpio.h> 25 24 26 25 #include "generic.h" ··· 223 224 static int gpio_irq_set_wake(unsigned pin, unsigned state) 224 225 { 225 226 unsigned mask = pin_to_mask(pin); 227 + unsigned bank = (pin - PIN_BASE) / 32; 226 228 227 - pin -= PIN_BASE; 228 - pin /= 32; 229 - 230 - if (unlikely(pin >= MAX_GPIO_BANKS)) 229 + if (unlikely(bank >= MAX_GPIO_BANKS)) 231 230 return -EINVAL; 232 231 233 232 if (state) 234 - wakeups[pin] |= mask; 233 + wakeups[bank] |= mask; 235 234 else 236 - wakeups[pin] &= ~mask; 235 + wakeups[bank] &= ~mask; 236 + 237 + set_irq_wake(gpio[bank].id, state); 237 238 238 239 return 0; 239 240 } ··· 245 246 for (i = 0; i < gpio_banks; i++) { 246 247 u32 pio = gpio[i].offset; 247 248 248 - /* 249 - * Note: drivers should have disabled GPIO interrupts that 250 - * aren't supposed to be wakeup sources. 251 - * But that is not much good on ARM..... disable_irq() does 252 - * not update the hardware immediately, so the hardware mask 253 - * (IMR) has the wrong value (not current, too much is 254 - * permitted). 255 - * 256 - * Our workaround is to disable all non-wakeup IRQs ... 257 - * which is exactly what correct drivers asked for in the 258 - * first place! 
259 - */ 260 249 backups[i] = at91_sys_read(pio + PIO_IMR); 261 250 at91_sys_write(pio + PIO_IDR, backups[i]); 262 251 at91_sys_write(pio + PIO_IER, wakeups[i]); 263 252 264 - if (!wakeups[i]) { 265 - disable_irq_wake(gpio[i].id); 266 - at91_sys_write(AT91_PMC_PCDR, 1 << gpio[i].id); 267 - } else { 268 - enable_irq_wake(gpio[i].id); 253 + if (!wakeups[i]) 254 + clk_disable(gpio[i].clock); 255 + else { 269 256 #ifdef CONFIG_PM_DEBUG 270 - printk(KERN_DEBUG "GPIO-%c may wake for %08x\n", "ABCD"[i], wakeups[i]); 257 + printk(KERN_DEBUG "GPIO-%c may wake for %08x\n", 'A'+i, wakeups[i]); 271 258 #endif 272 259 } 273 260 } ··· 266 281 for (i = 0; i < gpio_banks; i++) { 267 282 u32 pio = gpio[i].offset; 268 283 284 + if (!wakeups[i]) 285 + clk_enable(gpio[i].clock); 286 + 269 287 at91_sys_write(pio + PIO_IDR, wakeups[i]); 270 288 at91_sys_write(pio + PIO_IER, backups[i]); 271 - at91_sys_write(AT91_PMC_PCER, 1 << gpio[i].id); 272 289 } 273 290 } 274 291
+13 -1
arch/arm/mach-imx/cpufreq.c
··· 184 184 long sysclk; 185 185 unsigned int bclk_div = 1; 186 186 187 + /* 188 + * Some governors do not respects CPU and policy lower limits 189 + * which leads to bad things (division by zero etc), ensure 190 + * that such things do not happen. 191 + */ 192 + if(target_freq < policy->cpuinfo.min_freq) 193 + target_freq = policy->cpuinfo.min_freq; 194 + 195 + if(target_freq < policy->min) 196 + target_freq = policy->min; 197 + 187 198 freq = target_freq * 1000; 188 199 189 200 pr_debug(KERN_DEBUG "imx: requested frequency %ld Hz, mpctl0 at boot 0x%08x\n", ··· 269 258 policy->governor = CPUFREQ_DEFAULT_GOVERNOR; 270 259 policy->cpuinfo.min_freq = 8000; 271 260 policy->cpuinfo.max_freq = 200000; 272 - policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; 261 + /* Manual states, that PLL stabilizes in two CLK32 periods */ 262 + policy->cpuinfo.transition_latency = 4 * 1000000000LL / CLK32; 273 263 return 0; 274 264 } 275 265
+8 -4
arch/arm/mach-s3c2410/gpio.c
··· 57 57 case S3C2410_GPIO_SFN2: 58 58 case S3C2410_GPIO_SFN3: 59 59 if (pin < S3C2410_GPIO_BANKB) { 60 + function -= 1; 60 61 function &= 1; 61 62 function <<= S3C2410_GPIO_OFFSET(pin); 62 63 } else { ··· 84 83 unsigned int s3c2410_gpio_getcfg(unsigned int pin) 85 84 { 86 85 void __iomem *base = S3C24XX_GPIO_BASE(pin); 87 - unsigned long mask; 86 + unsigned long val = __raw_readl(base); 88 87 89 88 if (pin < S3C2410_GPIO_BANKB) { 90 - mask = 1 << S3C2410_GPIO_OFFSET(pin); 89 + val >>= S3C2410_GPIO_OFFSET(pin); 90 + val &= 1; 91 + val += 1; 91 92 } else { 92 - mask = 3 << S3C2410_GPIO_OFFSET(pin)*2; 93 + val >>= S3C2410_GPIO_OFFSET(pin)*2; 94 + val &= 3; 93 95 } 94 96 95 - return __raw_readl(base) & mask; 97 + return val | S3C2410_GPIO_INPUT; 96 98 } 97 99 98 100 EXPORT_SYMBOL(s3c2410_gpio_getcfg);
+3 -4
arch/arm/mach-s3c2410/pm.c
··· 451 451 irqstate = s3c_irqwake_eintmask & (1L<<irqoffs); 452 452 453 453 pinstate = s3c2410_gpio_getcfg(pin); 454 - pinstate >>= S3C2410_GPIO_OFFSET(pin)*2; 455 454 456 455 if (!irqstate) { 457 - if (pinstate == 0x02) 456 + if (pinstate == S3C2410_GPIO_IRQ) 458 457 DBG("Leaving IRQ %d (pin %d) enabled\n", irq, pin); 459 458 } else { 460 - if (pinstate == 0x02) { 459 + if (pinstate == S3C2410_GPIO_IRQ) { 461 460 DBG("Disabling IRQ %d (pin %d)\n", irq, pin); 462 - s3c2410_gpio_cfgpin(pin, 0x00); 461 + s3c2410_gpio_cfgpin(pin, S3C2410_GPIO_INPUT); 463 462 } 464 463 } 465 464 }
+2 -2
arch/arm/mach-s3c2410/s3c2412-dma.c
··· 133 133 static void s3c2412_dma_select(struct s3c2410_dma_chan *chan, 134 134 struct s3c24xx_dma_map *map) 135 135 { 136 - writel(chan->regs + S3C2412_DMA_DMAREQSEL, 137 - map->channels[0] | S3C2412_DMAREQSEL_HW); 136 + writel(map->channels[0] | S3C2412_DMAREQSEL_HW, 137 + chan->regs + S3C2412_DMA_DMAREQSEL); 138 138 } 139 139 140 140 static struct s3c24xx_dma_selection __initdata s3c2412_dma_sel = {
+7 -4
arch/arm/mm/init.c
··· 52 52 printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10)); 53 53 54 54 for_each_online_node(node) { 55 + pg_data_t *n = NODE_DATA(node); 56 + struct page *map = n->node_mem_map - n->node_start_pfn; 57 + 55 58 for_each_nodebank (i,mi,node) { 56 59 unsigned int pfn1, pfn2; 57 60 struct page *page, *end; 58 61 59 - pfn1 = mi->bank[i].start >> PAGE_SHIFT; 60 - pfn2 = (mi->bank[i].size + mi->bank[i].start) >> PAGE_SHIFT; 62 + pfn1 = __phys_to_pfn(mi->bank[i].start); 63 + pfn2 = __phys_to_pfn(mi->bank[i].size + mi->bank[i].start); 61 64 62 - page = NODE_MEM_MAP(node) + pfn1; 63 - end = NODE_MEM_MAP(node) + pfn2; 65 + page = map + pfn1; 66 + end = map + pfn2; 64 67 65 68 do { 66 69 total++;
+2 -1
arch/arm/mm/ioremap.c
··· 300 300 addr = (unsigned long)area->addr; 301 301 302 302 #ifndef CONFIG_SMP 303 - if ((((cpu_architecture() >= CPU_ARCH_ARMv6) && (get_cr() & CR_XP)) || 303 + if (DOMAIN_IO == 0 && 304 + (((cpu_architecture() >= CPU_ARCH_ARMv6) && (get_cr() & CR_XP)) || 304 305 cpu_is_xsc3()) && 305 306 !((__pfn_to_phys(pfn) | size | addr) & ~SUPERSECTION_MASK)) { 306 307 area->flags |= VM_ARM_SECTION_MAPPING;
+1 -1
arch/arm/mm/proc-xscale.S
··· 708 708 .type __8033x_proc_info,#object 709 709 __8033x_proc_info: 710 710 .long 0x69054010 711 - .long 0xffffff30 711 + .long 0xfffffd30 712 712 .long PMD_TYPE_SECT | \ 713 713 PMD_SECT_BUFFERABLE | \ 714 714 PMD_SECT_CACHEABLE | \
+24 -1
arch/arm/tools/mach-types
··· 12 12 # 13 13 # http://www.arm.linux.org.uk/developer/machines/?action=new 14 14 # 15 - # Last update: Thu Dec 7 17:19:20 2006 15 + # Last update: Tue Jan 16 16:52:56 2007 16 16 # 17 17 # machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number 18 18 # ··· 1219 1219 hitrack MACH_HITRACK HITRACK 1208 1220 1220 syme1 MACH_SYME1 SYME1 1209 1221 1221 syhl1 MACH_SYHL1 SYHL1 1210 1222 + empca400 MACH_EMPCA400 EMPCA400 1211 1223 + em7210 MACH_EM7210 EM7210 1212 1224 + htchermes MACH_HTCHERMES HTCHERMES 1213 1225 + eti_c1 MACH_ETI_C1 ETI_C1 1214 1226 + mach_dep2410 MACH_MACH_DEP2410 MACH_DEP2410 1215 1227 + ac100 MACH_AC100 AC100 1216 1228 + sneetch MACH_SNEETCH SNEETCH 1217 1229 + studentmate MACH_STUDENTMATE STUDENTMATE 1218 1230 + zir2410 MACH_ZIR2410 ZIR2410 1219 1231 + zir2413 MACH_ZIR2413 ZIR2413 1220 1232 + dlonip3 MACH_DLONIP3 DLONIP3 1221 1233 + instream MACH_INSTREAM INSTREAM 1222 1234 + ambarella MACH_AMBARELLA AMBARELLA 1223 1235 + nevis MACH_NEVIS NEVIS 1224 1236 + htc_trinity MACH_HTC_TRINITY HTC_TRINITY 1225 1237 + ql202b MACH_QL202B QL202B 1226 1238 + vpac270 MACH_VPAC270 VPAC270 1227 1239 + rd129 MACH_RD129 RD129 1228 1240 + htcwizard MACH_HTCWIZARD HTCWIZARD 1229 1241 + xscale_treo680 MACH_XSCALE_TREO680 XSCALE_TREO680 1230 1242 + tecon_tmezon MACH_TECON_TMEZON TECON_TMEZON 1231 1243 + zylonite MACH_ZYLONITE ZYLONITE 1233 1244 + gene1270 MACH_GENE1270 GENE1270 1234
+1
arch/arm/vfp/entry.S
··· 25 25 do_vfp: 26 26 enable_irq 27 27 ldr r4, .LCvfp 28 + ldr r11, [r10, #TI_CPU] @ CPU number 28 29 add r10, r10, #TI_VFPSTATE @ r10 = workspace 29 30 ldr pc, [r4] @ call VFP entry point 30 31
+4
arch/arm/vfp/vfp.h
··· 370 370 u32 (* const fn)(int dd, int dn, int dm, u32 fpscr); 371 371 u32 flags; 372 372 }; 373 + 374 + #ifdef CONFIG_SMP 375 + extern void vfp_save_state(void *location, u32 fpexc); 376 + #endif
+24 -2
arch/arm/vfp/vfphw.S
··· 65 65 @ r2 = faulted PC+4 66 66 @ r9 = successful return 67 67 @ r10 = vfp_state union 68 + @ r11 = CPU number 68 69 @ lr = failure return 69 70 70 71 .globl vfp_support_entry ··· 80 79 DBGSTR1 "enable %x", r10 81 80 ldr r3, last_VFP_context_address 82 81 orr r1, r1, #FPEXC_ENABLE @ user FPEXC has the enable bit set 83 - ldr r4, [r3] @ last_VFP_context pointer 82 + ldr r4, [r3, r11, lsl #2] @ last_VFP_context pointer 84 83 bic r5, r1, #FPEXC_EXCEPTION @ make sure exceptions are disabled 85 84 cmp r4, r10 86 85 beq check_for_exception @ we are returning to the same ··· 92 91 @ exceptions, so we can get at the 93 92 @ rest of it 94 93 94 + #ifndef CONFIG_SMP 95 95 @ Save out the current registers to the old thread state 96 + @ No need for SMP since this is not done lazily 96 97 97 98 DBGSTR1 "save old state %p", r4 98 99 cmp r4, #0 ··· 108 105 stmia r4, {r1, r5, r6, r8} @ save FPEXC, FPSCR, FPINST, FPINST2 109 106 @ and point r4 at the word at the 110 107 @ start of the register dump 108 + #endif 111 109 112 110 no_old_VFP_process: 113 111 DBGSTR1 "load state %p", r10 114 - str r10, [r3] @ update the last_VFP_context pointer 112 + str r10, [r3, r11, lsl #2] @ update the last_VFP_context pointer 115 113 @ Load the saved state back into the VFP 116 114 VFPFLDMIA r10 @ reload the working registers while 117 115 @ FPEXC is in a safe state ··· 165 161 @ code will raise an exception if 166 162 @ required. If not, the user code will 167 163 @ retry the faulted instruction 164 + 165 + #ifdef CONFIG_SMP 166 + .globl vfp_save_state 167 + .type vfp_save_state, %function 168 + vfp_save_state: 169 + @ Save the current VFP state 170 + @ r0 - save location 171 + @ r1 - FPEXC 172 + DBGSTR1 "save VFP state %p", r0 173 + VFPFMRX r2, FPSCR @ current status 174 + VFPFMRX r3, FPINST @ FPINST (always there, rev0 onwards) 175 + tst r1, #FPEXC_FPV2 @ is there an FPINST2 to read? 
176 + VFPFMRX r12, FPINST2, NE @ FPINST2 if needed - avoids reading 177 + @ nonexistent reg on rev0 178 + VFPFSTMIA r0 @ save the working registers 179 + stmia r0, {r1, r2, r3, r12} @ save FPEXC, FPSCR, FPINST, FPINST2 180 + mov pc, lr 181 + #endif 168 182 169 183 last_VFP_context_address: 170 184 .word last_VFP_context
+26 -4
arch/arm/vfp/vfpmodule.c
··· 28 28 void vfp_support_entry(void); 29 29 30 30 void (*vfp_vector)(void) = vfp_testing_entry; 31 - union vfp_state *last_VFP_context; 31 + union vfp_state *last_VFP_context[NR_CPUS]; 32 32 33 33 /* 34 34 * Dual-use variable. ··· 41 41 { 42 42 struct thread_info *thread = v; 43 43 union vfp_state *vfp; 44 + __u32 cpu = thread->cpu; 44 45 45 46 if (likely(cmd == THREAD_NOTIFY_SWITCH)) { 47 + u32 fpexc = fmrx(FPEXC); 48 + 49 + #ifdef CONFIG_SMP 50 + /* 51 + * On SMP, if VFP is enabled, save the old state in 52 + * case the thread migrates to a different CPU. The 53 + * restoring is done lazily. 54 + */ 55 + if ((fpexc & FPEXC_ENABLE) && last_VFP_context[cpu]) { 56 + vfp_save_state(last_VFP_context[cpu], fpexc); 57 + last_VFP_context[cpu]->hard.cpu = cpu; 58 + } 59 + /* 60 + * Thread migration, just force the reloading of the 61 + * state on the new CPU in case the VFP registers 62 + * contain stale data. 63 + */ 64 + if (thread->vfpstate.hard.cpu != cpu) 65 + last_VFP_context[cpu] = NULL; 66 + #endif 67 + 46 68 /* 47 69 * Always disable VFP so we can lazily save/restore the 48 70 * old state. 49 71 */ 50 - fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_ENABLE); 72 + fmxr(FPEXC, fpexc & ~FPEXC_ENABLE); 51 73 return NOTIFY_DONE; 52 74 } 53 75 ··· 90 68 } 91 69 92 70 /* flush and release case: Per-thread VFP cleanup. */ 93 - if (last_VFP_context == vfp) 94 - last_VFP_context = NULL; 71 + if (last_VFP_context[cpu] == vfp) 72 + last_VFP_context[cpu] = NULL; 95 73 96 74 return NOTIFY_DONE; 97 75 }
+27 -12
arch/avr32/configs/atstk1002_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.19-rc2 4 - # Fri Oct 20 11:52:37 2006 3 + # Linux kernel version: 2.6.20-rc6 4 + # Fri Jan 26 13:12:59 2007 5 5 # 6 6 CONFIG_AVR32=y 7 7 CONFIG_GENERIC_HARDIRQS=y ··· 9 9 CONFIG_GENERIC_IRQ_PROBE=y 10 10 CONFIG_RWSEM_GENERIC_SPINLOCK=y 11 11 CONFIG_GENERIC_TIME=y 12 + # CONFIG_ARCH_HAS_ILOG2_U32 is not set 13 + # CONFIG_ARCH_HAS_ILOG2_U64 is not set 12 14 CONFIG_GENERIC_HWEIGHT=y 13 15 CONFIG_GENERIC_CALIBRATE_DELAY=y 14 16 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" ··· 38 36 # CONFIG_UTS_NS is not set 39 37 CONFIG_AUDIT=y 40 38 # CONFIG_IKCONFIG is not set 39 + CONFIG_SYSFS_DEPRECATED=y 41 40 CONFIG_RELAY=y 42 41 CONFIG_INITRAMFS_SOURCE="" 43 42 CONFIG_CC_OPTIMIZE_FOR_SIZE=y ··· 78 75 # Block layer 79 76 # 80 77 CONFIG_BLOCK=y 78 + # CONFIG_LBD is not set 81 79 # CONFIG_BLK_DEV_IO_TRACE is not set 80 + # CONFIG_LSF is not set 82 81 83 82 # 84 83 # IO Schedulers ··· 130 125 # CONFIG_OWNERSHIP_TRACE is not set 131 126 # CONFIG_HZ_100 is not set 132 127 CONFIG_HZ_250=y 128 + # CONFIG_HZ_300 is not set 133 129 # CONFIG_HZ_1000 is not set 134 130 CONFIG_HZ=250 135 131 CONFIG_CMDLINE="" ··· 188 182 # CONFIG_TCP_CONG_ADVANCED is not set 189 183 CONFIG_TCP_CONG_CUBIC=y 190 184 CONFIG_DEFAULT_TCP_CONG="cubic" 185 + # CONFIG_TCP_MD5SIG is not set 191 186 # CONFIG_IPV6 is not set 192 187 # CONFIG_INET6_XFRM_TUNNEL is not set 193 188 # CONFIG_INET6_TUNNEL is not set ··· 267 260 # User Modules And Translation Layers 268 261 # 269 262 CONFIG_MTD_CHAR=y 263 + CONFIG_MTD_BLKDEVS=y 270 264 CONFIG_MTD_BLOCK=y 271 265 # CONFIG_FTL is not set 272 266 # CONFIG_NFTL is not set ··· 363 355 # 364 356 # Misc devices 365 357 # 366 - # CONFIG_SGI_IOC4 is not set 367 358 # CONFIG_TIFM_CORE is not set 368 359 369 360 # ··· 412 405 # 413 406 # PHY device support 414 407 # 408 + # CONFIG_PHYLIB is not set 415 409 416 410 # 417 411 # Ethernet (10 or 100Mbit) 418 412 # 419 - # 
CONFIG_NET_ETHERNET is not set 413 + CONFIG_NET_ETHERNET=y 414 + CONFIG_MII=y 415 + CONFIG_MACB=y 420 416 421 417 # 422 418 # Ethernet (1000 Mbit) ··· 515 505 # CONFIG_GEN_RTC is not set 516 506 # CONFIG_DTLK is not set 517 507 # CONFIG_R3964 is not set 518 - 519 - # 520 - # Ftape, the floppy tape device driver 521 - # 522 508 # CONFIG_RAW_DRIVER is not set 523 509 524 510 # ··· 627 621 # 628 622 629 623 # 624 + # Virtualization 625 + # 626 + 627 + # 630 628 # File systems 631 629 # 632 630 CONFIG_EXT2_FS=m ··· 693 683 # CONFIG_BEFS_FS is not set 694 684 # CONFIG_BFS_FS is not set 695 685 # CONFIG_EFS_FS is not set 696 - # CONFIG_JFFS_FS is not set 697 686 CONFIG_JFFS2_FS=y 698 687 CONFIG_JFFS2_FS_DEBUG=0 699 688 CONFIG_JFFS2_FS_WRITEBUFFER=y ··· 772 763 CONFIG_NLS_UTF8=m 773 764 774 765 # 766 + # Distributed Lock Manager 767 + # 768 + # CONFIG_DLM is not set 769 + 770 + # 775 771 # Kernel hacking 776 772 # 777 773 CONFIG_TRACE_IRQFLAGS_SUPPORT=y ··· 784 770 CONFIG_ENABLE_MUST_CHECK=y 785 771 CONFIG_MAGIC_SYSRQ=y 786 772 # CONFIG_UNUSED_SYMBOLS is not set 773 + CONFIG_DEBUG_FS=y 774 + # CONFIG_HEADERS_CHECK is not set 787 775 CONFIG_DEBUG_KERNEL=y 788 776 CONFIG_LOG_BUF_SHIFT=14 789 777 CONFIG_DETECT_SOFTLOCKUP=y ··· 801 785 # CONFIG_DEBUG_KOBJECT is not set 802 786 CONFIG_DEBUG_BUGVERBOSE=y 803 787 # CONFIG_DEBUG_INFO is not set 804 - CONFIG_DEBUG_FS=y 805 788 # CONFIG_DEBUG_VM is not set 806 789 # CONFIG_DEBUG_LIST is not set 807 790 CONFIG_FRAME_POINTER=y 808 - # CONFIG_UNWIND_INFO is not set 809 791 CONFIG_FORCED_INLINING=y 810 - # CONFIG_HEADERS_CHECK is not set 811 792 # CONFIG_RCU_TORTURE_TEST is not set 812 793 # CONFIG_KPROBES is not set 813 794 ··· 822 809 # 823 810 # Library routines 824 811 # 812 + CONFIG_BITREVERSE=y 825 813 CONFIG_CRC_CCITT=m 826 814 # CONFIG_CRC16 is not set 827 815 CONFIG_CRC32=y ··· 831 817 CONFIG_ZLIB_INFLATE=y 832 818 CONFIG_ZLIB_DEFLATE=y 833 819 CONFIG_PLIST=y 820 + CONFIG_IOMAP_COPY=y
+1
arch/avr32/kernel/avr32_ksyms.c
··· 29 29 */ 30 30 EXPORT_SYMBOL(memset); 31 31 EXPORT_SYMBOL(memcpy); 32 + EXPORT_SYMBOL(clear_page); 32 33 33 34 /* 34 35 * Userspace access stuff.
+2
arch/i386/boot/compressed/relocs.c
··· 43 43 /* Match found */ 44 44 return 1; 45 45 } 46 + if (strncmp(sym_name, "__crc_", 6) == 0) 47 + return 1; 46 48 return 0; 47 49 } 48 50
-9
arch/i386/kernel/cpu/cpufreq/p4-clockmod.c
··· 51 51 52 52 53 53 static int has_N44_O17_errata[NR_CPUS]; 54 - static int has_N60_errata[NR_CPUS]; 55 54 static unsigned int stock_freq; 56 55 static struct cpufreq_driver p4clockmod_driver; 57 56 static unsigned int cpufreq_p4_get(unsigned int cpu); ··· 223 224 case 0x0f12: 224 225 has_N44_O17_errata[policy->cpu] = 1; 225 226 dprintk("has errata -- disabling low frequencies\n"); 226 - break; 227 - 228 - case 0x0f29: 229 - has_N60_errata[policy->cpu] = 1; 230 - dprintk("has errata -- disabling frequencies lower than 2ghz\n"); 231 - break; 232 227 } 233 228 234 229 /* get max frequency */ ··· 233 240 /* table init */ 234 241 for (i=1; (p4clockmod_table[i].frequency != CPUFREQ_TABLE_END); i++) { 235 242 if ((i<2) && (has_N44_O17_errata[policy->cpu])) 236 - p4clockmod_table[i].frequency = CPUFREQ_ENTRY_INVALID; 237 - else if (has_N60_errata[policy->cpu] && ((stock_freq * i)/8) < 2000000) 238 243 p4clockmod_table[i].frequency = CPUFREQ_ENTRY_INVALID; 239 244 else 240 245 p4clockmod_table[i].frequency = (stock_freq * i)/8;
+1 -1
arch/i386/kernel/cpu/cyrix.c
··· 173 173 ccr4 = getCx86(CX86_CCR4); 174 174 ccr4 |= 0x38; /* FPU fast, DTE cache, Mem bypass */ 175 175 176 - setCx86(CX86_CCR4, ccr4); 176 + setCx86(CX86_CCR3, ccr3); 177 177 178 178 set_cx86_memwb(); 179 179 set_cx86_reorder();
+73 -16
arch/i386/kernel/efi.c
··· 473 473 } 474 474 475 475 /* 476 + * Wrap all the virtual calls in a way that forces the parameters on the stack. 477 + */ 478 + 479 + #define efi_call_virt(f, args...) \ 480 + ((efi_##f##_t __attribute__((regparm(0)))*)efi.systab->runtime->f)(args) 481 + 482 + static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc) 483 + { 484 + return efi_call_virt(get_time, tm, tc); 485 + } 486 + 487 + static efi_status_t virt_efi_set_time (efi_time_t *tm) 488 + { 489 + return efi_call_virt(set_time, tm); 490 + } 491 + 492 + static efi_status_t virt_efi_get_wakeup_time (efi_bool_t *enabled, 493 + efi_bool_t *pending, 494 + efi_time_t *tm) 495 + { 496 + return efi_call_virt(get_wakeup_time, enabled, pending, tm); 497 + } 498 + 499 + static efi_status_t virt_efi_set_wakeup_time (efi_bool_t enabled, 500 + efi_time_t *tm) 501 + { 502 + return efi_call_virt(set_wakeup_time, enabled, tm); 503 + } 504 + 505 + static efi_status_t virt_efi_get_variable (efi_char16_t *name, 506 + efi_guid_t *vendor, u32 *attr, 507 + unsigned long *data_size, void *data) 508 + { 509 + return efi_call_virt(get_variable, name, vendor, attr, data_size, data); 510 + } 511 + 512 + static efi_status_t virt_efi_get_next_variable (unsigned long *name_size, 513 + efi_char16_t *name, 514 + efi_guid_t *vendor) 515 + { 516 + return efi_call_virt(get_next_variable, name_size, name, vendor); 517 + } 518 + 519 + static efi_status_t virt_efi_set_variable (efi_char16_t *name, 520 + efi_guid_t *vendor, 521 + unsigned long attr, 522 + unsigned long data_size, void *data) 523 + { 524 + return efi_call_virt(set_variable, name, vendor, attr, data_size, data); 525 + } 526 + 527 + static efi_status_t virt_efi_get_next_high_mono_count (u32 *count) 528 + { 529 + return efi_call_virt(get_next_high_mono_count, count); 530 + } 531 + 532 + static void virt_efi_reset_system (int reset_type, efi_status_t status, 533 + unsigned long data_size, 534 + efi_char16_t *data) 535 + { 536 + efi_call_virt(reset_system, 
reset_type, status, data_size, data); 537 + } 538 + 539 + /* 476 540 * This function will switch the EFI runtime services to virtual mode. 477 541 * Essentially, look through the EFI memmap and map every region that 478 542 * has the runtime attribute bit set in its memory descriptor and update ··· 589 525 * pointers in the runtime service table to the new virtual addresses. 590 526 */ 591 527 592 - efi.get_time = (efi_get_time_t *) efi.systab->runtime->get_time; 593 - efi.set_time = (efi_set_time_t *) efi.systab->runtime->set_time; 594 - efi.get_wakeup_time = (efi_get_wakeup_time_t *) 595 - efi.systab->runtime->get_wakeup_time; 596 - efi.set_wakeup_time = (efi_set_wakeup_time_t *) 597 - efi.systab->runtime->set_wakeup_time; 598 - efi.get_variable = (efi_get_variable_t *) 599 - efi.systab->runtime->get_variable; 600 - efi.get_next_variable = (efi_get_next_variable_t *) 601 - efi.systab->runtime->get_next_variable; 602 - efi.set_variable = (efi_set_variable_t *) 603 - efi.systab->runtime->set_variable; 604 - efi.get_next_high_mono_count = (efi_get_next_high_mono_count_t *) 605 - efi.systab->runtime->get_next_high_mono_count; 606 - efi.reset_system = (efi_reset_system_t *) 607 - efi.systab->runtime->reset_system; 528 + efi.get_time = virt_efi_get_time; 529 + efi.set_time = virt_efi_set_time; 530 + efi.get_wakeup_time = virt_efi_get_wakeup_time; 531 + efi.set_wakeup_time = virt_efi_set_wakeup_time; 532 + efi.get_variable = virt_efi_get_variable; 533 + efi.get_next_variable = virt_efi_get_next_variable; 534 + efi.set_variable = virt_efi_set_variable; 535 + efi.get_next_high_mono_count = virt_efi_get_next_high_mono_count; 536 + efi.reset_system = virt_efi_reset_system; 608 537 } 609 538 610 539 void __init
+4
arch/i386/kernel/entry.S
··· 302 302 pushl $(__USER_CS) 303 303 CFI_ADJUST_CFA_OFFSET 4 304 304 /*CFI_REL_OFFSET cs, 0*/ 305 + #ifndef CONFIG_COMPAT_VDSO 305 306 /* 306 307 * Push current_thread_info()->sysenter_return to the stack. 307 308 * A tiny bit of offset fixup is necessary - 4*4 means the 4 words 308 309 * pushed above; +8 corresponds to copy_thread's esp0 setting. 309 310 */ 310 311 pushl (TI_sysenter_return-THREAD_SIZE+8+4*4)(%esp) 312 + #else 313 + pushl $SYSENTER_RETURN 314 + #endif 311 315 CFI_ADJUST_CFA_OFFSET 4 312 316 CFI_REL_OFFSET eip, 0 313 317
+19 -13
arch/i386/kernel/io_apic.c
··· 1227 1227 1228 1228 static int __assign_irq_vector(int irq) 1229 1229 { 1230 - static int current_vector = FIRST_DEVICE_VECTOR, offset = 0; 1231 - int vector; 1230 + static int current_vector = FIRST_DEVICE_VECTOR, current_offset = 0; 1231 + int vector, offset, i; 1232 1232 1233 1233 BUG_ON((unsigned)irq >= NR_IRQ_VECTORS); 1234 1234 1235 1235 if (irq_vector[irq] > 0) 1236 1236 return irq_vector[irq]; 1237 1237 1238 - current_vector += 8; 1239 - if (current_vector == SYSCALL_VECTOR) 1240 - current_vector += 8; 1241 - 1242 - if (current_vector >= FIRST_SYSTEM_VECTOR) { 1243 - offset++; 1244 - if (!(offset % 8)) 1245 - return -ENOSPC; 1246 - current_vector = FIRST_DEVICE_VECTOR + offset; 1247 - } 1248 - 1249 1238 vector = current_vector; 1239 + offset = current_offset; 1240 + next: 1241 + vector += 8; 1242 + if (vector >= FIRST_SYSTEM_VECTOR) { 1243 + offset = (offset + 1) % 8; 1244 + vector = FIRST_DEVICE_VECTOR + offset; 1245 + } 1246 + if (vector == current_vector) 1247 + return -ENOSPC; 1248 + if (vector == SYSCALL_VECTOR) 1249 + goto next; 1250 + for (i = 0; i < NR_IRQ_VECTORS; i++) 1251 + if (irq_vector[i] == vector) 1252 + goto next; 1253 + 1254 + current_vector = vector; 1255 + current_offset = offset; 1250 1256 irq_vector[irq] = vector; 1251 1257 1252 1258 return vector;
+1 -7
arch/i386/kernel/nmi.c
··· 310 310 311 311 if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE)) 312 312 return 0; 313 - /* 314 - * If any other x86 CPU has a local APIC, then 315 - * please test the NMI stuff there and send me the 316 - * missing bits. Right now Intel P6/P4 and AMD K7 only. 317 - */ 318 - if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0)) 319 - return 0; /* no lapic support */ 313 + 320 314 nmi_watchdog = nmi; 321 315 return 1; 322 316 }
+8 -1
arch/i386/kernel/paravirt.c
··· 566 566 .irq_enable_sysexit = native_irq_enable_sysexit, 567 567 .iret = native_iret, 568 568 }; 569 - EXPORT_SYMBOL(paravirt_ops); 569 + 570 + /* 571 + * NOTE: CONFIG_PARAVIRT is experimental and the paravirt_ops 572 + * semantics are subject to change. Hence we only do this 573 + * internal-only export of this, until it gets sorted out and 574 + * all lowlevel CPU ops used by modules are separately exported. 575 + */ 576 + EXPORT_SYMBOL_GPL(paravirt_ops);
+9 -5
arch/i386/kernel/sysenter.c
··· 79 79 #ifdef CONFIG_COMPAT_VDSO 80 80 __set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY); 81 81 printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO)); 82 - #else 83 - /* 84 - * In the non-compat case the ELF coredumping code needs the fixmap: 85 - */ 86 - __set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_KERNEL_RO); 87 82 #endif 88 83 89 84 if (!boot_cpu_has(X86_FEATURE_SEP)) { ··· 95 100 return 0; 96 101 } 97 102 103 + #ifndef CONFIG_COMPAT_VDSO 98 104 static struct page *syscall_nopage(struct vm_area_struct *vma, 99 105 unsigned long adr, int *type) 100 106 { ··· 142 146 vma->vm_end = addr + PAGE_SIZE; 143 147 /* MAYWRITE to allow gdb to COW and set breakpoints */ 144 148 vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE; 149 + /* 150 + * Make sure the vDSO gets into every core dump. 151 + * Dumping its contents makes post-mortem fully interpretable later 152 + * without matching up the same kernel and hardware config to see 153 + * what PC values meant. 154 + */ 155 + vma->vm_flags |= VM_ALWAYSDUMP; 145 156 vma->vm_flags |= mm->def_flags; 146 157 vma->vm_page_prot = protection_map[vma->vm_flags & 7]; 147 158 vma->vm_ops = &syscall_vm_ops; ··· 190 187 { 191 188 return 0; 192 189 } 190 + #endif
+1 -1
arch/i386/mach-default/setup.c
··· 102 102 * along the MCA bus. Use this to hook into that chain if you will need 103 103 * it. 104 104 **/ 105 - void __init mca_nmi_hook(void) 105 + void mca_nmi_hook(void) 106 106 { 107 107 /* If I recall correctly, there's a whole bunch of other things that 108 108 * we can do to check for NMI problems, but that's all I know about
+3
arch/ia64/kernel/acpi.c
··· 609 609 610 610 void acpi_unregister_gsi(u32 gsi) 611 611 { 612 + if (acpi_irq_model == ACPI_IRQ_MODEL_PLATFORM) 613 + return; 614 + 612 615 iosapic_unregister_intr(gsi); 613 616 } 614 617
+3
arch/ia64/kernel/irq.c
··· 122 122 for (irq=0; irq < NR_IRQS; irq++) { 123 123 desc = irq_desc + irq; 124 124 125 + if (desc->status == IRQ_DISABLED) 126 + continue; 127 + 125 128 /* 126 129 * No handling for now. 127 130 * TBD: Implement a disable function so we can now
+14
arch/mips/Kconfig
··· 1568 1568 depends on MIPS_MT 1569 1569 default y 1570 1570 1571 + config MIPS_MT_SMTC_INSTANT_REPLAY 1572 + bool "Low-latency Dispatch of Deferred SMTC IPIs" 1573 + depends on MIPS_MT_SMTC 1574 + default y 1575 + help 1576 + SMTC pseudo-interrupts between TCs are deferred and queued 1577 + if the target TC is interrupt-inhibited (IXMT). In the first 1578 + SMTC prototypes, these queued IPIs were serviced on return 1579 + to user mode, or on entry into the kernel idle loop. The 1580 + INSTANT_REPLAY option dispatches them as part of local_irq_restore() 1581 + processing, which adds runtime overhead (hence the option to turn 1582 + it off), but ensures that IPIs are handled promptly even under 1583 + heavy I/O interrupt load. 1584 + 1571 1585 config MIPS_VPE_LOADER_TOM 1572 1586 bool "Load VPE program into memory hidden from linux" 1573 1587 depends on MIPS_VPE_LOADER
+1 -1
arch/mips/Makefile
··· 623 623 624 624 ifdef CONFIG_MIPS 625 625 CHECKFLAGS += $(shell $(CC) $(CFLAGS) -dM -E -xc /dev/null | \ 626 - egrep -vw '__GNUC_(MAJOR|MINOR|PATCHLEVEL)__' | \ 626 + egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \ 627 627 sed -e 's/^\#define /-D/' -e "s/ /='/" -e "s/$$/'/") 628 628 ifdef CONFIG_64BIT 629 629 CHECKFLAGS += -m64
+1 -1
arch/mips/dec/prom/memory.c
··· 122 122 addr += PAGE_SIZE; 123 123 } 124 124 125 - printk("Freeing unused PROM memory: %ldk freed\n", 125 + printk("Freeing unused PROM memory: %ldkb freed\n", 126 126 (end - PAGE_SIZE) >> 10); 127 127 128 128 return end - PAGE_SIZE;
+42 -24
arch/mips/kernel/smtc.c
··· 4 4 #include <linux/sched.h> 5 5 #include <linux/cpumask.h> 6 6 #include <linux/interrupt.h> 7 + #include <linux/module.h> 7 8 8 9 #include <asm/cpu.h> 9 10 #include <asm/processor.h> ··· 271 270 * of their initialization in smtc_cpu_setup(). 272 271 */ 273 272 274 - tlbsiz = tlbsiz & 0x3f; /* MIPS32 limits TLB indices to 64 */ 275 - cpu_data[0].tlbsize = tlbsiz; 273 + /* MIPS32 limits TLB indices to 64 */ 274 + if (tlbsiz > 64) 275 + tlbsiz = 64; 276 + cpu_data[0].tlbsize = current_cpu_data.tlbsize = tlbsiz; 276 277 smtc_status |= SMTC_TLB_SHARED; 278 + local_flush_tlb_all(); 277 279 278 280 printk("TLB of %d entry pairs shared by %d VPEs\n", 279 281 tlbsiz, vpes); ··· 1021 1017 * SMTC-specific hacks invoked from elsewhere in the kernel. 1022 1018 */ 1023 1019 1020 + void smtc_ipi_replay(void) 1021 + { 1022 + /* 1023 + * To the extent that we've ever turned interrupts off, 1024 + * we may have accumulated deferred IPIs. This is subtle. 1025 + * If we use the smtc_ipi_qdepth() macro, we'll get an 1026 + * exact number - but we'll also disable interrupts 1027 + * and create a window of failure where a new IPI gets 1028 + * queued after we test the depth but before we re-enable 1029 + * interrupts. So long as IXMT never gets set, however, 1030 + * we should be OK: If we pick up something and dispatch 1031 + * it here, that's great. If we see nothing, but concurrent 1032 + * with this operation, another TC sends us an IPI, IXMT 1033 + * is clear, and we'll handle it as a real pseudo-interrupt 1034 + * and not a pseudo-pseudo interrupt. 
1035 + */ 1036 + if (IPIQ[smp_processor_id()].depth > 0) { 1037 + struct smtc_ipi *pipi; 1038 + extern void self_ipi(struct smtc_ipi *); 1039 + 1040 + while ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()]))) { 1041 + self_ipi(pipi); 1042 + smtc_cpu_stats[smp_processor_id()].selfipis++; 1043 + } 1044 + } 1045 + } 1046 + 1047 + EXPORT_SYMBOL(smtc_ipi_replay); 1048 + 1024 1049 void smtc_idle_loop_hook(void) 1025 1050 { 1026 1051 #ifdef SMTC_IDLE_HOOK_DEBUG ··· 1146 1113 if (pdb_msg != &id_ho_db_msg[0]) 1147 1114 printk("CPU%d: %s", smp_processor_id(), id_ho_db_msg); 1148 1115 #endif /* SMTC_IDLE_HOOK_DEBUG */ 1149 - /* 1150 - * To the extent that we've ever turned interrupts off, 1151 - * we may have accumulated deferred IPIs. This is subtle. 1152 - * If we use the smtc_ipi_qdepth() macro, we'll get an 1153 - * exact number - but we'll also disable interrupts 1154 - * and create a window of failure where a new IPI gets 1155 - * queued after we test the depth but before we re-enable 1156 - * interrupts. So long as IXMT never gets set, however, 1157 - * we should be OK: If we pick up something and dispatch 1158 - * it here, that's great. If we see nothing, but concurrent 1159 - * with this operation, another TC sends us an IPI, IXMT 1160 - * is clear, and we'll handle it as a real pseudo-interrupt 1161 - * and not a pseudo-pseudo interrupt. 1162 - */ 1163 - if (IPIQ[smp_processor_id()].depth > 0) { 1164 - struct smtc_ipi *pipi; 1165 - extern void self_ipi(struct smtc_ipi *); 1166 1116 1167 - if ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()])) != NULL) { 1168 - self_ipi(pipi); 1169 - smtc_cpu_stats[smp_processor_id()].selfipis++; 1170 - } 1171 - } 1117 + /* 1118 + * Replay any accumulated deferred IPIs. If "Instant Replay" 1119 + * is in use, there should never be any. 1120 + */ 1121 + #ifndef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY 1122 + smtc_ipi_replay(); 1123 + #endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */ 1172 1124 } 1173 1125 1174 1126 void smtc_soft_dump(void)
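The `smtc_ipi_replay()` function factored out above drains the deferred-IPI queue with a `while` loop rather than the old single-shot `if`, so IPIs that arrive concurrently during the drain are still handled before returning. A toy model of the queue and the drain (the queue layout is illustrative, not the kernel's `smtc_ipi` structures):

```c
#include <assert.h>

#define QMAX 8

/* IPIs sent to an interrupt-inhibited TC are queued here */
static int ipi_q[QMAX];
static int depth;
static int handled;

static void ipi_enqueue(int ipi)
{
	ipi_q[depth++] = ipi;
}

/* like smtc_ipi_replay(): keep dequeuing until the queue is empty,
 * so entries added mid-drain are not left behind */
static void ipi_replay(void)
{
	while (depth > 0)
		handled += ipi_q[--depth];
}
```

With `CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY`, this drain runs from `local_irq_restore()`, so the idle-loop call becomes redundant (hence the `#ifndef` guard in the hunk above).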
+5 -4
arch/mips/kernel/vpe.c
··· 139 139 struct list_head list; 140 140 }; 141 141 142 - struct vpecontrol_ { 142 + struct { 143 143 /* Virtual processing elements */ 144 144 struct list_head vpe_list; 145 145 146 146 /* Thread contexts */ 147 147 struct list_head tc_list; 148 - } vpecontrol; 148 + } vpecontrol = { 149 + .vpe_list = LIST_HEAD_INIT(vpecontrol.vpe_list), 150 + .tc_list = LIST_HEAD_INIT(vpecontrol.tc_list) 151 + }; 149 152 150 153 static void release_progmem(void *ptr); 151 154 /* static __attribute_used__ void dump_vpe(struct vpe * v); */ ··· 1391 1388 1392 1389 /* dump_mtregs(); */ 1393 1390 1394 - INIT_LIST_HEAD(&vpecontrol.vpe_list); 1395 - INIT_LIST_HEAD(&vpecontrol.tc_list); 1396 1391 1397 1392 val = read_c0_mvpconf0(); 1398 1393 for (i = 0; i < ((val & MVPCONF0_PTC) + 1); i++) {
+2 -1
arch/mips/mips-boards/malta/Makefile
··· 19 19 # under Linux. 20 20 # 21 21 22 - obj-y := malta_int.o malta_mtd.o malta_setup.o 22 + obj-y := malta_int.o malta_setup.o 23 + obj-$(CONFIG_MTD) += malta_mtd.o 23 24 obj-$(CONFIG_SMP) += malta_smp.o
+1 -1
arch/mips/mips-boards/sim/sim_setup.c
··· 57 57 board_time_init = sim_time_init; 58 58 prom_printf("Linux started...\n"); 59 59 60 - #ifdef CONFIG_MT_SMP 60 + #ifdef CONFIG_MIPS_MT_SMP 61 61 sanitize_tlb_entries(); 62 62 #endif 63 63 }
+2 -1
arch/mips/mm/init.c
··· 501 501 502 502 freed = prom_free_prom_memory(); 503 503 if (freed) 504 - printk(KERN_INFO "Freeing firmware memory: %ldk freed\n",freed); 504 + printk(KERN_INFO "Freeing firmware memory: %ldkb freed\n", 505 + freed >> 10); 505 506 506 507 free_init_pages("unused kernel memory", 507 508 __pa_symbol(&__init_begin),
+2 -2
arch/mips/momentum/ocelot_g/prom.c
··· 28 28 extern unsigned long marvell_base; 29 29 extern unsigned long bus_clock; 30 30 31 - #ifdef CONFIG_GALILLEO_GT64240_ETH 31 + #ifdef CONFIG_GALILEO_GT64240_ETH 32 32 extern unsigned char prom_mac_addr_base[6]; 33 33 #endif 34 34 ··· 61 61 mips_machgroup = MACH_GROUP_MOMENCO; 62 62 mips_machtype = MACH_MOMENCO_OCELOT_G; 63 63 64 - #ifdef CONFIG_GALILLEO_GT64240_ETH 64 + #ifdef CONFIG_GALILEO_GT64240_ETH 65 65 /* get the base MAC address for on-board ethernet ports */ 66 66 memcpy(prom_mac_addr_base, (void*)0xfc807cf2, 6); 67 67 #endif
+2 -2
arch/mips/momentum/ocelot_g/setup.c
··· 64 64 65 65 #include "ocelot_pld.h" 66 66 67 - #ifdef CONFIG_GALILLEO_GT64240_ETH 67 + #ifdef CONFIG_GALILEO_GT64240_ETH 68 68 extern unsigned char prom_mac_addr_base[6]; 69 69 #endif 70 70 ··· 185 185 /* do handoff reconfiguration */ 186 186 PMON_v2_setup(); 187 187 188 - #ifdef CONFIG_GALILLEO_GT64240_ETH 188 + #ifdef CONFIG_GALILEO_GT64240_ETH 189 189 /* get the mac addr */ 190 190 memcpy(prom_mac_addr_base, (void*)0xfc807cf2, 6); 191 191 #endif
+9 -3
arch/mips/vr41xx/common/irq.c
··· 1 1 /* 2 2 * Interrupt handing routines for NEC VR4100 series. 3 3 * 4 - * Copyright (C) 2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 4 + * Copyright (C) 2005-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License as published by ··· 73 73 if (cascade->get_irq != NULL) { 74 74 unsigned int source_irq = irq; 75 75 desc = irq_desc + source_irq; 76 - desc->chip->ack(source_irq); 76 + if (desc->chip->mask_ack) 77 + desc->chip->mask_ack(source_irq); 78 + else { 79 + desc->chip->mask(source_irq); 80 + desc->chip->ack(source_irq); 81 + } 77 82 irq = cascade->get_irq(irq); 78 83 if (irq < 0) 79 84 atomic_inc(&irq_err_count); 80 85 else 81 86 irq_dispatch(irq); 82 - desc->chip->end(source_irq); 87 + if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask) 88 + desc->chip->unmask(source_irq); 83 89 } else 84 90 do_IRQ(irq); 85 91 }
+6 -2
arch/powerpc/Kconfig
··· 492 492 select PPC_NATIVE 493 493 select PPC_RTAS 494 494 select MMIO_NVRAM 495 + select ATA_NONSTANDARD if ATA 495 496 default n 496 497 help 497 498 This option enables support for the Maple 970FX Evaluation Board. ··· 534 533 select UDBG_RTAS_CONSOLE 535 534 536 535 config PPC_PS3 537 - bool "Sony PS3" 536 + bool "Sony PS3 (incomplete)" 538 537 depends on PPC_MULTIPLATFORM && PPC64 539 538 select PPC_CELL 540 539 help 541 540 This option enables support for the Sony PS3 game console 542 541 and other platforms using the PS3 hypervisor. 542 + Support for this platform is not yet complete, so 543 + enabling this will not result in a bootable kernel on a 544 + PS3 system. 543 545 544 546 config PPC_CELLEB 545 547 bool "Toshiba's Cell Reference Set 'Celleb' Architecture" ··· 1206 1202 1207 1203 config KPROBES 1208 1204 bool "Kprobes (EXPERIMENTAL)" 1209 - depends on PPC64 && KALLSYMS && EXPERIMENTAL && MODULES 1205 + depends on !BOOKE && !4xx && KALLSYMS && EXPERIMENTAL && MODULES 1210 1206 help 1211 1207 Kprobes allows you to trap at almost any kernel address and 1212 1208 execute a callback function. register_kprobe() establishes
+6 -2
arch/powerpc/kernel/kprobes.c
··· 46 46 if ((unsigned long)p->addr & 0x03) { 47 47 printk("Attempt to register kprobe at an unaligned address\n"); 48 48 ret = -EINVAL; 49 - } else if (IS_MTMSRD(insn) || IS_RFID(insn)) { 50 - printk("Cannot register a kprobe on rfid or mtmsrd\n"); 49 + } else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) { 50 + printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n"); 51 51 ret = -EINVAL; 52 52 } 53 53 ··· 483 483 memcpy(&kcb->jprobe_saved_regs, regs, sizeof(struct pt_regs)); 484 484 485 485 /* setup return addr to the jprobe handler routine */ 486 + #ifdef CONFIG_PPC64 486 487 regs->nip = (unsigned long)(((func_descr_t *)jp->entry)->entry); 487 488 regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc); 489 + #else 490 + regs->nip = (unsigned long)jp->entry; 491 + #endif 488 492 489 493 return 1; 490 494 }
+1 -1
arch/powerpc/kernel/pci_64.c
··· 1429 1429 1430 1430 for (ln = pci_root_buses.next; ln != &pci_root_buses; ln = ln->next) { 1431 1431 bus = pci_bus_b(ln); 1432 - if (in_bus >= bus->number && in_bus < (bus->number + bus->subordinate)) 1432 + if (in_bus >= bus->number && in_bus <= bus->subordinate) 1433 1433 break; 1434 1434 bus = NULL; 1435 1435 }
+74 -35
arch/powerpc/kernel/traps.c
··· 535 535 } 536 536 } 537 537 538 - static void parse_fpe(struct pt_regs *regs) 538 + static inline int __parse_fpscr(unsigned long fpscr) 539 539 { 540 - int code = 0; 541 - unsigned long fpscr; 542 - 543 - flush_fp_to_thread(current); 544 - 545 - fpscr = current->thread.fpscr.val; 540 + int ret = 0; 546 541 547 542 /* Invalid operation */ 548 543 if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX)) 549 - code = FPE_FLTINV; 544 + ret = FPE_FLTINV; 550 545 551 546 /* Overflow */ 552 547 else if ((fpscr & FPSCR_OE) && (fpscr & FPSCR_OX)) 553 - code = FPE_FLTOVF; 548 + ret = FPE_FLTOVF; 554 549 555 550 /* Underflow */ 556 551 else if ((fpscr & FPSCR_UE) && (fpscr & FPSCR_UX)) 557 - code = FPE_FLTUND; 552 + ret = FPE_FLTUND; 558 553 559 554 /* Divide by zero */ 560 555 else if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX)) 561 - code = FPE_FLTDIV; 556 + ret = FPE_FLTDIV; 562 557 563 558 /* Inexact result */ 564 559 else if ((fpscr & FPSCR_XE) && (fpscr & FPSCR_XX)) 565 - code = FPE_FLTRES; 560 + ret = FPE_FLTRES; 561 + 562 + return ret; 563 + } 564 + 565 + static void parse_fpe(struct pt_regs *regs) 566 + { 567 + int code = 0; 568 + 569 + flush_fp_to_thread(current); 570 + 571 + code = __parse_fpscr(current->thread.fpscr.val); 566 572 567 573 _exception(SIGFPE, regs, code, regs->nip); 568 574 } ··· 745 739 extern int do_mathemu(struct pt_regs *regs); 746 740 747 741 /* We can now get here via a FP Unavailable exception if the core 748 - * has no FPU, in that case no reason flags will be set */ 749 - #ifdef CONFIG_MATH_EMULATION 750 - /* (reason & REASON_ILLEGAL) would be the obvious thing here, 751 - * but there seems to be a hardware bug on the 405GP (RevD) 752 - * that means ESR is sometimes set incorrectly - either to 753 - * ESR_DST (!?) or 0. In the process of chasing this with the 754 - * hardware people - not sure if it can happen on any illegal 755 - * instruction or only on FP instructions, whether there is a 756 - * pattern to occurences etc. 
-dgibson 31/Mar/2003 */ 757 - if (!(reason & REASON_TRAP) && do_mathemu(regs) == 0) { 758 - emulate_single_step(regs); 759 - return; 760 - } 761 - #endif /* CONFIG_MATH_EMULATION */ 742 + * has no FPU, in that case the reason flags will be 0 */ 762 743 763 744 if (reason & REASON_FP) { 764 745 /* IEEE FP exception */ ··· 770 777 } 771 778 772 779 local_irq_enable(); 780 + 781 + #ifdef CONFIG_MATH_EMULATION 782 + /* (reason & REASON_ILLEGAL) would be the obvious thing here, 783 + * but there seems to be a hardware bug on the 405GP (RevD) 784 + * that means ESR is sometimes set incorrectly - either to 785 + * ESR_DST (!?) or 0. In the process of chasing this with the 786 + * hardware people - not sure if it can happen on any illegal 787 + * instruction or only on FP instructions, whether there is a 788 + * pattern to occurences etc. -dgibson 31/Mar/2003 */ 789 + switch (do_mathemu(regs)) { 790 + case 0: 791 + emulate_single_step(regs); 792 + return; 793 + case 1: { 794 + int code = 0; 795 + code = __parse_fpscr(current->thread.fpscr.val); 796 + _exception(SIGFPE, regs, code, regs->nip); 797 + return; 798 + } 799 + case -EFAULT: 800 + _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 801 + return; 802 + } 803 + /* fall through on any other errors */ 804 + #endif /* CONFIG_MATH_EMULATION */ 773 805 774 806 /* Try to emulate it if we should. 
*/ 775 807 if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) { ··· 909 891 910 892 #ifdef CONFIG_MATH_EMULATION 911 893 errcode = do_mathemu(regs); 894 + 895 + switch (errcode) { 896 + case 0: 897 + emulate_single_step(regs); 898 + return; 899 + case 1: { 900 + int code = 0; 901 + code = __parse_fpscr(current->thread.fpscr.val); 902 + _exception(SIGFPE, regs, code, regs->nip); 903 + return; 904 + } 905 + case -EFAULT: 906 + _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 907 + return; 908 + default: 909 + _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 910 + return; 911 + } 912 + 912 913 #else 913 914 errcode = Soft_emulate_8xx(regs); 914 - #endif 915 - if (errcode) { 916 - if (errcode > 0) 917 - _exception(SIGFPE, regs, 0, 0); 918 - else if (errcode == -EFAULT) 919 - _exception(SIGSEGV, regs, 0, 0); 920 - else 921 - _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 922 - } else 915 + switch (errcode) { 916 + case 0: 923 917 emulate_single_step(regs); 918 + return; 919 + case 1: 920 + _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 921 + return; 922 + case -EFAULT: 923 + _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 924 + return; 925 + } 926 + #endif 924 927 } 925 928 #endif /* CONFIG_8xx */ 926 929
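The `__parse_fpscr()` helper factored out above maps FPSCR enable/status bit pairs to siginfo FPE codes, checked in priority order. A user-space sketch of that mapping, with illustrative FPSCR bit values and FPE codes standing in for the real `asm/reg.h` and siginfo definitions (treat every constant here as an assumption, not the kernel's actual values):

```c
#include <assert.h>

/* Illustrative stand-ins for the asm/reg.h FPSCR bit definitions:
 * one "exception enable" bit paired with one "exception summary" bit
 * per condition. */
#define FPSCR_VE 0x00000080UL  /* invalid-operation enable */
#define FPSCR_VX 0x20000000UL  /* invalid-operation summary */
#define FPSCR_OE 0x00000040UL  /* overflow enable */
#define FPSCR_OX 0x10000000UL  /* overflow summary */
#define FPSCR_UE 0x00000020UL  /* underflow enable */
#define FPSCR_UX 0x08000000UL  /* underflow summary */
#define FPSCR_ZE 0x00000010UL  /* zero-divide enable */
#define FPSCR_ZX 0x04000000UL  /* zero-divide summary */
#define FPSCR_XE 0x00000008UL  /* inexact enable */
#define FPSCR_XX 0x02000000UL  /* inexact summary */

/* Illustrative codes only -- the real FPE_* values come from siginfo.h */
enum { FPE_FLTINV = 1, FPE_FLTOVF, FPE_FLTUND, FPE_FLTDIV, FPE_FLTRES };

/* Same shape as __parse_fpscr() in the hunk above: a condition is
 * reported only when both its enable and its status bit are set,
 * invalid operation taking priority, inexact result last. */
static int parse_fpscr(unsigned long fpscr)
{
    if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX))
        return FPE_FLTINV;
    if ((fpscr & FPSCR_OE) && (fpscr & FPSCR_OX))
        return FPE_FLTOVF;
    if ((fpscr & FPSCR_UE) && (fpscr & FPSCR_UX))
        return FPE_FLTUND;
    if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX))
        return FPE_FLTDIV;
    if ((fpscr & FPSCR_XE) && (fpscr & FPSCR_XX))
        return FPE_FLTRES;
    return 0;  /* no enabled exception pending */
}
```

The point of the refactor is exactly this reuse: `parse_fpe()` and the two `do_mathemu()` call sites above now share one priority-ordered decode instead of duplicating it.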
+7
arch/powerpc/kernel/vdso.c
··· 284 284 * pages though 285 285 */ 286 286 vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC; 287 + /* 288 + * Make sure the vDSO gets into every core dump. 289 + * Dumping its contents makes post-mortem fully interpretable later 290 + * without matching up the same kernel and hardware config to see 291 + * what PC values meant. 292 + */ 293 + vma->vm_flags |= VM_ALWAYSDUMP; 287 294 vma->vm_flags |= mm->def_flags; 288 295 vma->vm_page_prot = protection_map[vma->vm_flags & 0x7]; 289 296 vma->vm_ops = &vdso_vmops;
+1 -1
arch/powerpc/lib/Makefile
··· 16 16 strcase.o 17 17 obj-$(CONFIG_QUICC_ENGINE) += rheap.o 18 18 obj-$(CONFIG_XMON) += sstep.o 19 + obj-$(CONFIG_KPROBES) += sstep.o 19 20 obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o 20 21 21 22 ifeq ($(CONFIG_PPC64),y) 22 23 obj-$(CONFIG_SMP) += locks.o 23 - obj-$(CONFIG_DEBUG_KERNEL) += sstep.o 24 24 endif 25 25 26 26 # Temporary hack until we have migrated to asm-powerpc
+1
arch/sparc/kernel/process.c
··· 54 54 * handler when auxio is not present-- unused for now... 55 55 */ 56 56 void (*pm_power_off)(void) = machine_power_off; 57 + EXPORT_SYMBOL(pm_power_off); 57 58 58 59 /* 59 60 * sysctl - toggle power-off restriction for serial console
+4 -4
arch/sparc/kernel/smp.c
··· 292 292 293 293 void __init smp_prepare_cpus(unsigned int max_cpus) 294 294 { 295 - extern void smp4m_boot_cpus(void); 296 - extern void smp4d_boot_cpus(void); 295 + extern void __init smp4m_boot_cpus(void); 296 + extern void __init smp4d_boot_cpus(void); 297 297 int i, cpuid, extra; 298 298 299 299 printk("Entering SMP Mode...\n"); ··· 375 375 376 376 int __cpuinit __cpu_up(unsigned int cpu) 377 377 { 378 - extern int smp4m_boot_one_cpu(int); 379 - extern int smp4d_boot_one_cpu(int); 378 + extern int __cpuinit smp4m_boot_one_cpu(int); 379 + extern int __cpuinit smp4d_boot_one_cpu(int); 380 380 int ret=0; 381 381 382 382 switch(sparc_cpu_model) {
+1 -1
arch/sparc/kernel/sun4d_smp.c
··· 164 164 local_flush_cache_all(); 165 165 } 166 166 167 - int smp4d_boot_one_cpu(int i) 167 + int __cpuinit smp4d_boot_one_cpu(int i) 168 168 { 169 169 extern unsigned long sun4d_cpu_startup; 170 170 unsigned long *entry = &sun4d_cpu_startup;
+2 -2
arch/sparc64/kernel/sun4v_tlb_miss.S
··· 142 142 rdpr %tl, %g1 143 143 cmp %g1, 1 144 144 bgu,pn %xcc, winfix_trampoline 145 - nop 146 - ba,pt %xcc, sparc64_realfault_common 147 145 mov FAULT_CODE_DTLB | FAULT_CODE_WRITE, %g4 146 + ba,pt %xcc, sparc64_realfault_common 147 + nop 148 148 149 149 /* Called from trap table: 150 150 * %g4: vaddr
+25 -25
arch/um/Kconfig.i386
··· 19 19 choice 20 20 prompt "Host memory split" 21 21 default HOST_VMSPLIT_3G 22 - ---help--- 23 - This is needed when the host kernel on which you run has a non-default 24 - (like 2G/2G) memory split, instead of the customary 3G/1G. If you did 25 - not recompile your own kernel but use the default distro's one, you can 26 - safely accept the "Default split" option. 22 + help 23 + This is needed when the host kernel on which you run has a non-default 24 + (like 2G/2G) memory split, instead of the customary 3G/1G. If you did 25 + not recompile your own kernel but use the default distro's one, you can 26 + safely accept the "Default split" option. 27 27 28 - It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via 29 - CONFIG_VM_SPLIT_*, or on previous kernels with special patches (-ck 30 - patchset by Con Kolivas, or other ones) - option names match closely the 31 - host CONFIG_VM_SPLIT_* ones. 28 + It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via 29 + CONFIG_VM_SPLIT_*, or on previous kernels with special patches (-ck 30 + patchset by Con Kolivas, or other ones) - option names match closely the 31 + host CONFIG_VM_SPLIT_* ones. 32 32 33 - A lower setting (where 1G/3G is lowest and 3G/1G is higher) will 34 - tolerate even more "normal" host kernels, but an higher setting will be 35 - stricter. 33 + A lower setting (where 1G/3G is lowest and 3G/1G is higher) will 34 + tolerate even more "normal" host kernels, but an higher setting will be 35 + stricter. 36 36 37 - So, if you do not know what to do here, say 'Default split'. 37 + So, if you do not know what to do here, say 'Default split'. 
38 38 39 - config HOST_VMSPLIT_3G 40 - bool "Default split (3G/1G user/kernel host split)" 41 - config HOST_VMSPLIT_3G_OPT 42 - bool "3G/1G user/kernel host split (for full 1G low memory)" 43 - config HOST_VMSPLIT_2G 44 - bool "2G/2G user/kernel host split" 45 - config HOST_VMSPLIT_1G 46 - bool "1G/3G user/kernel host split" 39 + config HOST_VMSPLIT_3G 40 + bool "Default split (3G/1G user/kernel host split)" 41 + config HOST_VMSPLIT_3G_OPT 42 + bool "3G/1G user/kernel host split (for full 1G low memory)" 43 + config HOST_VMSPLIT_2G 44 + bool "2G/2G user/kernel host split" 45 + config HOST_VMSPLIT_1G 46 + bool "1G/3G user/kernel host split" 47 47 endchoice 48 48 49 49 config TOP_ADDR ··· 67 67 68 68 config STUB_CODE 69 69 hex 70 - default 0xbfffe000 if !HOST_2G_2G 71 - default 0x7fffe000 if HOST_2G_2G 70 + default 0xbfffe000 if !HOST_VMSPLIT_2G 71 + default 0x7fffe000 if HOST_VMSPLIT_2G 72 72 73 73 config STUB_DATA 74 74 hex 75 - default 0xbffff000 if !HOST_2G_2G 76 - default 0x7ffff000 if HOST_2G_2G 75 + default 0xbffff000 if !HOST_VMSPLIT_2G 76 + default 0x7ffff000 if HOST_VMSPLIT_2G 77 77 78 78 config STUB_START 79 79 hex
+2 -1
arch/um/sys-i386/signal.c
··· 219 219 unsigned long save_sp = PT_REGS_SP(regs); 220 220 int err = 0; 221 221 222 - stack_top &= -8UL; 222 + /* This is the same calculation as i386 - ((sp + 4) & 15) == 0 */ 223 + stack_top = ((stack_top + 4) & -16UL) - 4; 223 224 frame = (struct sigframe __user *) stack_top - 1; 224 225 if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) 225 226 return 1;
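The replaced `stack_top &= -8UL` only guaranteed 8-byte alignment; the new expression reproduces what native i386 does, placing the frame so that `(sp + 4)` is 16-byte aligned (i.e. the stack looks as it would right after a CALL into a 16-byte-aligned frame). A minimal sketch of just that arithmetic (the function name is invented for illustration):

```c
#include <assert.h>

/* Same calculation as the um/i386 sigframe change above:
 * choose the highest sp <= stack_top with ((sp + 4) & 15) == 0. */
static unsigned long align_sigframe_sp(unsigned long stack_top)
{
    return ((stack_top + 4) & -16UL) - 4;
}
```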
+3 -2
arch/um/sys-x86_64/signal.c
··· 191 191 struct task_struct *me = current; 192 192 193 193 frame = (struct rt_sigframe __user *) 194 - round_down(stack_top - sizeof(struct rt_sigframe), 16) - 8; 195 - frame = (struct rt_sigframe __user *) ((unsigned long) frame - 128); 194 + round_down(stack_top - sizeof(struct rt_sigframe), 16); 195 + /* Subtract 128 for a red zone and 8 for proper alignment */ 196 + frame = (struct rt_sigframe __user *) ((unsigned long) frame - 128 - 8); 196 197 197 198 if (!access_ok(VERIFY_WRITE, fp, sizeof(struct _fpstate))) 198 199 goto out;
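On x86-64 the extra `- 128 - 8` matters: the ABI reserves a 128-byte red zone below the user stack pointer that the kernel must not scribble on, and the further 8-byte drop leaves the frame at "16-byte aligned minus 8", as after a CALL. A sketch of the placement arithmetic under those assumptions (`place_rt_sigframe` and `ROUND_DOWN` are illustrative names, not kernel API):

```c
#include <assert.h>

/* Round x down to a multiple of a (a must be a power of two). */
#define ROUND_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

/* Mirrors the um/x86_64 change above: skip the 128-byte red zone
 * mandated by the x86-64 ABI, then subtract 8 so the frame address
 * is congruent to 8 mod 16, matching post-CALL stack alignment. */
static unsigned long place_rt_sigframe(unsigned long stack_top,
                                       unsigned long frame_size)
{
    unsigned long frame = ROUND_DOWN(stack_top - frame_size, 16);
    return frame - 128 - 8;
}
```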
-49
arch/x86_64/ia32/ia32_binfmt.c
··· 64 64 #define ELF_NGREG (sizeof (struct user_regs_struct32) / sizeof(elf_greg_t)) 65 65 typedef elf_greg_t elf_gregset_t[ELF_NGREG]; 66 66 67 - /* 68 - * These macros parameterize elf_core_dump in fs/binfmt_elf.c to write out 69 - * extra segments containing the vsyscall DSO contents. Dumping its 70 - * contents makes post-mortem fully interpretable later without matching up 71 - * the same kernel and hardware config to see what PC values meant. 72 - * Dumping its extra ELF program headers includes all the other information 73 - * a debugger needs to easily find how the vsyscall DSO was being used. 74 - */ 75 - #define ELF_CORE_EXTRA_PHDRS (find_vma(current->mm, VSYSCALL32_BASE) ? \ 76 - (VSYSCALL32_EHDR->e_phnum) : 0) 77 - #define ELF_CORE_WRITE_EXTRA_PHDRS \ 78 - do { \ 79 - if (find_vma(current->mm, VSYSCALL32_BASE)) { \ 80 - const struct elf32_phdr *const vsyscall_phdrs = \ 81 - (const struct elf32_phdr *) (VSYSCALL32_BASE \ 82 - + VSYSCALL32_EHDR->e_phoff);\ 83 - int i; \ 84 - Elf32_Off ofs = 0; \ 85 - for (i = 0; i < VSYSCALL32_EHDR->e_phnum; ++i) { \ 86 - struct elf32_phdr phdr = vsyscall_phdrs[i]; \ 87 - if (phdr.p_type == PT_LOAD) { \ 88 - BUG_ON(ofs != 0); \ 89 - ofs = phdr.p_offset = offset; \ 90 - phdr.p_memsz = PAGE_ALIGN(phdr.p_memsz); \ 91 - phdr.p_filesz = phdr.p_memsz; \ 92 - offset += phdr.p_filesz; \ 93 - } \ 94 - else \ 95 - phdr.p_offset += ofs; \ 96 - phdr.p_paddr = 0; /* match other core phdrs */ \ 97 - DUMP_WRITE(&phdr, sizeof(phdr)); \ 98 - } \ 99 - } \ 100 - } while (0) 101 - #define ELF_CORE_WRITE_EXTRA_DATA \ 102 - do { \ 103 - if (find_vma(current->mm, VSYSCALL32_BASE)) { \ 104 - const struct elf32_phdr *const vsyscall_phdrs = \ 105 - (const struct elf32_phdr *) (VSYSCALL32_BASE \ 106 - + VSYSCALL32_EHDR->e_phoff); \ 107 - int i; \ 108 - for (i = 0; i < VSYSCALL32_EHDR->e_phnum; ++i) { \ 109 - if (vsyscall_phdrs[i].p_type == PT_LOAD) \ 110 - DUMP_WRITE((void *) (u64) vsyscall_phdrs[i].p_vaddr,\ 111 - 
PAGE_ALIGN(vsyscall_phdrs[i].p_memsz)); \ 112 - } \ 113 - } \ 114 - } while (0) 115 - 116 67 struct elf_siginfo 117 68 { 118 69 int si_signo; /* signal number */
+15
arch/x86_64/ia32/syscall32.c
··· 59 59 vma->vm_end = VSYSCALL32_END; 60 60 /* MAYWRITE to allow gdb to COW and set breakpoints */ 61 61 vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE; 62 + /* 63 + * Make sure the vDSO gets into every core dump. 64 + * Dumping its contents makes post-mortem fully interpretable later 65 + * without matching up the same kernel and hardware config to see 66 + * what PC values meant. 67 + */ 68 + vma->vm_flags |= VM_ALWAYSDUMP; 62 69 vma->vm_flags |= mm->def_flags; 63 70 vma->vm_page_prot = protection_map[vma->vm_flags & 7]; 64 71 vma->vm_ops = &syscall32_vm_ops; ··· 80 73 mm->total_vm += npages; 81 74 up_write(&mm->mmap_sem); 82 75 return 0; 76 + } 77 + 78 + const char *arch_vma_name(struct vm_area_struct *vma) 79 + { 80 + if (vma->vm_start == VSYSCALL32_BASE && 81 + vma->vm_mm && vma->vm_mm->task_size == IA32_PAGE_OFFSET) 82 + return "[vdso]"; 83 + return NULL; 83 84 } 84 85 85 86 static int __init init_syscall32(void)
-2
arch/x86_64/kernel/nmi.c
··· 302 302 if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE)) 303 303 return 0; 304 304 305 - if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0)) 306 - return 0; /* no lapic support */ 307 305 nmi_watchdog = nmi; 308 306 return 1; 309 307 }
+6 -5
block/elevator.c
··· 590 590 */ 591 591 rq->cmd_flags |= REQ_SOFTBARRIER; 592 592 593 + /* 594 + * Most requeues happen because of a busy condition, 595 + * don't force unplug of the queue for that case. 596 + */ 597 + unplug_it = 0; 598 + 593 599 if (q->ordseq == 0) { 594 600 list_add(&rq->queuelist, &q->queue_head); 595 601 break; ··· 610 604 } 611 605 612 606 list_add_tail(&rq->queuelist, pos); 613 - /* 614 - * most requeues happen because of a busy condition, don't 615 - * force unplug of the queue for that case. 616 - */ 617 - unplug_it = 0; 618 607 break; 619 608 620 609 default:
+3 -2
block/scsi_ioctl.c
··· 223 223 static int sg_io(struct file *file, request_queue_t *q, 224 224 struct gendisk *bd_disk, struct sg_io_hdr *hdr) 225 225 { 226 - unsigned long start_time; 226 + unsigned long start_time, timeout; 227 227 int writing = 0, ret = 0; 228 228 struct request *rq; 229 229 char sense[SCSI_SENSE_BUFFERSIZE]; ··· 271 271 272 272 rq->cmd_type = REQ_TYPE_BLOCK_PC; 273 273 274 - rq->timeout = jiffies_to_msecs(hdr->timeout); 274 + timeout = msecs_to_jiffies(hdr->timeout); 275 + rq->timeout = (timeout < INT_MAX) ? timeout : INT_MAX; 275 276 if (!rq->timeout) 276 277 rq->timeout = q->sg_timeout; 277 278 if (!rq->timeout)
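The sg_io fix above corrects a conversion that ran in the wrong direction (`jiffies_to_msecs` on a value that was already milliseconds) and clamps the result so it fits the block layer's signed timeout arithmetic. A simplified sketch of the corrected path, assuming a fixed tick rate and ignoring the round-up that the real `msecs_to_jiffies()` performs:

```c
#include <assert.h>
#include <limits.h>

#define HZ 250  /* assumed tick rate for this sketch */

/* Simplified ms -> jiffies conversion (no rounding up, unlike the
 * real msecs_to_jiffies()). */
static unsigned long ms_to_jiffies_sketch(unsigned int ms)
{
    return (unsigned long)ms * HZ / 1000;
}

/* Same shape as the patched sg_io(): convert the user-supplied
 * millisecond timeout to jiffies, then clamp to INT_MAX. */
static unsigned long sg_io_timeout(unsigned int hdr_timeout_ms)
{
    unsigned long timeout = ms_to_jiffies_sketch(hdr_timeout_ms);
    return (timeout < INT_MAX) ? timeout : INT_MAX;
}
```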
-4
drivers/acpi/processor_perflib.c
··· 322 322 if (result) 323 323 return result; 324 324 325 - result = acpi_processor_get_platform_limit(pr); 326 - if (result) 327 - return result; 328 - 329 325 return 0; 330 326 } 331 327
-2
drivers/acpi/video.c
··· 1677 1677 struct acpi_video_device *video_device = data; 1678 1678 struct acpi_device *device = NULL; 1679 1679 1680 - 1681 - printk("video device notify\n"); 1682 1680 if (!video_device) 1683 1681 return; 1684 1682
+4
drivers/ata/Kconfig
··· 19 19 20 20 if ATA 21 21 22 + config ATA_NONSTANDARD 23 + bool 24 + default n 25 + 22 26 config SATA_AHCI 23 27 tristate "AHCI SATA support" 24 28 depends on PCI
+64 -39
drivers/ata/ahci.c
··· 75 75 AHCI_CMD_CLR_BUSY = (1 << 10), 76 76 77 77 RX_FIS_D2H_REG = 0x40, /* offset of D2H Register FIS data */ 78 + RX_FIS_SDB = 0x58, /* offset of SDB FIS data */ 78 79 RX_FIS_UNK = 0x60, /* offset of Unknown FIS data */ 79 80 80 81 board_ahci = 0, ··· 203 202 dma_addr_t cmd_tbl_dma; 204 203 void *rx_fis; 205 204 dma_addr_t rx_fis_dma; 205 + /* for NCQ spurious interrupt analysis */ 206 + int ncq_saw_spurious_sdb_cnt; 207 + unsigned int ncq_saw_d2h:1; 208 + unsigned int ncq_saw_dmas:1; 206 209 }; 207 210 208 211 static u32 ahci_scr_read (struct ata_port *ap, unsigned int sc_reg); ··· 366 361 { PCI_VDEVICE(INTEL, 0x27c1), board_ahci }, /* ICH7 */ 367 362 { PCI_VDEVICE(INTEL, 0x27c5), board_ahci }, /* ICH7M */ 368 363 { PCI_VDEVICE(INTEL, 0x27c3), board_ahci }, /* ICH7R */ 369 - { PCI_VDEVICE(AL, 0x5288), board_ahci }, /* ULi M5288 */ 364 + { PCI_VDEVICE(AL, 0x5288), board_ahci_ign_iferr }, /* ULi M5288 */ 370 365 { PCI_VDEVICE(INTEL, 0x2681), board_ahci }, /* ESB2 */ 371 366 { PCI_VDEVICE(INTEL, 0x2682), board_ahci }, /* ESB2 */ 372 367 { PCI_VDEVICE(INTEL, 0x2683), board_ahci }, /* ESB2 */ ··· 591 586 { 592 587 u32 cmd, scontrol; 593 588 589 + if (!(cap & HOST_CAP_SSS)) 590 + return; 591 + 592 + /* put device into listen mode, first set PxSCTL.DET to 0 */ 593 + scontrol = readl(port_mmio + PORT_SCR_CTL); 594 + scontrol &= ~0xf; 595 + writel(scontrol, port_mmio + PORT_SCR_CTL); 596 + 597 + /* then set PxCMD.SUD to 0 */ 594 598 cmd = readl(port_mmio + PORT_CMD) & ~PORT_CMD_ICC_MASK; 595 - 596 - if (cap & HOST_CAP_SSC) { 597 - /* enable transitions to slumber mode */ 598 - scontrol = readl(port_mmio + PORT_SCR_CTL); 599 - if ((scontrol & 0x0f00) > 0x100) { 600 - scontrol &= ~0xf00; 601 - writel(scontrol, port_mmio + PORT_SCR_CTL); 602 - } 603 - 604 - /* put device into slumber mode */ 605 - writel(cmd | PORT_CMD_ICC_SLUMBER, port_mmio + PORT_CMD); 606 - 607 - /* wait for the transition to complete */ 608 - ata_wait_register(port_mmio + PORT_CMD, 
PORT_CMD_ICC_SLUMBER, 609 - PORT_CMD_ICC_SLUMBER, 1, 50); 610 - } 611 - 612 - /* put device into listen mode */ 613 - if (cap & HOST_CAP_SSS) { 614 - /* first set PxSCTL.DET to 0 */ 615 - scontrol = readl(port_mmio + PORT_SCR_CTL); 616 - scontrol &= ~0xf; 617 - writel(scontrol, port_mmio + PORT_SCR_CTL); 618 - 619 - /* then set PxCMD.SUD to 0 */ 620 - cmd &= ~PORT_CMD_SPIN_UP; 621 - writel(cmd, port_mmio + PORT_CMD); 622 - } 599 + cmd &= ~PORT_CMD_SPIN_UP; 600 + writel(cmd, port_mmio + PORT_CMD); 623 601 } 624 602 625 603 static void ahci_init_port(void __iomem *port_mmio, u32 cap, ··· 903 915 904 916 /* clear D2H reception area to properly wait for D2H FIS */ 905 917 ata_tf_init(ap->device, &tf); 906 - tf.command = 0xff; 918 + tf.command = 0x80; 907 919 ata_tf_to_fis(&tf, d2h_fis, 0); 908 920 909 921 rc = sata_std_hardreset(ap, class); ··· 1114 1126 void __iomem *mmio = ap->host->mmio_base; 1115 1127 void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no); 1116 1128 struct ata_eh_info *ehi = &ap->eh_info; 1129 + struct ahci_port_priv *pp = ap->private_data; 1117 1130 u32 status, qc_active; 1118 - int rc; 1131 + int rc, known_irq = 0; 1119 1132 1120 1133 status = readl(port_mmio + PORT_IRQ_STAT); 1121 1134 writel(status, port_mmio + PORT_IRQ_STAT); ··· 1143 1154 1144 1155 /* hmmm... a spurious interupt */ 1145 1156 1146 - /* some devices send D2H reg with I bit set during NCQ command phase */ 1147 - if (ap->sactive && (status & PORT_IRQ_D2H_REG_FIS)) 1157 + /* if !NCQ, ignore. No modern ATA device has broken HSM 1158 + * implementation for non-NCQ commands. 
1159 + */ 1160 + if (!ap->sactive) 1148 1161 return; 1149 1162 1150 - /* ignore interim PIO setup fis interrupts */ 1151 - if (ata_tag_valid(ap->active_tag) && (status & PORT_IRQ_PIOS_FIS)) 1152 - return; 1163 + if (status & PORT_IRQ_D2H_REG_FIS) { 1164 + if (!pp->ncq_saw_d2h) 1165 + ata_port_printk(ap, KERN_INFO, 1166 + "D2H reg with I during NCQ, " 1167 + "this message won't be printed again\n"); 1168 + pp->ncq_saw_d2h = 1; 1169 + known_irq = 1; 1170 + } 1153 1171 1154 - if (ata_ratelimit()) 1172 + if (status & PORT_IRQ_DMAS_FIS) { 1173 + if (!pp->ncq_saw_dmas) 1174 + ata_port_printk(ap, KERN_INFO, 1175 + "DMAS FIS during NCQ, " 1176 + "this message won't be printed again\n"); 1177 + pp->ncq_saw_dmas = 1; 1178 + known_irq = 1; 1179 + } 1180 + 1181 + if (status & PORT_IRQ_SDB_FIS && 1182 + pp->ncq_saw_spurious_sdb_cnt < 10) { 1183 + /* SDB FIS containing spurious completions might be 1184 + * dangerous, we need to know more about them. Print 1185 + * more of it. 1186 + */ 1187 + const u32 *f = pp->rx_fis + RX_FIS_SDB; 1188 + 1189 + ata_port_printk(ap, KERN_INFO, "Spurious SDB FIS during NCQ " 1190 + "issue=0x%x SAct=0x%x FIS=%08x:%08x%s\n", 1191 + readl(port_mmio + PORT_CMD_ISSUE), 1192 + readl(port_mmio + PORT_SCR_ACT), 1193 + le32_to_cpu(f[0]), le32_to_cpu(f[1]), 1194 + pp->ncq_saw_spurious_sdb_cnt < 10 ? 
1195 + "" : ", shutting up"); 1196 + 1197 + pp->ncq_saw_spurious_sdb_cnt++; 1198 + known_irq = 1; 1199 + } 1200 + 1201 + if (!known_irq) 1155 1202 ata_port_printk(ap, KERN_INFO, "spurious interrupt " 1156 - "(irq_stat 0x%x active_tag %d sactive 0x%x)\n", 1203 + "(irq_stat 0x%x active_tag 0x%x sactive 0x%x)\n", 1157 1204 status, ap->active_tag, ap->sactive); 1158 1205 } 1159 1206 ··· 1282 1257 /* clear IRQ */ 1283 1258 tmp = readl(port_mmio + PORT_IRQ_STAT); 1284 1259 writel(tmp, port_mmio + PORT_IRQ_STAT); 1285 - writel(1 << ap->id, mmio + HOST_IRQ_STAT); 1260 + writel(1 << ap->port_no, mmio + HOST_IRQ_STAT); 1286 1261 1287 1262 /* turn IRQ back on */ 1288 1263 writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK);
+4 -2
drivers/ata/ata_generic.c
··· 64 64 /** 65 65 * generic_set_mode - mode setting 66 66 * @ap: interface to set up 67 + * @unused: returned device on error 67 68 * 68 69 * Use a non standard set_mode function. We don't want to be tuned. 69 70 * The BIOS configured everything. Our job is not to fiddle. We ··· 72 71 * and respect them. 73 72 */ 74 73 75 - static void generic_set_mode(struct ata_port *ap) 74 + static int generic_set_mode(struct ata_port *ap, struct ata_device **unused) 76 75 { 77 76 int dma_enabled = 0; 78 77 int i; ··· 83 82 84 83 for (i = 0; i < ATA_MAX_DEVICES; i++) { 85 84 struct ata_device *dev = &ap->device[i]; 86 - if (ata_dev_enabled(dev)) { 85 + if (ata_dev_ready(dev)) { 87 86 /* We don't really care */ 88 87 dev->pio_mode = XFER_PIO_0; 89 88 dev->dma_mode = XFER_MW_DMA_0; ··· 100 99 } 101 100 } 102 101 } 102 + return 0; 103 103 } 104 104 105 105 static struct scsi_host_template generic_sht = {
+4 -13
drivers/ata/libata-core.c
··· 1037 1037 * the PIO timing number for the maximum. Turn it into 1038 1038 * a mask. 1039 1039 */ 1040 - u8 mode = id[ATA_ID_OLD_PIO_MODES] & 0xFF; 1040 + u8 mode = (id[ATA_ID_OLD_PIO_MODES] >> 8) & 0xFF; 1041 1041 if (mode < 5) /* Valid PIO range */ 1042 1042 pio_mask = (2 << mode) - 1; 1043 1043 else ··· 1250 1250 1251 1251 ata_sg_init(qc, sg, n_elem); 1252 1252 qc->nsect = buflen / ATA_SECT_SIZE; 1253 + qc->nbytes = buflen; 1253 1254 } 1254 1255 1255 1256 qc->private_data = &wait; ··· 2432 2431 int i, rc = 0, used_dma = 0, found = 0; 2433 2432 2434 2433 /* has private set_mode? */ 2435 - if (ap->ops->set_mode) { 2436 - /* FIXME: make ->set_mode handle no device case and 2437 - * return error code and failing device on failure. 2438 - */ 2439 - for (i = 0; i < ATA_MAX_DEVICES; i++) { 2440 - if (ata_dev_ready(&ap->device[i])) { 2441 - ap->ops->set_mode(ap); 2442 - break; 2443 - } 2444 - } 2445 - return 0; 2446 - } 2434 + if (ap->ops->set_mode) 2435 + return ap->ops->set_mode(ap, r_failed_dev); 2447 2436 2448 2437 /* step 1: calculate xfer_mask */ 2449 2438 for (i = 0; i < ATA_MAX_DEVICES; i++) {
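The one-character-looking fix in `ata_id_xfermask()` above is a byte-order bug: the legacy maximum PIO mode lives in the *upper* byte of IDENTIFY word 51, and the old code read the lower byte. A sketch of the corrected decode (constant value and fallback behaviour are assumptions for illustration):

```c
#include <assert.h>

#define ATA_ID_OLD_PIO_MODES 51  /* IDENTIFY DEVICE word index */

/* Word 51 keeps the legacy max PIO mode number in its high byte.
 * (2 << mode) - 1 sets bits 0..mode, i.e. "all modes up to and
 * including the advertised maximum". */
static unsigned int old_pio_mask(const unsigned short *id)
{
    unsigned char mode = (id[ATA_ID_OLD_PIO_MODES] >> 8) & 0xFF;

    if (mode < 5)              /* valid PIO range */
        return (2u << mode) - 1;
    return 1;                  /* out of range: assume PIO0 only */
}
```

Reading the low byte instead, as before the fix, almost always yields mode 0 and so collapses every drive to PIO0 in this path.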
+1 -1
drivers/ata/libata-eh.c
··· 1796 1796 *r_failed_dev = dev; 1797 1797 1798 1798 DPRINTK("EXIT\n"); 1799 - return 0; 1799 + return rc; 1800 1800 } 1801 1801 1802 1802 /**
+51 -13
drivers/ata/libata-scsi.c
··· 273 273 { 274 274 int rc = 0; 275 275 u8 scsi_cmd[MAX_COMMAND_SIZE]; 276 - u8 args[7]; 277 - struct scsi_sense_hdr sshdr; 276 + u8 args[7], *sensebuf = NULL; 277 + int cmd_result; 278 278 279 279 if (arg == NULL) 280 280 return -EINVAL; ··· 282 282 if (copy_from_user(args, arg, sizeof(args))) 283 283 return -EFAULT; 284 284 285 + sensebuf = kzalloc(SCSI_SENSE_BUFFERSIZE, GFP_NOIO); 286 + if (!sensebuf) 287 + return -ENOMEM; 288 + 285 289 memset(scsi_cmd, 0, sizeof(scsi_cmd)); 286 290 scsi_cmd[0] = ATA_16; 287 291 scsi_cmd[1] = (3 << 1); /* Non-data */ 288 - /* scsi_cmd[2] is already 0 -- no off.line, cc, or data xfer */ 292 + scsi_cmd[2] = 0x20; /* cc but no off.line or data xfer */ 289 293 scsi_cmd[4] = args[1]; 290 294 scsi_cmd[6] = args[2]; 291 295 scsi_cmd[8] = args[3]; ··· 299 295 300 296 /* Good values for timeout and retries? Values below 301 297 from scsi_ioctl_send_command() for default case... */ 302 - if (scsi_execute_req(scsidev, scsi_cmd, DMA_NONE, NULL, 0, &sshdr, 303 - (10*HZ), 5)) 304 - rc = -EIO; 298 + cmd_result = scsi_execute(scsidev, scsi_cmd, DMA_NONE, NULL, 0, 299 + sensebuf, (10*HZ), 5, 0); 305 300 306 - /* Need code to retrieve data from check condition? */ 301 + if (driver_byte(cmd_result) == DRIVER_SENSE) {/* sense data available */ 302 + u8 *desc = sensebuf + 8; 303 + cmd_result &= ~(0xFF<<24); /* DRIVER_SENSE is not an error */ 304 + 305 + /* If we set cc then ATA pass-through will cause a 306 + * check condition even if no error. Filter that. 
*/ 307 + if (cmd_result & SAM_STAT_CHECK_CONDITION) { 308 + struct scsi_sense_hdr sshdr; 309 + scsi_normalize_sense(sensebuf, SCSI_SENSE_BUFFERSIZE, 310 + &sshdr); 311 + if (sshdr.sense_key==0 && 312 + sshdr.asc==0 && sshdr.ascq==0) 313 + cmd_result &= ~SAM_STAT_CHECK_CONDITION; 314 + } 315 + 316 + /* Send userspace ATA registers */ 317 + if (sensebuf[0] == 0x72 && /* format is "descriptor" */ 318 + desc[0] == 0x09) {/* code is "ATA Descriptor" */ 319 + args[0] = desc[13]; /* status */ 320 + args[1] = desc[3]; /* error */ 321 + args[2] = desc[5]; /* sector count (0:7) */ 322 + args[3] = desc[7]; /* lbal */ 323 + args[4] = desc[9]; /* lbam */ 324 + args[5] = desc[11]; /* lbah */ 325 + args[6] = desc[12]; /* select */ 326 + if (copy_to_user(arg, args, sizeof(args))) 327 + rc = -EFAULT; 328 + } 329 + } 330 + 331 + if (cmd_result) { 332 + rc = -EIO; 333 + goto error; 334 + } 335 + 336 + error: 337 + kfree(sensebuf); 307 338 return rc; 308 339 } 309 340 ··· 411 372 if (cmd->use_sg) { 412 373 qc->__sg = (struct scatterlist *) cmd->request_buffer; 413 374 qc->n_elem = cmd->use_sg; 414 - } else { 375 + } else if (cmd->request_bufflen) { 415 376 qc->__sg = &qc->sgent; 416 377 qc->n_elem = 1; 417 378 } ··· 1022 983 } 1023 984 1024 985 tf->command = ATA_CMD_VERIFY; /* READ VERIFY */ 1025 - } else { 1026 - tf->nsect = 0; /* time period value (0 implies now) */ 1027 - tf->command = ATA_CMD_STANDBY; 1028 - /* Consider: ATA STANDBY IMMEDIATE command */ 1029 - } 986 + } else 987 + /* Issue ATA STANDBY IMMEDIATE command */ 988 + tf->command = ATA_CMD_STANDBYNOW1; 989 + 1030 990 /* 1031 991 * Standby and Idle condition timers could be implemented but that 1032 992 * would require libata to implement the Power condition mode page
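The sense-buffer handling added above relies on descriptor-format sense data (response code 0x72) carrying an ATA Status Return descriptor (code 0x09) at offset 8, from which the taskfile registers are copied back to userspace at the byte offsets the patch reads (`desc[13]` = status, `desc[3]` = error, and so on). A sketch of that extraction; the struct and function names are invented here, only the offsets come from the hunk:

```c
#include <assert.h>

/* Holds the ATA register values recovered from the sense data. */
struct ata_tf_result {
    unsigned char status, error, nsect, lbal, lbam, lbah, device;
};

/* Pull the ATA Status Return descriptor out of descriptor-format
 * sense data, using the same offsets as the libata-scsi hunk above.
 * Returns 0 on success, -1 if the buffer is not in that format. */
static int extract_ata_result(const unsigned char *sense,
                              struct ata_tf_result *res)
{
    const unsigned char *desc = sense + 8;

    if (sense[0] != 0x72 || desc[0] != 0x09)
        return -1;  /* not descriptor format / not an ATA descriptor */

    res->status = desc[13];
    res->error  = desc[3];
    res->nsect  = desc[5];
    res->lbal   = desc[7];
    res->lbam   = desc[9];
    res->lbah   = desc[11];
    res->device = desc[12];
    return 0;
}
```

This is also why the patch sets `scsi_cmd[2] = 0x20` (the "cc" bit): without requesting a check condition, the device never returns the descriptor these registers are read from.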
+13 -8
drivers/ata/libata-sff.c
··· 827 827 */ 828 828 void ata_bmdma_post_internal_cmd(struct ata_queued_cmd *qc) 829 829 { 830 - ata_bmdma_stop(qc); 830 + if (qc->ap->ioaddr.bmdma_addr) 831 + ata_bmdma_stop(qc); 831 832 } 832 833 833 834 #ifdef CONFIG_PCI ··· 871 870 pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS; 872 871 bmdma = pci_resource_start(pdev, 4); 873 872 if (bmdma) { 874 - if (inb(bmdma + 2) & 0x80) 873 + if ((!(port[p]->flags & ATA_FLAG_IGN_SIMPLEX)) && 874 + (inb(bmdma + 2) & 0x80)) 875 875 probe_ent->_host_flags |= ATA_HOST_SIMPLEX; 876 876 probe_ent->port[p].bmdma_addr = bmdma; 877 877 } ··· 888 886 bmdma = pci_resource_start(pdev, 4); 889 887 if (bmdma) { 890 888 bmdma += 8; 891 - if(inb(bmdma + 2) & 0x80) 889 + if ((!(port[p]->flags & ATA_FLAG_IGN_SIMPLEX)) && 890 + (inb(bmdma + 2) & 0x80)) 892 891 probe_ent->_host_flags |= ATA_HOST_SIMPLEX; 893 892 probe_ent->port[p].bmdma_addr = bmdma; 894 893 } ··· 917 914 probe_ent->irq_flags = IRQF_SHARED; 918 915 919 916 if (port_mask & ATA_PORT_PRIMARY) { 920 - probe_ent->irq = ATA_PRIMARY_IRQ; 917 + probe_ent->irq = ATA_PRIMARY_IRQ(pdev); 921 918 probe_ent->port[0].cmd_addr = ATA_PRIMARY_CMD; 922 919 probe_ent->port[0].altstatus_addr = 923 920 probe_ent->port[0].ctl_addr = ATA_PRIMARY_CTL; 924 921 if (bmdma) { 925 922 probe_ent->port[0].bmdma_addr = bmdma; 926 - if (inb(bmdma + 2) & 0x80) 923 + if ((!(port[0]->flags & ATA_FLAG_IGN_SIMPLEX)) && 924 + (inb(bmdma + 2) & 0x80)) 927 925 probe_ent->_host_flags |= ATA_HOST_SIMPLEX; 928 926 } 929 927 ata_std_ports(&probe_ent->port[0]); ··· 933 929 934 930 if (port_mask & ATA_PORT_SECONDARY) { 935 931 if (probe_ent->irq) 936 - probe_ent->irq2 = ATA_SECONDARY_IRQ; 932 + probe_ent->irq2 = ATA_SECONDARY_IRQ(pdev); 937 933 else 938 - probe_ent->irq = ATA_SECONDARY_IRQ; 934 + probe_ent->irq = ATA_SECONDARY_IRQ(pdev); 939 935 probe_ent->port[1].cmd_addr = ATA_SECONDARY_CMD; 940 936 probe_ent->port[1].altstatus_addr = 941 937 probe_ent->port[1].ctl_addr = ATA_SECONDARY_CTL; 942 938 if (bmdma) { 943 939 
probe_ent->port[1].bmdma_addr = bmdma + 8; 944 - if (inb(bmdma + 10) & 0x80) 940 + if ((!(port[1]->flags & ATA_FLAG_IGN_SIMPLEX)) && 941 + (inb(bmdma + 10) & 0x80)) 945 942 probe_ent->_host_flags |= ATA_HOST_SIMPLEX; 946 943 } 947 944 ata_std_ports(&probe_ent->port[1]);
+9 -2
drivers/ata/pata_atiixp.c
··· 36 36 static int atiixp_pre_reset(struct ata_port *ap) 37 37 { 38 38 struct pci_dev *pdev = to_pci_dev(ap->host->dev); 39 - static struct pci_bits atiixp_enable_bits[] = { 39 + static const struct pci_bits atiixp_enable_bits[] = { 40 40 { 0x48, 1, 0x01, 0x00 }, 41 41 { 0x48, 1, 0x08, 0x00 } 42 42 }; 43 + u8 udma; 43 44 44 45 if (!pci_test_config_bits(pdev, &atiixp_enable_bits[ap->port_no])) 45 46 return -ENOENT; 46 47 47 - ap->cbl = ATA_CBL_PATA80; 48 + /* Hack from drivers/ide/pci. Really we want to know how to do the 49 + raw detection not play follow the bios mode guess */ 50 + pci_read_config_byte(pdev, ATIIXP_IDE_UDMA_MODE + ap->port_no, &udma); 51 + if ((udma & 0x07) >= 0x04 || (udma & 0x70) >= 0x40) 52 + ap->cbl = ATA_CBL_PATA80; 53 + else 54 + ap->cbl = ATA_CBL_PATA40; 48 55 return ata_std_prereset(ap); 49 56 } 50 57
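The atiixp change above replaces the unconditional 80-wire assumption with the drivers/ide heuristic the comment mentions: if the BIOS left either device on the channel in UDMA mode 4 or higher (which only works over an 80-wire cable), trust that and report 80-wire, otherwise fall back to 40-wire. A sketch of just the register test, assuming one UDMA-mode nibble per device as read from `ATIIXP_IDE_UDMA_MODE + port_no`:

```c
#include <assert.h>

enum cable { CBL_PATA40, CBL_PATA80 };  /* illustrative names */

/* Same test as the patched atiixp_pre_reset(): low nibble is
 * device 0's UDMA mode, high nibble is device 1's. */
static enum cable atiixp_guess_cable(unsigned char udma)
{
    if ((udma & 0x07) >= 0x04 || (udma & 0x70) >= 0x40)
        return CBL_PATA80;   /* someone is in UDMA4+: needs 80 wires */
    return CBL_PATA40;
}
```

As the hunk's own comment concedes, this follows the BIOS's mode guess rather than doing raw cable detection.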
+16 -7
drivers/ata/pata_cmd64x.c
··· 197 197 static void cmd64x_set_dmamode(struct ata_port *ap, struct ata_device *adev) 198 198 { 199 199 static const u8 udma_data[] = { 200 - 0x31, 0x21, 0x11, 0x25, 0x15, 0x05 200 + 0x30, 0x20, 0x10, 0x20, 0x10, 0x00 201 201 }; 202 202 static const u8 mwdma_data[] = { 203 203 0x30, 0x20, 0x10 ··· 213 213 pci_read_config_byte(pdev, pciD, &regD); 214 214 pci_read_config_byte(pdev, pciU, &regU); 215 215 216 - regD &= ~(0x20 << shift); 217 - regU &= ~(0x35 << shift); 216 + /* DMA bits off */ 217 + regD &= ~(0x20 << adev->devno); 218 + /* DMA control bits */ 219 + regU &= ~(0x30 << shift); 220 + /* DMA timing bits */ 221 + regU &= ~(0x05 << adev->devno); 218 222 219 - if (adev->dma_mode >= XFER_UDMA_0) 223 + if (adev->dma_mode >= XFER_UDMA_0) { 224 + /* Merge the timing value */ 220 225 regU |= udma_data[adev->dma_mode - XFER_UDMA_0] << shift; 221 226 + /* Merge the control bits */ 227 + regU |= 1 << adev->devno; /* UDMA on */ 228 + if (adev->dma_mode > 2) /* 15nS timing */ 229 + regU |= 4 << adev->devno; 230 + } else 222 231 regD |= mwdma_data[adev->dma_mode - XFER_MW_DMA_0] << shift; 223 232 224 233 regD |= 0x20 << adev->devno; ··· 248 239 struct ata_port *ap = qc->ap; 249 240 struct pci_dev *pdev = to_pci_dev(ap->host->dev); 250 241 u8 dma_intr; 251 - int dma_reg = ap->port_no ? ARTTIM23_INTR_CH1 : CFR_INTR_CH0; 252 - int dma_mask = ap->port_no ? ARTTIM2 : CFR; 242 + int dma_mask = ap->port_no ? ARTTIM23_INTR_CH1 : CFR_INTR_CH0; 243 + int dma_reg = ap->port_no ? ARTTIM2 : CFR; 253 244 254 245 ata_bmdma_stop(qc); 255 246
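The cmd64x hunk above splits the old combined mask into separate clears for the per-device UDMA control bits and the per-channel timing field, then merges the new mode back in. A reconstruction of the UDMA side for one channel; the `shift` parameter is assumed to be the channel's timing-field offset (presumably `devno * 2`), and all register semantics here are read off the patch, not the datasheet:

```c
#include <assert.h>

/* Rebuild the UDMA control/timing register value the way the patched
 * cmd64x_set_dmamode() does: bit devno enables UDMA, bit (devno + 2)
 * selects the faster (15 ns) clock for UDMA3+, and udma_data supplies
 * the two timing bits placed at "shift". */
static unsigned char cmd64x_udma_regU(unsigned char regU, int devno,
                                      int shift, int udma_mode)
{
    static const unsigned char udma_data[] = {
        0x30, 0x20, 0x10, 0x20, 0x10, 0x00
    };

    regU &= ~(0x30 << shift);   /* clear this device's timing field */
    regU &= ~(0x05 << devno);   /* clear enable + fast-clock bits */
    regU |= udma_data[udma_mode] << shift;
    regU |= 1 << devno;         /* UDMA on */
    if (udma_mode > 2)          /* UDMA3+ runs off the 15 ns clock */
        regU |= 4 << devno;
    return regU;
}
```

The same hunk also fixes the swapped `dma_reg`/`dma_mask` assignments in the interrupt-clear path, which had the register index and the status register exchanged.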
+3 -3
drivers/ata/pata_hpt3x2n.c
··· 25 25 #include <linux/libata.h> 26 26 27 27 #define DRV_NAME "pata_hpt3x2n" 28 - #define DRV_VERSION "0.3" 28 + #define DRV_VERSION "0.3.2" 29 29 30 30 enum { 31 31 HPT_PCI_FAST = (1 << 31), ··· 297 297 return 0; 298 298 } 299 299 300 - static int hpt3x2n_use_dpll(struct ata_port *ap, int reading) 300 + static int hpt3x2n_use_dpll(struct ata_port *ap, int writing) 301 301 { 302 302 long flags = (long)ap->host->private_data; 303 303 /* See if we should use the DPLL */ 304 - if (reading == 0) 304 + if (writing) 305 305 return USE_DPLL; /* Needed for write */ 306 306 if (flags & PCI66) 307 307 return USE_DPLL; /* Needed at 66Mhz */
+3 -1
drivers/ata/pata_it821x.c
··· 476 476 /** 477 477 * it821x_smart_set_mode - mode setting 478 478 * @ap: interface to set up 479 + * @unused: device that failed (error only) 479 480 * 480 481 * Use a non standard set_mode function. We don't want to be tuned. 481 482 * The BIOS configured everything. Our job is not to fiddle. We ··· 484 483 * and respect them. 485 484 */ 486 485 487 - static void it821x_smart_set_mode(struct ata_port *ap) 486 + static int it821x_smart_set_mode(struct ata_port *ap, struct ata_device **unused) 488 487 { 489 488 int dma_enabled = 0; 490 489 int i; ··· 513 512 } 514 513 } 515 514 } 515 + return 0; 516 516 } 517 517 518 518 /**
+3 -2
drivers/ata/pata_ixp4xx_cf.c
··· 23 23 #include <scsi/scsi_host.h> 24 24 25 25 #define DRV_NAME "pata_ixp4xx_cf" 26 - #define DRV_VERSION "0.1.1" 26 + #define DRV_VERSION "0.1.1ac1" 27 27 28 - static void ixp4xx_set_mode(struct ata_port *ap) 28 + static int ixp4xx_set_mode(struct ata_port *ap, struct ata_device *adev) 29 29 { 30 30 int i; 31 31 ··· 38 38 dev->flags |= ATA_DFLAG_PIO; 39 39 } 40 40 } 41 + return 0; 41 42 } 42 43 43 44 static void ixp4xx_phy_reset(struct ata_port *ap)
+5 -13
drivers/ata/pata_jmicron.c
··· 204 204 205 205 u32 reg; 206 206 207 - if (id->driver_data != 368) { 208 - /* Put the controller into AHCI mode in case the AHCI driver 209 - has not yet been loaded. This can be done with either 210 - function present */ 207 + /* PATA controller is fn 1, AHCI is fn 0 */ 208 + if (id->driver_data != 368 && PCI_FUNC(pdev->devfn) != 1) 209 + return -ENODEV; 211 210 212 - /* FIXME: We may want a way to override this in future */ 213 - pci_write_config_byte(pdev, 0x41, 0xa1); 214 - 215 - /* PATA controller is fn 1, AHCI is fn 0 */ 216 - if (PCI_FUNC(pdev->devfn) != 1) 217 - return -ENODEV; 218 - } 219 - if ( id->driver_data == 365 || id->driver_data == 366) { 220 - /* The 365/66 have two PATA channels, redirect the second */ 211 + /* The 365/66 have two PATA channels, redirect the second */ 212 + if (id->driver_data == 365 || id->driver_data == 366) { 221 213 pci_read_config_dword(pdev, 0x80, &reg); 222 214 reg |= (1 << 24); /* IDE1 to PATA IDE secondary */ 223 215 pci_write_config_dword(pdev, 0x80, reg);
+3 -1
drivers/ata/pata_legacy.c
··· 96 96 /** 97 97 * legacy_set_mode - mode setting 98 98 * @ap: IDE interface 99 + * @unused: Device that failed when error is returned 99 100 * 100 101 * Use a non standard set_mode function. We don't want to be tuned. 101 102 * ··· 106 105 * expand on this as per hdparm in the base kernel. 107 106 */ 108 107 109 - static void legacy_set_mode(struct ata_port *ap) 108 + static int legacy_set_mode(struct ata_port *ap, struct ata_device **unused) 110 109 { 111 110 int i; 112 111 ··· 119 118 dev->flags |= ATA_DFLAG_PIO; 120 119 } 121 120 } 121 + return 0; 122 122 } 123 123 124 124 static struct scsi_host_template legacy_sht = {
+2 -1
drivers/ata/pata_platform.c
··· 30 30 * Provide our own set_mode() as we don't want to change anything that has 31 31 * already been configured.. 32 32 */ 33 - static void pata_platform_set_mode(struct ata_port *ap) 33 + static int pata_platform_set_mode(struct ata_port *ap, struct ata_device **unused) 34 34 { 35 35 int i; 36 36 ··· 44 44 dev->flags |= ATA_DFLAG_PIO; 45 45 } 46 46 } 47 + return 0; 47 48 } 48 49 49 50 static void pata_platform_host_stop(struct ata_host *host)
+4 -2
drivers/ata/pata_rz1000.c
··· 52 52 /** 53 53 * rz1000_set_mode - mode setting function 54 54 * @ap: ATA interface 55 + * @unused: returned device on set_mode failure 55 56 * 56 57 * Use a non standard set_mode function. We don't want to be tuned. We 57 58 * would prefer to be BIOS generic but for the fact our hardware is 58 59 * whacked out. 59 60 */ 60 61 61 - static void rz1000_set_mode(struct ata_port *ap) 62 + static int rz1000_set_mode(struct ata_port *ap, struct ata_device **unused) 62 63 { 63 64 int i; 64 65 65 66 for (i = 0; i < ATA_MAX_DEVICES; i++) { 66 67 struct ata_device *dev = &ap->device[i]; 67 - if (ata_dev_enabled(dev)) { 68 + if (ata_dev_ready(dev)) { 68 69 /* We don't really care */ 69 70 dev->pio_mode = XFER_PIO_0; 70 71 dev->xfer_mode = XFER_PIO_0; ··· 73 72 dev->flags |= ATA_DFLAG_PIO; 74 73 } 75 74 } 75 + return 0; 76 76 } 77 77 78 78
+1 -1
drivers/ata/pata_sil680.c
··· 135 135 static void sil680_set_piomode(struct ata_port *ap, struct ata_device *adev) 136 136 { 137 137 static u16 speed_p[5] = { 0x328A, 0x2283, 0x1104, 0x10C3, 0x10C1 }; 138 - static u16 speed_t[5] = { 0x328A, 0x1281, 0x1281, 0x10C3, 0x10C1 }; 138 + static u16 speed_t[5] = { 0x328A, 0x2283, 0x1281, 0x10C3, 0x10C1 }; 139 139 140 140 unsigned long tfaddr = sil680_selreg(ap, 0x02); 141 141 unsigned long addr = sil680_seldev(ap, adev, 0x04);
+3 -1
drivers/ata/pata_via.c
··· 23 23 * VIA VT8233c - UDMA100 24 24 * VIA VT8235 - UDMA133 25 25 * VIA VT8237 - UDMA133 26 + * VIA VT8237S - UDMA133 26 27 * VIA VT8251 - UDMA133 27 28 * 28 29 * Most registers remain compatible across chips. Others start reserved ··· 62 61 #include <linux/libata.h> 63 62 64 63 #define DRV_NAME "pata_via" 65 - #define DRV_VERSION "0.2.0" 64 + #define DRV_VERSION "0.2.1" 66 65 67 66 /* 68 67 * The following comes directly from Vojtech Pavlik's ide/pci/via82cxxx ··· 96 95 u8 rev_max; 97 96 u16 flags; 98 97 } via_isa_bridges[] = { 98 + { "vt8237s", PCI_DEVICE_ID_VIA_8237S, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 99 99 { "vt8251", PCI_DEVICE_ID_VIA_8251, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 100 100 { "cx700", PCI_DEVICE_ID_VIA_CX700, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 101 101 { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST | VIA_NO_ENABLES},
+6 -8
drivers/ata/sata_nv.c
··· 700 700 static int nv_host_intr(struct ata_port *ap, u8 irq_stat) 701 701 { 702 702 struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); 703 - int handled; 704 703 705 704 /* freeze if hotplugged */ 706 705 if (unlikely(irq_stat & (NV_INT_ADDED | NV_INT_REMOVED))) { ··· 718 719 } 719 720 720 721 /* handle interrupt */ 721 - handled = ata_host_intr(ap, qc); 722 - if (unlikely(!handled)) { 723 - /* spurious, clear it */ 724 - ata_check_status(ap); 725 - } 726 - 727 - return 1; 722 + return ata_host_intr(ap, qc); 728 723 } 729 724 730 725 static irqreturn_t nv_adma_interrupt(int irq, void *dev_instance) ··· 745 752 if (pp->flags & NV_ADMA_PORT_REGISTER_MODE) { 746 753 u8 irq_stat = readb(host->mmio_base + NV_INT_STATUS_CK804) 747 754 >> (NV_INT_PORT_SHIFT * i); 755 + if (ata_tag_valid(ap->active_tag)) 756 + /* NV_INT_DEV indication seems unreliable at times, 757 at least in ADMA mode. Force it on always when a 758 command is active, to prevent losing interrupts. */ 759 irq_stat |= NV_INT_DEV; 748 760 handled += nv_host_intr(ap, irq_stat); 749 761 continue; 750 762 }
+2 -1
drivers/ata/sata_uli.c
··· 128 128 129 129 static struct ata_port_info uli_port_info = { 130 130 .sht = &uli_sht, 131 - .flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 131 + .flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 132 + ATA_FLAG_IGN_SIMPLEX, 132 133 .pio_mask = 0x1f, /* pio0-4 */ 133 134 .udma_mask = 0x7f, /* udma0-6 */ 134 135 .port_ops = &uli_ops,
+11 -1
drivers/ata/sata_via.c
··· 74 74 static int svia_init_one (struct pci_dev *pdev, const struct pci_device_id *ent); 75 75 static u32 svia_scr_read (struct ata_port *ap, unsigned int sc_reg); 76 76 static void svia_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val); 77 + static void svia_noop_freeze(struct ata_port *ap); 77 78 static void vt6420_error_handler(struct ata_port *ap); 78 79 79 80 static const struct pci_device_id svia_pci_tbl[] = { ··· 129 128 .qc_issue = ata_qc_issue_prot, 130 129 .data_xfer = ata_pio_data_xfer, 131 130 132 - .freeze = ata_bmdma_freeze, 131 + .freeze = svia_noop_freeze, 133 132 .thaw = ata_bmdma_thaw, 134 133 .error_handler = vt6420_error_handler, 135 134 .post_internal_cmd = ata_bmdma_post_internal_cmd, ··· 203 202 if (sc_reg > SCR_CONTROL) 204 203 return; 205 204 outl(val, ap->ioaddr.scr_addr + (4 * sc_reg)); 205 + } 206 + 207 + static void svia_noop_freeze(struct ata_port *ap) 208 + { 209 + /* Some VIA controllers choke if ATA_NIEN is manipulated in 210 + * a certain way. Leave it alone and just clear pending IRQ. 211 + */ 212 + ata_chk_status(ap); 213 + ata_bmdma_irq_clear(ap); 206 214 } 207 215 208 216 /**
+1 -1
drivers/atm/horizon.c
··· 1845 1845 1846 1846 /********** initialise a card **********/ 1847 1847 1848 - static int __init hrz_init (hrz_dev * dev) { 1848 + static int __devinit hrz_init (hrz_dev * dev) { 1849 1849 int onefivefive; 1850 1850 1851 1851 u16 chan;
+5
drivers/char/agp/amd-k7-agp.c
··· 101 101 for (i = 0; i < nr_tables; i++) { 102 102 entry = kzalloc(sizeof(struct amd_page_map), GFP_KERNEL); 103 103 if (entry == NULL) { 104 + while (i > 0) { 105 + kfree(tables[i-1]); 106 + i--; 107 + } 108 + kfree(tables); 104 109 retval = -ENOMEM; 105 110 break; 106 111 }
+1 -1
drivers/char/agp/amd64-agp.c
··· 655 655 .class = (PCI_CLASS_BRIDGE_HOST << 8), 656 656 .class_mask = ~0, 657 657 .vendor = PCI_VENDOR_ID_VIA, 658 - .device = PCI_DEVICE_ID_VIA_K8M890CE, 658 + .device = PCI_DEVICE_ID_VIA_VT3336, 659 659 .subvendor = PCI_ANY_ID, 660 660 .subdevice = PCI_ANY_ID, 661 661 },
+18 -18
drivers/char/agp/ati-agp.c
··· 41 41 }; 42 42 43 43 44 - typedef struct _ati_page_map { 44 + struct ati_page_map { 45 45 unsigned long *real; 46 46 unsigned long __iomem *remapped; 47 - } ati_page_map; 47 + }; 48 48 49 49 static struct _ati_generic_private { 50 50 volatile u8 __iomem *registers; 51 - ati_page_map **gatt_pages; 51 + struct ati_page_map **gatt_pages; 52 52 int num_tables; 53 53 } ati_generic_private; 54 54 55 - static int ati_create_page_map(ati_page_map *page_map) 55 + static int ati_create_page_map(struct ati_page_map *page_map) 56 56 { 57 57 int i, err = 0; 58 58 ··· 82 82 } 83 83 84 84 85 - static void ati_free_page_map(ati_page_map *page_map) 85 + static void ati_free_page_map(struct ati_page_map *page_map) 86 86 { 87 87 unmap_page_from_agp(virt_to_page(page_map->real)); 88 88 iounmap(page_map->remapped); ··· 94 94 static void ati_free_gatt_pages(void) 95 95 { 96 96 int i; 97 - ati_page_map **tables; 98 - ati_page_map *entry; 97 + struct ati_page_map **tables; 98 + struct ati_page_map *entry; 99 99 100 100 tables = ati_generic_private.gatt_pages; 101 101 for (i = 0; i < ati_generic_private.num_tables; i++) { ··· 112 112 113 113 static int ati_create_gatt_pages(int nr_tables) 114 114 { 115 - ati_page_map **tables; 116 - ati_page_map *entry; 115 + struct ati_page_map **tables; 116 + struct ati_page_map *entry; 117 117 int retval = 0; 118 118 int i; 119 119 120 - tables = kzalloc((nr_tables + 1) * sizeof(ati_page_map *),GFP_KERNEL); 120 + tables = kzalloc((nr_tables + 1) * sizeof(struct ati_page_map *),GFP_KERNEL); 121 121 if (tables == NULL) 122 122 return -ENOMEM; 123 123 124 124 for (i = 0; i < nr_tables; i++) { 125 - entry = kzalloc(sizeof(ati_page_map), GFP_KERNEL); 125 + entry = kzalloc(sizeof(struct ati_page_map), GFP_KERNEL); 126 126 if (entry == NULL) { 127 - while (i>0) { 128 - kfree (tables[i-1]); 127 + while (i > 0) { 128 + kfree(tables[i-1]); 129 129 i--; 130 130 } 131 - kfree (tables); 132 - tables = NULL; 131 + kfree(tables); 133 132 retval = -ENOMEM; 134 133 
break; 135 134 } 136 135 tables[i] = entry; 137 136 retval = ati_create_page_map(entry); 138 - if (retval != 0) break; 137 + if (retval != 0) 138 + break; 139 139 } 140 140 ati_generic_private.num_tables = nr_tables; 141 141 ati_generic_private.gatt_pages = tables; ··· 340 340 static int ati_create_gatt_table(struct agp_bridge_data *bridge) 341 341 { 342 342 struct aper_size_info_lvl2 *value; 343 - ati_page_map page_dir; 343 + struct ati_page_map page_dir; 344 344 unsigned long addr; 345 345 int retval; 346 346 u32 temp; ··· 400 400 401 401 static int ati_free_gatt_table(struct agp_bridge_data *bridge) 402 402 { 403 - ati_page_map page_dir; 403 + struct ati_page_map page_dir; 404 404 405 405 page_dir.real = (unsigned long *)agp_bridge->gatt_table_real; 406 406 page_dir.remapped = (unsigned long __iomem *)agp_bridge->gatt_table;
+9
drivers/char/agp/intel-agp.c
··· 1955 1955 1956 1956 pci_restore_state(pdev); 1957 1957 1958 + /* We should restore our graphics device's config space, 1959 + * as host bridge (00:00) resumes before graphics device (02:00), 1960 + * then our access to its pci space can work right. 1961 + */ 1962 + if (intel_i810_private.i810_dev) 1963 + pci_restore_state(intel_i810_private.i810_dev); 1964 + if (intel_i830_private.i830_dev) 1965 + pci_restore_state(intel_i830_private.i830_dev); 1966 + 1958 1967 if (bridge->driver == &intel_generic_driver) 1959 1968 intel_configure(); 1960 1969 else if (bridge->driver == &intel_850_driver)
+19 -2
drivers/char/agp/via-agp.c
··· 380 380 /* P4M800CE */ 381 381 { 382 382 .device_id = PCI_DEVICE_ID_VIA_P4M800CE, 383 - .chipset_name = "P4M800CE", 383 + .chipset_name = "VT3314", 384 384 }, 385 - 385 + /* CX700 */ 386 + { 387 + .device_id = PCI_DEVICE_ID_VIA_CX700, 388 + .chipset_name = "CX700", 389 + }, 390 + /* VT3336 */ 391 + { 392 + .device_id = PCI_DEVICE_ID_VIA_VT3336, 393 + .chipset_name = "VT3336", 394 + }, 395 + /* P4M890 */ 396 + { 397 + .device_id = PCI_DEVICE_ID_VIA_P4M890, 398 + .chipset_name = "P4M890", 399 + }, 386 400 { }, /* dummy final entry, always present */ 387 401 }; 388 402 ··· 538 524 ID(PCI_DEVICE_ID_VIA_83_87XX_1), 539 525 ID(PCI_DEVICE_ID_VIA_3296_0), 540 526 ID(PCI_DEVICE_ID_VIA_P4M800CE), 527 + ID(PCI_DEVICE_ID_VIA_CX700), 528 + ID(PCI_DEVICE_ID_VIA_VT3336), 529 + ID(PCI_DEVICE_ID_VIA_P4M890), 541 530 { } 542 531 }; 543 532
+1 -2
drivers/char/ipmi/ipmi_msghandler.c
··· 3649 3649 unsigned long flags; 3650 3650 int i; 3651 3651 3652 - INIT_LIST_HEAD(&timeouts); 3653 - 3654 3652 rcu_read_lock(); 3655 3653 list_for_each_entry_rcu(intf, &ipmi_interfaces, link) { 3656 3654 /* See if any waiting messages need to be processed. */ ··· 3669 3671 /* Go through the seq table and find any messages that 3670 3672 have timed out, putting them in the timeouts 3671 3673 list. */ 3674 + INIT_LIST_HEAD(&timeouts); 3672 3675 spin_lock_irqsave(&intf->seq_lock, flags); 3673 3676 for (i = 0; i < IPMI_IPMB_NUM_SEQ; i++) 3674 3677 check_msg_timeout(intf, &(intf->seq_table[i]),
+11 -9
drivers/char/sysrq.c
··· 215 215 } 216 216 static struct sysrq_key_op sysrq_showstate_blocked_op = { 217 217 .handler = sysrq_handle_showstate_blocked, 218 - .help_msg = "showBlockedTasks", 218 + .help_msg = "shoW-blocked-tasks", 219 219 .action_msg = "Show Blocked State", 220 220 .enable_mask = SYSRQ_ENABLE_DUMP, 221 221 }; ··· 315 315 &sysrq_loglevel_op, /* 9 */ 316 316 317 317 /* 318 - * Don't use for system provided sysrqs, it is handled specially on 319 - * sparc and will never arrive 318 + * a: Don't use for system provided sysrqs, it is handled specially on 319 + * sparc and will never arrive. 320 320 */ 321 321 NULL, /* a */ 322 322 &sysrq_reboot_op, /* b */ 323 - &sysrq_crashdump_op, /* c */ 323 + &sysrq_crashdump_op, /* c & ibm_emac driver debug */ 324 324 &sysrq_showlocks_op, /* d */ 325 325 &sysrq_term_op, /* e */ 326 326 &sysrq_moom_op, /* f */ 327 + /* g: May be registered by ppc for kgdb */ 327 328 NULL, /* g */ 328 329 NULL, /* h */ 329 330 &sysrq_kill_op, /* i */ ··· 333 332 NULL, /* l */ 334 333 &sysrq_showmem_op, /* m */ 335 334 &sysrq_unrt_op, /* n */ 336 - /* This will often be registered as 'Off' at init time */ 335 + /* o: This will often be registered as 'Off' at init time */ 337 336 NULL, /* o */ 338 337 &sysrq_showregs_op, /* p */ 339 338 NULL, /* q */ 340 - &sysrq_unraw_op, /* r */ 339 + &sysrq_unraw_op, /* r */ 341 340 &sysrq_sync_op, /* s */ 342 341 &sysrq_showstate_op, /* t */ 343 342 &sysrq_mountro_op, /* u */ 344 - /* May be assigned at init time by SMP VOYAGER */ 343 + /* v: May be registered at init time by SMP VOYAGER */ 345 344 NULL, /* v */ 346 - NULL, /* w */ 347 - &sysrq_showstate_blocked_op, /* x */ 345 + &sysrq_showstate_blocked_op, /* w */ 346 + /* x: May be registered on ppc/powerpc for xmon */ 347 + NULL, /* x */ 348 348 NULL, /* y */ 349 349 NULL /* z */ 350 350 };
+28 -15
drivers/char/tlclk.c
··· 186 186 static void switchover_timeout(unsigned long data); 187 187 static struct timer_list switchover_timer = 188 188 TIMER_INITIALIZER(switchover_timeout , 0, 0); 189 + static unsigned long tlclk_timer_data; 189 190 190 191 static struct tlclk_alarms *alarm_events; 191 192 ··· 198 197 199 198 static DECLARE_WAIT_QUEUE_HEAD(wq); 200 199 200 + static unsigned long useflags; 201 + static DEFINE_MUTEX(tlclk_mutex); 202 + 201 203 static int tlclk_open(struct inode *inode, struct file *filp) 202 204 { 203 205 int result; 206 + 207 + if (test_and_set_bit(0, &useflags)) 208 + return -EBUSY; 209 + /* this legacy device is always one per system and it doesn't 210 + * know how to handle multiple concurrent clients. 211 + */ 204 212 205 213 /* Make sure there is no interrupt pending while 206 214 * initialising interrupt handler */ ··· 231 221 static int tlclk_release(struct inode *inode, struct file *filp) 232 222 { 233 223 free_irq(telclk_interrupt, tlclk_interrupt); 224 + clear_bit(0, &useflags); 234 225 235 226 return 0; 236 227 } ··· 241 230 { 242 231 if (count < sizeof(struct tlclk_alarms)) 243 232 return -EIO; 233 + if (mutex_lock_interruptible(&tlclk_mutex)) 234 + return -EINTR; 235 + 244 236 245 237 wait_event_interruptible(wq, got_event); 246 - if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms))) 238 + if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms))) { 239 + mutex_unlock(&tlclk_mutex); 247 240 return -EFAULT; 241 + } 248 242 249 243 memset(alarm_events, 0, sizeof(struct tlclk_alarms)); 250 244 got_event = 0; 251 245 246 + mutex_unlock(&tlclk_mutex); 252 247 return sizeof(struct tlclk_alarms); 253 - } 254 - 255 - static ssize_t tlclk_write(struct file *filp, const char __user *buf, size_t count, 256 - loff_t *f_pos) 257 - { 258 - return 0; 259 248 } 260 249 261 250 static const struct file_operations tlclk_fops = { 262 251 .read = tlclk_read, 263 - .write = tlclk_write, 264 252 .open = tlclk_open, 265 253 .release = tlclk_release, 
266 254 ··· 550 540 SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7); 551 541 switch (val) { 552 542 case CLK_8_592MHz: 553 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 543 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 554 544 break; 555 545 case CLK_11_184MHz: 556 546 SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); ··· 559 549 SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); 560 550 break; 561 551 case CLK_44_736MHz: 562 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 552 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 563 553 break; 564 554 } 565 555 } else ··· 849 839 850 840 static void switchover_timeout(unsigned long data) 851 841 { 852 - if ((data & 1)) { 853 - if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08)) 842 + unsigned long flags = *(unsigned long *) data; 843 + 844 + if ((flags & 1)) { 845 + if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08)) 854 846 alarm_events->switchover_primary++; 855 847 } else { 856 - if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08)) 848 + if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08)) 857 849 alarm_events->switchover_secondary++; 858 850 } 859 851 ··· 913 901 914 902 /* TIMEOUT in ~10ms */ 915 903 switchover_timer.expires = jiffies + msecs_to_jiffies(10); 916 - switchover_timer.data = inb(TLCLK_REG1); 917 - add_timer(&switchover_timer); 904 + tlclk_timer_data = inb(TLCLK_REG1); 905 + switchover_timer.data = (unsigned long) &tlclk_timer_data; 906 + mod_timer(&switchover_timer, switchover_timer.expires); 918 907 } else { 919 908 got_event = 1; 920 909 wake_up(&wq);
+52 -62
drivers/char/vr41xx_giu.c
··· 3 3 * 4 4 * Copyright (C) 2002 MontaVista Software Inc. 5 5 * Author: Yoichi Yuasa <yyuasa@mvista.com or source@mvista.com> 6 - * Copyright (C) 2003-2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 6 + * Copyright (C) 2003-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 9 * it under the terms of the GNU General Public License as published by ··· 125 125 return data; 126 126 } 127 127 128 - static unsigned int startup_giuint_low_irq(unsigned int irq) 128 + static void ack_giuint_low(unsigned int irq) 129 129 { 130 - unsigned int pin; 131 - 132 - pin = GPIO_PIN_OF_IRQ(irq); 133 - giu_write(GIUINTSTATL, 1 << pin); 134 - giu_set(GIUINTENL, 1 << pin); 135 - 136 - return 0; 130 + giu_write(GIUINTSTATL, 1 << GPIO_PIN_OF_IRQ(irq)); 137 131 } 138 132 139 - static void shutdown_giuint_low_irq(unsigned int irq) 133 + static void mask_giuint_low(unsigned int irq) 140 134 { 141 135 giu_clear(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 142 136 } 143 137 144 - static void enable_giuint_low_irq(unsigned int irq) 145 - { 146 - giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 147 - } 148 - 149 - #define disable_giuint_low_irq shutdown_giuint_low_irq 150 - 151 - static void ack_giuint_low_irq(unsigned int irq) 138 + static void mask_ack_giuint_low(unsigned int irq) 152 139 { 153 140 unsigned int pin; 154 141 ··· 144 157 giu_write(GIUINTSTATL, 1 << pin); 145 158 } 146 159 147 - static void end_giuint_low_irq(unsigned int irq) 160 + static void unmask_giuint_low(unsigned int irq) 148 161 { 149 - if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS))) 150 - giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 162 + giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq)); 151 163 } 152 164 153 - static struct hw_interrupt_type giuint_low_irq_type = { 154 - .typename = "GIUINTL", 155 - .startup = startup_giuint_low_irq, 156 - .shutdown = shutdown_giuint_low_irq, 157 - .enable = enable_giuint_low_irq, 158 - .disable = 
disable_giuint_low_irq, 159 - .ack = ack_giuint_low_irq, 160 - .end = end_giuint_low_irq, 165 + static struct irq_chip giuint_low_irq_chip = { 166 + .name = "GIUINTL", 167 + .ack = ack_giuint_low, 168 + .mask = mask_giuint_low, 169 + .mask_ack = mask_ack_giuint_low, 170 + .unmask = unmask_giuint_low, 161 171 }; 162 172 163 - static unsigned int startup_giuint_high_irq(unsigned int irq) 173 + static void ack_giuint_high(unsigned int irq) 164 174 { 165 - unsigned int pin; 166 - 167 - pin = GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET; 168 - giu_write(GIUINTSTATH, 1 << pin); 169 - giu_set(GIUINTENH, 1 << pin); 170 - 171 - return 0; 175 + giu_write(GIUINTSTATH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 172 176 } 173 177 174 - static void shutdown_giuint_high_irq(unsigned int irq) 178 + static void mask_giuint_high(unsigned int irq) 175 179 { 176 180 giu_clear(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 177 181 } 178 182 179 - static void enable_giuint_high_irq(unsigned int irq) 180 - { 181 - giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 182 - } 183 - 184 - #define disable_giuint_high_irq shutdown_giuint_high_irq 185 - 186 - static void ack_giuint_high_irq(unsigned int irq) 183 + static void mask_ack_giuint_high(unsigned int irq) 187 184 { 188 185 unsigned int pin; 189 186 ··· 176 205 giu_write(GIUINTSTATH, 1 << pin); 177 206 } 178 207 179 - static void end_giuint_high_irq(unsigned int irq) 208 + static void unmask_giuint_high(unsigned int irq) 180 209 { 181 - if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS))) 182 - giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 210 + giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET)); 183 211 } 184 212 185 - static struct hw_interrupt_type giuint_high_irq_type = { 186 - .typename = "GIUINTH", 187 - .startup = startup_giuint_high_irq, 188 - .shutdown = shutdown_giuint_high_irq, 189 - .enable = enable_giuint_high_irq, 190 - .disable = 
disable_giuint_high_irq, 191 - .ack = ack_giuint_high_irq, 192 - .end = end_giuint_high_irq, 213 + static struct irq_chip giuint_high_irq_chip = { 214 + .name = "GIUINTH", 215 + .ack = ack_giuint_high, 216 + .mask = mask_giuint_high, 217 + .mask_ack = mask_ack_giuint_high, 218 + .unmask = unmask_giuint_high, 193 219 }; 194 220 195 221 static int giu_get_irq(unsigned int irq) ··· 250 282 break; 251 283 } 252 284 } 285 + set_irq_chip_and_handler(GIU_IRQ(pin), 286 + &giuint_low_irq_chip, 287 + handle_edge_irq); 253 288 } else { 254 289 giu_clear(GIUINTTYPL, mask); 255 290 giu_clear(GIUINTHTSELL, mask); 291 + set_irq_chip_and_handler(GIU_IRQ(pin), 292 + &giuint_low_irq_chip, 293 + handle_level_irq); 256 294 } 257 295 giu_write(GIUINTSTATL, mask); 258 296 } else if (pin < GIUINT_HIGH_MAX) { ··· 285 311 break; 286 312 } 287 313 } 314 + set_irq_chip_and_handler(GIU_IRQ(pin), 315 + &giuint_high_irq_chip, 316 + handle_edge_irq); 288 317 } else { 289 318 giu_clear(GIUINTTYPH, mask); 290 319 giu_clear(GIUINTHTSELH, mask); 320 + set_irq_chip_and_handler(GIU_IRQ(pin), 321 + &giuint_high_irq_chip, 322 + handle_level_irq); 291 323 } 292 324 giu_write(GIUINTSTATH, mask); 293 325 } ··· 597 617 static int __devinit giu_probe(struct platform_device *dev) 598 618 { 599 619 unsigned long start, size, flags = 0; 600 - unsigned int nr_pins = 0; 620 + unsigned int nr_pins = 0, trigger, i, pin; 601 621 struct resource *res1, *res2 = NULL; 602 622 void *base; 603 - int retval, i; 623 + struct irq_chip *chip; 624 + int retval; 604 625 605 626 switch (current_cpu_data.cputype) { 606 627 case CPU_VR4111: ··· 669 688 giu_write(GIUINTENL, 0); 670 689 giu_write(GIUINTENH, 0); 671 690 691 + trigger = giu_read(GIUINTTYPH) << 16; 692 + trigger |= giu_read(GIUINTTYPL); 672 693 for (i = GIU_IRQ_BASE; i <= GIU_IRQ_LAST; i++) { 673 - if (i < GIU_IRQ(GIUINT_HIGH_OFFSET)) 674 - irq_desc[i].chip = &giuint_low_irq_type; 694 + pin = GPIO_PIN_OF_IRQ(i); 695 + if (pin < GIUINT_HIGH_OFFSET) 696 + chip = 
&giuint_low_irq_chip; 675 697 else 676 - irq_desc[i].chip = &giuint_high_irq_type; 698 + chip = &giuint_high_irq_chip; 699 + 700 + if (trigger & (1 << pin)) 701 + set_irq_chip_and_handler(i, chip, handle_edge_irq); 702 + else 703 + set_irq_chip_and_handler(i, chip, handle_level_irq); 704 + 677 705 } 678 706 679 707 return cascade_irq(GIUINT_IRQ, giu_get_irq);
+13 -4
drivers/cpufreq/cpufreq.c
··· 722 722 spin_unlock_irqrestore(&cpufreq_driver_lock, flags); 723 723 724 724 dprintk("CPU already managed, adding link\n"); 725 - sysfs_create_link(&sys_dev->kobj, 726 - &managed_policy->kobj, "cpufreq"); 725 + ret = sysfs_create_link(&sys_dev->kobj, 726 + &managed_policy->kobj, 727 + "cpufreq"); 728 + if (ret) { 729 + mutex_unlock(&policy->lock); 730 + goto err_out_driver_exit; 731 + } 727 732 728 733 cpufreq_debug_enable_ratelimit(); 729 734 mutex_unlock(&policy->lock); ··· 775 770 dprintk("CPU %u already managed, adding link\n", j); 776 771 cpufreq_cpu_get(cpu); 777 772 cpu_sys_dev = get_cpu_sysdev(j); 778 - sysfs_create_link(&cpu_sys_dev->kobj, &policy->kobj, 779 - "cpufreq"); 773 + ret = sysfs_create_link(&cpu_sys_dev->kobj, &policy->kobj, 774 + "cpufreq"); 775 + if (ret) { 776 + mutex_unlock(&policy->lock); 777 + goto err_out_unregister; 778 + } 780 779 } 781 780 782 781 policy->governor = NULL; /* to assure that the starting sequence is
+12 -17
drivers/firmware/efivars.c
··· 122 122 struct kobject kobj; 123 123 }; 124 124 125 - #define get_efivar_entry(n) list_entry(n, struct efivar_entry, list) 126 - 127 125 struct efivar_attribute { 128 126 struct attribute attr; 129 127 ssize_t (*show) (struct efivar_entry *entry, char *buf); ··· 384 386 static void efivar_release(struct kobject *kobj) 385 387 { 386 388 struct efivar_entry *var = container_of(kobj, struct efivar_entry, kobj); 387 - spin_lock(&efivars_lock); 388 - list_del(&var->list); 389 - spin_unlock(&efivars_lock); 390 389 kfree(var); 391 390 } 392 391 ··· 425 430 efivar_create(struct subsystem *sub, const char *buf, size_t count) 426 431 { 427 432 struct efi_variable *new_var = (struct efi_variable *)buf; 428 - struct efivar_entry *search_efivar = NULL; 433 + struct efivar_entry *search_efivar, *n; 429 434 unsigned long strsize1, strsize2; 430 - struct list_head *pos, *n; 431 435 efi_status_t status = EFI_NOT_FOUND; 432 436 int found = 0; 433 437 ··· 438 444 /* 439 445 * Does this variable already exist? 440 446 */ 441 - list_for_each_safe(pos, n, &efivar_list) { 442 - search_efivar = get_efivar_entry(pos); 447 + list_for_each_entry_safe(search_efivar, n, &efivar_list, list) { 443 448 strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024); 444 449 strsize2 = utf8_strsize(new_var->VariableName, 1024); 445 450 if (strsize1 == strsize2 && ··· 483 490 efivar_delete(struct subsystem *sub, const char *buf, size_t count) 484 491 { 485 492 struct efi_variable *del_var = (struct efi_variable *)buf; 486 - struct efivar_entry *search_efivar = NULL; 493 + struct efivar_entry *search_efivar, *n; 487 494 unsigned long strsize1, strsize2; 488 - struct list_head *pos, *n; 489 495 efi_status_t status = EFI_NOT_FOUND; 490 496 int found = 0; 491 497 ··· 496 504 /* 497 505 * Does this variable already exist? 
498 506 */ 499 - list_for_each_safe(pos, n, &efivar_list) { 500 - search_efivar = get_efivar_entry(pos); 507 + list_for_each_entry_safe(search_efivar, n, &efivar_list, list) { 501 508 strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024); 502 509 strsize2 = utf8_strsize(del_var->VariableName, 1024); 503 510 if (strsize1 == strsize2 && ··· 528 537 spin_unlock(&efivars_lock); 529 538 return -EIO; 530 539 } 540 + list_del(&search_efivar->list); 531 541 /* We need to release this lock before unregistering. */ 532 542 spin_unlock(&efivars_lock); 533 - 534 543 efivar_unregister(search_efivar); 535 544 536 545 /* It's dead Jim.... */ ··· 759 768 static void __exit 760 769 efivars_exit(void) 761 770 { 762 - struct list_head *pos, *n; 771 + struct efivar_entry *entry, *n; 763 772 764 - list_for_each_safe(pos, n, &efivar_list) 765 - efivar_unregister(get_efivar_entry(pos)); 773 + list_for_each_entry_safe(entry, n, &efivar_list, list) { 774 + spin_lock(&efivars_lock); 775 + list_del(&entry->list); 776 + spin_unlock(&efivars_lock); 777 + efivar_unregister(entry); 778 + } 766 779 767 780 subsystem_unregister(&vars_subsys); 768 781 firmware_unregister(&efi_subsys);
+1
drivers/hid/hid-core.c
··· 543 543 } 544 544 545 545 kfree(device->rdesc); 546 + kfree(device->collection); 546 547 kfree(device); 547 548 } 548 549 EXPORT_SYMBOL_GPL(hid_free_device);
+10 -3
drivers/hid/hid-input.c
··· 35 35 36 36 #include <linux/hid.h> 37 37 38 + static int hid_pb_fnmode = 1; 39 + module_param_named(pb_fnmode, hid_pb_fnmode, int, 0644); 40 + MODULE_PARM_DESC(pb_fnmode, 41 + "Mode of fn key on PowerBooks (0 = disabled, 1 = fkeyslast, 2 = fkeysfirst)"); 42 + 38 43 #define unk KEY_UNKNOWN 39 44 40 45 static const unsigned char hid_keyboard[256] = { ··· 159 154 return 1; 160 155 } 161 156 162 - if (hid->pb_fnmode) { 157 + if (hid_pb_fnmode) { 163 158 int do_translate; 164 159 165 160 trans = find_translation(powerbook_fn_keys, usage->code); ··· 168 163 do_translate = 1; 169 164 else if (trans->flags & POWERBOOK_FLAG_FKEY) 170 165 do_translate = 171 - (hid->pb_fnmode == 2 && (hid->quirks & HID_QUIRK_POWERBOOK_FN_ON)) || 172 - (hid->pb_fnmode == 1 && !(hid->quirks & HID_QUIRK_POWERBOOK_FN_ON)); 166 + (hid_pb_fnmode == 2 && (hid->quirks & HID_QUIRK_POWERBOOK_FN_ON)) || 167 + (hid_pb_fnmode == 1 && !(hid->quirks & HID_QUIRK_POWERBOOK_FN_ON)); 173 168 else 174 169 do_translate = (hid->quirks & HID_QUIRK_POWERBOOK_FN_ON); 175 170 ··· 436 431 case 0x040: map_key_clear(KEY_MENU); break; 437 432 case 0x045: map_key_clear(KEY_RADIO); break; 438 433 434 + case 0x083: map_key_clear(KEY_LAST); break; 439 435 case 0x088: map_key_clear(KEY_PC); break; 440 436 case 0x089: map_key_clear(KEY_TV); break; 441 437 case 0x08a: map_key_clear(KEY_WWW); break; ··· 454 448 case 0x096: map_key_clear(KEY_TAPE); break; 455 449 case 0x097: map_key_clear(KEY_TV2); break; 456 450 case 0x098: map_key_clear(KEY_SAT); break; 451 + case 0x09a: map_key_clear(KEY_PVR); break; 457 452 458 453 case 0x09c: map_key_clear(KEY_CHANNELUP); break; 459 454 case 0x09d: map_key_clear(KEY_CHANNELDOWN); break;
+5
drivers/ide/ide-pnp.c
··· 73 73 { 74 74 pnp_register_driver(&idepnp_driver); 75 75 } 76 + 77 + void __exit pnpide_exit(void) 78 + { 79 + pnp_unregister_driver(&idepnp_driver); 80 + }
+8 -3
drivers/ide/ide.c
··· 1781 1781 return 1; 1782 1782 } 1783 1783 1784 - extern void pnpide_init(void); 1785 - extern void h8300_ide_init(void); 1784 + extern void __init pnpide_init(void); 1785 + extern void __exit pnpide_exit(void); 1786 + extern void __init h8300_ide_init(void); 1786 1787 1787 1788 /* 1788 1789 * probe_for_hwifs() finds/initializes "known" IDE interfaces ··· 2088 2087 return ide_init(); 2089 2088 } 2090 2089 2091 - void cleanup_module (void) 2090 + void __exit cleanup_module (void) 2092 2091 { 2093 2092 int index; 2094 2093 2095 2094 for (index = 0; index < MAX_HWIFS; ++index) 2096 2095 ide_unregister(index); 2096 + 2097 + #ifdef CONFIG_BLK_DEV_IDEPNP 2098 + pnpide_exit(); 2099 + #endif 2097 2100 2098 2101 #ifdef CONFIG_PROC_FS 2099 2102 proc_ide_destroy();
+1 -1
drivers/ide/pci/aec62xx.c
··· 441 441 .probe = aec62xx_init_one, 442 442 }; 443 443 444 - static int aec62xx_ide_init(void) 444 + static int __init aec62xx_ide_init(void) 445 445 { 446 446 return ide_pci_register_driver(&driver); 447 447 }
+1 -1
drivers/ide/pci/alim15x3.c
··· 907 907 .probe = alim15x3_init_one, 908 908 }; 909 909 910 - static int ali15x3_ide_init(void) 910 + static int __init ali15x3_ide_init(void) 911 911 { 912 912 return ide_pci_register_driver(&driver); 913 913 }
+1 -1
drivers/ide/pci/amd74xx.c
··· 544 544 .probe = amd74xx_probe, 545 545 }; 546 546 547 - static int amd74xx_ide_init(void) 547 + static int __init amd74xx_ide_init(void) 548 548 { 549 549 return ide_pci_register_driver(&driver); 550 550 }
+19 -23
drivers/ide/pci/atiixp.c
··· 291 291 292 292 static void __devinit init_hwif_atiixp(ide_hwif_t *hwif) 293 293 { 294 + u8 udma_mode = 0; 295 + u8 ch = hwif->channel; 296 + struct pci_dev *pdev = hwif->pci_dev; 297 + 294 298 if (!hwif->irq) 295 - hwif->irq = hwif->channel ? 15 : 14; 299 + hwif->irq = ch ? 15 : 14; 296 300 297 301 hwif->autodma = 0; 298 302 hwif->tuneproc = &atiixp_tuneproc; ··· 312 308 hwif->mwdma_mask = 0x06; 313 309 hwif->swdma_mask = 0x04; 314 310 315 - /* FIXME: proper cable detection needed */ 316 - hwif->udma_four = 1; 311 + pci_read_config_byte(pdev, ATIIXP_IDE_UDMA_MODE + ch, &udma_mode); 312 + if ((udma_mode & 0x07) >= 0x04 || (udma_mode & 0x70) >= 0x40) 313 + hwif->udma_four = 1; 314 + else 315 + hwif->udma_four = 0; 316 + 317 317 hwif->ide_dma_host_on = &atiixp_ide_dma_host_on; 318 318 hwif->ide_dma_host_off = &atiixp_ide_dma_host_off; 319 319 hwif->ide_dma_check = &atiixp_dma_check; ··· 328 320 hwif->drives[0].autodma = hwif->autodma; 329 321 } 330 322 331 - static void __devinit init_hwif_sb600_legacy(ide_hwif_t *hwif) 332 - { 333 - 334 - hwif->atapi_dma = 1; 335 - hwif->ultra_mask = 0x7f; 336 - hwif->mwdma_mask = 0x07; 337 - hwif->swdma_mask = 0x07; 338 - 339 - if (!noautodma) 340 - hwif->autodma = 1; 341 - hwif->drives[0].autodma = hwif->autodma; 342 - hwif->drives[1].autodma = hwif->autodma; 343 - } 344 323 345 324 static ide_pci_device_t atiixp_pci_info[] __devinitdata = { 346 325 { /* 0 */ ··· 338 343 .enablebits = {{0x48,0x01,0x00}, {0x48,0x08,0x00}}, 339 344 .bootable = ON_BOARD, 340 345 },{ /* 1 */ 341 - .name = "ATI SB600 SATA Legacy IDE", 342 - .init_hwif = init_hwif_sb600_legacy, 343 - .channels = 2, 346 + .name = "SB600_PATA", 347 + .init_hwif = init_hwif_atiixp, 348 + .channels = 1, 344 349 .autodma = AUTODMA, 345 - .bootable = ON_BOARD, 346 - } 350 + .enablebits = {{0x48,0x01,0x00}, {0x00,0x00,0x00}}, 351 + .bootable = ON_BOARD, 352 + }, 347 353 }; 348 354 349 355 /** ··· 365 369 { PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP200_IDE, PCI_ANY_ID, 
PCI_ANY_ID, 0, 0, 0}, 366 370 { PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP300_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, 367 371 { PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP400_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, 368 - { PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP600_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, 372 + { PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_IXP600_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1}, 369 373 { 0, }, 370 374 }; 371 375 MODULE_DEVICE_TABLE(pci, atiixp_pci_tbl); ··· 376 380 .probe = atiixp_init_one, 377 381 }; 378 382 379 - static int atiixp_ide_init(void) 383 + static int __init atiixp_ide_init(void) 380 384 { 381 385 return ide_pci_register_driver(&driver); 382 386 }
+1 -1
drivers/ide/pci/cmd64x.c
··· 793 793 .probe = cmd64x_init_one, 794 794 }; 795 795 796 - static int cmd64x_ide_init(void) 796 + static int __init cmd64x_ide_init(void) 797 797 { 798 798 return ide_pci_register_driver(&driver); 799 799 }
+1 -1
drivers/ide/pci/cs5520.c
··· 260 260 .probe = cs5520_init_one, 261 261 }; 262 262 263 - static int cs5520_ide_init(void) 263 + static int __init cs5520_ide_init(void) 264 264 { 265 265 return ide_pci_register_driver(&driver); 266 266 }
+1 -1
drivers/ide/pci/cs5530.c
··· 374 374 .probe = cs5530_init_one, 375 375 }; 376 376 377 - static int cs5530_ide_init(void) 377 + static int __init cs5530_ide_init(void) 378 378 { 379 379 return ide_pci_register_driver(&driver); 380 380 }
+1 -1
drivers/ide/pci/cy82c693.c
··· 519 519 .probe = cy82c693_init_one, 520 520 }; 521 521 522 - static int cy82c693_ide_init(void) 522 + static int __init cy82c693_ide_init(void) 523 523 { 524 524 return ide_pci_register_driver(&driver); 525 525 }
+1 -36
drivers/ide/pci/generic.c
··· 185 185 .channels = 2, 186 186 .autodma = AUTODMA, 187 187 .bootable = OFF_BOARD, 188 - },{ /* 15 */ 189 - .name = "JMB361", 190 - .init_hwif = init_hwif_generic, 191 - .channels = 2, 192 - .autodma = AUTODMA, 193 - .bootable = OFF_BOARD, 194 - },{ /* 16 */ 195 - .name = "JMB363", 196 - .init_hwif = init_hwif_generic, 197 - .channels = 2, 198 - .autodma = AUTODMA, 199 - .bootable = OFF_BOARD, 200 - },{ /* 17 */ 201 - .name = "JMB365", 202 - .init_hwif = init_hwif_generic, 203 - .channels = 2, 204 - .autodma = AUTODMA, 205 - .bootable = OFF_BOARD, 206 - },{ /* 18 */ 207 - .name = "JMB366", 208 - .init_hwif = init_hwif_generic, 209 - .channels = 2, 210 - .autodma = AUTODMA, 211 - .bootable = OFF_BOARD, 212 - },{ /* 19 */ 213 - .name = "JMB368", 214 - .init_hwif = init_hwif_generic, 215 - .channels = 2, 216 - .autodma = AUTODMA, 217 - .bootable = OFF_BOARD, 218 188 } 219 189 }; 220 190 ··· 251 281 { PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 12}, 252 282 { PCI_VENDOR_ID_TOSHIBA,PCI_DEVICE_ID_TOSHIBA_PICCOLO_2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 13}, 253 283 { PCI_VENDOR_ID_NETCELL,PCI_DEVICE_ID_REVOLUTION, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14}, 254 - { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 15}, 255 - { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 16}, 256 - { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 17}, 257 - { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 18}, 258 - { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 19}, 259 284 /* Must come last. If you add entries adjust this table appropriately and the init_one code */ 260 285 { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_STORAGE_IDE << 8, 0xFFFFFF00UL, 0}, 261 286 { 0, }, ··· 263 298 .probe = generic_init_one, 264 299 }; 265 300 266 - static int generic_ide_init(void) 301 + static int __init generic_ide_init(void) 267 302 { 268 303 return ide_pci_register_driver(&driver); 269 304 }
+1 -1
drivers/ide/pci/hpt34x.c
··· 265 265 .probe = hpt34x_init_one, 266 266 }; 267 267 268 - static int hpt34x_ide_init(void) 268 + static int __init hpt34x_ide_init(void) 269 269 { 270 270 return ide_pci_register_driver(&driver); 271 271 }
+1 -1
drivers/ide/pci/hpt366.c
··· 1613 1613 .probe = hpt366_init_one, 1614 1614 }; 1615 1615 1616 - static int hpt366_ide_init(void) 1616 + static int __init hpt366_ide_init(void) 1617 1617 { 1618 1618 return ide_pci_register_driver(&driver); 1619 1619 }
+9 -8
drivers/ide/pci/jmicron.c
··· 86 86 { 87 87 case PORT_PATA0: 88 88 if (control & (1 << 3)) /* 40/80 pin primary */ 89 - return 1; 90 - return 0; 89 + return 0; 90 + return 1; 91 91 case PORT_PATA1: 92 92 if (control5 & (1 << 19)) /* 40/80 pin secondary */ 93 93 return 0; 94 94 return 1; 95 95 case PORT_SATA: 96 - return 1; 96 + break; 97 97 } 98 + return 1; /* Avoid bogus "control reaches end of non-void function" */ 98 99 } 99 100 100 101 static void jmicron_tuneproc (ide_drive_t *drive, byte mode_wanted) ··· 241 240 } 242 241 243 242 static struct pci_device_id jmicron_pci_tbl[] = { 244 - { PCI_DEVICE(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361), 0}, 245 - { PCI_DEVICE(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363), 1}, 246 - { PCI_DEVICE(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365), 2}, 247 - { PCI_DEVICE(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366), 3}, 248 - { PCI_DEVICE(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368), 4}, 243 + { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, 244 + { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1}, 245 + { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2}, 246 + { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3}, 247 + { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4}, 249 248 { 0, }, 250 249 }; 251 250
+1 -1
drivers/ide/pci/ns87415.c
··· 302 302 .probe = ns87415_init_one, 303 303 }; 304 304 305 - static int ns87415_ide_init(void) 305 + static int __init ns87415_ide_init(void) 306 306 { 307 307 return ide_pci_register_driver(&driver); 308 308 }
+1 -1
drivers/ide/pci/opti621.c
··· 382 382 .probe = opti621_init_one, 383 383 }; 384 384 385 - static int opti621_ide_init(void) 385 + static int __init opti621_ide_init(void) 386 386 { 387 387 return ide_pci_register_driver(&driver); 388 388 }
+1 -1
drivers/ide/pci/pdc202xx_new.c
··· 756 756 .probe = pdc202new_init_one, 757 757 }; 758 758 759 - static int pdc202new_ide_init(void) 759 + static int __init pdc202new_ide_init(void) 760 760 { 761 761 return ide_pci_register_driver(&driver); 762 762 }
+1 -1
drivers/ide/pci/pdc202xx_old.c
··· 719 719 .probe = pdc202xx_init_one, 720 720 }; 721 721 722 - static int pdc202xx_ide_init(void) 722 + static int __init pdc202xx_ide_init(void) 723 723 { 724 724 return ide_pci_register_driver(&driver); 725 725 }
+1 -1
drivers/ide/pci/rz1000.c
··· 77 77 .probe = rz1000_init_one, 78 78 }; 79 79 80 - static int rz1000_ide_init(void) 80 + static int __init rz1000_ide_init(void) 81 81 { 82 82 return ide_pci_register_driver(&driver); 83 83 }
+1 -1
drivers/ide/pci/sc1200.c
··· 507 507 #endif 508 508 }; 509 509 510 - static int sc1200_ide_init(void) 510 + static int __init sc1200_ide_init(void) 511 511 { 512 512 return ide_pci_register_driver(&driver); 513 513 }
+1 -1
drivers/ide/pci/serverworks.c
··· 666 666 .probe = svwks_init_one, 667 667 }; 668 668 669 - static int svwks_ide_init(void) 669 + static int __init svwks_ide_init(void) 670 670 { 671 671 return ide_pci_register_driver(&driver); 672 672 }
+1 -2
drivers/ide/pci/sgiioc4.c
··· 762 762 /* .is_remove = ioc4_ide_remove_one, */ 763 763 }; 764 764 765 - static int __devinit 766 - ioc4_ide_init(void) 765 + static int __init ioc4_ide_init(void) 767 766 { 768 767 return ioc4_register_submodule(&ioc4_ide_submodule); 769 768 }
+1 -1
drivers/ide/pci/siimage.c
··· 1096 1096 .probe = siimage_init_one, 1097 1097 }; 1098 1098 1099 - static int siimage_ide_init(void) 1099 + static int __init siimage_ide_init(void) 1100 1100 { 1101 1101 return ide_pci_register_driver(&driver); 1102 1102 }
+1 -1
drivers/ide/pci/sis5513.c
··· 968 968 .probe = sis5513_init_one, 969 969 }; 970 970 971 - static int sis5513_ide_init(void) 971 + static int __init sis5513_ide_init(void) 972 972 { 973 973 return ide_pci_register_driver(&driver); 974 974 }
+1 -1
drivers/ide/pci/sl82c105.c
··· 492 492 .probe = sl82c105_init_one, 493 493 }; 494 494 495 - static int sl82c105_ide_init(void) 495 + static int __init sl82c105_ide_init(void) 496 496 { 497 497 return ide_pci_register_driver(&driver); 498 498 }
+1 -1
drivers/ide/pci/slc90e66.c
··· 253 253 .probe = slc90e66_init_one, 254 254 }; 255 255 256 - static int slc90e66_ide_init(void) 256 + static int __init slc90e66_ide_init(void) 257 257 { 258 258 return ide_pci_register_driver(&driver); 259 259 }
+1 -1
drivers/ide/pci/triflex.c
··· 173 173 .probe = triflex_init_one, 174 174 }; 175 175 176 - static int triflex_ide_init(void) 176 + static int __init triflex_ide_init(void) 177 177 { 178 178 return ide_pci_register_driver(&driver); 179 179 }
+1 -1
drivers/ide/pci/trm290.c
··· 355 355 .probe = trm290_init_one, 356 356 }; 357 357 358 - static int trm290_ide_init(void) 358 + static int __init trm290_ide_init(void) 359 359 { 360 360 return ide_pci_register_driver(&driver); 361 361 }
+4 -1
drivers/ide/pci/via82cxxx.c
··· 78 78 u8 rev_max; 79 79 u16 flags; 80 80 } via_isa_bridges[] = { 81 + { "cx700", PCI_DEVICE_ID_VIA_CX700, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 82 + { "vt8237s", PCI_DEVICE_ID_VIA_8237S, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 81 83 { "vt6410", PCI_DEVICE_ID_VIA_6410, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 82 84 { "vt8251", PCI_DEVICE_ID_VIA_8251, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, 83 85 { "vt8237", PCI_DEVICE_ID_VIA_8237, 0x00, 0x2f, VIA_UDMA_133 | VIA_BAD_AST }, ··· 506 504 { PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C576_1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, 507 505 { PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, 508 506 { PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_6410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1}, 507 + { PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_SATA_EIDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1}, 509 508 { 0, }, 510 509 }; 511 510 MODULE_DEVICE_TABLE(pci, via_pci_tbl); ··· 517 514 .probe = via_init_one, 518 515 }; 519 516 520 - static int via_ide_init(void) 517 + static int __init via_ide_init(void) 521 518 { 522 519 return ide_pci_register_driver(&driver); 523 520 }
+4 -1
drivers/infiniband/hw/ehca/ehca_cq.c
··· 344 344 unsigned long flags; 345 345 346 346 spin_lock_irqsave(&ehca_cq_idr_lock, flags); 347 - while (my_cq->nr_callbacks) 347 + while (my_cq->nr_callbacks) { 348 + spin_unlock_irqrestore(&ehca_cq_idr_lock, flags); 348 349 yield(); 350 + spin_lock_irqsave(&ehca_cq_idr_lock, flags); 351 + } 349 352 350 353 idr_remove(&ehca_cq_idr, my_cq->token); 351 354 spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);
+2 -1
drivers/infiniband/hw/ehca/ehca_irq.c
··· 440 440 cq = idr_find(&ehca_cq_idr, token); 441 441 442 442 if (cq == NULL) { 443 - spin_unlock(&ehca_cq_idr_lock); 443 + spin_unlock_irqrestore(&ehca_cq_idr_lock, 444 + flags); 444 445 break; 445 446 } 446 447
+20
drivers/infiniband/ulp/srp/ib_srp.c
··· 1621 1621 switch (token) { 1622 1622 case SRP_OPT_ID_EXT: 1623 1623 p = match_strdup(args); 1624 + if (!p) { 1625 + ret = -ENOMEM; 1626 + goto out; 1627 + } 1624 1628 target->id_ext = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1625 1629 kfree(p); 1626 1630 break; 1627 1631 1628 1632 case SRP_OPT_IOC_GUID: 1629 1633 p = match_strdup(args); 1634 + if (!p) { 1635 + ret = -ENOMEM; 1636 + goto out; 1637 + } 1630 1638 target->ioc_guid = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1631 1639 kfree(p); 1632 1640 break; 1633 1641 1634 1642 case SRP_OPT_DGID: 1635 1643 p = match_strdup(args); 1644 + if (!p) { 1645 + ret = -ENOMEM; 1646 + goto out; 1647 + } 1636 1648 if (strlen(p) != 32) { 1637 1649 printk(KERN_WARNING PFX "bad dest GID parameter '%s'\n", p); 1638 1650 kfree(p); ··· 1668 1656 1669 1657 case SRP_OPT_SERVICE_ID: 1670 1658 p = match_strdup(args); 1659 + if (!p) { 1660 + ret = -ENOMEM; 1661 + goto out; 1662 + } 1671 1663 target->service_id = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1672 1664 kfree(p); 1673 1665 break; ··· 1709 1693 1710 1694 case SRP_OPT_INITIATOR_EXT: 1711 1695 p = match_strdup(args); 1696 + if (!p) { 1697 + ret = -ENOMEM; 1698 + goto out; 1699 + } 1712 1700 target->initiator_ext = cpu_to_be64(simple_strtoull(p, NULL, 16)); 1713 1701 kfree(p); 1714 1702 break;
+33 -28
drivers/isdn/gigaset/common.c
··· 356 356 { 357 357 unsigned long flags; 358 358 unsigned i; 359 - static struct cardstate *ret = NULL; 359 + struct cardstate *ret = NULL; 360 360 361 361 spin_lock_irqsave(&drv->lock, flags); 362 362 for (i = 0; i < drv->minors; ++i) { 363 363 if (!(drv->flags[i] & VALID_MINOR)) { 364 - drv->flags[i] = VALID_MINOR; 365 - ret = drv->cs + i; 366 - } 367 - if (ret) 364 + if (try_module_get(drv->owner)) { 365 + drv->flags[i] = VALID_MINOR; 366 + ret = drv->cs + i; 367 + } 368 368 break; 369 + } 369 370 } 370 371 spin_unlock_irqrestore(&drv->lock, flags); 371 372 return ret; ··· 377 376 unsigned long flags; 378 377 struct gigaset_driver *drv = cs->driver; 379 378 spin_lock_irqsave(&drv->lock, flags); 379 + if (drv->flags[cs->minor_index] & VALID_MINOR) 380 + module_put(drv->owner); 380 381 drv->flags[cs->minor_index] = 0; 381 382 spin_unlock_irqrestore(&drv->lock, flags); 382 383 } ··· 582 579 } else if ((bcs->skb = dev_alloc_skb(SBUFSIZE + HW_HDR_LEN)) != NULL) 583 580 skb_reserve(bcs->skb, HW_HDR_LEN); 584 581 else { 585 - warn("could not allocate skb\n"); 582 + warn("could not allocate skb"); 586 583 bcs->inputstate |= INS_skip_frame; 587 584 } 588 585 ··· 635 632 int i; 636 633 637 634 gig_dbg(DEBUG_INIT, "allocating cs"); 638 - cs = alloc_cs(drv); 639 - if (!cs) 640 - goto error; 635 + if (!(cs = alloc_cs(drv))) { 636 + err("maximum number of devices exceeded"); 637 + return NULL; 638 + } 639 + mutex_init(&cs->mutex); 640 + mutex_lock(&cs->mutex); 641 + 641 642 gig_dbg(DEBUG_INIT, "allocating bcs[0..%d]", channels - 1); 642 643 cs->bcs = kmalloc(channels * sizeof(struct bc_state), GFP_KERNEL); 643 - if (!cs->bcs) 644 + if (!cs->bcs) { 645 + err("out of memory"); 644 646 goto error; 647 + } 645 648 gig_dbg(DEBUG_INIT, "allocating inbuf"); 646 649 cs->inbuf = kmalloc(sizeof(struct inbuf_t), GFP_KERNEL); 647 - if (!cs->inbuf) 650 + if (!cs->inbuf) { 651 + err("out of memory"); 648 652 goto error; 653 + } 649 654 650 655 cs->cs_init = 0; 651 656 cs->channels = 
channels; ··· 665 654 spin_lock_init(&cs->ev_lock); 666 655 cs->ev_tail = 0; 667 656 cs->ev_head = 0; 668 - mutex_init(&cs->mutex); 669 - mutex_lock(&cs->mutex); 670 657 671 658 tasklet_init(&cs->event_tasklet, &gigaset_handle_event, 672 659 (unsigned long) cs); ··· 693 684 694 685 for (i = 0; i < channels; ++i) { 695 686 gig_dbg(DEBUG_INIT, "setting up bcs[%d].read", i); 696 - if (!gigaset_initbcs(cs->bcs + i, cs, i)) 687 + if (!gigaset_initbcs(cs->bcs + i, cs, i)) { 688 + err("could not allocate channel %d data", i); 697 689 goto error; 690 + } 698 691 } 699 692 700 693 ++cs->cs_init; ··· 731 720 make_valid(cs, VALID_ID); 732 721 ++cs->cs_init; 733 722 gig_dbg(DEBUG_INIT, "setting up hw"); 734 - if (!cs->ops->initcshw(cs)) 723 + if (!cs->ops->initcshw(cs)) { 724 + err("could not allocate device specific data"); 735 725 goto error; 726 + } 736 727 737 728 ++cs->cs_init; 738 729 ··· 756 743 mutex_unlock(&cs->mutex); 757 744 return cs; 758 745 759 - error: if (cs) 760 - mutex_unlock(&cs->mutex); 746 + error: 747 + mutex_unlock(&cs->mutex); 761 748 gig_dbg(DEBUG_INIT, "failed"); 762 749 gigaset_freecs(cs); 763 750 return NULL; ··· 1053 1040 spin_unlock_irqrestore(&driver_lock, flags); 1054 1041 1055 1042 gigaset_if_freedriver(drv); 1056 - module_put(drv->owner); 1057 1043 1058 1044 kfree(drv->cs); 1059 1045 kfree(drv->flags); ··· 1084 1072 if (!drv) 1085 1073 return NULL; 1086 1074 1087 - if (!try_module_get(owner)) 1088 - goto out1; 1089 - 1090 - drv->cs = NULL; 1091 1075 drv->have_tty = 0; 1092 1076 drv->minor = minor; 1093 1077 drv->minors = minors; ··· 1095 1087 1096 1088 drv->cs = kmalloc(minors * sizeof *drv->cs, GFP_KERNEL); 1097 1089 if (!drv->cs) 1098 - goto out2; 1090 + goto error; 1099 1091 1100 1092 drv->flags = kmalloc(minors * sizeof *drv->flags, GFP_KERNEL); 1101 1093 if (!drv->flags) 1102 - goto out3; 1094 + goto error; 1103 1095 1104 1096 for (i = 0; i < minors; ++i) { 1105 1097 drv->flags[i] = 0; ··· 1116 1108 1117 1109 return drv; 1118 1110 1119 - 
out3: 1111 + error: 1120 1112 kfree(drv->cs); 1121 - out2: 1122 - module_put(owner); 1123 - out1: 1124 1113 kfree(drv); 1125 1114 return NULL; 1126 1115 }
+1
drivers/kvm/kvm.h
··· 242 242 u64 pdptrs[4]; /* pae */ 243 243 u64 shadow_efer; 244 244 u64 apic_base; 245 + u64 ia32_misc_enable_msr; 245 246 int nmsrs; 246 247 struct vmx_msr_entry *guest_msrs; 247 248 struct vmx_msr_entry *host_msrs;
+18 -1
drivers/kvm/kvm_main.c
··· 272 272 273 273 static void kvm_free_vcpu(struct kvm_vcpu *vcpu) 274 274 { 275 + vcpu_load(vcpu->kvm, vcpu_slot(vcpu)); 275 276 kvm_mmu_destroy(vcpu); 277 + vcpu_put(vcpu); 276 278 kvm_arch_ops->vcpu_free(vcpu); 277 279 } 278 280 ··· 1226 1224 case MSR_IA32_APICBASE: 1227 1225 data = vcpu->apic_base; 1228 1226 break; 1227 + case MSR_IA32_MISC_ENABLE: 1228 + data = vcpu->ia32_misc_enable_msr; 1229 + break; 1229 1230 #ifdef CONFIG_X86_64 1230 1231 case MSR_EFER: 1231 1232 data = vcpu->shadow_efer; ··· 1299 1294 break; 1300 1295 case MSR_IA32_APICBASE: 1301 1296 vcpu->apic_base = data; 1297 + break; 1298 + case MSR_IA32_MISC_ENABLE: 1299 + vcpu->ia32_misc_enable_msr = data; 1302 1300 break; 1303 1301 default: 1304 1302 printk(KERN_ERR "kvm: unhandled wrmsr: 0x%x\n", msr); ··· 1605 1597 }; 1606 1598 1607 1599 static unsigned num_msrs_to_save; 1600 + 1601 + static u32 emulated_msrs[] = { 1602 + MSR_IA32_MISC_ENABLE, 1603 + }; 1608 1604 1609 1605 static __init void kvm_init_msr_list(void) 1610 1606 { ··· 1935 1923 if (copy_from_user(&msr_list, user_msr_list, sizeof msr_list)) 1936 1924 goto out; 1937 1925 n = msr_list.nmsrs; 1938 - msr_list.nmsrs = num_msrs_to_save; 1926 + msr_list.nmsrs = num_msrs_to_save + ARRAY_SIZE(emulated_msrs); 1939 1927 if (copy_to_user(user_msr_list, &msr_list, sizeof msr_list)) 1940 1928 goto out; 1941 1929 r = -E2BIG; ··· 1944 1932 r = -EFAULT; 1945 1933 if (copy_to_user(user_msr_list->indices, &msrs_to_save, 1946 1934 num_msrs_to_save * sizeof(u32))) 1935 + goto out; 1936 + if (copy_to_user(user_msr_list->indices 1937 + + num_msrs_to_save * sizeof(u32), 1938 + &emulated_msrs, 1939 + ARRAY_SIZE(emulated_msrs) * sizeof(u32))) 1947 1940 goto out; 1948 1941 r = 0; 1949 1942 break;
+6 -10
drivers/kvm/mmu.c
··· 143 143 #define PFERR_PRESENT_MASK (1U << 0) 144 144 #define PFERR_WRITE_MASK (1U << 1) 145 145 #define PFERR_USER_MASK (1U << 2) 146 + #define PFERR_FETCH_MASK (1U << 4) 146 147 147 148 #define PT64_ROOT_LEVEL 4 148 149 #define PT32_ROOT_LEVEL 2 ··· 167 166 static int is_cpuid_PSE36(void) 168 167 { 169 168 return 1; 169 + } 170 + 171 + static int is_nx(struct kvm_vcpu *vcpu) 172 + { 173 + return vcpu->shadow_efer & EFER_NX; 170 174 } 171 175 172 176 static int is_present_pte(unsigned long pte) ··· 996 990 997 991 } 998 992 return 0; 999 - } 1000 - 1001 - static int may_access(u64 pte, int write, int user) 1002 - { 1003 - 1004 - if (user && !(pte & PT_USER_MASK)) 1005 - return 0; 1006 - if (write && !(pte & PT_WRITABLE_MASK)) 1007 - return 0; 1008 - return 1; 1009 993 } 1010 994 1011 995 static void paging_free(struct kvm_vcpu *vcpu)
+48 -31
drivers/kvm/paging_tmpl.h
··· 63 63 pt_element_t *ptep; 64 64 pt_element_t inherited_ar; 65 65 gfn_t gfn; 66 + u32 error_code; 66 67 }; 67 68 68 69 /* 69 70 * Fetch a guest pte for a guest virtual address 70 71 */ 71 - static void FNAME(walk_addr)(struct guest_walker *walker, 72 - struct kvm_vcpu *vcpu, gva_t addr) 72 + static int FNAME(walk_addr)(struct guest_walker *walker, 73 + struct kvm_vcpu *vcpu, gva_t addr, 74 + int write_fault, int user_fault, int fetch_fault) 73 75 { 74 76 hpa_t hpa; 75 77 struct kvm_memory_slot *slot; ··· 88 86 walker->ptep = &vcpu->pdptrs[(addr >> 30) & 3]; 89 87 root = *walker->ptep; 90 88 if (!(root & PT_PRESENT_MASK)) 91 - return; 89 + goto not_present; 92 90 --walker->level; 93 91 } 94 92 #endif ··· 113 111 ASSERT(((unsigned long)walker->table & PAGE_MASK) == 114 112 ((unsigned long)ptep & PAGE_MASK)); 115 113 116 - if (is_present_pte(*ptep) && !(*ptep & PT_ACCESSED_MASK)) 117 - *ptep |= PT_ACCESSED_MASK; 118 - 119 114 if (!is_present_pte(*ptep)) 120 - break; 115 + goto not_present; 116 + 117 + if (write_fault && !is_writeble_pte(*ptep)) 118 + if (user_fault || is_write_protection(vcpu)) 119 + goto access_error; 120 + 121 + if (user_fault && !(*ptep & PT_USER_MASK)) 122 + goto access_error; 123 + 124 + #if PTTYPE == 64 125 + if (fetch_fault && is_nx(vcpu) && (*ptep & PT64_NX_MASK)) 126 + goto access_error; 127 + #endif 128 + 129 + if (!(*ptep & PT_ACCESSED_MASK)) 130 + *ptep |= PT_ACCESSED_MASK; /* avoid rmw */ 121 131 122 132 if (walker->level == PT_PAGE_TABLE_LEVEL) { 123 133 walker->gfn = (*ptep & PT_BASE_ADDR_MASK) ··· 160 146 } 161 147 walker->ptep = ptep; 162 148 pgprintk("%s: pte %llx\n", __FUNCTION__, (u64)*ptep); 149 + return 1; 150 + 151 + not_present: 152 + walker->error_code = 0; 153 + goto err; 154 + 155 + access_error: 156 + walker->error_code = PFERR_PRESENT_MASK; 157 + 158 + err: 159 + if (write_fault) 160 + walker->error_code |= PFERR_WRITE_MASK; 161 + if (user_fault) 162 + walker->error_code |= PFERR_USER_MASK; 163 + if (fetch_fault) 164 + 
walker->error_code |= PFERR_FETCH_MASK; 165 + return 0; 163 166 } 164 167 165 168 static void FNAME(release_walker)(struct guest_walker *walker) ··· 305 274 struct kvm_mmu_page *page; 306 275 307 276 if (is_writeble_pte(*shadow_ent)) 308 - return 0; 277 + return !user || (*shadow_ent & PT_USER_MASK); 309 278 310 279 writable_shadow = *shadow_ent & PT_SHADOW_WRITABLE_MASK; 311 280 if (user) { ··· 378 347 u32 error_code) 379 348 { 380 349 int write_fault = error_code & PFERR_WRITE_MASK; 381 - int pte_present = error_code & PFERR_PRESENT_MASK; 382 350 int user_fault = error_code & PFERR_USER_MASK; 351 + int fetch_fault = error_code & PFERR_FETCH_MASK; 383 352 struct guest_walker walker; 384 353 u64 *shadow_pte; 385 354 int fixed; ··· 396 365 /* 397 366 * Look up the shadow pte for the faulting address. 398 367 */ 399 - FNAME(walk_addr)(&walker, vcpu, addr); 400 - shadow_pte = FNAME(fetch)(vcpu, addr, &walker); 368 + r = FNAME(walk_addr)(&walker, vcpu, addr, write_fault, user_fault, 369 + fetch_fault); 401 370 402 371 /* 403 372 * The page is not mapped by the guest. Let the guest handle it. 404 373 */ 405 - if (!shadow_pte) { 406 - pgprintk("%s: not mapped\n", __FUNCTION__); 407 - inject_page_fault(vcpu, addr, error_code); 374 + if (!r) { 375 + pgprintk("%s: guest page fault\n", __FUNCTION__); 376 + inject_page_fault(vcpu, addr, walker.error_code); 408 377 FNAME(release_walker)(&walker); 409 378 return 0; 410 379 } 411 380 381 + shadow_pte = FNAME(fetch)(vcpu, addr, &walker); 412 382 pgprintk("%s: shadow pte %p %llx\n", __FUNCTION__, 413 383 shadow_pte, *shadow_pte); 414 384 ··· 431 399 * mmio: emulate if accessible, otherwise its a guest fault. 
432 400 */ 433 401 if (is_io_pte(*shadow_pte)) { 434 - if (may_access(*shadow_pte, write_fault, user_fault)) 435 - return 1; 436 - pgprintk("%s: io work, no access\n", __FUNCTION__); 437 - inject_page_fault(vcpu, addr, 438 - error_code | PFERR_PRESENT_MASK); 439 - kvm_mmu_audit(vcpu, "post page fault (io)"); 440 - return 0; 441 - } 442 - 443 - /* 444 - * pte not present, guest page fault. 445 - */ 446 - if (pte_present && !fixed && !write_pt) { 447 - inject_page_fault(vcpu, addr, error_code); 448 - kvm_mmu_audit(vcpu, "post page fault (guest)"); 449 - return 0; 402 + return 1; 450 403 } 451 404 452 405 ++kvm_stat.pf_fixed; ··· 446 429 pt_element_t guest_pte; 447 430 gpa_t gpa; 448 431 449 - FNAME(walk_addr)(&walker, vcpu, vaddr); 432 + FNAME(walk_addr)(&walker, vcpu, vaddr, 0, 0, 0); 450 433 guest_pte = *walker.ptep; 451 434 FNAME(release_walker)(&walker); 452 435
+22 -6
drivers/kvm/svm.c
··· 502 502 (1ULL << INTERCEPT_IOIO_PROT) | 503 503 (1ULL << INTERCEPT_MSR_PROT) | 504 504 (1ULL << INTERCEPT_TASK_SWITCH) | 505 + (1ULL << INTERCEPT_SHUTDOWN) | 505 506 (1ULL << INTERCEPT_VMRUN) | 506 507 (1ULL << INTERCEPT_VMMCALL) | 507 508 (1ULL << INTERCEPT_VMLOAD) | ··· 681 680 682 681 static void svm_get_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt) 683 682 { 684 - dt->limit = vcpu->svm->vmcb->save.ldtr.limit; 685 - dt->base = vcpu->svm->vmcb->save.ldtr.base; 683 + dt->limit = vcpu->svm->vmcb->save.idtr.limit; 684 + dt->base = vcpu->svm->vmcb->save.idtr.base; 686 685 } 687 686 688 687 static void svm_set_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt) 689 688 { 690 - vcpu->svm->vmcb->save.ldtr.limit = dt->limit; 691 - vcpu->svm->vmcb->save.ldtr.base = dt->base ; 689 + vcpu->svm->vmcb->save.idtr.limit = dt->limit; 690 + vcpu->svm->vmcb->save.idtr.base = dt->base ; 692 691 } 693 692 694 693 static void svm_get_gdt(struct kvm_vcpu *vcpu, struct descriptor_table *dt) ··· 890 889 } 891 890 892 891 kvm_run->exit_reason = KVM_EXIT_UNKNOWN; 892 + return 0; 893 + } 894 + 895 + static int shutdown_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) 896 + { 897 + /* 898 + * VMCB is undefined after a SHUTDOWN intercept 899 + * so reinitialize it. 900 + */ 901 + memset(vcpu->svm->vmcb, 0, PAGE_SIZE); 902 + init_vmcb(vcpu->svm->vmcb); 903 + 904 + kvm_run->exit_reason = KVM_EXIT_SHUTDOWN; 893 905 return 0; 894 906 } 895 907 ··· 1163 1149 case MSR_K6_STAR: 1164 1150 vcpu->svm->vmcb->save.star = data; 1165 1151 break; 1166 - #ifdef CONFIG_X86_64_ 1152 + #ifdef CONFIG_X86_64 1167 1153 case MSR_LSTAR: 1168 1154 vcpu->svm->vmcb->save.lstar = data; 1169 1155 break; ··· 1263 1249 [SVM_EXIT_IOIO] = io_interception, 1264 1250 [SVM_EXIT_MSR] = msr_interception, 1265 1251 [SVM_EXIT_TASK_SWITCH] = task_switch_interception, 1252 + [SVM_EXIT_SHUTDOWN] = shutdown_interception, 1266 1253 [SVM_EXIT_VMRUN] = invalid_op_interception, 1267 1254 [SVM_EXIT_VMMCALL] = invalid_op_interception, 1268 1255 [SVM_EXIT_VMLOAD] = invalid_op_interception, ··· 1422 1407 int r; 1423 1408 1424 1409 again: 1425 - do_interrupt_requests(vcpu, kvm_run); 1410 + if (!vcpu->mmio_read_completed) 1411 + do_interrupt_requests(vcpu, kvm_run); 1426 1412 1427 1413 clgi(); 1428 1414
+4 -1
drivers/kvm/vmx.c
··· 1116 1116 1117 1117 if (rdmsr_safe(index, &data_low, &data_high) < 0) 1118 1118 continue; 1119 + if (wrmsr_safe(index, data_low, data_high) < 0) 1120 + continue; 1119 1121 data = data_low | ((u64)data_high << 32); 1120 1122 vcpu->host_msrs[j].index = index; 1121 1123 vcpu->host_msrs[j].reserved = 0; ··· 1719 1717 vmcs_writel(HOST_GS_BASE, segment_base(gs_sel)); 1720 1718 #endif 1721 1719 1722 - do_interrupt_requests(vcpu, kvm_run); 1720 + if (!vcpu->mmio_read_completed) 1721 + do_interrupt_requests(vcpu, kvm_run); 1723 1722 1724 1723 if (vcpu->guest_debug.enabled) 1725 1724 kvm_guest_debug_pre(vcpu);
+52 -46
drivers/kvm/x86_emulate.c
··· 61 61 #define ModRM (1<<6)
62 62 /* Destination is only written; never read. */
63 63 #define Mov (1<<7)
64 + #define BitOp (1<<8)
64 65
65 66 static u8 opcode_table[256] = {
66 67 /* 0x00 - 0x07 */
··· 149 148 0, 0, ByteOp | DstMem | SrcNone | ModRM, DstMem | SrcNone | ModRM
150 149 };
151 150
152 - static u8 twobyte_table[256] = {
151 + static u16 twobyte_table[256] = {
153 152 /* 0x00 - 0x0F */
154 153 0, SrcMem | ModRM | DstReg, 0, 0, 0, 0, ImplicitOps, 0,
155 154 0, 0, 0, 0, 0, ImplicitOps | ModRM, 0, 0,
··· 181 180 /* 0x90 - 0x9F */
182 181 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
183 182 /* 0xA0 - 0xA7 */
184 - 0, 0, 0, DstMem | SrcReg | ModRM, 0, 0, 0, 0,
183 + 0, 0, 0, DstMem | SrcReg | ModRM | BitOp, 0, 0, 0, 0,
185 184 /* 0xA8 - 0xAF */
186 - 0, 0, 0, DstMem | SrcReg | ModRM, 0, 0, 0, 0,
185 + 0, 0, 0, DstMem | SrcReg | ModRM | BitOp, 0, 0, 0, 0,
187 186 /* 0xB0 - 0xB7 */
188 187 ByteOp | DstMem | SrcReg | ModRM, DstMem | SrcReg | ModRM, 0,
189 - DstMem | SrcReg | ModRM,
188 + DstMem | SrcReg | ModRM | BitOp,
190 189 0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov,
191 190 DstReg | SrcMem16 | ModRM | Mov,
192 191 /* 0xB8 - 0xBF */
193 - 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM,
192 + 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM | BitOp,
194 193 0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov,
195 194 DstReg | SrcMem16 | ModRM | Mov,
196 195 /* 0xC0 - 0xCF */
··· 470 469 int
471 470 x86_emulate_memop(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)
472 471 {
473 - u8 b, d, sib, twobyte = 0, rex_prefix = 0;
472 + unsigned d;
473 + u8 b, sib, twobyte = 0, rex_prefix = 0;
474 474 u8 modrm, modrm_mod = 0, modrm_reg = 0, modrm_rm = 0;
475 475 unsigned long *override_base = NULL;
476 476 unsigned int op_bytes, ad_bytes, lock_prefix = 0, rep_prefix = 0, i;
··· 728 726 ;
729 727 }
730 728
731 - /* Decode and fetch the destination operand: register or memory. */
732 - switch (d & DstMask) {
733 - case ImplicitOps:
734 - /* Special instructions do their own operand decoding. */
735 - goto special_insn;
736 - case DstReg:
737 - dst.type = OP_REG;
738 - if ((d & ByteOp)
739 - && !(twobyte_table && (b == 0xb6 || b == 0xb7))) {
740 - dst.ptr = decode_register(modrm_reg, _regs,
741 - (rex_prefix == 0));
742 - dst.val = *(u8 *) dst.ptr;
743 - dst.bytes = 1;
744 - } else {
745 - dst.ptr = decode_register(modrm_reg, _regs, 0);
746 - switch ((dst.bytes = op_bytes)) {
747 - case 2:
748 - dst.val = *(u16 *)dst.ptr;
749 - break;
750 - case 4:
751 - dst.val = *(u32 *)dst.ptr;
752 - break;
753 - case 8:
754 - dst.val = *(u64 *)dst.ptr;
755 - break;
756 - }
757 - }
758 - break;
759 - case DstMem:
760 - dst.type = OP_MEM;
761 - dst.ptr = (unsigned long *)cr2;
762 - dst.bytes = (d & ByteOp) ? 1 : op_bytes;
763 - if (!(d & Mov) && /* optimisation - avoid slow emulated read */
764 - ((rc = ops->read_emulated((unsigned long)dst.ptr,
765 - &dst.val, dst.bytes, ctxt)) != 0))
766 - goto done;
767 - break;
768 - }
769 - dst.orig_val = dst.val;
770 -
771 729 /*
772 730 * Decode and fetch the source operand: register, memory
773 731 * or immediate.
··· 799 837 src.val = insn_fetch(s8, 1, _eip);
800 838 break;
801 839 }
840 +
841 + /* Decode and fetch the destination operand: register or memory. */
842 + switch (d & DstMask) {
843 + case ImplicitOps:
844 + /* Special instructions do their own operand decoding. */
845 + goto special_insn;
846 + case DstReg:
847 + dst.type = OP_REG;
848 + if ((d & ByteOp)
849 + && !(twobyte_table && (b == 0xb6 || b == 0xb7))) {
850 + dst.ptr = decode_register(modrm_reg, _regs,
851 + (rex_prefix == 0));
852 + dst.val = *(u8 *) dst.ptr;
853 + dst.bytes = 1;
854 + } else {
855 + dst.ptr = decode_register(modrm_reg, _regs, 0);
856 + switch ((dst.bytes = op_bytes)) {
857 + case 2:
858 + dst.val = *(u16 *)dst.ptr;
859 + break;
860 + case 4:
861 + dst.val = *(u32 *)dst.ptr;
862 + break;
863 + case 8:
864 + dst.val = *(u64 *)dst.ptr;
865 + break;
866 + }
867 + }
868 + break;
869 + case DstMem:
870 + dst.type = OP_MEM;
871 + dst.ptr = (unsigned long *)cr2;
872 + dst.bytes = (d & ByteOp) ? 1 : op_bytes;
873 + if (d & BitOp) {
874 + dst.ptr += src.val / BITS_PER_LONG;
875 + dst.bytes = sizeof(long);
876 + }
877 + if (!(d & Mov) && /* optimisation - avoid slow emulated read */
878 + ((rc = ops->read_emulated((unsigned long)dst.ptr,
879 + &dst.val, dst.bytes, ctxt)) != 0))
880 + goto done;
881 + break;
882 + }
883 + dst.orig_val = dst.val;
802 884
803 885 if (twobyte)
804 886 goto twobyte_insn;
+8 -4
drivers/md/bitmap.c
··· 479 479 int err = -EINVAL; 480 480 481 481 /* page 0 is the superblock, read it... */ 482 - if (bitmap->file) 483 - bitmap->sb_page = read_page(bitmap->file, 0, bitmap, PAGE_SIZE); 484 - else { 482 + if (bitmap->file) { 483 + loff_t isize = i_size_read(bitmap->file->f_mapping->host); 484 + int bytes = isize > PAGE_SIZE ? PAGE_SIZE : isize; 485 + 486 + bitmap->sb_page = read_page(bitmap->file, 0, bitmap, bytes); 487 + } else { 485 488 bitmap->sb_page = read_sb_page(bitmap->mddev, bitmap->offset, 0); 486 489 } 487 490 if (IS_ERR(bitmap->sb_page)) { ··· 880 877 int count; 881 878 /* unmap the old page, we're done with it */ 882 879 if (index == num_pages-1) 883 - count = bytes - index * PAGE_SIZE; 880 + count = bytes + sizeof(bitmap_super_t) 881 + - index * PAGE_SIZE; 884 882 else 885 883 count = PAGE_SIZE; 886 884 if (index == 0) {
+19 -8
drivers/md/dm.c
··· 1116 1116 if (size != get_capacity(md->disk)) 1117 1117 memset(&md->geometry, 0, sizeof(md->geometry)); 1118 1118 1119 - __set_size(md, size); 1119 + if (md->suspended_bdev) 1120 + __set_size(md, size); 1120 1121 if (size == 0) 1121 1122 return 0; 1122 1123 ··· 1265 1264 if (!dm_suspended(md)) 1266 1265 goto out; 1267 1266 1267 + /* without bdev, the device size cannot be changed */ 1268 + if (!md->suspended_bdev) 1269 + if (get_capacity(md->disk) != dm_table_get_size(table)) 1270 + goto out; 1271 + 1268 1272 __unbind(md); 1269 1273 r = __bind(md, table); 1270 1274 ··· 1347 1341 /* This does not get reverted if there's an error later. */ 1348 1342 dm_table_presuspend_targets(map); 1349 1343 1350 - md->suspended_bdev = bdget_disk(md->disk, 0); 1351 - if (!md->suspended_bdev) { 1352 - DMWARN("bdget failed in dm_suspend"); 1353 - r = -ENOMEM; 1354 - goto flush_and_out; 1344 + /* bdget() can stall if the pending I/Os are not flushed */ 1345 + if (!noflush) { 1346 + md->suspended_bdev = bdget_disk(md->disk, 0); 1347 + if (!md->suspended_bdev) { 1348 + DMWARN("bdget failed in dm_suspend"); 1349 + r = -ENOMEM; 1350 + goto flush_and_out; 1351 + } 1355 1352 } 1356 1353 1357 1354 /* ··· 1482 1473 1483 1474 unlock_fs(md); 1484 1475 1485 - bdput(md->suspended_bdev); 1486 - md->suspended_bdev = NULL; 1476 + if (md->suspended_bdev) { 1477 + bdput(md->suspended_bdev); 1478 + md->suspended_bdev = NULL; 1479 + } 1487 1480 1488 1481 clear_bit(DMF_SUSPENDED, &md->flags); 1489 1482
+31 -1
drivers/md/md.c
··· 1633 1633 * and 'events' is odd, we can roll back to the previous clean state */ 1634 1634 if (nospares 1635 1635 && (mddev->in_sync && mddev->recovery_cp == MaxSector) 1636 - && (mddev->events & 1)) 1636 + && (mddev->events & 1) 1637 + && mddev->events != 1) 1637 1638 mddev->events--; 1638 1639 else { 1639 1640 /* otherwise we have to go forward and ... */ ··· 3564 3563 char *ptr, *buf = NULL; 3565 3564 int err = -ENOMEM; 3566 3565 3566 + md_allow_write(mddev); 3567 + 3567 3568 file = kmalloc(sizeof(*file), GFP_KERNEL); 3568 3569 if (!file) 3569 3570 goto out; ··· 5033 5030 mod_timer(&mddev->safemode_timer, jiffies + mddev->safemode_delay); 5034 5031 } 5035 5032 } 5033 + 5034 + /* md_allow_write(mddev) 5035 + * Calling this ensures that the array is marked 'active' so that writes 5036 + * may proceed without blocking. It is important to call this before 5037 + * attempting a GFP_KERNEL allocation while holding the mddev lock. 5038 + * Must be called with mddev_lock held. 5039 + */ 5040 + void md_allow_write(mddev_t *mddev) 5041 + { 5042 + if (!mddev->pers) 5043 + return; 5044 + if (mddev->ro) 5045 + return; 5046 + 5047 + spin_lock_irq(&mddev->write_lock); 5048 + if (mddev->in_sync) { 5049 + mddev->in_sync = 0; 5050 + set_bit(MD_CHANGE_CLEAN, &mddev->flags); 5051 + if (mddev->safemode_delay && 5052 + mddev->safemode == 0) 5053 + mddev->safemode = 1; 5054 + spin_unlock_irq(&mddev->write_lock); 5055 + md_update_sb(mddev, 0); 5056 + } else 5057 + spin_unlock_irq(&mddev->write_lock); 5058 + } 5059 + EXPORT_SYMBOL_GPL(md_allow_write); 5036 5060 5037 5061 static DECLARE_WAIT_QUEUE_HEAD(resync_wait); 5038 5062
+7
drivers/md/raid1.c
··· 1266 1266 sbio->bi_sector = r1_bio->sector + 1267 1267 conf->mirrors[i].rdev->data_offset; 1268 1268 sbio->bi_bdev = conf->mirrors[i].rdev->bdev; 1269 + for (j = 0; j < vcnt ; j++) 1270 + memcpy(page_address(sbio->bi_io_vec[j].bv_page), 1271 + page_address(pbio->bi_io_vec[j].bv_page), 1272 + PAGE_SIZE); 1273 + 1269 1274 } 1270 1275 } 1271 1276 } ··· 2103 2098 mddev->new_level = mddev->level; 2104 2099 return -EINVAL; 2105 2100 } 2101 + 2102 + md_allow_write(mddev); 2106 2103 2107 2104 raid_disks = mddev->raid_disks + mddev->delta_disks; 2108 2105
+4 -1
drivers/md/raid5.c
··· 405 405 if (newsize <= conf->pool_size) 406 406 return 0; /* never bother to shrink */ 407 407 408 + md_allow_write(conf->mddev); 409 + 408 410 /* Step 1 */ 409 411 sc = kmem_cache_create(conf->cache_name[1-conf->active_name], 410 412 sizeof(struct stripe_head)+(newsize-1)*sizeof(struct r5dev), ··· 2680 2678 mdk_rdev_t *rdev; 2681 2679 2682 2680 if (!in_chunk_boundary(mddev, raid_bio)) { 2683 - printk("chunk_aligned_read : non aligned\n"); 2681 + PRINTK("chunk_aligned_read : non aligned\n"); 2684 2682 return 0; 2685 2683 } 2686 2684 /* ··· 3252 3250 else 3253 3251 break; 3254 3252 } 3253 + md_allow_write(mddev); 3255 3254 while (new > conf->max_nr_stripes) { 3256 3255 if (grow_one_stripe(conf)) 3257 3256 conf->max_nr_stripes++;
+1
drivers/media/video/video-buf.c
··· 700 700 goto done; 701 701 } 702 702 if (buf->state == STATE_QUEUED || 703 + buf->state == STATE_PREPARED || 703 704 buf->state == STATE_ACTIVE) { 704 705 dprintk(1,"qbuf: buffer is already queued or active.\n"); 705 706 goto done;
+1
drivers/mtd/nand/cafe.c
··· 14 14 #include <linux/pci.h> 15 15 #include <linux/delay.h> 16 16 #include <linux/interrupt.h> 17 + #include <linux/dma-mapping.h> 17 18 #include <asm/io.h> 18 19 19 20 #define CAFE_NAND_CTRL1 0x00
+4 -3
drivers/net/82596.c
··· 1066 1066 short length = skb->len; 1067 1067 dev->trans_start = jiffies; 1068 1068 1069 - DEB(DEB_STARTTX,printk(KERN_DEBUG "%s: i596_start_xmit(%x,%x) called\n", dev->name, 1070 - skb->len, (unsigned int)skb->data)); 1069 + DEB(DEB_STARTTX,printk(KERN_DEBUG "%s: i596_start_xmit(%x,%p) called\n", 1070 + dev->name, skb->len, skb->data)); 1071 1071 1072 1072 if (skb->len < ETH_ZLEN) { 1073 1073 if (skb_padto(skb, ETH_ZLEN)) ··· 1246 1246 dev->priv = (void *)(dev->mem_start); 1247 1247 1248 1248 lp = dev->priv; 1249 - DEB(DEB_INIT,printk(KERN_DEBUG "%s: lp at 0x%08lx (%d bytes), lp->scb at 0x%08lx\n", 1249 + DEB(DEB_INIT,printk(KERN_DEBUG "%s: lp at 0x%08lx (%zd bytes), " 1250 + "lp->scb at 0x%08lx\n", 1250 1251 dev->name, (unsigned long)lp, 1251 1252 sizeof(struct i596_private), (unsigned long)&lp->scb)); 1252 1253 memset((void *) lp, 0, sizeof(struct i596_private));
+34 -18
drivers/net/b44.c
··· 110 110
111 111
112 112 static void b44_halt(struct b44 *);
113 113 static void b44_init_rings(struct b44 *);
114 +
115 + #define B44_FULL_RESET 1
116 + #define B44_FULL_RESET_SKIP_PHY 2
117 + #define B44_PARTIAL_RESET 3
118 +
114 119 static void b44_init_hw(struct b44 *, int);
115 120
116 121 static int dma_desc_align_mask;
··· 757 752 dest_idx * sizeof(dest_desc),
758 753 DMA_BIDIRECTIONAL);
759 754
760 - pci_dma_sync_single_for_device(bp->pdev, src_desc->addr,
755 + pci_dma_sync_single_for_device(bp->pdev, le32_to_cpu(src_desc->addr),
761 756 RX_PKT_BUF_SZ,
762 757 PCI_DMA_FROMDEVICE);
763 758 }
··· 889 884 spin_lock_irqsave(&bp->lock, flags);
890 885 b44_halt(bp);
891 886 b44_init_rings(bp);
892 - b44_init_hw(bp, 1);
887 + b44_init_hw(bp, B44_FULL_RESET_SKIP_PHY);
893 888 netif_wake_queue(bp->dev);
894 889 spin_unlock_irqrestore(&bp->lock, flags);
895 890 done = 1;
··· 959 954
960 955 b44_halt(bp);
961 956 b44_init_rings(bp);
962 - b44_init_hw(bp, 1);
957 + b44_init_hw(bp, B44_FULL_RESET);
963 958
964 959 spin_unlock_irq(&bp->lock);
965 960
··· 1076 1071 b44_halt(bp);
1077 1072 dev->mtu = new_mtu;
1078 1073 b44_init_rings(bp);
1079 - b44_init_hw(bp, 1);
1074 + b44_init_hw(bp, B44_FULL_RESET);
1080 1075 spin_unlock_irq(&bp->lock);
1081 1076
1082 1077 b44_enable_ints(bp);
··· 1373 1368 * packet processing. Invoked with bp->lock held.
1374 1369 */
1375 1370 static void __b44_set_rx_mode(struct net_device *);
1376 - static void b44_init_hw(struct b44 *bp, int full_reset)
1371 + static void b44_init_hw(struct b44 *bp, int reset_kind)
1377 1372 {
1378 1373 u32 val;
1379 1374
1380 1375 b44_chip_reset(bp);
1381 - if (full_reset) {
1376 + if (reset_kind == B44_FULL_RESET) {
1382 1377 b44_phy_reset(bp);
1383 1378 b44_setup_phy(bp);
1384 1379 }
··· 1395 1390 bw32(bp, B44_TXMAXLEN, bp->dev->mtu + ETH_HLEN + 8 + RX_HEADER_LEN);
1396 1391
1397 1392 bw32(bp, B44_TX_WMARK, 56); /* XXX magic */
1398 - if (full_reset) {
1393 + if (reset_kind == B44_PARTIAL_RESET) {
1394 + bw32(bp, B44_DMARX_CTRL, (DMARX_CTRL_ENABLE |
1395 + (bp->rx_offset << DMARX_CTRL_ROSHIFT)));
1396 + } else {
1399 1397 bw32(bp, B44_DMATX_CTRL, DMATX_CTRL_ENABLE);
1400 1398 bw32(bp, B44_DMATX_ADDR, bp->tx_ring_dma + bp->dma_offset);
1401 1399 bw32(bp, B44_DMARX_CTRL, (DMARX_CTRL_ENABLE |
··· 1409 1401 bp->rx_prod = bp->rx_pending;
1410 1402
1411 1403 bw32(bp, B44_MIB_CTRL, MIB_CTRL_CLR_ON_READ);
1412 - } else {
1413 - bw32(bp, B44_DMARX_CTRL, (DMARX_CTRL_ENABLE |
1414 - (bp->rx_offset << DMARX_CTRL_ROSHIFT)));
1415 1404 }
1416 1405
1417 1406 val = br32(bp, B44_ENET_CTRL);
··· 1425 1420 goto out;
1426 1421
1427 1422 b44_init_rings(bp);
1428 - b44_init_hw(bp, 1);
1423 + b44_init_hw(bp, B44_FULL_RESET);
1429 1424
1430 1425 b44_check_phy(bp);
1431 1426
··· 1634 1629 netif_poll_enable(dev);
1635 1630
1636 1631 if (bp->flags & B44_FLAG_WOL_ENABLE) {
1637 - b44_init_hw(bp, 0);
1632 + b44_init_hw(bp, B44_PARTIAL_RESET);
1638 1633 b44_setup_wol(bp);
1639 1634 }
1640 1635
··· 1910 1905
1911 1906 b44_halt(bp);
1912 1907 b44_init_rings(bp);
1913 - b44_init_hw(bp, 1);
1908 + b44_init_hw(bp, B44_FULL_RESET);
1914 1909 netif_wake_queue(bp->dev);
1915 1910 spin_unlock_irq(&bp->lock);
1916 1911
··· 1953 1948 if (bp->flags & B44_FLAG_PAUSE_AUTO) {
1954 1949 b44_halt(bp);
1955 1950 b44_init_rings(bp);
1956 - b44_init_hw(bp, 1);
1951 + b44_init_hw(bp, B44_FULL_RESET);
1957 1952 } else {
1958 1953 __b44_set_flow_ctrl(bp, bp->flags);
1959 1954 }
··· 2309 2304
2310 2305 free_irq(dev->irq, dev);
2311 2306 if (bp->flags & B44_FLAG_WOL_ENABLE) {
2312 - b44_init_hw(bp, 0);
2307 + b44_init_hw(bp, B44_PARTIAL_RESET);
2313 2308 b44_setup_wol(bp);
2314 2309 }
2315 2310 pci_disable_device(pdev);
··· 2320 2315 {
2321 2316 struct net_device *dev = pci_get_drvdata(pdev);
2322 2317 struct b44 *bp = netdev_priv(dev);
2318 + int rc = 0;
2323 2319
2324 2320 pci_restore_state(pdev);
2325 - pci_enable_device(pdev);
2321 + rc = pci_enable_device(pdev);
2322 + if (rc) {
2323 + printk(KERN_ERR PFX "%s: pci_enable_device failed\n",
2324 + dev->name);
2325 + return rc;
2326 + }
2327 +
2326 2328 pci_set_master(pdev);
2327 2329
2328 2330 if (!netif_running(dev))
2329 2331 return 0;
2330 2332
2331 - if (request_irq(dev->irq, b44_interrupt, IRQF_SHARED, dev->name, dev))
2333 + rc = request_irq(dev->irq, b44_interrupt, IRQF_SHARED, dev->name, dev);
2334 + if (rc) {
2332 2335 printk(KERN_ERR PFX "%s: request_irq failed\n", dev->name);
2336 + pci_disable_device(pdev);
2337 + return rc;
2338 + }
2333 2339
2334 2340 spin_lock_irq(&bp->lock);
2335 2341
2336 2342 b44_init_rings(bp);
2337 - b44_init_hw(bp, 1);
2343 + b44_init_hw(bp, B44_FULL_RESET);
2338 2344 netif_device_attach(bp->dev);
2339 2345 spin_unlock_irq(&bp->lock);
2340 2346
+17 -5
drivers/net/bnx2.c
··· 57 57 58 58 #define DRV_MODULE_NAME "bnx2" 59 59 #define PFX DRV_MODULE_NAME ": " 60 - #define DRV_MODULE_VERSION "1.5.3" 61 - #define DRV_MODULE_RELDATE "January 8, 2007" 60 + #define DRV_MODULE_VERSION "1.5.5" 61 + #define DRV_MODULE_RELDATE "February 1, 2007" 62 62 63 63 #define RUN_AT(x) (jiffies + (x)) 64 64 ··· 1354 1354 bnx2_write_phy(bp, 0x17, 0x401f); 1355 1355 bnx2_write_phy(bp, 0x15, 0x14e2); 1356 1356 bnx2_write_phy(bp, 0x18, 0x0400); 1357 + } 1358 + 1359 + if (bp->phy_flags & PHY_DIS_EARLY_DAC_FLAG) { 1360 + bnx2_write_phy(bp, MII_BNX2_DSP_ADDRESS, 1361 + MII_BNX2_DSP_EXPAND_REG | 0x8); 1362 + bnx2_read_phy(bp, MII_BNX2_DSP_RW_PORT, &val); 1363 + val &= ~(1 << 8); 1364 + bnx2_write_phy(bp, MII_BNX2_DSP_RW_PORT, val); 1357 1365 } 1358 1366 1359 1367 if (bp->dev->mtu > 1500) { ··· 5853 5845 reg = REG_RD_IND(bp, BNX2_SHM_HDR_SIGNATURE); 5854 5846 5855 5847 if ((reg & BNX2_SHM_HDR_SIGNATURE_SIG_MASK) == 5856 - BNX2_SHM_HDR_SIGNATURE_SIG) 5857 - bp->shmem_base = REG_RD_IND(bp, BNX2_SHM_HDR_ADDR_0); 5858 - else 5848 + BNX2_SHM_HDR_SIGNATURE_SIG) { 5849 + u32 off = PCI_FUNC(pdev->devfn) << 2; 5850 + 5851 + bp->shmem_base = REG_RD_IND(bp, BNX2_SHM_HDR_ADDR_0 + off); 5852 + } else 5859 5853 bp->shmem_base = HOST_VIEW_SHMEM_BASE; 5860 5854 5861 5855 /* Get the permanent MAC address. First we need to make sure the ··· 5926 5916 } else if (CHIP_NUM(bp) == CHIP_NUM_5706 || 5927 5917 CHIP_NUM(bp) == CHIP_NUM_5708) 5928 5918 bp->phy_flags |= PHY_CRC_FIX_FLAG; 5919 + else if (CHIP_ID(bp) == CHIP_ID_5709_A0) 5920 + bp->phy_flags |= PHY_DIS_EARLY_DAC_FLAG; 5929 5921 5930 5922 if ((CHIP_ID(bp) == CHIP_ID_5708_A0) || 5931 5923 (CHIP_ID(bp) == CHIP_ID_5708_B0) ||
+6
drivers/net/bnx2.h
··· 6288 6288 6289 6289 #define BCM5708S_TX_ACTL3 0x17 6290 6290 6291 + #define MII_BNX2_DSP_RW_PORT 0x15 6292 + #define MII_BNX2_DSP_ADDRESS 0x17 6293 + #define MII_BNX2_DSP_EXPAND_REG 0x0f00 6294 + 6291 6295 #define MIN_ETHERNET_PACKET_SIZE 60 6292 6296 #define MAX_ETHERNET_PACKET_SIZE 1514 6293 6297 #define MAX_ETHERNET_JUMBO_PACKET_SIZE 9014 ··· 6493 6489 #define PHY_INT_MODE_MASK_FLAG 0x300 6494 6490 #define PHY_INT_MODE_AUTO_POLLING_FLAG 0x100 6495 6491 #define PHY_INT_MODE_LINK_READY_FLAG 0x200 6492 + #define PHY_DIS_EARLY_DAC_FLAG 0x400 6496 6493 6497 6494 u32 chip_id; 6498 6495 /* chip num:16-31, rev:12-15, metal:4-11, bond_id:0-3 */ ··· 6517 6512 #define CHIP_ID_5708_A0 0x57080000 6518 6513 #define CHIP_ID_5708_B0 0x57081000 6519 6514 #define CHIP_ID_5708_B1 0x57081010 6515 + #define CHIP_ID_5709_A0 0x57090000 6520 6516 6521 6517 #define CHIP_BOND_ID(bp) (((bp)->chip_id) & 0xf) 6522 6518
+4 -3
drivers/net/bonding/bonding.h
··· 151 151 struct slave *next; 152 152 struct slave *prev; 153 153 int delay; 154 - u32 jiffies; 155 - u32 last_arp_rx; 154 + unsigned long jiffies; 155 + unsigned long last_arp_rx; 156 156 s8 link; /* one of BOND_LINK_XXXX */ 157 157 s8 state; /* one of BOND_STATE_XXXX */ 158 158 u32 original_flags; ··· 242 242 return bond->params.arp_validate & (1 << slave->state); 243 243 } 244 244 245 - extern inline u32 slave_last_rx(struct bonding *bond, struct slave *slave) 245 + extern inline unsigned long slave_last_rx(struct bonding *bond, 246 + struct slave *slave) 246 247 { 247 248 if (slave_do_arp_validate(bond, slave)) 248 249 return slave->last_arp_rx;
+2 -5
drivers/net/e100.c
··· 2718 2718 struct net_device *netdev = pci_get_drvdata(pdev); 2719 2719 struct nic *nic = netdev_priv(netdev); 2720 2720 2721 - #ifdef CONFIG_E100_NAPI 2722 2721 if (netif_running(netdev)) 2723 2722 netif_poll_disable(nic->netdev); 2724 - #endif 2725 2723 del_timer_sync(&nic->watchdog); 2726 2724 netif_carrier_off(nic->netdev); 2725 + netif_device_detach(netdev); 2727 2726 2728 2727 pci_save_state(pdev); 2729 2728 ··· 2735 2736 } 2736 2737 2737 2738 pci_disable_device(pdev); 2739 + free_irq(pdev->irq, netdev); 2738 2740 pci_set_power_state(pdev, PCI_D3hot); 2739 2741 2740 2742 return 0; ··· 2759 2759 } 2760 2760 #endif /* CONFIG_PM */ 2761 2761 2762 - 2763 2762 static void e100_shutdown(struct pci_dev *pdev) 2764 2763 { 2765 2764 struct net_device *netdev = pci_get_drvdata(pdev); 2766 2765 struct nic *nic = netdev_priv(netdev); 2767 2766 2768 - #ifdef CONFIG_E100_NAPI 2769 2767 if (netif_running(netdev)) 2770 2768 netif_poll_disable(nic->netdev); 2771 - #endif 2772 2769 del_timer_sync(&nic->watchdog); 2773 2770 netif_carrier_off(nic->netdev); 2774 2771
+1 -1
drivers/net/ehea/ehea.h
··· 39 39 #include <asm/io.h> 40 40 41 41 #define DRV_NAME "ehea" 42 - #define DRV_VERSION "EHEA_0043" 42 + #define DRV_VERSION "EHEA_0045" 43 43 44 44 #define EHEA_MSG_DEFAULT (NETIF_MSG_LINK | NETIF_MSG_TIMER \ 45 45 | NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)
+59 -28
drivers/net/ehea/ehea_main.c
··· 558 558 u32 qp_token;
559 559
560 560 eqe = ehea_poll_eq(port->qp_eq);
561 - ehea_debug("eqe=%p", eqe);
561 +
562 562 while (eqe) {
563 - ehea_debug("*eqe=%lx", *(u64*)eqe);
564 - eqe = ehea_poll_eq(port->qp_eq);
565 563 qp_token = EHEA_BMASK_GET(EHEA_EQE_QP_TOKEN, eqe->entry);
566 - ehea_debug("next eqe=%p", eqe);
564 + ehea_error("QP aff_err: entry=0x%lx, token=0x%x",
565 + eqe->entry, qp_token);
566 + eqe = ehea_poll_eq(port->qp_eq);
567 567 }
568 568
569 569 return IRQ_HANDLED;
··· 575 575 int i;
576 576
577 577 for (i = 0; i < adapter->num_ports; i++)
578 - if (adapter->port[i]->logical_port_id == logical_port)
579 - return adapter->port[i];
578 + if (adapter->port[i])
579 + if (adapter->port[i]->logical_port_id == logical_port)
580 + return adapter->port[i];
580 581 return NULL;
581 582 }
··· 642 641 port->full_duplex = 0;
643 642 break;
644 643 }
644 +
645 + port->autoneg = 1;
645 646
646 647 /* Number of default QPs */
647 648 port->num_def_qps = cb0->num_default_qps;
··· 731 728 }
732 729 } else {
733 730 if (hret == H_AUTHORITY) {
734 - ehea_info("Hypervisor denied setting port speed. Either"
735 - " this partition is not authorized to set "
736 - "port speed or another partition has modified"
737 - " port speed first.");
731 + ehea_info("Hypervisor denied setting port speed");
738 732 ret = -EPERM;
739 733 } else {
740 734 ret = -EIO;
··· 998 998 | EHEA_BMASK_SET(PXLY_RC_JUMBO_FRAME, 1);
999 999
1000 1000 for (i = 0; i < port->num_def_qps; i++)
1001 - cb0->default_qpn_arr[i] = port->port_res[i].qp->init_attr.qp_nr;
1001 + cb0->default_qpn_arr[i] = port->port_res[0].qp->init_attr.qp_nr;
1002 1002
1003 1003 if (netif_msg_ifup(port))
1004 1004 ehea_dump(cb0, sizeof(*cb0), "ehea_configure_port");
··· 1485 1485
1486 1486 static void ehea_promiscuous_error(u64 hret, int enable)
1487 1487 {
1488 - ehea_info("Hypervisor denied %sabling promiscuous mode.%s",
1489 - enable == 1 ? "en" : "dis",
1490 - hret != H_AUTHORITY ? "" : " Another partition owning a "
1491 - "logical port on the same physical port might have altered "
1492 - "promiscuous mode first.");
1488 + if (hret == H_AUTHORITY)
1489 + ehea_info("Hypervisor denied %sabling promiscuous mode",
1490 + enable == 1 ? "en" : "dis");
1491 + else
1492 + ehea_error("failed %sabling promiscuous mode",
1493 + enable == 1 ? "en" : "dis");
1493 1494 }
1494 1495
1495 1496 static void ehea_promiscuous(struct net_device *dev, int enable)
··· 2268 2267 int ehea_sense_adapter_attr(struct ehea_adapter *adapter)
2269 2268 {
2270 2269 struct hcp_query_ehea *cb;
2270 + struct device_node *lhea_dn = NULL;
2271 + struct device_node *eth_dn = NULL;
2271 2272 u64 hret;
2272 2273 int ret;
2273 2274
··· 2286 2283 goto out_herr;
2287 2284 }
2288 2285
2289 - adapter->num_ports = cb->num_ports;
2286 + /* Determine the number of available logical ports
2287 + * by counting the child nodes of the lhea OFDT entry
2288 + */
2289 + adapter->num_ports = 0;
2290 + lhea_dn = of_find_node_by_name(lhea_dn, "lhea");
2291 + do {
2292 + eth_dn = of_get_next_child(lhea_dn, eth_dn);
2293 + if (eth_dn)
2294 + adapter->num_ports++;
2295 + } while ( eth_dn );
2296 + of_node_put(lhea_dn);
2297 +
2290 2298 adapter->max_mc_mac = cb->max_mc_mac - 1;
2291 2299 ret = 0;
2292 2300
··· 2316 2302 struct ehea_adapter *adapter = port->adapter;
2317 2303 struct hcp_ehea_port_cb4 *cb4;
2318 2304 u32 *dn_log_port_id;
2305 + int jumbo = 0;
2319 2306
2320 2307 sema_init(&port->port_lock, 1);
2321 2308 port->state = EHEA_PORT_DOWN;
··· 2349 2334
2350 2335 INIT_LIST_HEAD(&port->mc_list->list);
2351 2336
2352 - ehea_set_portspeed(port, EHEA_SPEED_AUTONEG);
2353 -
2354 2337 ret = ehea_sense_port_attr(port);
2355 2338 if (ret)
2356 2339 goto out;
··· 2358 2345 if (!cb4) {
2359 2346 ehea_error("no mem for cb4");
2360 2347 } else {
2361 - cb4->jumbo_frame = 1;
2362 - hret = ehea_h_modify_ehea_port(adapter->handle,
2363 - port->logical_port_id,
2364 - H_PORT_CB4, H_PORT_CB4_JUMBO,
2365 - cb4);
2366 - if (hret != H_SUCCESS) {
2367 - ehea_info("Jumbo frames not activated");
2348 + hret = ehea_h_query_ehea_port(adapter->handle,
2349 + port->logical_port_id,
2350 + H_PORT_CB4,
2351 + H_PORT_CB4_JUMBO, cb4);
2352 +
2353 + if (hret == H_SUCCESS) {
2354 + if (cb4->jumbo_frame)
2355 + jumbo = 1;
2356 + else {
2357 + cb4->jumbo_frame = 1;
2358 + hret = ehea_h_modify_ehea_port(adapter->handle,
2359 + port->
2360 + logical_port_id,
2361 + H_PORT_CB4,
2362 + H_PORT_CB4_JUMBO,
2363 + cb4);
2364 + if (hret == H_SUCCESS)
2365 + jumbo = 1;
2366 + }
2367 + }
2368 2367 kfree(cb4);
2369 2368 }
··· 2414 2389 ehea_error("register_netdev failed. ret=%d", ret);
2415 2390 goto out_free;
2416 2391 }
2392 +
2393 + ehea_info("%s: Jumbo frames are %sabled", dev->name,
2394 + jumbo == 1 ? "en" : "dis");
2417 2395
2418 2396 port->netdev = dev;
2419 2397 ret = 0;
··· 2499 2471
2500 2472 adapter_handle = (u64*)get_property(dev->ofdev.node, "ibm,hea-handle",
2501 2473 NULL);
2502 - if (!adapter_handle) {
2474 + if (adapter_handle)
2475 + adapter->handle = *adapter_handle;
2476 +
2477 + if (!adapter->handle) {
2503 2478 dev_err(&dev->ofdev.dev, "failed getting handle for adapter"
2504 2479 " '%s'\n", dev->ofdev.node->full_name);
2505 2480 ret = -ENODEV;
2506 2481 goto out_free_ad;
2507 2482 }
2508 2483
2509 - adapter->handle = *adapter_handle;
2510 2484 adapter->pd = EHEA_PD_ID;
2511 2485
2512 2486 dev->ofdev.dev.driver_data = adapter;
··· 2598 2568 destroy_workqueue(adapter->ehea_wq);
2599 2569
2600 2570 ibmebus_free_irq(NULL, adapter->neq->attr.ist1, adapter);
2571 + tasklet_kill(&adapter->neq_tasklet);
2601 2572
2602 2573 ehea_destroy_eq(adapter->neq);
2603 2574
+8 -2
drivers/net/ehea/ehea_phyp.c
··· 94 94 { 95 95 long ret; 96 96 int i, sleep_msecs; 97 + u8 cb_cat; 97 98 98 99 for (i = 0; i < 5; i++) { 99 100 ret = plpar_hcall9(opcode, outs, ··· 107 106 continue; 108 107 } 109 108 110 - if (ret < H_SUCCESS) 109 + cb_cat = EHEA_BMASK_GET(H_MEHEAPORT_CAT, arg2); 110 + 111 + if ((ret < H_SUCCESS) && !(((ret == H_AUTHORITY) 112 + && (opcode == H_MODIFY_HEA_PORT)) 113 + && (((cb_cat == H_PORT_CB4) && ((arg3 == H_PORT_CB4_JUMBO) 114 + || (arg3 == H_PORT_CB4_SPEED))) || ((cb_cat == H_PORT_CB7) 115 + && (arg3 == H_PORT_CB7_DUCQPN))))) 111 116 ehea_error("opcode=%lx ret=%lx" 112 117 " arg1=%lx arg2=%lx arg3=%lx arg4=%lx" 113 118 " arg5=%lx arg6=%lx arg7=%lx arg8=%lx" ··· 127 120 outs[0], outs[1], outs[2], outs[3], 128 121 outs[4], outs[5], outs[6], outs[7], 129 122 outs[8]); 130 - 131 123 return ret; 132 124 } 133 125
+9 -4
drivers/net/fs_enet/mac-fec.c
··· 104 104 fep->interrupt = platform_get_irq_byname(pdev,"interrupt"); 105 105 if (fep->interrupt < 0) 106 106 return -EINVAL; 107 - 107 + 108 108 r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs"); 109 - fep->fec.fecp =(void*)r->start; 109 + fep->fec.fecp = ioremap(r->start, r->end - r->start + 1); 110 110 111 111 if(fep->fec.fecp == NULL) 112 112 return -EINVAL; ··· 319 319 * Clear any outstanding interrupt. 320 320 */ 321 321 FW(fecp, ievent, 0xffc0); 322 + #ifndef CONFIG_PPC_MERGE 322 323 FW(fecp, ivec, (fep->interrupt / 2) << 29); 323 - 324 + #else 325 + FW(fecp, ivec, (virq_to_hw(fep->interrupt) / 2) << 29); 326 + #endif 324 327 325 328 /* 326 - * adjust to speed (only for DUET & RMII) 329 + * adjust to speed (only for DUET & RMII) 327 330 */ 328 331 #ifdef CONFIG_DUET 329 332 if (fpi->use_rmii) { ··· 421 418 422 419 static void pre_request_irq(struct net_device *dev, int irq) 423 420 { 421 + #ifndef CONFIG_PPC_MERGE 424 422 immap_t *immap = fs_enet_immap; 425 423 u32 siel; 426 424 ··· 435 431 siel &= ~(0x80000000 >> (irq & ~1)); 436 432 out_be32(&immap->im_siu_conf.sc_siel, siel); 437 433 } 434 + #endif 438 435 } 439 436 440 437 static void post_free_irq(struct net_device *dev, int irq)
+4 -2
drivers/net/fs_enet/mac-scc.c
··· 121 121 return -EINVAL; 122 122 123 123 r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs"); 124 - fep->scc.sccp = (void *)r->start; 124 + fep->scc.sccp = ioremap(r->start, r->end - r->start + 1); 125 125 126 126 if (fep->scc.sccp == NULL) 127 127 return -EINVAL; 128 128 129 129 r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pram"); 130 - fep->scc.ep = (void *)r->start; 130 + fep->scc.ep = ioremap(r->start, r->end - r->start + 1); 131 131 132 132 if (fep->scc.ep == NULL) 133 133 return -EINVAL; ··· 397 397 398 398 static void pre_request_irq(struct net_device *dev, int irq) 399 399 { 400 + #ifndef CONFIG_PPC_MERGE 400 401 immap_t *immap = fs_enet_immap; 401 402 u32 siel; 402 403 ··· 411 410 siel &= ~(0x80000000 >> (irq & ~1)); 412 411 out_be32(&immap->im_siu_conf.sc_siel, siel); 413 412 } 413 + #endif 414 414 } 415 415 416 416 static void post_free_irq(struct net_device *dev, int irq)
+3 -3
drivers/net/hamradio/Kconfig
··· 113 113 114 114 config BAYCOM_SER_FDX 115 115 tristate "BAYCOM ser12 fullduplex driver for AX.25" 116 - depends on AX25 116 + depends on AX25 && !S390 117 117 select CRC_CCITT 118 118 ---help--- 119 119 This is one of two drivers for Baycom style simple amateur radio ··· 133 133 134 134 config BAYCOM_SER_HDX 135 135 tristate "BAYCOM ser12 halfduplex driver for AX.25" 136 - depends on AX25 136 + depends on AX25 && !S390 137 137 select CRC_CCITT 138 138 ---help--- 139 139 This is one of two drivers for Baycom style simple amateur radio ··· 181 181 182 182 config YAM 183 183 tristate "YAM driver for AX.25" 184 - depends on AX25 184 + depends on AX25 && !S390 185 185 help 186 186 The YAM is a modem for packet radio which connects to the serial 187 187 port and includes some of the functions of a Terminal Node
+20 -25
drivers/net/irda/irda-usb.c
··· 441 441 goto drop;
442 442 }
443 443
444 - /* Make sure there is room for IrDA-USB header. The actual
445 - * allocation will be done lower in skb_push().
446 - * Also, we don't use directly skb_cow(), because it require
447 - * headroom >= 16, which force unnecessary copies - Jean II */
448 - if (skb_headroom(skb) < self->header_length) {
449 - IRDA_DEBUG(0, "%s(), Insuficient skb headroom.\n", __FUNCTION__);
450 - if (skb_cow(skb, self->header_length)) {
451 - IRDA_WARNING("%s(), failed skb_cow() !!!\n", __FUNCTION__);
452 - goto drop;
453 - }
454 - }
444 + memcpy(self->tx_buff + self->header_length, skb->data, skb->len);
455 445
456 446 /* Change setting for next frame */
457 -
458 447 if (self->capability & IUC_STIR421X) {
459 448 __u8 turnaround_time;
460 - __u8* frame;
449 + __u8* frame = self->tx_buff;
461 450 turnaround_time = get_turnaround_time( skb );
462 - frame= skb_push(skb, self->header_length);
463 451 irda_usb_build_header(self, frame, 0);
464 452 frame[2] = turnaround_time;
465 453 if ((skb->len != 0) &&
··· 460 472 frame[1] = 0;
461 473 }
462 474 } else {
463 - irda_usb_build_header(self, skb_push(skb, self->header_length), 0);
475 + irda_usb_build_header(self, self->tx_buff, 0);
464 476 }
465 477
466 478 /* FIXME: Make macro out of this one */
467 479 ((struct irda_skb_cb *)skb->cb)->context = self;
468 480
469 - usb_fill_bulk_urb(urb, self->usbdev,
481 + usb_fill_bulk_urb(urb, self->usbdev,
470 482 usb_sndbulkpipe(self->usbdev, self->bulk_out_ep),
471 - skb->data, IRDA_SKB_MAX_MTU,
483 + self->tx_buff, skb->len + self->header_length,
472 484 write_bulk_callback, skb);
473 - urb->transfer_buffer_length = skb->len;
485 +
474 486 /* This flag (URB_ZERO_PACKET) indicates that what we send is not
475 487 * a continuous stream of data but separate packets.
* In this case, the USB layer will insert an empty USB frame (TD)
··· 1443 1455 /* Remove the speed buffer */
1444 1456 kfree(self->speed_buff);
1445 1457 self->speed_buff = NULL;
1458 +
1459 + kfree(self->tx_buff);
1460 + self->tx_buff = NULL;
1446 1461 }
1447 1462
1448 1463 /********************** USB CONFIG SUBROUTINES **********************/
··· 1515 1524
1516 1525 IRDA_DEBUG(0, "%s(), And our endpoints are : in=%02X, out=%02X (%d), int=%02X\n",
1517 1526 __FUNCTION__, self->bulk_in_ep, self->bulk_out_ep, self->bulk_out_mtu, self->bulk_int_ep);
1518 - /* Should be 8, 16, 32 or 64 bytes */
1519 - IRDA_ASSERT(self->bulk_out_mtu == 64, ;);
1520 1527
1521 1528 return((self->bulk_in_ep != 0) && (self->bulk_out_ep != 0));
1522 1529 }
··· 1742 1753
1743 1754 memset(self->speed_buff, 0, IRDA_USB_SPEED_MTU);
1744 1755
1756 + self->tx_buff = kzalloc(IRDA_SKB_MAX_MTU + self->header_length,
1757 + GFP_KERNEL);
1758 + if (self->tx_buff == NULL)
1759 + goto err_out_4;
1760 +
1745 1761 ret = irda_usb_open(self);
1746 1762 if (ret)
1747 - goto err_out_4;
1763 + goto err_out_5;
1748 1764
1749 1765 IRDA_MESSAGE("IrDA: Registered device %s\n", net->name);
1750 1766 usb_set_intfdata(intf, self);
··· 1760 1766 self->needspatch = (ret < 0);
1761 1767 if (self->needspatch) {
1762 1768 IRDA_ERROR("STIR421X: Couldn't upload patch\n");
1763 - goto err_out_5;
1769 + goto err_out_6;
1764 1770 }
1765 1771
1766 1772 /* replace IrDA class descriptor with what patched device is now reporting */
1767 1773 irda_desc = irda_usb_find_class_desc (self->usbintf);
1768 1774 if (irda_desc == NULL) {
1769 1775 ret = -ENODEV;
1770 - goto err_out_6;
1771 1777 }
1772 1778 if (self->irda_desc)
1773 1779 kfree (self->irda_desc);
··· 1776 1782 }
1777 1783
1778 1784 return 0;
1779 -
1780 1785 err_out_6:
1781 1786 unregister_netdev(self->netdev);
1787 + err_out_5:
1788 + kfree(self->tx_buff);
1782 1789 err_out_4:
1783 1790 kfree(self->speed_buff);
1784 1791 err_out_3:
+1
drivers/net/irda/irda-usb.h
··· 156 156 struct irlap_cb *irlap; /* The link layer we are binded to */ 157 157 struct qos_info qos; 158 158 char *speed_buff; /* Buffer for speed changes */ 159 + char *tx_buff; 159 160 160 161 struct timeval stamp; 161 162 struct timeval now;
+1 -1
drivers/net/irda/stir4200.c
··· 59 59 #include <asm/byteorder.h> 60 60 #include <asm/unaligned.h> 61 61 62 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 62 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 63 63 MODULE_DESCRIPTION("IrDA-USB Dongle Driver for SigmaTel STIr4200"); 64 64 MODULE_LICENSE("GPL"); 65 65
+8 -8
drivers/net/irda/vlsi_ir.c
··· 166 166 unsigned i;
167 167
168 168 seq_printf(seq, "\n%s (vid/did: %04x/%04x)\n",
169 - PCIDEV_NAME(pdev), (int)pdev->vendor, (int)pdev->device);
169 + pci_name(pdev), (int)pdev->vendor, (int)pdev->device);
170 170 seq_printf(seq, "pci-power-state: %u\n", (unsigned) pdev->current_state);
171 171 seq_printf(seq, "resources: irq=%u / io=0x%04x / dma_mask=0x%016Lx\n",
172 172 pdev->irq, (unsigned)pci_resource_start(pdev, 0), (unsigned long long)pdev->dma_mask);
··· 1401 1401
1402 1402 if (vlsi_start_hw(idev))
1403 1403 IRDA_ERROR("%s: failed to restart hw - %s(%s) unusable!\n",
1404 - __FUNCTION__, PCIDEV_NAME(idev->pdev), ndev->name);
1404 + __FUNCTION__, pci_name(idev->pdev), ndev->name);
1405 1405 else
1406 1406 netif_start_queue(ndev);
1407 1407 }
··· 1643 1643 pdev->current_state = 0; /* hw must be running now */
1644 1644
1645 1645 IRDA_MESSAGE("%s: IrDA PCI controller %s detected\n",
1646 - drivername, PCIDEV_NAME(pdev));
1646 + drivername, pci_name(pdev));
1647 1647
1648 1648 if ( !pci_resource_start(pdev,0)
1649 1649 || !(pci_resource_flags(pdev,0) & IORESOURCE_IO) ) {
··· 1728 1728
1729 1729 pci_set_drvdata(pdev, NULL);
1730 1730
1731 - IRDA_MESSAGE("%s: %s removed\n", drivername, PCIDEV_NAME(pdev));
1731 + IRDA_MESSAGE("%s: %s removed\n", drivername, pci_name(pdev));
1732 1732 }
1733 1733
1734 1734 #ifdef CONFIG_PM
··· 1748 1748
1749 1749 if (!ndev) {
1750 1750 IRDA_ERROR("%s - %s: no netdevice \n",
1751 - __FUNCTION__, PCIDEV_NAME(pdev));
1751 + __FUNCTION__, pci_name(pdev));
1752 1752 return 0;
1753 1753 }
1754 1754 idev = ndev->priv;
··· 1759 1759 pdev->current_state = state.event;
1760 1760 }
1761 1761 else
1762 - IRDA_ERROR("%s - %s: invalid suspend request %u -> %u\n", __FUNCTION__, PCIDEV_NAME(pdev), pdev->current_state, state.event);
1762 + IRDA_ERROR("%s - %s: invalid suspend request %u -> %u\n", __FUNCTION__, pci_name(pdev), pdev->current_state, state.event);
1763 1763 up(&idev->sem);
1764 1764 return 0;
1765 1765 }
··· 1787 1787
1788 1788 if (!ndev) {
1789 1789 IRDA_ERROR("%s - %s: no netdevice \n",
1790 - __FUNCTION__, PCIDEV_NAME(pdev));
1790 + __FUNCTION__, pci_name(pdev));
1791 1791 return 0;
1792 1792 }
1793 1793 idev = ndev->priv;
··· 1795 1795 if (pdev->current_state == 0) {
1796 1796 up(&idev->sem);
1797 1797 IRDA_WARNING("%s - %s: already resumed\n",
1798 - __FUNCTION__, PCIDEV_NAME(pdev));
1798 + __FUNCTION__, pci_name(pdev));
1799 1799 return 0;
1800 1800 }
1801 1801
-33
drivers/net/irda/vlsi_ir.h
··· 41 41 #define PCI_CLASS_SUBCLASS_MASK 0xffff 42 42 #endif 43 43 44 - /* in recent 2.5 interrupt handlers have non-void return value */ 45 - #ifndef IRQ_RETVAL 46 - typedef void irqreturn_t; 47 - #define IRQ_NONE 48 - #define IRQ_HANDLED 49 - #define IRQ_RETVAL(x) 50 - #endif 51 - 52 - /* some stuff need to check kernelversion. Not all 2.5 stuff was present 53 - * in early 2.5.x - the test is merely to separate 2.4 from 2.5 54 - */ 55 - #include <linux/version.h> 56 - 57 - #if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0) 58 - 59 - /* PDE() introduced in 2.5.4 */ 60 - #ifdef CONFIG_PROC_FS 61 - #define PDE(inode) ((inode)->i_private) 62 - #endif 63 - 64 - /* irda crc16 calculation exported in 2.5.42 */ 65 - #define irda_calc_crc16(fcs,buf,len) (GOOD_FCS) 66 - 67 - /* we use this for unified pci device name access */ 68 - #define PCIDEV_NAME(pdev) ((pdev)->name) 69 - 70 - #else /* 2.5 or later */ 71 - 72 - /* whatever we get from the associated struct device - bus:slot:dev.fn id */ 73 - #define PCIDEV_NAME(pdev) (pci_name(pdev)) 74 - 75 - #endif 76 - 77 44 /* ================================================================ */ 78 45 79 46 /* non-standard PCI registers */
+9 -2
drivers/net/mv643xx_eth.c
··· 314 314 315 315 while (mp->tx_desc_count > 0) { 316 316 spin_lock_irqsave(&mp->lock, flags); 317 + 318 + /* tx_desc_count might have changed before acquiring the lock */ 319 + if (mp->tx_desc_count <= 0) { 320 + spin_unlock_irqrestore(&mp->lock, flags); 321 + return released; 322 + } 323 + 317 324 tx_index = mp->tx_used_desc_q; 318 325 desc = &mp->p_tx_desc_area[tx_index]; 319 326 cmd_sts = desc->cmd_sts; ··· 339 332 if (skb) 340 333 mp->tx_skb[tx_index] = NULL; 341 334 342 - spin_unlock_irqrestore(&mp->lock, flags); 343 - 344 335 if (cmd_sts & ETH_ERROR_SUMMARY) { 345 336 printk("%s: Error in TX\n", dev->name); 346 337 mp->stats.tx_errors++; 347 338 } 339 + 340 + spin_unlock_irqrestore(&mp->lock, flags); 348 341 349 342 if (cmd_sts & ETH_TX_FIRST_DESC) 350 343 dma_unmap_single(NULL, addr, count, DMA_TO_DEVICE);
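The mv643xx_eth hunk above fixes a classic check-then-lock race: the loop tests `tx_desc_count` without holding the lock, so the count must be validated again once the lock is acquired, since another context may have drained the remaining descriptors in between. A minimal standalone sketch of the pattern (`spin_lock`/`spin_unlock` here are trivial single-threaded stand-ins, not the kernel primitives):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the driver's spinlock (hypothetical names). */
static bool locked;
static void spin_lock(void)   { locked = true;  }
static void spin_unlock(void) { locked = false; }

static int tx_desc_count;

/* Free descriptors one at a time, mirroring the loop in the hunk above:
 * the outer check runs without the lock, so the count is validated
 * again once the lock is held. */
static int reclaim_tx_descs(int initial)
{
    int released = 0;

    tx_desc_count = initial;
    while (tx_desc_count > 0) {
        spin_lock();
        if (tx_desc_count <= 0) {   /* may have changed before locking */
            spin_unlock();
            break;
        }
        tx_desc_count--;
        released++;
        spin_unlock();
    }
    return released;
}
```

The hunk also keeps the error-summary accounting inside the locked region rather than after the unlock, for the same reason: the statistics it touches are protected by the same lock.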
+68 -75
drivers/net/netxen/netxen_nic.h
··· 63 63 64 64 #include "netxen_nic_hw.h" 65 65 66 - #define NETXEN_NIC_BUILD_NO "4" 66 + #define NETXEN_NIC_BUILD_NO "2" 67 67 #define _NETXEN_NIC_LINUX_MAJOR 3 68 68 #define _NETXEN_NIC_LINUX_MINOR 3 69 - #define _NETXEN_NIC_LINUX_SUBVERSION 2 70 - #define NETXEN_NIC_LINUX_VERSIONID "3.3.2" "-" NETXEN_NIC_BUILD_NO 71 - #define NETXEN_NIC_FW_VERSIONID "3.3.2" 69 + #define _NETXEN_NIC_LINUX_SUBVERSION 3 70 + #define NETXEN_NIC_LINUX_VERSIONID "3.3.3" "-" NETXEN_NIC_BUILD_NO 72 71 73 72 #define RCV_DESC_RINGSIZE \ 74 73 (sizeof(struct rcv_desc) * adapter->max_rx_desc_count) ··· 239 240 240 241 typedef u32 netxen_ctx_msg; 241 242 242 - #define _netxen_set_bits(config_word, start, bits, val) {\ 243 - unsigned long long mask = (((1ULL << (bits)) - 1) << (start)); \ 244 - unsigned long long value = (val); \ 245 - (config_word) &= ~mask; \ 246 - (config_word) |= (((value) << (start)) & mask); \ 247 - } 248 - 249 243 #define netxen_set_msg_peg_id(config_word, val) \ 250 - _netxen_set_bits(config_word, 0, 2, val) 244 + ((config_word) &= ~3, (config_word) |= val & 3) 251 245 #define netxen_set_msg_privid(config_word) \ 252 - set_bit(2, (unsigned long*)&config_word) 246 + ((config_word) |= 1 << 2) 253 247 #define netxen_set_msg_count(config_word, val) \ 254 - _netxen_set_bits(config_word, 3, 15, val) 248 + ((config_word) &= ~(0x7fff<<3), (config_word) |= (val & 0x7fff) << 3) 255 249 #define netxen_set_msg_ctxid(config_word, val) \ 256 - _netxen_set_bits(config_word, 18, 10, val) 250 + ((config_word) &= ~(0x3ff<<18), (config_word) |= (val & 0x3ff) << 18) 257 251 #define netxen_set_msg_opcode(config_word, val) \ 258 - _netxen_set_bits(config_word, 28, 4, val) 252 + ((config_word) &= ~(0xf<<24), (config_word) |= (val & 0xf) << 24) 259 253 260 254 struct netxen_rcv_context { 261 - u32 rcv_ring_addr_lo; 262 - u32 rcv_ring_addr_hi; 263 - u32 rcv_ring_size; 264 - u32 rsrvd; 255 + __le64 rcv_ring_addr; 256 + __le32 rcv_ring_size; 257 + __le32 rsrvd; 265 258 }; 266 259 267 260 
struct netxen_ring_ctx { 268 261 269 262 /* one command ring */ 270 - u64 cmd_consumer_offset; 271 - u32 cmd_ring_addr_lo; 272 - u32 cmd_ring_addr_hi; 273 - u32 cmd_ring_size; 274 - u32 rsrvd; 263 + __le64 cmd_consumer_offset; 264 + __le64 cmd_ring_addr; 265 + __le32 cmd_ring_size; 266 + __le32 rsrvd; 275 267 276 268 /* three receive rings */ 277 269 struct netxen_rcv_context rcv_ctx[3]; 278 270 279 271 /* one status ring */ 280 - u32 sts_ring_addr_lo; 281 - u32 sts_ring_addr_hi; 282 - u32 sts_ring_size; 272 + __le64 sts_ring_addr; 273 + __le32 sts_ring_size; 283 274 284 - u32 ctx_id; 275 + __le32 ctx_id; 285 276 } __attribute__ ((aligned(64))); 286 277 287 278 /* ··· 295 306 ((cmd_desc)->port_ctxid |= ((var) & 0x0F)) 296 307 297 308 #define netxen_set_cmd_desc_flags(cmd_desc, val) \ 298 - _netxen_set_bits((cmd_desc)->flags_opcode, 0, 7, val) 309 + ((cmd_desc)->flags_opcode &= ~cpu_to_le16(0x7f), \ 310 + (cmd_desc)->flags_opcode |= cpu_to_le16((val) & 0x7f)) 299 311 #define netxen_set_cmd_desc_opcode(cmd_desc, val) \ 300 - _netxen_set_bits((cmd_desc)->flags_opcode, 7, 6, val) 312 + ((cmd_desc)->flags_opcode &= ~cpu_to_le16(0x3f<<7), \ 313 + (cmd_desc)->flags_opcode |= cpu_to_le16((val) & (0x3f<<7))) 301 314 302 315 #define netxen_set_cmd_desc_num_of_buff(cmd_desc, val) \ 303 - _netxen_set_bits((cmd_desc)->num_of_buffers_total_length, 0, 8, val); 316 + ((cmd_desc)->num_of_buffers_total_length &= ~cpu_to_le32(0xff), \ 317 + (cmd_desc)->num_of_buffers_total_length |= cpu_to_le32((val) & 0xff)) 304 318 #define netxen_set_cmd_desc_totallength(cmd_desc, val) \ 305 - _netxen_set_bits((cmd_desc)->num_of_buffers_total_length, 8, 24, val); 319 + ((cmd_desc)->num_of_buffers_total_length &= cpu_to_le32(0xff), \ 320 + (cmd_desc)->num_of_buffers_total_length |= cpu_to_le32(val << 24)) 306 321 307 322 #define netxen_get_cmd_desc_opcode(cmd_desc) \ 308 - (((cmd_desc)->flags_opcode >> 7) & 0x003F) 323 + ((le16_to_cpu((cmd_desc)->flags_opcode) >> 7) & 0x003F) 309 324 #define 
netxen_get_cmd_desc_totallength(cmd_desc) \ 310 - (((cmd_desc)->num_of_buffers_total_length >> 8) & 0x0FFFFFF) 325 + (le32_to_cpu((cmd_desc)->num_of_buffers_total_length) >> 8) 311 326 312 327 struct cmd_desc_type0 { 313 328 u8 tcp_hdr_offset; /* For LSO only */ 314 329 u8 ip_hdr_offset; /* For LSO only */ 315 330 /* Bit pattern: 0-6 flags, 7-12 opcode, 13-15 unused */ 316 - u16 flags_opcode; 331 + __le16 flags_opcode; 317 332 /* Bit pattern: 0-7 total number of segments, 318 333 8-31 Total size of the packet */ 319 - u32 num_of_buffers_total_length; 334 + __le32 num_of_buffers_total_length; 320 335 union { 321 336 struct { 322 - u32 addr_low_part2; 323 - u32 addr_high_part2; 337 + __le32 addr_low_part2; 338 + __le32 addr_high_part2; 324 339 }; 325 - u64 addr_buffer2; 340 + __le64 addr_buffer2; 326 341 }; 327 342 328 - u16 reference_handle; /* changed to u16 to add mss */ 329 - u16 mss; /* passed by NDIS_PACKET for LSO */ 343 + __le16 reference_handle; /* changed to u16 to add mss */ 344 + __le16 mss; /* passed by NDIS_PACKET for LSO */ 330 345 /* Bit pattern 0-3 port, 0-3 ctx id */ 331 346 u8 port_ctxid; 332 347 u8 total_hdr_length; /* LSO only : MAC+IP+TCP Hdr size */ 333 - u16 conn_id; /* IPSec offoad only */ 348 + __le16 conn_id; /* IPSec offoad only */ 334 349 335 350 union { 336 351 struct { 337 - u32 addr_low_part3; 338 - u32 addr_high_part3; 352 + __le32 addr_low_part3; 353 + __le32 addr_high_part3; 339 354 }; 340 - u64 addr_buffer3; 355 + __le64 addr_buffer3; 341 356 }; 342 357 union { 343 358 struct { 344 - u32 addr_low_part1; 345 - u32 addr_high_part1; 359 + __le32 addr_low_part1; 360 + __le32 addr_high_part1; 346 361 }; 347 - u64 addr_buffer1; 362 + __le64 addr_buffer1; 348 363 }; 349 364 350 - u16 buffer1_length; 351 - u16 buffer2_length; 352 - u16 buffer3_length; 353 - u16 buffer4_length; 365 + __le16 buffer1_length; 366 + __le16 buffer2_length; 367 + __le16 buffer3_length; 368 + __le16 buffer4_length; 354 369 355 370 union { 356 371 struct { 357 - 
u32 addr_low_part4; 358 - u32 addr_high_part4; 372 + __le32 addr_low_part4; 373 + __le32 addr_high_part4; 359 374 }; 360 - u64 addr_buffer4; 375 + __le64 addr_buffer4; 361 376 }; 362 377 363 - u64 unused; 378 + __le64 unused; 364 379 365 380 } __attribute__ ((aligned(64))); 366 381 367 382 /* Note: sizeof(rcv_desc) should always be a mutliple of 2 */ 368 383 struct rcv_desc { 369 - u16 reference_handle; 370 - u16 reserved; 371 - u32 buffer_length; /* allocated buffer length (usually 2K) */ 372 - u64 addr_buffer; 384 + __le16 reference_handle; 385 + __le16 reserved; 386 + __le32 buffer_length; /* allocated buffer length (usually 2K) */ 387 + __le64 addr_buffer; 373 388 }; 374 389 375 390 /* opcode field in status_desc */ ··· 399 406 (((status_desc)->lro & 0x80) >> 7) 400 407 401 408 #define netxen_get_sts_port(status_desc) \ 402 - ((status_desc)->status_desc_data & 0x0F) 409 + (le64_to_cpu((status_desc)->status_desc_data) & 0x0F) 403 410 #define netxen_get_sts_status(status_desc) \ 404 - (((status_desc)->status_desc_data >> 4) & 0x0F) 411 + ((le64_to_cpu((status_desc)->status_desc_data) >> 4) & 0x0F) 405 412 #define netxen_get_sts_type(status_desc) \ 406 - (((status_desc)->status_desc_data >> 8) & 0x0F) 413 + ((le64_to_cpu((status_desc)->status_desc_data) >> 8) & 0x0F) 407 414 #define netxen_get_sts_totallength(status_desc) \ 408 - (((status_desc)->status_desc_data >> 12) & 0xFFFF) 415 + ((le64_to_cpu((status_desc)->status_desc_data) >> 12) & 0xFFFF) 409 416 #define netxen_get_sts_refhandle(status_desc) \ 410 - (((status_desc)->status_desc_data >> 28) & 0xFFFF) 417 + ((le64_to_cpu((status_desc)->status_desc_data) >> 28) & 0xFFFF) 411 418 #define netxen_get_sts_prot(status_desc) \ 412 - (((status_desc)->status_desc_data >> 44) & 0x0F) 419 + ((le64_to_cpu((status_desc)->status_desc_data) >> 44) & 0x0F) 413 420 #define netxen_get_sts_owner(status_desc) \ 414 - (((status_desc)->status_desc_data >> 56) & 0x03) 421 + ((le64_to_cpu((status_desc)->status_desc_data) >> 56) & 
0x03) 415 422 #define netxen_get_sts_opcode(status_desc) \ 416 - (((status_desc)->status_desc_data >> 58) & 0x03F) 423 + ((le64_to_cpu((status_desc)->status_desc_data) >> 58) & 0x03F) 417 424 418 425 #define netxen_clear_sts_owner(status_desc) \ 419 426 ((status_desc)->status_desc_data &= \ 420 - ~(((unsigned long long)3) << 56 )) 427 + ~cpu_to_le64(((unsigned long long)3) << 56 )) 421 428 #define netxen_set_sts_owner(status_desc, val) \ 422 429 ((status_desc)->status_desc_data |= \ 423 - (((unsigned long long)((val) & 0x3)) << 56 )) 430 + cpu_to_le64(((unsigned long long)((val) & 0x3)) << 56 )) 424 431 425 432 struct status_desc { 426 433 /* Bit pattern: 0-3 port, 4-7 status, 8-11 type, 12-27 total_length 427 434 28-43 reference_handle, 44-47 protocol, 48-52 unused 428 435 53-55 desc_cnt, 56-57 owner, 58-63 opcode 429 436 */ 430 - u64 status_desc_data; 431 - u32 hash_value; 437 + __le64 status_desc_data; 438 + __le32 hash_value; 432 439 u8 hash_type; 433 440 u8 msg_type; 434 441 u8 unused; ··· 999 1006 void netxen_niu_gbe_set_gmii_mode(struct netxen_adapter *adapter, int port, 1000 1007 long enable); 1001 1008 int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long phy, long reg, 1002 - __le32 * readval); 1009 + __u32 * readval); 1003 1010 int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter, long phy, 1004 - long reg, __le32 val); 1011 + long reg, __u32 val); 1005 1012 1006 1013 /* Functions available from netxen_nic_hw.c */ 1007 1014 int netxen_nic_set_mtu_xgb(struct netxen_port *port, int new_mtu);
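The netxen_nic.h changes above retag DMA descriptor fields as `__le16`/`__le32`/`__le64` and route every accessor through `le*_to_cpu`/`cpu_to_le*`, so the driver extracts the same bits on big-endian hosts. The idea can be sketched outside the kernel: read the little-endian wire word byte by byte into host order, then mask out the packed fields. The field offsets below follow the `status_desc` comment in the hunk (bits 0-3 port, 4-7 status, 12-27 total_length); the helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Read a little-endian 64-bit descriptor word from DMA memory into
 * host order, byte by byte, so the result is correct regardless of
 * host endianness. */
static uint64_t get_le64(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 7; i >= 0; i--)
        v = (v << 8) | p[i];
    return v;
}

/* Field accessors in the style of the netxen_get_sts_* macros above. */
static unsigned sts_port(uint64_t w)        { return (unsigned)(w & 0x0F); }
static unsigned sts_status(uint64_t w)      { return (unsigned)((w >> 4) & 0x0F); }
static unsigned sts_totallength(uint64_t w) { return (unsigned)((w >> 12) & 0xFFFF); }
```

On a little-endian host `get_le64` is a plain load, which is why the unconverted code happened to work there; the `__le64` annotation lets sparse catch any access that skips the conversion.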
+11 -11
drivers/net/netxen/netxen_nic_ethtool.c
··· 218 218 { 219 219 struct netxen_port *port = netdev_priv(dev); 220 220 struct netxen_adapter *adapter = port->adapter; 221 - __le32 status; 221 + __u32 status; 222 222 223 223 /* read which mode */ 224 224 if (adapter->ahw.board_type == NETXEN_NIC_GBE) { ··· 226 226 if (adapter->phy_write 227 227 && adapter->phy_write(adapter, port->portnum, 228 228 NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG, 229 - (__le32) ecmd->autoneg) != 0) 229 + ecmd->autoneg) != 0) 230 230 return -EIO; 231 231 else 232 232 port->link_autoneg = ecmd->autoneg; ··· 279 279 } 280 280 281 281 struct netxen_niu_regs { 282 - __le32 reg[NETXEN_NIC_REGS_COUNT]; 282 + __u32 reg[NETXEN_NIC_REGS_COUNT]; 283 283 }; 284 284 285 285 static struct netxen_niu_regs niu_registers[] = { ··· 372 372 { 373 373 struct netxen_port *port = netdev_priv(dev); 374 374 struct netxen_adapter *adapter = port->adapter; 375 - __le32 mode, *regs_buff = p; 375 + __u32 mode, *regs_buff = p; 376 376 void __iomem *addr; 377 377 int i, window; 378 378 ··· 415 415 { 416 416 struct netxen_port *port = netdev_priv(dev); 417 417 struct netxen_adapter *adapter = port->adapter; 418 - __le32 status; 418 + __u32 status; 419 419 420 420 /* read which mode */ 421 421 if (adapter->ahw.board_type == NETXEN_NIC_GBE) { ··· 482 482 { 483 483 struct netxen_port *port = netdev_priv(dev); 484 484 struct netxen_adapter *adapter = port->adapter; 485 - __le32 val; 485 + __u32 val; 486 486 487 487 if (adapter->ahw.board_type == NETXEN_NIC_GBE) { 488 488 /* get flow control settings */ 489 489 netxen_nic_read_w0(adapter, 490 490 NETXEN_NIU_GB_MAC_CONFIG_0(port->portnum), 491 - (u32 *) & val); 491 + &val); 492 492 pause->rx_pause = netxen_gb_get_rx_flowctl(val); 493 493 pause->tx_pause = netxen_gb_get_tx_flowctl(val); 494 494 /* get autoneg settings */ ··· 502 502 { 503 503 struct netxen_port *port = netdev_priv(dev); 504 504 struct netxen_adapter *adapter = port->adapter; 505 - __le32 val; 505 + __u32 val; 506 506 unsigned int autoneg; 507 507 508 508 /* 
read mode */ ··· 522 522 523 523 netxen_nic_write_w0(adapter, 524 524 NETXEN_NIU_GB_MAC_CONFIG_0(port->portnum), 525 - *(u32 *) (&val)); 525 + *&val); 526 526 /* set autoneg */ 527 527 autoneg = pause->autoneg; 528 528 if (adapter->phy_write 529 529 && adapter->phy_write(adapter, port->portnum, 530 530 NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG, 531 - (__le32) autoneg) != 0) 531 + autoneg) != 0) 532 532 return -EIO; 533 533 else { 534 534 port->link_autoneg = pause->autoneg; ··· 543 543 struct netxen_port *port = netdev_priv(dev); 544 544 struct netxen_adapter *adapter = port->adapter; 545 545 u32 data_read, data_written, save; 546 - __le32 mode; 546 + __u32 mode; 547 547 548 548 /* 549 549 * first test the "Read Only" registers by writing which mode
+20 -26
drivers/net/netxen/netxen_nic_hw.c
··· 95 95 struct netxen_port *port = netdev_priv(netdev); 96 96 struct netxen_adapter *adapter = port->adapter; 97 97 struct dev_mc_list *mc_ptr; 98 - __le32 netxen_mac_addr_cntl_data = 0; 98 + __u32 netxen_mac_addr_cntl_data = 0; 99 99 100 100 mc_ptr = netdev->mc_list; 101 101 if (netdev->flags & IFF_PROMISC) { ··· 236 236 } 237 237 memset(addr, 0, sizeof(struct netxen_ring_ctx)); 238 238 adapter->ctx_desc = (struct netxen_ring_ctx *)addr; 239 - adapter->ctx_desc->cmd_consumer_offset = adapter->ctx_desc_phys_addr 240 - + sizeof(struct netxen_ring_ctx); 239 + adapter->ctx_desc->cmd_consumer_offset = 240 + cpu_to_le64(adapter->ctx_desc_phys_addr + 241 + sizeof(struct netxen_ring_ctx)); 241 242 adapter->cmd_consumer = (uint32_t *) (((char *)addr) + 242 243 sizeof(struct netxen_ring_ctx)); 243 244 ··· 254 253 return -ENOMEM; 255 254 } 256 255 257 - adapter->ctx_desc->cmd_ring_addr_lo = 258 - hw->cmd_desc_phys_addr & 0xffffffffUL; 259 - adapter->ctx_desc->cmd_ring_addr_hi = 260 - ((u64) hw->cmd_desc_phys_addr >> 32); 261 - adapter->ctx_desc->cmd_ring_size = adapter->max_tx_desc_count; 256 + adapter->ctx_desc->cmd_ring_addr = 257 + cpu_to_le64(hw->cmd_desc_phys_addr); 258 + adapter->ctx_desc->cmd_ring_size = 259 + cpu_to_le32(adapter->max_tx_desc_count); 262 260 263 261 hw->cmd_desc_head = (struct cmd_desc_type0 *)addr; 264 262 ··· 278 278 return err; 279 279 } 280 280 rcv_desc->desc_head = (struct rcv_desc *)addr; 281 - adapter->ctx_desc->rcv_ctx[ring].rcv_ring_addr_lo = 282 - rcv_desc->phys_addr & 0xffffffffUL; 283 - adapter->ctx_desc->rcv_ctx[ring].rcv_ring_addr_hi = 284 - ((u64) rcv_desc->phys_addr >> 32); 281 + adapter->ctx_desc->rcv_ctx[ring].rcv_ring_addr = 282 + cpu_to_le64(rcv_desc->phys_addr); 285 283 adapter->ctx_desc->rcv_ctx[ring].rcv_ring_size = 286 - rcv_desc->max_rx_desc_count; 284 + cpu_to_le32(rcv_desc->max_rx_desc_count); 287 285 } 288 286 289 287 addr = netxen_alloc(adapter->ahw.pdev, STATUS_DESC_RINGSIZE, ··· 295 297 return err; 296 298 } 297 299 
recv_ctx->rcv_status_desc_head = (struct status_desc *)addr; 298 - adapter->ctx_desc->sts_ring_addr_lo = 299 - recv_ctx->rcv_status_desc_phys_addr & 0xffffffffUL; 300 - adapter->ctx_desc->sts_ring_addr_hi = 301 - ((u64) recv_ctx->rcv_status_desc_phys_addr >> 32); 302 - adapter->ctx_desc->sts_ring_size = adapter->max_rx_desc_count; 300 + adapter->ctx_desc->sts_ring_addr = 301 + cpu_to_le64(recv_ctx->rcv_status_desc_phys_addr); 302 + adapter->ctx_desc->sts_ring_size = 303 + cpu_to_le32(adapter->max_rx_desc_count); 303 304 304 305 } 305 306 /* Window = 1 */ ··· 384 387 } 385 388 adapter->stats.xmitcsummed++; 386 389 desc->tcp_hdr_offset = skb->h.raw - skb->data; 387 - netxen_set_cmd_desc_totallength(desc, 388 - cpu_to_le32 389 - (netxen_get_cmd_desc_totallength 390 - (desc))); 391 390 desc->ip_hdr_offset = skb->nh.raw - skb->data; 392 391 } 393 392 ··· 860 867 void netxen_nic_set_link_parameters(struct netxen_port *port) 861 868 { 862 869 struct netxen_adapter *adapter = port->adapter; 863 - __le32 status; 864 - __le32 autoneg; 865 - __le32 mode; 870 + __u32 status; 871 + __u32 autoneg; 872 + __u32 mode; 866 873 867 874 netxen_nic_read_w0(adapter, NETXEN_NIU_MODE, &mode); 868 875 if (netxen_get_niu_enable_ge(mode)) { /* Gb 10/100/1000 Mbps mode */ ··· 977 984 _NETXEN_NIC_LINUX_MAJOR, fw_major); 978 985 adapter->driver_mismatch = 1; 979 986 } 980 - if (fw_minor != _NETXEN_NIC_LINUX_MINOR) { 987 + if (fw_minor != _NETXEN_NIC_LINUX_MINOR && 988 + fw_minor != (_NETXEN_NIC_LINUX_MINOR + 1)) { 981 989 printk(KERN_ERR "The mismatch in driver version and firmware " 982 990 "version minor number\n" 983 991 "Driver version minor number = %d \t"
+37 -37
drivers/net/netxen/netxen_nic_hw.h
··· 124 124 */ 125 125 126 126 #define netxen_gb_enable_tx(config_word) \ 127 - set_bit(0, (unsigned long*)(&config_word)) 127 + ((config_word) |= 1 << 0) 128 128 #define netxen_gb_enable_rx(config_word) \ 129 - set_bit(2, (unsigned long*)(&config_word)) 129 + ((config_word) |= 1 << 2) 130 130 #define netxen_gb_tx_flowctl(config_word) \ 131 - set_bit(4, (unsigned long*)(&config_word)) 131 + ((config_word) |= 1 << 4) 132 132 #define netxen_gb_rx_flowctl(config_word) \ 133 - set_bit(5, (unsigned long*)(&config_word)) 133 + ((config_word) |= 1 << 5) 134 134 #define netxen_gb_tx_reset_pb(config_word) \ 135 - set_bit(16, (unsigned long*)(&config_word)) 135 + ((config_word) |= 1 << 16) 136 136 #define netxen_gb_rx_reset_pb(config_word) \ 137 - set_bit(17, (unsigned long*)(&config_word)) 137 + ((config_word) |= 1 << 17) 138 138 #define netxen_gb_tx_reset_mac(config_word) \ 139 - set_bit(18, (unsigned long*)(&config_word)) 139 + ((config_word) |= 1 << 18) 140 140 #define netxen_gb_rx_reset_mac(config_word) \ 141 - set_bit(19, (unsigned long*)(&config_word)) 141 + ((config_word) |= 1 << 19) 142 142 #define netxen_gb_soft_reset(config_word) \ 143 - set_bit(31, (unsigned long*)(&config_word)) 143 + ((config_word) |= 1 << 31) 144 144 145 145 #define netxen_gb_unset_tx_flowctl(config_word) \ 146 - clear_bit(4, (unsigned long *)(&config_word)) 146 + ((config_word) &= ~(1 << 4)) 147 147 #define netxen_gb_unset_rx_flowctl(config_word) \ 148 - clear_bit(5, (unsigned long*)(&config_word)) 148 + ((config_word) &= ~(1 << 5)) 149 149 150 150 #define netxen_gb_get_tx_synced(config_word) \ 151 151 _netxen_crb_get_bit((config_word), 1) ··· 171 171 */ 172 172 173 173 #define netxen_gb_set_duplex(config_word) \ 174 - set_bit(0, (unsigned long*)&config_word) 174 + ((config_word) |= 1 << 0) 175 175 #define netxen_gb_set_crc_enable(config_word) \ 176 - set_bit(1, (unsigned long*)&config_word) 176 + ((config_word) |= 1 << 1) 177 177 #define netxen_gb_set_padshort(config_word) \ 178 - set_bit(2, 
(unsigned long*)&config_word) 178 + ((config_word) |= 1 << 2) 179 179 #define netxen_gb_set_checklength(config_word) \ 180 - set_bit(4, (unsigned long*)&config_word) 180 + ((config_word) |= 1 << 4) 181 181 #define netxen_gb_set_hugeframes(config_word) \ 182 - set_bit(5, (unsigned long*)&config_word) 182 + ((config_word) |= 1 << 5) 183 183 #define netxen_gb_set_preamblelen(config_word, val) \ 184 184 ((config_word) |= ((val) << 12) & 0xF000) 185 185 #define netxen_gb_set_intfmode(config_word, val) \ ··· 190 190 #define netxen_gb_set_mii_mgmt_clockselect(config_word, val) \ 191 191 ((config_word) |= ((val) & 0x07)) 192 192 #define netxen_gb_mii_mgmt_reset(config_word) \ 193 - set_bit(31, (unsigned long*)&config_word) 193 + ((config_word) |= 1 << 31) 194 194 #define netxen_gb_mii_mgmt_unset(config_word) \ 195 - clear_bit(31, (unsigned long*)&config_word) 195 + ((config_word) &= ~(1 << 31)) 196 196 197 197 /* 198 198 * NIU GB MII Mgmt Command Register (applies to GB0, GB1, GB2, GB3) ··· 201 201 */ 202 202 203 203 #define netxen_gb_mii_mgmt_set_read_cycle(config_word) \ 204 - set_bit(0, (unsigned long*)&config_word) 204 + ((config_word) |= 1 << 0) 205 205 #define netxen_gb_mii_mgmt_reg_addr(config_word, val) \ 206 206 ((config_word) |= ((val) & 0x1F)) 207 207 #define netxen_gb_mii_mgmt_phy_addr(config_word, val) \ ··· 274 274 #define netxen_set_phy_speed(config_word, val) \ 275 275 ((config_word) |= ((val & 0x03) << 14)) 276 276 #define netxen_set_phy_duplex(config_word) \ 277 - set_bit(13, (unsigned long*)&config_word) 277 + ((config_word) |= 1 << 13) 278 278 #define netxen_clear_phy_duplex(config_word) \ 279 - clear_bit(13, (unsigned long*)&config_word) 279 + ((config_word) &= ~(1 << 13)) 280 280 281 281 #define netxen_get_phy_jabber(config_word) \ 282 282 _netxen_crb_get_bit(config_word, 0) ··· 350 350 _netxen_crb_get_bit(config_word, 15) 351 351 352 352 #define netxen_set_phy_int_link_status_changed(config_word) \ 353 - set_bit(10, (unsigned long*)&config_word) 353 
+ ((config_word) |= 1 << 10) 354 354 #define netxen_set_phy_int_autoneg_completed(config_word) \ 355 - set_bit(11, (unsigned long*)&config_word) 355 + ((config_word) |= 1 << 11) 356 356 #define netxen_set_phy_int_speed_changed(config_word) \ 357 - set_bit(14, (unsigned long*)&config_word) 357 + ((config_word) |= 1 << 14) 358 358 359 359 /* 360 360 * NIU Mode Register. ··· 382 382 */ 383 383 384 384 #define netxen_set_gb_drop_gb0(config_word) \ 385 - set_bit(0, (unsigned long*)&config_word) 385 + ((config_word) |= 1 << 0) 386 386 #define netxen_set_gb_drop_gb1(config_word) \ 387 - set_bit(1, (unsigned long*)&config_word) 387 + ((config_word) |= 1 << 1) 388 388 #define netxen_set_gb_drop_gb2(config_word) \ 389 - set_bit(2, (unsigned long*)&config_word) 389 + ((config_word) |= 1 << 2) 390 390 #define netxen_set_gb_drop_gb3(config_word) \ 391 - set_bit(3, (unsigned long*)&config_word) 391 + ((config_word) |= 1 << 3) 392 392 393 393 #define netxen_clear_gb_drop_gb0(config_word) \ 394 - clear_bit(0, (unsigned long*)&config_word) 394 + ((config_word) &= ~(1 << 0)) 395 395 #define netxen_clear_gb_drop_gb1(config_word) \ 396 - clear_bit(1, (unsigned long*)&config_word) 396 + ((config_word) &= ~(1 << 1)) 397 397 #define netxen_clear_gb_drop_gb2(config_word) \ 398 - clear_bit(2, (unsigned long*)&config_word) 398 + ((config_word) &= ~(1 << 2)) 399 399 #define netxen_clear_gb_drop_gb3(config_word) \ 400 - clear_bit(3, (unsigned long*)&config_word) 400 + ((config_word) &= ~(1 << 3)) 401 401 402 402 /* 403 403 * NIU XG MAC Config Register ··· 413 413 */ 414 414 415 415 #define netxen_xg_soft_reset(config_word) \ 416 - set_bit(4, (unsigned long*)&config_word) 416 + ((config_word) |= 1 << 4) 417 417 418 418 /* 419 419 * MAC Control Register ··· 433 433 #define netxen_nic_mcr_set_id_pool0(config, val) \ 434 434 ((config) |= ((val) &0x03)) 435 435 #define netxen_nic_mcr_set_enable_xtnd0(config) \ 436 - (set_bit(3, (unsigned long *)&(config))) 436 + ((config) |= 1 << 3) 437 437 
#define netxen_nic_mcr_set_id_pool1(config, val) \ 438 438 ((config) |= (((val) & 0x03) << 4)) 439 439 #define netxen_nic_mcr_set_enable_xtnd1(config) \ 440 - (set_bit(6, (unsigned long *)&(config))) 440 + ((config) |= 1 << 6) 441 441 #define netxen_nic_mcr_set_id_pool2(config, val) \ 442 442 ((config) |= (((val) & 0x03) << 8)) 443 443 #define netxen_nic_mcr_set_enable_xtnd2(config) \ 444 - (set_bit(10, (unsigned long *)&(config))) 444 + ((config) |= 1 << 10) 445 445 #define netxen_nic_mcr_set_id_pool3(config, val) \ 446 446 ((config) |= (((val) & 0x03) << 12)) 447 447 #define netxen_nic_mcr_set_enable_xtnd3(config) \ 448 - (set_bit(14, (unsigned long *)&(config))) 448 + ((config) |= 1 << 14) 449 449 #define netxen_nic_mcr_set_mode_select(config, val) \ 450 450 ((config) |= (((val) & 0x03) << 24)) 451 451 #define netxen_nic_mcr_set_enable_pool(config, val) \
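The netxen_nic_hw.h hunk replaces `set_bit()`/`clear_bit()` with plain bitwise operations. `set_bit()` works atomically on an `unsigned long`, so casting a `u32` config word's address to `unsigned long *` is both needlessly atomic for a local variable and incorrect on 64-bit big-endian machines, where the wide access lands on the wrong bytes. The replacement pattern as plain helpers (hypothetical names, a sketch only):

```c
#include <assert.h>
#include <stdint.h>

/* Plain, non-atomic bit helpers on a 32-bit word -- the pattern the
 * macros above switch to, instead of set_bit()/clear_bit(), which
 * operate on unsigned long and would touch the wrong bytes if a u32's
 * address were cast to unsigned long * on a 64-bit big-endian host. */
static void set_bit32(uint32_t *word, unsigned bit)
{
    *word |= UINT32_C(1) << bit;
}

static void clear_bit32(uint32_t *word, unsigned bit)
{
    *word &= ~(UINT32_C(1) << bit);
}
```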
+6 -9
drivers/net/netxen/netxen_nic_init.c
··· 690 690 desc_head = recv_ctx->rcv_status_desc_head; 691 691 desc = &desc_head[consumer]; 692 692 693 - if (((le16_to_cpu(netxen_get_sts_owner(desc))) 694 - & STATUS_OWNER_HOST)) 693 + if (netxen_get_sts_owner(desc) & STATUS_OWNER_HOST) 695 694 return 1; 696 695 } 697 696 ··· 786 787 struct netxen_port *port = adapter->port[netxen_get_sts_port(desc)]; 787 788 struct pci_dev *pdev = port->pdev; 788 789 struct net_device *netdev = port->netdev; 789 - int index = le16_to_cpu(netxen_get_sts_refhandle(desc)); 790 + int index = netxen_get_sts_refhandle(desc); 790 791 struct netxen_recv_context *recv_ctx = &(adapter->recv_ctx[ctxid]); 791 792 struct netxen_rx_buffer *buffer; 792 793 struct sk_buff *skb; 793 - u32 length = le16_to_cpu(netxen_get_sts_totallength(desc)); 794 + u32 length = netxen_get_sts_totallength(desc); 794 795 u32 desc_ctx; 795 796 struct netxen_rcv_desc_ctx *rcv_desc; 796 797 int ret; ··· 917 918 */ 918 919 while (count < max) { 919 920 desc = &desc_head[consumer]; 920 - if (! 921 - (le16_to_cpu(netxen_get_sts_owner(desc)) & 922 - STATUS_OWNER_HOST)) { 921 + if (!(netxen_get_sts_owner(desc) & STATUS_OWNER_HOST)) { 923 922 DPRINTK(ERR, "desc %p ownedby %x\n", desc, 924 923 netxen_get_sts_owner(desc)); 925 924 break; 926 925 } 927 926 netxen_process_rcv(adapter, ctxid, desc); 928 927 netxen_clear_sts_owner(desc); 929 - netxen_set_sts_owner(desc, cpu_to_le16(STATUS_OWNER_PHANTOM)); 928 + netxen_set_sts_owner(desc, STATUS_OWNER_PHANTOM); 930 929 consumer = (consumer + 1) & (adapter->max_rx_desc_count - 1); 931 930 count++; 932 931 } ··· 1229 1232 1230 1233 /* make a rcv descriptor */ 1231 1234 pdesc->reference_handle = cpu_to_le16(buffer->ref_handle); 1232 - pdesc->buffer_length = cpu_to_le16(rcv_desc->dma_size); 1235 + pdesc->buffer_length = cpu_to_le32(rcv_desc->dma_size); 1233 1236 pdesc->addr_buffer = cpu_to_le64(buffer->dma); 1234 1237 DPRINTK(INFO, "done writing descripter\n"); 1235 1238 producer =
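The netxen_nic_init.c hunk drops the `le16_to_cpu()` wrappers because the `netxen_get_sts_*` macros (reworked in netxen_nic.h above) now byte-swap internally; converting twice is harmless on little-endian hosts, where the conversion is the identity, but re-corrupts the value on big-endian ones. A standalone illustration, with `swap16` modelling what `le16_to_cpu` does on a big-endian host:

```c
#include <assert.h>
#include <stdint.h>

/* Models le16_to_cpu on a big-endian host: a byte swap.  (On a
 * little-endian host it is the identity, which is why the redundant
 * outer conversion went unnoticed there.) */
static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

/* An accessor that already converts, like the reworked
 * netxen_get_sts_* macros; the name is illustrative. */
static uint16_t get_field(uint16_t wire_le)
{
    return swap16(wire_le);
}
```

Wrapping `get_field()` in a second `swap16()` would hand back the raw wire value instead of the host value, which is exactly the bug the hunk removes.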
+2 -2
drivers/net/netxen/netxen_nic_isr.c
··· 79 79 void netxen_handle_port_int(struct netxen_adapter *adapter, u32 portno, 80 80 u32 enable) 81 81 { 82 - __le32 int_src; 82 + __u32 int_src; 83 83 struct netxen_port *port; 84 84 85 85 /* This should clear the interrupt source */ ··· 110 110 /* write it down later.. */ 111 111 if ((netxen_get_phy_int_speed_changed(int_src)) 112 112 || (netxen_get_phy_int_link_status_changed(int_src))) { 113 - __le32 status; 113 + __u32 status; 114 114 115 115 DPRINTK(INFO, "SPEED CHANGED OR LINK STATUS CHANGED \n"); 116 116
+5 -5
drivers/net/netxen/netxen_nic_main.c
··· 117 117 void __iomem *mem_ptr1 = NULL; 118 118 void __iomem *mem_ptr2 = NULL; 119 119 120 - u8 *db_ptr = NULL; 120 + u8 __iomem *db_ptr = NULL; 121 121 unsigned long mem_base, mem_len, db_base, db_len; 122 122 int pci_using_dac, i, err; 123 123 int ring; ··· 191 191 db_len); 192 192 193 193 db_ptr = ioremap(db_base, NETXEN_DB_MAPSIZE_BYTES); 194 - if (db_ptr == 0UL) { 194 + if (!db_ptr) { 195 195 printk(KERN_ERR "%s: Failed to allocate doorbell map.", 196 196 netxen_nic_driver_name); 197 197 err = -EIO; ··· 818 818 /* Take skb->data itself */ 819 819 pbuf = &adapter->cmd_buf_arr[producer]; 820 820 if ((netdev->features & NETIF_F_TSO) && skb_shinfo(skb)->gso_size > 0) { 821 - pbuf->mss = cpu_to_le16(skb_shinfo(skb)->gso_size); 821 + pbuf->mss = skb_shinfo(skb)->gso_size; 822 822 hwdesc->mss = cpu_to_le16(skb_shinfo(skb)->gso_size); 823 823 } else { 824 824 pbuf->mss = 0; ··· 882 882 hwdesc->addr_buffer3 = cpu_to_le64(temp_dma); 883 883 break; 884 884 case 3: 885 - hwdesc->buffer4_length = temp_len; 885 + hwdesc->buffer4_length = cpu_to_le16(temp_len); 886 886 hwdesc->addr_buffer4 = cpu_to_le64(temp_dma); 887 887 break; 888 888 } ··· 1144 1144 if ((netxen_workq = create_singlethread_workqueue("netxen")) == 0) 1145 1145 return -ENOMEM; 1146 1146 1147 - return pci_module_init(&netxen_driver); 1147 + return pci_register_driver(&netxen_driver); 1148 1148 } 1149 1149 1150 1150 module_init(netxen_init_module);
+53 -53
drivers/net/netxen/netxen_nic_niu.c
··· 89 89 * 90 90 */ 91 91 int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long phy, 92 - long reg, __le32 * readval) 92 + long reg, __u32 * readval) 93 93 { 94 94 long timeout = 0; 95 95 long result = 0; 96 96 long restore = 0; 97 - __le32 address; 98 - __le32 command; 99 - __le32 status; 100 - __le32 mac_cfg0; 97 + __u32 address; 98 + __u32 command; 99 + __u32 status; 100 + __u32 mac_cfg0; 101 101 102 102 if (phy_lock(adapter) != 0) { 103 103 return -1; ··· 112 112 &mac_cfg0, 4)) 113 113 return -EIO; 114 114 if (netxen_gb_get_soft_reset(mac_cfg0)) { 115 - __le32 temp; 115 + __u32 temp; 116 116 temp = 0; 117 117 netxen_gb_tx_reset_pb(temp); 118 118 netxen_gb_rx_reset_pb(temp); ··· 184 184 * 185 185 */ 186 186 int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter, 187 - long phy, long reg, __le32 val) 187 + long phy, long reg, __u32 val) 188 188 { 189 189 long timeout = 0; 190 190 long result = 0; 191 191 long restore = 0; 192 - __le32 address; 193 - __le32 command; 194 - __le32 status; 195 - __le32 mac_cfg0; 192 + __u32 address; 193 + __u32 command; 194 + __u32 status; 195 + __u32 mac_cfg0; 196 196 197 197 /* 198 198 * MII mgmt all goes through port 0 MAC interface, so it ··· 203 203 &mac_cfg0, 4)) 204 204 return -EIO; 205 205 if (netxen_gb_get_soft_reset(mac_cfg0)) { 206 - __le32 temp; 206 + __u32 temp; 207 207 temp = 0; 208 208 netxen_gb_tx_reset_pb(temp); 209 209 netxen_gb_rx_reset_pb(temp); ··· 269 269 int port) 270 270 { 271 271 int result = 0; 272 - __le32 enable = 0; 272 + __u32 enable = 0; 273 273 netxen_set_phy_int_link_status_changed(enable); 274 274 netxen_set_phy_int_autoneg_completed(enable); 275 275 netxen_set_phy_int_speed_changed(enable); ··· 402 402 int netxen_niu_gbe_init_port(struct netxen_adapter *adapter, int port) 403 403 { 404 404 int result = 0; 405 - __le32 status; 405 + __u32 status; 406 406 if (adapter->disable_phy_interrupts) 407 407 adapter->disable_phy_interrupts(adapter, port); 408 408 mdelay(2); ··· 410 410 if (0 
== 411 411 netxen_niu_gbe_phy_read(adapter, port, 412 412 NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS, 413 - (__le32 *) & status)) { 413 + &status)) { 414 414 if (netxen_get_phy_link(status)) { 415 415 if (netxen_get_phy_speed(status) == 2) { 416 416 netxen_niu_gbe_set_gmii_mode(adapter, port, 1); ··· 489 489 int port, long enable) 490 490 { 491 491 int result = 0; 492 - __le32 int_src; 492 + __u32 int_src; 493 493 494 494 printk(KERN_INFO PFX "NETXEN: Handling PHY interrupt on port %d" 495 495 " (device enable = %d)\n", (int)port, (int)enable); ··· 530 530 printk(KERN_INFO PFX "autoneg_error "); 531 531 if ((netxen_get_phy_int_speed_changed(int_src)) 532 532 || (netxen_get_phy_int_link_status_changed(int_src))) { 533 - __le32 status; 533 + __u32 status; 534 534 535 535 printk(KERN_INFO PFX 536 536 "speed_changed or link status changed"); ··· 583 583 int netxen_niu_macaddr_get(struct netxen_adapter *adapter, 584 584 int phy, netxen_ethernet_macaddr_t * addr) 585 585 { 586 - u64 result = 0; 587 - __le32 stationhigh; 588 - __le32 stationlow; 586 + u32 stationhigh; 587 + u32 stationlow; 588 + u8 val[8]; 589 589 590 590 if (addr == NULL) 591 591 return -EINVAL; ··· 598 598 if (netxen_nic_hw_read_wx(adapter, NETXEN_NIU_GB_STATION_ADDR_1(phy), 599 599 &stationlow, 4)) 600 600 return -EIO; 601 + ((__le32 *)val)[1] = cpu_to_le32(stationhigh); 602 + ((__le32 *)val)[0] = cpu_to_le32(stationlow); 601 603 602 - result = (u64) netxen_gb_get_stationaddress_low(stationlow); 603 - result |= (u64) stationhigh << 16; 604 - memcpy(*addr, &result, sizeof(netxen_ethernet_macaddr_t)); 604 + memcpy(addr, val + 2, 6); 605 605 606 606 return 0; 607 607 } ··· 613 613 int netxen_niu_macaddr_set(struct netxen_port *port, 614 614 netxen_ethernet_macaddr_t addr) 615 615 { 616 - __le32 temp = 0; 616 + u8 temp[4]; 617 + u32 val; 617 618 struct netxen_adapter *adapter = port->adapter; 618 619 int phy = port->portnum; 619 620 unsigned char mac_addr[6]; 620 621 int i; 621 622 622 623 for (i = 0; i < 10; 
i++) { 623 - memcpy(&temp, addr, 2); 624 - temp <<= 16; 624 + temp[0] = temp[1] = 0; 625 + memcpy(temp + 2, addr, 2); 626 + val = le32_to_cpu(*(__le32 *)temp); 625 627 if (netxen_nic_hw_write_wx 626 - (adapter, NETXEN_NIU_GB_STATION_ADDR_1(phy), &temp, 4)) 628 + (adapter, NETXEN_NIU_GB_STATION_ADDR_1(phy), &val, 4)) 627 629 return -EIO; 628 630 629 - temp = 0; 630 - 631 - memcpy(&temp, ((u8 *) addr) + 2, sizeof(__le32)); 631 + memcpy(temp, ((u8 *) addr) + 2, sizeof(__le32)); 632 + val = le32_to_cpu(*(__le32 *)temp); 632 633 if (netxen_nic_hw_write_wx 633 - (adapter, NETXEN_NIU_GB_STATION_ADDR_0(phy), &temp, 4)) 634 + (adapter, NETXEN_NIU_GB_STATION_ADDR_0(phy), &val, 4)) 634 635 return -2; 635 636 636 637 netxen_niu_macaddr_get(adapter, phy, ··· 660 659 int netxen_niu_enable_gbe_port(struct netxen_adapter *adapter, 661 660 int port, netxen_niu_gbe_ifmode_t mode) 662 661 { 663 - __le32 mac_cfg0; 664 - __le32 mac_cfg1; 665 - __le32 mii_cfg; 662 + __u32 mac_cfg0; 663 + __u32 mac_cfg1; 664 + __u32 mii_cfg; 666 665 667 666 if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS)) 668 667 return -EINVAL; ··· 737 736 /* Disable a GbE interface */ 738 737 int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter, int port) 739 738 { 740 - __le32 mac_cfg0; 739 + __u32 mac_cfg0; 741 740 742 741 if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS)) 743 742 return -EINVAL; ··· 753 752 /* Disable an XG interface */ 754 753 int netxen_niu_disable_xg_port(struct netxen_adapter *adapter, int port) 755 754 { 756 - __le32 mac_cfg; 755 + __u32 mac_cfg; 757 756 758 757 if (port != 0) 759 758 return -EINVAL; ··· 770 769 int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter, int port, 771 770 netxen_niu_prom_mode_t mode) 772 771 { 773 - __le32 reg; 772 + __u32 reg; 774 773 775 774 if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS)) 776 775 return -EINVAL; ··· 827 826 int netxen_niu_xg_macaddr_set(struct netxen_port *port, 828 827 netxen_ethernet_macaddr_t addr) 829 828 { 
830 - __le32 temp = 0; 829 + u8 temp[4]; 830 + u32 val; 831 831 struct netxen_adapter *adapter = port->adapter; 832 832 833 - memcpy(&temp, addr, 2); 834 - temp = cpu_to_le32(temp); 835 - temp <<= 16; 833 + temp[0] = temp[1] = 0; 834 + memcpy(temp + 2, addr, 2); 835 + val = le32_to_cpu(*(__le32 *)temp); 836 836 if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_1, 837 - &temp, 4)) 837 + &val, 4)) 838 838 return -EIO; 839 839 840 - temp = 0; 841 - 842 840 memcpy(&temp, ((u8 *) addr) + 2, sizeof(__le32)); 843 - temp = cpu_to_le32(temp); 841 + val = le32_to_cpu(*(__le32 *)temp); 844 842 if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_HI, 845 - &temp, 4)) 843 + &val, 4)) 846 844 return -EIO; 847 845 848 846 return 0; ··· 854 854 int netxen_niu_xg_macaddr_get(struct netxen_adapter *adapter, int phy, 855 855 netxen_ethernet_macaddr_t * addr) 856 856 { 857 - __le32 stationhigh; 858 - __le32 stationlow; 859 - u64 result; 857 + u32 stationhigh; 858 + u32 stationlow; 859 + u8 val[8]; 860 860 861 861 if (addr == NULL) 862 862 return -EINVAL; ··· 869 869 if (netxen_nic_hw_read_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_1, 870 870 &stationlow, 4)) 871 871 return -EIO; 872 + ((__le32 *)val)[1] = cpu_to_le32(stationhigh); 873 + ((__le32 *)val)[0] = cpu_to_le32(stationlow); 872 874 873 - result = ((u64) stationlow) >> 16; 874 - result |= (u64) stationhigh << 16; 875 - memcpy(*addr, &result, sizeof(netxen_ethernet_macaddr_t)); 875 + memcpy(addr, val + 2, 6); 876 876 877 877 return 0; 878 878 } ··· 880 880 int netxen_niu_xg_set_promiscuous_mode(struct netxen_adapter *adapter, 881 881 int port, netxen_niu_prom_mode_t mode) 882 882 { 883 - __le32 reg; 883 + __u32 reg; 884 884 885 885 if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS)) 886 886 return -EINVAL;
+5 -2
drivers/net/pcmcia/3c589_cs.c
··· 606 606 { 607 607 kio_addr_t ioaddr = dev->base_addr; 608 608 struct el3_private *priv = netdev_priv(dev); 609 + unsigned long flags; 609 610 610 611 DEBUG(3, "%s: el3_start_xmit(length = %ld) called, " 611 612 "status %4.4x.\n", dev->name, (long)skb->len, 612 613 inw(ioaddr + EL3_STATUS)); 614 + 615 + spin_lock_irqsave(&priv->lock, flags); 613 616 614 617 priv->stats.tx_bytes += skb->len; 615 618 ··· 631 628 632 629 dev_kfree_skb(skb); 633 630 pop_tx_status(dev); 631 + spin_unlock_irqrestore(&priv->lock, flags); 634 632 635 633 return 0; 636 634 } ··· 733 729 734 730 if (!netif_device_present(dev)) goto reschedule; 735 731 736 - EL3WINDOW(1); 737 732 /* Check for pending interrupt with expired latency timer: with 738 733 this, we can limp along even if the interrupt is blocked */ 739 734 if ((inw(ioaddr + EL3_STATUS) & IntLatch) && 740 735 (inb(ioaddr + EL3_TIMER) == 0xff)) { 741 736 if (!lp->fast_poll) 742 737 printk(KERN_WARNING "%s: interrupt(s) dropped!\n", dev->name); 743 - el3_interrupt(dev->irq, lp); 738 + el3_interrupt(dev->irq, dev); 744 739 lp->fast_poll = HZ; 745 740 } 746 741 if (lp->fast_poll) {
+1 -1
drivers/net/phy/fixed.c
··· 349 349 fixed_mdio_register_device(0, 100, 1); 350 350 #endif 351 351 352 - #ifdef CONFIX_FIXED_MII_10_FDX 352 + #ifdef CONFIG_FIXED_MII_10_FDX 353 353 fixed_mdio_register_device(0, 10, 1); 354 354 #endif 355 355 return 0;
+2 -1
drivers/net/phy/phy.c
··· 286 286 287 287 return 0; 288 288 } 289 + EXPORT_SYMBOL(phy_ethtool_sset); 289 290 290 291 int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd) 291 292 { ··· 303 302 304 303 return 0; 305 304 } 306 - 305 + EXPORT_SYMBOL(phy_ethtool_gset); 307 306 308 307 /* Note that this function is currently incompatible with the 309 308 * PHYCONTROL layer. It changes registers without regard to
+1 -2
drivers/net/s2io.c
··· 556 556 } 557 557 } 558 558 559 - nic->ufo_in_band_v = kmalloc((sizeof(u64) * size), GFP_KERNEL); 559 + nic->ufo_in_band_v = kcalloc(size, sizeof(u64), GFP_KERNEL); 560 560 if (!nic->ufo_in_band_v) 561 561 return -ENOMEM; 562 - memset(nic->ufo_in_band_v, 0, size); 563 562 564 563 /* Allocation and initialization of RXDs in Rings */ 565 564 size = 0;
+1 -1
drivers/net/skge.c
··· 60 60 #define LINK_HZ (HZ/2) 61 61 62 62 MODULE_DESCRIPTION("SysKonnect Gigabit Ethernet driver"); 63 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 63 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 64 64 MODULE_LICENSE("GPL"); 65 65 MODULE_VERSION(DRV_VERSION); 66 66
+1 -26
drivers/net/sky2.c
··· 3639 3639 out: 3640 3640 return err; 3641 3641 } 3642 - 3643 - /* BIOS resume runs after device (it's a bug in PM) 3644 - * as a temporary workaround on suspend/resume leave MSI disabled 3645 - */ 3646 - static int sky2_suspend_late(struct pci_dev *pdev, pm_message_t state) 3647 - { 3648 - struct sky2_hw *hw = pci_get_drvdata(pdev); 3649 - 3650 - free_irq(pdev->irq, hw); 3651 - if (hw->msi) { 3652 - pci_disable_msi(pdev); 3653 - hw->msi = 0; 3654 - } 3655 - return 0; 3656 - } 3657 - 3658 - static int sky2_resume_early(struct pci_dev *pdev) 3659 - { 3660 - struct sky2_hw *hw = pci_get_drvdata(pdev); 3661 - struct net_device *dev = hw->dev[0]; 3662 - 3663 - return request_irq(pdev->irq, sky2_intr, IRQF_SHARED, dev->name, hw); 3664 - } 3665 3642 #endif 3666 3643 3667 3644 static struct pci_driver sky2_driver = { ··· 3649 3672 #ifdef CONFIG_PM 3650 3673 .suspend = sky2_suspend, 3651 3674 .resume = sky2_resume, 3652 - .suspend_late = sky2_suspend_late, 3653 - .resume_early = sky2_resume_early, 3654 3675 #endif 3655 3676 }; 3656 3677 ··· 3666 3691 module_exit(sky2_cleanup_module); 3667 3692 3668 3693 MODULE_DESCRIPTION("Marvell Yukon 2 Gigabit Ethernet driver"); 3669 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 3694 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 3670 3695 MODULE_LICENSE("GPL"); 3671 3696 MODULE_VERSION(DRV_VERSION);
+3 -2
drivers/net/smc911x.c
··· 968 968 * We should not be called if phy_type is zero. 969 969 */ 970 970 if (lp->phy_type == 0) 971 - goto smc911x_phy_configure_exit; 971 + goto smc911x_phy_configure_exit_nolock; 972 972 973 973 if (smc911x_phy_reset(dev, phyaddr)) { 974 974 printk("%s: PHY reset timed out\n", dev->name); 975 - goto smc911x_phy_configure_exit; 975 + goto smc911x_phy_configure_exit_nolock; 976 976 } 977 977 spin_lock_irqsave(&lp->lock, flags); 978 978 ··· 1041 1041 1042 1042 smc911x_phy_configure_exit: 1043 1043 spin_unlock_irqrestore(&lp->lock, flags); 1044 + smc911x_phy_configure_exit_nolock: 1044 1045 lp->work_pending = 0; 1045 1046 } 1046 1047
+2
drivers/net/spider_net.c
··· 1925 1925 /* release chains */ 1926 1926 spider_net_release_tx_chain(card, 1); 1927 1927 1928 + spider_net_free_rx_chain_contents(card); 1929 + 1928 1930 spider_net_free_chain(card, &card->tx_chain); 1929 1931 spider_net_free_chain(card, &card->rx_chain); 1930 1932
+46 -38
drivers/pci/quirks.c
··· 654 654 * VIA bridges which have VLink 655 655 */ 656 656 657 - static const struct pci_device_id via_vlink_fixup_tbl[] = { 658 - /* Internal devices need IRQ line routing, pre VLink */ 659 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_82C686), 0 }, 660 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8231), 17 }, 661 - /* Devices with VLink */ 662 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8233_0), 17}, 663 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8233A), 17 }, 664 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8233C_0), 17 }, 665 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8235), 16 }, 666 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8237), 15 }, 667 - { PCI_VDEVICE(VIA, PCI_DEVICE_ID_VIA_8237A), 15 }, 668 - { 0, }, 669 - }; 657 + static int via_vlink_dev_lo = -1, via_vlink_dev_hi = 18; 658 + 659 + static void quirk_via_bridge(struct pci_dev *dev) 660 + { 661 + /* See what bridge we have and find the device ranges */ 662 + switch (dev->device) { 663 + case PCI_DEVICE_ID_VIA_82C686: 664 + /* The VT82C686 is special, it attaches to PCI and can have 665 + any device number. All its subdevices are functions of 666 + that single device. 
*/ 667 + via_vlink_dev_lo = PCI_SLOT(dev->devfn); 668 + via_vlink_dev_hi = PCI_SLOT(dev->devfn); 669 + break; 670 + case PCI_DEVICE_ID_VIA_8237: 671 + case PCI_DEVICE_ID_VIA_8237A: 672 + via_vlink_dev_lo = 15; 673 + break; 674 + case PCI_DEVICE_ID_VIA_8235: 675 + via_vlink_dev_lo = 16; 676 + break; 677 + case PCI_DEVICE_ID_VIA_8231: 678 + case PCI_DEVICE_ID_VIA_8233_0: 679 + case PCI_DEVICE_ID_VIA_8233A: 680 + case PCI_DEVICE_ID_VIA_8233C_0: 681 + via_vlink_dev_lo = 17; 682 + break; 683 + } 684 + } 685 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C686, quirk_via_bridge); 686 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8231, quirk_via_bridge); 687 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8233_0, quirk_via_bridge); 688 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8233A, quirk_via_bridge); 689 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8233C_0, quirk_via_bridge); 690 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235, quirk_via_bridge); 691 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237, quirk_via_bridge); 692 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8237A, quirk_via_bridge); 670 693 671 694 /** 672 695 * quirk_via_vlink - VIA VLink IRQ number update ··· 698 675 * If the device we are dealing with is on a PIC IRQ we need to 699 676 * ensure that the IRQ line register which usually is not relevant 700 677 * for PCI cards, is actually written so that interrupts get sent 701 - * to the right place 678 + * to the right place. 679 + * We only do this on systems where a VIA south bridge was detected, 680 + * and only for VIA devices on the motherboard (see quirk_via_bridge 681 + * above). 
702 682 */ 703 683 704 684 static void quirk_via_vlink(struct pci_dev *dev) 705 685 { 706 - const struct pci_device_id *via_vlink_fixup; 707 - static int dev_lo = -1, dev_hi = 18; 708 686 u8 irq, new_irq; 709 687 710 - /* Check if we have VLink and cache the result */ 711 - 712 - /* Checked already - no */ 713 - if (dev_lo == -2) 688 + /* Check if we have VLink at all */ 689 + if (via_vlink_dev_lo == -1) 714 690 return; 715 691 716 - /* Not checked - see what bridge we have and find the device 717 - ranges */ 718 - 719 - if (dev_lo == -1) { 720 - via_vlink_fixup = pci_find_present(via_vlink_fixup_tbl); 721 - if (via_vlink_fixup == NULL) { 722 - dev_lo = -2; 723 - return; 724 - } 725 - dev_lo = via_vlink_fixup->driver_data; 726 - /* 82C686 is special - 0/0 */ 727 - if (dev_lo == 0) 728 - dev_hi = 0; 729 - } 730 692 new_irq = dev->irq; 731 693 732 694 /* Don't quirk interrupts outside the legacy IRQ range */ ··· 719 711 return; 720 712 721 713 /* Internal device ? */ 722 - if (dev->bus->number != 0 || PCI_SLOT(dev->devfn) > dev_hi || 723 - PCI_SLOT(dev->devfn) < dev_lo) 714 + if (dev->bus->number != 0 || PCI_SLOT(dev->devfn) > via_vlink_dev_hi || 715 + PCI_SLOT(dev->devfn) < via_vlink_dev_lo) 724 716 return; 725 717 726 718 /* This is an internal VLink device on a PIC interrupt. The BIOS ··· 1262 1254 pci_read_config_dword(pdev, 0x40, &conf); 1263 1255 /* Enable dual function mode, AHCI on fn 0, IDE fn1 */ 1264 1256 /* Set the class codes correctly and then direct IDE 0 */ 1265 - conf &= ~0x000F0200; /* Clear bit 9 and 16-19 */ 1266 - conf |= 0x00C20002; /* Set bit 1, 17, 22, 23 */ 1257 + conf &= ~0x000FF200; /* Clear bit 9 and 12-19 */ 1258 + conf |= 0x00C2A102; /* Set 1, 8, 13, 15, 17, 22, 23 */ 1267 1259 pci_write_config_dword(pdev, 0x40, conf); 1268 1260 1269 1261 /* Reconfigure so that the PCI scanner discovers the
+2 -8
drivers/pci/search.c
··· 200 200 * can cause some machines to crash. So here we detect and flag that 201 201 * situation and bail out early. 202 202 */ 203 - if (unlikely(list_empty(&pci_devices))) { 204 - printk(KERN_INFO "pci_find_subsys() called while pci_devices " 205 - "is still empty\n"); 203 + if (unlikely(list_empty(&pci_devices))) 206 204 return NULL; 207 - } 208 205 down_read(&pci_bus_sem); 209 206 n = from ? from->global_list.next : pci_devices.next; 210 207 ··· 275 278 * can cause some machines to crash. So here we detect and flag that 276 279 * situation and bail out early. 277 280 */ 278 - if (unlikely(list_empty(&pci_devices))) { 279 - printk(KERN_NOTICE "pci_get_subsys() called while pci_devices " 280 - "is still empty\n"); 281 + if (unlikely(list_empty(&pci_devices))) 281 282 return NULL; 282 - } 283 283 down_read(&pci_bus_sem); 284 284 n = from ? from->global_list.next : pci_devices.next; 285 285
+7 -5
drivers/rtc/rtc-sh.c
··· 492 492 493 493 spin_lock_irq(&rtc->lock); 494 494 495 - /* disable alarm interrupt and clear flag */ 495 + /* disable alarm interrupt and clear the alarm flag */ 496 496 rcr1 = readb(rtc->regbase + RCR1); 497 - rcr1 &= ~RCR1_AF; 498 - writeb(rcr1 & ~RCR1_AIE, rtc->regbase + RCR1); 497 + rcr1 &= ~(RCR1_AF|RCR1_AIE); 498 + writeb(rcr1, rtc->regbase + RCR1); 499 499 500 500 rtc->rearm_aie = 0; 501 501 ··· 510 510 mon += 1; 511 511 sh_rtc_write_alarm_value(rtc, mon, RMONAR); 512 512 513 - /* Restore interrupt activation status */ 514 - writeb(rcr1, rtc->regbase + RCR1); 513 + if (wkalrm->enabled) { 514 + rcr1 |= RCR1_AIE; 515 + writeb(rcr1, rtc->regbase + RCR1); 516 + } 515 517 516 518 spin_unlock_irq(&rtc->lock); 517 519
+1 -1
drivers/rtc/rtc-sysfs.c
··· 78 78 .attrs = rtc_attrs, 79 79 }; 80 80 81 - static int __devinit rtc_sysfs_add_device(struct class_device *class_dev, 81 + static int rtc_sysfs_add_device(struct class_device *class_dev, 82 82 struct class_interface *class_intf) 83 83 { 84 84 int err;
-2
drivers/scsi/pcmcia/sym53c500_cs.c
··· 545 545 */ 546 546 if (shost->irq) 547 547 free_irq(shost->irq, shost); 548 - if (shost->dma_channel != 0xff) 549 - free_dma(shost->dma_channel); 550 548 if (shost->io_port && shost->n_io_port) 551 549 release_region(shost->io_port, shost->n_io_port); 552 550
-1
drivers/scsi/qla4xxx/ql4_def.h
··· 418 418 * concurrently. 419 419 */ 420 420 struct mutex mbox_sem; 421 - wait_queue_head_t mailbox_wait_queue; 422 421 423 422 /* temporary mailbox status registers */ 424 423 volatile uint8_t mbox_status_count;
+1
drivers/scsi/qla4xxx/ql4_glbl.h
··· 76 76 extern int ql4xextended_error_logging; 77 77 extern int ql4xdiscoverywait; 78 78 extern int ql4xdontresethba; 79 + extern int ql4_mod_unload; 79 80 #endif /* _QLA4x_GBL_H */
+9 -9
drivers/scsi/qla4xxx/ql4_init.c
··· 958 958 return status; 959 959 } 960 960 961 - int ql4xxx_lock_drvr_wait(struct scsi_qla_host *a) 961 + int ql4xxx_lock_drvr_wait(struct scsi_qla_host *ha) 962 962 { 963 - #define QL4_LOCK_DRVR_WAIT 300 964 - #define QL4_LOCK_DRVR_SLEEP 100 963 + #define QL4_LOCK_DRVR_WAIT 30 964 + #define QL4_LOCK_DRVR_SLEEP 1 965 965 966 966 int drvr_wait = QL4_LOCK_DRVR_WAIT; 967 967 while (drvr_wait) { 968 - if (ql4xxx_lock_drvr(a) == 0) { 969 - msleep(QL4_LOCK_DRVR_SLEEP); 968 + if (ql4xxx_lock_drvr(ha) == 0) { 969 + ssleep(QL4_LOCK_DRVR_SLEEP); 970 970 if (drvr_wait) { 971 971 DEBUG2(printk("scsi%ld: %s: Waiting for " 972 - "Global Init Semaphore...n", 973 - a->host_no, 974 - __func__)); 972 + "Global Init Semaphore(%d)...n", 973 + ha->host_no, 974 + __func__, drvr_wait)); 975 975 } 976 976 drvr_wait -= QL4_LOCK_DRVR_SLEEP; 977 977 } else { 978 978 DEBUG2(printk("scsi%ld: %s: Global Init Semaphore " 979 - "acquired.n", a->host_no, __func__)); 979 + "acquired.n", ha->host_no, __func__)); 980 980 return QLA_SUCCESS; 981 981 } 982 982 }
+2 -2
drivers/scsi/qla4xxx/ql4_isr.c
··· 433 433 readl(&ha->reg->mailbox[i]); 434 434 435 435 set_bit(AF_MBOX_COMMAND_DONE, &ha->flags); 436 - wake_up(&ha->mailbox_wait_queue); 437 436 } 438 437 } else if (mbox_status >> 12 == MBOX_ASYNC_EVENT_STATUS) { 439 438 /* Immediately process the AENs that don't require much work. ··· 685 686 &ha->reg->ctrl_status); 686 687 readl(&ha->reg->ctrl_status); 687 688 688 - set_bit(DPC_RESET_HA_INTR, &ha->dpc_flags); 689 + if (!ql4_mod_unload) 690 + set_bit(DPC_RESET_HA_INTR, &ha->dpc_flags); 689 691 690 692 break; 691 693 } else if (intr_status & INTR_PENDING) {
+21 -14
drivers/scsi/qla4xxx/ql4_mbx.c
··· 29 29 u_long wait_count; 30 30 uint32_t intr_status; 31 31 unsigned long flags = 0; 32 - DECLARE_WAITQUEUE(wait, current); 33 - 34 - mutex_lock(&ha->mbox_sem); 35 - 36 - /* Mailbox code active */ 37 - set_bit(AF_MBOX_COMMAND, &ha->flags); 38 32 39 33 /* Make sure that pointers are valid */ 40 34 if (!mbx_cmd || !mbx_sts) { 41 35 DEBUG2(printk("scsi%ld: %s: Invalid mbx_cmd or mbx_sts " 42 36 "pointer\n", ha->host_no, __func__)); 43 - goto mbox_exit; 37 + return status; 38 + } 39 + /* Mailbox code active */ 40 + wait_count = MBOX_TOV * 100; 41 + 42 + while (wait_count--) { 43 + mutex_lock(&ha->mbox_sem); 44 + if (!test_bit(AF_MBOX_COMMAND, &ha->flags)) { 45 + set_bit(AF_MBOX_COMMAND, &ha->flags); 46 + mutex_unlock(&ha->mbox_sem); 47 + break; 48 + } 49 + mutex_unlock(&ha->mbox_sem); 50 + if (!wait_count) { 51 + DEBUG2(printk("scsi%ld: %s: mbox_sem failed\n", 52 + ha->host_no, __func__)); 53 + return status; 54 + } 55 + msleep(10); 44 56 } 45 57 46 58 /* To prevent overwriting mailbox registers for a command that has ··· 85 73 spin_unlock_irqrestore(&ha->hardware_lock, flags); 86 74 87 75 /* Wait for completion */ 88 - set_current_state(TASK_UNINTERRUPTIBLE); 89 - add_wait_queue(&ha->mailbox_wait_queue, &wait); 90 76 91 77 /* 92 78 * If we don't want status, don't wait for the mailbox command to ··· 93 83 */ 94 84 if (outCount == 0) { 95 85 status = QLA_SUCCESS; 96 - set_current_state(TASK_RUNNING); 97 - remove_wait_queue(&ha->mailbox_wait_queue, &wait); 98 86 goto mbox_exit; 99 87 } 100 88 /* Wait for command to complete */ ··· 116 108 spin_unlock_irqrestore(&ha->hardware_lock, flags); 117 109 msleep(10); 118 110 } 119 - set_current_state(TASK_RUNNING); 120 - remove_wait_queue(&ha->mailbox_wait_queue, &wait); 121 111 122 112 /* Check for mailbox timeout. 
*/ 123 113 if (!test_bit(AF_MBOX_COMMAND_DONE, &ha->flags)) { ··· 161 155 spin_unlock_irqrestore(&ha->hardware_lock, flags); 162 156 163 157 mbox_exit: 158 + mutex_lock(&ha->mbox_sem); 164 159 clear_bit(AF_MBOX_COMMAND, &ha->flags); 165 - clear_bit(AF_MBOX_COMMAND_DONE, &ha->flags); 166 160 mutex_unlock(&ha->mbox_sem); 161 + clear_bit(AF_MBOX_COMMAND_DONE, &ha->flags); 167 162 168 163 return status; 169 164 }
+39 -25
drivers/scsi/qla4xxx/ql4_os.c
··· 40 40 "Option to enable extended error logging, " 41 41 "Default is 0 - no logging, 1 - debug logging"); 42 42 43 + int ql4_mod_unload = 0; 44 + 43 45 /* 44 46 * SCSI host template entry points 45 47 */ ··· 424 422 goto qc_host_busy; 425 423 } 426 424 425 + if (test_bit(DPC_RESET_HA_INTR, &ha->dpc_flags)) 426 + goto qc_host_busy; 427 + 427 428 spin_unlock_irq(ha->host->host_lock); 428 429 429 430 srb = qla4xxx_get_new_srb(ha, ddb_entry, cmd, done); ··· 712 707 return stat; 713 708 } 714 709 715 - /** 716 - * qla4xxx_soft_reset - performs soft reset. 717 - * @ha: Pointer to host adapter structure. 718 - **/ 719 - int qla4xxx_soft_reset(struct scsi_qla_host *ha) 710 + static void qla4xxx_hw_reset(struct scsi_qla_host *ha) 720 711 { 721 - uint32_t max_wait_time; 722 - unsigned long flags = 0; 723 - int status = QLA_ERROR; 724 712 uint32_t ctrl_status; 713 + unsigned long flags = 0; 714 + 715 + DEBUG2(printk(KERN_ERR "scsi%ld: %s\n", ha->host_no, __func__)); 725 716 726 717 spin_lock_irqsave(&ha->hardware_lock, flags); 727 718 ··· 734 733 readl(&ha->reg->ctrl_status); 735 734 736 735 spin_unlock_irqrestore(&ha->hardware_lock, flags); 736 + } 737 + 738 + /** 739 + * qla4xxx_soft_reset - performs soft reset. 740 + * @ha: Pointer to host adapter structure. 741 + **/ 742 + int qla4xxx_soft_reset(struct scsi_qla_host *ha) 743 + { 744 + uint32_t max_wait_time; 745 + unsigned long flags = 0; 746 + int status = QLA_ERROR; 747 + uint32_t ctrl_status; 748 + 749 + qla4xxx_hw_reset(ha); 737 750 738 751 /* Wait until the Network Reset Intr bit is cleared */ 739 752 max_wait_time = RESET_INTR_TOV; ··· 981 966 struct scsi_qla_host *ha = 982 967 container_of(work, struct scsi_qla_host, dpc_work); 983 968 struct ddb_entry *ddb_entry, *dtemp; 969 + int status = QLA_ERROR; 984 970 985 971 DEBUG2(printk("scsi%ld: %s: DPC handler waking up." 
986 - "flags = 0x%08lx, dpc_flags = 0x%08lx\n", 987 - ha->host_no, __func__, ha->flags, ha->dpc_flags)); 972 + "flags = 0x%08lx, dpc_flags = 0x%08lx ctrl_stat = 0x%08x\n", 973 + ha->host_no, __func__, ha->flags, ha->dpc_flags, 974 + readw(&ha->reg->ctrl_status))); 988 975 989 976 /* Initialization not yet finished. Don't do anything yet. */ 990 977 if (!test_bit(AF_INIT_DONE, &ha->flags)) ··· 1000 983 test_bit(DPC_RESET_HA, &ha->dpc_flags)) 1001 984 qla4xxx_recover_adapter(ha, PRESERVE_DDB_LIST); 1002 985 1003 - if (test_and_clear_bit(DPC_RESET_HA_INTR, &ha->dpc_flags)) { 986 + if (test_bit(DPC_RESET_HA_INTR, &ha->dpc_flags)) { 1004 987 uint8_t wait_time = RESET_INTR_TOV; 1005 - unsigned long flags = 0; 1006 988 1007 - qla4xxx_flush_active_srbs(ha); 1008 - 1009 - spin_lock_irqsave(&ha->hardware_lock, flags); 1010 989 while ((readw(&ha->reg->ctrl_status) & 1011 990 (CSR_SOFT_RESET | CSR_FORCE_SOFT_RESET)) != 0) { 1012 991 if (--wait_time == 0) 1013 992 break; 1014 - 1015 - spin_unlock_irqrestore(&ha->hardware_lock, 1016 - flags); 1017 - 1018 993 msleep(1000); 1019 - 1020 - spin_lock_irqsave(&ha->hardware_lock, flags); 1021 994 } 1022 - spin_unlock_irqrestore(&ha->hardware_lock, flags); 1023 - 1024 995 if (wait_time == 0) 1025 996 DEBUG2(printk("scsi%ld: %s: SR|FSR " 1026 997 "bit not cleared-- resetting\n", 1027 998 ha->host_no, __func__)); 999 + qla4xxx_flush_active_srbs(ha); 1000 + if (ql4xxx_lock_drvr_wait(ha) == QLA_SUCCESS) { 1001 + qla4xxx_process_aen(ha, FLUSH_DDB_CHANGED_AENS); 1002 + status = qla4xxx_initialize_adapter(ha, 1003 + PRESERVE_DDB_LIST); 1004 + } 1005 + clear_bit(DPC_RESET_HA_INTR, &ha->dpc_flags); 1006 + if (status == QLA_SUCCESS) 1007 + qla4xxx_enable_intrs(ha); 1028 1008 } 1029 1009 } 1030 1010 ··· 1076 1062 1077 1063 /* Issue Soft Reset to put firmware in unknown state */ 1078 1064 if (ql4xxx_lock_drvr_wait(ha) == QLA_SUCCESS) 1079 - qla4xxx_soft_reset(ha); 1065 + qla4xxx_hw_reset(ha); 1080 1066 1081 1067 /* Remove timer thread, if present 
*/ 1082 1068 if (ha->timer_active) ··· 1212 1198 INIT_LIST_HEAD(&ha->free_srb_q); 1213 1199 1214 1200 mutex_init(&ha->mbox_sem); 1215 - init_waitqueue_head(&ha->mailbox_wait_queue); 1216 1201 1217 1202 spin_lock_init(&ha->hardware_lock); 1218 1203 ··· 1678 1665 1679 1666 static void __exit qla4xxx_module_exit(void) 1680 1667 { 1668 + ql4_mod_unload = 1; 1681 1669 pci_unregister_driver(&qla4xxx_pci_driver); 1682 1670 iscsi_unregister_transport(&qla4xxx_iscsi_transport); 1683 1671 kmem_cache_destroy(srb_cachep);
+1 -1
drivers/scsi/qla4xxx/ql4_version.h
··· 5 5 * See LICENSE.qla4xxx for copyright and licensing details. 6 6 */ 7 7 8 - #define QLA4XXX_DRIVER_VERSION "5.00.07-k" 8 + #define QLA4XXX_DRIVER_VERSION "5.00.07-k1"
+6
drivers/scsi/scsi_scan.c
··· 1453 1453 struct device *parent = &shost->shost_gendev; 1454 1454 struct scsi_target *starget; 1455 1455 1456 + if (strncmp(scsi_scan_type, "none", 4) == 0) 1457 + return ERR_PTR(-ENODEV); 1458 + 1459 + if (!shost->async_scan) 1460 + scsi_complete_async_scans(); 1461 + 1456 1462 starget = scsi_alloc_target(parent, channel, id); 1457 1463 if (!starget) 1458 1464 return ERR_PTR(-ENOMEM);
+10 -10
drivers/scsi/sd.c
··· 1647 1647 if (error) 1648 1648 goto out_put; 1649 1649 1650 - class_device_initialize(&sdkp->cdev); 1651 - sdkp->cdev.dev = &sdp->sdev_gendev; 1652 - sdkp->cdev.class = &sd_disk_class; 1653 - strncpy(sdkp->cdev.class_id, sdp->sdev_gendev.bus_id, BUS_ID_SIZE); 1654 - 1655 - if (class_device_add(&sdkp->cdev)) 1656 - goto out_put; 1657 - 1658 - get_device(&sdp->sdev_gendev); 1659 - 1660 1650 sdkp->device = sdp; 1661 1651 sdkp->driver = &sd_template; 1662 1652 sdkp->disk = gd; ··· 1659 1669 else 1660 1670 sdp->timeout = SD_MOD_TIMEOUT; 1661 1671 } 1672 + 1673 + class_device_initialize(&sdkp->cdev); 1674 + sdkp->cdev.dev = &sdp->sdev_gendev; 1675 + sdkp->cdev.class = &sd_disk_class; 1676 + strncpy(sdkp->cdev.class_id, sdp->sdev_gendev.bus_id, BUS_ID_SIZE); 1677 + 1678 + if (class_device_add(&sdkp->cdev)) 1679 + goto out_put; 1680 + 1681 + get_device(&sdp->sdev_gendev); 1662 1682 1663 1683 gd->major = sd_major((index & 0xf0) >> 4); 1664 1684 gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00);
+11 -8
drivers/scsi/st.c
··· 2816 2816 2817 2817 if (cmd_in == MTWEOF && 2818 2818 cmdstatp->have_sense && 2819 - (cmdstatp->flags & SENSE_EOM) && 2820 - (cmdstatp->sense_hdr.sense_key == NO_SENSE || 2821 - cmdstatp->sense_hdr.sense_key == RECOVERED_ERROR) && 2822 - undone == 0) { 2823 - ioctl_result = 0; /* EOF written successfully at EOM */ 2824 - if (fileno >= 0) 2825 - fileno++; 2819 + (cmdstatp->flags & SENSE_EOM)) { 2820 + if (cmdstatp->sense_hdr.sense_key == NO_SENSE || 2821 + cmdstatp->sense_hdr.sense_key == RECOVERED_ERROR) { 2822 + ioctl_result = 0; /* EOF(s) written successfully at EOM */ 2823 + STps->eof = ST_NOEOF; 2824 + } else { /* Writing EOF(s) failed */ 2825 + if (fileno >= 0) 2826 + fileno -= undone; 2827 + if (undone < arg) 2828 + STps->eof = ST_NOEOF; 2829 + } 2826 2830 STps->drv_file = fileno; 2827 - STps->eof = ST_NOEOF; 2828 2831 } else if ((cmd_in == MTFSF) || (cmd_in == MTFSFM)) { 2829 2832 if (fileno >= 0) 2830 2833 STps->drv_file = fileno - undone;
+2
drivers/serial/amba-pl010.c
··· 589 589 */ 590 590 if (co->index >= UART_NR) 591 591 co->index = 0; 592 + if (!amba_ports[co->index]) 593 + return -ENODEV; 592 594 port = &amba_ports[co->index]->port; 593 595 594 596 if (options)
+2
drivers/serial/amba-pl011.c
··· 661 661 if (co->index >= UART_NR) 662 662 co->index = 0; 663 663 uap = amba_ports[co->index]; 664 + if (!uap) 665 + return -ENODEV; 664 666 665 667 uap->port.uartclk = clk_get_rate(uap->clk); 666 668
+4 -3
drivers/serial/atmel_serial.c
··· 689 689 struct atmel_uart_data *data = pdev->dev.platform_data; 690 690 691 691 port->iotype = UPIO_MEM; 692 - port->flags = UPF_BOOT_AUTOCONF; 692 + port->flags = UPF_BOOT_AUTOCONF; 693 693 port->ops = &atmel_pops; 694 - port->fifosize = 1; 694 + port->fifosize = 1; 695 695 port->line = pdev->id; 696 696 port->dev = &pdev->dev; 697 697 ··· 890 890 if (device_may_wakeup(&pdev->dev) && !at91_suspend_entering_slow_clock()) 891 891 enable_irq_wake(port->irq); 892 892 else { 893 - disable_irq_wake(port->irq); 894 893 uart_suspend_port(&atmel_uart, port); 895 894 atmel_port->suspended = 1; 896 895 } ··· 906 907 uart_resume_port(&atmel_uart, port); 907 908 atmel_port->suspended = 0; 908 909 } 910 + else 911 + disable_irq_wake(port->irq); 909 912 910 913 return 0; 911 914 }
+1 -1
drivers/serial/atmel_serial.h
··· 106 106 #define ATMEL_US_CSR 0x14 /* Channel Status Register */ 107 107 #define ATMEL_US_RHR 0x18 /* Receiver Holding Register */ 108 108 #define ATMEL_US_THR 0x1c /* Transmitter Holding Register */ 109 - #define ATMEL_US_SYNH (1 << 15) /* Transmit/Receive Sync [SAM9 only] */ 109 + #define ATMEL_US_SYNH (1 << 15) /* Transmit/Receive Sync [AT91SAM9261 only] */ 110 110 111 111 #define ATMEL_US_BRGR 0x20 /* Baud Rate Generator Register */ 112 112 #define ATMEL_US_CD (0xffff << 0) /* Clock Divider */
+3 -2
drivers/spi/pxa2xx_spi.c
··· 1169 1169 spi->bits_per_word - 16 : spi->bits_per_word) 1170 1170 | SSCR0_SSE 1171 1171 | (spi->bits_per_word > 16 ? SSCR0_EDSS : 0); 1172 - chip->cr1 |= (((spi->mode & SPI_CPHA) != 0) << 4) 1173 - | (((spi->mode & SPI_CPOL) != 0) << 3); 1172 + chip->cr1 &= ~(SSCR1_SPO | SSCR1_SPH); 1173 + chip->cr1 |= (((spi->mode & SPI_CPHA) != 0) ? SSCR1_SPH : 0) 1174 + | (((spi->mode & SPI_CPOL) != 0) ? SSCR1_SPO : 0); 1174 1175 1175 1176 /* NOTE: PXA25x_SSP _could_ use external clocking ... */ 1176 1177 if (drv_data->ssp_type != PXA25x_SSP)
+13 -8
drivers/spi/spi.c
··· 366 366 367 367 class_device_initialize(&master->cdev); 368 368 master->cdev.class = &spi_master_class; 369 - kobj_set_kset_s(&master->cdev, spi_master_class.subsys); 370 369 master->cdev.dev = get_device(dev); 371 370 spi_master_set_devdata(master, &master[1]); 372 371 ··· 465 466 */ 466 467 struct spi_master *spi_busnum_to_master(u16 bus_num) 467 468 { 468 - char name[9]; 469 - struct kobject *bus; 469 + struct class_device *cdev; 470 + struct spi_master *master = NULL; 471 + struct spi_master *m; 470 472 471 - snprintf(name, sizeof name, "spi%u", bus_num); 472 - bus = kset_find_obj(&spi_master_class.subsys.kset, name); 473 - if (bus) 474 - return container_of(bus, struct spi_master, cdev.kobj); 475 - return NULL; 473 + down(&spi_master_class.sem); 474 + list_for_each_entry(cdev, &spi_master_class.children, node) { 475 + m = container_of(cdev, struct spi_master, cdev); 476 + if (m->bus_num == bus_num) { 477 + master = spi_master_get(m); 478 + break; 479 + } 480 + } 481 + up(&spi_master_class.sem); 482 + return master; 476 483 } 477 484 EXPORT_SYMBOL_GPL(spi_busnum_to_master); 478 485
+14 -14
drivers/spi/spi_s3c24xx.c
··· 10 10 * 11 11 */ 12 12 13 - 14 - //#define DEBUG 15 - 16 13 #include <linux/init.h> 17 14 #include <linux/spinlock.h> 18 15 #include <linux/workqueue.h> ··· 41 44 int len; 42 45 int count; 43 46 47 + int (*set_cs)(struct s3c2410_spi_info *spi, 48 + int cs, int pol); 49 + 44 50 /* data buffers */ 45 51 const unsigned char *tx; 46 52 unsigned char *rx; ··· 64 64 return spi_master_get_devdata(sdev->master); 65 65 } 66 66 67 + static void s3c24xx_spi_gpiocs(struct s3c2410_spi_info *spi, int cs, int pol) 68 + { 69 + s3c2410_gpio_setpin(spi->pin_cs, pol); 70 + } 71 + 67 72 static void s3c24xx_spi_chipsel(struct spi_device *spi, int value) 68 73 { 69 74 struct s3c24xx_spi *hw = to_hw(spi); ··· 77 72 78 73 switch (value) { 79 74 case BITBANG_CS_INACTIVE: 80 - if (hw->pdata->set_cs) 81 - hw->pdata->set_cs(hw->pdata, value, cspol); 82 - else 83 - s3c2410_gpio_setpin(hw->pdata->pin_cs, cspol ^ 1); 75 + hw->pdata->set_cs(hw->pdata, spi->chip_select, cspol^1); 84 76 break; 85 77 86 78 case BITBANG_CS_ACTIVE: ··· 98 96 /* write new configration */ 99 97 100 98 writeb(spcon, hw->regs + S3C2410_SPCON); 101 - 102 - if (hw->pdata->set_cs) 103 - hw->pdata->set_cs(hw->pdata, value, cspol); 104 - else 105 - s3c2410_gpio_setpin(hw->pdata->pin_cs, cspol); 99 + hw->pdata->set_cs(hw->pdata, spi->chip_select, cspol); 106 100 107 101 break; 108 - 109 102 } 110 103 } 111 104 ··· 327 330 /* setup any gpio we can */ 328 331 329 332 if (!hw->pdata->set_cs) { 333 + hw->set_cs = s3c24xx_spi_gpiocs; 334 + 330 335 s3c2410_gpio_setpin(hw->pdata->pin_cs, 1); 331 336 s3c2410_gpio_cfgpin(hw->pdata->pin_cs, S3C2410_GPIO_OUTPUT); 332 - } 337 + } else 338 + hw->set_cs = hw->pdata->set_cs; 333 339 334 340 /* register our spi controller */ 335 341
-12
drivers/usb/input/hid-core.c
··· 56 56 module_param_named(mousepoll, hid_mousepoll_interval, uint, 0644); 57 57 MODULE_PARM_DESC(mousepoll, "Polling interval of mice"); 58 58 59 - static int usbhid_pb_fnmode = 1; 60 - module_param_named(pb_fnmode, usbhid_pb_fnmode, int, 0644); 61 - MODULE_PARM_DESC(pb_fnmode, 62 - "Mode of fn key on PowerBooks (0 = disabled, 1 = fkeyslast, 2 = fkeysfirst)"); 63 - 64 59 /* 65 60 * Input submission and I/O error handler. 66 61 */ ··· 577 582 } 578 583 579 584 #define USB_VENDOR_ID_GTCO 0x078c 580 - #define USB_VENDOR_ID_GTCO_IPANEL_1 0x08ca 581 585 #define USB_VENDOR_ID_GTCO_IPANEL_2 0x5543 582 586 #define USB_DEVICE_ID_GTCO_90 0x0090 583 587 #define USB_DEVICE_ID_GTCO_100 0x0100 ··· 623 629 #define USB_DEVICE_ID_GTCO_1004 0x1004 624 630 #define USB_DEVICE_ID_GTCO_1005 0x1005 625 631 #define USB_DEVICE_ID_GTCO_1006 0x1006 626 - #define USB_DEVICE_ID_GTCO_10 0x0010 627 632 #define USB_DEVICE_ID_GTCO_8 0x0008 628 633 #define USB_DEVICE_ID_GTCO_d 0x000d 629 634 ··· 876 883 { USB_VENDOR_ID_GTCO, USB_DEVICE_ID_GTCO_1004, HID_QUIRK_IGNORE }, 877 884 { USB_VENDOR_ID_GTCO, USB_DEVICE_ID_GTCO_1005, HID_QUIRK_IGNORE }, 878 885 { USB_VENDOR_ID_GTCO, USB_DEVICE_ID_GTCO_1006, HID_QUIRK_IGNORE }, 879 - { USB_VENDOR_ID_GTCO_IPANEL_1, USB_DEVICE_ID_GTCO_10, HID_QUIRK_IGNORE }, 880 886 { USB_VENDOR_ID_GTCO_IPANEL_2, USB_DEVICE_ID_GTCO_8, HID_QUIRK_IGNORE }, 881 887 { USB_VENDOR_ID_GTCO_IPANEL_2, USB_DEVICE_ID_GTCO_d, HID_QUIRK_IGNORE }, 882 888 { USB_VENDOR_ID_IMATION, USB_DEVICE_ID_DISC_STAKKA, HID_QUIRK_IGNORE }, ··· 1241 1249 hid->hiddev_hid_event = hiddev_hid_event; 1242 1250 hid->hiddev_report_event = hiddev_report_event; 1243 1251 #endif 1244 - #ifdef CONFIG_USB_HIDINPUT_POWERBOOK 1245 - hid->pb_fnmode = usbhid_pb_fnmode; 1246 - #endif 1247 - 1248 1252 return hid; 1249 1253 1250 1254 fail:
+2 -1
drivers/usb/net/rtl8150.c
··· 284 284 u8 data[3], tmp; 285 285 286 286 data[0] = phy; 287 - *(data + 1) = cpu_to_le16p(&reg); 287 + data[1] = reg & 0xff; 288 + data[2] = (reg >> 8) & 0xff; 288 289 tmp = indx | PHY_WRITE | PHY_GO; 289 290 i = 0; 290 291
+1 -1
drivers/usb/serial/funsoft.c
··· 27 27 static int funsoft_ioctl(struct usb_serial_port *port, struct file *file, 28 28 unsigned int cmd, unsigned long arg) 29 29 { 30 - struct termios t; 30 + struct ktermios t; 31 31 32 32 dbg("%s - port %d, cmd 0x%04x", __FUNCTION__, port->number, cmd); 33 33
+1
fs/9p/error.c
··· 83 83 84 84 if (errno == 0) { 85 85 /* TODO: if error isn't found, add it dynamically */ 86 + errstr[len] = 0; 86 87 printk(KERN_ERR "%s: errstr :%s: not found\n", __FUNCTION__, 87 88 errstr); 88 89 errno = 1;
+66 -3
fs/9p/fid.c
··· 25 25 #include <linux/fs.h> 26 26 #include <linux/sched.h> 27 27 #include <linux/idr.h> 28 + #include <asm/semaphore.h> 28 29 29 30 #include "debug.h" 30 31 #include "v9fs.h" ··· 85 84 new->iounit = 0; 86 85 new->rdir_pos = 0; 87 86 new->rdir_fcall = NULL; 87 + init_MUTEX(&new->lock); 88 88 INIT_LIST_HEAD(&new->list); 89 89 90 90 return new; ··· 104 102 } 105 103 106 104 /** 107 - * v9fs_fid_lookup - retrieve the right fid from a particular dentry 105 + * v9fs_fid_lookup - return a locked fid from a dentry 108 106 * @dentry: dentry to look for fid in 109 - * @type: intent of lookup (operation or traversal) 110 107 * 111 - * find a fid in the dentry 108 + * find a fid in the dentry, obtain its semaphore and return a reference to it. 109 + * code calling lookup is responsible for releasing lock 112 110 * 113 111 * TODO: only match fids that have the same uid as current user 114 112 * ··· 126 124 127 125 if (!return_fid) { 128 126 dprintk(DEBUG_ERROR, "Couldn't find a fid in dentry\n"); 127 + return_fid = ERR_PTR(-EBADF); 129 128 } 130 129 130 + if(down_interruptible(&return_fid->lock)) 131 + return ERR_PTR(-EINTR); 132 + 131 133 return return_fid; 134 + } 135 + 136 + /** 137 + * v9fs_fid_clone - lookup the fid for a dentry, clone a private copy and release it 138 + * @dentry: dentry to look for fid in 139 + * 140 + * find a fid in the dentry and then clone to a new private fid 141 + * 142 + * TODO: only match fids that have the same uid as current user 143 + * 144 + */ 145 + 146 + struct v9fs_fid *v9fs_fid_clone(struct dentry *dentry) 147 + { 148 + struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode); 149 + struct v9fs_fid *base_fid, *new_fid = ERR_PTR(-EBADF); 150 + struct v9fs_fcall *fcall = NULL; 151 + int fid, err; 152 + 153 + base_fid = v9fs_fid_lookup(dentry); 154 + 155 + if(IS_ERR(base_fid)) 156 + return base_fid; 157 + 158 + if(base_fid) { /* clone fid */ 159 + fid = v9fs_get_idpool(&v9ses->fidpool); 160 + if (fid < 0) { 161 + 
eprintk(KERN_WARNING, "newfid fails!\n"); 162 + new_fid = ERR_PTR(-ENOSPC); 163 + goto Release_Fid; 164 + } 165 + 166 + err = v9fs_t_walk(v9ses, base_fid->fid, fid, NULL, &fcall); 167 + if (err < 0) { 168 + dprintk(DEBUG_ERROR, "clone walk didn't work\n"); 169 + v9fs_put_idpool(fid, &v9ses->fidpool); 170 + new_fid = ERR_PTR(err); 171 + goto Free_Fcall; 172 + } 173 + new_fid = v9fs_fid_create(v9ses, fid); 174 + if (new_fid == NULL) { 175 + dprintk(DEBUG_ERROR, "out of memory\n"); 176 + new_fid = ERR_PTR(-ENOMEM); 177 + } 178 + Free_Fcall: 179 + kfree(fcall); 180 + } 181 + 182 + Release_Fid: 183 + up(&base_fid->lock); 184 + return new_fid; 185 + } 186 + 187 + void v9fs_fid_clunk(struct v9fs_session_info *v9ses, struct v9fs_fid *fid) 188 + { 189 + v9fs_t_clunk(v9ses, fid->fid); 190 + v9fs_fid_destroy(fid); 132 191 }
+5
fs/9p/fid.h
··· 30 30 struct list_head list; /* list of fids associated with a dentry */ 31 31 struct list_head active; /* XXX - debug */ 32 32 33 + struct semaphore lock; 34 + 33 35 u32 fid; 34 36 unsigned char fidopen; /* set when fid is opened */ 35 37 unsigned char fidclunked; /* set when fid has already been clunked */ ··· 57 55 void v9fs_fid_destroy(struct v9fs_fid *fid); 58 56 struct v9fs_fid *v9fs_fid_create(struct v9fs_session_info *, int fid); 59 57 int v9fs_fid_insert(struct v9fs_fid *fid, struct dentry *dentry); 58 + struct v9fs_fid *v9fs_fid_clone(struct dentry *dentry); 59 + void v9fs_fid_clunk(struct v9fs_session_info *v9ses, struct v9fs_fid *fid); 60 +
+3 -1
fs/9p/mux.c
··· 132 132 v9fs_mux_poll_tasks[i].task = NULL; 133 133 134 134 v9fs_mux_wq = create_workqueue("v9fs"); 135 - if (!v9fs_mux_wq) 135 + if (!v9fs_mux_wq) { 136 + printk(KERN_WARNING "v9fs: mux: creating workqueue failed\n"); 136 137 return -ENOMEM; 138 + } 137 139 138 140 return 0; 139 141 }
+8 -3
fs/9p/v9fs.c
··· 457 457 458 458 v9fs_error_init(); 459 459 460 - printk(KERN_INFO "Installing v9fs 9P2000 file system support\n"); 460 + printk(KERN_INFO "Installing v9fs 9p2000 file system support\n"); 461 461 462 462 ret = v9fs_mux_global_init(); 463 - if (!ret) 463 + if (ret) { 464 + printk(KERN_WARNING "v9fs: starting mux failed\n"); 464 465 return ret; 466 + } 465 467 ret = register_filesystem(&v9fs_fs_type); 466 - if (!ret) 468 + if (ret) { 469 + printk(KERN_WARNING "v9fs: registering file system failed\n"); 467 470 v9fs_mux_global_exit(); 471 + } 472 + 468 473 return ret; 469 474 } 470 475
+7 -40
fs/9p/vfs_file.c
··· 55 55 struct v9fs_fid *vfid; 56 56 struct v9fs_fcall *fcall = NULL; 57 57 int omode; 58 - int fid = V9FS_NOFID; 59 58 int err; 60 59 61 60 dprintk(DEBUG_VFS, "inode: %p file: %p \n", inode, file); 62 61 63 - vfid = v9fs_fid_lookup(file->f_path.dentry); 64 - if (!vfid) { 65 - dprintk(DEBUG_ERROR, "Couldn't resolve fid from dentry\n"); 66 - return -EBADF; 67 - } 62 + vfid = v9fs_fid_clone(file->f_path.dentry); 63 + if (IS_ERR(vfid)) 64 + return PTR_ERR(vfid); 68 65 69 - fid = v9fs_get_idpool(&v9ses->fidpool); 70 - if (fid < 0) { 71 - eprintk(KERN_WARNING, "newfid fails!\n"); 72 - return -ENOSPC; 73 - } 74 - 75 - err = v9fs_t_walk(v9ses, vfid->fid, fid, NULL, &fcall); 76 - if (err < 0) { 77 - dprintk(DEBUG_ERROR, "rewalk didn't work\n"); 78 - if (fcall && fcall->id == RWALK) 79 - goto clunk_fid; 80 - else { 81 - v9fs_put_idpool(fid, &v9ses->fidpool); 82 - goto free_fcall; 83 - } 84 - } 85 - kfree(fcall); 86 - 87 - /* TODO: do special things for O_EXCL, O_NOFOLLOW, O_SYNC */ 88 - /* translate open mode appropriately */ 89 66 omode = v9fs_uflags2omode(file->f_flags); 90 - err = v9fs_t_open(v9ses, fid, omode, &fcall); 67 + err = v9fs_t_open(v9ses, vfid->fid, omode, &fcall); 91 68 if (err < 0) { 92 69 PRINT_FCALL_ERROR("open failed", fcall); 93 - goto clunk_fid; 94 - } 95 - 96 - vfid = kmalloc(sizeof(struct v9fs_fid), GFP_KERNEL); 97 - if (vfid == NULL) { 98 - dprintk(DEBUG_ERROR, "out of memory\n"); 99 - err = -ENOMEM; 100 - goto clunk_fid; 70 + goto Clunk_Fid; 101 71 } 102 72 103 73 file->private_data = vfid; 104 - vfid->fid = fid; 105 74 vfid->fidopen = 1; 106 75 vfid->fidclunked = 0; 107 76 vfid->iounit = fcall->params.ropen.iounit; ··· 81 112 82 113 return 0; 83 114 84 - clunk_fid: 85 - v9fs_t_clunk(v9ses, fid); 86 - 87 - free_fcall: 115 + Clunk_Fid: 116 + v9fs_fid_clunk(v9ses, vfid); 88 117 kfree(fcall); 89 118 90 119 return err;
+119 -87
fs/9p/vfs_inode.c
··· 416 416 sb = file_inode->i_sb; 417 417 v9ses = v9fs_inode2v9ses(file_inode); 418 418 v9fid = v9fs_fid_lookup(file); 419 - 420 - if (!v9fid) { 421 - dprintk(DEBUG_ERROR, 422 - "no v9fs_fid\n"); 423 - return -EBADF; 424 - } 419 + if(IS_ERR(v9fid)) 420 + return PTR_ERR(v9fid); 425 421 426 422 fid = v9fid->fid; 427 423 if (fid < 0) { ··· 429 433 result = v9fs_t_remove(v9ses, fid, &fcall); 430 434 if (result < 0) { 431 435 PRINT_FCALL_ERROR("remove fails", fcall); 436 + goto Error; 432 437 } 433 438 434 439 v9fs_put_idpool(fid, &v9ses->fidpool); 435 440 v9fs_fid_destroy(v9fid); 436 441 442 + Error: 437 443 kfree(fcall); 438 444 return result; 439 445 } ··· 471 473 inode = NULL; 472 474 vfid = NULL; 473 475 v9ses = v9fs_inode2v9ses(dir); 474 - dfid = v9fs_fid_lookup(dentry->d_parent); 475 - perm = unixmode2p9mode(v9ses, mode); 476 + dfid = v9fs_fid_clone(dentry->d_parent); 477 + if(IS_ERR(dfid)) { 478 + err = PTR_ERR(dfid); 479 + goto error; 480 + } 476 481 482 + perm = unixmode2p9mode(v9ses, mode); 477 483 if (nd && nd->flags & LOOKUP_OPEN) 478 484 flags = nd->intent.open.flags - 1; 479 485 else ··· 487 485 perm, v9fs_uflags2omode(flags), NULL, &fid, &qid, &iounit); 488 486 489 487 if (err) 490 - goto error; 488 + goto clunk_dfid; 491 489 492 490 vfid = v9fs_clone_walk(v9ses, dfid->fid, dentry); 491 + v9fs_fid_clunk(v9ses, dfid); 493 492 if (IS_ERR(vfid)) { 494 493 err = PTR_ERR(vfid); 495 494 vfid = NULL; ··· 528 525 529 526 return 0; 530 527 528 + clunk_dfid: 529 + v9fs_fid_clunk(v9ses, dfid); 530 + 531 531 error: 532 532 if (vfid) 533 533 v9fs_fid_destroy(vfid); ··· 557 551 inode = NULL; 558 552 vfid = NULL; 559 553 v9ses = v9fs_inode2v9ses(dir); 560 - dfid = v9fs_fid_lookup(dentry->d_parent); 554 + dfid = v9fs_fid_clone(dentry->d_parent); 555 + if(IS_ERR(dfid)) { 556 + err = PTR_ERR(dfid); 557 + goto error; 558 + } 559 + 561 560 perm = unixmode2p9mode(v9ses, mode | S_IFDIR); 562 561 563 562 err = v9fs_create(v9ses, dfid->fid, (char *) dentry->d_name.name, ··· 
570 559 571 560 if (err) { 572 561 dprintk(DEBUG_ERROR, "create error %d\n", err); 573 - goto error; 574 - } 575 - 576 - err = v9fs_t_clunk(v9ses, fid); 577 - if (err) { 578 - dprintk(DEBUG_ERROR, "clunk error %d\n", err); 579 - goto error; 562 + goto clean_up_dfid; 580 563 } 581 564 582 565 vfid = v9fs_clone_walk(v9ses, dfid->fid, dentry); 583 566 if (IS_ERR(vfid)) { 584 567 err = PTR_ERR(vfid); 585 568 vfid = NULL; 586 - goto error; 569 + goto clean_up_dfid; 587 570 } 588 571 572 + v9fs_fid_clunk(v9ses, dfid); 589 573 inode = v9fs_inode_from_fid(v9ses, vfid->fid, dir->i_sb); 590 574 if (IS_ERR(inode)) { 591 575 err = PTR_ERR(inode); 592 576 inode = NULL; 593 - goto error; 577 + goto clean_up_fids; 594 578 } 595 579 596 580 dentry->d_op = &v9fs_dentry_operations; 597 581 d_instantiate(dentry, inode); 598 582 return 0; 599 583 600 - error: 584 + clean_up_fids: 601 585 if (vfid) 602 586 v9fs_fid_destroy(vfid); 603 587 588 + clean_up_dfid: 589 + v9fs_fid_clunk(v9ses, dfid); 590 + 591 + error: 604 592 return err; 605 593 } 606 594 ··· 632 622 dentry->d_op = &v9fs_dentry_operations; 633 623 dirfid = v9fs_fid_lookup(dentry->d_parent); 634 624 635 - if (!dirfid) { 636 - dprintk(DEBUG_ERROR, "no dirfid\n"); 637 - return ERR_PTR(-EINVAL); 638 - } 625 + if(IS_ERR(dirfid)) 626 + return ERR_PTR(PTR_ERR(dirfid)); 639 627 640 628 dirfidnum = dirfid->fid; 641 - 642 - if (dirfidnum < 0) { 643 - dprintk(DEBUG_ERROR, "no dirfid for inode %p, #%lu\n", 644 - dir, dir->i_ino); 645 - return ERR_PTR(-EBADF); 646 - } 647 629 648 630 newfid = v9fs_get_idpool(&v9ses->fidpool); 649 631 if (newfid < 0) { 650 632 eprintk(KERN_WARNING, "newfid fails!\n"); 651 - return ERR_PTR(-ENOSPC); 633 + result = -ENOSPC; 634 + goto Release_Dirfid; 652 635 } 653 636 654 637 result = v9fs_t_walk(v9ses, dirfidnum, newfid, 655 638 (char *)dentry->d_name.name, &fcall); 639 + 640 + up(&dirfid->lock); 656 641 657 642 if (result < 0) { 658 643 if (fcall && fcall->id == RWALK) ··· 706 701 707 702 return NULL; 708 
703 709 - FreeFcall: 704 + Release_Dirfid: 705 + up(&dirfid->lock); 706 + 707 + FreeFcall: 710 708 kfree(fcall); 709 + 711 710 return ERR_PTR(result); 712 711 } 713 712 ··· 755 746 struct inode *old_inode = old_dentry->d_inode; 756 747 struct v9fs_session_info *v9ses = v9fs_inode2v9ses(old_inode); 757 748 struct v9fs_fid *oldfid = v9fs_fid_lookup(old_dentry); 758 - struct v9fs_fid *olddirfid = 759 - v9fs_fid_lookup(old_dentry->d_parent); 760 - struct v9fs_fid *newdirfid = 761 - v9fs_fid_lookup(new_dentry->d_parent); 749 + struct v9fs_fid *olddirfid; 750 + struct v9fs_fid *newdirfid; 762 751 struct v9fs_wstat wstat; 763 752 struct v9fs_fcall *fcall = NULL; 764 753 int fid = -1; ··· 766 759 767 760 dprintk(DEBUG_VFS, "\n"); 768 761 769 - if ((!oldfid) || (!olddirfid) || (!newdirfid)) { 770 - dprintk(DEBUG_ERROR, "problem with arguments\n"); 771 - return -EBADF; 762 + if(IS_ERR(oldfid)) 763 + return PTR_ERR(oldfid); 764 + 765 + olddirfid = v9fs_fid_clone(old_dentry->d_parent); 766 + if(IS_ERR(olddirfid)) { 767 + retval = PTR_ERR(olddirfid); 768 + goto Release_lock; 769 + } 770 + 771 + newdirfid = v9fs_fid_clone(new_dentry->d_parent); 772 + if(IS_ERR(newdirfid)) { 773 + retval = PTR_ERR(newdirfid); 774 + goto Clunk_olddir; 772 775 } 773 776 774 777 /* 9P can only handle file rename in the same directory */ 775 778 if (memcmp(&olddirfid->qid, &newdirfid->qid, sizeof(newdirfid->qid))) { 776 779 dprintk(DEBUG_ERROR, "old dir and new dir are different\n"); 777 - retval = -EPERM; 778 - goto FreeFcallnBail; 780 + retval = -EXDEV; 781 + goto Clunk_newdir; 779 782 } 780 783 781 784 fid = oldfid->fid; ··· 796 779 dprintk(DEBUG_ERROR, "no fid for old file #%lu\n", 797 780 old_inode->i_ino); 798 781 retval = -EBADF; 799 - goto FreeFcallnBail; 782 + goto Clunk_newdir; 800 783 } 801 784 802 785 v9fs_blank_wstat(&wstat); ··· 805 788 806 789 retval = v9fs_t_wstat(v9ses, fid, &wstat, &fcall); 807 790 808 - FreeFcallnBail: 809 791 if (retval < 0) 810 792 PRINT_FCALL_ERROR("wstat 
error", fcall); 811 793 812 794 kfree(fcall); 795 + 796 + Clunk_newdir: 797 + v9fs_fid_clunk(v9ses, newdirfid); 798 + 799 + Clunk_olddir: 800 + v9fs_fid_clunk(v9ses, olddirfid); 801 + 802 + Release_lock: 803 + up(&oldfid->lock); 804 + 813 805 return retval; 814 806 } 815 807 ··· 836 810 { 837 811 struct v9fs_fcall *fcall = NULL; 838 812 struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode); 839 - struct v9fs_fid *fid = v9fs_fid_lookup(dentry); 813 + struct v9fs_fid *fid = v9fs_fid_clone(dentry); 840 814 int err = -EPERM; 841 815 842 816 dprintk(DEBUG_VFS, "dentry: %p\n", dentry); 843 - if (!fid) { 844 - dprintk(DEBUG_ERROR, 845 - "couldn't find fid associated with dentry\n"); 846 - return -EBADF; 847 - } 817 + if(IS_ERR(fid)) 818 + return PTR_ERR(fid); 848 819 849 820 err = v9fs_t_stat(v9ses, fid->fid, &fcall); 850 821 ··· 854 831 } 855 832 856 833 kfree(fcall); 834 + v9fs_fid_clunk(v9ses, fid); 857 835 return err; 858 836 } 859 837 ··· 868 844 static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr) 869 845 { 870 846 struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode); 871 - struct v9fs_fid *fid = v9fs_fid_lookup(dentry); 847 + struct v9fs_fid *fid = v9fs_fid_clone(dentry); 872 848 struct v9fs_fcall *fcall = NULL; 873 849 struct v9fs_wstat wstat; 874 850 int res = -EPERM; 875 851 876 852 dprintk(DEBUG_VFS, "\n"); 877 - 878 - if (!fid) { 879 - dprintk(DEBUG_ERROR, 880 - "Couldn't find fid associated with dentry\n"); 881 - return -EBADF; 882 - } 853 + if(IS_ERR(fid)) 854 + return PTR_ERR(fid); 883 855 884 856 v9fs_blank_wstat(&wstat); 885 857 if (iattr->ia_valid & ATTR_MODE) ··· 907 887 if (res >= 0) 908 888 res = inode_setattr(dentry->d_inode, iattr); 909 889 890 + v9fs_fid_clunk(v9ses, fid); 910 891 return res; 911 892 } 912 893 ··· 1008 987 1009 988 struct v9fs_fcall *fcall = NULL; 1010 989 struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode); 1011 - struct v9fs_fid *fid = v9fs_fid_lookup(dentry); 990 + 
struct v9fs_fid *fid = v9fs_fid_clone(dentry); 1012 991 1013 - if (!fid) { 1014 - dprintk(DEBUG_ERROR, "could not resolve fid from dentry\n"); 1015 - retval = -EBADF; 1016 - goto FreeFcall; 1017 - } 992 + if(IS_ERR(fid)) 993 + return PTR_ERR(fid); 1018 994 1019 995 if (!v9ses->extended) { 1020 996 retval = -EBADF; 1021 997 dprintk(DEBUG_ERROR, "not extended\n"); 1022 - goto FreeFcall; 998 + goto ClunkFid; 1023 999 } 1024 1000 1025 1001 dprintk(DEBUG_VFS, " %s\n", dentry->d_name.name); ··· 1027 1009 goto FreeFcall; 1028 1010 } 1029 1011 1030 - if (!fcall) 1031 - return -EIO; 1012 + if (!fcall) { 1013 + retval = -EIO; 1014 + goto ClunkFid; 1015 + } 1032 1016 1033 1017 if (!(fcall->params.rstat.stat.mode & V9FS_DMSYMLINK)) { 1034 1018 retval = -EINVAL; ··· 1048 1028 fcall->params.rstat.stat.extension.str, buffer); 1049 1029 retval = buflen; 1050 1030 1051 - FreeFcall: 1031 + FreeFcall: 1052 1032 kfree(fcall); 1033 + 1034 + ClunkFid: 1035 + v9fs_fid_clunk(v9ses, fid); 1053 1036 1054 1037 return retval; 1055 1038 } ··· 1146 1123 int err; 1147 1124 u32 fid, perm; 1148 1125 struct v9fs_session_info *v9ses; 1149 - struct v9fs_fid *dfid, *vfid; 1150 - struct inode *inode; 1126 + struct v9fs_fid *dfid, *vfid = NULL; 1127 + struct inode *inode = NULL; 1151 1128 1152 - inode = NULL; 1153 - vfid = NULL; 1154 1129 v9ses = v9fs_inode2v9ses(dir); 1155 - dfid = v9fs_fid_lookup(dentry->d_parent); 1156 - perm = unixmode2p9mode(v9ses, mode); 1157 - 1158 1130 if (!v9ses->extended) { 1159 1131 dprintk(DEBUG_ERROR, "not extended\n"); 1160 1132 return -EPERM; 1161 1133 } 1162 1134 1135 + dfid = v9fs_fid_clone(dentry->d_parent); 1136 + if(IS_ERR(dfid)) { 1137 + err = PTR_ERR(dfid); 1138 + goto error; 1139 + } 1140 + 1141 + perm = unixmode2p9mode(v9ses, mode); 1142 + 1163 1143 err = v9fs_create(v9ses, dfid->fid, (char *) dentry->d_name.name, 1164 1144 perm, V9FS_OREAD, (char *) extension, &fid, NULL, NULL); 1165 1145 1166 1146 if (err) 1167 - goto error; 1147 + goto clunk_dfid; 1168 1148 
1169 1149 err = v9fs_t_clunk(v9ses, fid); 1170 1150 if (err) 1171 - goto error; 1151 + goto clunk_dfid; 1172 1152 1173 1153 vfid = v9fs_clone_walk(v9ses, dfid->fid, dentry); 1174 1154 if (IS_ERR(vfid)) { 1175 1155 err = PTR_ERR(vfid); 1176 1156 vfid = NULL; 1177 - goto error; 1157 + goto clunk_dfid; 1178 1158 } 1179 1159 1180 1160 inode = v9fs_inode_from_fid(v9ses, vfid->fid, dir->i_sb); 1181 1161 if (IS_ERR(inode)) { 1182 1162 err = PTR_ERR(inode); 1183 1163 inode = NULL; 1184 - goto error; 1164 + goto free_vfid; 1185 1165 } 1186 1166 1187 1167 dentry->d_op = &v9fs_dentry_operations; 1188 1168 d_instantiate(dentry, inode); 1189 1169 return 0; 1190 1170 1191 - error: 1192 - if (vfid) 1193 - v9fs_fid_destroy(vfid); 1171 + free_vfid: 1172 + v9fs_fid_destroy(vfid); 1194 1173 1174 + clunk_dfid: 1175 + v9fs_fid_clunk(v9ses, dfid); 1176 + 1177 + error: 1195 1178 return err; 1196 1179 1197 1180 } ··· 1238 1209 struct dentry *dentry) 1239 1210 { 1240 1211 int retval; 1212 + struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dir); 1241 1213 struct v9fs_fid *oldfid; 1242 1214 char *name; 1243 1215 1244 1216 dprintk(DEBUG_VFS, " %lu,%s,%s\n", dir->i_ino, dentry->d_name.name, 1245 1217 old_dentry->d_name.name); 1246 1218 1247 - oldfid = v9fs_fid_lookup(old_dentry); 1248 - if (!oldfid) { 1249 - dprintk(DEBUG_ERROR, "can't find oldfid\n"); 1250 - return -EPERM; 1251 - } 1219 + oldfid = v9fs_fid_clone(old_dentry); 1220 + if(IS_ERR(oldfid)) 1221 + return PTR_ERR(oldfid); 1252 1222 1253 1223 name = __getname(); 1254 - if (unlikely(!name)) 1255 - return -ENOMEM; 1224 + if (unlikely(!name)) { 1225 + retval = -ENOMEM; 1226 + goto clunk_fid; 1227 + } 1256 1228 1257 1229 sprintf(name, "%d\n", oldfid->fid); 1258 1230 retval = v9fs_vfs_mkspecial(dir, dentry, V9FS_DMLINK, name); 1259 1231 __putname(name); 1260 1232 1233 + clunk_fid: 1234 + v9fs_fid_clunk(v9ses, oldfid); 1261 1235 return retval; 1262 1236 } 1263 1237
+9 -11
fs/aio.c
··· 298 298 struct task_struct *tsk = current; 299 299 DECLARE_WAITQUEUE(wait, tsk); 300 300 301 + spin_lock_irq(&ctx->ctx_lock); 301 302 if (!ctx->reqs_active) 302 - return; 303 + goto out; 303 304 304 305 add_wait_queue(&ctx->wait, &wait); 305 306 set_task_state(tsk, TASK_UNINTERRUPTIBLE); 306 307 while (ctx->reqs_active) { 308 + spin_unlock_irq(&ctx->ctx_lock); 307 309 schedule(); 308 310 set_task_state(tsk, TASK_UNINTERRUPTIBLE); 311 + spin_lock_irq(&ctx->ctx_lock); 309 312 } 310 313 __set_task_state(tsk, TASK_RUNNING); 311 314 remove_wait_queue(&ctx->wait, &wait); 315 + 316 + out: 317 + spin_unlock_irq(&ctx->ctx_lock); 312 318 } 313 319 314 320 /* wait_on_sync_kiocb: ··· 430 424 ring = kmap_atomic(ctx->ring_info.ring_pages[0], KM_USER0); 431 425 if (ctx->reqs_active < aio_ring_avail(&ctx->ring_info, ring)) { 432 426 list_add(&req->ki_list, &ctx->active_reqs); 433 - get_ioctx(ctx); 434 427 ctx->reqs_active++; 435 428 okay = 1; 436 429 } ··· 541 536 spin_lock_irq(&ctx->ctx_lock); 542 537 ret = __aio_put_req(ctx, req); 543 538 spin_unlock_irq(&ctx->ctx_lock); 544 - if (ret) 545 - put_ioctx(ctx); 546 539 return ret; 547 540 } 548 541 ··· 782 779 */ 783 780 iocb->ki_users++; /* grab extra reference */ 784 781 aio_run_iocb(iocb); 785 - if (__aio_put_req(ctx, iocb)) /* drop extra ref */ 786 - put_ioctx(ctx); 782 + __aio_put_req(ctx, iocb); 787 783 } 788 784 if (!list_empty(&ctx->run_list)) 789 785 return 1; ··· 999 997 /* everything turned out well, dispose of the aiocb. */ 1000 998 ret = __aio_put_req(ctx, iocb); 1001 999 1002 - spin_unlock_irqrestore(&ctx->ctx_lock, flags); 1003 - 1004 1000 if (waitqueue_active(&ctx->wait)) 1005 1001 wake_up(&ctx->wait); 1006 1002 1007 - if (ret) 1008 - put_ioctx(ctx); 1009 - 1003 + spin_unlock_irqrestore(&ctx->ctx_lock, flags); 1010 1004 return ret; 1011 1005 } 1012 1006
+48 -3
fs/binfmt_elf.c
··· 682 682 retval = PTR_ERR(interpreter); 683 683 if (IS_ERR(interpreter)) 684 684 goto out_free_interp; 685 + 686 + /* 687 + * If the binary is not readable then enforce 688 + * mm->dumpable = 0 regardless of the interpreter's 689 + * permissions. 690 + */ 691 + if (file_permission(interpreter, MAY_READ) < 0) 692 + bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP; 693 + 685 694 retval = kernel_read(interpreter, 0, bprm->buf, 686 695 BINPRM_BUF_SIZE); 687 696 if (retval != BINPRM_BUF_SIZE) { ··· 1187 1178 */ 1188 1179 static int maydump(struct vm_area_struct *vma) 1189 1180 { 1181 + /* The vma can be set up to tell us the answer directly. */ 1182 + if (vma->vm_flags & VM_ALWAYSDUMP) 1183 + return 1; 1184 + 1190 1185 /* Do not dump I/O mapped devices or special mappings */ 1191 1186 if (vma->vm_flags & (VM_IO | VM_RESERVED)) 1192 1187 return 0; ··· 1437 1424 return sz; 1438 1425 } 1439 1426 1427 + static struct vm_area_struct *first_vma(struct task_struct *tsk, 1428 + struct vm_area_struct *gate_vma) 1429 + { 1430 + struct vm_area_struct *ret = tsk->mm->mmap; 1431 + 1432 + if (ret) 1433 + return ret; 1434 + return gate_vma; 1435 + } 1436 + /* 1437 + * Helper function for iterating across a vma list. It ensures that the caller 1438 + * will visit `gate_vma' prior to terminating the search. 
1439 + */ 1440 + static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma, 1441 + struct vm_area_struct *gate_vma) 1442 + { 1443 + struct vm_area_struct *ret; 1444 + 1445 + ret = this_vma->vm_next; 1446 + if (ret) 1447 + return ret; 1448 + if (this_vma == gate_vma) 1449 + return NULL; 1450 + return gate_vma; 1451 + } 1452 + 1440 1453 /* 1441 1454 * Actual dumper 1442 1455 * ··· 1478 1439 int segs; 1479 1440 size_t size = 0; 1480 1441 int i; 1481 - struct vm_area_struct *vma; 1442 + struct vm_area_struct *vma, *gate_vma; 1482 1443 struct elfhdr *elf = NULL; 1483 1444 loff_t offset = 0, dataoff, foffset; 1484 1445 unsigned long limit = current->signal->rlim[RLIMIT_CORE].rlim_cur; ··· 1564 1525 segs += ELF_CORE_EXTRA_PHDRS; 1565 1526 #endif 1566 1527 1528 + gate_vma = get_gate_vma(current); 1529 + if (gate_vma != NULL) 1530 + segs++; 1531 + 1567 1532 /* Set up header */ 1568 1533 fill_elf_header(elf, segs + 1); /* including notes section */ 1569 1534 ··· 1635 1592 dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE); 1636 1593 1637 1594 /* Write program headers for segments dump */ 1638 - for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) { 1595 + for (vma = first_vma(current, gate_vma); vma != NULL; 1596 + vma = next_vma(vma, gate_vma)) { 1639 1597 struct elf_phdr phdr; 1640 1598 size_t sz; 1641 1599 ··· 1685 1641 /* Align to page */ 1686 1642 DUMP_SEEK(dataoff - foffset); 1687 1643 1688 - for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) { 1644 + for (vma = first_vma(current, gate_vma); vma != NULL; 1645 + vma = next_vma(vma, gate_vma)) { 1689 1646 unsigned long addr; 1690 1647 1691 1648 if (!maydump(vma))
+8
fs/binfmt_elf_fdpic.c
··· 234 234 goto error; 235 235 } 236 236 237 + /* 238 + * If the binary is not readable then enforce 239 + * mm->dumpable = 0 regardless of the interpreter's 240 + * permissions. 241 + */ 242 + if (file_permission(interpreter, MAY_READ) < 0) 243 + bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP; 244 + 237 245 retval = kernel_read(interpreter, 0, bprm->buf, 238 246 BINPRM_BUF_SIZE); 239 247 if (retval < 0)
+50 -1
fs/block_dev.c
··· 129 129 return 0; 130 130 } 131 131 132 + static int 133 + blkdev_get_blocks(struct inode *inode, sector_t iblock, 134 + struct buffer_head *bh, int create) 135 + { 136 + sector_t end_block = max_block(I_BDEV(inode)); 137 + unsigned long max_blocks = bh->b_size >> inode->i_blkbits; 138 + 139 + if ((iblock + max_blocks) > end_block) { 140 + max_blocks = end_block - iblock; 141 + if ((long)max_blocks <= 0) { 142 + if (create) 143 + return -EIO; /* write fully beyond EOF */ 144 + /* 145 + * It is a read which is fully beyond EOF. We return 146 + * a !buffer_mapped buffer 147 + */ 148 + max_blocks = 0; 149 + } 150 + } 151 + 152 + bh->b_bdev = I_BDEV(inode); 153 + bh->b_blocknr = iblock; 154 + bh->b_size = max_blocks << inode->i_blkbits; 155 + if (max_blocks) 156 + set_buffer_mapped(bh); 157 + return 0; 158 + } 159 + 160 + static ssize_t 161 + blkdev_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov, 162 + loff_t offset, unsigned long nr_segs) 163 + { 164 + struct file *file = iocb->ki_filp; 165 + struct inode *inode = file->f_mapping->host; 166 + 167 + return blockdev_direct_IO_no_locking(rw, iocb, inode, I_BDEV(inode), 168 + iov, offset, nr_segs, blkdev_get_blocks, NULL); 169 + } 170 + 171 + #if 0 132 172 static int blk_end_aio(struct bio *bio, unsigned int bytes_done, int error) 133 173 { 134 174 struct kiocb *iocb = bio->bi_private; ··· 186 146 iocb->ki_nbytes = -EIO; 187 147 188 148 if (atomic_dec_and_test(bio_count)) { 189 - if (iocb->ki_nbytes < 0) 149 + if ((long)iocb->ki_nbytes < 0) 190 150 aio_complete(iocb, iocb->ki_nbytes, 0); 191 151 else 192 152 aio_complete(iocb, iocb->ki_left, 0); ··· 228 188 pvec->idx = 0; 229 189 } 230 190 return pvec->page[pvec->idx++]; 191 + } 192 + 193 + /* return a page back to pvec array */ 194 + static void blk_unget_page(struct page *page, struct pvec *pvec) 195 + { 196 + pvec->page[--pvec->idx] = page; 231 197 } 232 198 233 199 static ssize_t ··· 324 278 count = min(count, nbytes); 325 279 goto same_bio; 326 280 
} 281 + } else { 282 + blk_unget_page(page, &pvec); 327 283 } 328 284 329 285 /* bio is ready, submit it */ ··· 363 315 return PTR_ERR(page); 364 316 goto completion; 365 317 } 318 + #endif 366 319 367 320 static int blkdev_writepage(struct page *page, struct writeback_control *wbc) 368 321 {
+18 -1
fs/buffer.c
··· 2834 2834 int ret = 0; 2835 2835 2836 2836 BUG_ON(!PageLocked(page)); 2837 - if (PageDirty(page) || PageWriteback(page)) 2837 + if (PageWriteback(page)) 2838 2838 return 0; 2839 2839 2840 2840 if (mapping == NULL) { /* can this still happen? */ ··· 2844 2844 2845 2845 spin_lock(&mapping->private_lock); 2846 2846 ret = drop_buffers(page, &buffers_to_free); 2847 + 2848 + /* 2849 + * If the filesystem writes its buffers by hand (eg ext3) 2850 + * then we can have clean buffers against a dirty page. We 2851 + * clean the page here; otherwise the VM will never notice 2852 + * that the filesystem did any IO at all. 2853 + * 2854 + * Also, during truncate, discard_buffer will have marked all 2855 + * the page's buffers clean. We discover that here and clean 2856 + * the page also. 2857 + * 2858 + * private_lock must be held over this entire operation in order 2859 + * to synchronise against __set_page_dirty_buffers and prevent the 2860 + * dirty bit from being lost. 2861 + */ 2862 + if (ret) 2863 + cancel_dirty_page(page, PAGE_CACHE_SIZE); 2847 2864 spin_unlock(&mapping->private_lock); 2848 2865 out: 2849 2866 if (buffers_to_free) {
+4
fs/cifs/CHANGES
··· 1 + Version 1.47 2 + ------------ 3 + Fix oops in list_del during mount caused by unaligned string. 4 + 1 5 Version 1.46 2 6 ------------ 3 7 Support deep tree mounts. Better support OS/2, Win9x (DOS) time stamps.
+2 -2
fs/cifs/cifs_debug.c
··· 143 143 ses = list_entry(tmp, struct cifsSesInfo, cifsSessionList); 144 144 if((ses->serverDomain == NULL) || (ses->serverOS == NULL) || 145 145 (ses->serverNOS == NULL)) { 146 - buf += sprintf("\nentry for %s not fully displayed\n\t", 147 - ses->serverName); 146 + buf += sprintf(buf, "\nentry for %s not fully " 147 + "displayed\n\t", ses->serverName); 148 148 149 149 } else { 150 150 length =
+1 -1
fs/cifs/cifsfs.h
··· 100 100 extern ssize_t cifs_listxattr(struct dentry *, char *, size_t); 101 101 extern int cifs_ioctl (struct inode * inode, struct file * filep, 102 102 unsigned int command, unsigned long arg); 103 - #define CIFS_VERSION "1.46" 103 + #define CIFS_VERSION "1.47" 104 104 #endif /* _CIFSFS_H */
+2 -6
fs/cifs/misc.c
··· 71 71 { 72 72 struct cifsSesInfo *ret_buf; 73 73 74 - ret_buf = 75 - (struct cifsSesInfo *) kzalloc(sizeof (struct cifsSesInfo), 76 - GFP_KERNEL); 74 + ret_buf = kzalloc(sizeof (struct cifsSesInfo), GFP_KERNEL); 77 75 if (ret_buf) { 78 76 write_lock(&GlobalSMBSeslock); 79 77 atomic_inc(&sesInfoAllocCount); ··· 107 109 tconInfoAlloc(void) 108 110 { 109 111 struct cifsTconInfo *ret_buf; 110 - ret_buf = 111 - (struct cifsTconInfo *) kzalloc(sizeof (struct cifsTconInfo), 112 - GFP_KERNEL); 112 + ret_buf = kzalloc(sizeof (struct cifsTconInfo), GFP_KERNEL); 113 113 if (ret_buf) { 114 114 write_lock(&GlobalSMBSeslock); 115 115 atomic_inc(&tconInfoAllocCount);
+8 -5
fs/cifs/sess.c
··· 182 182 cFYI(1,("bleft %d",bleft)); 183 183 184 184 185 - /* word align, if bytes remaining is not even */ 186 - if(bleft % 2) { 187 - bleft--; 188 - data++; 189 - } 185 + /* SMB header is unaligned, so cifs servers word align start of 186 + Unicode strings */ 187 + data++; 188 + bleft--; /* Windows servers do not always double null terminate 189 + their final Unicode string - in which case we 190 + now will not attempt to decode the byte of junk 191 + which follows it */ 192 + 190 193 words_left = bleft / 2; 191 194 192 195 /* save off server operating system */
+12 -1
fs/fs-writeback.c
··· 251 251 WARN_ON(inode->i_state & I_WILL_FREE); 252 252 253 253 if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_LOCK)) { 254 + struct address_space *mapping = inode->i_mapping; 255 + int ret; 256 + 254 257 list_move(&inode->i_list, &inode->i_sb->s_dirty); 255 - return 0; 258 + 259 + /* 260 + * Even if we don't actually write the inode itself here, 261 + * we can at least start some of the data writeout.. 262 + */ 263 + spin_unlock(&inode_lock); 264 + ret = do_writepages(mapping, wbc); 265 + spin_lock(&inode_lock); 266 + return ret; 256 267 } 257 268 258 269 /*
+4
fs/fuse/control.c
··· 193 193 194 194 static void fuse_ctl_kill_sb(struct super_block *sb) 195 195 { 196 + struct fuse_conn *fc; 197 + 196 198 mutex_lock(&fuse_mutex); 197 199 fuse_control_sb = NULL; 200 + list_for_each_entry(fc, &fuse_conn_list, entry) 201 + fc->ctl_ndents = 0; 198 202 mutex_unlock(&fuse_mutex); 199 203 200 204 kill_litter_super(sb);
+1 -1
fs/hostfs/hostfs.h
··· 76 76 extern int unlink_file(const char *file); 77 77 extern int do_mkdir(const char *file, int mode); 78 78 extern int do_rmdir(const char *file); 79 - extern int do_mknod(const char *file, int mode, int dev); 79 + extern int do_mknod(const char *file, int mode, unsigned int major, unsigned int minor); 80 80 extern int link_file(const char *from, const char *to); 81 81 extern int do_readlink(char *file, char *buf, int size); 82 82 extern int rename_file(char *from, char *to);
+1 -1
fs/hostfs/hostfs_kern.c
··· 755 755 goto out_put; 756 756 757 757 init_special_inode(inode, mode, dev); 758 - err = do_mknod(name, mode, dev); 758 + err = do_mknod(name, mode, MAJOR(dev), MINOR(dev)); 759 759 if(err) 760 760 goto out_free; 761 761
+2 -2
fs/hostfs/hostfs_user.c
··· 295 295 return(0); 296 296 } 297 297 298 - int do_mknod(const char *file, int mode, int dev) 298 + int do_mknod(const char *file, int mode, unsigned int major, unsigned int minor) 299 299 { 300 300 int err; 301 301 302 - err = mknod(file, mode, dev); 302 + err = mknod(file, mode, makedev(major, minor)); 303 303 if(err) return(-errno); 304 304 return(0); 305 305 }
+2 -2
fs/lockd/clntlock.c
··· 176 176 lock_kernel(); 177 177 lockd_up(0); /* note: this cannot fail as lockd is already running */ 178 178 179 - dprintk("lockd: reclaiming locks for host %s", host->h_name); 179 + dprintk("lockd: reclaiming locks for host %s\n", host->h_name); 180 180 181 181 restart: 182 182 nsmstate = host->h_nsmstate; ··· 206 206 207 207 host->h_reclaiming = 0; 208 208 up_write(&host->h_rwsem); 209 - dprintk("NLM: done reclaiming locks for host %s", host->h_name); 209 + dprintk("NLM: done reclaiming locks for host %s\n", host->h_name); 210 210 211 211 /* Now, wake up all processes that sleep on a blocked lock */ 212 212 list_for_each_entry(block, &nlm_blocked, b_list) {
+1 -1
fs/nfs/dir.c
··· 532 532 533 533 lock_kernel(); 534 534 535 - res = nfs_revalidate_mapping(inode, filp->f_mapping); 535 + res = nfs_revalidate_mapping_nolock(inode, filp->f_mapping); 536 536 if (res < 0) { 537 537 unlock_kernel(); 538 538 return res;
+3 -2
fs/nfs/file.c
··· 434 434 BUG(); 435 435 } 436 436 if (res < 0) 437 - printk(KERN_WARNING "%s: VFS is out of sync with lock manager!\n", 438 - __FUNCTION__); 437 + dprintk(KERN_WARNING "%s: VFS is out of sync with lock manager" 438 + " - error %d!\n", 439 + __FUNCTION__, res); 439 440 return res; 440 441 } 441 442
+67 -30
fs/nfs/inode.c
··· 665 665 return __nfs_revalidate_inode(server, inode); 666 666 } 667 667 668 + static int nfs_invalidate_mapping_nolock(struct inode *inode, struct address_space *mapping) 669 + { 670 + struct nfs_inode *nfsi = NFS_I(inode); 671 + 672 + if (mapping->nrpages != 0) { 673 + int ret = invalidate_inode_pages2(mapping); 674 + if (ret < 0) 675 + return ret; 676 + } 677 + spin_lock(&inode->i_lock); 678 + nfsi->cache_validity &= ~NFS_INO_INVALID_DATA; 679 + if (S_ISDIR(inode->i_mode)) { 680 + memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf)); 681 + /* This ensures we revalidate child dentries */ 682 + nfsi->cache_change_attribute = jiffies; 683 + } 684 + spin_unlock(&inode->i_lock); 685 + nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE); 686 + dfprintk(PAGECACHE, "NFS: (%s/%Ld) data cache invalidated\n", 687 + inode->i_sb->s_id, (long long)NFS_FILEID(inode)); 688 + return 0; 689 + } 690 + 691 + static int nfs_invalidate_mapping(struct inode *inode, struct address_space *mapping) 692 + { 693 + int ret = 0; 694 + 695 + mutex_lock(&inode->i_mutex); 696 + if (NFS_I(inode)->cache_validity & NFS_INO_INVALID_DATA) { 697 + ret = nfs_sync_mapping(mapping); 698 + if (ret == 0) 699 + ret = nfs_invalidate_mapping_nolock(inode, mapping); 700 + } 701 + mutex_unlock(&inode->i_mutex); 702 + return ret; 703 + } 704 + 705 + /** 706 + * nfs_revalidate_mapping_nolock - Revalidate the pagecache 707 + * @inode - pointer to host inode 708 + * @mapping - pointer to mapping 709 + */ 710 + int nfs_revalidate_mapping_nolock(struct inode *inode, struct address_space *mapping) 711 + { 712 + struct nfs_inode *nfsi = NFS_I(inode); 713 + int ret = 0; 714 + 715 + if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE) 716 + || nfs_attribute_timeout(inode) || NFS_STALE(inode)) { 717 + ret = __nfs_revalidate_inode(NFS_SERVER(inode), inode); 718 + if (ret < 0) 719 + goto out; 720 + } 721 + if (nfsi->cache_validity & NFS_INO_INVALID_DATA) 722 + ret = nfs_invalidate_mapping_nolock(inode, mapping); 723 + out: 724 + return ret; 725 + } 726 + 668 727 /** 669 728 * nfs_revalidate_mapping - Revalidate the pagecache 670 729 * @inode - pointer to host inode 671 730 * @mapping - pointer to mapping 731 + * 732 + * This version of the function will take the inode->i_mutex and attempt to 733 + * flush out all dirty data if it needs to invalidate the page cache. 672 734 */ 673 735 int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping) 674 736 { 675 737 struct nfs_inode *nfsi = NFS_I(inode); 676 738 int ret = 0; 677 739 678 - if (NFS_STALE(inode)) 679 - ret = -ESTALE; 680 740 if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE) 681 - || nfs_attribute_timeout(inode)) 741 + || nfs_attribute_timeout(inode) || NFS_STALE(inode)) { 682 742 ret = __nfs_revalidate_inode(NFS_SERVER(inode), inode); 683 - if (ret < 0) 684 - goto out; 685 - 686 - if (nfsi->cache_validity & NFS_INO_INVALID_DATA) { 687 - if (mapping->nrpages != 0) { 688 - if (S_ISREG(inode->i_mode)) { 689 - ret = nfs_sync_mapping(mapping); 690 - if (ret < 0) 691 - goto out; 692 - } 693 - ret = invalidate_inode_pages2(mapping); 694 - if (ret < 0) 695 - goto out; 696 - } 697 - spin_lock(&inode->i_lock); 698 - nfsi->cache_validity &= ~NFS_INO_INVALID_DATA; 699 - if (S_ISDIR(inode->i_mode)) { 700 - memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf)); 701 - /* This ensures we revalidate child dentries */ 702 - nfsi->cache_change_attribute = jiffies; 703 - } 704 - spin_unlock(&inode->i_lock); 705 - 706 - nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE); 707 - dfprintk(PAGECACHE, "NFS: (%s/%Ld) data cache invalidated\n", 708 - inode->i_sb->s_id, 709 - (long long)NFS_FILEID(inode)); 743 + if (ret < 0) 744 + goto out; 710 745 } 746 + if (nfsi->cache_validity & NFS_INO_INVALID_DATA) 747 + ret = nfs_invalidate_mapping(inode, mapping); 711 748 out: 712 749 return ret; 713 750 }
+3 -1
fs/nfs/symlink.c
··· 50 50 { 51 51 struct inode *inode = dentry->d_inode; 52 52 struct page *page; 53 - void *err = ERR_PTR(nfs_revalidate_mapping(inode, inode->i_mapping)); 53 + void *err; 54 + 55 + err = ERR_PTR(nfs_revalidate_mapping_nolock(inode, inode->i_mapping)); 54 56 if (err) 55 57 goto read_failed; 56 58 page = read_cache_page(&inode->i_data, 0,
-1
fs/nfsd/export.c
··· 35 35 #include <linux/lockd/bind.h> 36 36 37 37 #define NFSDDBG_FACILITY NFSDDBG_EXPORT 38 - #define NFSD_PARANOIA 1 39 38 40 39 typedef struct auth_domain svc_client; 41 40 typedef struct svc_export svc_export;
+5 -4
fs/nfsd/nfs3xdr.c
··· 990 990 } 991 991 992 992 int 993 - nfs3svc_encode_entry(struct readdir_cd *cd, const char *name, 994 - int namlen, loff_t offset, ino_t ino, unsigned int d_type) 993 + nfs3svc_encode_entry(void *cd, const char *name, 994 + int namlen, loff_t offset, u64 ino, unsigned int d_type) 995 995 { 996 996 return encode_entry(cd, name, namlen, offset, ino, d_type, 0); 997 997 } 998 998 999 999 int 1000 - nfs3svc_encode_entry_plus(struct readdir_cd *cd, const char *name, 1001 - int namlen, loff_t offset, ino_t ino, unsigned int d_type) 1000 + nfs3svc_encode_entry_plus(void *cd, const char *name, 1001 + int namlen, loff_t offset, u64 ino, 1002 + unsigned int d_type) 1002 1003 { 1003 1004 return encode_entry(cd, name, namlen, offset, ino, d_type, 1); 1004 1005 }
+3 -2
fs/nfsd/nfs4xdr.c
··· 1880 1880 } 1881 1881 1882 1882 static int 1883 - nfsd4_encode_dirent(struct readdir_cd *ccd, const char *name, int namlen, 1884 - loff_t offset, ino_t ino, unsigned int d_type) 1883 + nfsd4_encode_dirent(void *ccdv, const char *name, int namlen, 1884 + loff_t offset, u64 ino, unsigned int d_type) 1885 1885 { 1886 + struct readdir_cd *ccd = ccdv; 1886 1887 struct nfsd4_readdir *cd = container_of(ccd, struct nfsd4_readdir, common); 1887 1888 int buflen; 1888 1889 __be32 *p = cd->buffer;
+6 -8
fs/nfsd/nfsfh.c
··· 24 24 #include <linux/nfsd/nfsd.h> 25 25 26 26 #define NFSDDBG_FACILITY NFSDDBG_FH 27 - #define NFSD_PARANOIA 1 28 - /* #define NFSD_DEBUG_VERBOSE 1 */ 29 27 30 28 31 29 static int nfsd_nr_verified; ··· 228 230 error = nfserrno(PTR_ERR(dentry)); 229 231 goto out; 230 232 } 231 - #ifdef NFSD_PARANOIA 233 + 232 234 if (S_ISDIR(dentry->d_inode->i_mode) && 233 235 (dentry->d_flags & DCACHE_DISCONNECTED)) { 234 236 printk("nfsd: find_fh_dentry returned a DISCONNECTED directory: %s/%s\n", 235 237 dentry->d_parent->d_name.name, dentry->d_name.name); 236 238 } 237 - #endif 238 239 239 240 fhp->fh_dentry = dentry; 240 241 fhp->fh_export = exp; ··· 264 267 /* Finally, check access permissions. */ 265 268 error = nfsd_permission(exp, dentry, access); 266 269 267 - #ifdef NFSD_PARANOIA_EXTREME 268 270 if (error) { 269 - printk("fh_verify: %s/%s permission failure, acc=%x, error=%d\n", 270 - dentry->d_parent->d_name.name, dentry->d_name.name, access, (error >> 24)); 271 + dprintk("fh_verify: %s/%s permission failure, " 272 + "acc=%x, error=%d\n", 273 + dentry->d_parent->d_name.name, 274 + dentry->d_name.name, 275 + access, ntohl(error)); 271 276 } 272 - #endif 273 277 out: 274 278 if (exp && !IS_ERR(exp)) 275 279 exp_put(exp);
+4 -4
fs/nfsd/nfssvc.c
··· 72 72 .pg_prog = NFS_ACL_PROGRAM, 73 73 .pg_nvers = NFSD_ACL_NRVERS, 74 74 .pg_vers = nfsd_acl_versions, 75 - .pg_name = "nfsd", 75 + .pg_name = "nfsacl", 76 76 .pg_class = "nfsd", 77 77 .pg_stats = &nfsd_acl_svcstats, 78 78 .pg_authenticate = &svc_set_client, ··· 118 118 switch(change) { 119 119 case NFSD_SET: 120 120 nfsd_versions[vers] = nfsd_version[vers]; 121 - break; 122 121 #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL) 123 122 if (vers < NFSD_ACL_NRVERS) 124 - nfsd_acl_version[vers] = nfsd_acl_version[vers]; 123 + nfsd_acl_versions[vers] = nfsd_acl_version[vers]; 125 124 #endif 125 + break; 126 126 case NFSD_CLEAR: 127 127 nfsd_versions[vers] = NULL; 128 128 #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL) 129 129 if (vers < NFSD_ACL_NRVERS) 130 - nfsd_acl_version[vers] = NULL; 130 + nfsd_acl_versions[vers] = NULL; 131 131 #endif 132 132 break; 133 133 case NFSD_TEST:
+3 -2
fs/nfsd/nfsxdr.c
··· 462 462 } 463 463 464 464 int 465 - nfssvc_encode_entry(struct readdir_cd *ccd, const char *name, 466 - int namlen, loff_t offset, ino_t ino, unsigned int d_type) 465 + nfssvc_encode_entry(void *ccdv, const char *name, 466 + int namlen, loff_t offset, u64 ino, unsigned int d_type) 467 467 { 468 + struct readdir_cd *ccd = ccdv; 468 469 struct nfsd_readdirres *cd = container_of(ccd, struct nfsd_readdirres, common); 469 470 __be32 *p = cd->buffer; 470 471 int buflen, slen;
+9 -20
fs/nfsd/vfs.c
··· 59 59 #include <asm/uaccess.h> 60 60 61 61 #define NFSDDBG_FACILITY NFSDDBG_FILEOP 62 - #define NFSD_PARANOIA 63 62 64 63 65 64 /* We must ignore files (but only files) which might have mandatory ··· 821 822 rqstp->rq_res.page_len = size; 822 823 } else if (page != pp[-1]) { 823 824 get_page(page); 824 - put_page(*pp); 825 + if (*pp) 826 + put_page(*pp); 825 827 *pp = page; 826 828 rqstp->rq_resused++; 827 829 rqstp->rq_res.page_len += size; ··· 1244 1244 __be32 err; 1245 1245 int host_err; 1246 1246 __u32 v_mtime=0, v_atime=0; 1247 - int v_mode=0; 1248 1247 1249 1248 err = nfserr_perm; 1250 1249 if (!flen) ··· 1280 1281 goto out; 1281 1282 1282 1283 if (createmode == NFS3_CREATE_EXCLUSIVE) { 1283 - /* while the verifier would fit in mtime+atime, 1284 - * solaris7 gets confused (bugid 4218508) if these have 1285 - * the high bit set, so we use the mode as well 1284 + /* solaris7 gets confused (bugid 4218508) if these have 1285 + * the high bit set, so just clear the high bits. 1286 1286 */ 1287 1287 v_mtime = verifier[0]&0x7fffffff; 1288 1288 v_atime = verifier[1]&0x7fffffff; 1289 - v_mode = S_IFREG 1290 - | ((verifier[0]&0x80000000) >> (32-7)) /* u+x */ 1291 - | ((verifier[1]&0x80000000) >> (32-9)) /* u+r */ 1292 - ; 1293 1289 1294 1290 if (dchild->d_inode) { ··· 1312 1318 case NFS3_CREATE_EXCLUSIVE: 1313 1319 if ( dchild->d_inode->i_mtime.tv_sec == v_mtime 1314 1320 && dchild->d_inode->i_atime.tv_sec == v_atime 1315 - && dchild->d_inode->i_mode == v_mode 1316 1321 && dchild->d_inode->i_size == 0 ) 1317 1322 break; 1318 1323 /* fallthru */ ··· 1333 1340 } 1334 1341 1335 1342 if (createmode == NFS3_CREATE_EXCLUSIVE) { 1336 - /* Cram the verifier into atime/mtime/mode */ 1343 + /* Cram the verifier into atime/mtime */ 1337 1344 iap->ia_valid = ATTR_MTIME|ATTR_ATIME 1338 - | ATTR_MTIME_SET|ATTR_ATIME_SET 1339 - | ATTR_MODE; 1345 + | ATTR_MTIME_SET|ATTR_ATIME_SET; 1340 1346 /* XXX someone who knows this better please fix it for nsec */ 1341 1347 iap->ia_mtime.tv_sec = v_mtime; 1342 1348 iap->ia_atime.tv_sec = v_atime; 1343 1349 iap->ia_mtime.tv_nsec = 0; 1344 1350 iap->ia_atime.tv_nsec = 0; 1345 - iap->ia_mode = v_mode; 1346 1351 1347 1352 /* Set file attributes. 1349 - * Mode has already been set but we might need to reset it 1350 - * for CREATE_EXCLUSIVE 1351 1354 * Irix appears to send along the gid when it tries to 1352 1355 * implement setgid directories via NFS. Clear out all that cruft. 1353 1356 */ 1354 1357 set_attr: 1355 - if ((iap->ia_valid &= ~(ATTR_UID|ATTR_GID)) != 0) { 1358 + if ((iap->ia_valid &= ~(ATTR_UID|ATTR_GID|ATTR_MODE)) != 0) { 1356 1359 __be32 err2 = nfsd_setattr(rqstp, resfhp, iap, 0, (time_t)0); 1357 1360 if (err2) 1358 1361 err = err2; ··· 1715 1726 */ 1716 1727 __be32 1717 1728 nfsd_readdir(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t *offsetp, 1718 - struct readdir_cd *cdp, encode_dent_fn func) 1729 + struct readdir_cd *cdp, filldir_t func) 1719 1730 { 1720 1731 __be32 err; 1721 1732 int host_err; ··· 1740 1751 1741 1752 do { 1742 1753 cdp->err = nfserr_eof; /* will be cleared on successful read */ 1743 - host_err = vfs_readdir(file, (filldir_t) func, cdp); 1754 + host_err = vfs_readdir(file, func, cdp); 1744 1755 } while (host_err >=0 && cdp->err == nfs_ok); 1745 1756 if (host_err) 1746 1757 err = nfserrno(host_err);
+4
fs/ntfs/aops.c
··· 92 92 ofs = 0; 93 93 if (file_ofs < init_size) 94 94 ofs = init_size - file_ofs; 95 + local_irq_save(flags); 95 96 kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ); 96 97 memset(kaddr + bh_offset(bh) + ofs, 0, 97 98 bh->b_size - ofs); 98 99 kunmap_atomic(kaddr, KM_BIO_SRC_IRQ); 100 + local_irq_restore(flags); 99 101 flush_dcache_page(page); 100 102 } 101 103 } else { ··· 145 143 recs = PAGE_CACHE_SIZE / rec_size; 146 144 /* Should have been verified before we got here... */ 147 145 BUG_ON(!recs); 146 + local_irq_save(flags); 148 147 kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ); 149 148 for (i = 0; i < recs; i++) 150 149 post_read_mst_fixup((NTFS_RECORD*)(kaddr + 151 150 i * rec_size), rec_size); 152 151 kunmap_atomic(kaddr, KM_BIO_SRC_IRQ); 152 + local_irq_restore(flags); 153 153 flush_dcache_page(page); 154 154 if (likely(page_uptodate && !PageError(page))) 155 155 SetPageUptodate(page);
+1 -1
fs/ocfs2/ocfs2_fs.h
··· 587 587 588 588 if (index >= 0 && index < OCFS2_MAX_BACKUP_SUPERBLOCKS) { 589 589 offset <<= (2 * index); 590 - offset /= sb->s_blocksize; 590 + offset >>= sb->s_blocksize_bits; 591 591 return offset; 592 592 } 593 593
+16 -4
fs/proc/base.c
··· 371 371 372 372 if (task) { 373 373 task_lock(task); 374 - ns = task->nsproxy->mnt_ns; 375 - if (ns) 376 - get_mnt_ns(ns); 374 + if (task->nsproxy) { 375 + ns = task->nsproxy->mnt_ns; 376 + if (ns) 377 + get_mnt_ns(ns); 378 + } 377 379 task_unlock(task); 378 380 put_task_struct(task); 379 381 } ··· 2328 2326 { 2329 2327 struct dentry *dentry = filp->f_path.dentry; 2330 2328 struct inode *inode = dentry->d_inode; 2331 - struct task_struct *leader = get_proc_task(inode); 2329 + struct task_struct *leader = NULL; 2332 2330 struct task_struct *task; 2333 2331 int retval = -ENOENT; 2334 2332 ino_t ino; 2335 2333 int tid; 2336 2334 unsigned long pos = filp->f_pos; /* avoiding "long long" filp->f_pos */ 2337 2335 2336 + task = get_proc_task(inode); 2337 + if (!task) 2338 + goto out_no_task; 2339 + rcu_read_lock(); 2340 + if (pid_alive(task)) { 2341 + leader = task->group_leader; 2342 + get_task_struct(leader); 2343 + } 2344 + rcu_read_unlock(); 2345 + put_task_struct(task); 2338 2346 if (!leader) 2339 2347 goto out_no_task; 2340 2348 retval = 0;
+19 -1
fs/reiserfs/file.c
··· 48 48 } 49 49 50 50 mutex_lock(&inode->i_mutex); 51 + 52 + mutex_lock(&(REISERFS_I(inode)->i_mmap)); 53 + if (REISERFS_I(inode)->i_flags & i_ever_mapped) 54 + REISERFS_I(inode)->i_flags &= ~i_pack_on_close_mask; 55 + 51 56 reiserfs_write_lock(inode->i_sb); 52 57 /* freeing preallocation only involves relogging blocks that 53 58 * are already in the current transaction. preallocation gets ··· 105 100 err = reiserfs_truncate_file(inode, 0); 106 101 } 107 102 out: 103 + mutex_unlock(&(REISERFS_I(inode)->i_mmap)); 108 104 mutex_unlock(&inode->i_mutex); 109 105 reiserfs_write_unlock(inode->i_sb); 110 106 return err; 107 + } 108 + 109 + static int reiserfs_file_mmap(struct file *file, struct vm_area_struct *vma) 110 + { 111 + struct inode *inode; 112 + 113 + inode = file->f_path.dentry->d_inode; 114 + mutex_lock(&(REISERFS_I(inode)->i_mmap)); 115 + REISERFS_I(inode)->i_flags |= i_ever_mapped; 116 + mutex_unlock(&(REISERFS_I(inode)->i_mmap)); 117 + 118 + return generic_file_mmap(file, vma); 111 119 } 112 120 113 121 static void reiserfs_vfs_truncate_file(struct inode *inode) ··· 1545 1527 #ifdef CONFIG_COMPAT 1546 1528 .compat_ioctl = reiserfs_compat_ioctl, 1547 1529 #endif 1548 - .mmap = generic_file_mmap, 1530 + .mmap = reiserfs_file_mmap, 1549 1531 .open = generic_file_open, 1550 1532 .release = reiserfs_file_release, 1551 1533 .fsync = reiserfs_sync_file,
+2
fs/reiserfs/inode.c
··· 1125 1125 REISERFS_I(inode)->i_prealloc_count = 0; 1126 1126 REISERFS_I(inode)->i_trans_id = 0; 1127 1127 REISERFS_I(inode)->i_jl = NULL; 1128 + mutex_init(&(REISERFS_I(inode)->i_mmap)); 1128 1129 reiserfs_init_acl_access(inode); 1129 1130 reiserfs_init_acl_default(inode); 1130 1131 reiserfs_init_xattr_rwsem(inode); ··· 1833 1832 REISERFS_I(inode)->i_attrs = 1834 1833 REISERFS_I(dir)->i_attrs & REISERFS_INHERIT_MASK; 1835 1834 sd_attrs_to_i_attrs(REISERFS_I(inode)->i_attrs, inode); 1835 + mutex_init(&(REISERFS_I(inode)->i_mmap)); 1836 1836 reiserfs_init_acl_access(inode); 1837 1837 reiserfs_init_acl_default(inode); 1838 1838 reiserfs_init_xattr_rwsem(inode);
+30 -16
fs/ufs/balloc.c
··· 227 227 * We can come here from ufs_writepage or ufs_prepare_write, 228 228 * locked_page is argument of these functions, so we already lock it. 229 229 */ 230 - static void ufs_change_blocknr(struct inode *inode, unsigned int baseblk, 230 + static void ufs_change_blocknr(struct inode *inode, unsigned int beg, 231 231 unsigned int count, unsigned int oldb, 232 232 unsigned int newb, struct page *locked_page) 233 233 { 234 - unsigned int blk_per_page = 1 << (PAGE_CACHE_SHIFT - inode->i_blkbits); 235 - struct address_space *mapping = inode->i_mapping; 236 - pgoff_t index, cur_index = locked_page->index; 237 - unsigned int i, j; 234 + const unsigned mask = (1 << (PAGE_CACHE_SHIFT - inode->i_blkbits)) - 1; 235 + struct address_space * const mapping = inode->i_mapping; 236 + pgoff_t index, cur_index; 237 + unsigned end, pos, j; 238 238 struct page *page; 239 239 struct buffer_head *head, *bh; 240 240 241 241 UFSD("ENTER, ino %lu, count %u, oldb %u, newb %u\n", 242 242 inode->i_ino, count, oldb, newb); 243 243 244 + BUG_ON(!locked_page); 244 245 BUG_ON(!PageLocked(locked_page)); 245 246 246 - for (i = 0; i < count; i += blk_per_page) { 247 - index = (baseblk+i) >> (PAGE_CACHE_SHIFT - inode->i_blkbits); 247 + cur_index = locked_page->index; 248 + 249 + for (end = count + beg; beg < end; beg = (beg | mask) + 1) { 250 + index = beg >> (PAGE_CACHE_SHIFT - inode->i_blkbits); 248 251 249 252 if (likely(cur_index != index)) { 250 253 page = ufs_get_locked_page(mapping, index); ··· 256 253 } else 257 254 page = locked_page; 258 255 259 - j = i; 260 256 head = page_buffers(page); 261 257 bh = head; 258 + pos = beg & mask; 259 + for (j = 0; j < pos; ++j) 260 + bh = bh->b_this_page; 261 + j = 0; 262 262 do { 263 - if (likely(bh->b_blocknr == j + oldb && j < count)) { 264 - unmap_underlying_metadata(bh->b_bdev, 265 - bh->b_blocknr); 266 - bh->b_blocknr = newb + j++; 267 - mark_buffer_dirty(bh); 263 + if (buffer_mapped(bh)) { 264 + pos = bh->b_blocknr - oldb; 265 + if (pos < count) { 266 + UFSD(" change from %llu to %llu\n", 267 + (unsigned long long)pos + oldb, 268 + (unsigned long long)pos + newb); 269 + bh->b_blocknr = newb + pos; 270 + unmap_underlying_metadata(bh->b_bdev, 271 + bh->b_blocknr); 272 + mark_buffer_dirty(bh); 273 + ++j; 274 + } 268 275 } 269 276 270 277 bh = bh->b_this_page; 271 278 } while (bh != head); 272 279 273 - set_page_dirty(page); 280 + if (j) 281 + set_page_dirty(page); 274 282 275 283 if (likely(cur_index != index)) 276 284 ufs_put_locked_page(page); ··· 429 415 } 430 416 result = ufs_alloc_fragments (inode, cgno, goal, request, err); 431 417 if (result) { 418 + ufs_clear_frags(inode, result + oldcount, newcount - oldcount, 419 + locked_page != NULL); 432 420 ufs_change_blocknr(inode, fragment - oldcount, oldcount, tmp, 433 421 result, locked_page); 434 422 435 423 *p = cpu_to_fs32(sb, result); 436 424 *err = 0; 437 425 UFS_I(inode)->i_lastfrag = max_t(u32, UFS_I(inode)->i_lastfrag, fragment + count); 438 - ufs_clear_frags(inode, result + oldcount, newcount - oldcount, 439 - locked_page != NULL); 440 426 unlock_super(sb); 441 427 if (newcount < request) 442 428 ufs_free_fragments (inode, result + newcount, request - newcount);
+9 -5
fs/ufs/inode.c
··· 242 242 goal = tmp + uspi->s_fpb; 243 243 tmp = ufs_new_fragments (inode, p, fragment - blockoff, 244 244 goal, required + blockoff, 245 - err, locked_page); 245 + err, 246 + phys != NULL ? locked_page : NULL); 246 247 } 247 248 /* 248 249 * We will extend last allocated block ··· 251 250 else if (lastblock == block) { 252 251 tmp = ufs_new_fragments(inode, p, fragment - (blockoff - lastblockoff), 253 252 fs32_to_cpu(sb, *p), required + (blockoff - lastblockoff), 254 - err, locked_page); 253 + err, phys != NULL ? locked_page : NULL); 255 254 } else /* (lastblock > block) */ { 256 255 /* 257 256 * We will allocate new block before last allocated block ··· 262 261 goal = tmp + uspi->s_fpb; 263 262 } 264 263 tmp = ufs_new_fragments(inode, p, fragment - blockoff, 265 - goal, uspi->s_fpb, err, locked_page); 264 + goal, uspi->s_fpb, err, 265 + phys != NULL ? locked_page : NULL); 266 266 } 267 267 if (!tmp) { 268 268 if ((!blockoff && *p) || ··· 440 438 * it much more readable: 441 439 */ 442 440 #define GET_INODE_DATABLOCK(x) \ 443 - ufs_inode_getfrag(inode, x, fragment, 1, &err, &phys, &new, bh_result->b_page) 441 + ufs_inode_getfrag(inode, x, fragment, 1, &err, &phys, &new,\ 442 + bh_result->b_page) 444 443 #define GET_INODE_PTR(x) \ 445 - ufs_inode_getfrag(inode, x, fragment, uspi->s_fpb, &err, NULL, NULL, NULL) 444 + ufs_inode_getfrag(inode, x, fragment, uspi->s_fpb, &err, NULL, NULL,\ 445 + bh_result->b_page) 446 446 #define GET_INDIRECT_DATABLOCK(x) \ 447 447 ufs_inode_getblock(inode, bh, x, fragment, \ 448 448 &err, &phys, &new, bh_result->b_page)
+2 -2
fs/ufs/truncate.c
··· 109 109 tmp = fs32_to_cpu(sb, *p); 110 110 if (!tmp ) 111 111 ufs_panic (sb, "ufs_trunc_direct", "internal error"); 112 + frag2 -= frag1; 112 113 frag1 = ufs_fragnum (frag1); 113 - frag2 = ufs_fragnum (frag2); 114 114 115 - ufs_free_fragments (inode, tmp + frag1, frag2 - frag1); 115 + ufs_free_fragments(inode, tmp + frag1, frag2); 116 116 mark_inode_dirty(inode); 117 117 frag_to_free = tmp + frag1; 118 118
+11 -9
include/asm-alpha/dma-mapping.h
··· 41 41 #define dma_map_single(dev, va, size, dir) virt_to_phys(va) 42 42 #define dma_map_page(dev, page, off, size, dir) (page_to_pa(page) + off) 43 43 44 - #define dma_unmap_single(dev, addr, size, dir) do { } while (0) 45 - #define dma_unmap_page(dev, addr, size, dir) do { } while (0) 46 - #define dma_unmap_sg(dev, sg, nents, dir) do { } while (0) 44 + #define dma_unmap_single(dev, addr, size, dir) ((void)0) 45 + #define dma_unmap_page(dev, addr, size, dir) ((void)0) 46 + #define dma_unmap_sg(dev, sg, nents, dir) ((void)0) 47 47 48 48 #define dma_mapping_error(addr) (0) 49 49 ··· 55 55 56 56 int dma_set_mask(struct device *dev, u64 mask); 57 57 58 - #define dma_sync_single_for_cpu(dev, addr, size, dir) do { } while (0) 59 - #define dma_sync_single_for_device(dev, addr, size, dir) do { } while (0) 60 - #define dma_sync_single_range(dev, addr, off, size, dir) do { } while (0) 61 - #define dma_sync_sg_for_cpu(dev, sg, nents, dir) do { } while (0) 62 - #define dma_sync_sg_for_device(dev, sg, nents, dir) do { } while (0) 63 - #define dma_cache_sync(dev, va, size, dir) do { } while (0) 58 + #define dma_sync_single_for_cpu(dev, addr, size, dir) ((void)0) 59 + #define dma_sync_single_for_device(dev, addr, size, dir) ((void)0) 60 + #define dma_sync_single_range(dev, addr, off, size, dir) ((void)0) 61 + #define dma_sync_sg_for_cpu(dev, sg, nents, dir) ((void)0) 62 + #define dma_sync_sg_for_device(dev, sg, nents, dir) ((void)0) 63 + #define dma_cache_sync(dev, va, size, dir) ((void)0) 64 + #define dma_sync_single_range_for_cpu(dev, addr, offset, size, dir) ((void)0) 65 + #define dma_sync_single_range_for_device(dev, addr, offset, size, dir) ((void)0) 64 66 65 67 #define dma_get_cache_alignment() L1_CACHE_BYTES 66 68
+8 -3
include/asm-alpha/unistd.h
··· 342 342 #define __NR_io_cancel 402 343 343 #define __NR_exit_group 405 344 344 #define __NR_lookup_dcookie 406 345 - #define __NR_sys_epoll_create 407 346 - #define __NR_sys_epoll_ctl 408 347 - #define __NR_sys_epoll_wait 409 345 + #define __NR_epoll_create 407 346 + #define __NR_epoll_ctl 408 347 + #define __NR_epoll_wait 409 348 + /* Feb 2007: These three sys_epoll defines shouldn't be here but culling 349 + * them would break userspace apps ... we'll kill them off in 2010 :) */ 350 + #define __NR_sys_epoll_create __NR_epoll_create 351 + #define __NR_sys_epoll_ctl __NR_epoll_ctl 352 + #define __NR_sys_epoll_wait __NR_epoll_wait 348 353 #define __NR_remap_file_pages 410 349 354 #define __NR_set_tid_address 411 350 355 #define __NR_restart_syscall 412
+4 -4
include/asm-arm/arch-at91rm9200/at91_ecc.h
··· 14 14 #define AT91_ECC_H 15 15 16 16 #define AT91_ECC_CR (AT91_ECC + 0x00) /* Control register */ 17 - #define AT91_ECC_RST (1 << 0) /* Reset parity */ 17 + #define AT91_ECC_RST (1 << 0) /* Reset parity */ 18 18 19 19 #define AT91_ECC_MR (AT91_ECC + 0x04) /* Mode register */ 20 20 #define AT91_ECC_PAGESIZE (3 << 0) /* Page Size */ ··· 23 23 #define AT91_ECC_PAGESIZE_2112 (2) 24 24 #define AT91_ECC_PAGESIZE_4224 (3) 25 25 26 - #define AT91_ECC_SR (AT91_ECC + 0x08) /* Status register */ 26 + #define AT91_ECC_SR (AT91_ECC + 0x08) /* Status register */ 27 27 #define AT91_ECC_RECERR (1 << 0) /* Recoverable Error */ 28 28 #define AT91_ECC_ECCERR (1 << 1) /* ECC Single Bit Error */ 29 29 #define AT91_ECC_MULERR (1 << 2) /* Multiple Errors */ 30 30 31 - #define AT91_ECC_PR (AT91_ECC + 0x0c) /* Parity register */ 31 + #define AT91_ECC_PR (AT91_ECC + 0x0c) /* Parity register */ 32 32 #define AT91_ECC_BITADDR (0xf << 0) /* Bit Error Address */ 33 33 #define AT91_ECC_WORDADDR (0xfff << 4) /* Word Error Address */ 34 34 35 - #define AT91_ECC_NPR (AT91_ECC + 0x10) /* NParity register */ 35 + #define AT91_ECC_NPR (AT91_ECC + 0x10) /* NParity register */ 36 36 #define AT91_ECC_NPARITY (0xffff << 0) /* NParity */ 37 37 38 38 #endif
+1 -1
include/asm-arm/arch-at91rm9200/at91_pmc.h
··· 61 61 #define AT91_PMC_CSS_PLLA (2 << 0) 62 62 #define AT91_PMC_CSS_PLLB (3 << 0) 63 63 #define AT91_PMC_PRES (7 << 2) /* Master Clock Prescaler */ 64 - #define AT91_PMC_PRES_1 (0 << 2) 64 + #define AT91_PMC_PRES_1 (0 << 2) 65 65 #define AT91_PMC_PRES_2 (1 << 2) 66 66 #define AT91_PMC_PRES_4 (2 << 2) 67 67 #define AT91_PMC_PRES_8 (3 << 2)
+1 -1
include/asm-arm/arch-at91rm9200/at91_rstc.h
··· 17 17 #define AT91_RSTC_PROCRST (1 << 0) /* Processor Reset */ 18 18 #define AT91_RSTC_PERRST (1 << 2) /* Peripheral Reset */ 19 19 #define AT91_RSTC_EXTRST (1 << 3) /* External Reset */ 20 - #define AT01_RSTC_KEY (0xff << 24) /* KEY Password */ 20 + #define AT91_RSTC_KEY (0xff << 24) /* KEY Password */ 21 21 22 22 #define AT91_RSTC_SR (AT91_RSTC + 0x04) /* Reset Controller Status Register */ 23 23 #define AT91_RSTC_URSTS (1 << 0) /* User Reset Status */
+8 -8
include/asm-arm/arch-at91rm9200/at91_rtc.h
··· 21 21 #define AT91_RTC_UPDCAL (1 << 1) /* Update Request Calendar Register */ 22 22 #define AT91_RTC_TIMEVSEL (3 << 8) /* Time Event Selection */ 23 23 #define AT91_RTC_TIMEVSEL_MINUTE (0 << 8) 24 - #define AT91_RTC_TIMEVSEL_HOUR (1 << 8) 25 - #define AT91_RTC_TIMEVSEL_DAY24 (2 << 8) 26 - #define AT91_RTC_TIMEVSEL_DAY12 (3 << 8) 24 + #define AT91_RTC_TIMEVSEL_HOUR (1 << 8) 25 + #define AT91_RTC_TIMEVSEL_DAY24 (2 << 8) 26 + #define AT91_RTC_TIMEVSEL_DAY12 (3 << 8) 27 27 #define AT91_RTC_CALEVSEL (3 << 16) /* Calendar Event Selection */ 28 - #define AT91_RTC_CALEVSEL_WEEK (0 << 16) 29 - #define AT91_RTC_CALEVSEL_MONTH (1 << 16) 30 - #define AT91_RTC_CALEVSEL_YEAR (2 << 16) 28 + #define AT91_RTC_CALEVSEL_WEEK (0 << 16) 29 + #define AT91_RTC_CALEVSEL_MONTH (1 << 16) 30 + #define AT91_RTC_CALEVSEL_YEAR (2 << 16) 31 31 32 32 #define AT91_RTC_MR (AT91_RTC + 0x04) /* Mode Register */ 33 - #define AT91_RTC_HRMOD (1 << 0) /* 12/24 Hour Mode */ 33 + #define AT91_RTC_HRMOD (1 << 0) /* 12/24 Hour Mode */ 34 34 35 35 #define AT91_RTC_TIMR (AT91_RTC + 0x08) /* Time Register */ 36 36 #define AT91_RTC_SEC (0x7f << 0) /* Current Second */ 37 37 #define AT91_RTC_MIN (0x7f << 8) /* Current Minute */ 38 - #define AT91_RTC_HOUR (0x3f << 16) /* Current Hour */ 38 + #define AT91_RTC_HOUR (0x3f << 16) /* Current Hour */ 39 39 #define AT91_RTC_AMPM (1 << 22) /* Ante Meridiem Post Meridiem Indicator */ 40 40 41 41 #define AT91_RTC_CALR (AT91_RTC + 0x0c) /* Calendar Register */
+1 -1
include/asm-arm/arch-at91rm9200/at91rm9200.h
··· 274 274 #define AT91_PD19_TPK7 (1 << 19) /* B: ETM Trace Packet Port 7 */ 275 275 #define AT91_PD20_NPCS3 (1 << 20) /* A: SPI Peripheral Chip Select 3 */ 276 276 #define AT91_PD20_TPK8 (1 << 20) /* B: ETM Trace Packet Port 8 */ 277 - #define AT91_PD21_RTS0 (1 << 21) /* A: USART Ready To Send 0 */ 277 + #define AT91_PD21_RTS0 (1 << 21) /* A: USART Ready To Send 0 */ 278 278 #define AT91_PD21_TPK9 (1 << 21) /* B: ETM Trace Packet Port 9 */ 279 279 #define AT91_PD22_RTS1 (1 << 22) /* A: USART Ready To Send 1 */ 280 280 #define AT91_PD22_TPK10 (1 << 22) /* B: ETM Trace Packet Port 10 */
+1 -1
include/asm-arm/arch-at91rm9200/at91sam9260_matrix.h
··· 58 58 #define AT91_MATRIX_RCB1 (1 << 1) /* Remap Command for AHB Master 1 (ARM926EJ-S Data Master) */ 59 59 60 60 #define AT91_MATRIX_EBICSA (AT91_MATRIX + 0x11C) /* EBI Chip Select Assignment Register */ 61 - #define AT91_MATRIX_CS1A (1 << 1) /* Chip Select 1 Assignment */ 61 + #define AT91_MATRIX_CS1A (1 << 1) /* Chip Select 1 Assignment */ 62 62 #define AT91_MATRIX_CS1A_SMC (0 << 1) 63 63 #define AT91_MATRIX_CS1A_SDRAMC (1 << 1) 64 64 #define AT91_MATRIX_CS3A (1 << 3) /* Chip Select 3 Assignment */
+3 -3
include/asm-arm/arch-at91rm9200/at91sam9261_matrix.h
··· 15 15 16 16 #define AT91_MATRIX_MCFG (AT91_MATRIX + 0x00) /* Master Configuration Register */ 17 17 #define AT91_MATRIX_RCB0 (1 << 0) /* Remap Command for AHB Master 0 (ARM926EJ-S Instruction Master) */ 18 - #define AT01_MATRIX_RCB1 (1 << 1) /* Remap Command for AHB Master 1 (ARM926EJ-S Data Master) */ 18 + #define AT91_MATRIX_RCB1 (1 << 1) /* Remap Command for AHB Master 1 (ARM926EJ-S Data Master) */ 19 19 20 20 #define AT91_MATRIX_SCFG0 (AT91_MATRIX + 0x04) /* Slave Configuration Register 0 */ 21 21 #define AT91_MATRIX_SCFG1 (AT91_MATRIX + 0x08) /* Slave Configuration Register 1 */ ··· 43 43 44 44 #define AT91_MATRIX_EBICSA (AT91_MATRIX + 0x30) /* EBI Chip Select Assignment Register */ 45 45 #define AT91_MATRIX_CS1A (1 << 1) /* Chip Select 1 Assignment */ 46 - #define AT91_MATRIX_CS1A_SMC (0 << 1) 47 - #define AT91_MATRIX_CS1A_SDRAMC (1 << 1) 46 + #define AT91_MATRIX_CS1A_SMC (0 << 1) 47 + #define AT91_MATRIX_CS1A_SDRAMC (1 << 1) 48 48 #define AT91_MATRIX_CS3A (1 << 3) /* Chip Select 3 Assignment */ 49 49 #define AT91_MATRIX_CS3A_SMC (0 << 3) 50 50 #define AT91_MATRIX_CS3A_SMC_SMARTMEDIA (1 << 3)
+8 -8
include/asm-arm/arch-at91rm9200/at91sam926x_mc.h
··· 33 33 #define AT91_SDRAMC_NC_9 (1 << 0) 34 34 #define AT91_SDRAMC_NC_10 (2 << 0) 35 35 #define AT91_SDRAMC_NC_11 (3 << 0) 36 - #define AT91_SDRAMC_NR (3 << 2) /* Number of Row Bits */ 36 + #define AT91_SDRAMC_NR (3 << 2) /* Number of Row Bits */ 37 37 #define AT91_SDRAMC_NR_11 (0 << 2) 38 38 #define AT91_SDRAMC_NR_12 (1 << 2) 39 39 #define AT91_SDRAMC_NR_13 (2 << 2) 40 - #define AT91_SDRAMC_NB (1 << 4) /* Number of Banks */ 40 + #define AT91_SDRAMC_NB (1 << 4) /* Number of Banks */ 41 41 #define AT91_SDRAMC_NB_2 (0 << 4) 42 - #define AT91_SDRAMC_NB_4 (1 << 4) 43 - #define AT91_SDRAMC_CAS (3 << 5) /* CAS Latency */ 42 + #define AT91_SDRAMC_NB_4 (1 << 4) 43 + #define AT91_SDRAMC_CAS (3 << 5) /* CAS Latency */ 44 44 #define AT91_SDRAMC_CAS_1 (1 << 5) 45 45 #define AT91_SDRAMC_CAS_2 (2 << 5) 46 46 #define AT91_SDRAMC_CAS_3 (3 << 5) ··· 110 110 #define AT91_SMC_MODE(n) (AT91_SMC + 0x0c + ((n)*0x10)) /* Mode Register for CS n */ 111 111 #define AT91_SMC_READMODE (1 << 0) /* Read Mode */ 112 112 #define AT91_SMC_WRITEMODE (1 << 1) /* Write Mode */ 113 - #define AT91_SMC_EXNWMODE (3 << 5) /* NWAIT Mode */ 114 - #define AT91_SMC_EXNWMODE_DISABLE (0 << 5) 115 - #define AT91_SMC_EXNWMODE_FROZEN (2 << 5) 116 - #define AT91_SMC_EXNWMODE_READY (3 << 5) 113 + #define AT91_SMC_EXNWMODE (3 << 4) /* NWAIT Mode */ 114 + #define AT91_SMC_EXNWMODE_DISABLE (0 << 4) 115 + #define AT91_SMC_EXNWMODE_FROZEN (2 << 4) 116 + #define AT91_SMC_EXNWMODE_READY (3 << 4) 117 117 #define AT91_SMC_BAT (1 << 8) /* Byte Access Type */ 118 118 #define AT91_SMC_BAT_SELECT (0 << 8) 119 119 #define AT91_SMC_BAT_WRITE (1 << 8)
+2 -2
include/asm-arm/arch-s3c2410/regs-gpio.h
··· 52 52 /* general configuration options */ 53 53 54 54 #define S3C2410_GPIO_LEAVE (0xFFFFFFFF) 55 - #define S3C2410_GPIO_INPUT (0xFFFFFFF0) 55 + #define S3C2410_GPIO_INPUT (0xFFFFFFF0) /* not available on A */ 56 56 #define S3C2410_GPIO_OUTPUT (0xFFFFFFF1) 57 57 #define S3C2410_GPIO_IRQ (0xFFFFFFF2) /* not available for all */ 58 - #define S3C2410_GPIO_SFN2 (0xFFFFFFF2) /* not available on A */ 58 + #define S3C2410_GPIO_SFN2 (0xFFFFFFF2) /* bank A => addr/cs/nand */ 59 59 #define S3C2410_GPIO_SFN3 (0xFFFFFFF3) /* not available on A */ 60 60 61 61 /* register address for the GPIO registers.
+7 -7
include/asm-arm/arch-s3c2410/regs-mem.h
··· 133 133 #define S3C2410_BANKCON_SDRAM (0x3 << 15) 134 134 135 135 /* next bits only for EDO DRAM in 6,7 */ 136 - #define S3C2400_BANKCON_EDO_Trdc1 (0x00 << 4) 137 - #define S3C2400_BANKCON_EDO_Trdc2 (0x01 << 4) 138 - #define S3C2400_BANKCON_EDO_Trdc3 (0x02 << 4) 139 - #define S3C2400_BANKCON_EDO_Trdc4 (0x03 << 4) 136 + #define S3C2400_BANKCON_EDO_Trcd1 (0x00 << 4) 137 + #define S3C2400_BANKCON_EDO_Trcd2 (0x01 << 4) 138 + #define S3C2400_BANKCON_EDO_Trcd3 (0x02 << 4) 139 + #define S3C2400_BANKCON_EDO_Trcd4 (0x03 << 4) 140 140 141 141 /* CAS pulse width */ 142 142 #define S3C2400_BANKCON_EDO_PULSE1 (0x00 << 3) ··· 153 153 #define S3C2400_BANKCON_EDO_SCANb11 (0x03 << 0) 154 154 155 155 /* next bits only for SDRAM in 6,7 */ 156 - #define S3C2410_BANKCON_Trdc2 (0x00 << 2) 157 - #define S3C2410_BANKCON_Trdc3 (0x01 << 2) 158 - #define S3C2410_BANKCON_Trdc4 (0x02 << 2) 156 + #define S3C2410_BANKCON_Trcd2 (0x00 << 2) 157 + #define S3C2410_BANKCON_Trcd3 (0x01 << 2) 158 + #define S3C2410_BANKCON_Trcd4 (0x02 << 2) 159 159 160 160 /* control column address select */ 161 161 #define S3C2410_BANKCON_SCANb8 (0x00 << 0)
+3
include/asm-arm/fpstate.h
··· 35 35 */ 36 36 __u32 fpinst; 37 37 __u32 fpinst2; 38 + #ifdef CONFIG_SMP 39 + __u32 cpu; 40 + #endif 38 41 }; 39 42 40 43 union vfp_state {
+6
include/asm-frv/Kbuild
··· 1 1 include include/asm-generic/Kbuild.asm 2 + 3 + header-y += registers.h 4 + 5 + unifdef-y += termios.h 6 + unifdef-y += ptrace.h 7 + unifdef-y += page.h
+2 -2
include/asm-frv/page.h
··· 76 76 77 77 #endif /* __ASSEMBLY__ */ 78 78 79 - #endif /* __KERNEL__ */ 80 - 81 79 #ifdef CONFIG_CONTIGUOUS_PAGE_ALLOC 82 80 #define WANT_PAGE_VIRTUAL 1 83 81 #endif 84 82 85 83 #include <asm-generic/memory_model.h> 86 84 #include <asm-generic/page.h> 85 + 86 + #endif /* __KERNEL__ */ 87 87 88 88 #endif /* _ASM_PAGE_H */
+4
include/asm-frv/ptrace.h
··· 12 12 #define _ASM_PTRACE_H 13 13 14 14 #include <asm/registers.h> 15 + #ifdef __KERNEL__ 15 16 #include <asm/irq_regs.h> 16 17 17 18 #define in_syscall(regs) (((regs)->tbr & TBR_TT) == TBR_TT_TRAP0) 19 + #endif 18 20 19 21 20 22 #define PT_PSR 0 ··· 62 60 #define PTRACE_GETFDPIC_EXEC 0 /* [addr] request the executable loadmap */ 63 61 #define PTRACE_GETFDPIC_INTERP 1 /* [addr] request the interpreter loadmap */ 64 62 63 + #ifdef __KERNEL__ 65 64 #ifndef __ASSEMBLY__ 66 65 67 66 /* ··· 77 74 extern unsigned long user_stack(const struct pt_regs *); 78 75 extern void show_regs(struct pt_regs *); 79 76 #define profile_pc(regs) ((regs)->pc) 77 + #endif 80 78 81 79 #endif /* !__ASSEMBLY__ */ 82 80 #endif /* _ASM_PTRACE_H */
+2
include/asm-frv/termios.h
··· 69 69 #define N_SYNC_PPP 14 70 70 #define N_HCI 15 /* Bluetooth HCI UART */ 71 71 72 + #ifdef __KERNEL__ 72 73 #include <asm-generic/termios.h> 74 + #endif 73 75 74 76 #endif /* _ASM_TERMIOS_H */
+2 -2
include/asm-generic/libata-portmap.h
··· 3 3 4 4 #define ATA_PRIMARY_CMD 0x1F0 5 5 #define ATA_PRIMARY_CTL 0x3F6 6 - #define ATA_PRIMARY_IRQ 14 6 + #define ATA_PRIMARY_IRQ(dev) 14 7 7 8 8 #define ATA_SECONDARY_CMD 0x170 9 9 #define ATA_SECONDARY_CTL 0x376 10 - #define ATA_SECONDARY_IRQ 15 10 + #define ATA_SECONDARY_IRQ(dev) 15 11 11 12 12 #endif
+3 -48
include/asm-i386/elf.h
··· 143 143 # define VDSO_PRELINK 0 144 144 #endif 145 145 146 - #define VDSO_COMPAT_SYM(x) \ 147 - (VDSO_COMPAT_BASE + (unsigned long)(x) - VDSO_PRELINK) 148 - 149 146 #define VDSO_SYM(x) \ 150 - (VDSO_BASE + (unsigned long)(x) - VDSO_PRELINK) 147 + (VDSO_COMPAT_BASE + (unsigned long)(x) - VDSO_PRELINK) 151 148 152 149 #define VDSO_HIGH_EHDR ((const struct elfhdr *) VDSO_HIGH_BASE) 153 150 #define VDSO_EHDR ((const struct elfhdr *) VDSO_COMPAT_BASE) ··· 153 156 154 157 #define VDSO_ENTRY VDSO_SYM(&__kernel_vsyscall) 155 158 159 + #ifndef CONFIG_COMPAT_VDSO 156 160 #define ARCH_HAS_SETUP_ADDITIONAL_PAGES 157 161 struct linux_binprm; 158 162 extern int arch_setup_additional_pages(struct linux_binprm *bprm, 159 163 int executable_stack); 164 + #endif 160 165 161 166 extern unsigned int vdso_enabled; 162 167 ··· 166 167 do if (vdso_enabled) { \ 167 168 NEW_AUX_ENT(AT_SYSINFO, VDSO_ENTRY); \ 168 169 NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_COMPAT_BASE); \ 169 - } while (0) 170 - 171 - /* 172 - * These macros parameterize elf_core_dump in fs/binfmt_elf.c to write out 173 - * extra segments containing the vsyscall DSO contents. Dumping its 174 - * contents makes post-mortem fully interpretable later without matching up 175 - * the same kernel and hardware config to see what PC values meant. 176 - * Dumping its extra ELF program headers includes all the other information 177 - * a debugger needs to easily find how the vsyscall DSO was being used. 178 - */ 179 - #define ELF_CORE_EXTRA_PHDRS (VDSO_HIGH_EHDR->e_phnum) 180 - #define ELF_CORE_WRITE_EXTRA_PHDRS \ 181 - do { \ 182 - const struct elf_phdr *const vsyscall_phdrs = \ 183 - (const struct elf_phdr *) (VDSO_HIGH_BASE \ 184 - + VDSO_HIGH_EHDR->e_phoff); \ 185 - int i; \ 186 - Elf32_Off ofs = 0; \ 187 - for (i = 0; i < VDSO_HIGH_EHDR->e_phnum; ++i) { \ 188 - struct elf_phdr phdr = vsyscall_phdrs[i]; \ 189 - if (phdr.p_type == PT_LOAD) { \ 190 - BUG_ON(ofs != 0); \ 191 - ofs = phdr.p_offset = offset; \ 192 - phdr.p_memsz = PAGE_ALIGN(phdr.p_memsz); \ 193 - phdr.p_filesz = phdr.p_memsz; \ 194 - offset += phdr.p_filesz; \ 195 - } \ 196 - else \ 197 - phdr.p_offset += ofs; \ 198 - phdr.p_paddr = 0; /* match other core phdrs */ \ 199 - DUMP_WRITE(&phdr, sizeof(phdr)); \ 200 - } \ 201 - } while (0) 202 - #define ELF_CORE_WRITE_EXTRA_DATA \ 203 - do { \ 204 - const struct elf_phdr *const vsyscall_phdrs = \ 205 - (const struct elf_phdr *) (VDSO_HIGH_BASE \ 206 - + VDSO_HIGH_EHDR->e_phoff); \ 207 - int i; \ 208 - for (i = 0; i < VDSO_HIGH_EHDR->e_phnum; ++i) { \ 209 - if (vsyscall_phdrs[i].p_type == PT_LOAD) \ 210 - DUMP_WRITE((void *) vsyscall_phdrs[i].p_vaddr, \ 211 - PAGE_ALIGN(vsyscall_phdrs[i].p_memsz)); \ 212 - } \ 213 170 } while (0) 214 171 215 172 #endif
+2
include/asm-i386/fixmap.h
··· 23 23 extern unsigned long __FIXADDR_TOP; 24 24 #else 25 25 #define __FIXADDR_TOP 0xfffff000 26 + #define FIXADDR_USER_START __fix_to_virt(FIX_VDSO) 27 + #define FIXADDR_USER_END __fix_to_virt(FIX_VDSO - 1) 26 28 #endif 27 29 28 30 #ifndef __ASSEMBLY__
+2
include/asm-i386/page.h
··· 143 143 #include <asm-generic/memory_model.h> 144 144 #include <asm-generic/page.h> 145 145 146 + #ifndef CONFIG_COMPAT_VDSO 146 147 #define __HAVE_ARCH_GATE_AREA 1 148 + #endif 147 149 #endif /* __KERNEL__ */ 148 150 149 151 #endif /* _I386_PAGE_H */
+3 -3
include/asm-ia64/checksum.h
··· 72 72 73 73 #define _HAVE_ARCH_IPV6_CSUM 1 74 74 struct in6_addr; 75 - extern unsigned short int csum_ipv6_magic(struct in6_addr *saddr, 76 - struct in6_addr *daddr, __u32 len, unsigned short proto, 77 - unsigned int csum); 75 + extern __sum16 csum_ipv6_magic(const struct in6_addr *saddr, 76 + const struct in6_addr *daddr, __u32 len, unsigned short proto, 77 + __wsum csum); 78 78 79 79 #endif /* _ASM_IA64_CHECKSUM_H */
+6
include/asm-ia64/pci.h
··· 167 167 168 168 #define pcibios_scan_all_fns(a, b) 0 169 169 170 + #define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ 171 + static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 172 + { 173 + return channel ? 15 : 14; 174 + } 175 + 170 176 #endif /* _ASM_IA64_PCI_H */
+1
include/asm-m68k/uaccess.h
··· 7 7 #include <linux/compiler.h> 8 8 #include <linux/errno.h> 9 9 #include <linux/types.h> 10 + #include <linux/sched.h> 10 11 #include <asm/segment.h> 11 12 12 13 #define VERIFY_READ 0
+2 -1
include/asm-mips/checksum.h
··· 159 159 #endif 160 160 " .set pop" 161 161 : "=r" (sum) 162 - : "0" (daddr), "r"(saddr), 162 + : "0" ((__force unsigned long)daddr), 163 + "r" ((__force unsigned long)saddr), 163 164 #ifdef __MIPSEL__ 164 165 "r" ((proto + len) << 8), 165 166 #else
+1 -1
include/asm-mips/hazards.h
··· 157 157 * processors. 158 158 */ 159 159 ASMMACRO(mtc0_tlbw_hazard, 160 - nop 160 + nop; nop 161 161 ) 162 162 ASMMACRO(tlbw_use_hazard, 163 163 nop; nop; nop
+22
include/asm-mips/irqflags.h
··· 15 15 16 16 #include <asm/hazards.h> 17 17 18 + /* 19 + * CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY does prompt replay of deferred IPIs, 20 + * at the cost of branch and call overhead on each local_irq_restore() 21 + */ 22 + 23 + #ifdef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY 24 + 25 + extern void smtc_ipi_replay(void); 26 + 27 + #define irq_restore_epilog(flags) \ 28 + do { \ 29 + if (!(flags & 0x0400)) \ 30 + smtc_ipi_replay(); \ 31 + } while (0) 32 + 33 + #else 34 + 35 + #define irq_restore_epilog(ignore) do { } while (0) 36 + 37 + #endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */ 38 + 18 39 __asm__ ( 19 40 " .macro raw_local_irq_enable \n" 20 41 " .set push \n" ··· 214 193 : "=r" (__tmp1) \ 215 194 : "0" (flags) \ 216 195 : "memory"); \ 196 + irq_restore_epilog(flags); \ 217 197 } while(0) 218 198 219 199 static inline int raw_irqs_disabled_flags(unsigned long flags)
-10
include/asm-mips/pgtable.h
··· 69 69 #define ZERO_PAGE(vaddr) \ 70 70 (virt_to_page((void *)(empty_zero_page + (((unsigned long)(vaddr)) & zero_page_mask)))) 71 71 72 - #define __HAVE_ARCH_MOVE_PTE 73 - #define move_pte(pte, prot, old_addr, new_addr) \ 74 - ({ \ 75 - pte_t newpte = (pte); \ 76 - if (pte_present(pte) && pfn_valid(pte_pfn(pte)) && \ 77 - pte_page(pte) == ZERO_PAGE(old_addr)) \ 78 - newpte = mk_pte(ZERO_PAGE(new_addr), (prot)); \ 79 - newpte; \ 80 - }) 81 - 82 72 extern void paging_init(void); 83 73 84 74 /*
+2
include/asm-mips/thread_info.h
··· 118 118 #define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */ 119 119 #define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */ 120 120 #define TIF_MEMDIE 18 121 + #define TIF_FREEZE 19 121 122 #define TIF_SYSCALL_TRACE 31 /* syscall trace active */ 122 123 123 124 #define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE) ··· 130 129 #define _TIF_RESTORE_SIGMASK (1<<TIF_RESTORE_SIGMASK) 131 130 #define _TIF_USEDFPU (1<<TIF_USEDFPU) 132 131 #define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG) 132 + #define _TIF_FREEZE (1<<TIF_FREEZE) 133 133 134 134 /* work to do on interrupt/exception return */ 135 135 #define _TIF_WORK_MASK (0x0000ffef & ~_TIF_SECCOMP)
+6 -6
include/asm-powerpc/dma-mapping.h
··· 37 37 */ 38 38 39 39 #define __dma_alloc_coherent(gfp, size, handle) NULL 40 - #define __dma_free_coherent(size, addr) do { } while (0) 41 - #define __dma_sync(addr, size, rw) do { } while (0) 42 - #define __dma_sync_page(pg, off, sz, rw) do { } while (0) 40 + #define __dma_free_coherent(size, addr) ((void)0) 41 + #define __dma_sync(addr, size, rw) ((void)0) 42 + #define __dma_sync_page(pg, off, sz, rw) ((void)0) 43 43 44 44 #endif /* ! CONFIG_NOT_COHERENT_CACHE */ 45 45 ··· 251 251 } 252 252 253 253 /* We do nothing. */ 254 - #define dma_unmap_single(dev, addr, size, dir) do { } while (0) 254 + #define dma_unmap_single(dev, addr, size, dir) ((void)0) 255 255 256 256 static inline dma_addr_t 257 257 dma_map_page(struct device *dev, struct page *page, ··· 266 266 } 267 267 268 268 /* We do nothing. */ 269 - #define dma_unmap_page(dev, handle, size, dir) do { } while (0) 269 + #define dma_unmap_page(dev, handle, size, dir) ((void)0) 270 270 271 271 static inline int 272 272 dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, ··· 286 286 } 287 287 288 288 /* We don't do anything here. */ 289 - #define dma_unmap_sg(dev, sg, nents, dir) do { } while (0) 289 + #define dma_unmap_sg(dev, sg, nents, dir) ((void)0) 290 290 291 291 #endif /* CONFIG_PPC64 */ 292 292
+6 -1
include/asm-powerpc/kprobes.h
··· 44 44 #define IS_TDI(instr) (((instr) & 0xfc000000) == 0x08000000) 45 45 #define IS_TWI(instr) (((instr) & 0xfc000000) == 0x0c000000) 46 46 47 + #ifdef CONFIG_PPC64 47 48 /* 48 49 * 64bit powerpc uses function descriptors. 49 50 * Handle cases where: ··· 68 67 } 69 68 70 69 #define JPROBE_ENTRY(pentry) (kprobe_opcode_t *)((func_descr_t *)pentry) 71 - 72 70 #define is_trap(instr) (IS_TW(instr) || IS_TD(instr) || \ 73 71 IS_TWI(instr) || IS_TDI(instr)) 72 + #else 73 + /* Use stock kprobe_lookup_name since ppc32 doesn't use function descriptors */ 74 + #define JPROBE_ENTRY(pentry) (kprobe_opcode_t *)(pentry) 75 + #define is_trap(instr) (IS_TW(instr) || IS_TWI(instr)) 76 + #endif 74 77 75 78 #define ARCH_SUPPORTS_KRETPROBES 76 79 #define ARCH_INACTIVE_KPROBE_COUNT 1
+12
include/asm-powerpc/libata-portmap.h
··· 1 + #ifndef __ASM_POWERPC_LIBATA_PORTMAP_H 2 + #define __ASM_POWERPC_LIBATA_PORTMAP_H 3 + 4 + #define ATA_PRIMARY_CMD 0x1F0 5 + #define ATA_PRIMARY_CTL 0x3F6 6 + #define ATA_PRIMARY_IRQ(dev) pci_get_legacy_ide_irq(dev, 0) 7 + 8 + #define ATA_SECONDARY_CMD 0x170 9 + #define ATA_SECONDARY_CTL 0x376 10 + #define ATA_SECONDARY_IRQ(dev) pci_get_legacy_ide_irq(dev, 1) 11 + 12 + #endif
+1
include/asm-powerpc/sstep.h
··· 21 21 */ 22 22 #define IS_MTMSRD(instr) (((instr) & 0xfc0007be) == 0x7c000124) 23 23 #define IS_RFID(instr) (((instr) & 0xfc0007fe) == 0x4c000024) 24 + #define IS_RFI(instr) (((instr) & 0xfc0007fe) == 0x4c000064) 24 25 25 26 /* Emulate instructions that cause a transfer of control. */ 26 27 extern int emulate_step(struct pt_regs *regs, unsigned int instr);
+1 -1
include/asm-sparc/checksum.h
··· 151 151 "xnor\t%%g0, %0, %0" 152 152 : "=r" (sum), "=&r" (iph) 153 153 : "r" (ihl), "1" (iph) 154 - : "g2", "g3", "g4", "cc"); 154 + : "g2", "g3", "g4", "cc", "memory"); 155 155 return sum; 156 156 } 157 157
+9
include/asm-um/pgtable.h
··· 408 408 409 409 #include <asm-generic/pgtable-nopud.h> 410 410 411 + #ifdef CONFIG_HIGHMEM 412 + /* Clear a kernel PTE and flush it from the TLB */ 413 + #define kpte_clear_flush(ptep, vaddr) \ 414 + do { \ 415 + pte_clear(&init_mm, vaddr, ptep); \ 416 + __flush_tlb_one(vaddr); \ 417 + } while (0) 418 + #endif 419 + 411 420 #endif 412 421 #endif 413 422
+3
include/asm-x86_64/dma-mapping.h
··· 63 63 return (dma_addr == bad_dma_address); 64 64 } 65 65 66 + #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f) 67 + #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h) 68 + 66 69 extern void *dma_alloc_coherent(struct device *dev, size_t size, 67 70 dma_addr_t *dma_handle, gfp_t gfp); 68 71 extern void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
+1 -1
include/asm-x86_64/uaccess.h
··· 157 157 case 1: __put_user_asm(x,ptr,retval,"b","b","iq",-EFAULT); break;\ 158 158 case 2: __put_user_asm(x,ptr,retval,"w","w","ir",-EFAULT); break;\ 159 159 case 4: __put_user_asm(x,ptr,retval,"l","k","ir",-EFAULT); break;\ 160 - case 8: __put_user_asm(x,ptr,retval,"q","","ir",-EFAULT); break;\ 160 + case 8: __put_user_asm(x,ptr,retval,"q","","Zr",-EFAULT); break;\ 161 161 default: __put_user_bad(); \ 162 162 } \ 163 163 } while (0)
+2 -3
include/linux/Kbuild
··· 69 69 header-y += i2c-dev.h 70 70 header-y += i8k.h 71 71 header-y += icmp.h 72 - header-y += if_addr.h 73 72 header-y += if_arcnet.h 74 73 header-y += if_arp.h 75 74 header-y += if_bonding.h ··· 78 79 header-y += if.h 79 80 header-y += if_hippi.h 80 81 header-y += if_infiniband.h 81 - header-y += if_link.h 82 82 header-y += if_packet.h 83 83 header-y += if_plip.h 84 84 header-y += if_ppp.h ··· 127 129 header-y += ppdev.h 128 130 header-y += prctl.h 129 131 header-y += ps2esdi.h 130 - header-y += qic117.h 131 132 header-y += qnxtypes.h 132 133 header-y += quotaio_v1.h 133 134 header-y += quotaio_v2.h ··· 211 214 unifdef-y += i2c.h 212 215 unifdef-y += i2o-dev.h 213 216 unifdef-y += icmpv6.h 217 + unifdef-y += if_addr.h 214 218 unifdef-y += if_bridge.h 215 219 unifdef-y += if_ec.h 216 220 unifdef-y += if_eql.h ··· 219 221 unifdef-y += if_fddi.h 220 222 unifdef-y += if_frad.h 221 223 unifdef-y += if_ltalk.h 224 + unifdef-y += if_link.h 222 225 unifdef-y += if_pppox.h 223 226 unifdef-y += if_shaper.h 224 227 unifdef-y += if_tr.h
+2 -4
include/linux/bitops.h
··· 31 31 return sizeof(w) == 4 ? hweight32(w) : hweight64(w); 32 32 } 33 33 34 - /* 34 + /** 35 35 * rol32 - rotate a 32-bit value left 36 - * 37 36 * @word: value to rotate 38 37 * @shift: bits to roll 39 38 */ ··· 41 42 return (word << shift) | (word >> (32 - shift)); 42 43 } 43 44 44 - /* 45 + /** 45 46 * ror32 - rotate a 32-bit value right 46 - * 47 47 * @word: value to rotate 48 48 * @shift: bits to roll 49 49 */
+4
include/linux/cdev.h
··· 6 6 #include <linux/kdev_t.h> 7 7 #include <linux/list.h> 8 8 9 + struct file_operations; 10 + struct inode; 11 + struct module; 12 + 9 13 struct cdev { 10 14 struct kobject kobj; 11 15 struct module *owner;
+1 -1
include/linux/efi.h
··· 301 301 extern void efi_initialize_iomem_resources(struct resource *code_resource, 302 302 struct resource *data_resource); 303 303 extern unsigned long efi_get_time(void); 304 - extern int __init efi_set_rtc_mmss(unsigned long nowtime); 304 + extern int efi_set_rtc_mmss(unsigned long nowtime); 305 305 extern int is_available_memory(efi_memory_desc_t * md); 306 306 extern struct efi_memory_map memmap; 307 307
+5 -3
include/linux/hdreg.h
··· 60 60 #define TAG_MASK 0xf8 61 61 #endif /* __KERNEL__ */ 62 62 63 + #include <linux/types.h> 64 + 63 65 /* 64 66 * Command Header sizes for IOCTL commands 65 67 */ 66 68 67 - #define HDIO_DRIVE_CMD_HDR_SIZE (4 * sizeof(u8)) 68 - #define HDIO_DRIVE_HOB_HDR_SIZE (8 * sizeof(u8)) 69 - #define HDIO_DRIVE_TASK_HDR_SIZE (8 * sizeof(u8)) 69 + #define HDIO_DRIVE_CMD_HDR_SIZE (4 * sizeof(__u8)) 70 + #define HDIO_DRIVE_HOB_HDR_SIZE (8 * sizeof(__u8)) 71 + #define HDIO_DRIVE_TASK_HDR_SIZE (8 * sizeof(__u8)) 70 72 71 73 #define IDE_DRIVE_TASK_INVALID -1 72 74 #define IDE_DRIVE_TASK_NO_DATA 0
-1
include/linux/hid.h
··· 438 438 struct hid_usage *, __s32); 439 439 void (*hiddev_report_event) (struct hid_device *, struct hid_report *); 440 440 #ifdef CONFIG_USB_HIDINPUT_POWERBOOK 441 - unsigned int pb_fnmode; 442 441 unsigned long pb_pressed_fn[NBITS(KEY_MAX)]; 443 442 unsigned long pb_pressed_numlock[NBITS(KEY_MAX)]; 444 443 #endif
+3 -2
include/linux/i2o-dev.h
··· 24 24 #define MAX_I2O_CONTROLLERS 32 25 25 26 26 #include <linux/ioctl.h> 27 + #include <linux/types.h> 27 28 28 29 /* 29 30 * I2O Control IOCTLs and structures 30 31 */ 31 32 #define I2O_MAGIC_NUMBER 'i' 32 - #define I2OGETIOPS _IOR(I2O_MAGIC_NUMBER,0,u8[MAX_I2O_CONTROLLERS]) 33 + #define I2OGETIOPS _IOR(I2O_MAGIC_NUMBER,0,__u8[MAX_I2O_CONTROLLERS]) 33 34 #define I2OHRTGET _IOWR(I2O_MAGIC_NUMBER,1,struct i2o_cmd_hrtlct) 34 35 #define I2OLCTGET _IOWR(I2O_MAGIC_NUMBER,2,struct i2o_cmd_hrtlct) 35 36 #define I2OPARMSET _IOWR(I2O_MAGIC_NUMBER,3,struct i2o_cmd_psetget) ··· 38 37 #define I2OSWDL _IOWR(I2O_MAGIC_NUMBER,5,struct i2o_sw_xfer) 39 38 #define I2OSWUL _IOWR(I2O_MAGIC_NUMBER,6,struct i2o_sw_xfer) 40 39 #define I2OSWDEL _IOWR(I2O_MAGIC_NUMBER,7,struct i2o_sw_xfer) 41 - #define I2OVALIDATE _IOR(I2O_MAGIC_NUMBER,8,u32) 40 + #define I2OVALIDATE _IOR(I2O_MAGIC_NUMBER,8,__u32) 42 41 #define I2OHTML _IOWR(I2O_MAGIC_NUMBER,9,struct i2o_html) 43 42 #define I2OEVTREG _IOW(I2O_MAGIC_NUMBER,10,struct i2o_evt_id) 44 43 #define I2OEVTGET _IOR(I2O_MAGIC_NUMBER,11,struct i2o_evt_info)
+2
include/linux/if_tunnel.h
··· 1 1 #ifndef _IF_TUNNEL_H_ 2 2 #define _IF_TUNNEL_H_ 3 3 4 + #include <linux/types.h> 5 + 4 6 #define SIOCGETTUNNEL (SIOCDEVPRIVATE + 0) 5 7 #define SIOCADDTUNNEL (SIOCDEVPRIVATE + 1) 6 8 #define SIOCDELTUNNEL (SIOCDEVPRIVATE + 2)
+1
include/linux/kvm.h
··· 46 46 KVM_EXIT_HLT = 5, 47 47 KVM_EXIT_MMIO = 6, 48 48 KVM_EXIT_IRQ_WINDOW_OPEN = 7, 49 + KVM_EXIT_SHUTDOWN = 8, 49 50 }; 50 51 51 52 /* for KVM_RUN */
+7 -2
include/linux/libata.h
··· 177 177 * Register FIS clearing BSY */ 178 178 ATA_FLAG_DEBUGMSG = (1 << 13), 179 179 ATA_FLAG_SETXFER_POLLING= (1 << 14), /* use polling for SETXFER */ 180 + ATA_FLAG_IGN_SIMPLEX = (1 << 15), /* ignore SIMPLEX */ 180 181 181 182 /* The following flag belongs to ap->pflags but is kept in 182 183 * ap->flags because it's referenced in many LLDs and will be ··· 613 612 void (*dev_select)(struct ata_port *ap, unsigned int device); 614 613 615 614 void (*phy_reset) (struct ata_port *ap); /* obsolete */ 616 - void (*set_mode) (struct ata_port *ap); 615 + int (*set_mode) (struct ata_port *ap, struct ata_device **r_failed_dev); 617 616 618 617 void (*post_set_mode) (struct ata_port *ap); 619 618 620 - int (*check_atapi_dma) (struct ata_queued_cmd *qc); 619 + int (*check_atapi_dma) (struct ata_queued_cmd *qc); 621 620 622 621 void (*bmdma_setup) (struct ata_queued_cmd *qc); 623 622 void (*bmdma_start) (struct ata_queued_cmd *qc); ··· 1054 1053 /** 1055 1054 * ata_busy_wait - Wait for a port status register 1056 1055 * @ap: Port to wait for. 1056 + * @bits: bits that must be clear 1057 + * @max: number of 10uS waits to perform 1057 1058 * 1058 1059 * Waits up to max*10 microseconds for the selected bits in the port's 1059 1060 * status register to be cleared. ··· 1152 1149 qc->cursect = qc->cursg = qc->cursg_ofs = 0; 1153 1150 qc->nsect = 0; 1154 1151 qc->nbytes = qc->curbytes = 0; 1152 + qc->n_elem = 0; 1155 1153 qc->err_mask = 0; 1154 + qc->pad_len = 0; 1156 1155 1157 1156 ata_tf_init(qc->dev, &qc->tf); 1158 1157
+5 -5
include/linux/list.h
··· 227 227 INIT_LIST_HEAD(old); 228 228 } 229 229 230 - /* 230 + /** 231 231 * list_replace_rcu - replace old entry by new one 232 232 * @old : the element to be replaced 233 233 * @new : the new element to insert 234 234 * 235 - * The old entry will be replaced with the new entry atomically. 236 - * Note: 'old' should not be empty. 235 + * The @old entry will be replaced with the @new entry atomically. 236 + * Note: @old should not be empty. 237 237 */ 238 238 static inline void list_replace_rcu(struct list_head *old, 239 239 struct list_head *new) ··· 680 680 } 681 681 } 682 682 683 - /* 683 + /** 684 684 * hlist_replace_rcu - replace old entry by new one 685 685 * @old : the element to be replaced 686 686 * @new : the new element to insert 687 687 * 688 - * The old entry will be replaced with the new entry atomically. 688 + * The @old entry will be replaced with the @new entry atomically. 689 689 */ 690 690 static inline void hlist_replace_rcu(struct hlist_node *old, 691 691 struct hlist_node *new)
+1
include/linux/mm.h
··· 168 168 #define VM_NONLINEAR 0x00800000 /* Is non-linear (remap_file_pages) */ 169 169 #define VM_MAPPED_COPY 0x01000000 /* T if mapped copy of data (nommu mmap) */ 170 170 #define VM_INSERTPAGE 0x02000000 /* The vma has had "vm_insert_page()" done on it */ 171 + #define VM_ALWAYSDUMP 0x04000000 /* Always include in core dumps */ 171 172 172 173 #ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ 173 174 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
-146
include/linux/mtio.h
··· 10 10 11 11 #include <linux/types.h> 12 12 #include <linux/ioctl.h> 13 - #include <linux/qic117.h> 14 13 15 14 /* 16 15 * Structures and definitions for mag tape io control commands ··· 115 116 #define MT_ISFTAPE_UNKNOWN 0x800000 /* obsolete */ 116 117 #define MT_ISFTAPE_FLAG 0x800000 117 118 118 - struct mt_tape_info { 119 - long t_type; /* device type id (mt_type) */ 120 - char *t_name; /* descriptive name */ 121 - }; 122 - 123 - #define MT_TAPE_INFO { \ 124 - {MT_ISUNKNOWN, "Unknown type of tape device"}, \ 125 - {MT_ISQIC02, "Generic QIC-02 tape streamer"}, \ 126 - {MT_ISWT5150, "Wangtek 5150, QIC-150"}, \ 127 - {MT_ISARCHIVE_5945L2, "Archive 5945L-2"}, \ 128 - {MT_ISCMSJ500, "CMS Jumbo 500"}, \ 129 - {MT_ISTDC3610, "Tandberg TDC 3610, QIC-24"}, \ 130 - {MT_ISARCHIVE_VP60I, "Archive VP60i, QIC-02"}, \ 131 - {MT_ISARCHIVE_2150L, "Archive Viper 2150L"}, \ 132 - {MT_ISARCHIVE_2060L, "Archive Viper 2060L"}, \ 133 - {MT_ISARCHIVESC499, "Archive SC-499 QIC-36 controller"}, \ 134 - {MT_ISQIC02_ALL_FEATURES, "Generic QIC-02 tape, all features"}, \ 135 - {MT_ISWT5099EEN24, "Wangtek 5099-een24, 60MB"}, \ 136 - {MT_ISTEAC_MT2ST, "Teac MT-2ST 155mb data cassette drive"}, \ 137 - {MT_ISEVEREX_FT40A, "Everex FT40A, QIC-40"}, \ 138 - {MT_ISONSTREAM_SC, "OnStream SC-, DI-, DP-, or USB tape drive"}, \ 139 - {MT_ISSCSI1, "Generic SCSI-1 tape"}, \ 140 - {MT_ISSCSI2, "Generic SCSI-2 tape"}, \ 141 - {0, NULL} \ 142 - } 143 - 144 119 145 120 /* structure for MTIOCPOS - mag tape get position command */ 146 121 ··· 123 150 }; 124 151 125 152 126 - /* structure for MTIOCVOLINFO, query information about the volume 127 - * currently positioned at (zftape) 128 - */ 129 - struct mtvolinfo { 130 - unsigned int mt_volno; /* vol-number */ 131 - unsigned int mt_blksz; /* blocksize used when recording */ 132 - unsigned int mt_rawsize; /* raw tape space consumed, in kb */ 133 - unsigned int mt_size; /* volume size after decompression, in kb */ 134 - unsigned int mt_cmpr:1; /* this volume has been compressed */ 135 - }; 136 - 137 - /* raw access to a floppy drive, read and write an arbitrary segment. 138 - * For ftape/zftape to support formatting etc. 139 - */ 140 - #define MT_FT_RD_SINGLE 0 141 - #define MT_FT_RD_AHEAD 1 142 - #define MT_FT_WR_ASYNC 0 /* start tape only when all buffers are full */ 143 - #define MT_FT_WR_MULTI 1 /* start tape, continue until buffers are empty */ 144 - #define MT_FT_WR_SINGLE 2 /* write a single segment and stop afterwards */ 145 - #define MT_FT_WR_DELETE 3 /* write deleted data marks, one segment at time */ 146 - 147 - struct mtftseg 148 - { 149 - unsigned mt_segno; /* the segment to read or write */ 150 - unsigned mt_mode; /* modes for read/write (sync/async etc.) */ 151 - int mt_result; /* result of r/w request, not of the ioctl */ 152 - void __user *mt_data; /* User space buffer: must be 29kb */ 153 - }; 154 - 155 - /* get tape capacity (ftape/zftape) 156 - */ 157 - struct mttapesize { 158 - unsigned long mt_capacity; /* entire, uncompressed capacity 159 - * of a cartridge 160 - */ 161 - unsigned long mt_used; /* what has been used so far, raw 162 - * uncompressed amount 163 - */ 164 - }; 165 - 166 - /* possible values of the ftfmt_op field 167 - */ 168 - #define FTFMT_SET_PARMS 1 /* set software parms */ 169 - #define FTFMT_GET_PARMS 2 /* get software parms */ 170 - #define FTFMT_FORMAT_TRACK 3 /* start formatting a tape track */ 171 - #define FTFMT_STATUS 4 /* monitor formatting a tape track */ 172 - #define FTFMT_VERIFY 5 /* verify the given segment */ 173 - 174 - struct ftfmtparms { 175 - unsigned char ft_qicstd; /* QIC-40/QIC-80/QIC-3010/QIC-3020 */ 176 - unsigned char ft_fmtcode; /* Refer to the QIC specs */ 177 - unsigned char ft_fhm; /* floppy head max */ 178 - unsigned char ft_ftm; /* floppy track max */ 179 - unsigned short ft_spt; /* segments per track */ 180 - unsigned short ft_tpc; /* tracks per cartridge */ 181 - }; 182 - 183 - struct ftfmttrack { 184 - unsigned int ft_track; /* track to format */ 185 - unsigned char ft_gap3; /* size of gap3, for FORMAT_TRK */ 186 - }; 187 - 188 - struct ftfmtstatus { 189 - unsigned int ft_segment; /* segment currently being formatted */ 190 - }; 191 - 192 - struct ftfmtverify { 193 - unsigned int ft_segment; /* segment to verify */ 194 - unsigned long ft_bsm; /* bsm as result of VERIFY cmd */ 195 - }; 196 - 197 - struct mtftformat { 198 - unsigned int fmt_op; /* operation to perform */ 199 - union fmt_arg { 200 - struct ftfmtparms fmt_parms; /* format parameters */ 201 - struct ftfmttrack fmt_track; /* ctrl while formatting */ 202 - struct ftfmtstatus fmt_status; 203 - struct ftfmtverify fmt_verify; /* for verifying */ 204 - } fmt_arg; 205 - }; 206 - 207 - struct mtftcmd { 208 - unsigned int ft_wait_before; /* timeout to wait for drive to get ready 209 - * before command is sent. Milliseconds 210 - */ 211 - qic117_cmd_t ft_cmd; /* command to send */ 212 - unsigned char ft_parm_cnt; /* zero: no parm is sent. */ 213 - unsigned char ft_parms[3]; /* parameter(s) to send to 214 - * the drive. The parms are nibbles 215 - * driver sends cmd + 2 step pulses */ 216 - unsigned int ft_result_bits; /* if non zero, number of bits 217 - * returned by the tape drive 218 - */ 219 - unsigned int ft_result; /* the result returned by the tape drive*/ 220 - unsigned int ft_wait_after; /* timeout to wait for drive to get ready 221 - * after command is sent. 0: don't wait */ 222 - int ft_status; /* status returned by ready wait 223 - * undefined if timeout was 0. 224 - */ 225 - int ft_error; /* error code if error status was set by 226 - * command 227 - */ 228 - }; 229 - 230 153 /* mag tape io control commands */ 231 154 #define MTIOCTOP _IOW('m', 1, struct mtop) /* do a mag tape op */ 232 155 #define MTIOCGET _IOR('m', 2, struct mtget) /* get tape status */ 233 156 #define MTIOCPOS _IOR('m', 3, struct mtpos) /* get tape position */ 234 157 235 - /* The next two are used by the QIC-02 driver for runtime reconfiguration. 236 - * See tpqic02.h for struct mtconfiginfo. 237 - */ 238 - #define MTIOCGETCONFIG _IOR('m', 4, struct mtconfiginfo) /* get tape config */ 239 - #define MTIOCSETCONFIG _IOW('m', 5, struct mtconfiginfo) /* set tape config */ 240 - 241 - /* the next six are used by the floppy ftape drivers and its frontends 242 - * sorry, but MTIOCTOP commands are write only. 243 - */ 244 - #define MTIOCRDFTSEG _IOWR('m', 6, struct mtftseg) /* read a segment */ 245 - #define MTIOCWRFTSEG _IOWR('m', 7, struct mtftseg) /* write a segment */ 246 - #define MTIOCVOLINFO _IOR('m', 8, struct mtvolinfo) /* info about volume */ 247 - #define MTIOCGETSIZE _IOR('m', 9, struct mttapesize)/* get cartridge size*/ 248 - #define MTIOCFTFORMAT _IOWR('m', 10, struct mtftformat) /* format ftape */ 249 - #define MTIOCFTCMD _IOWR('m', 11, struct mtftcmd) /* send QIC-117 cmd */ 250 158 251 159 /* Generic Mag Tape (device independent) status macros for examining 252 160 * mt_gstat -- HP-UX compatible.
+1 -1
include/linux/mutex.h
··· 105 105 extern void __mutex_init(struct mutex *lock, const char *name, 106 106 struct lock_class_key *key); 107 107 108 - /*** 108 + /** 109 109 * mutex_is_locked - is the mutex locked 110 110 * @lock: the mutex to be queried 111 111 *
+1 -1
include/linux/netfilter_ipv4/ip_tables.h
··· 28 28 #include <linux/netfilter/x_tables.h> 29 29 30 30 #define IPT_FUNCTION_MAXNAMELEN XT_FUNCTION_MAXNAMELEN 31 - #define IPT_TABLE_MAXNAMELEN XT_FUNCTION_MAXNAMELEN 31 + #define IPT_TABLE_MAXNAMELEN XT_TABLE_MAXNAMELEN 32 32 #define ipt_match xt_match 33 33 #define ipt_target xt_target 34 34 #define ipt_table xt_table
+1
include/linux/nfs_fs.h
··· 308 308 extern int nfs_revalidate_inode(struct nfs_server *server, struct inode *inode); 309 309 extern int __nfs_revalidate_inode(struct nfs_server *, struct inode *); 310 310 extern int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping); 311 + extern int nfs_revalidate_mapping_nolock(struct inode *inode, struct address_space *mapping); 311 312 extern int nfs_setattr(struct dentry *, struct iattr *); 312 313 extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr); 313 314 extern void nfs_begin_attr_update(struct inode *);
+1 -3
include/linux/nfsd/nfsd.h
··· 52 52 struct readdir_cd { 53 53 __be32 err; /* 0, nfserr, or nfserr_eof */ 54 54 }; 55 - typedef int (*encode_dent_fn)(struct readdir_cd *, const char *, 56 - int, loff_t, ino_t, unsigned int); 57 55 typedef int (*nfsd_dirop_t)(struct inode *, struct dentry *, int, int); 58 56 59 57 extern struct svc_program nfsd_program; ··· 115 117 int nfsd_truncate(struct svc_rqst *, struct svc_fh *, 116 118 unsigned long size); 117 119 __be32 nfsd_readdir(struct svc_rqst *, struct svc_fh *, 118 - loff_t *, struct readdir_cd *, encode_dent_fn); 120 + loff_t *, struct readdir_cd *, filldir_t); 119 121 __be32 nfsd_statfs(struct svc_rqst *, struct svc_fh *, 120 122 struct kstatfs *); 121 123
+4 -11
include/linux/nfsd/nfsfh.h
··· 217 217 static __inline__ struct svc_fh * 218 218 fh_copy(struct svc_fh *dst, struct svc_fh *src) 219 219 { 220 - if (src->fh_dentry || src->fh_locked) { 221 - struct dentry *dentry = src->fh_dentry; 222 - printk(KERN_ERR "fh_copy: copying %s/%s, already verified!\n", 223 - dentry->d_parent->d_name.name, dentry->d_name.name); 224 - } 220 + WARN_ON(src->fh_dentry || src->fh_locked); 225 221 226 222 *dst = *src; 227 223 return dst; ··· 296 300 dfprintk(FILEOP, "nfsd: fh_lock(%s) locked = %d\n", 297 301 SVCFH_fmt(fhp), fhp->fh_locked); 298 302 299 - if (!fhp->fh_dentry) { 300 - printk(KERN_ERR "fh_lock: fh not verified!\n"); 301 - return; 302 - } 303 + BUG_ON(!dentry); 304 + 303 305 if (fhp->fh_locked) { 304 306 printk(KERN_WARNING "fh_lock: %s/%s already locked!\n", 305 307 dentry->d_parent->d_name.name, dentry->d_name.name); ··· 322 328 static inline void 323 329 fh_unlock(struct svc_fh *fhp) 324 330 { 325 - if (!fhp->fh_dentry) 326 - printk(KERN_ERR "fh_unlock: fh not verified!\n"); 331 + BUG_ON(!fhp->fh_dentry); 327 332 328 333 if (fhp->fh_locked) { 329 334 fill_post_wcc(fhp);
+2 -2
include/linux/nfsd/xdr.h
··· 165 165 int nfssvc_encode_statfsres(struct svc_rqst *, __be32 *, struct nfsd_statfsres *); 166 166 int nfssvc_encode_readdirres(struct svc_rqst *, __be32 *, struct nfsd_readdirres *); 167 167 168 - int nfssvc_encode_entry(struct readdir_cd *, const char *name, 169 - int namlen, loff_t offset, ino_t ino, unsigned int); 168 + int nfssvc_encode_entry(void *, const char *name, 169 + int namlen, loff_t offset, u64 ino, unsigned int); 170 170 171 171 int nfssvc_release_fhandle(struct svc_rqst *, __be32 *, struct nfsd_fhandle *); 172 172
+4 -4
include/linux/nfsd/xdr3.h
··· 331 331 struct nfsd3_attrstat *); 332 332 int nfs3svc_release_fhandle2(struct svc_rqst *, __be32 *, 333 333 struct nfsd3_fhandle_pair *); 334 - int nfs3svc_encode_entry(struct readdir_cd *, const char *name, 335 - int namlen, loff_t offset, ino_t ino, 334 + int nfs3svc_encode_entry(void *, const char *name, 335 + int namlen, loff_t offset, u64 ino, 336 336 unsigned int); 337 - int nfs3svc_encode_entry_plus(struct readdir_cd *, const char *name, 338 - int namlen, loff_t offset, ino_t ino, 337 + int nfs3svc_encode_entry_plus(void *, const char *name, 338 + int namlen, loff_t offset, u64 ino, 339 339 unsigned int); 340 340 /* Helper functions for NFSv3 ACL code */ 341 341 __be32 *nfs3svc_encode_post_op_attr(struct svc_rqst *rqstp, __be32 *p,
+4 -2
include/linux/pci_ids.h
··· 1277 1277 #define PCI_DEVICE_ID_VIA_3296_0 0x0296 1278 1278 #define PCI_DEVICE_ID_VIA_8363_0 0x0305 1279 1279 #define PCI_DEVICE_ID_VIA_P4M800CE 0x0314 1280 - #define PCI_DEVICE_ID_VIA_K8M890CE 0x0336 1280 + #define PCI_DEVICE_ID_VIA_P4M890 0x0327 1281 + #define PCI_DEVICE_ID_VIA_VT3336 0x0336 1281 1282 #define PCI_DEVICE_ID_VIA_8371_0 0x0391 1282 1283 #define PCI_DEVICE_ID_VIA_8501_0 0x0501 1283 1284 #define PCI_DEVICE_ID_VIA_82C561 0x0561 1284 1285 #define PCI_DEVICE_ID_VIA_82C586_1 0x0571 1285 1286 #define PCI_DEVICE_ID_VIA_82C576 0x0576 1286 - #define PCI_DEVICE_ID_VIA_SATA_EIDE 0x0581 1287 1287 #define PCI_DEVICE_ID_VIA_82C586_0 0x0586 1288 1288 #define PCI_DEVICE_ID_VIA_82C596 0x0596 1289 1289 #define PCI_DEVICE_ID_VIA_82C597_0 0x0597 ··· 1326 1326 #define PCI_DEVICE_ID_VIA_8237 0x3227 1327 1327 #define PCI_DEVICE_ID_VIA_8251 0x3287 1328 1328 #define PCI_DEVICE_ID_VIA_8237A 0x3337 1329 + #define PCI_DEVICE_ID_VIA_8237S 0x3372 1330 + #define PCI_DEVICE_ID_VIA_SATA_EIDE 0x5324 1329 1331 #define PCI_DEVICE_ID_VIA_8231 0x8231 1330 1332 #define PCI_DEVICE_ID_VIA_8231_4 0x8235 1331 1333 #define PCI_DEVICE_ID_VIA_8365_1 0x8305
+1 -1
include/linux/pid_namespace.h
··· 39 39 40 40 static inline struct task_struct *child_reaper(struct task_struct *tsk) 41 41 { 42 - return tsk->nsproxy->pid_ns->child_reaper; 42 + return init_pid_ns.child_reaper; 43 43 } 44 44 45 45 #endif /* _LINUX_PID_NS_H */
-290
include/linux/qic117.h
··· 1 - #ifndef _QIC117_H 2 - #define _QIC117_H 3 - 4 - /* 5 - * Copyright (C) 1993-1996 Bas Laarhoven, 6 - * (C) 1997 Claus-Justus Heine. 7 - 8 - This program is free software; you can redistribute it and/or modify 9 - it under the terms of the GNU General Public License as published by 10 - the Free Software Foundation; either version 2, or (at your option) 11 - any later version. 12 - 13 - This program is distributed in the hope that it will be useful, 14 - but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - GNU General Public License for more details. 17 - 18 - You should have received a copy of the GNU General Public License 19 - along with this program; see the file COPYING. If not, write to 20 - the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA. 21 - 22 - * 23 - * $Source: /homes/cvs/ftape-stacked/include/linux/qic117.h,v $ 24 - * $Revision: 1.2 $ 25 - * $Date: 1997/10/05 19:19:32 $ 26 - * 27 - * This file contains QIC-117 spec. related definitions for the 28 - * QIC-40/80/3010/3020 floppy-tape driver "ftape" for Linux. 29 - * 30 - * These data were taken from the Quarter-Inch Cartridge 31 - * Drive Standards, Inc. document titled: 32 - * `Common Command Set Interface Specification for Flexible 33 - * Disk Controller Based Minicartridge Tape Drives' 34 - * document QIC-117 Revision J, 28 Aug 96. 35 - * For more information, contact: 36 - * Quarter-Inch Cartridge Drive Standards, Inc. 37 - * 311 East Carrillo Street 38 - * Santa Barbara, California 93101 39 - * Telephone (805) 963-3853 40 - * Fax (805) 962-1541 41 - * WWW http://www.qic.org 42 - * 43 - * Current QIC standard revisions (of interest) are: 44 - * QIC-40-MC, Rev. M, 2 Sep 92. 45 - * QIC-80-MC, Rev. N, 20 Mar 96. 46 - * QIC-80-MC, Rev. K, 15 Dec 94. 47 - * QIC-113, Rev. G, 15 Jun 95. 48 - * QIC-117, Rev. J, 28 Aug 96. 49 - * QIC-122, Rev. B, 6 Mar 91. 50 - * QIC-130, Rev. C, 2 Sep 92. 
51 - * QIC-3010-MC, Rev. F, 14 Jun 95. 52 - * QIC-3020-MC, Rev. G, 31 Aug 95. 53 - * QIC-CRF3, Rev. B, 15 Jun 95. 54 - * */ 55 - 56 - /* 57 - * QIC-117 common command set rev. J. 58 - * These commands are sent to the tape unit 59 - * as number of pulses over the step line. 60 - */ 61 - 62 - typedef enum { 63 - QIC_NO_COMMAND = 0, 64 - QIC_RESET = 1, 65 - QIC_REPORT_NEXT_BIT = 2, 66 - QIC_PAUSE = 3, 67 - QIC_MICRO_STEP_PAUSE = 4, 68 - QIC_ALTERNATE_TIMEOUT = 5, 69 - QIC_REPORT_DRIVE_STATUS = 6, 70 - QIC_REPORT_ERROR_CODE = 7, 71 - QIC_REPORT_DRIVE_CONFIGURATION = 8, 72 - QIC_REPORT_ROM_VERSION = 9, 73 - QIC_LOGICAL_FORWARD = 10, 74 - QIC_PHYSICAL_REVERSE = 11, 75 - QIC_PHYSICAL_FORWARD = 12, 76 - QIC_SEEK_HEAD_TO_TRACK = 13, 77 - QIC_SEEK_LOAD_POINT = 14, 78 - QIC_ENTER_FORMAT_MODE = 15, 79 - QIC_WRITE_REFERENCE_BURST = 16, 80 - QIC_ENTER_VERIFY_MODE = 17, 81 - QIC_STOP_TAPE = 18, 82 - /* commands 19-20: reserved */ 83 - QIC_MICRO_STEP_HEAD_UP = 21, 84 - QIC_MICRO_STEP_HEAD_DOWN = 22, 85 - QIC_SOFT_SELECT = 23, 86 - QIC_SOFT_DESELECT = 24, 87 - QIC_SKIP_REVERSE = 25, 88 - QIC_SKIP_FORWARD = 26, 89 - QIC_SELECT_RATE = 27, 90 - /* command 27, in ccs2: Select Rate or Format */ 91 - QIC_ENTER_DIAGNOSTIC_1 = 28, 92 - QIC_ENTER_DIAGNOSTIC_2 = 29, 93 - QIC_ENTER_PRIMARY_MODE = 30, 94 - /* command 31: vendor unique */ 95 - QIC_REPORT_VENDOR_ID = 32, 96 - QIC_REPORT_TAPE_STATUS = 33, 97 - QIC_SKIP_EXTENDED_REVERSE = 34, 98 - QIC_SKIP_EXTENDED_FORWARD = 35, 99 - QIC_CALIBRATE_TAPE_LENGTH = 36, 100 - QIC_REPORT_FORMAT_SEGMENTS = 37, 101 - QIC_SET_FORMAT_SEGMENTS = 38, 102 - /* commands 39-45: reserved */ 103 - QIC_PHANTOM_SELECT = 46, 104 - QIC_PHANTOM_DESELECT = 47 105 - } qic117_cmd_t; 106 - 107 - typedef enum { 108 - discretional = 0, required, ccs1, ccs2 109 - } qic_compatibility; 110 - 111 - typedef enum { 112 - unused, mode, motion, report 113 - } command_types; 114 - 115 - struct qic117_command_table { 116 - char *name; 117 - __u8 mask; 118 - __u8 state; 119 - __u8 
cmd_type; 120 - __u8 non_intr; 121 - __u8 level; 122 - }; 123 - 124 - #define QIC117_COMMANDS {\ 125 - /* command mask state cmd_type */\ 126 - /* | name | | | non_intr */\ 127 - /* | | | | | | level */\ 128 - /* 0*/ {NULL, 0x00, 0x00, mode, 0, discretional},\ 129 - /* 1*/ {"soft reset", 0x00, 0x00, motion, 1, required},\ 130 - /* 2*/ {"report next bit", 0x00, 0x00, report, 0, required},\ 131 - /* 3*/ {"pause", 0x36, 0x24, motion, 1, required},\ 132 - /* 4*/ {"micro step pause", 0x36, 0x24, motion, 1, required},\ 133 - /* 5*/ {"alternate command timeout", 0x00, 0x00, mode, 0, required},\ 134 - /* 6*/ {"report drive status", 0x00, 0x00, report, 0, required},\ 135 - /* 7*/ {"report error code", 0x01, 0x01, report, 0, required},\ 136 - /* 8*/ {"report drive configuration",0x00, 0x00, report, 0, required},\ 137 - /* 9*/ {"report rom version", 0x00, 0x00, report, 0, required},\ 138 - /*10*/ {"logical forward", 0x37, 0x25, motion, 0, required},\ 139 - /*11*/ {"physical reverse", 0x17, 0x05, motion, 0, required},\ 140 - /*12*/ {"physical forward", 0x17, 0x05, motion, 0, required},\ 141 - /*13*/ {"seek head to track", 0x37, 0x25, motion, 0, required},\ 142 - /*14*/ {"seek load point", 0x17, 0x05, motion, 1, required},\ 143 - /*15*/ {"enter format mode", 0x1f, 0x05, mode, 0, required},\ 144 - /*16*/ {"write reference burst", 0x1f, 0x05, motion, 1, required},\ 145 - /*17*/ {"enter verify mode", 0x37, 0x25, mode, 0, required},\ 146 - /*18*/ {"stop tape", 0x00, 0x00, motion, 1, required},\ 147 - /*19*/ {"reserved (19)", 0x00, 0x00, unused, 0, discretional},\ 148 - /*20*/ {"reserved (20)", 0x00, 0x00, unused, 0, discretional},\ 149 - /*21*/ {"micro step head up", 0x02, 0x00, motion, 0, required},\ 150 - /*22*/ {"micro step head down", 0x02, 0x00, motion, 0, required},\ 151 - /*23*/ {"soft select", 0x00, 0x00, mode, 0, discretional},\ 152 - /*24*/ {"soft deselect", 0x00, 0x00, mode, 0, discretional},\ 153 - /*25*/ {"skip segments reverse", 0x36, 0x24, motion, 1, required},\ 154 
- /*26*/ {"skip segments forward", 0x36, 0x24, motion, 1, required},\ 155 - /*27*/ {"select rate or format", 0x03, 0x01, mode, 0, required /* [ccs2] */},\ 156 - /*28*/ {"enter diag mode 1", 0x00, 0x00, mode, 0, discretional},\ 157 - /*29*/ {"enter diag mode 2", 0x00, 0x00, mode, 0, discretional},\ 158 - /*30*/ {"enter primary mode", 0x00, 0x00, mode, 0, required},\ 159 - /*31*/ {"vendor unique (31)", 0x00, 0x00, unused, 0, discretional},\ 160 - /*32*/ {"report vendor id", 0x00, 0x00, report, 0, required},\ 161 - /*33*/ {"report tape status", 0x04, 0x04, report, 0, ccs1},\ 162 - /*34*/ {"skip extended reverse", 0x36, 0x24, motion, 1, ccs1},\ 163 - /*35*/ {"skip extended forward", 0x36, 0x24, motion, 1, ccs1},\ 164 - /*36*/ {"calibrate tape length", 0x17, 0x05, motion, 1, ccs2},\ 165 - /*37*/ {"report format segments", 0x17, 0x05, report, 0, ccs2},\ 166 - /*38*/ {"set format segments", 0x17, 0x05, mode, 0, ccs2},\ 167 - /*39*/ {"reserved (39)", 0x00, 0x00, unused, 0, discretional},\ 168 - /*40*/ {"vendor unique (40)", 0x00, 0x00, unused, 0, discretional},\ 169 - /*41*/ {"vendor unique (41)", 0x00, 0x00, unused, 0, discretional},\ 170 - /*42*/ {"vendor unique (42)", 0x00, 0x00, unused, 0, discretional},\ 171 - /*43*/ {"vendor unique (43)", 0x00, 0x00, unused, 0, discretional},\ 172 - /*44*/ {"vendor unique (44)", 0x00, 0x00, unused, 0, discretional},\ 173 - /*45*/ {"vendor unique (45)", 0x00, 0x00, unused, 0, discretional},\ 174 - /*46*/ {"phantom select", 0x00, 0x00, mode, 0, discretional},\ 175 - /*47*/ {"phantom deselect", 0x00, 0x00, mode, 0, discretional},\ 176 - } 177 - 178 - /* 179 - * Status bits returned by QIC_REPORT_DRIVE_STATUS 180 - */ 181 - 182 - #define QIC_STATUS_READY 0x01 /* Drive is ready or idle. 
*/ 183 - #define QIC_STATUS_ERROR 0x02 /* Error detected, must read 184 - error code to clear this */ 185 - #define QIC_STATUS_CARTRIDGE_PRESENT 0x04 /* Tape is present */ 186 - #define QIC_STATUS_WRITE_PROTECT 0x08 /* Tape is write protected */ 187 - #define QIC_STATUS_NEW_CARTRIDGE 0x10 /* New cartridge inserted, must 188 - read error status to clear. */ 189 - #define QIC_STATUS_REFERENCED 0x20 /* Cartridge appears to have been 190 - formatted. */ 191 - #define QIC_STATUS_AT_BOT 0x40 /* Cartridge is at physical 192 - beginning of tape. */ 193 - #define QIC_STATUS_AT_EOT 0x80 /* Cartridge is at physical end 194 - of tape. */ 195 - /* 196 - * Status bits returned by QIC_REPORT_DRIVE_CONFIGURATION 197 - */ 198 - 199 - #define QIC_CONFIG_RATE_MASK 0x18 200 - #define QIC_CONFIG_RATE_SHIFT 3 201 - #define QIC_CONFIG_RATE_250 0 202 - #define QIC_CONFIG_RATE_500 2 203 - #define QIC_CONFIG_RATE_1000 3 204 - #define QIC_CONFIG_RATE_2000 1 205 - #define QIC_CONFIG_RATE_4000 0 /* since QIC-117 Rev. J */ 206 - 207 - #define QIC_CONFIG_LONG 0x40 /* Extra Length Tape Detected */ 208 - #define QIC_CONFIG_80 0x80 /* QIC-80 detected. */ 209 - 210 - /* 211 - * Status bits returned by QIC_REPORT_TAPE_STATUS 212 - */ 213 - 214 - #define QIC_TAPE_STD_MASK 0x0f 215 - #define QIC_TAPE_QIC40 0x01 216 - #define QIC_TAPE_QIC80 0x02 217 - #define QIC_TAPE_QIC3020 0x03 218 - #define QIC_TAPE_QIC3010 0x04 219 - 220 - #define QIC_TAPE_LEN_MASK 0x70 221 - #define QIC_TAPE_205FT 0x10 222 - #define QIC_TAPE_307FT 0x20 223 - #define QIC_TAPE_VARIABLE 0x30 224 - #define QIC_TAPE_1100FT 0x40 225 - #define QIC_TAPE_FLEX 0x60 226 - 227 - #define QIC_TAPE_WIDE 0x80 228 - 229 - /* Define a value (in feet) slightly higher than 230 - * the possible maximum tape length. 231 - */ 232 - #define QIC_TOP_TAPE_LEN 1500 233 - 234 - /* 235 - * Errors: List of error codes, and their severity. 236 - */ 237 - 238 - typedef struct { 239 - char *message; /* Text describing the error. 
*/ 240 - unsigned int fatal:1; /* Non-zero if the error is fatal. */ 241 - } ftape_error; 242 - 243 - #define QIC117_ERRORS {\ 244 - /* 0*/ { "No error", 0, },\ 245 - /* 1*/ { "Command Received while Drive Not Ready", 0, },\ 246 - /* 2*/ { "Cartridge Not Present or Removed", 1, },\ 247 - /* 3*/ { "Motor Speed Error (not within 1%)", 1, },\ 248 - /* 4*/ { "Motor Speed Fault (jammed, or gross speed error", 1, },\ 249 - /* 5*/ { "Cartridge Write Protected", 1, },\ 250 - /* 6*/ { "Undefined or Reserved Command Code", 1, },\ 251 - /* 7*/ { "Illegal Track Address Specified for Seek", 1, },\ 252 - /* 8*/ { "Illegal Command in Report Subcontext", 0, },\ 253 - /* 9*/ { "Illegal Entry into a Diagnostic Mode", 1, },\ 254 - /*10*/ { "Broken Tape Detected (based on hole sensor)", 1, },\ 255 - /*11*/ { "Warning--Read Gain Setting Error", 1, },\ 256 - /*12*/ { "Command Received While Error Status Pending (obs)", 1, },\ 257 - /*13*/ { "Command Received While New Cartridge Pending", 1, },\ 258 - /*14*/ { "Command Illegal or Undefined in Primary Mode", 1, },\ 259 - /*15*/ { "Command Illegal or Undefined in Format Mode", 1, },\ 260 - /*16*/ { "Command Illegal or Undefined in Verify Mode", 1, },\ 261 - /*17*/ { "Logical Forward Not at Logical BOT or no Format Segments in Format Mode", 1, },\ 262 - /*18*/ { "Logical EOT Before All Segments generated", 1, },\ 263 - /*19*/ { "Command Illegal When Cartridge Not Referenced", 1, },\ 264 - /*20*/ { "Self-Diagnostic Failed (cannot be cleared)", 1, },\ 265 - /*21*/ { "Warning EEPROM Not Initialized, Defaults Set", 1, },\ 266 - /*22*/ { "EEPROM Corrupted or Hardware Failure", 1, },\ 267 - /*23*/ { "Motion Time-out Error", 1, },\ 268 - /*24*/ { "Data Segment Too Long -- Logical Forward or Pause", 1, },\ 269 - /*25*/ { "Transmit Overrun (obs)", 1, },\ 270 - /*26*/ { "Power On Reset Occurred", 0, },\ 271 - /*27*/ { "Software Reset Occurred", 0, },\ 272 - /*28*/ { "Diagnostic Mode 1 Error", 1, },\ 273 - /*29*/ { "Diagnostic Mode 2 Error", 1, },\ 
274 - /*30*/ { "Command Received During Non-Interruptible Process", 1, },\ 275 - /*31*/ { "Rate or Format Selection Error", 1, },\ 276 - /*32*/ { "Illegal Command While in High Speed Mode", 1, },\ 277 - /*33*/ { "Illegal Seek Segment Value", 1, },\ 278 - /*34*/ { "Invalid Media", 1, },\ 279 - /*35*/ { "Head Positioning Failure", 1, },\ 280 - /*36*/ { "Write Reference Burst Failure", 1, },\ 281 - /*37*/ { "Prom Code Missing", 1, },\ 282 - /*38*/ { "Invalid Format", 1, },\ 283 - /*39*/ { "EOT/BOT System Failure", 1, },\ 284 - /*40*/ { "Prom A Checksum Error", 1, },\ 285 - /*41*/ { "Drive Wakeup Reset Occurred", 1, },\ 286 - /*42*/ { "Prom B Checksum Error", 1, },\ 287 - /*43*/ { "Illegal Entry into Format Mode", 1, },\ 288 - } 289 - 290 - #endif /* _QIC117_H */
+1 -1
include/linux/raid/md.h
··· 94 94 struct page *page, int rw); 95 95 extern void md_do_sync(mddev_t *mddev); 96 96 extern void md_new_event(mddev_t *mddev); 97 - 97 + extern void md_allow_write(mddev_t *mddev); 98 98 99 99 #endif /* CONFIG_MD */ 100 100 #endif
+2
include/linux/reiserfs_fs_i.h
··· 25 25 i_link_saved_truncate_mask = 0x0020, 26 26 i_has_xattr_dir = 0x0040, 27 27 i_data_log = 0x0080, 28 + i_ever_mapped = 0x0100 28 29 } reiserfs_inode_flags; 29 30 30 31 struct reiserfs_inode_info { ··· 53 52 ** flushed */ 54 53 unsigned long i_trans_id; 55 54 struct reiserfs_journal_list *i_jl; 55 + struct mutex i_mmap; 56 56 #ifdef CONFIG_REISERFS_FS_POSIX_ACL 57 57 struct posix_acl *i_acl_access; 58 58 struct posix_acl *i_acl_default;
+2 -2
include/linux/rtmutex.h
··· 16 16 #include <linux/plist.h> 17 17 #include <linux/spinlock_types.h> 18 18 19 - /* 19 + /** 20 20 * The rt_mutex structure 21 21 * 22 22 * @wait_lock: spinlock to protect the structure ··· 71 71 #define DEFINE_RT_MUTEX(mutexname) \ 72 72 struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname) 73 73 74 - /*** 74 + /** 75 75 * rt_mutex_is_locked - is the mutex locked 76 76 * @lock: the mutex to be queried 77 77 *
-1
include/linux/sunrpc/sched.h
··· 250 250 int flags, const struct rpc_call_ops *ops, 251 251 void *data); 252 252 void rpc_put_task(struct rpc_task *); 253 - void rpc_release_task(struct rpc_task *); 254 253 void rpc_exit_task(struct rpc_task *); 255 254 void rpc_release_calldata(const struct rpc_call_ops *, void *); 256 255 void rpc_killall_tasks(struct rpc_clnt *);
+4 -1
include/linux/sunrpc/svc.h
··· 144 144 * 145 145 * Each request/reply pair can have at most one "payload", plus two pages, 146 146 * one for the request, and one for the reply. 147 + * We using ->sendfile to return read data, we might need one extra page 148 + * if the request is not page-aligned. So add another '1'. 147 149 */ 148 - #define RPCSVC_MAXPAGES ((RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE + 2) 150 + #define RPCSVC_MAXPAGES ((RPCSVC_MAXPAYLOAD+PAGE_SIZE-1)/PAGE_SIZE \ 151 + + 2 + 1) 149 152 150 153 static inline u32 svc_getnl(struct kvec *iov) 151 154 {
+2 -2
include/linux/timer.h
··· 41 41 init_timer(timer); 42 42 } 43 43 44 - /*** 44 + /** 45 45 * timer_pending - is a timer pending? 46 46 * @timer: the timer in question 47 47 * ··· 63 63 64 64 extern unsigned long next_timer_interrupt(void); 65 65 66 - /*** 66 + /** 67 67 * add_timer - start a timer 68 68 * @timer: the timer to be added 69 69 *
+1 -1
include/net/inet6_connection_sock.h
··· 38 38 39 39 extern void inet6_csk_addr2sockaddr(struct sock *sk, struct sockaddr *uaddr); 40 40 41 - extern int inet6_csk_xmit(struct sk_buff *skb, struct sock *sk, int ipfragok); 41 + extern int inet6_csk_xmit(struct sk_buff *skb, int ipfragok); 42 42 #endif /* _INET6_CONNECTION_SOCK_H */
+1 -2
include/net/inet_connection_sock.h
··· 37 37 * (i.e. things that depend on the address family) 38 38 */ 39 39 struct inet_connection_sock_af_ops { 40 - int (*queue_xmit)(struct sk_buff *skb, struct sock *sk, 41 - int ipfragok); 40 + int (*queue_xmit)(struct sk_buff *skb, int ipfragok); 42 41 void (*send_check)(struct sock *sk, int len, 43 42 struct sk_buff *skb); 44 43 int (*rebuild_header)(struct sock *sk);
+1 -1
include/net/ip.h
··· 97 97 extern int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *)); 98 98 extern int ip_do_nat(struct sk_buff *skb); 99 99 extern void ip_send_check(struct iphdr *ip); 100 - extern int ip_queue_xmit(struct sk_buff *skb, struct sock *sk, int ipfragok); 100 + extern int ip_queue_xmit(struct sk_buff *skb, int ipfragok); 101 101 extern void ip_init(void); 102 102 extern int ip_append_data(struct sock *sk, 103 103 int getfrag(void *from, char *to, int offset, int len,
+1
include/net/netfilter/nf_conntrack_compat.h
··· 6 6 #if defined(CONFIG_IP_NF_CONNTRACK) || defined(CONFIG_IP_NF_CONNTRACK_MODULE) 7 7 8 8 #include <linux/netfilter_ipv4/ip_conntrack.h> 9 + #include <linux/socket.h> 9 10 10 11 #ifdef CONFIG_IP_NF_CONNTRACK_MARK 11 12 static inline u_int32_t *nf_ct_get_mark(const struct sk_buff *skb,
+1
include/net/sctp/sm.h
··· 134 134 sctp_state_fn_t sctp_sf_discard_chunk; 135 135 sctp_state_fn_t sctp_sf_do_5_2_1_siminit; 136 136 sctp_state_fn_t sctp_sf_do_5_2_2_dupinit; 137 + sctp_state_fn_t sctp_sf_do_5_2_3_initack; 137 138 sctp_state_fn_t sctp_sf_do_5_2_4_dupcook; 138 139 sctp_state_fn_t sctp_sf_unk_chunk; 139 140 sctp_state_fn_t sctp_sf_do_8_5_1_E_sa;
+15 -3
include/sound/core.h
··· 132 132 int shutdown; /* this card is going down */ 133 133 int free_on_last_close; /* free in context of file_release */ 134 134 wait_queue_head_t shutdown_sleep; 135 - struct device *parent; 136 - struct device *dev; 135 + struct device *dev; /* device assigned to this card */ 136 + #ifndef CONFIG_SYSFS_DEPRECATED 137 + struct device *card_dev; /* cardX object for sysfs */ 138 + #endif 137 139 138 140 #ifdef CONFIG_PM 139 141 unsigned int power_state; /* power state */ ··· 192 190 void *private_data; /* private data for f_ops->open */ 193 191 struct device *dev; /* device for sysfs */ 194 192 }; 193 + 194 + /* return a device pointer linked to each sound device as a parent */ 195 + static inline struct device *snd_card_get_device_link(struct snd_card *card) 196 + { 197 + #ifdef CONFIG_SYSFS_DEPRECATED 198 + return card ? card->dev : NULL; 199 + #else 200 + return card ? card->card_dev : NULL; 201 + #endif 202 + } 195 203 196 204 /* sound.c */ 197 205 ··· 269 257 int snd_card_file_remove(struct snd_card *card, struct file *file); 270 258 271 259 #ifndef snd_card_set_dev 272 - #define snd_card_set_dev(card,devptr) ((card)->parent = (devptr)) 260 + #define snd_card_set_dev(card,devptr) ((card)->dev = (devptr)) 273 261 #endif 274 262 275 263 /* device.c */
+1 -1
ipc/shm.c
··· 279 279 if (size < SHMMIN || size > ns->shm_ctlmax) 280 280 return -EINVAL; 281 281 282 - if (ns->shm_tot + numpages >= ns->shm_ctlall) 282 + if (ns->shm_tot + numpages > ns->shm_ctlall) 283 283 return -ENOSPC; 284 284 285 285 shp = ipc_rcu_alloc(sizeof(*shp));
+1 -1
kernel/exit.c
··· 938 938 939 939 tsk->exit_code = code; 940 940 proc_exit_connector(tsk); 941 - exit_notify(tsk); 942 941 exit_task_namespaces(tsk); 942 + exit_notify(tsk); 943 943 #ifdef CONFIG_NUMA 944 944 mpol_free(tsk->mempolicy); 945 945 tsk->mempolicy = NULL;
+1 -1
kernel/fork.c
··· 1313 1313 return regs; 1314 1314 } 1315 1315 1316 - struct task_struct * __devinit fork_idle(int cpu) 1316 + struct task_struct * __cpuinit fork_idle(int cpu) 1317 1317 { 1318 1318 struct task_struct *task; 1319 1319 struct pt_regs regs;
+3
kernel/irq/manage.c
··· 315 315 /* Undo nested disables: */ 316 316 desc->depth = 1; 317 317 } 318 + /* Reset broken irq detection when installing new handler */ 319 + desc->irq_count = 0; 320 + desc->irqs_unhandled = 0; 318 321 spin_unlock_irqrestore(&desc->lock, flags); 319 322 320 323 new->irq = irq;
+13 -7
kernel/kprobes.c
··· 87 87 int ngarbage; 88 88 }; 89 89 90 + enum kprobe_slot_state { 91 + SLOT_CLEAN = 0, 92 + SLOT_DIRTY = 1, 93 + SLOT_USED = 2, 94 + }; 95 + 90 96 static struct hlist_head kprobe_insn_pages; 91 97 static int kprobe_garbage_slots; 92 98 static int collect_garbage_slots(void); ··· 136 130 if (kip->nused < INSNS_PER_PAGE) { 137 131 int i; 138 132 for (i = 0; i < INSNS_PER_PAGE; i++) { 139 - if (!kip->slot_used[i]) { 140 - kip->slot_used[i] = 1; 133 + if (kip->slot_used[i] == SLOT_CLEAN) { 134 + kip->slot_used[i] = SLOT_USED; 141 135 kip->nused++; 142 136 return kip->insns + (i * MAX_INSN_SIZE); 143 137 } ··· 169 163 } 170 164 INIT_HLIST_NODE(&kip->hlist); 171 165 hlist_add_head(&kip->hlist, &kprobe_insn_pages); 172 - memset(kip->slot_used, 0, INSNS_PER_PAGE); 173 - kip->slot_used[0] = 1; 166 + memset(kip->slot_used, SLOT_CLEAN, INSNS_PER_PAGE); 167 + kip->slot_used[0] = SLOT_USED; 174 168 kip->nused = 1; 175 169 kip->ngarbage = 0; 176 170 return kip->insns; ··· 179 173 /* Return 1 if all garbages are collected, otherwise 0. */ 180 174 static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx) 181 175 { 182 - kip->slot_used[idx] = 0; 176 + kip->slot_used[idx] = SLOT_CLEAN; 183 177 kip->nused--; 184 178 if (kip->nused == 0) { 185 179 /* ··· 218 212 continue; 219 213 kip->ngarbage = 0; /* we will collect all garbages */ 220 214 for (i = 0; i < INSNS_PER_PAGE; i++) { 221 - if (kip->slot_used[i] == -1 && 215 + if (kip->slot_used[i] == SLOT_DIRTY && 222 216 collect_one_slot(kip, i)) 223 217 break; 224 218 } ··· 238 232 slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) { 239 233 int i = (slot - kip->insns) / MAX_INSN_SIZE; 240 234 if (dirty) { 241 - kip->slot_used[i] = -1; 235 + kip->slot_used[i] = SLOT_DIRTY; 242 236 kip->ngarbage++; 243 237 } else { 244 238 collect_one_slot(kip, i);
+1 -1
kernel/pid.c
··· 197 197 hlist_del_rcu(&pid->pid_chain); 198 198 spin_unlock_irqrestore(&pidmap_lock, flags); 199 199 200 - free_pidmap(current->nsproxy->pid_ns, pid->nr); 200 + free_pidmap(&init_pid_ns, pid->nr); 201 201 call_rcu(&pid->rcu, delayed_put_pid); 202 202 } 203 203
+2 -1
kernel/profile.c
··· 331 331 local_irq_restore(flags); 332 332 put_cpu(); 333 333 } 334 - EXPORT_SYMBOL_GPL(profile_hits); 335 334 336 335 static int __devinit profile_cpu_callback(struct notifier_block *info, 337 336 unsigned long action, void *__cpu) ··· 399 400 atomic_add(nr_hits, &prof_buffer[min(pc, prof_len - 1)]); 400 401 } 401 402 #endif /* !CONFIG_SMP */ 403 + 404 + EXPORT_SYMBOL_GPL(profile_hits); 402 405 403 406 void profile_tick(int type) 404 407 {
+11 -4
kernel/sys.c
··· 323 323 int blocking_notifier_call_chain(struct blocking_notifier_head *nh, 324 324 unsigned long val, void *v) 325 325 { 326 - int ret; 326 + int ret = NOTIFY_DONE; 327 327 328 - down_read(&nh->rwsem); 329 - ret = notifier_call_chain(&nh->head, val, v); 330 - up_read(&nh->rwsem); 328 + /* 329 + * We check the head outside the lock, but if this access is 330 + * racy then it does not matter what the result of the test 331 + * is, we re-check the list after having taken the lock anyway: 332 + */ 333 + if (rcu_dereference(nh->head)) { 334 + down_read(&nh->rwsem); 335 + ret = notifier_call_chain(&nh->head, val, v); 336 + up_read(&nh->rwsem); 337 + } 331 338 return ret; 332 339 } 333 340
+2 -2
mm/filemap_xip.c
··· 183 183 address = vma->vm_start + 184 184 ((pgoff - vma->vm_pgoff) << PAGE_SHIFT); 185 185 BUG_ON(address < vma->vm_start || address >= vma->vm_end); 186 - page = ZERO_PAGE(address); 186 + page = ZERO_PAGE(0); 187 187 pte = page_check_address(page, mm, address, &ptl); 188 188 if (pte) { 189 189 /* Nuke the page table entry. */ ··· 246 246 __xip_unmap(mapping, pgoff); 247 247 } else { 248 248 /* not shared and writable, use ZERO_PAGE() */ 249 - page = ZERO_PAGE(address); 249 + page = ZERO_PAGE(0); 250 250 } 251 251 252 252 out:
+9 -2
mm/memory.c
··· 2606 2606 gate_vma.vm_mm = NULL; 2607 2607 gate_vma.vm_start = FIXADDR_USER_START; 2608 2608 gate_vma.vm_end = FIXADDR_USER_END; 2609 - gate_vma.vm_page_prot = PAGE_READONLY; 2610 - gate_vma.vm_flags = 0; 2609 + gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC; 2610 + gate_vma.vm_page_prot = __P101; 2611 + /* 2612 + * Make sure the vDSO gets into every core dump. 2613 + * Dumping its contents makes post-mortem fully interpretable later 2614 + * without matching up the same kernel and hardware config to see 2615 + * what PC values meant. 2616 + */ 2617 + gate_vma.vm_flags |= VM_ALWAYSDUMP; 2611 2618 return 0; 2612 2619 } 2613 2620 __initcall(gate_vma_init);
+4
mm/mempolicy.c
··· 884 884 err = get_nodes(&nodes, nmask, maxnode); 885 885 if (err) 886 886 return err; 887 + #ifdef CONFIG_CPUSETS 888 + /* Restrict the nodes to the allowed nodes in the cpuset */ 889 + nodes_and(nodes, nodes, current->mems_allowed); 890 + #endif 887 891 return do_mbind(start, len, mode, &nodes, flags); 888 892 } 889 893
+7
mm/mmap.c
··· 1477 1477 { 1478 1478 struct mm_struct *mm = vma->vm_mm; 1479 1479 struct rlimit *rlim = current->signal->rlim; 1480 + unsigned long new_start; 1480 1481 1481 1482 /* address space limit tests */ 1482 1483 if (!may_expand_vm(mm, grow)) ··· 1496 1495 if (locked > limit && !capable(CAP_IPC_LOCK)) 1497 1496 return -ENOMEM; 1498 1497 } 1498 + 1499 + /* Check to ensure the stack will not grow into a hugetlb-only region */ 1500 + new_start = (vma->vm_flags & VM_GROWSUP) ? vma->vm_start : 1501 + vma->vm_end - size; 1502 + if (is_hugepage_only_range(vma->vm_mm, new_start, size)) 1503 + return -EFAULT; 1499 1504 1500 1505 /* 1501 1506 * Overcommit.. This must be the final test, as it will
-1
mm/mremap.c
··· 105 105 if (pte_none(*old_pte)) 106 106 continue; 107 107 pte = ptep_clear_flush(vma, old_addr, old_pte); 108 - /* ZERO_PAGE can be dependant on virtual addr */ 109 108 pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr); 110 109 set_pte_at(mm, new_addr, new_pte, pte); 111 110 }
+18 -23
mm/page-writeback.c
··· 133 133 134 134 #ifdef CONFIG_HIGHMEM 135 135 /* 136 - * If this mapping can only allocate from low memory, 137 - * we exclude high memory from our count. 136 + * We always exclude high memory from our count. 138 137 */ 139 - if (mapping && !(mapping_gfp_mask(mapping) & __GFP_HIGHMEM)) 140 - available_memory -= totalhigh_pages; 138 + available_memory -= totalhigh_pages; 141 139 #endif 142 140 143 141 ··· 524 526 }; 525 527 526 528 /* 527 - * If the machine has a large highmem:lowmem ratio then scale back the default 528 - * dirty memory thresholds: allowing too much dirty highmem pins an excessive 529 - * number of buffer_heads. 529 + * Called early on to tune the page writeback dirty limits. 530 + * 531 + * We used to scale dirty pages according to how total memory 532 + * related to pages that could be allocated for buffers (by 533 + * comparing nr_free_buffer_pages() to vm_total_pages. 534 + * 535 + * However, that was when we used "dirty_ratio" to scale with 536 + * all memory, and we don't do that any more. "dirty_ratio" 537 + * is now applied to total non-HIGHPAGE memory (by subtracting 538 + * totalhigh_pages from vm_total_pages), and as such we can't 539 + * get into the old insane situation any more where we had 540 + * large amounts of dirty pages compared to a small amount of 541 + * non-HIGHMEM memory. 542 + * 543 + * But we might still want to scale the dirty_ratio by how 544 + * much memory the box has.. 
530 545 */ 531 546 void __init page_writeback_init(void) 532 547 { 533 - long buffer_pages = nr_free_buffer_pages(); 534 - long correction; 535 - 536 - correction = (100 * 4 * buffer_pages) / vm_total_pages; 537 - 538 - if (correction < 100) { 539 - dirty_background_ratio *= correction; 540 - dirty_background_ratio /= 100; 541 - vm_dirty_ratio *= correction; 542 - vm_dirty_ratio /= 100; 543 - 544 - if (dirty_background_ratio <= 0) 545 - dirty_background_ratio = 1; 546 - if (vm_dirty_ratio <= 0) 547 - vm_dirty_ratio = 1; 548 - } 549 548 mod_timer(&wb_timer, jiffies + dirty_writeback_interval); 550 549 writeback_set_ratelimit(); 551 550 register_cpu_notifier(&ratelimit_nb);
+1 -2
mm/page_alloc.c
··· 989 989 int classzone_idx, int alloc_flags) 990 990 { 991 991 /* free_pages may go negative - that's OK */ 992 - unsigned long min = mark; 993 - long free_pages = z->free_pages - (1 << order) + 1; 992 + long min = mark, free_pages = z->free_pages - (1 << order) + 1; 994 993 int o; 995 994 996 995 if (alloc_flags & ALLOC_HIGH)
+14 -8
mm/truncate.c
··· 51 51 do_invalidatepage(page, partial); 52 52 } 53 53 54 + /* 55 + * This cancels just the dirty bit on the kernel page itself, it 56 + * does NOT actually remove dirty bits on any mmap's that may be 57 + * around. It also leaves the page tagged dirty, so any sync 58 + * activity will still find it on the dirty lists, and in particular, 59 + * clear_page_dirty_for_io() will still look at the dirty bits in 60 + * the VM. 61 + * 62 + * Doing this should *normally* only ever be done when a page 63 + * is truncated, and is not actually mapped anywhere at all. However, 64 + * fs/buffer.c does this when it notices that somebody has cleaned 65 + * out all the buffers on a page without actually doing it through 66 + * the VM. Can you say "ext3 is horribly ugly"? Thought you could. 67 + */ 54 68 void cancel_dirty_page(struct page *page, unsigned int account_size) 55 69 { 56 - /* If we're cancelling the page, it had better not be mapped any more */ 57 - if (page_mapped(page)) { 58 - static unsigned int warncount; 59 - 60 - WARN_ON(++warncount < 5); 61 - } 62 - 63 70 if (TestClearPageDirty(page)) { 64 71 struct address_space *mapping = page->mapping; 65 72 if (mapping && mapping_cap_account_dirty(mapping)) { ··· 429 422 pagevec_release(&pvec); 430 423 cond_resched(); 431 424 } 432 - WARN_ON_ONCE(ret); 433 425 return ret; 434 426 } 435 427 EXPORT_SYMBOL_GPL(invalidate_inode_pages2_range);
+8 -2
net/bluetooth/l2cap.c
··· 585 585 goto done; 586 586 } 587 587 588 + if (la->l2_psm > 0 && btohs(la->l2_psm) < 0x1001 && 589 + !capable(CAP_NET_BIND_SERVICE)) { 590 + err = -EACCES; 591 + goto done; 592 + } 593 + 588 594 write_lock_bh(&l2cap_sk_list.lock); 589 595 590 596 if (la->l2_psm && __l2cap_get_sock_by_addr(la->l2_psm, &la->l2_bdaddr)) { ··· 2156 2150 2157 2151 str += sprintf(str, "%s %s %d %d 0x%4.4x 0x%4.4x %d %d 0x%x\n", 2158 2152 batostr(&bt_sk(sk)->src), batostr(&bt_sk(sk)->dst), 2159 - sk->sk_state, pi->psm, pi->scid, pi->dcid, pi->imtu, 2160 - pi->omtu, pi->link_mode); 2153 + sk->sk_state, btohs(pi->psm), pi->scid, pi->dcid, 2154 + pi->imtu, pi->omtu, pi->link_mode); 2161 2155 } 2162 2156 2163 2157 read_unlock_bh(&l2cap_sk_list.lock);
+8 -14
net/core/flow.c
··· 231 231 232 232 err = resolver(key, family, dir, &obj, &obj_ref); 233 233 234 - if (fle) { 235 - if (err) { 236 - /* Force security policy check on next lookup */ 237 - *head = fle->next; 238 - flow_entry_kill(cpu, fle); 239 - } else { 240 - fle->genid = atomic_read(&flow_cache_genid); 234 + if (fle && !err) { 235 + fle->genid = atomic_read(&flow_cache_genid); 241 236 242 - if (fle->object) 243 - atomic_dec(fle->object_ref); 237 + if (fle->object) 238 + atomic_dec(fle->object_ref); 244 239 245 - fle->object = obj; 246 - fle->object_ref = obj_ref; 247 - if (obj) 248 - atomic_inc(fle->object_ref); 249 - } 240 + fle->object = obj; 241 + fle->object_ref = obj_ref; 242 + if (obj) 243 + atomic_inc(fle->object_ref); 250 244 } 251 245 local_bh_enable(); 252 246
+2 -2
net/dccp/output.c
··· 124 124 DCCP_INC_STATS(DCCP_MIB_OUTSEGS); 125 125 126 126 memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); 127 - err = icsk->icsk_af_ops->queue_xmit(skb, sk, 0); 127 + err = icsk->icsk_af_ops->queue_xmit(skb, 0); 128 128 return net_xmit_eval(err); 129 129 } 130 130 return -ENOBUFS; ··· 396 396 code); 397 397 if (skb != NULL) { 398 398 memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); 399 - err = inet_csk(sk)->icsk_af_ops->queue_xmit(skb, sk, 0); 399 + err = inet_csk(sk)->icsk_af_ops->queue_xmit(skb, 0); 400 400 return net_xmit_eval(err); 401 401 } 402 402 }
+9 -2
net/decnet/dn_dev.c
··· 1145 1145 init_timer(&dn_db->timer); 1146 1146 1147 1147 dn_db->uptime = jiffies; 1148 + 1149 + dn_db->neigh_parms = neigh_parms_alloc(dev, &dn_neigh_table); 1150 + if (!dn_db->neigh_parms) { 1151 + dev->dn_ptr = NULL; 1152 + kfree(dn_db); 1153 + return NULL; 1154 + } 1155 + 1148 1156 if (dn_db->parms.up) { 1149 1157 if (dn_db->parms.up(dev) < 0) { 1158 + neigh_parms_release(&dn_neigh_table, dn_db->neigh_parms); 1150 1159 dev->dn_ptr = NULL; 1151 1160 kfree(dn_db); 1152 1161 return NULL; 1153 1162 } 1154 1163 } 1155 - 1156 - dn_db->neigh_parms = neigh_parms_alloc(dev, &dn_neigh_table); 1157 1164 1158 1165 dn_dev_sysctl_register(dev, &dn_db->parms); 1159 1166
+23 -11
net/ipv4/fib_trie.c
··· 1989 1989 unsigned cindex = iter->index; 1990 1990 struct tnode *p; 1991 1991 1992 + /* A single entry routing table */ 1993 + if (!tn) 1994 + return NULL; 1995 + 1992 1996 pr_debug("get_next iter={node=%p index=%d depth=%d}\n", 1993 1997 iter->tnode, iter->index, iter->depth); 1994 1998 rescan: ··· 2041 2037 if(!iter) 2042 2038 return NULL; 2043 2039 2044 - if (n && IS_TNODE(n)) { 2045 - iter->tnode = (struct tnode *) n; 2046 - iter->trie = t; 2047 - iter->index = 0; 2048 - iter->depth = 1; 2040 + if (n) { 2041 + if (IS_TNODE(n)) { 2042 + iter->tnode = (struct tnode *) n; 2043 + iter->trie = t; 2044 + iter->index = 0; 2045 + iter->depth = 1; 2046 + } else { 2047 + iter->tnode = NULL; 2048 + iter->trie = t; 2049 + iter->index = 0; 2050 + iter->depth = 0; 2051 + } 2049 2052 return n; 2050 2053 } 2051 2054 return NULL; ··· 2290 2279 if (v == SEQ_START_TOKEN) 2291 2280 return 0; 2292 2281 2282 + if (!NODE_PARENT(n)) { 2283 + if (iter->trie == trie_local) 2284 + seq_puts(seq, "<local>:\n"); 2285 + else 2286 + seq_puts(seq, "<main>:\n"); 2287 + } 2288 + 2293 2289 if (IS_TNODE(n)) { 2294 2290 struct tnode *tn = (struct tnode *) n; 2295 2291 __be32 prf = htonl(MASK_PFX(tn->key, tn->pos)); 2296 2292 2297 - if (!NODE_PARENT(n)) { 2298 - if (iter->trie == trie_local) 2299 - seq_puts(seq, "<local>:\n"); 2300 - else 2301 - seq_puts(seq, "<main>:\n"); 2302 - } 2303 2293 seq_indent(seq, iter->depth-1); 2304 2294 seq_printf(seq, " +-- %d.%d.%d.%d/%d %d %d %d\n", 2305 2295 NIPQUAD(prf), tn->pos, tn->bits, tn->full_children,
+2 -1
net/ipv4/ip_output.c
··· 281 281 !(IPCB(skb)->flags & IPSKB_REROUTED)); 282 282 } 283 283 284 - int ip_queue_xmit(struct sk_buff *skb, struct sock *sk, int ipfragok) 284 + int ip_queue_xmit(struct sk_buff *skb, int ipfragok) 285 285 { 286 + struct sock *sk = skb->sk; 286 287 struct inet_sock *inet = inet_sk(sk); 287 288 struct ip_options *opt = inet->opt; 288 289 struct rtable *rt;
+10 -10
net/ipv4/netfilter/Makefile
··· 4 4 5 5 # objects for the standalone - connection tracking / NAT 6 6 ip_conntrack-objs := ip_conntrack_standalone.o ip_conntrack_core.o ip_conntrack_proto_generic.o ip_conntrack_proto_tcp.o ip_conntrack_proto_udp.o ip_conntrack_proto_icmp.o 7 + # objects for l3 independent conntrack 8 + nf_conntrack_ipv4-objs := nf_conntrack_l3proto_ipv4.o nf_conntrack_proto_icmp.o 9 + ifeq ($(CONFIG_NF_CONNTRACK_PROC_COMPAT),y) 10 + ifeq ($(CONFIG_PROC_FS),y) 11 + nf_conntrack_ipv4-objs += nf_conntrack_l3proto_ipv4_compat.o 12 + endif 13 + endif 14 + 7 15 ip_nat-objs := ip_nat_core.o ip_nat_helper.o ip_nat_proto_unknown.o ip_nat_proto_tcp.o ip_nat_proto_udp.o ip_nat_proto_icmp.o 8 16 nf_nat-objs := nf_nat_core.o nf_nat_helper.o nf_nat_proto_unknown.o nf_nat_proto_tcp.o nf_nat_proto_udp.o nf_nat_proto_icmp.o 9 17 ifneq ($(CONFIG_NF_NAT),) ··· 28 20 29 21 # connection tracking 30 22 obj-$(CONFIG_IP_NF_CONNTRACK) += ip_conntrack.o 23 + obj-$(CONFIG_NF_CONNTRACK_IPV4) += nf_conntrack_ipv4.o 24 + 31 25 obj-$(CONFIG_IP_NF_NAT) += ip_nat.o 32 26 obj-$(CONFIG_NF_NAT) += nf_nat.o 33 27 ··· 116 106 117 107 obj-$(CONFIG_IP_NF_QUEUE) += ip_queue.o 118 108 119 - # objects for l3 independent conntrack 120 - nf_conntrack_ipv4-objs := nf_conntrack_l3proto_ipv4.o nf_conntrack_proto_icmp.o 121 - ifeq ($(CONFIG_NF_CONNTRACK_PROC_COMPAT),y) 122 - ifeq ($(CONFIG_PROC_FS),y) 123 - nf_conntrack_ipv4-objs += nf_conntrack_l3proto_ipv4_compat.o 124 - endif 125 - endif 126 - 127 - # l3 independent conntrack 128 - obj-$(CONFIG_NF_CONNTRACK_IPV4) += nf_conntrack_ipv4.o
+3 -1
net/ipv4/netfilter/ip_conntrack_netlink.c
··· 374 374 && ctnetlink_dump_helpinfo(skb, ct) < 0) 375 375 goto nfattr_failure; 376 376 377 + #ifdef CONFIG_IP_NF_CONNTRACK_MARK 377 378 if ((events & IPCT_MARK || ct->mark) 378 379 && ctnetlink_dump_mark(skb, ct) < 0) 379 380 goto nfattr_failure; 381 + #endif 380 382 381 383 if (events & IPCT_COUNTER_FILLING && 382 384 (ctnetlink_dump_counters(skb, ct, IP_CT_DIR_ORIGINAL) < 0 || ··· 961 959 if (cda[CTA_PROTOINFO-1]) { 962 960 err = ctnetlink_change_protoinfo(ct, cda); 963 961 if (err < 0) 964 - return err; 962 + goto err; 965 963 } 966 964 967 965 #if defined(CONFIG_IP_NF_CONNTRACK_MARK)
+8 -2
net/ipv4/netfilter/ip_conntrack_sip.c
··· 283 283 { 284 284 int s = *shift; 285 285 286 - for (; dptr <= limit && *dptr != '@'; dptr++) 286 + /* Search for @, but stop at the end of the line. 287 + * We are inside a sip: URI, so we don't need to worry about 288 + * continuation lines. */ 289 + while (dptr <= limit && 290 + *dptr != '@' && *dptr != '\r' && *dptr != '\n') { 287 291 (*shift)++; 292 + dptr++; 293 + } 288 294 289 - if (*dptr == '@') { 295 + if (dptr <= limit && *dptr == '@') { 290 296 dptr++; 291 297 (*shift)++; 292 298 } else
+2 -2
net/ipv4/netfilter/nf_nat_pptp.c
··· 72 72 DEBUGP("we are PAC->PNS\n"); 73 73 /* build tuple for PNS->PAC */ 74 74 t.src.l3num = AF_INET; 75 - t.src.u3.ip = master->tuplehash[exp->dir].tuple.src.u3.ip; 75 + t.src.u3.ip = master->tuplehash[!exp->dir].tuple.src.u3.ip; 76 76 t.src.u.gre.key = nat_pptp_info->pns_call_id; 77 - t.dst.u3.ip = master->tuplehash[exp->dir].tuple.dst.u3.ip; 77 + t.dst.u3.ip = master->tuplehash[!exp->dir].tuple.dst.u3.ip; 78 78 t.dst.u.gre.key = nat_pptp_info->pac_call_id; 79 79 t.dst.protonum = IPPROTO_GRE; 80 80 }
+9 -6
net/ipv4/tcp_input.c
··· 1011 1011 for (j = 0; j < i; j++){ 1012 1012 if (after(ntohl(sp[j].start_seq), 1013 1013 ntohl(sp[j+1].start_seq))){ 1014 - sp[j].start_seq = htonl(tp->recv_sack_cache[j+1].start_seq); 1015 - sp[j].end_seq = htonl(tp->recv_sack_cache[j+1].end_seq); 1016 - sp[j+1].start_seq = htonl(tp->recv_sack_cache[j].start_seq); 1017 - sp[j+1].end_seq = htonl(tp->recv_sack_cache[j].end_seq); 1014 + struct tcp_sack_block_wire tmp; 1015 + 1016 + tmp = sp[j]; 1017 + sp[j] = sp[j+1]; 1018 + sp[j+1] = tmp; 1018 1019 } 1019 1020 1020 1021 } ··· 4421 4420 * But, this leaves one open to an easy denial of 4422 4421 * service attack, and SYN cookies can't defend 4423 4422 * against this problem. So, we drop the data 4424 - * in the interest of security over speed. 4423 + * in the interest of security over speed unless 4424 + * it's still in use. 4425 4425 */ 4426 - goto discard; 4426 + kfree_skb(skb); 4427 + return 0; 4427 4428 } 4428 4429 goto discard; 4429 4430
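The tcp_input.c hunk above replaces four per-field htonl() copies (which pulled values from `recv_sack_cache` and mixed byte orders) with a plain whole-struct swap through a temporary. A minimal userspace sketch of the same idiom; `sack_block` is a stand-in for `tcp_sack_block_wire`, and the sort loop is illustrative, not the kernel's:

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Stand-in for tcp_sack_block_wire: sequence numbers stay in network
 * byte order, so comparisons must go through ntohl(). */
struct sack_block {
    uint32_t start_seq;
    uint32_t end_seq;
};

/* Sort SACK blocks by start_seq, swapping whole structs through a
 * temporary as the patch does, instead of copying field by field. */
static void sort_sacks(struct sack_block *sp, int n)
{
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            if (ntohl(sp[j].start_seq) > ntohl(sp[j + 1].start_seq)) {
                struct sack_block tmp = sp[j];
                sp[j] = sp[j + 1];
                sp[j + 1] = tmp;
            }
        }
    }
}
```

Swapping the whole struct keeps both fields in network order throughout; only the comparison converts, which is exactly what the old field-by-field version got wrong.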
+4 -2
net/ipv4/tcp_output.c
··· 467 467 468 468 th = (struct tcphdr *) skb_push(skb, tcp_header_size); 469 469 skb->h.th = th; 470 + skb_set_owner_w(skb, sk); 470 471 471 472 /* Build TCP header and checksum it. */ 472 473 th->source = inet->sport; ··· 541 540 if (after(tcb->end_seq, tp->snd_nxt) || tcb->seq == tcb->end_seq) 542 541 TCP_INC_STATS(TCP_MIB_OUTSEGS); 543 542 544 - err = icsk->icsk_af_ops->queue_xmit(skb, sk, 0); 543 + err = icsk->icsk_af_ops->queue_xmit(skb, 0); 545 544 if (likely(err <= 0)) 546 545 return err; 547 546 ··· 1651 1650 1652 1651 memcpy(skb_put(skb, next_skb_size), next_skb->data, next_skb_size); 1653 1652 1654 - skb->ip_summed = next_skb->ip_summed; 1653 + if (next_skb->ip_summed == CHECKSUM_PARTIAL) 1654 + skb->ip_summed = CHECKSUM_PARTIAL; 1655 1655 1656 1656 if (skb->ip_summed != CHECKSUM_PARTIAL) 1657 1657 skb->csum = csum_block_add(skb->csum, next_skb->csum, skb_size);
+1 -1
net/ipv4/tcp_probe.c
··· 30 30 31 31 #include <net/tcp.h> 32 32 33 - MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>"); 33 + MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>"); 34 34 MODULE_DESCRIPTION("TCP cwnd snooper"); 35 35 MODULE_LICENSE("GPL"); 36 36
+8 -2
net/ipv6/addrconf.c
··· 341 341 static struct inet6_dev * ipv6_add_dev(struct net_device *dev) 342 342 { 343 343 struct inet6_dev *ndev; 344 + struct in6_addr maddr; 344 345 345 346 ASSERT_RTNL(); 346 347 ··· 426 425 #endif 427 426 /* protected by rtnl_lock */ 428 427 rcu_assign_pointer(dev->ip6_ptr, ndev); 428 + 429 + /* Join all-node multicast group */ 430 + ipv6_addr_all_nodes(&maddr); 431 + ipv6_dev_mc_inc(dev, &maddr); 432 + 429 433 return ndev; 430 434 } 431 435 ··· 3393 3387 #ifdef CONFIG_IPV6_ROUTER_PREF 3394 3388 array[DEVCONF_ACCEPT_RA_RTR_PREF] = cnf->accept_ra_rtr_pref; 3395 3389 array[DEVCONF_RTR_PROBE_INTERVAL] = cnf->rtr_probe_interval; 3396 - #ifdef CONFIV_IPV6_ROUTE_INFO 3390 + #ifdef CONFIG_IPV6_ROUTE_INFO 3397 3391 array[DEVCONF_ACCEPT_RA_RT_INFO_MAX_PLEN] = cnf->accept_ra_rt_info_max_plen; 3398 3392 #endif 3399 3393 #endif ··· 3898 3892 .proc_handler = &proc_dointvec_jiffies, 3899 3893 .strategy = &sysctl_jiffies, 3900 3894 }, 3901 - #ifdef CONFIV_IPV6_ROUTE_INFO 3895 + #ifdef CONFIG_IPV6_ROUTE_INFO 3902 3896 { 3903 3897 .ctl_name = NET_IPV6_ACCEPT_RA_RT_INFO_MAX_PLEN, 3904 3898 .procname = "accept_ra_rt_info_max_plen",
+2 -1
net/ipv6/inet6_connection_sock.c
··· 139 139 140 140 EXPORT_SYMBOL_GPL(inet6_csk_addr2sockaddr); 141 141 142 - int inet6_csk_xmit(struct sk_buff *skb, struct sock *sk, int ipfragok) 142 + int inet6_csk_xmit(struct sk_buff *skb, int ipfragok) 143 143 { 144 + struct sock *sk = skb->sk; 144 145 struct inet_sock *inet = inet_sk(sk); 145 146 struct ipv6_pinfo *np = inet6_sk(sk); 146 147 struct flowi fl;
-6
net/ipv6/mcast.c
··· 2258 2258 2259 2259 void ipv6_mc_init_dev(struct inet6_dev *idev) 2260 2260 { 2261 - struct in6_addr maddr; 2262 - 2263 2261 write_lock_bh(&idev->lock); 2264 2262 rwlock_init(&idev->mc_lock); 2265 2263 idev->mc_gq_running = 0; ··· 2273 2275 idev->mc_maxdelay = IGMP6_UNSOLICITED_IVAL; 2274 2276 idev->mc_v1_seen = 0; 2275 2277 write_unlock_bh(&idev->lock); 2276 - 2277 - /* Add all-nodes address. */ 2278 - ipv6_addr_all_nodes(&maddr); 2279 - ipv6_dev_mc_inc(idev->dev, &maddr); 2280 2278 } 2281 2279 2282 2280 /*
+7
net/ipv6/ndisc.c
··· 1413 1413 return; 1414 1414 } 1415 1415 1416 + if (!ipv6_addr_equal(&skb->nh.ipv6h->daddr, target) && 1417 + !(ipv6_addr_type(target) & IPV6_ADDR_LINKLOCAL)) { 1418 + ND_PRINTK2(KERN_WARNING 1419 + "ICMPv6 Redirect: target address is not link-local.\n"); 1420 + return; 1421 + } 1422 + 1416 1423 ndisc_flow_init(&fl, NDISC_REDIRECT, &saddr_buf, &skb->nh.ipv6h->saddr, 1417 1424 dev->ifindex); 1418 1425
+1
net/ipv6/route.c
··· 2017 2017 + nla_total_size(4) /* RTA_IIF */ 2018 2018 + nla_total_size(4) /* RTA_OIF */ 2019 2019 + nla_total_size(4) /* RTA_PRIORITY */ 2020 + + RTAX_MAX * nla_total_size(4) /* RTA_METRICS */ 2020 2021 + nla_total_size(sizeof(struct rta_cacheinfo)); 2021 2022 } 2022 2023
+2 -2
net/netfilter/Kconfig
··· 165 165 166 166 config NF_CONNTRACK_H323 167 167 tristate "H.323 protocol support (EXPERIMENTAL)" 168 - depends on EXPERIMENTAL && NF_CONNTRACK 168 + depends on EXPERIMENTAL && NF_CONNTRACK && (IPV6 || IPV6=n) 169 169 help 170 170 H.323 is a VoIP signalling protocol from ITU-T. As one of the most 171 171 important VoIP protocols, it is widely used by voice hardware and ··· 628 628 629 629 config NETFILTER_XT_MATCH_HASHLIMIT 630 630 tristate '"hashlimit" match support' 631 - depends on NETFILTER_XTABLES 631 + depends on NETFILTER_XTABLES && (IP6_NF_IPTABLES || IP6_NF_IPTABLES=n) 632 632 help 633 633 This option adds a `hashlimit' match. 634 634
+3 -1
net/netfilter/nf_conntrack_netlink.c
··· 389 389 && ctnetlink_dump_helpinfo(skb, ct) < 0) 390 390 goto nfattr_failure; 391 391 392 + #ifdef CONFIG_NF_CONNTRACK_MARK 392 393 if ((events & IPCT_MARK || ct->mark) 393 394 && ctnetlink_dump_mark(skb, ct) < 0) 394 395 goto nfattr_failure; 396 + #endif 395 397 396 398 if (events & IPCT_COUNTER_FILLING && 397 399 (ctnetlink_dump_counters(skb, ct, IP_CT_DIR_ORIGINAL) < 0 || ··· 983 981 if (cda[CTA_PROTOINFO-1]) { 984 982 err = ctnetlink_change_protoinfo(ct, cda); 985 983 if (err < 0) 986 - return err; 984 + goto err; 987 985 } 988 986 989 987 #if defined(CONFIG_NF_CONNTRACK_MARK)
+1 -1
net/netfilter/nf_conntrack_pptp.c
··· 113 113 114 114 rcu_read_lock(); 115 115 nf_nat_pptp_expectfn = rcu_dereference(nf_nat_pptp_hook_expectfn); 116 - if (nf_nat_pptp_expectfn && ct->status & IPS_NAT_MASK) 116 + if (nf_nat_pptp_expectfn && ct->master->status & IPS_NAT_MASK) 117 117 nf_nat_pptp_expectfn(ct, exp); 118 118 else { 119 119 struct nf_conntrack_tuple inv_t;
+8 -2
net/netfilter/nf_conntrack_sip.c
··· 303 303 { 304 304 int s = *shift; 305 305 306 - for (; dptr <= limit && *dptr != '@'; dptr++) 306 + /* Search for @, but stop at the end of the line. 307 + * We are inside a sip: URI, so we don't need to worry about 308 + * continuation lines. */ 309 + while (dptr <= limit && 310 + *dptr != '@' && *dptr != '\r' && *dptr != '\n') { 307 311 (*shift)++; 312 + dptr++; 313 + } 308 314 309 - if (*dptr == '@') { 315 + if (dptr <= limit && *dptr == '@') { 310 316 dptr++; 311 317 (*shift)++; 312 318 } else
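The SIP conntrack fix above bounds the '@' scan at the end of the header line, so a sip: URI without a user part can no longer run the search past the line it started on. A userspace sketch of the bounded scan (`find_at` is a hypothetical name, not the kernel helper):

```c
#include <stddef.h>

/* Scan [dptr, limit] for '@' but stop at CR/LF, as the patched SIP
 * helper does.  Returns a pointer to the '@', or NULL if the line
 * ends first. */
static const char *find_at(const char *dptr, const char *limit)
{
    while (dptr <= limit &&
           *dptr != '@' && *dptr != '\r' && *dptr != '\n')
        dptr++;

    return (dptr <= limit && *dptr == '@') ? dptr : NULL;
}
```

Without the CR/LF check, a URI like `sip:example.org` would let the scan continue into the next header and match an '@' that belongs to a different field.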
+12 -17
net/netfilter/xt_connbytes.c
··· 52 52 { 53 53 const struct xt_connbytes_info *sinfo = matchinfo; 54 54 u_int64_t what = 0; /* initialize to make gcc happy */ 55 + u_int64_t bytes = 0; 56 + u_int64_t pkts = 0; 55 57 const struct ip_conntrack_counter *counters; 56 58 57 59 if (!(counters = nf_ct_get_counters(skb))) ··· 91 89 case XT_CONNBYTES_AVGPKT: 92 90 switch (sinfo->direction) { 93 91 case XT_CONNBYTES_DIR_ORIGINAL: 94 - what = div64_64(counters[IP_CT_DIR_ORIGINAL].bytes, 95 - counters[IP_CT_DIR_ORIGINAL].packets); 92 + bytes = counters[IP_CT_DIR_ORIGINAL].bytes; 93 + pkts = counters[IP_CT_DIR_ORIGINAL].packets; 96 94 break; 97 95 case XT_CONNBYTES_DIR_REPLY: 98 - what = div64_64(counters[IP_CT_DIR_REPLY].bytes, 99 - counters[IP_CT_DIR_REPLY].packets); 96 + bytes = counters[IP_CT_DIR_REPLY].bytes; 97 + pkts = counters[IP_CT_DIR_REPLY].packets; 100 98 break; 101 99 case XT_CONNBYTES_DIR_BOTH: 102 - { 103 - u_int64_t bytes; 104 - u_int64_t pkts; 105 - bytes = counters[IP_CT_DIR_ORIGINAL].bytes + 106 - counters[IP_CT_DIR_REPLY].bytes; 107 - pkts = counters[IP_CT_DIR_ORIGINAL].packets+ 108 - counters[IP_CT_DIR_REPLY].packets; 109 - 110 - /* FIXME_THEORETICAL: what to do if sum 111 - * overflows ? */ 112 - 113 - what = div64_64(bytes, pkts); 114 - } 100 + bytes = counters[IP_CT_DIR_ORIGINAL].bytes + 101 + counters[IP_CT_DIR_REPLY].bytes; 102 + pkts = counters[IP_CT_DIR_ORIGINAL].packets + 103 + counters[IP_CT_DIR_REPLY].packets; 115 104 break; 116 105 } 106 + if (pkts != 0) 107 + what = div64_64(bytes, pkts); 117 108 break; 118 109 } 119 110
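The xt_connbytes rewrite above hoists the byte and packet sums out of the switch so a single guarded div64_64() call covers all three directions, and the `pkts != 0` test closes a divide-by-zero when no packets have been counted yet. The guard itself, as a minimal userspace sketch:

```c
#include <stdint.h>

/* Average bytes per packet with the same guard the patch adds:
 * return 0 instead of dividing by zero when pkts == 0. */
static uint64_t avg_bytes_per_pkt(uint64_t bytes, uint64_t pkts)
{
    return pkts ? bytes / pkts : 0;
}
```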
+23 -23
net/packet/af_packet.c
··· 359 359 if (dev == NULL) 360 360 goto out_unlock; 361 361 362 + err = -ENETDOWN; 363 + if (!(dev->flags & IFF_UP)) 364 + goto out_unlock; 365 + 362 366 /* 363 367 * You may not queue a frame bigger than the mtu. This is the lowest level 364 368 * raw protocol and you must do your own fragmentation at this level. ··· 411 407 if (err) 412 408 goto out_free; 413 409 414 - err = -ENETDOWN; 415 - if (!(dev->flags & IFF_UP)) 416 - goto out_free; 417 - 418 410 /* 419 411 * Now send it 420 412 */ ··· 428 428 } 429 429 #endif 430 430 431 - static inline int run_filter(struct sk_buff *skb, struct sock *sk, 432 - unsigned *snaplen) 431 + static inline unsigned int run_filter(struct sk_buff *skb, struct sock *sk, 432 + unsigned int res) 433 433 { 434 434 struct sk_filter *filter; 435 - int err = 0; 436 435 437 436 rcu_read_lock_bh(); 438 437 filter = rcu_dereference(sk->sk_filter); 439 - if (filter != NULL) { 440 - err = sk_run_filter(skb, filter->insns, filter->len); 441 - if (!err) 442 - err = -EPERM; 443 - else if (*snaplen > err) 444 - *snaplen = err; 445 - } 438 + if (filter != NULL) 439 + res = sk_run_filter(skb, filter->insns, filter->len); 446 440 rcu_read_unlock_bh(); 447 441 448 - return err; 442 + return res; 449 443 } 450 444 451 445 /* ··· 461 467 struct packet_sock *po; 462 468 u8 * skb_head = skb->data; 463 469 int skb_len = skb->len; 464 - unsigned snaplen; 470 + unsigned int snaplen, res; 465 471 466 472 if (skb->pkt_type == PACKET_LOOPBACK) 467 473 goto drop; ··· 489 495 490 496 snaplen = skb->len; 491 497 492 - if (run_filter(skb, sk, &snaplen) < 0) 498 + res = run_filter(skb, sk, snaplen); 499 + if (!res) 493 500 goto drop_n_restore; 501 + if (snaplen > res) 502 + snaplen = res; 494 503 495 504 if (atomic_read(&sk->sk_rmem_alloc) + skb->truesize >= 496 505 (unsigned)sk->sk_rcvbuf) ··· 565 568 struct tpacket_hdr *h; 566 569 u8 * skb_head = skb->data; 567 570 int skb_len = skb->len; 568 - unsigned snaplen; 571 + unsigned int snaplen, res; 569 572 unsigned 
long status = TP_STATUS_LOSING|TP_STATUS_USER; 570 573 unsigned short macoff, netoff; 571 574 struct sk_buff *copy_skb = NULL; ··· 589 592 590 593 snaplen = skb->len; 591 594 592 - if (run_filter(skb, sk, &snaplen) < 0) 595 + res = run_filter(skb, sk, snaplen); 596 + if (!res) 593 597 goto drop_n_restore; 598 + if (snaplen > res) 599 + snaplen = res; 594 600 595 601 if (sk->sk_type == SOCK_DGRAM) { 596 602 macoff = netoff = TPACKET_ALIGN(TPACKET_HDRLEN) + 16; ··· 738 738 if (sock->type == SOCK_RAW) 739 739 reserve = dev->hard_header_len; 740 740 741 + err = -ENETDOWN; 742 + if (!(dev->flags & IFF_UP)) 743 + goto out_unlock; 744 + 741 745 err = -EMSGSIZE; 742 746 if (len > dev->mtu+reserve) 743 747 goto out_unlock; ··· 773 769 skb->protocol = proto; 774 770 skb->dev = dev; 775 771 skb->priority = sk->sk_priority; 776 - 777 - err = -ENETDOWN; 778 - if (!(dev->flags & IFF_UP)) 779 - goto out_free; 780 772 781 773 /* 782 774 * Now send it
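The af_packet change above switches run_filter() from reporting drop via a negative error and truncation via a pointer argument to returning the verdict directly: 0 means drop, any other value caps snaplen. A userspace sketch of that calling convention (stub filter and names are illustrative, not the kernel API):

```c
/* Stub BPF verdict: returns 0 to drop, or the number of bytes to
 * keep.  Stand-in for sk_run_filter(). */
static unsigned int stub_filter(unsigned int pkt_len)
{
    return pkt_len > 1500 ? 0 : 128; /* drop jumbo, else keep 128 bytes */
}

/* The patched convention: no filter means "keep all", a verdict of 0
 * means drop, otherwise clamp the snapshot length to the verdict. */
static int filter_packet(unsigned int pkt_len, unsigned int *snaplen,
                         int have_filter)
{
    unsigned int res = *snaplen = pkt_len;

    if (have_filter)
        res = stub_filter(pkt_len);
    if (!res)
        return 0;        /* drop */
    if (*snaplen > res)
        *snaplen = res;
    return 1;            /* deliver */
}
```

Returning the verdict directly removes the in/out `snaplen` pointer and the overloaded negative-error return that the old code used.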
+5 -3
net/sched/act_ipt.c
··· 55 55 struct ipt_target *target; 56 56 int ret = 0; 57 57 58 - target = xt_find_target(AF_INET, t->u.user.name, t->u.user.revision); 58 + target = xt_request_find_target(AF_INET, t->u.user.name, 59 + t->u.user.revision); 59 60 if (!target) 60 61 return -ENOENT; 61 62 ··· 64 63 65 64 ret = xt_check_target(target, AF_INET, t->u.target_size - sizeof(*t), 66 65 table, hook, 0, 0); 67 - if (ret) 66 + if (ret) { 67 + module_put(t->u.kernel.target->me); 68 68 return ret; 69 - 69 + } 70 70 if (t->u.kernel.target->checkentry 71 71 && !t->u.kernel.target->checkentry(table, NULL, 72 72 t->u.kernel.target, t->data,
+1 -1
net/sctp/protocol.c
··· 804 804 NIPQUAD(((struct rtable *)skb->dst)->rt_dst)); 805 805 806 806 SCTP_INC_STATS(SCTP_MIB_OUTSCTPPACKS); 807 - return ip_queue_xmit(skb, skb->sk, ipfragok); 807 + return ip_queue_xmit(skb, ipfragok); 808 808 } 809 809 810 810 static struct sctp_af sctp_ipv4_specific;
+4 -2
net/sctp/sm_make_chunk.c
··· 1562 1562 if (*errp) { 1563 1563 report.num_missing = htonl(1); 1564 1564 report.type = paramtype; 1565 - sctp_init_cause(*errp, SCTP_ERROR_INV_PARAM, 1565 + sctp_init_cause(*errp, SCTP_ERROR_MISS_PARAM, 1566 1566 &report, sizeof(report)); 1567 1567 } 1568 1568 ··· 1775 1775 1776 1776 /* Verify stream values are non-zero. */ 1777 1777 if ((0 == peer_init->init_hdr.num_outbound_streams) || 1778 - (0 == peer_init->init_hdr.num_inbound_streams)) { 1778 + (0 == peer_init->init_hdr.num_inbound_streams) || 1779 + (0 == peer_init->init_hdr.init_tag) || 1780 + (SCTP_DEFAULT_MINWINDOW > ntohl(peer_init->init_hdr.a_rwnd))) { 1779 1781 1780 1782 sctp_process_inv_mandatory(asoc, chunk, errp); 1781 1783 return 0;
+7 -1
net/sctp/sm_sideeffect.c
··· 217 217 218 218 asoc->peer.sack_needed = 0; 219 219 220 - error = sctp_outq_tail(&asoc->outqueue, sack); 220 + sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(sack)); 221 221 222 222 /* Stop the SACK timer. */ 223 223 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, ··· 621 621 /* The receiver of the HEARTBEAT ACK should also perform an 622 622 * RTT measurement for that destination transport address 623 623 * using the time value carried in the HEARTBEAT ACK chunk. 624 + * If the transport's rto_pending variable has been cleared, 625 + * it was most likely due to a retransmit. However, we want 626 + * to re-enable it to properly update the rto. 624 627 */ 628 + if (t->rto_pending == 0) 629 + t->rto_pending = 1; 630 + 625 631 hbinfo = (sctp_sender_hb_info_t *) chunk->skb->data; 626 632 sctp_transport_update_rto(t, (jiffies - hbinfo->sent_at)); 627 633
+22 -22
net/sctp/sm_statefuns.c
··· 440 440 { 441 441 struct sctp_chunk *chunk = arg; 442 442 sctp_init_chunk_t *initchunk; 443 - __u32 init_tag; 444 443 struct sctp_chunk *err_chunk; 445 444 struct sctp_packet *packet; 446 445 sctp_error_t error; ··· 460 461 461 462 /* Grab the INIT header. */ 462 463 chunk->subh.init_hdr = (sctp_inithdr_t *) chunk->skb->data; 463 - 464 - init_tag = ntohl(chunk->subh.init_hdr->init_tag); 465 - 466 - /* Verification Tag: 3.3.3 467 - * If the value of the Initiate Tag in a received INIT ACK 468 - * chunk is found to be 0, the receiver MUST treat it as an 469 - * error and close the association by transmitting an ABORT. 470 - */ 471 - if (!init_tag) { 472 - struct sctp_chunk *reply = sctp_make_abort(asoc, chunk, 0); 473 - if (!reply) 474 - goto nomem; 475 - 476 - sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(reply)); 477 - return sctp_stop_t1_and_abort(commands, SCTP_ERROR_INV_PARAM, 478 - ECONNREFUSED, asoc, 479 - chunk->transport); 480 - } 481 464 482 465 /* Verify the INIT chunk before processing it. */ 483 466 err_chunk = NULL; ··· 531 550 SCTP_CHUNK(err_chunk)); 532 551 533 552 return SCTP_DISPOSITION_CONSUME; 534 - 535 - nomem: 536 - return SCTP_DISPOSITION_NOMEM; 537 553 } 538 554 539 555 /* ··· 1531 1553 } 1532 1554 1533 1555 1556 + /* 1557 + * Unexpected INIT-ACK handler. 1558 + * 1559 + * Section 5.2.3 1560 + * If an INIT ACK received by an endpoint in any state other than the 1561 + * COOKIE-WAIT state, the endpoint should discard the INIT ACK chunk. 1562 + * An unexpected INIT ACK usually indicates the processing of an old or 1563 + * duplicated INIT chunk. 1564 + */ 1565 + sctp_disposition_t sctp_sf_do_5_2_3_initack(const struct sctp_endpoint *ep, 1566 + const struct sctp_association *asoc, 1567 + const sctp_subtype_t type, 1568 + void *arg, sctp_cmd_seq_t *commands) 1569 + { 1570 + /* Per the above section, we'll discard the chunk if we have an 1571 + * endpoint. If this is an OOTB INIT-ACK, treat it as such. 
1572 + */ 1573 + if (ep == sctp_sk((sctp_get_ctl_sock()))->ep) 1574 + return sctp_sf_ootb(ep, asoc, type, arg, commands); 1575 + else 1576 + return sctp_sf_discard_chunk(ep, asoc, type, arg, commands); 1577 + } 1534 1578 1535 1579 /* Unexpected COOKIE-ECHO handler for peer restart (Table 2, action 'A') 1536 1580 *
+1 -1
net/sctp/sm_statetable.c
··· 152 152 /* SCTP_STATE_EMPTY */ \ 153 153 TYPE_SCTP_FUNC(sctp_sf_ootb), \ 154 154 /* SCTP_STATE_CLOSED */ \ 155 - TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \ 155 + TYPE_SCTP_FUNC(sctp_sf_do_5_2_3_initack), \ 156 156 /* SCTP_STATE_COOKIE_WAIT */ \ 157 157 TYPE_SCTP_FUNC(sctp_sf_do_5_1C_ack), \ 158 158 /* SCTP_STATE_COOKIE_ECHOED */ \
+3 -5
net/sunrpc/clnt.c
··· 490 490 491 491 /* Set up the call info struct and execute the task */ 492 492 status = task->tk_status; 493 - if (status != 0) { 494 - rpc_release_task(task); 493 + if (status != 0) 495 494 goto out; 496 - } 497 495 atomic_inc(&task->tk_count); 498 496 status = rpc_execute(task); 499 497 if (status == 0) 500 498 status = task->tk_status; 501 - rpc_put_task(task); 502 499 out: 500 + rpc_put_task(task); 503 501 rpc_restore_sigmask(&oldset); 504 502 return status; 505 503 } ··· 535 537 if (status == 0) 536 538 rpc_execute(task); 537 539 else 538 - rpc_release_task(task); 540 + rpc_put_task(task); 539 541 540 542 rpc_restore_sigmask(&oldset); 541 543 return status;
+2 -1
net/sunrpc/sched.c
··· 42 42 static void __rpc_default_timer(struct rpc_task *task); 43 43 static void rpciod_killall(void); 44 44 static void rpc_async_schedule(struct work_struct *); 45 + static void rpc_release_task(struct rpc_task *task); 45 46 46 47 /* 47 48 * RPC tasks sit here while waiting for conditions to improve. ··· 897 896 } 898 897 EXPORT_SYMBOL(rpc_put_task); 899 898 900 - void rpc_release_task(struct rpc_task *task) 899 + static void rpc_release_task(struct rpc_task *task) 901 900 { 902 901 #ifdef RPC_DEBUG 903 902 BUG_ON(task->tk_magic != RPC_TASK_MAGIC_ID);
+16 -16
net/sunrpc/svc.c
··· 26 26 #include <linux/sunrpc/clnt.h> 27 27 28 28 #define RPCDBG_FACILITY RPCDBG_SVCDSP 29 - #define RPC_PARANOIA 1 30 29 31 30 /* 32 31 * Mode for mapping cpus to pools. ··· 871 872 return 0; 872 873 873 874 err_short_len: 874 - #ifdef RPC_PARANOIA 875 - printk("svc: short len %Zd, dropping request\n", argv->iov_len); 876 - #endif 875 + if (net_ratelimit()) 876 + printk("svc: short len %Zd, dropping request\n", argv->iov_len); 877 + 877 878 goto dropit; /* drop request */ 878 879 879 880 err_bad_dir: 880 - #ifdef RPC_PARANOIA 881 - printk("svc: bad direction %d, dropping request\n", dir); 882 - #endif 881 + if (net_ratelimit()) 882 + printk("svc: bad direction %d, dropping request\n", dir); 883 + 883 884 serv->sv_stats->rpcbadfmt++; 884 885 goto dropit; /* drop request */ 885 886 ··· 908 909 goto sendit; 909 910 910 911 err_bad_vers: 911 - #ifdef RPC_PARANOIA 912 - printk("svc: unknown version (%d)\n", vers); 913 - #endif 912 + if (net_ratelimit()) 913 + printk("svc: unknown version (%d for prog %d, %s)\n", 914 + vers, prog, progp->pg_name); 915 + 914 916 serv->sv_stats->rpcbadfmt++; 915 917 svc_putnl(resv, RPC_PROG_MISMATCH); 916 918 svc_putnl(resv, progp->pg_lovers); ··· 919 919 goto sendit; 920 920 921 921 err_bad_proc: 922 - #ifdef RPC_PARANOIA 923 - printk("svc: unknown procedure (%d)\n", proc); 924 - #endif 922 + if (net_ratelimit()) 923 + printk("svc: unknown procedure (%d)\n", proc); 924 + 925 925 serv->sv_stats->rpcbadfmt++; 926 926 svc_putnl(resv, RPC_PROC_UNAVAIL); 927 927 goto sendit; 928 928 929 929 err_garbage: 930 - #ifdef RPC_PARANOIA 931 - printk("svc: failed to decode args\n"); 932 - #endif 930 + if (net_ratelimit()) 931 + printk("svc: failed to decode args\n"); 932 + 933 933 rpc_stat = rpc_garbage_args; 934 934 err_bad: 935 935 serv->sv_stats->rpcbadfmt++;
+10 -4
net/sunrpc/svcsock.c
··· 1062 1062 * bit set in the fragment length header. 1063 1063 * But apparently no known nfs clients send fragmented 1064 1064 * records. */ 1065 - printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx (non-terminal)\n", 1066 - (unsigned long) svsk->sk_reclen); 1065 + if (net_ratelimit()) 1066 + printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx" 1067 + " (non-terminal)\n", 1068 + (unsigned long) svsk->sk_reclen); 1067 1069 goto err_delete; 1068 1070 } 1069 1071 svsk->sk_reclen &= 0x7fffffff; 1070 1072 dprintk("svc: TCP record, %d bytes\n", svsk->sk_reclen); 1071 1073 if (svsk->sk_reclen > serv->sv_max_mesg) { 1072 - printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx (large)\n", 1073 - (unsigned long) svsk->sk_reclen); 1074 + if (net_ratelimit()) 1075 + printk(KERN_NOTICE "RPC: bad TCP reclen 0x%08lx" 1076 + " (large)\n", 1077 + (unsigned long) svsk->sk_reclen); 1074 1078 goto err_delete; 1075 1079 } 1076 1080 } ··· 1282 1278 schedule_timeout_uninterruptible(msecs_to_jiffies(500)); 1283 1279 rqstp->rq_pages[i] = p; 1284 1280 } 1281 + rqstp->rq_pages[i++] = NULL; /* this might be seen in nfs_read_actor */ 1282 + BUG_ON(pages >= RPCSVC_MAXPAGES); 1285 1283 1286 1284 /* Make arg->head point to first page and arg->pages point to rest */ 1287 1285 arg = &rqstp->rq_arg;
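The sunrpc hunks above wrap the bad-record printk()s in net_ratelimit() so a flood of malformed requests cannot spam the log. net_ratelimit() is a token-bucket style check; a toy userspace equivalent, with a hypothetical struct and manual tick counter rather than the kernel's jiffies-based implementation:

```c
/* Toy token-bucket limiter in the spirit of net_ratelimit(): allow at
 * most `burst` messages per `interval` ticks.  Illustrative only. */
struct ratelimit {
    int interval;   /* ticks between refills */
    int burst;      /* messages allowed per interval */
    int tokens;     /* messages left in this interval */
    int last_tick;  /* when tokens were last refilled */
};

/* Returns 1 if a message may be printed now, 0 if it should be
 * suppressed. */
static int ratelimit_ok(struct ratelimit *rl, int now)
{
    if (now - rl->last_tick >= rl->interval) {
        rl->tokens = rl->burst;
        rl->last_tick = now;
    }
    if (rl->tokens > 0) {
        rl->tokens--;
        return 1;
    }
    return 0;
}
```

In the kernel the pattern is simply `if (net_ratelimit()) printk(...)`, as the patched svc.c and svcsock.c error paths show.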
+1
net/x25/x25_dev.c
··· 56 56 sk_add_backlog(sk, skb); 57 57 } 58 58 bh_unlock_sock(sk); 59 + sock_put(sk); 59 60 return queued; 60 61 } 61 62
+5 -11
net/xfrm/xfrm_policy.c
··· 650 650 struct xfrm_policy *pol; 651 651 struct xfrm_policy *delpol; 652 652 struct hlist_head *chain; 653 - struct hlist_node *entry, *newpos, *last; 653 + struct hlist_node *entry, *newpos; 654 654 struct dst_entry *gc_list; 655 655 656 656 write_lock_bh(&xfrm_policy_lock); 657 657 chain = policy_hash_bysel(&policy->selector, policy->family, dir); 658 658 delpol = NULL; 659 659 newpos = NULL; 660 - last = NULL; 661 660 hlist_for_each_entry(pol, entry, chain, bydst) { 662 - if (!delpol && 663 - pol->type == policy->type && 661 + if (pol->type == policy->type && 664 662 !selector_cmp(&pol->selector, &policy->selector) && 665 - xfrm_sec_ctx_match(pol->security, policy->security)) { 663 + xfrm_sec_ctx_match(pol->security, policy->security) && 664 + !WARN_ON(delpol)) { 666 665 if (excl) { 667 666 write_unlock_bh(&xfrm_policy_lock); 668 667 return -EEXIST; ··· 670 671 if (policy->priority > pol->priority) 671 672 continue; 672 673 } else if (policy->priority >= pol->priority) { 673 - last = &pol->bydst; 674 + newpos = &pol->bydst; 674 675 continue; 675 676 } 676 - if (!newpos) 677 - newpos = &pol->bydst; 678 677 if (delpol) 679 678 break; 680 - last = &pol->bydst; 681 679 } 682 - if (!newpos) 683 - newpos = last; 684 680 if (newpos) 685 681 hlist_add_after(newpos, &policy->bydst); 686 682 else
+1 -1
scripts/Makefile.headersinst
··· 109 109 quiet_cmd_gen = GEN $(patsubst $(INSTALL_HDR_PATH)/%,%,$@) 110 110 cmd_gen = \ 111 111 FNAME=$(patsubst $(INSTALL_HDR_PATH)/$(_dst)/%,%,$@) \ 112 - STUBDEF=__ASM_STUB_`echo $$FNAME | tr a-z. A-Z_`; \ 112 + STUBDEF=__ASM_STUB_`echo $$FNAME | tr a-z.- A-Z__`; \ 113 113 (echo "/* File autogenerated by 'make headers_install' */" ; \ 114 114 echo "\#ifndef $$STUBDEF" ; \ 115 115 echo "\#define $$STUBDEF" ; \
+9
security/selinux/include/xfrm.h
··· 37 37 int selinux_xfrm_postroute_last(u32 isec_sid, struct sk_buff *skb, 38 38 struct avc_audit_data *ad, u8 proto); 39 39 int selinux_xfrm_decode_session(struct sk_buff *skb, u32 *sid, int ckall); 40 + 41 + static inline void selinux_xfrm_notify_policyload(void) 42 + { 43 + atomic_inc(&flow_cache_genid); 44 + } 40 45 #else 41 46 static inline int selinux_xfrm_sock_rcv_skb(u32 isec_sid, struct sk_buff *skb, 42 47 struct avc_audit_data *ad) ··· 59 54 { 60 55 *sid = SECSID_NULL; 61 56 return 0; 57 + } 58 + 59 + static inline void selinux_xfrm_notify_policyload(void) 60 + { 62 61 } 63 62 #endif 64 63
+3
security/selinux/ss/services.c
··· 1299 1299 avc_ss_reset(seqno); 1300 1300 selnl_notify_policyload(seqno); 1301 1301 selinux_netlbl_cache_invalidate(); 1302 + selinux_xfrm_notify_policyload(); 1302 1303 return 0; 1303 1304 } 1304 1305 ··· 1355 1354 avc_ss_reset(seqno); 1356 1355 selnl_notify_policyload(seqno); 1357 1356 selinux_netlbl_cache_invalidate(); 1357 + selinux_xfrm_notify_policyload(); 1358 1358 1359 1359 return 0; 1360 1360 ··· 1855 1853 if (!rc) { 1856 1854 avc_ss_reset(seqno); 1857 1855 selnl_notify_policyload(seqno); 1856 + selinux_xfrm_notify_policyload(); 1858 1857 } 1859 1858 return rc; 1860 1859 }
+11 -7
sound/core/init.c
··· 361 361 snd_printk(KERN_WARNING "unable to free card info\n"); 362 362 /* Not fatal error */ 363 363 } 364 - if (card->dev) 365 - device_unregister(card->dev); 364 + #ifndef CONFIG_SYSFS_DEPRECATED 365 + if (card->card_dev) 366 + device_unregister(card->card_dev); 367 + #endif 366 368 kfree(card); 367 369 return 0; 368 370 } ··· 499 497 int err; 500 498 501 499 snd_assert(card != NULL, return -EINVAL); 502 - if (!card->dev) { 503 - card->dev = device_create(sound_class, card->parent, 0, 504 - "card%i", card->number); 505 - if (IS_ERR(card->dev)) 506 - card->dev = NULL; 500 + #ifndef CONFIG_SYSFS_DEPRECATED 501 + if (!card->card_dev) { 502 + card->card_dev = device_create(sound_class, card->dev, 0, 503 + "card%i", card->number); 504 + if (IS_ERR(card->card_dev)) 505 + card->card_dev = NULL; 507 506 } 507 + #endif 508 508 if ((err = snd_device_register_all(card)) < 0) 509 509 return err; 510 510 mutex_lock(&snd_card_mutex);
+1 -3
sound/core/sound.c
··· 238 238 { 239 239 int minor; 240 240 struct snd_minor *preg; 241 - struct device *device = NULL; 241 + struct device *device = snd_card_get_device_link(card); 242 242 243 243 snd_assert(name, return -EINVAL); 244 244 preg = kmalloc(sizeof *preg, GFP_KERNEL); ··· 263 263 return minor; 264 264 } 265 265 snd_minors[minor] = preg; 266 - if (card) 267 - device = card->dev; 268 266 preg->dev = device_create(sound_class, device, MKDEV(major, minor), 269 267 "%s", name); 270 268 if (preg->dev)
+1 -3
sound/core/sound_oss.c
··· 106 106 int cidx = SNDRV_MINOR_OSS_CARD(minor); 107 107 int track2 = -1; 108 108 int register1 = -1, register2 = -1; 109 - struct device *carddev = NULL; 109 + struct device *carddev = snd_card_get_device_link(card); 110 110 111 111 if (card && card->number >= 8) 112 112 return 0; /* ignore silently */ ··· 134 134 track2 = SNDRV_MINOR_OSS(cidx, SNDRV_MINOR_OSS_DMMIDI1); 135 135 break; 136 136 } 137 - if (card) 138 - carddev = card->dev; 139 137 register1 = register_sound_special_device(f_ops, minor, carddev); 140 138 if (register1 != minor) 141 139 goto __end;
+1 -1
sound/usb/usx2y/usbusx2yaudio.c
··· 322 322 usX2Y_error_urb_status(usX2Y, subs, urb); 323 323 return; 324 324 } 325 - if (likely(urb->start_frame == usX2Y->wait_iso_frame)) 325 + if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 326 326 subs->completed_urb = urb; 327 327 else { 328 328 usX2Y_error_sequence(usX2Y, subs, urb);
+1 -1
sound/usb/usx2y/usx2yhwdeppcm.c
··· 243 243 usX2Y_error_urb_status(usX2Y, subs, urb); 244 244 return; 245 245 } 246 - if (likely(urb->start_frame == usX2Y->wait_iso_frame)) 246 + if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 247 247 subs->completed_urb = urb; 248 248 else { 249 249 usX2Y_error_sequence(usX2Y, subs, urb);