Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial

* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial: (48 commits)
Documentation: fix minor kernel-doc warnings
BUG_ON() Conversion in drivers/net/
BUG_ON() Conversion in drivers/s390/net/lcs.c
BUG_ON() Conversion in mm/slab.c
BUG_ON() Conversion in mm/highmem.c
BUG_ON() Conversion in kernel/signal.c
BUG_ON() Conversion in kernel/signal.c
BUG_ON() Conversion in kernel/ptrace.c
BUG_ON() Conversion in ipc/shm.c
BUG_ON() Conversion in fs/freevxfs/
BUG_ON() Conversion in fs/udf/
BUG_ON() Conversion in fs/sysv/
BUG_ON() Conversion in fs/inode.c
BUG_ON() Conversion in fs/fcntl.c
BUG_ON() Conversion in fs/dquot.c
BUG_ON() Conversion in md/raid10.c
BUG_ON() Conversion in md/raid6main.c
BUG_ON() Conversion in md/raid5.c
Fix minor documentation typo
BFP->BPF in Documentation/networking/tuntap.txt
...
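
Most of these patches apply the same mechanical transformation: an open-coded
check of the form "if (cond) BUG();" becomes the equivalent BUG_ON() macro.
A minimal sketch of the idiom (the pointer name here is illustrative, not
taken from any particular patched file):

	/* before: open-coded check */
	if (!dev)
		BUG();

	/* after: equivalent one-liner; more compact, and it lets an
	 * architecture implement the test with a conditional trap
	 * instruction where one is available */
	BUG_ON(!dev);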

+260 -367
+1 -1
Documentation/DocBook/Makefile
··· 2 # This makefile is used to generate the kernel documentation, 3 # primarily based on in-line comments in various source files. 4 # See Documentation/kernel-doc-nano-HOWTO.txt for instructions on how 5 - # to ducument the SRC - and how to read it. 6 # To add a new book the only step required is to add the book to the 7 # list of DOCBOOKS. 8
··· 2 # This makefile is used to generate the kernel documentation, 3 # primarily based on in-line comments in various source files. 4 # See Documentation/kernel-doc-nano-HOWTO.txt for instructions on how 5 + # to document the SRC - and how to read it. 6 # To add a new book the only step required is to add the book to the 7 # list of DOCBOOKS. 8
-1
Documentation/DocBook/kernel-api.tmpl
··· 322 <chapter id="sysfs"> 323 <title>The Filesystem for Exporting Kernel Objects</title> 324 !Efs/sysfs/file.c 325 - !Efs/sysfs/dir.c 326 !Efs/sysfs/symlink.c 327 !Efs/sysfs/bin.c 328 </chapter>
··· 322 <chapter id="sysfs"> 323 <title>The Filesystem for Exporting Kernel Objects</title> 324 !Efs/sysfs/file.c 325 !Efs/sysfs/symlink.c 326 !Efs/sysfs/bin.c 327 </chapter>
+1 -1
Documentation/acpi-hotkey.txt
··· 30 echo "event_num:event_type:event_argument" > 31 /proc/acpi/hotkey/action. 32 The result of the execution of this aml method is 33 - attached to /proc/acpi/hotkey/poll_method, which is dnyamically 34 created. Please use command "cat /proc/acpi/hotkey/polling_method" 35 to retrieve it. 36
··· 30 echo "event_num:event_type:event_argument" > 31 /proc/acpi/hotkey/action. 32 The result of the execution of this aml method is 33 + attached to /proc/acpi/hotkey/poll_method, which is dynamically 34 created. Please use command "cat /proc/acpi/hotkey/polling_method" 35 to retrieve it. 36
+108 -80
Documentation/fujitsu/frv/kernel-ABI.txt
··· 1 - ================================= 2 - INTERNAL KERNEL ABI FOR FR-V ARCH 3 - ================================= 4 5 - The internal FRV kernel ABI is not quite the same as the userspace ABI. A number of the registers 6 - are used for special purposed, and the ABI is not consistent between modules vs core, and MMU vs 7 - no-MMU. 8 9 - This partly stems from the fact that FRV CPUs do not have a separate supervisor stack pointer, and 10 - most of them do not have any scratch registers, thus requiring at least one general purpose 11 - register to be clobbered in such an event. Also, within the kernel core, it is possible to simply 12 - jump or call directly between functions using a relative offset. This cannot be extended to modules 13 - for the displacement is likely to be too far. Thus in modules the address of a function to call 14 - must be calculated in a register and then used, requiring two extra instructions. 15 16 This document has the following sections: 17 ··· 41 CPU OPERATING MODES 42 =================== 43 44 - The FR-V CPU has three basic operating modes. In order of increasing capability: 45 46 (1) User mode. 47 ··· 50 51 (2) Kernel mode. 52 53 - Normal kernel mode. There are many additional control registers available that may be 54 - accessed in this mode, in addition to all the stuff available to user mode. This has two 55 - submodes: 56 57 (a) Exceptions enabled (PSR.T == 1). 58 59 - Exceptions will invoke the appropriate normal kernel mode handler. On entry to the 60 - handler, the PSR.T bit will be cleared. 61 62 (b) Exceptions disabled (PSR.T == 0). 63 64 - No exceptions or interrupts may happen. Any mandatory exceptions will cause the CPU to 65 - halt unless the CPU is told to jump into debug mode instead. 66 67 (3) Debug mode. 68 69 - No exceptions may happen in this mode. Memory protection and management exceptions will be 70 - flagged for later consideration, but the exception handler won't be invoked. Debugging traps 71 - such as hardware breakpoints and watchpoints will be ignored. This mode is entered only by 72 - debugging events obtained from the other two modes. 73 74 - All kernel mode registers may be accessed, plus a few extra debugging specific registers. 75 76 77 ================================= 78 INTERNAL KERNEL-MODE REGISTER ABI 79 ================================= 80 81 - There are a number of permanent register assignments that are set up by entry.S in the exception 82 - prologue. Note that there is a complete set of exception prologues for each of user->kernel 83 - transition and kernel->kernel transition. There are also user->debug and kernel->debug mode 84 - transition prologues. 85 86 87 REGISTER FLAVOUR USE 88 - =============== ======= ==================================================== 89 GR1 Supervisor stack pointer 90 GR15 Current thread info pointer 91 GR16 GP-Rel base register for small data ··· 99 GR31 NOMMU Destroyed by debug mode entry 100 GR31 MMU Destroyed by TLB miss kernel mode entry 101 CCR.ICC2 Virtual interrupt disablement tracking 102 - CCCR.CC3 Cleared by exception prologue (atomic op emulation) 103 SCR0 MMU See mmu-layout.txt. 104 SCR1 MMU See mmu-layout.txt. 105 - SCR2 MMU Save for EAR0 (destroyed by icache insns in debug mode) 106 SCR3 MMU Save for GR31 during debug exceptions 107 DAMR/IAMR NOMMU Fixed memory protection layout. 108 DAMR/IAMR MMU See mmu-layout.txt. 
··· 113 Certain registers are also used or modified across function calls: 114 115 REGISTER CALL RETURN 116 - =============== =============================== =============================== 117 GR0 Fixed Zero - 118 GR2 Function call frame pointer 119 GR3 Special Preserved 120 GR3-GR7 - Clobbered 121 - GR8 Function call arg #1 Return value (or clobbered) 122 - GR9 Function call arg #2 Return value MSW (or clobbered) 123 GR10-GR13 Function call arg #3-#6 Clobbered 124 GR14 - Clobbered 125 GR15-GR16 Special Preserved 126 GR17-GR27 - Preserved 127 - GR28-GR31 Special Only accessed explicitly 128 LR Return address after CALL Clobbered 129 CCR/CCCR - Mostly Clobbered 130 ··· 136 INTERNAL DEBUG-MODE REGISTER ABI 137 ================================ 138 139 - This is the same as the kernel-mode register ABI for functions calls. The difference is that in 140 - debug-mode there's a different stack and a different exception frame. Almost all the global 141 - registers from kernel-mode (including the stack pointer) may be changed. 142 143 REGISTER FLAVOUR USE 144 - =============== ======= ==================================================== 145 GR1 Debug stack pointer 146 GR16 GP-Rel base register for small data 147 - GR31 Current debug exception frame pointer (__debug_frame) 148 SCR3 MMU Saved value of GR31 149 150 151 - Note that debug mode is able to interfere with the kernel's emulated atomic ops, so it must be 152 - exceedingly careful not to do any that would interact with the main kernel in this regard. Hence 153 - the debug mode code (gdbstub) is almost completely self-contained. The only external code used is 154 - the sprintf family of functions. 155 156 - Futhermore, break.S is so complicated because single-step mode does not switch off on entry to an 157 - exception. That means unless manually disabled, single-stepping will blithely go on stepping into 158 - things like interrupts. See gdbstub.txt for more information. 159 160 161 ========================== 162 VIRTUAL INTERRUPT HANDLING 163 ========================== 164 165 - Because accesses to the PSR is so slow, and to disable interrupts we have to access it twice (once 166 - to read and once to write), we don't actually disable interrupts at all if we don't have to. What 167 - we do instead is use the ICC2 condition code flags to note virtual disablement, such that if we 168 - then do take an interrupt, we note the flag, really disable interrupts, set another flag and resume 169 - execution at the point the interrupt happened. Setting condition flags as a side effect of an 170 - arithmetic or logical instruction is really fast. This use of the ICC2 only occurs within the 171 kernel - it does not affect userspace. 172 173 The flags we use are: 174 175 (*) CCR.ICC2.Z [Zero flag] 176 177 - Set to virtually disable interrupts, clear when interrupts are virtually enabled. Can be 178 - modified by logical instructions without affecting the Carry flag. 179 180 (*) CCR.ICC2.C [Carry flag] 181 ··· 195 196 ICC2.Z is 0, ICC2.C is 1. 197 198 - (2) An interrupt occurs. The exception prologue examines ICC2.Z and determines that nothing needs 199 - doing. This is done simply with an unlikely BEQ instruction. 200 201 (3) The interrupts are disabled (local_irq_disable) 202 ··· 207 208 ICC2.Z would be set to 0. 209 210 - A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would be used to trap if 211 - interrupts were now virtually enabled, but physically disabled - which they're not, so the 212 - trap isn't taken. 
The kernel would then be back to state (1). 213 214 - (5) An interrupt occurs. The exception prologue examines ICC2.Z and determines that the interrupt 215 - shouldn't actually have happened. It jumps aside, and there disabled interrupts by setting 216 - PSR.PIL to 14 and then it clears ICC2.C. 217 218 (6) If interrupts were then saved and disabled again (local_irq_save): 219 220 - ICC2.Z would be shifted into the save variable and masked off (giving a 1). 221 222 - ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be unaffected (ie: 0). 223 224 (7) If interrupts were then restored from state (6) (local_irq_restore): 225 226 - ICC2.Z would be set to indicate the result of XOR'ing the saved value (ie: 1) with 1, which 227 - gives a result of 0 - thus leaving ICC2.Z set. 228 229 ICC2.C would remain unaffected (ie: 0). 230 231 - A TIHI #2 instruction would be used to again assay the current state, but this would do 232 - nothing as Z==1. 233 234 (8) If interrupts were then enabled (local_irq_enable): 235 236 - ICC2.Z would be cleared. ICC2.C would be left unaffected. Both flags would now be 0. 237 238 - A TIHI #2 instruction again issued to assay the current state would then trap as both Z==0 239 - [interrupts virtually enabled] and C==0 [interrupts really disabled] would then be true. 240 241 - (9) The trap #2 handler would simply enable hardware interrupts (set PSR.PIL to 0), set ICC2.C to 242 - 1 and return. 243 244 (10) Immediately upon returning, the pending interrupt would be taken. 245 246 - (11) The interrupt handler would take the path of actually processing the interrupt (ICC2.Z is 247 - clear, BEQ fails as per step (2)). 248 249 - (12) The interrupt handler would then set ICC2.C to 1 since hardware interrupts are definitely 250 - enabled - or else the kernel wouldn't be here. 251 252 (13) On return from the interrupt handler, things would be back to state (1). 253 254 - This trap (#2) is only available in kernel mode. In user mode it will result in SIGILL.
··· 1 + ================================= 2 + INTERNAL KERNEL ABI FOR FR-V ARCH 3 + ================================= 4 5 + The internal FRV kernel ABI is not quite the same as the userspace ABI. A 6 + number of the registers are used for special purposes, and the ABI is not 7 + consistent between modules vs core, and MMU vs no-MMU. 8 9 + This partly stems from the fact that FRV CPUs do not have a separate 10 + supervisor stack pointer, and most of them do not have any scratch 11 + registers, thus requiring at least one general purpose register to be 12 + clobbered in such an event. Also, within the kernel core, it is possible to 13 + simply jump or call directly between functions using a relative offset. 14 + This cannot be extended to modules for the displacement is likely to be too 15 + far. Thus in modules the address of a function to call must be calculated 16 + in a register and then used, requiring two extra instructions. 17 18 This document has the following sections: 19 ··· 39 CPU OPERATING MODES 40 =================== 41 42 + The FR-V CPU has three basic operating modes. In order of increasing 43 + capability: 44 45 (1) User mode. 46 ··· 47 48 (2) Kernel mode. 49 50 + Normal kernel mode. There are many additional control registers 51 + available that may be accessed in this mode, in addition to all the 52 + stuff available to user mode. This has two submodes: 53 54 (a) Exceptions enabled (PSR.T == 1). 55 56 + Exceptions will invoke the appropriate normal kernel mode 57 + handler. On entry to the handler, the PSR.T bit will be cleared. 58 59 (b) Exceptions disabled (PSR.T == 0). 60 61 + No exceptions or interrupts may happen. Any mandatory exceptions 62 + will cause the CPU to halt unless the CPU is told to jump into 63 + debug mode instead. 64 65 (3) Debug mode. 66 67 + No exceptions may happen in this mode. Memory protection and 68 + management exceptions will be flagged for later consideration, but 69 + the exception handler won't be invoked. Debugging traps such as 70 + hardware breakpoints and watchpoints will be ignored. This mode is 71 + entered only by debugging events obtained from the other two modes. 72 73 + All kernel mode registers may be accessed, plus a few extra debugging 74 + specific registers. 75 76 77 ================================= 78 INTERNAL KERNEL-MODE REGISTER ABI 79 ================================= 80 81 + There are a number of permanent register assignments that are set up by 82 + entry.S in the exception prologue. Note that there is a complete set of 83 + exception prologues for each of user->kernel transition and kernel->kernel 84 + transition. There are also user->debug and kernel->debug mode transition 85 + prologues. 86 87 88 REGISTER FLAVOUR USE 89 + =============== ======= ============================================== 90 GR1 Supervisor stack pointer 91 GR15 Current thread info pointer 92 GR16 GP-Rel base register for small data ··· 92 GR31 NOMMU Destroyed by debug mode entry 93 GR31 MMU Destroyed by TLB miss kernel mode entry 94 CCR.ICC2 Virtual interrupt disablement tracking 95 + CCCR.CC3 Cleared by exception prologue 96 + (atomic op emulation) 97 SCR0 MMU See mmu-layout.txt. 98 SCR1 MMU See mmu-layout.txt. 99 + SCR2 MMU Save for EAR0 (destroyed by icache insns 100 + in debug mode) 101 SCR3 MMU Save for GR31 during debug exceptions 102 DAMR/IAMR NOMMU Fixed memory protection layout. 103 DAMR/IAMR MMU See mmu-layout.txt. 
··· 104 Certain registers are also used or modified across function calls: 105 106 REGISTER CALL RETURN 107 + =============== =============================== ====================== 108 GR0 Fixed Zero - 109 GR2 Function call frame pointer 110 GR3 Special Preserved 111 GR3-GR7 - Clobbered 112 + GR8 Function call arg #1 Return value 113 + (or clobbered) 114 + GR9 Function call arg #2 Return value MSW 115 + (or clobbered) 116 GR10-GR13 Function call arg #3-#6 Clobbered 117 GR14 - Clobbered 118 GR15-GR16 Special Preserved 119 GR17-GR27 - Preserved 120 + GR28-GR31 Special Only accessed 121 + explicitly 122 LR Return address after CALL Clobbered 123 CCR/CCCR - Mostly Clobbered 124 ··· 124 INTERNAL DEBUG-MODE REGISTER ABI 125 ================================ 126 127 + This is the same as the kernel-mode register ABI for function calls. The 128 + difference is that in debug-mode there's a different stack and a different 129 + exception frame. Almost all the global registers from kernel-mode 130 + (including the stack pointer) may be changed. 131 132 REGISTER FLAVOUR USE 133 + =============== ======= ============================================== 134 GR1 Debug stack pointer 135 GR16 GP-Rel base register for small data 136 + GR31 Current debug exception frame pointer 137 + (__debug_frame) 138 SCR3 MMU Saved value of GR31 139 140 141 + Note that debug mode is able to interfere with the kernel's emulated atomic 142 + ops, so it must be exceedingly careful not to do any that would interact 143 + with the main kernel in this regard. Hence the debug mode code (gdbstub) is 144 + almost completely self-contained. The only external code used is the 145 + sprintf family of functions. 146 147 + Furthermore, break.S is so complicated because single-step mode does not 148 + switch off on entry to an exception. That means unless manually disabled, 149 + single-stepping will blithely go on stepping into things like interrupts. 150 + See gdbstub.txt for more information. 151 152 153 ========================== 154 VIRTUAL INTERRUPT HANDLING 155 ========================== 156 157 + Because accesses to the PSR are so slow, and to disable interrupts we have 158 + to access it twice (once to read and once to write), we don't actually 159 + disable interrupts at all if we don't have to. What we do instead is use 160 + the ICC2 condition code flags to note virtual disablement, such that if we 161 + then do take an interrupt, we note the flag, really disable interrupts, set 162 + another flag and resume execution at the point the interrupt happened. 163 + Setting condition flags as a side effect of an arithmetic or logical 164 + instruction is really fast. This use of the ICC2 only occurs within the 165 kernel - it does not affect userspace. 166 167 The flags we use are: 168 169 (*) CCR.ICC2.Z [Zero flag] 170 171 + Set to virtually disable interrupts, clear when interrupts are 172 + virtually enabled. Can be modified by logical instructions without 173 + affecting the Carry flag. 174 175 (*) CCR.ICC2.C [Carry flag] 176 ··· 176 177 ICC2.Z is 0, ICC2.C is 1. 178 179 + (2) An interrupt occurs. The exception prologue examines ICC2.Z and 180 + determines that nothing needs doing. This is done simply with an 181 + unlikely BEQ instruction. 182 183 (3) The interrupts are disabled (local_irq_disable) 184 ··· 187 188 ICC2.Z would be set to 0. 
189 190 + A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would 191 + be used to trap if interrupts were now virtually enabled, but 192 + physically disabled - which they're not, so the trap isn't taken. The 193 + kernel would then be back to state (1). 194 195 + (5) An interrupt occurs. The exception prologue examines ICC2.Z and 196 + determines that the interrupt shouldn't actually have happened. It 197 + jumps aside, and there disables interrupts by setting PSR.PIL to 14 198 + and then it clears ICC2.C. 199 200 (6) If interrupts were then saved and disabled again (local_irq_save): 201 202 + ICC2.Z would be shifted into the save variable and masked off 203 + (giving a 1). 204 205 + ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be 206 + unaffected (ie: 0). 207 208 (7) If interrupts were then restored from state (6) (local_irq_restore): 209 210 + ICC2.Z would be set to indicate the result of XOR'ing the saved 211 + value (ie: 1) with 1, which gives a result of 0 - thus leaving 212 + ICC2.Z set. 213 214 ICC2.C would remain unaffected (ie: 0). 215 216 + A TIHI #2 instruction would be used to again assay the current state, 217 + but this would do nothing as Z==1. 218 219 (8) If interrupts were then enabled (local_irq_enable): 220 221 + ICC2.Z would be cleared. ICC2.C would be left unaffected. Both 222 + flags would now be 0. 223 224 + A TIHI #2 instruction again issued to assay the current state would 225 + then trap as both Z==0 [interrupts virtually enabled] and C==0 226 + [interrupts really disabled] would then be true. 227 228 + (9) The trap #2 handler would simply enable hardware interrupts 229 + (set PSR.PIL to 0), set ICC2.C to 1 and return. 230 231 (10) Immediately upon returning, the pending interrupt would be taken. 232 233 + (11) The interrupt handler would take the path of actually processing the 234 + interrupt (ICC2.Z is clear, BEQ fails as per step (2)). 235 236 + (12) The interrupt handler would then set ICC2.C to 1 since hardware 237 + interrupts are definitely enabled - or else the kernel wouldn't be here. 238 239 (13) On return from the interrupt handler, things would be back to state (1). 240 241 + This trap (#2) is only available in kernel mode. In user mode it will 242 + result in SIGILL.
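
Illustrative aside: given the flag semantics described above (logical
instructions set ICC2.Z from their result without touching ICC2.C, and
TIHI #2 traps only when Z==0 && C==0), the virtual disable/enable
operations could be written as C inline assembly along these lines. This
is a sketch of the mechanism for illustration, not a quote of the
arch/frv sources:

	static inline void virt_irq_disable(void)
	{
		/* gr0 always reads as zero, so the AND result is 0 and Z
		 * is set (virtually disabled); C is left untouched */
		asm volatile("andcc gr0,gr0,gr0,icc2"
			     : : : "memory", "icc2");
	}

	static inline void virt_irq_enable(void)
	{
		/* 0|1 = 1 clears Z (virtually enabled); then trap #2 if
		 * Z==0 && C==0, ie: an interrupt arrived while virtually
		 * disabled and hardware interrupts are still physically
		 * off, so the trap handler must re-enable them */
		asm volatile("oricc gr0,#1,gr0,icc2\n"
			     "tihi icc2,gr0,#2"
			     : : : "memory", "icc2");
	}
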
+14 -20
Documentation/kernel-parameters.txt
··· 1 - February 2003 Kernel Parameters v2.5.59 2 ~~~~~~~~~~~~~~~~~ 3 4 The following is a consolidated list of the kernel parameters as implemented ··· 17 18 usbcore.blinkenlights=1 19 20 - The text in square brackets at the beginning of the description states the 21 - restrictions on the kernel for the said kernel parameter to be valid. The 22 - restrictions referred to are that the relevant option is valid if: 23 24 ACPI ACPI support is enabled. 25 ALSA ALSA sound support is enabled. ··· 1054 noltlbs [PPC] Do not use large page/tlb entries for kernel 1055 lowmem mapping on PPC40x. 1056 1057 - nomce [IA-32] Machine Check Exception 1058 - 1059 nomca [IA-64] Disable machine check abort handling 1060 1061 noresidual [PPC] Don't use residual data on PReP machines. 1062 ··· 1690 1691 1692 ______________________________________________________________________ 1693 - Changelog: 1694 - 1695 - 2000-06-?? Mr. Unknown 1696 - The last known update (for 2.4.0) - the changelog was not kept before. 1697 - 1698 - 2002-11-24 Petr Baudis <pasky@ucw.cz> 1699 - Randy Dunlap <randy.dunlap@verizon.net> 1700 - Update for 2.5.49, description for most of the options introduced, 1701 - references to other documentation (C files, READMEs, ..), added S390, 1702 - PPC, SPARC, MTD, ALSA and OSS category. Minor corrections and 1703 - reformatting. 1704 - 1705 - 2005-10-19 Randy Dunlap <rdunlap@xenotime.net> 1706 - Lots of typos, whitespace, some reformatting. 1707 1708 TODO: 1709
··· 1 + Kernel Parameters 2 ~~~~~~~~~~~~~~~~~ 3 4 The following is a consolidated list of the kernel parameters as implemented ··· 17 18 usbcore.blinkenlights=1 19 20 + This document may not be entirely up to date and comprehensive. The command 21 + "modinfo -p ${modulename}" shows a current list of all parameters of a loadable 22 + module. Loadable modules, after being loaded into the running kernel, also 23 + reveal their parameters in /sys/module/${modulename}/parameters/. Some of these 24 + parameters may be changed at runtime by the command 25 + "echo -n ${value} > /sys/module/${modulename}/parameters/${parm}". 26 + 27 + The parameters listed below are only valid if certain kernel build options were 28 + enabled and if respective hardware is present. The text in square brackets at 29 + the beginning of each description states the restrictions within which a 30 + parameter is applicable: 31 32 ACPI ACPI support is enabled. 33 ALSA ALSA sound support is enabled. ··· 1046 noltlbs [PPC] Do not use large page/tlb entries for kernel 1047 lowmem mapping on PPC40x. 1048 1049 nomca [IA-64] Disable machine check abort handling 1050 + 1051 + nomce [IA-32] Machine Check Exception 1052 1053 noresidual [PPC] Don't use residual data on PReP machines. 1054 ··· 1682 1683 1684 ______________________________________________________________________ 1685 1686 TODO: 1687
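
For context, each entry under /sys/module/${modulename}/parameters/ comes
from a module_param() declaration in the module source, and "modinfo -p"
prints the matching MODULE_PARM_DESC() strings. A minimal hypothetical
declaration (the parameter name and description are made up for
illustration):

	#include <linux/module.h>
	#include <linux/moduleparam.h>

	static int debug_level;
	/* mode 0644 makes the parameter world-readable and root-writable
	 * through sysfs, which is what permits the runtime
	 * "echo -n ${value} > ..." usage described above */
	module_param(debug_level, int, 0644);
	MODULE_PARM_DESC(debug_level, "verbosity of debug output");
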
+1 -1
Documentation/networking/packet_mmap.txt
··· 254 255 <block number> * <block size> / <frame size> 256 257 - Suposse the following parameters, which apply for 2.6 kernel and an 258 i386 architecture: 259 260 <size-max> = 131072 bytes
··· 254 255 <block number> * <block size> / <frame size> 256 257 + Suppose the following parameters, which apply for a 2.6 kernel and an 258 i386 architecture: 259 260 <size-max> = 131072 bytes
+1 -1
Documentation/networking/tuntap.txt
··· 138 ethernet frames when using tap. 139 140 5. What is the difference between BPF and TUN/TAP driver? 141 - BFP is an advanced packet filter. It can be attached to existing 142 network interface. It does not provide a virtual network interface. 143 A TUN/TAP driver does provide a virtual network interface and it is possible 144 to attach BPF to this interface.
··· 138 ethernet frames when using tap. 139 140 5. What is the difference between BPF and TUN/TAP driver? 141 + BPF is an advanced packet filter. It can be attached to an existing 142 network interface. It does not provide a virtual network interface. 143 A TUN/TAP driver does provide a virtual network interface and it is possible 144 to attach BPF to this interface.
+1 -1
arch/i386/kernel/crash.c
··· 69 * for the data I pass, and I need tags 70 * on the data to indicate what information I have 71 * squirrelled away. ELF notes happen to provide 72 - * all of that that no need to invent something new. 73 */ 74 buf = (u32*)per_cpu_ptr(crash_notes, cpu); 75 if (!buf)
··· 69 * for the data I pass, and I need tags 70 * on the data to indicate what information I have 71 * squirrelled away. ELF notes happen to provide 72 + * all of that, so there is no need to invent something new. 73 */ 74 buf = (u32*)per_cpu_ptr(crash_notes, cpu); 75 if (!buf)
+1 -1
block/ll_rw_blk.c
··· 1740 1741 /** 1742 * blk_cleanup_queue: - release a &request_queue_t when it is no longer needed 1743 - * @q: the request queue to be released 1744 * 1745 * Description: 1746 * blk_cleanup_queue is the pair to blk_init_queue() or
··· 1740 1741 /** 1742 * blk_cleanup_queue: - release a &request_queue_t when it is no longer needed 1743 + * @kobj: the kobj belonging to the request queue to be released 1744 * 1745 * Description: 1746 * blk_cleanup_queue is the pair to blk_init_queue() or
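
Kernel-doc warnings like the one fixed here arise when the @-lines in a
/** ... */ comment do not match the function's actual parameter list. For
reference, the general shape of a kernel-doc comment as described in
Documentation/kernel-doc-nano-HOWTO.txt (the function below is made up
for illustration):

	/**
	 * example_release - release a previously acquired example object
	 * @obj: the object being released
	 *
	 * Description: frees @obj and any resources attached to it.
	 */
	void example_release(struct example *obj);
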
+1 -2
drivers/md/dm-target.c
··· 78 if (--ti->use == 0) 79 module_put(ti->tt.module); 80 81 - if (ti->use < 0) 82 - BUG(); 83 up_read(&_lock); 84 85 return;
··· 78 if (--ti->use == 0) 79 module_put(ti->tt.module); 80 81 + BUG_ON(ti->use < 0); 82 up_read(&_lock); 83 84 return;
+2 -4
drivers/md/raid1.c
··· 1558 int buffs; 1559 1560 buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE; 1561 - if (conf->r1buf_pool) 1562 - BUG(); 1563 conf->r1buf_pool = mempool_create(buffs, r1buf_pool_alloc, r1buf_pool_free, 1564 conf->poolinfo); 1565 if (!conf->r1buf_pool) ··· 1731 !conf->fullsync && 1732 !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) 1733 break; 1734 - if (sync_blocks < (PAGE_SIZE>>9)) 1735 - BUG(); 1736 if (len > (sync_blocks<<9)) 1737 len = sync_blocks<<9; 1738 }
··· 1558 int buffs; 1559 1560 buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE; 1561 + BUG_ON(conf->r1buf_pool); 1562 conf->r1buf_pool = mempool_create(buffs, r1buf_pool_alloc, r1buf_pool_free, 1563 conf->poolinfo); 1564 if (!conf->r1buf_pool) ··· 1732 !conf->fullsync && 1733 !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) 1734 break; 1735 + BUG_ON(sync_blocks < (PAGE_SIZE>>9)); 1736 if (len > (sync_blocks<<9)) 1737 len = sync_blocks<<9; 1738 }
+2 -4
drivers/md/raid10.c
··· 1117 for (i=0; i<conf->copies; i++) 1118 if (r10_bio->devs[i].bio == bio) 1119 break; 1120 - if (i == conf->copies) 1121 - BUG(); 1122 update_head_pos(i, r10_bio); 1123 d = r10_bio->devs[i].devnum; 1124 ··· 1517 int buffs; 1518 1519 buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE; 1520 - if (conf->r10buf_pool) 1521 - BUG(); 1522 conf->r10buf_pool = mempool_create(buffs, r10buf_pool_alloc, r10buf_pool_free, conf); 1523 if (!conf->r10buf_pool) 1524 return -ENOMEM;
··· 1117 for (i=0; i<conf->copies; i++) 1118 if (r10_bio->devs[i].bio == bio) 1119 break; 1120 + BUG_ON(i == conf->copies); 1121 update_head_pos(i, r10_bio); 1122 d = r10_bio->devs[i].devnum; 1123 ··· 1518 int buffs; 1519 1520 buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE; 1521 + BUG_ON(conf->r10buf_pool); 1522 conf->r10buf_pool = mempool_create(buffs, r10buf_pool_alloc, r10buf_pool_free, conf); 1523 if (!conf->r10buf_pool) 1524 return -ENOMEM;
+12 -22
drivers/md/raid5.c
··· 73 static void __release_stripe(raid5_conf_t *conf, struct stripe_head *sh) 74 { 75 if (atomic_dec_and_test(&sh->count)) { 76 - if (!list_empty(&sh->lru)) 77 - BUG(); 78 - if (atomic_read(&conf->active_stripes)==0) 79 - BUG(); 80 if (test_bit(STRIPE_HANDLE, &sh->state)) { 81 if (test_bit(STRIPE_DELAYED, &sh->state)) 82 list_add_tail(&sh->lru, &conf->delayed_list); ··· 182 raid5_conf_t *conf = sh->raid_conf; 183 int i; 184 185 - if (atomic_read(&sh->count) != 0) 186 - BUG(); 187 - if (test_bit(STRIPE_HANDLE, &sh->state)) 188 - BUG(); 189 190 CHECK_DEVLOCK(); 191 PRINTK("init_stripe called, stripe %llu\n", ··· 265 init_stripe(sh, sector, pd_idx, disks); 266 } else { 267 if (atomic_read(&sh->count)) { 268 - if (!list_empty(&sh->lru)) 269 - BUG(); 270 } else { 271 if (!test_bit(STRIPE_HANDLE, &sh->state)) 272 atomic_inc(&conf->active_stripes); ··· 460 spin_unlock_irq(&conf->device_lock); 461 if (!sh) 462 return 0; 463 - if (atomic_read(&sh->count)) 464 - BUG(); 465 shrink_buffers(sh, conf->pool_size); 466 kmem_cache_free(conf->slab_cache, sh); 467 atomic_dec(&conf->active_stripes); ··· 876 ptr[0] = page_address(sh->dev[pd_idx].page); 877 switch(method) { 878 case READ_MODIFY_WRITE: 879 - if (!test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags)) 880 - BUG(); 881 for (i=disks ; i-- ;) { 882 if (i==pd_idx) 883 continue; ··· 889 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 890 wake_up(&conf->wait_for_overlap); 891 892 - if (sh->dev[i].written) BUG(); 893 sh->dev[i].written = chosen; 894 check_xor(); 895 } ··· 905 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 906 wake_up(&conf->wait_for_overlap); 907 908 - if (sh->dev[i].written) BUG(); 909 sh->dev[i].written = chosen; 910 } 911 break; ··· 988 if (*bip && (*bip)->bi_sector < bi->bi_sector + ((bi->bi_size)>>9)) 989 goto overlap; 990 991 - if (*bip && bi->bi_next && (*bip) != bi->bi_next) 992 - BUG(); 993 if (*bip) 994 bi->bi_next = *bip; 995 *bip = bi; ··· 1422 set_bit(STRIPE_HANDLE, &sh->state); 1423 if (failed == 0) { 1424 char *pagea; 1425 - if (uptodate != disks) 1426 - BUG(); 1427 compute_parity(sh, CHECK_PARITY); 1428 uptodate--; 1429 pagea = page_address(sh->dev[sh->pd_idx].page); ··· 2087 2088 list_del_init(first); 2089 atomic_inc(&sh->count); 2090 - if (atomic_read(&sh->count)!= 1) 2091 - BUG(); 2092 spin_unlock_irq(&conf->device_lock); 2093 2094 handled++;
··· 73 static void __release_stripe(raid5_conf_t *conf, struct stripe_head *sh) 74 { 75 if (atomic_dec_and_test(&sh->count)) { 76 + BUG_ON(!list_empty(&sh->lru)); 77 + BUG_ON(atomic_read(&conf->active_stripes)==0); 78 if (test_bit(STRIPE_HANDLE, &sh->state)) { 79 if (test_bit(STRIPE_DELAYED, &sh->state)) 80 list_add_tail(&sh->lru, &conf->delayed_list); ··· 184 raid5_conf_t *conf = sh->raid_conf; 185 int i; 186 187 + BUG_ON(atomic_read(&sh->count) != 0); 188 + BUG_ON(test_bit(STRIPE_HANDLE, &sh->state)); 189 190 CHECK_DEVLOCK(); 191 PRINTK("init_stripe called, stripe %llu\n", ··· 269 init_stripe(sh, sector, pd_idx, disks); 270 } else { 271 if (atomic_read(&sh->count)) { 272 + BUG_ON(!list_empty(&sh->lru)); 273 } else { 274 if (!test_bit(STRIPE_HANDLE, &sh->state)) 275 atomic_inc(&conf->active_stripes); ··· 465 spin_unlock_irq(&conf->device_lock); 466 if (!sh) 467 return 0; 468 + BUG_ON(atomic_read(&sh->count)); 469 shrink_buffers(sh, conf->pool_size); 470 kmem_cache_free(conf->slab_cache, sh); 471 atomic_dec(&conf->active_stripes); ··· 882 ptr[0] = page_address(sh->dev[pd_idx].page); 883 switch(method) { 884 case READ_MODIFY_WRITE: 885 + BUG_ON(!test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags)); 886 for (i=disks ; i-- ;) { 887 if (i==pd_idx) 888 continue; ··· 896 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 897 wake_up(&conf->wait_for_overlap); 898 899 + BUG_ON(sh->dev[i].written); 900 sh->dev[i].written = chosen; 901 check_xor(); 902 } ··· 912 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 913 wake_up(&conf->wait_for_overlap); 914 915 + BUG_ON(sh->dev[i].written); 916 sh->dev[i].written = chosen; 917 } 918 break; ··· 995 if (*bip && (*bip)->bi_sector < bi->bi_sector + ((bi->bi_size)>>9)) 996 goto overlap; 997 998 + BUG_ON(*bip && bi->bi_next && (*bip) != bi->bi_next); 999 if (*bip) 1000 bi->bi_next = *bip; 1001 *bip = bi; ··· 1430 set_bit(STRIPE_HANDLE, &sh->state); 1431 if (failed == 0) { 1432 char *pagea; 1433 + BUG_ON(uptodate != disks); 1434 compute_parity(sh, CHECK_PARITY); 1435 uptodate--; 1436 pagea = page_address(sh->dev[sh->pd_idx].page); ··· 2096 2097 list_del_init(first); 2098 atomic_inc(&sh->count); 2099 + BUG_ON(atomic_read(&sh->count)!= 1); 2100 spin_unlock_irq(&conf->device_lock); 2101 2102 handled++;
+10 -19
drivers/md/raid6main.c
··· 91 static void __release_stripe(raid6_conf_t *conf, struct stripe_head *sh) 92 { 93 if (atomic_dec_and_test(&sh->count)) { 94 - if (!list_empty(&sh->lru)) 95 - BUG(); 96 - if (atomic_read(&conf->active_stripes)==0) 97 - BUG(); 98 if (test_bit(STRIPE_HANDLE, &sh->state)) { 99 if (test_bit(STRIPE_DELAYED, &sh->state)) 100 list_add_tail(&sh->lru, &conf->delayed_list); ··· 200 raid6_conf_t *conf = sh->raid_conf; 201 int disks = conf->raid_disks, i; 202 203 - if (atomic_read(&sh->count) != 0) 204 - BUG(); 205 - if (test_bit(STRIPE_HANDLE, &sh->state)) 206 - BUG(); 207 208 CHECK_DEVLOCK(); 209 PRINTK("init_stripe called, stripe %llu\n", ··· 280 init_stripe(sh, sector, pd_idx); 281 } else { 282 if (atomic_read(&sh->count)) { 283 - if (!list_empty(&sh->lru)) 284 - BUG(); 285 } else { 286 if (!test_bit(STRIPE_HANDLE, &sh->state)) 287 atomic_inc(&conf->active_stripes); 288 - if (list_empty(&sh->lru)) 289 - BUG(); 290 list_del_init(&sh->lru); 291 } 292 } ··· 347 spin_unlock_irq(&conf->device_lock); 348 if (!sh) 349 return 0; 350 - if (atomic_read(&sh->count)) 351 - BUG(); 352 shrink_buffers(sh, conf->raid_disks); 353 kmem_cache_free(conf->slab_cache, sh); 354 atomic_dec(&conf->active_stripes); ··· 773 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 774 wake_up(&conf->wait_for_overlap); 775 776 - if (sh->dev[i].written) BUG(); 777 sh->dev[i].written = chosen; 778 } 779 break; ··· 963 if (*bip && (*bip)->bi_sector < bi->bi_sector + ((bi->bi_size)>>9)) 964 goto overlap; 965 966 - if (*bip && bi->bi_next && (*bip) != bi->bi_next) 967 - BUG(); 968 if (*bip) 969 bi->bi_next = *bip; 970 *bip = bi; ··· 1898 1899 list_del_init(first); 1900 atomic_inc(&sh->count); 1901 - if (atomic_read(&sh->count)!= 1) 1902 - BUG(); 1903 spin_unlock_irq(&conf->device_lock); 1904 1905 handled++;
··· 91 static void __release_stripe(raid6_conf_t *conf, struct stripe_head *sh) 92 { 93 if (atomic_dec_and_test(&sh->count)) { 94 + BUG_ON(!list_empty(&sh->lru)); 95 + BUG_ON(atomic_read(&conf->active_stripes)==0); 96 if (test_bit(STRIPE_HANDLE, &sh->state)) { 97 if (test_bit(STRIPE_DELAYED, &sh->state)) 98 list_add_tail(&sh->lru, &conf->delayed_list); ··· 202 raid6_conf_t *conf = sh->raid_conf; 203 int disks = conf->raid_disks, i; 204 205 + BUG_ON(atomic_read(&sh->count) != 0); 206 + BUG_ON(test_bit(STRIPE_HANDLE, &sh->state)); 207 208 CHECK_DEVLOCK(); 209 PRINTK("init_stripe called, stripe %llu\n", ··· 284 init_stripe(sh, sector, pd_idx); 285 } else { 286 if (atomic_read(&sh->count)) { 287 + BUG_ON(!list_empty(&sh->lru)); 288 } else { 289 if (!test_bit(STRIPE_HANDLE, &sh->state)) 290 atomic_inc(&conf->active_stripes); 291 + BUG_ON(list_empty(&sh->lru)); 292 list_del_init(&sh->lru); 293 } 294 } ··· 353 spin_unlock_irq(&conf->device_lock); 354 if (!sh) 355 return 0; 356 + BUG_ON(atomic_read(&sh->count)); 357 shrink_buffers(sh, conf->raid_disks); 358 kmem_cache_free(conf->slab_cache, sh); 359 atomic_dec(&conf->active_stripes); ··· 780 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 781 wake_up(&conf->wait_for_overlap); 782 783 + BUG_ON(sh->dev[i].written); 784 sh->dev[i].written = chosen; 785 } 786 break; ··· 970 if (*bip && (*bip)->bi_sector < bi->bi_sector + ((bi->bi_size)>>9)) 971 goto overlap; 972 973 + BUG_ON(*bip && bi->bi_next && (*bip) != bi->bi_next); 974 if (*bip) 975 bi->bi_next = *bip; 976 *bip = bi; ··· 1906 1907 list_del_init(first); 1908 atomic_inc(&sh->count); 1909 + BUG_ON(atomic_read(&sh->count)!= 1); 1910 spin_unlock_irq(&conf->device_lock); 1911 1912 handled++;
-21
drivers/mtd/chips/Kconfig
··· 200 provides support for one of those command sets, used on chips 201 including the AMD Am29LV320. 202 203 - config MTD_CFI_AMDSTD_RETRY 204 - int "Retry failed commands (erase/program)" 205 - depends on MTD_CFI_AMDSTD 206 - default "0" 207 - help 208 - Some chips, when attached to a shared bus, don't properly filter 209 - bus traffic that is destined to other devices. This broken 210 - behavior causes erase and program sequences to be aborted when 211 - the sequences are mixed with traffic for other devices. 212 - 213 - SST49LF040 (and related) chips are know to be broken. 214 - 215 - config MTD_CFI_AMDSTD_RETRY_MAX 216 - int "Max retries of failed commands (erase/program)" 217 - depends on MTD_CFI_AMDSTD_RETRY 218 - default "0" 219 - help 220 - If you have an SST49LF040 (or related chip) then this value should 221 - be set to at least 1. This can also be adjusted at driver load 222 - time with the retry_cmd_max module parameter. 223 - 224 config MTD_CFI_STAA 225 tristate "Support for ST (Advanced Architecture) flash chips" 226 depends on MTD_GEN_PROBE
··· 200 provides support for one of those command sets, used on chips 201 including the AMD Am29LV320. 202 203 config MTD_CFI_STAA 204 tristate "Support for ST (Advanced Architecture) flash chips" 205 depends on MTD_GEN_PROBE
+4 -8
drivers/net/8139cp.c
··· 539 unsigned buflen; 540 541 skb = cp->rx_skb[rx_tail].skb; 542 - if (!skb) 543 - BUG(); 544 545 desc = &cp->rx_ring[rx_tail]; 546 status = le32_to_cpu(desc->opts1); ··· 722 break; 723 724 skb = cp->tx_skb[tx_tail].skb; 725 - if (!skb) 726 - BUG(); 727 728 pci_unmap_single(cp->pdev, cp->tx_skb[tx_tail].mapping, 729 cp->tx_skb[tx_tail].len, PCI_DMA_TODEVICE); ··· 1548 tmp_stats[i++] = le16_to_cpu(nic_stats->tx_abort); 1549 tmp_stats[i++] = le16_to_cpu(nic_stats->tx_underrun); 1550 tmp_stats[i++] = cp->cp_stats.rx_frags; 1551 - if (i != CP_NUM_STATS) 1552 - BUG(); 1553 1554 pci_free_consistent(cp->pdev, sizeof(*nic_stats), nic_stats, dma); 1555 } ··· 1853 struct net_device *dev = pci_get_drvdata(pdev); 1854 struct cp_private *cp = netdev_priv(dev); 1855 1856 - if (!dev) 1857 - BUG(); 1858 unregister_netdev(dev); 1859 iounmap(cp->regs); 1860 if (cp->wol_enabled) pci_set_power_state (pdev, PCI_D0);
··· 539 unsigned buflen; 540 541 skb = cp->rx_skb[rx_tail].skb; 542 + BUG_ON(!skb); 543 544 desc = &cp->rx_ring[rx_tail]; 545 status = le32_to_cpu(desc->opts1); ··· 723 break; 724 725 skb = cp->tx_skb[tx_tail].skb; 726 + BUG_ON(!skb); 727 728 pci_unmap_single(cp->pdev, cp->tx_skb[tx_tail].mapping, 729 cp->tx_skb[tx_tail].len, PCI_DMA_TODEVICE); ··· 1550 tmp_stats[i++] = le16_to_cpu(nic_stats->tx_abort); 1551 tmp_stats[i++] = le16_to_cpu(nic_stats->tx_underrun); 1552 tmp_stats[i++] = cp->cp_stats.rx_frags; 1553 + BUG_ON(i != CP_NUM_STATS); 1554 1555 pci_free_consistent(cp->pdev, sizeof(*nic_stats), nic_stats, dma); 1556 } ··· 1856 struct net_device *dev = pci_get_drvdata(pdev); 1857 struct cp_private *cp = netdev_priv(dev); 1858 1859 + BUG_ON(!dev); 1860 unregister_netdev(dev); 1861 iounmap(cp->regs); 1862 if (cp->wol_enabled) pci_set_power_state (pdev, PCI_D0);
+1 -2
drivers/net/arcnet/arcnet.c
··· 765 BUGMSG(D_DURING, "in arcnet_interrupt\n"); 766 767 lp = dev->priv; 768 - if (!lp) 769 - BUG(); 770 771 spin_lock(&lp->lock); 772
··· 765 BUGMSG(D_DURING, "in arcnet_interrupt\n"); 766 767 lp = dev->priv; 768 + BUG_ON(!lp); 769 770 spin_lock(&lp->lock); 771
+1 -2
drivers/net/b44.c
··· 608 struct ring_info *rp = &bp->tx_buffers[cons]; 609 struct sk_buff *skb = rp->skb; 610 611 - if (unlikely(skb == NULL)) 612 - BUG(); 613 614 pci_unmap_single(bp->pdev, 615 pci_unmap_addr(rp, mapping),
··· 608 struct ring_info *rp = &bp->tx_buffers[cons]; 609 struct sk_buff *skb = rp->skb; 610 611 + BUG_ON(skb == NULL); 612 613 pci_unmap_single(bp->pdev, 614 pci_unmap_addr(rp, mapping),
+1 -2
drivers/net/chelsio/sge.c
··· 1093 if (likely(e->DataValid)) { 1094 struct freelQ *fl = &sge->freelQ[e->FreelistQid]; 1095 1096 - if (unlikely(!e->Sop || !e->Eop)) 1097 - BUG(); 1098 if (unlikely(e->Offload)) 1099 unexpected_offload(adapter, fl); 1100 else
··· 1093 if (likely(e->DataValid)) { 1094 struct freelQ *fl = &sge->freelQ[e->FreelistQid]; 1095 1096 + BUG_ON(!e->Sop || !e->Eop); 1097 if (unlikely(e->Offload)) 1098 unexpected_offload(adapter, fl); 1099 else
+1 -2
drivers/net/e1000/e1000_main.c
··· 3308 3309 while (poll_dev != &adapter->polling_netdev[i]) { 3310 i++; 3311 - if (unlikely(i == adapter->num_rx_queues)) 3312 - BUG(); 3313 } 3314 3315 if (likely(adapter->num_tx_queues == 1)) {
··· 3308 3309 while (poll_dev != &adapter->polling_netdev[i]) { 3310 i++; 3311 + BUG_ON(i == adapter->num_rx_queues); 3312 } 3313 3314 if (likely(adapter->num_tx_queues == 1)) {
+1 -2
drivers/net/eql.c
··· 203 printk(KERN_INFO "%s: remember to turn off Van-Jacobson compression on " 204 "your slave devices.\n", dev->name); 205 206 - if (!list_empty(&eql->queue.all_slaves)) 207 - BUG(); 208 209 eql->min_slaves = 1; 210 eql->max_slaves = EQL_DEFAULT_MAX_SLAVES; /* 4 usually... */
··· 203 printk(KERN_INFO "%s: remember to turn off Van-Jacobson compression on " 204 "your slave devices.\n", dev->name); 205 206 + BUG_ON(!list_empty(&eql->queue.all_slaves)); 207 208 eql->min_slaves = 1; 209 eql->max_slaves = EQL_DEFAULT_MAX_SLAVES; /* 4 usually... */
+1 -2
drivers/net/irda/sa1100_ir.c
··· 695 /* 696 * We must not be transmitting... 697 */ 698 - if (si->txskb) 699 - BUG(); 700 701 netif_stop_queue(dev); 702
··· 695 /* 696 * We must not be transmitting... 697 */ 698 + BUG_ON(si->txskb); 699 700 netif_stop_queue(dev); 701
+1 -3
drivers/net/ne2k-pci.c
··· 645 { 646 struct net_device *dev = pci_get_drvdata(pdev); 647 648 - if (!dev) 649 - BUG(); 650 - 651 unregister_netdev(dev); 652 release_region(dev->base_addr, NE_IO_EXTENT); 653 free_netdev(dev);
··· 645 { 646 struct net_device *dev = pci_get_drvdata(pdev); 647 648 + BUG_ON(!dev); 649 unregister_netdev(dev); 650 release_region(dev->base_addr, NE_IO_EXTENT); 651 free_netdev(dev);
+1 -2
drivers/net/ns83820.c
··· 568 #endif 569 570 sg = dev->rx_info.descs + (next_empty * DESC_SIZE); 571 - if (unlikely(NULL != dev->rx_info.skbs[next_empty])) 572 - BUG(); 573 dev->rx_info.skbs[next_empty] = skb; 574 575 dev->rx_info.next_empty = (next_empty + 1) % NR_RX_DESC;
··· 568 #endif 569 570 sg = dev->rx_info.descs + (next_empty * DESC_SIZE); 571 + BUG_ON(NULL != dev->rx_info.skbs[next_empty]); 572 dev->rx_info.skbs[next_empty] = skb; 573 574 dev->rx_info.next_empty = (next_empty + 1) % NR_RX_DESC;
+1 -2
drivers/net/starfire.c
··· 2122 struct net_device *dev = pci_get_drvdata(pdev); 2123 struct netdev_private *np = netdev_priv(dev); 2124 2125 - if (!dev) 2126 - BUG(); 2127 2128 unregister_netdev(dev); 2129
··· 2122 struct net_device *dev = pci_get_drvdata(pdev); 2123 struct netdev_private *np = netdev_priv(dev); 2124 2125 + BUG_ON(!dev); 2126 2127 unregister_netdev(dev); 2128
+5 -10
drivers/net/tg3.c
··· 2959 struct sk_buff *skb = ri->skb; 2960 int i; 2961 2962 - if (unlikely(skb == NULL)) 2963 - BUG(); 2964 - 2965 pci_unmap_single(tp->pdev, 2966 pci_unmap_addr(ri, mapping), 2967 skb_headlen(skb), ··· 2970 sw_idx = NEXT_TX(sw_idx); 2971 2972 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 2973 - if (unlikely(sw_idx == hw_idx)) 2974 - BUG(); 2975 2976 ri = &tp->tx_buffers[sw_idx]; 2977 - if (unlikely(ri->skb != NULL)) 2978 - BUG(); 2979 2980 pci_unmap_page(tp->pdev, 2981 pci_unmap_addr(ri, mapping), ··· 4924 { 4925 int i; 4926 4927 - if (offset == TX_CPU_BASE && 4928 - (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)) 4929 - BUG(); 4930 4931 if (offset == RX_CPU_BASE) { 4932 for (i = 0; i < 10000; i++) {
··· 2959 struct sk_buff *skb = ri->skb; 2960 int i; 2961 2962 + BUG_ON(skb == NULL); 2963 pci_unmap_single(tp->pdev, 2964 pci_unmap_addr(ri, mapping), 2965 skb_headlen(skb), ··· 2972 sw_idx = NEXT_TX(sw_idx); 2973 2974 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 2975 + BUG_ON(sw_idx == hw_idx); 2976 2977 ri = &tp->tx_buffers[sw_idx]; 2978 + BUG_ON(ri->skb != NULL); 2979 2980 pci_unmap_page(tp->pdev, 2981 pci_unmap_addr(ri, mapping), ··· 4928 { 4929 int i; 4930 4931 + BUG_ON(offset == TX_CPU_BASE && 4932 + (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)); 4933 4934 if (offset == RX_CPU_BASE) { 4935 for (i = 0; i < 10000; i++) {
+1 -2
drivers/net/tokenring/abyss.c
··· 438 { 439 struct net_device *dev = pci_get_drvdata(pdev); 440 441 - if (!dev) 442 - BUG(); 443 unregister_netdev(dev); 444 release_region(dev->base_addr-0x10, ABYSS_IO_EXTENT); 445 free_irq(dev->irq, dev);
··· 438 { 439 struct net_device *dev = pci_get_drvdata(pdev); 440 441 + BUG_ON(!dev); 442 unregister_netdev(dev); 443 release_region(dev->base_addr-0x10, ABYSS_IO_EXTENT); 444 free_irq(dev->irq, dev);
+1 -2
drivers/net/tokenring/madgemc.c
··· 735 struct net_local *tp; 736 struct card_info *card; 737 738 - if (!dev) 739 - BUG(); 740 741 tp = dev->priv; 742 card = tp->tmspriv;
··· 735 struct net_local *tp; 736 struct card_info *card; 737 738 + BUG_ON(!dev); 739 740 tp = dev->priv; 741 card = tp->tmspriv;
+3 -6
drivers/net/wireless/ipw2200.c
··· 5573 case IEEE80211_52GHZ_BAND: 5574 network->mode = IEEE_A; 5575 i = ieee80211_channel_to_index(priv->ieee, priv->channel); 5576 - if (i == -1) 5577 - BUG(); 5578 if (geo->a[i].flags & IEEE80211_CH_PASSIVE_ONLY) { 5579 IPW_WARNING("Overriding invalid channel\n"); 5580 priv->channel = geo->a[0].channel; ··· 5586 else 5587 network->mode = IEEE_B; 5588 i = ieee80211_channel_to_index(priv->ieee, priv->channel); 5589 - if (i == -1) 5590 - BUG(); 5591 if (geo->bg[i].flags & IEEE80211_CH_PASSIVE_ONLY) { 5592 IPW_WARNING("Overriding invalid channel\n"); 5593 priv->channel = geo->bg[0].channel; ··· 6713 6714 switch (priv->ieee->iw_mode) { 6715 case IW_MODE_ADHOC: 6716 - if (!(network->capability & WLAN_CAPABILITY_IBSS)) 6717 - BUG(); 6718 6719 qos_data = &ibss_data; 6720 break;
··· 5573 case IEEE80211_52GHZ_BAND: 5574 network->mode = IEEE_A; 5575 i = ieee80211_channel_to_index(priv->ieee, priv->channel); 5576 + BUG_ON(i == -1); 5577 if (geo->a[i].flags & IEEE80211_CH_PASSIVE_ONLY) { 5578 IPW_WARNING("Overriding invalid channel\n"); 5579 priv->channel = geo->a[0].channel; ··· 5587 else 5588 network->mode = IEEE_B; 5589 i = ieee80211_channel_to_index(priv->ieee, priv->channel); 5590 + BUG_ON(i == -1); 5591 if (geo->bg[i].flags & IEEE80211_CH_PASSIVE_ONLY) { 5592 IPW_WARNING("Overriding invalid channel\n"); 5593 priv->channel = geo->bg[0].channel; ··· 6715 6716 switch (priv->ieee->iw_mode) { 6717 case IW_MODE_ADHOC: 6718 + BUG_ON(!(network->capability & WLAN_CAPABILITY_IBSS)); 6719 6720 qos_data = &ibss_data; 6721 break;
+1 -2
drivers/net/yellowfin.c
··· 1441 struct net_device *dev = pci_get_drvdata(pdev); 1442 struct yellowfin_private *np; 1443 1444 - if (!dev) 1445 - BUG(); 1446 np = netdev_priv(dev); 1447 1448 pci_free_consistent(pdev, STATUS_TOTAL_SIZE, np->tx_status,
··· 1441 struct net_device *dev = pci_get_drvdata(pdev); 1442 struct yellowfin_private *np; 1443 1444 + BUG_ON(!dev); 1445 np = netdev_priv(dev); 1446 1447 pci_free_consistent(pdev, STATUS_TOTAL_SIZE, np->tx_status,
+3 -5
drivers/s390/block/dasd_erp.c
··· 32 int size; 33 34 /* Sanity checks */ 35 - if ( magic == NULL || datasize > PAGE_SIZE || 36 - (cplength*sizeof(struct ccw1)) > PAGE_SIZE) 37 - BUG(); 38 39 size = (sizeof(struct dasd_ccw_req) + 7L) & -8L; 40 if (cplength > 0) ··· 124 struct dasd_device *device; 125 int success; 126 127 - if (cqr->refers == NULL || cqr->function == NULL) 128 - BUG(); 129 130 device = cqr->device; 131 success = cqr->status == DASD_CQR_DONE;
··· 32 int size; 33 34 /* Sanity checks */ 35 + BUG_ON( magic == NULL || datasize > PAGE_SIZE || 36 + (cplength*sizeof(struct ccw1)) > PAGE_SIZE); 37 38 size = (sizeof(struct dasd_ccw_req) + 7L) & -8L; 39 if (cplength > 0) ··· 125 struct dasd_device *device; 126 int success; 127 128 + BUG_ON(cqr->refers == NULL || cqr->function == NULL); 129 130 device = cqr->device; 131 success = cqr->status == DASD_CQR_DONE;
+1 -1
drivers/s390/char/sclp_rw.c
··· 24 25 /* 26 * The room for the SCCB (only for writing) is not equal to a page's size 27 - * (as it is specified as the maximum size in the the SCLP ducumentation) 28 * because of the additional data structure described above. 29 */ 30 #define MAX_SCCB_ROOM (PAGE_SIZE - sizeof(struct sclp_buffer))
··· 24 25 /* 26 * The room for the SCCB (only for writing) is not equal to a page's size 27 + * (as it is specified as the maximum size in the SCLP documentation) 28 * because of the additional data structure described above. 29 */ 30 #define MAX_SCCB_ROOM (PAGE_SIZE - sizeof(struct sclp_buffer))
+4 -9
drivers/s390/char/tape_block.c
··· 198 199 device = (struct tape_device *) queue->queuedata; 200 DBF_LH(6, "tapeblock_request_fn(device=%p)\n", device); 201 - if (device == NULL) 202 - BUG(); 203 - 204 tapeblock_trigger_requeue(device); 205 } 206 ··· 305 int rc; 306 307 device = (struct tape_device *) disk->private_data; 308 - if (!device) 309 - BUG(); 310 311 if (!device->blk_data.medium_changed) 312 return 0; ··· 437 438 rc = 0; 439 disk = inode->i_bdev->bd_disk; 440 - if (!disk) 441 - BUG(); 442 device = disk->private_data; 443 - if (!device) 444 - BUG(); 445 minor = iminor(inode); 446 447 DBF_LH(6, "tapeblock_ioctl(0x%0x)\n", command);
··· 198 199 device = (struct tape_device *) queue->queuedata; 200 DBF_LH(6, "tapeblock_request_fn(device=%p)\n", device); 201 + BUG_ON(device == NULL); 202 tapeblock_trigger_requeue(device); 203 } 204 ··· 307 int rc; 308 309 device = (struct tape_device *) disk->private_data; 310 + BUG_ON(!device); 311 312 if (!device->blk_data.medium_changed) 313 return 0; ··· 440 441 rc = 0; 442 disk = inode->i_bdev->bd_disk; 443 + BUG_ON(!disk); 444 device = disk->private_data; 445 + BUG_ON(!device); 446 minor = iminor(inode); 447 448 DBF_LH(6, "tapeblock_ioctl(0x%0x)\n", command);
+5 -8
drivers/s390/net/lcs.c
··· 675 int index, rc; 676 677 LCS_DBF_TEXT(5, trace, "rdybuff"); 678 - if (buffer->state != BUF_STATE_LOCKED && 679 - buffer->state != BUF_STATE_PROCESSED) 680 - BUG(); 681 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 682 buffer->state = BUF_STATE_READY; 683 index = buffer - channel->iob; ··· 700 int index, prev, next; 701 702 LCS_DBF_TEXT(5, trace, "prcsbuff"); 703 - if (buffer->state != BUF_STATE_READY) 704 - BUG(); 705 buffer->state = BUF_STATE_PROCESSED; 706 index = buffer - channel->iob; 707 prev = (index - 1) & (LCS_NUM_BUFFS - 1); ··· 732 unsigned long flags; 733 734 LCS_DBF_TEXT(5, trace, "relbuff"); 735 - if (buffer->state != BUF_STATE_LOCKED && 736 - buffer->state != BUF_STATE_PROCESSED) 737 - BUG(); 738 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 739 buffer->state = BUF_STATE_EMPTY; 740 spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
··· 675 int index, rc; 676 677 LCS_DBF_TEXT(5, trace, "rdybuff"); 678 + BUG_ON(buffer->state != BUF_STATE_LOCKED && 679 + buffer->state != BUF_STATE_PROCESSED); 680 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 681 buffer->state = BUF_STATE_READY; 682 index = buffer - channel->iob; ··· 701 int index, prev, next; 702 703 LCS_DBF_TEXT(5, trace, "prcsbuff"); 704 + BUG_ON(buffer->state != BUF_STATE_READY); 705 buffer->state = BUF_STATE_PROCESSED; 706 index = buffer - channel->iob; 707 prev = (index - 1) & (LCS_NUM_BUFFS - 1); ··· 734 unsigned long flags; 735 736 LCS_DBF_TEXT(5, trace, "relbuff"); 737 + BUG_ON(buffer->state != BUF_STATE_LOCKED && 738 + buffer->state != BUF_STATE_PROCESSED); 739 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 740 buffer->state = BUF_STATE_EMPTY; 741 spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
+1 -1
drivers/scsi/aic7xxx/Kconfig.aic7xxx
··· 86 default "0" 87 help 88 Bit mask of debug options that is only valid if the 89 - CONFIG_AIC7XXX_DEBUG_ENBLE option is enabled. The bits in this mask 90 are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the 91 variable ahc_debug in that file to find them. 92
··· 86 default "0" 87 help 88 Bit mask of debug options that is only valid if the 89 + CONFIG_AIC7XXX_DEBUG_ENABLE option is enabled. The bits in this mask 90 are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the 91 variable ahc_debug in that file to find them. 92
+1 -1
drivers/serial/jsm/jsm.h
··· 20 * 21 * Contact Information: 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 - * Wendy Xiong <wendyx@us.ltcfwd.linux.ibm.com> 24 * 25 ***********************************************************************/ 26
··· 20 * 21 * Contact Information: 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 + * Wendy Xiong <wendyx@us.ibm.com> 24 * 25 ***********************************************************************/ 26
+1 -1
drivers/serial/jsm/jsm_driver.c
··· 20 * 21 * Contact Information: 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 - * Wendy Xiong <wendyx@us.ltcfwd.linux.ibm.com> 24 * 25 * 26 ***********************************************************************/
··· 20 * 21 * Contact Information: 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 + * Wendy Xiong <wendyx@us.ibm.com> 24 * 25 * 26 ***********************************************************************/
+1 -1
drivers/serial/jsm/jsm_neo.c
··· 20 * 21 * Contact Information: 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 - * Wendy Xiong <wendyx@us.ltcfwd.linux.ibm.com> 24 * 25 ***********************************************************************/ 26 #include <linux/delay.h> /* For udelay */
··· 20 * 21 * Contact Information: 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 + * Wendy Xiong <wendyx@us.ibm.com> 24 * 25 ***********************************************************************/ 26 #include <linux/delay.h> /* For udelay */
+1 -2
fs/direct-io.c
··· 929 block_in_page += this_chunk_blocks; 930 dio->blocks_available -= this_chunk_blocks; 931 next_block: 932 - if (dio->block_in_file > dio->final_block_in_request) 933 - BUG(); 934 if (dio->block_in_file == dio->final_block_in_request) 935 break; 936 }
··· 929 block_in_page += this_chunk_blocks; 930 dio->blocks_available -= this_chunk_blocks; 931 next_block: 932 + BUG_ON(dio->block_in_file > dio->final_block_in_request); 933 if (dio->block_in_file == dio->final_block_in_request) 934 break; 935 }
+2 -4
fs/dquot.c
··· 590 atomic_dec(&dquot->dq_count); 591 #ifdef __DQUOT_PARANOIA 592 /* sanity check */ 593 - if (!list_empty(&dquot->dq_free)) 594 - BUG(); 595 #endif 596 put_dquot_last(dquot); 597 spin_unlock(&dq_list_lock); ··· 665 return NODQUOT; 666 } 667 #ifdef __DQUOT_PARANOIA 668 - if (!dquot->dq_sb) /* Has somebody invalidated entry under us? */ 669 - BUG(); 670 #endif 671 672 return dquot;
··· 590 atomic_dec(&dquot->dq_count); 591 #ifdef __DQUOT_PARANOIA 592 /* sanity check */ 593 + BUG_ON(!list_empty(&dquot->dq_free)); 594 #endif 595 put_dquot_last(dquot); 596 spin_unlock(&dq_list_lock); ··· 666 return NODQUOT; 667 } 668 #ifdef __DQUOT_PARANOIA 669 + BUG_ON(!dquot->dq_sb); /* Has somebody invalidated entry under us? */ 670 #endif 671 672 return dquot;
+1 -1
fs/exec.c
··· 561 arch_pick_mmap_layout(mm); 562 if (old_mm) { 563 up_read(&old_mm->mmap_sem); 564 - if (active_mm != old_mm) BUG(); 565 mmput(old_mm); 566 return 0; 567 }
··· 561 arch_pick_mmap_layout(mm); 562 if (old_mm) { 563 up_read(&old_mm->mmap_sem); 564 + BUG_ON(active_mm != old_mm); 565 mmput(old_mm); 566 return 0; 567 }
+1 -2
fs/fcntl.c
··· 453 /* Make sure we are called with one of the POLL_* 454 reasons, otherwise we could leak kernel stack into 455 userspace. */ 456 - if ((reason & __SI_MASK) != __SI_POLL) 457 - BUG(); 458 if (reason - POLL_IN >= NSIGPOLL) 459 si.si_band = ~0L; 460 else
··· 453 /* Make sure we are called with one of the POLL_* 454 reasons, otherwise we could leak kernel stack into 455 userspace. */ 456 + BUG_ON((reason & __SI_MASK) != __SI_POLL); 457 if (reason - POLL_IN >= NSIGPOLL) 458 si.si_band = ~0L; 459 else
+3 -6
fs/freevxfs/vxfs_olt.c
··· 42 static inline void 43 vxfs_get_fshead(struct vxfs_oltfshead *fshp, struct vxfs_sb_info *infp) 44 { 45 - if (infp->vsi_fshino) 46 - BUG(); 47 infp->vsi_fshino = fshp->olt_fsino[0]; 48 } 49 50 static inline void 51 vxfs_get_ilist(struct vxfs_oltilist *ilistp, struct vxfs_sb_info *infp) 52 { 53 - if (infp->vsi_iext) 54 - BUG(); 55 infp->vsi_iext = ilistp->olt_iext[0]; 56 } 57 58 static inline u_long 59 vxfs_oblock(struct super_block *sbp, daddr_t block, u_long bsize) 60 { 61 - if (sbp->s_blocksize % bsize) 62 - BUG(); 63 return (block * (sbp->s_blocksize / bsize)); 64 } 65
··· 42 static inline void 43 vxfs_get_fshead(struct vxfs_oltfshead *fshp, struct vxfs_sb_info *infp) 44 { 45 + BUG_ON(infp->vsi_fshino); 46 infp->vsi_fshino = fshp->olt_fsino[0]; 47 } 48 49 static inline void 50 vxfs_get_ilist(struct vxfs_oltilist *ilistp, struct vxfs_sb_info *infp) 51 { 52 + BUG_ON(infp->vsi_iext); 53 infp->vsi_iext = ilistp->olt_iext[0]; 54 } 55 56 static inline u_long 57 vxfs_oblock(struct super_block *sbp, daddr_t block, u_long bsize) 58 { 59 + BUG_ON(sbp->s_blocksize % bsize); 60 return (block * (sbp->s_blocksize / bsize)); 61 } 62
+2 -4
fs/hfsplus/bnode.c
··· 466 for (p = &node->tree->node_hash[hfs_bnode_hash(node->this)]; 467 *p && *p != node; p = &(*p)->next_hash) 468 ; 469 - if (!*p) 470 - BUG(); 471 *p = node->next_hash; 472 node->tree->node_hash_cnt--; 473 } ··· 621 622 dprint(DBG_BNODE_REFS, "put_node(%d:%d): %d\n", 623 node->tree->cnid, node->this, atomic_read(&node->refcnt)); 624 - if (!atomic_read(&node->refcnt)) 625 - BUG(); 626 if (!atomic_dec_and_lock(&node->refcnt, &tree->hash_lock)) 627 return; 628 for (i = 0; i < tree->pages_per_bnode; i++) {
··· 466 for (p = &node->tree->node_hash[hfs_bnode_hash(node->this)]; 467 *p && *p != node; p = &(*p)->next_hash) 468 ; 469 + BUG_ON(!*p); 470 *p = node->next_hash; 471 node->tree->node_hash_cnt--; 472 } ··· 622 623 dprint(DBG_BNODE_REFS, "put_node(%d:%d): %d\n", 624 node->tree->cnid, node->this, atomic_read(&node->refcnt)); 625 + BUG_ON(!atomic_read(&node->refcnt)); 626 if (!atomic_dec_and_lock(&node->refcnt, &tree->hash_lock)) 627 return; 628 for (i = 0; i < tree->pages_per_bnode; i++) {
+1 -2
fs/hfsplus/btree.c
··· 269 u8 *data, byte, m; 270 271 dprint(DBG_BNODE_MOD, "btree_free_node: %u\n", node->this); 272 - if (!node->this) 273 - BUG(); 274 tree = node->tree; 275 nidx = node->this; 276 node = hfs_bnode_find(tree, 0);
··· 269 u8 *data, byte, m; 270 271 dprint(DBG_BNODE_MOD, "btree_free_node: %u\n", node->this); 272 + BUG_ON(!node->this); 273 tree = node->tree; 274 nidx = node->this; 275 node = hfs_bnode_find(tree, 0);
+5 -10
fs/inode.c
··· 172 173 void destroy_inode(struct inode *inode) 174 { 175 - if (inode_has_buffers(inode)) 176 - BUG(); 177 security_inode_free(inode); 178 if (inode->i_sb->s_op->destroy_inode) 179 inode->i_sb->s_op->destroy_inode(inode); ··· 248 might_sleep(); 249 invalidate_inode_buffers(inode); 250 251 - if (inode->i_data.nrpages) 252 - BUG(); 253 - if (!(inode->i_state & I_FREEING)) 254 - BUG(); 255 - if (inode->i_state & I_CLEAR) 256 - BUG(); 257 wait_on_inode(inode); 258 DQUOT_DROP(inode); 259 if (inode->i_sb && inode->i_sb->s_op->clear_inode) ··· 1050 hlist_del_init(&inode->i_hash); 1051 spin_unlock(&inode_lock); 1052 wake_up_inode(inode); 1053 - if (inode->i_state != I_CLEAR) 1054 - BUG(); 1055 destroy_inode(inode); 1056 } 1057
··· 172 173 void destroy_inode(struct inode *inode) 174 { 175 + BUG_ON(inode_has_buffers(inode)); 176 security_inode_free(inode); 177 if (inode->i_sb->s_op->destroy_inode) 178 inode->i_sb->s_op->destroy_inode(inode); ··· 249 might_sleep(); 250 invalidate_inode_buffers(inode); 251 252 + BUG_ON(inode->i_data.nrpages); 253 + BUG_ON(!(inode->i_state & I_FREEING)); 254 + BUG_ON(inode->i_state & I_CLEAR); 255 wait_on_inode(inode); 256 DQUOT_DROP(inode); 257 if (inode->i_sb && inode->i_sb->s_op->clear_inode) ··· 1054 hlist_del_init(&inode->i_hash); 1055 spin_unlock(&inode_lock); 1056 wake_up_inode(inode); 1057 + BUG_ON(inode->i_state != I_CLEAR); 1058 destroy_inode(inode); 1059 } 1060
+1 -2
fs/jffs2/background.c
··· 35 pid_t pid; 36 int ret = 0; 37 38 - if (c->gc_task) 39 - BUG(); 40 41 init_completion(&c->gc_thread_start); 42 init_completion(&c->gc_thread_exit);
··· 35 pid_t pid; 36 int ret = 0; 37 38 + BUG_ON(c->gc_task); 39 40 init_completion(&c->gc_thread_start); 41 init_completion(&c->gc_thread_exit);
+2 -4
fs/smbfs/file.c
··· 178 unsigned offset = PAGE_CACHE_SIZE; 179 int err; 180 181 - if (!mapping) 182 - BUG(); 183 inode = mapping->host; 184 - if (!inode) 185 - BUG(); 186 187 end_index = inode->i_size >> PAGE_CACHE_SHIFT; 188
··· 178 unsigned offset = PAGE_CACHE_SIZE; 179 int err; 180 181 + BUG_ON(!mapping); 182 inode = mapping->host; 183 + BUG_ON(!inode); 184 185 end_index = inode->i_size >> PAGE_CACHE_SHIFT; 186
+1 -1
fs/sysfs/dir.c
··· 50 return sd; 51 } 52 53 - /** 54 * 55 * Return -EEXIST if there is already a sysfs element with the same name for 56 * the same parent.
··· 50 return sd; 51 } 52 53 + /* 54 * 55 * Return -EEXIST if there is already a sysfs element with the same name for 56 * the same parent.
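The fs/sysfs/dir.c change above demotes a comment opener from /** to /*. The double-star form marks a comment for the kernel-doc tooling (scripts/kernel-doc), and a /** comment that does not follow the expected shape is what produces the "minor kernel-doc warnings" this series fixes. A hypothetical example of the shape the scanner expects (name and text are invented):

    /**
     * my_function - one-line summary of what it does
     * @arg: what the parameter means
     *
     * Free-form body text. A comment opened with a plain slash-star
     * is skipped by the scanner entirely, which is the fix used here.
     */
    int my_function(int arg);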
+1 -2
fs/sysfs/inode.c
··· 175 struct bin_attribute * bin_attr; 176 struct sysfs_symlink * sl; 177 178 - if (!sd || !sd->s_element) 179 - BUG(); 180 181 switch (sd->s_type) { 182 case SYSFS_DIR:
··· 175 struct bin_attribute * bin_attr; 176 struct sysfs_symlink * sl; 177 178 + BUG_ON(!sd || !sd->s_element); 179 180 switch (sd->s_type) { 181 case SYSFS_DIR:
+2 -4
fs/sysv/dir.c
··· 253 254 lock_page(page); 255 err = mapping->a_ops->prepare_write(NULL, page, from, to); 256 - if (err) 257 - BUG(); 258 de->inode = 0; 259 err = dir_commit_chunk(page, from, to); 260 dir_put_page(page); ··· 352 353 lock_page(page); 354 err = page->mapping->a_ops->prepare_write(NULL, page, from, to); 355 - if (err) 356 - BUG(); 357 de->inode = cpu_to_fs16(SYSV_SB(inode->i_sb), inode->i_ino); 358 err = dir_commit_chunk(page, from, to); 359 dir_put_page(page);
··· 253 254 lock_page(page); 255 err = mapping->a_ops->prepare_write(NULL, page, from, to); 256 + BUG_ON(err); 257 de->inode = 0; 258 err = dir_commit_chunk(page, from, to); 259 dir_put_page(page); ··· 353 354 lock_page(page); 355 err = page->mapping->a_ops->prepare_write(NULL, page, from, to); 356 + BUG_ON(err); 357 de->inode = cpu_to_fs16(SYSV_SB(inode->i_sb), inode->i_ino); 358 err = dir_commit_chunk(page, from, to); 359 dir_put_page(page);
+2 -4
fs/udf/inode.c
··· 312 err = 0; 313 314 bh = inode_getblk(inode, block, &err, &phys, &new); 315 - if (bh) 316 - BUG(); 317 if (err) 318 goto abort; 319 - if (!phys) 320 - BUG(); 321 322 if (new) 323 set_buffer_new(bh_result);
··· 312 err = 0; 313 314 bh = inode_getblk(inode, block, &err, &phys, &new); 315 + BUG_ON(bh); 316 if (err) 317 goto abort; 318 + BUG_ON(!phys); 319 320 if (new) 321 set_buffer_new(bh_result);
+1 -1
include/linux/fs.h
··· 864 */ 865 struct mutex s_vfs_rename_mutex; /* Kludge */ 866 867 - /* Granuality of c/m/atime in ns. 868 Cannot be worse than a second */ 869 u32 s_time_gran; 870 };
··· 864 */ 865 struct mutex s_vfs_rename_mutex; /* Kludge */ 866 867 + /* Granularity of c/m/atime in ns. 868 Cannot be worse than a second */ 869 u32 s_time_gran; 870 };
+1 -1
include/linux/hrtimer.h
··· 80 * @first: pointer to the timer node which expires first 81 * @resolution: the resolution of the clock, in nanoseconds 82 * @get_time: function to retrieve the current time of the clock 83 - * @get_sofirq_time: function to retrieve the current time from the softirq 84 * @curr_timer: the timer which is executing a callback right now 85 * @softirq_time: the time when running the hrtimer queue in the softirq 86 */
··· 80 * @first: pointer to the timer node which expires first 81 * @resolution: the resolution of the clock, in nanoseconds 82 * @get_time: function to retrieve the current time of the clock 83 + * @get_softirq_time: function to retrieve the current time from the softirq 84 * @curr_timer: the timer which is executing a callback right now 85 * @softirq_time: the time when running the hrtimer queue in the softirq 86 */
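Relatedly, kernel-doc cross-checks each @name line against the actual struct members (or function parameters), so a misspelling like @get_sofirq_time above is flagged as documenting something that does not exist. A sketch with an invented struct:

    /**
     * struct example_base - invented struct, for illustration only
     * @get_time:         must match a real member name...
     * @get_softirq_time: ...or kernel-doc warns about the line
     */
    struct example_base {
            ktime_t (*get_time)(void);
            ktime_t (*get_softirq_time)(void);
    };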
+7 -8
ipc/shm.c
··· 91 static inline void shm_inc (int id) { 92 struct shmid_kernel *shp; 93 94 - if(!(shp = shm_lock(id))) 95 - BUG(); 96 shp->shm_atim = get_seconds(); 97 shp->shm_lprid = current->tgid; 98 shp->shm_nattch++; ··· 142 143 mutex_lock(&shm_ids.mutex); 144 /* remove from the list of attaches of the shm segment */ 145 - if(!(shp = shm_lock(id))) 146 - BUG(); 147 shp->shm_lprid = current->tgid; 148 shp->shm_dtim = get_seconds(); 149 shp->shm_nattch--; ··· 283 err = -EEXIST; 284 } else { 285 shp = shm_lock(id); 286 - if(shp==NULL) 287 - BUG(); 288 if (shp->shm_segsz < size) 289 err = -EINVAL; 290 else if (ipcperms(&shp->shm_perm, shmflg)) ··· 773 up_write(&current->mm->mmap_sem); 774 775 mutex_lock(&shm_ids.mutex); 776 - if(!(shp = shm_lock(shmid))) 777 - BUG(); 778 shp->shm_nattch--; 779 if(shp->shm_nattch == 0 && 780 shp->shm_perm.mode & SHM_DEST)
··· 91 static inline void shm_inc (int id) { 92 struct shmid_kernel *shp; 93 94 + shp = shm_lock(id); 95 + BUG_ON(!shp); 96 shp->shm_atim = get_seconds(); 97 shp->shm_lprid = current->tgid; 98 shp->shm_nattch++; ··· 142 143 mutex_lock(&shm_ids.mutex); 144 /* remove from the list of attaches of the shm segment */ 145 + shp = shm_lock(id); 146 + BUG_ON(!shp); 147 shp->shm_lprid = current->tgid; 148 shp->shm_dtim = get_seconds(); 149 shp->shm_nattch--; ··· 283 err = -EEXIST; 284 } else { 285 shp = shm_lock(id); 286 + BUG_ON(shp==NULL); 287 if (shp->shm_segsz < size) 288 err = -EINVAL; 289 else if (ipcperms(&shp->shm_perm, shmflg)) ··· 774 up_write(&current->mm->mmap_sem); 775 776 mutex_lock(&shm_ids.mutex); 777 + shp = shm_lock(shmid); 778 + BUG_ON(!shp); 779 shp->shm_nattch--; 780 if(shp->shm_nattch == 0 && 781 shp->shm_perm.mode & SHM_DEST)
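One wrinkle in the ipc/shm.c hunks above: where the old code buried an assignment inside the tested condition, the conversion hoists it into its own statement first. A condition with side effects is easy to misread inside an assertion-style macro, so the call is kept unconditional and BUG_ON() is left as a pure check. A generic sketch, using a hypothetical lookup() helper:

    /* Avoid: the side effect hides inside the assertion. */
    BUG_ON(!(obj = lookup(id)));

    /* Prefer: the call is its own statement, the macro only checks. */
    obj = lookup(id);
    BUG_ON(!obj);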
+2 -4
ipc/util.c
··· 266 { 267 struct kern_ipc_perm* p; 268 int lid = id % SEQ_MULTIPLIER; 269 - if(lid >= ids->entries->size) 270 - BUG(); 271 272 /* 273 * do not need a rcu_dereference()() here to force ordering ··· 274 */ 275 p = ids->entries->p[lid]; 276 ids->entries->p[lid] = NULL; 277 - if(p==NULL) 278 - BUG(); 279 ids->in_use--; 280 281 if (lid == ids->max_id) {
··· 266 { 267 struct kern_ipc_perm* p; 268 int lid = id % SEQ_MULTIPLIER; 269 + BUG_ON(lid >= ids->entries->size); 270 271 /* 272 * do not need a rcu_dereference()() here to force ordering ··· 275 */ 276 p = ids->entries->p[lid]; 277 ids->entries->p[lid] = NULL; 278 + BUG_ON(p==NULL); 279 ids->in_use--; 280 281 if (lid == ids->max_id) {
+1 -1
kernel/power/Kconfig
··· 41 depends on PM && SWAP && (X86 && (!SMP || SUSPEND_SMP)) || ((FRV || PPC32) && !SMP) 42 ---help--- 43 Enable the possibility of suspending the machine. 44 - It doesn't need APM. 45 You may suspend your machine by 'swsusp' or 'shutdown -z <time>' 46 (patch for sysvinit needed). 47
··· 41 depends on PM && SWAP && (X86 && (!SMP || SUSPEND_SMP)) || ((FRV || PPC32) && !SMP) 42 ---help--- 43 Enable the possibility of suspending the machine. 44 + It doesn't need ACPI or APM. 45 You may suspend your machine by 'swsusp' or 'shutdown -z <time>' 46 (patch for sysvinit needed). 47
+2 -4
kernel/printk.c
··· 360 unsigned long cur_index, start_print; 361 static int msg_level = -1; 362 363 - if (((long)(start - end)) > 0) 364 - BUG(); 365 366 cur_index = start; 367 start_print = start; ··· 707 */ 708 void acquire_console_sem(void) 709 { 710 - if (in_interrupt()) 711 - BUG(); 712 down(&console_sem); 713 console_locked = 1; 714 console_may_schedule = 1;
··· 360 unsigned long cur_index, start_print; 361 static int msg_level = -1; 362 363 + BUG_ON(((long)(start - end)) > 0); 364 365 cur_index = start; 366 start_print = start; ··· 708 */ 709 void acquire_console_sem(void) 710 { 711 + BUG_ON(in_interrupt()); 712 down(&console_sem); 713 console_locked = 1; 714 console_may_schedule = 1;
+1 -2
kernel/ptrace.c
··· 30 */ 31 void __ptrace_link(task_t *child, task_t *new_parent) 32 { 33 - if (!list_empty(&child->ptrace_list)) 34 - BUG(); 35 if (child->parent == new_parent) 36 return; 37 list_add(&child->ptrace_list, &child->parent->ptrace_children);
··· 30 */ 31 void __ptrace_link(task_t *child, task_t *new_parent) 32 { 33 + BUG_ON(!list_empty(&child->ptrace_list)); 34 if (child->parent == new_parent) 35 return; 36 list_add(&child->ptrace_list, &child->parent->ptrace_children);
+2 -4
kernel/signal.c
··· 769 { 770 int ret = 0; 771 772 - if (!irqs_disabled()) 773 - BUG(); 774 assert_spin_locked(&t->sighand->siglock); 775 776 /* Short-circuit ignored signals. */ ··· 1383 * the overrun count. Other uses should not try to 1384 * send the signal multiple times. 1385 */ 1386 - if (q->info.si_code != SI_TIMER) 1387 - BUG(); 1388 q->info.si_overrun++; 1389 goto out; 1390 }
··· 769 { 770 int ret = 0; 771 772 + BUG_ON(!irqs_disabled()); 773 assert_spin_locked(&t->sighand->siglock); 774 775 /* Short-circuit ignored signals. */ ··· 1384 * the overrun count. Other uses should not try to 1385 * send the signal multiple times. 1386 */ 1387 + BUG_ON(q->info.si_code != SI_TIMER); 1388 q->info.si_overrun++; 1389 goto out; 1390 }
+4 -4
kernel/time.c
··· 410 * current_fs_time - Return FS time 411 * @sb: Superblock. 412 * 413 - * Return the current time truncated to the time granuality supported by 414 * the fs. 415 */ 416 struct timespec current_fs_time(struct super_block *sb) ··· 421 EXPORT_SYMBOL(current_fs_time); 422 423 /** 424 - * timespec_trunc - Truncate timespec to a granuality 425 * @t: Timespec 426 - * @gran: Granuality in ns. 427 * 428 - * Truncate a timespec to a granuality. gran must be smaller than a second. 429 * Always rounds down. 430 * 431 * This function should be only used for timestamps returned by
··· 410 * current_fs_time - Return FS time 411 * @sb: Superblock. 412 * 413 + * Return the current time truncated to the time granularity supported by 414 * the fs. 415 */ 416 struct timespec current_fs_time(struct super_block *sb) ··· 421 EXPORT_SYMBOL(current_fs_time); 422 423 /** 424 + * timespec_trunc - Truncate timespec to a granularity 425 * @t: Timespec 426 + * @gran: Granularity in ns. 427 * 428 + * Truncate a timespec to a granularity. gran must be smaller than a second. 429 * Always rounds down. 430 * 431 * This function should be only used for timestamps returned by
+1 -2
kernel/timer.c
··· 1479 unsigned long flags; 1480 1481 /* Sanity check */ 1482 - if (ti->frequency == 0 || ti->mask == 0) 1483 - BUG(); 1484 1485 ti->nsec_per_cyc = ((u64)NSEC_PER_SEC << ti->shift) / ti->frequency; 1486 spin_lock(&time_interpolator_lock);
··· 1479 unsigned long flags; 1480 1481 /* Sanity check */ 1482 + BUG_ON(ti->frequency == 0 || ti->mask == 0); 1483 1484 ti->nsec_per_cyc = ((u64)NSEC_PER_SEC << ti->shift) / ti->frequency; 1485 spin_lock(&time_interpolator_lock);
+5 -10
mm/highmem.c
··· 74 pkmap_count[i] = 0; 75 76 /* sanity check */ 77 - if (pte_none(pkmap_page_table[i])) 78 - BUG(); 79 80 /* 81 * Don't need an atomic fetch-and-clear op here; ··· 157 if (!vaddr) 158 vaddr = map_new_virtual(page); 159 pkmap_count[PKMAP_NR(vaddr)]++; 160 - if (pkmap_count[PKMAP_NR(vaddr)] < 2) 161 - BUG(); 162 spin_unlock(&kmap_lock); 163 return (void*) vaddr; 164 } ··· 172 173 spin_lock(&kmap_lock); 174 vaddr = (unsigned long)page_address(page); 175 - if (!vaddr) 176 - BUG(); 177 nr = PKMAP_NR(vaddr); 178 179 /* ··· 217 return 0; 218 219 page_pool = mempool_create_page_pool(POOL_SIZE, 0); 220 - if (!page_pool) 221 - BUG(); 222 printk("highmem bounce pool size: %d pages\n", POOL_SIZE); 223 224 return 0; ··· 260 261 isa_page_pool = mempool_create(ISA_POOL_SIZE, mempool_alloc_pages_isa, 262 mempool_free_pages, (void *) 0); 263 - if (!isa_page_pool) 264 - BUG(); 265 266 printk("isa bounce pool size: %d pages\n", ISA_POOL_SIZE); 267 return 0;
··· 74 pkmap_count[i] = 0; 75 76 /* sanity check */ 77 + BUG_ON(pte_none(pkmap_page_table[i])); 78 79 /* 80 * Don't need an atomic fetch-and-clear op here; ··· 158 if (!vaddr) 159 vaddr = map_new_virtual(page); 160 pkmap_count[PKMAP_NR(vaddr)]++; 161 + BUG_ON(pkmap_count[PKMAP_NR(vaddr)] < 2); 162 spin_unlock(&kmap_lock); 163 return (void*) vaddr; 164 } ··· 174 175 spin_lock(&kmap_lock); 176 vaddr = (unsigned long)page_address(page); 177 + BUG_ON(!vaddr); 178 nr = PKMAP_NR(vaddr); 179 180 /* ··· 220 return 0; 221 222 page_pool = mempool_create_page_pool(POOL_SIZE, 0); 223 + BUG_ON(!page_pool); 224 printk("highmem bounce pool size: %d pages\n", POOL_SIZE); 225 226 return 0; ··· 264 265 isa_page_pool = mempool_create(ISA_POOL_SIZE, mempool_alloc_pages_isa, 266 mempool_free_pages, (void *) 0); 267 + BUG_ON(!isa_page_pool); 268 269 printk("isa bounce pool size: %d pages\n", ISA_POOL_SIZE); 270 return 0;
+3 -6
mm/mmap.c
··· 294 i = browse_rb(&mm->mm_rb); 295 if (i != mm->map_count) 296 printk("map_count %d rb %d\n", mm->map_count, i), bug = 1; 297 - if (bug) 298 - BUG(); 299 } 300 #else 301 #define validate_mm(mm) do { } while (0) ··· 431 struct rb_node ** rb_link, * rb_parent; 432 433 __vma = find_vma_prepare(mm, vma->vm_start,&prev, &rb_link, &rb_parent); 434 - if (__vma && __vma->vm_start < vma->vm_end) 435 - BUG(); 436 __vma_link(mm, vma, prev, rb_link, rb_parent); 437 mm->map_count++; 438 } ··· 811 * (e.g. stash info in next's anon_vma_node when assigning 812 * an anon_vma, or when trying vma_merge). Another time. 813 */ 814 - if (find_vma_prev(vma->vm_mm, vma->vm_start, &near) != vma) 815 - BUG(); 816 if (!near) 817 goto none; 818
··· 294 i = browse_rb(&mm->mm_rb); 295 if (i != mm->map_count) 296 printk("map_count %d rb %d\n", mm->map_count, i), bug = 1; 297 + BUG_ON(bug); 298 } 299 #else 300 #define validate_mm(mm) do { } while (0) ··· 432 struct rb_node ** rb_link, * rb_parent; 433 434 __vma = find_vma_prepare(mm, vma->vm_start,&prev, &rb_link, &rb_parent); 435 + BUG_ON(__vma && __vma->vm_start < vma->vm_end); 436 __vma_link(mm, vma, prev, rb_link, rb_parent); 437 mm->map_count++; 438 } ··· 813 * (e.g. stash info in next's anon_vma_node when assigning 814 * an anon_vma, or when trying vma_merge). Another time. 815 */ 816 + BUG_ON(find_vma_prev(vma->vm_mm, vma->vm_start, &near) != vma); 817 if (!near) 818 goto none; 819
+1 -1
mm/page-writeback.c
··· 258 /** 259 * balance_dirty_pages_ratelimited_nr - balance dirty memory state 260 * @mapping: address_space which was dirtied 261 - * @nr_pages: number of pages which the caller has just dirtied 262 * 263 * Processes which are dirtying memory should call in here once for each page 264 * which was newly dirtied. The function will periodically check the system's
··· 258 /** 259 * balance_dirty_pages_ratelimited_nr - balance dirty memory state 260 * @mapping: address_space which was dirtied 261 + * @nr_pages_dirtied: number of pages which the caller has just dirtied 262 * 263 * Processes which are dirtying memory should call in here once for each page 264 * which was newly dirtied. The function will periodically check the system's
+6 -12
mm/slab.c
··· 1297 if (cache_cache.num) 1298 break; 1299 } 1300 - if (!cache_cache.num) 1301 - BUG(); 1302 cache_cache.gfporder = order; 1303 cache_cache.colour = left_over / cache_cache.colour_off; 1304 cache_cache.slab_size = ALIGN(cache_cache.num * sizeof(kmem_bufctl_t) + ··· 1973 * Always checks flags, a caller might be expecting debug support which 1974 * isn't available. 1975 */ 1976 - if (flags & ~CREATE_MASK) 1977 - BUG(); 1978 1979 /* 1980 * Check that size is in terms of words. This is needed to avoid ··· 2204 2205 slabp = list_entry(l3->slabs_free.prev, struct slab, list); 2206 #if DEBUG 2207 - if (slabp->inuse) 2208 - BUG(); 2209 #endif 2210 list_del(&slabp->list); 2211 ··· 2245 */ 2246 int kmem_cache_shrink(struct kmem_cache *cachep) 2247 { 2248 - if (!cachep || in_interrupt()) 2249 - BUG(); 2250 2251 return __cache_shrink(cachep); 2252 } ··· 2273 int i; 2274 struct kmem_list3 *l3; 2275 2276 - if (!cachep || in_interrupt()) 2277 - BUG(); 2278 2279 /* Don't let CPUs to come and go */ 2280 lock_cpu_hotplug(); ··· 2472 * Be lazy and only check for valid flags here, keeping it out of the 2473 * critical path in kmem_cache_alloc(). 2474 */ 2475 - if (flags & ~(SLAB_DMA | SLAB_LEVEL_MASK | SLAB_NO_GROW)) 2476 - BUG(); 2477 if (flags & SLAB_NO_GROW) 2478 return 0; 2479
··· 1297 if (cache_cache.num) 1298 break; 1299 } 1300 + BUG_ON(!cache_cache.num); 1301 cache_cache.gfporder = order; 1302 cache_cache.colour = left_over / cache_cache.colour_off; 1303 cache_cache.slab_size = ALIGN(cache_cache.num * sizeof(kmem_bufctl_t) + ··· 1974 * Always checks flags, a caller might be expecting debug support which 1975 * isn't available. 1976 */ 1977 + BUG_ON(flags & ~CREATE_MASK); 1978 1979 /* 1980 * Check that size is in terms of words. This is needed to avoid ··· 2206 2207 slabp = list_entry(l3->slabs_free.prev, struct slab, list); 2208 #if DEBUG 2209 + BUG_ON(slabp->inuse); 2210 #endif 2211 list_del(&slabp->list); 2212 ··· 2248 */ 2249 int kmem_cache_shrink(struct kmem_cache *cachep) 2250 { 2251 + BUG_ON(!cachep || in_interrupt()); 2252 2253 return __cache_shrink(cachep); 2254 } ··· 2277 int i; 2278 struct kmem_list3 *l3; 2279 2280 + BUG_ON(!cachep || in_interrupt()); 2281 2282 /* Don't let CPUs to come and go */ 2283 lock_cpu_hotplug(); ··· 2477 * Be lazy and only check for valid flags here, keeping it out of the 2478 * critical path in kmem_cache_alloc(). 2479 */ 2480 + BUG_ON(flags & ~(SLAB_DMA | SLAB_LEVEL_MASK | SLAB_NO_GROW)); 2481 if (flags & SLAB_NO_GROW) 2482 return 0; 2483
+1 -2
mm/swap_state.c
··· 148 swp_entry_t entry; 149 int err; 150 151 - if (!PageLocked(page)) 152 - BUG(); 153 154 for (;;) { 155 entry = get_swap_page();
··· 148 swp_entry_t entry; 149 int err; 150 151 + BUG_ON(!PageLocked(page)); 152 153 for (;;) { 154 entry = get_swap_page();
+1 -2
mm/vmalloc.c
··· 321 int i; 322 323 for (i = 0; i < area->nr_pages; i++) { 324 - if (unlikely(!area->pages[i])) 325 - BUG(); 326 __free_page(area->pages[i]); 327 } 328
··· 321 int i; 322 323 for (i = 0; i < area->nr_pages; i++) { 324 + BUG_ON(!area->pages[i]); 325 __free_page(area->pages[i]); 326 } 327