Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial

* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial: (48 commits)
Documentation: fix minor kernel-doc warnings
BUG_ON() Conversion in drivers/net/
BUG_ON() Conversion in drivers/s390/net/lcs.c
BUG_ON() Conversion in mm/slab.c
BUG_ON() Conversion in mm/highmem.c
BUG_ON() Conversion in kernel/signal.c
BUG_ON() Conversion in kernel/signal.c
BUG_ON() Conversion in kernel/ptrace.c
BUG_ON() Conversion in ipc/shm.c
BUG_ON() Conversion in fs/freevxfs/
BUG_ON() Conversion in fs/udf/
BUG_ON() Conversion in fs/sysv/
BUG_ON() Conversion in fs/inode.c
BUG_ON() Conversion in fs/fcntl.c
BUG_ON() Conversion in fs/dquot.c
BUG_ON() Conversion in md/raid10.c
BUG_ON() Conversion in md/raid6main.c
BUG_ON() Conversion in md/raid5.c
Fix minor documentation typo
BFP->BPF in Documentation/networking/tuntap.txt
...
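
Nearly everything below follows one mechanical pattern: an open-coded
check such as "if (cond) BUG();" becomes the single-line "BUG_ON(cond);".
A minimal userspace sketch of the idiom - BUG()/BUG_ON() here are toy
stand-ins for the kernel macros from <asm/bug.h>, reduced to abort() so
the example compiles outside the kernel:

    #include <stdio.h>
    #include <stdlib.h>

    /* toy stand-ins for the kernel's BUG()/BUG_ON() */
    #define BUG() do { \
            fprintf(stderr, "BUG at %s:%d\n", __FILE__, __LINE__); \
            abort(); \
    } while (0)

    #define BUG_ON(cond) do { if (cond) BUG(); } while (0)

    int main(void)
    {
            void *p = malloc(16);

            /* before: if (!p) BUG(); */
            BUG_ON(!p);     /* after: one line, same crash-with-location */

            free(p);
            return 0;
    }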

260 insertions(+), 367 deletions(-)
+1 -1
Documentation/DocBook/Makefile
··· 2 2 # This makefile is used to generate the kernel documentation, 3 3 # primarily based on in-line comments in various source files. 4 4 # See Documentation/kernel-doc-nano-HOWTO.txt for instruction in how 5 - # to ducument the SRC - and how to read it. 5 + # to document the SRC - and how to read it. 6 6 # To add a new book the only step required is to add the book to the 7 7 # list of DOCBOOKS. 8 8
-1
Documentation/DocBook/kernel-api.tmpl
··· 322 322 <chapter id="sysfs"> 323 323 <title>The Filesystem for Exporting Kernel Objects</title> 324 324 !Efs/sysfs/file.c 325 - !Efs/sysfs/dir.c 326 325 !Efs/sysfs/symlink.c 327 326 !Efs/sysfs/bin.c 328 327 </chapter>
+1 -1
Documentation/acpi-hotkey.txt
··· 30 30 echo "event_num:event_type:event_argument" > 31 31 /proc/acpi/hotkey/action. 32 32 The result of the execution of this aml method is 33 - attached to /proc/acpi/hotkey/poll_method, which is dnyamically 33 + attached to /proc/acpi/hotkey/poll_method, which is dynamically 34 34 created. Please use command "cat /proc/acpi/hotkey/polling_method" 35 35 to retrieve it. 36 36
+108 -80
Documentation/fujitsu/frv/kernel-ABI.txt
··· 1 - ================================= 2 - INTERNAL KERNEL ABI FOR FR-V ARCH 3 - ================================= 1 + ================================= 2 + INTERNAL KERNEL ABI FOR FR-V ARCH 3 + ================================= 4 4 5 - The internal FRV kernel ABI is not quite the same as the userspace ABI. A number of the registers 6 - are used for special purposed, and the ABI is not consistent between modules vs core, and MMU vs 7 - no-MMU. 5 + The internal FRV kernel ABI is not quite the same as the userspace ABI. A 6 + number of the registers are used for special purposed, and the ABI is not 7 + consistent between modules vs core, and MMU vs no-MMU. 8 8 9 - This partly stems from the fact that FRV CPUs do not have a separate supervisor stack pointer, and 10 - most of them do not have any scratch registers, thus requiring at least one general purpose 11 - register to be clobbered in such an event. Also, within the kernel core, it is possible to simply 12 - jump or call directly between functions using a relative offset. This cannot be extended to modules 13 - for the displacement is likely to be too far. Thus in modules the address of a function to call 14 - must be calculated in a register and then used, requiring two extra instructions. 9 + This partly stems from the fact that FRV CPUs do not have a separate 10 + supervisor stack pointer, and most of them do not have any scratch 11 + registers, thus requiring at least one general purpose register to be 12 + clobbered in such an event. Also, within the kernel core, it is possible to 13 + simply jump or call directly between functions using a relative offset. 14 + This cannot be extended to modules for the displacement is likely to be too 15 + far. Thus in modules the address of a function to call must be calculated 16 + in a register and then used, requiring two extra instructions. 15 17 16 18 This document has the following sections: 17 19 ··· 41 39 CPU OPERATING MODES 42 40 =================== 43 41 44 - The FR-V CPU has three basic operating modes. In order of increasing capability: 42 + The FR-V CPU has three basic operating modes. In order of increasing 43 + capability: 45 44 46 45 (1) User mode. 47 46 ··· 50 47 51 48 (2) Kernel mode. 52 49 53 - Normal kernel mode. There are many additional control registers available that may be 54 - accessed in this mode, in addition to all the stuff available to user mode. This has two 55 - submodes: 50 + Normal kernel mode. There are many additional control registers 51 + available that may be accessed in this mode, in addition to all the 52 + stuff available to user mode. This has two submodes: 56 53 57 54 (a) Exceptions enabled (PSR.T == 1). 58 55 59 - Exceptions will invoke the appropriate normal kernel mode handler. On entry to the 60 - handler, the PSR.T bit will be cleared. 56 + Exceptions will invoke the appropriate normal kernel mode 57 + handler. On entry to the handler, the PSR.T bit will be cleared. 61 58 62 59 (b) Exceptions disabled (PSR.T == 0). 63 60 64 - No exceptions or interrupts may happen. Any mandatory exceptions will cause the CPU to 65 - halt unless the CPU is told to jump into debug mode instead. 61 + No exceptions or interrupts may happen. Any mandatory exceptions 62 + will cause the CPU to halt unless the CPU is told to jump into 63 + debug mode instead. 66 64 67 65 (3) Debug mode. 68 66 69 - No exceptions may happen in this mode. Memory protection and management exceptions will be 70 - flagged for later consideration, but the exception handler won't be invoked. 
Debugging traps 71 - such as hardware breakpoints and watchpoints will be ignored. This mode is entered only by 72 - debugging events obtained from the other two modes. 67 + No exceptions may happen in this mode. Memory protection and 68 + management exceptions will be flagged for later consideration, but 69 + the exception handler won't be invoked. Debugging traps such as 70 + hardware breakpoints and watchpoints will be ignored. This mode is 71 + entered only by debugging events obtained from the other two modes. 73 72 74 - All kernel mode registers may be accessed, plus a few extra debugging specific registers. 73 + All kernel mode registers may be accessed, plus a few extra debugging 74 + specific registers. 75 75 76 76 77 77 ================================= 78 78 INTERNAL KERNEL-MODE REGISTER ABI 79 79 ================================= 80 80 81 - There are a number of permanent register assignments that are set up by entry.S in the exception 82 - prologue. Note that there is a complete set of exception prologues for each of user->kernel 83 - transition and kernel->kernel transition. There are also user->debug and kernel->debug mode 84 - transition prologues. 81 + There are a number of permanent register assignments that are set up by 82 + entry.S in the exception prologue. Note that there is a complete set of 83 + exception prologues for each of user->kernel transition and kernel->kernel 84 + transition. There are also user->debug and kernel->debug mode transition 85 + prologues. 85 86 86 87 87 88 REGISTER FLAVOUR USE 88 - =============== ======= ==================================================== 89 + =============== ======= ============================================== 89 90 GR1 Supervisor stack pointer 90 91 GR15 Current thread info pointer 91 92 GR16 GP-Rel base register for small data ··· 99 92 GR31 NOMMU Destroyed by debug mode entry 100 93 GR31 MMU Destroyed by TLB miss kernel mode entry 101 94 CCR.ICC2 Virtual interrupt disablement tracking 102 - CCCR.CC3 Cleared by exception prologue (atomic op emulation) 95 + CCCR.CC3 Cleared by exception prologue 96 + (atomic op emulation) 103 97 SCR0 MMU See mmu-layout.txt. 104 98 SCR1 MMU See mmu-layout.txt. 105 - SCR2 MMU Save for EAR0 (destroyed by icache insns in debug mode) 99 + SCR2 MMU Save for EAR0 (destroyed by icache insns 100 + in debug mode) 106 101 SCR3 MMU Save for GR31 during debug exceptions 107 102 DAMR/IAMR NOMMU Fixed memory protection layout. 108 103 DAMR/IAMR MMU See mmu-layout.txt. 
··· 113 104 Certain registers are also used or modified across function calls: 114 105 115 106 REGISTER CALL RETURN 116 - =============== =============================== =============================== 107 + =============== =============================== ====================== 117 108 GR0 Fixed Zero - 118 109 GR2 Function call frame pointer 119 110 GR3 Special Preserved 120 111 GR3-GR7 - Clobbered 121 - GR8 Function call arg #1 Return value (or clobbered) 122 - GR9 Function call arg #2 Return value MSW (or clobbered) 112 + GR8 Function call arg #1 Return value 113 + (or clobbered) 114 + GR9 Function call arg #2 Return value MSW 115 + (or clobbered) 123 116 GR10-GR13 Function call arg #3-#6 Clobbered 124 117 GR14 - Clobbered 125 118 GR15-GR16 Special Preserved 126 119 GR17-GR27 - Preserved 127 - GR28-GR31 Special Only accessed explicitly 120 + GR28-GR31 Special Only accessed 121 + explicitly 128 122 LR Return address after CALL Clobbered 129 123 CCR/CCCR - Mostly Clobbered 130 124 ··· 136 124 INTERNAL DEBUG-MODE REGISTER ABI 137 125 ================================ 138 126 139 - This is the same as the kernel-mode register ABI for functions calls. The difference is that in 140 - debug-mode there's a different stack and a different exception frame. Almost all the global 141 - registers from kernel-mode (including the stack pointer) may be changed. 127 + This is the same as the kernel-mode register ABI for functions calls. The 128 + difference is that in debug-mode there's a different stack and a different 129 + exception frame. Almost all the global registers from kernel-mode 130 + (including the stack pointer) may be changed. 142 131 143 132 REGISTER FLAVOUR USE 144 - =============== ======= ==================================================== 133 + =============== ======= ============================================== 145 134 GR1 Debug stack pointer 146 135 GR16 GP-Rel base register for small data 147 - GR31 Current debug exception frame pointer (__debug_frame) 136 + GR31 Current debug exception frame pointer 137 + (__debug_frame) 148 138 SCR3 MMU Saved value of GR31 149 139 150 140 151 - Note that debug mode is able to interfere with the kernel's emulated atomic ops, so it must be 152 - exceedingly careful not to do any that would interact with the main kernel in this regard. Hence 153 - the debug mode code (gdbstub) is almost completely self-contained. The only external code used is 154 - the sprintf family of functions. 141 + Note that debug mode is able to interfere with the kernel's emulated atomic 142 + ops, so it must be exceedingly careful not to do any that would interact 143 + with the main kernel in this regard. Hence the debug mode code (gdbstub) is 144 + almost completely self-contained. The only external code used is the 145 + sprintf family of functions. 155 146 156 - Futhermore, break.S is so complicated because single-step mode does not switch off on entry to an 157 - exception. That means unless manually disabled, single-stepping will blithely go on stepping into 158 - things like interrupts. See gdbstub.txt for more information. 147 + Futhermore, break.S is so complicated because single-step mode does not 148 + switch off on entry to an exception. That means unless manually disabled, 149 + single-stepping will blithely go on stepping into things like interrupts. 150 + See gdbstub.txt for more information. 
159 151 160 152 161 153 ========================== 162 154 VIRTUAL INTERRUPT HANDLING 163 155 ========================== 164 156 165 - Because accesses to the PSR is so slow, and to disable interrupts we have to access it twice (once 166 - to read and once to write), we don't actually disable interrupts at all if we don't have to. What 167 - we do instead is use the ICC2 condition code flags to note virtual disablement, such that if we 168 - then do take an interrupt, we note the flag, really disable interrupts, set another flag and resume 169 - execution at the point the interrupt happened. Setting condition flags as a side effect of an 170 - arithmetic or logical instruction is really fast. This use of the ICC2 only occurs within the 157 + Because accesses to the PSR is so slow, and to disable interrupts we have 158 + to access it twice (once to read and once to write), we don't actually 159 + disable interrupts at all if we don't have to. What we do instead is use 160 + the ICC2 condition code flags to note virtual disablement, such that if we 161 + then do take an interrupt, we note the flag, really disable interrupts, set 162 + another flag and resume execution at the point the interrupt happened. 163 + Setting condition flags as a side effect of an arithmetic or logical 164 + instruction is really fast. This use of the ICC2 only occurs within the 171 165 kernel - it does not affect userspace. 172 166 173 167 The flags we use are: 174 168 175 169 (*) CCR.ICC2.Z [Zero flag] 176 170 177 - Set to virtually disable interrupts, clear when interrupts are virtually enabled. Can be 178 - modified by logical instructions without affecting the Carry flag. 171 + Set to virtually disable interrupts, clear when interrupts are 172 + virtually enabled. Can be modified by logical instructions without 173 + affecting the Carry flag. 179 174 180 175 (*) CCR.ICC2.C [Carry flag] 181 176 ··· 195 176 196 177 ICC2.Z is 0, ICC2.C is 1. 197 178 198 - (2) An interrupt occurs. The exception prologue examines ICC2.Z and determines that nothing needs 199 - doing. This is done simply with an unlikely BEQ instruction. 179 + (2) An interrupt occurs. The exception prologue examines ICC2.Z and 180 + determines that nothing needs doing. This is done simply with an 181 + unlikely BEQ instruction. 200 182 201 183 (3) The interrupts are disabled (local_irq_disable) 202 184 ··· 207 187 208 188 ICC2.Z would be set to 0. 209 189 210 - A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would be used to trap if 211 - interrupts were now virtually enabled, but physically disabled - which they're not, so the 212 - trap isn't taken. The kernel would then be back to state (1). 190 + A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would 191 + be used to trap if interrupts were now virtually enabled, but 192 + physically disabled - which they're not, so the trap isn't taken. The 193 + kernel would then be back to state (1). 213 194 214 - (5) An interrupt occurs. The exception prologue examines ICC2.Z and determines that the interrupt 215 - shouldn't actually have happened. It jumps aside, and there disabled interrupts by setting 216 - PSR.PIL to 14 and then it clears ICC2.C. 195 + (5) An interrupt occurs. The exception prologue examines ICC2.Z and 196 + determines that the interrupt shouldn't actually have happened. It 197 + jumps aside, and there disabled interrupts by setting PSR.PIL to 14 198 + and then it clears ICC2.C. 
217 199 218 200 (6) If interrupts were then saved and disabled again (local_irq_save): 219 201 220 - ICC2.Z would be shifted into the save variable and masked off (giving a 1). 202 + ICC2.Z would be shifted into the save variable and masked off 203 + (giving a 1). 221 204 222 - ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be unaffected (ie: 0). 205 + ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be 206 + unaffected (ie: 0). 223 207 224 208 (7) If interrupts were then restored from state (6) (local_irq_restore): 225 209 226 - ICC2.Z would be set to indicate the result of XOR'ing the saved value (ie: 1) with 1, which 227 - gives a result of 0 - thus leaving ICC2.Z set. 210 + ICC2.Z would be set to indicate the result of XOR'ing the saved 211 + value (ie: 1) with 1, which gives a result of 0 - thus leaving 212 + ICC2.Z set. 228 213 229 214 ICC2.C would remain unaffected (ie: 0). 230 215 231 - A TIHI #2 instruction would be used to again assay the current state, but this would do 232 - nothing as Z==1. 216 + A TIHI #2 instruction would be used to again assay the current state, 217 + but this would do nothing as Z==1. 233 218 234 219 (8) If interrupts were then enabled (local_irq_enable): 235 220 236 - ICC2.Z would be cleared. ICC2.C would be left unaffected. Both flags would now be 0. 221 + ICC2.Z would be cleared. ICC2.C would be left unaffected. Both 222 + flags would now be 0. 237 223 238 - A TIHI #2 instruction again issued to assay the current state would then trap as both Z==0 239 - [interrupts virtually enabled] and C==0 [interrupts really disabled] would then be true. 224 + A TIHI #2 instruction again issued to assay the current state would 225 + then trap as both Z==0 [interrupts virtually enabled] and C==0 226 + [interrupts really disabled] would then be true. 240 227 241 - (9) The trap #2 handler would simply enable hardware interrupts (set PSR.PIL to 0), set ICC2.C to 242 - 1 and return. 228 + (9) The trap #2 handler would simply enable hardware interrupts 229 + (set PSR.PIL to 0), set ICC2.C to 1 and return. 243 230 244 231 (10) Immediately upon returning, the pending interrupt would be taken. 245 232 246 - (11) The interrupt handler would take the path of actually processing the interrupt (ICC2.Z is 247 - clear, BEQ fails as per step (2)). 233 + (11) The interrupt handler would take the path of actually processing the 234 + interrupt (ICC2.Z is clear, BEQ fails as per step (2)). 248 235 249 - (12) The interrupt handler would then set ICC2.C to 1 since hardware interrupts are definitely 250 - enabled - or else the kernel wouldn't be here. 236 + (12) The interrupt handler would then set ICC2.C to 1 since hardware 237 + interrupts are definitely enabled - or else the kernel wouldn't be here. 251 238 252 239 (13) On return from the interrupt handler, things would be back to state (1). 253 240 254 - This trap (#2) is only available in kernel mode. In user mode it will result in SIGILL. 241 + This trap (#2) is only available in kernel mode. In user mode it will 242 + result in SIGILL.
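
The ICC2 scheme above can be made concrete with a short sketch. The real
definitions live in the FRV arch headers; the instruction operands below
are reconstructed from the description in this document (a logical op on
gr0 to flip ICC2.Z, then TIHI #2), not copied from a verified build:

    /* Virtually disable: set ICC2.Z via a logical op on gr0 (0 & 0 == 0
     * sets Z) without touching ICC2.C.  Sketch only - see the FRV
     * headers for the real definitions. */
    #define local_irq_disable()                                 \
            asm volatile("andcc     gr0,gr0,gr0,icc2"           \
                         : : : "memory", "icc2")

    /* Virtually enable: clear ICC2.Z (0 | 1 == 1 clears Z), then trap
     * #2 if Z==0 && C==0, i.e. virtually enabled but still physically
     * disabled; per step (9) above, the trap handler sets PSR.PIL back
     * to 0 and ICC2.C to 1. */
    #define local_irq_enable()                                  \
            asm volatile("oricc     gr0,#1,gr0,icc2     \n"     \
                         "tihi      icc2,gr0,#2"                \
                         : : : "memory", "icc2")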
+14 -20
Documentation/kernel-parameters.txt
··· 1 - February 2003 Kernel Parameters v2.5.59 1 + Kernel Parameters 2 2 ~~~~~~~~~~~~~~~~~ 3 3 4 4 The following is a consolidated list of the kernel parameters as implemented ··· 17 17 18 18 usbcore.blinkenlights=1 19 19 20 - The text in square brackets at the beginning of the description states the 21 - restrictions on the kernel for the said kernel parameter to be valid. The 22 - restrictions referred to are that the relevant option is valid if: 20 + This document may not be entirely up to date and comprehensive. The command 21 + "modinfo -p ${modulename}" shows a current list of all parameters of a loadable 22 + module. Loadable modules, after being loaded into the running kernel, also 23 + reveal their parameters in /sys/module/${modulename}/parameters/. Some of these 24 + parameters may be changed at runtime by the command 25 + "echo -n ${value} > /sys/module/${modulename}/parameters/${parm}". 26 + 27 + The parameters listed below are only valid if certain kernel build options were 28 + enabled and if respective hardware is present. The text in square brackets at 29 + the beginning of each description states the restrictions within which a 30 + parameter is applicable: 23 31 24 32 ACPI ACPI support is enabled. 25 33 ALSA ALSA sound support is enabled. ··· 1054 1046 noltlbs [PPC] Do not use large page/tlb entries for kernel 1055 1047 lowmem mapping on PPC40x. 1056 1048 1057 - nomce [IA-32] Machine Check Exception 1058 - 1059 1049 nomca [IA-64] Disable machine check abort handling 1050 + 1051 + nomce [IA-32] Machine Check Exception 1060 1052 1061 1053 noresidual [PPC] Don't use residual data on PReP machines. 1062 1054 ··· 1690 1682 1691 1683 1692 1684 ______________________________________________________________________ 1693 - Changelog: 1694 - 1695 - 2000-06-?? Mr. Unknown 1696 - The last known update (for 2.4.0) - the changelog was not kept before. 1697 - 1698 - 2002-11-24 Petr Baudis <pasky@ucw.cz> 1699 - Randy Dunlap <randy.dunlap@verizon.net> 1700 - Update for 2.5.49, description for most of the options introduced, 1701 - references to other documentation (C files, READMEs, ..), added S390, 1702 - PPC, SPARC, MTD, ALSA and OSS category. Minor corrections and 1703 - reformatting. 1704 - 1705 - 2005-10-19 Randy Dunlap <rdunlap@xenotime.net> 1706 - Lots of typos, whitespace, some reformatting. 1707 1685 1708 1686 TODO: 1709 1687
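
The paragraph added above ties boot-time parameters like
usbcore.blinkenlights=1 to the files under
/sys/module/${modulename}/parameters/.  On the source side both come
from the module_param() macro; a hedged skeleton of a module exposing
one such parameter (the parameter name is borrowed from this document's
own example, the module itself is made up):

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static int blinkenlights;
    /* perm 0644 makes the parameter root-writable via sysfs at runtime */
    module_param(blinkenlights, int, 0644);
    MODULE_PARM_DESC(blinkenlights, "enable front-panel blinkenlights");

    static int __init blink_init(void) { return 0; }
    static void __exit blink_exit(void) { }

    module_init(blink_init);
    module_exit(blink_exit);
    MODULE_LICENSE("GPL");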
+1 -1
Documentation/networking/packet_mmap.txt
··· 254 254 255 255 <block number> * <block size> / <frame size> 256 256 257 - Suposse the following parameters, which apply for 2.6 kernel and an 257 + Suppose the following parameters, which apply for 2.6 kernel and an 258 258 i386 architecture: 259 259 260 260 <size-max> = 131072 bytes
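
As a quick check of the frame-count formula quoted above, with made-up
round numbers (chosen only to show the arithmetic, not taken from the
document):

    #include <stdio.h>

    int main(void)
    {
            unsigned block_nr = 2;          /* hypothetical */
            unsigned block_sz = 4096;       /* hypothetical: one page */
            unsigned frame_sz = 2048;       /* hypothetical */

            /* <block number> * <block size> / <frame size> */
            printf("%u frames\n", block_nr * block_sz / frame_sz); /* 4 */
            return 0;
    }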
+1 -1
Documentation/networking/tuntap.txt
··· 138 138 ethernet frames when using tap. 139 139 140 140 5. What is the difference between BPF and TUN/TAP driver? 141 - BFP is an advanced packet filter. It can be attached to existing 141 + BPF is an advanced packet filter. It can be attached to existing 142 142 network interface. It does not provide a virtual network interface. 143 143 A TUN/TAP driver does provide a virtual network interface and it is possible 144 144 to attach BPF to this interface.
+1 -1
arch/i386/kernel/crash.c
··· 69 69 * for the data I pass, and I need tags 70 70 * on the data to indicate what information I have 71 71 * squirrelled away. ELF notes happen to provide 72 - * all of that that no need to invent something new. 72 + * all of that, so there is no need to invent something new. 73 73 */ 74 74 buf = (u32*)per_cpu_ptr(crash_notes, cpu); 75 75 if (!buf)
+1 -1
block/ll_rw_blk.c
··· 1740 1740 1741 1741 /** 1742 1742 * blk_cleanup_queue: - release a &request_queue_t when it is no longer needed 1743 - * @q: the request queue to be released 1743 + * @kobj: the kobj belonging of the request queue to be released 1744 1744 * 1745 1745 * Description: 1746 1746 * blk_cleanup_queue is the pair to blk_init_queue() or
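
This hunk, like the mm/page-writeback.c and include/linux/hrtimer.h ones
further down, is a kernel-doc repair: every @name line in a /** ... */
block has to match an actual parameter (or struct member), otherwise the
documentation build warns.  The expected shape, on a made-up function
rather than anything from this tree:

    /**
     * frob_queue - frobnicate a request queue (made-up example)
     * @kobj: the kobj belonging to the queue to be frobnicated
     * @flags: behaviour-modifying flags
     *
     * Description:
     *    Each @name above must name a real parameter, or
     *    scripts/kernel-doc emits warnings like the ones fixed here.
     */
    void frob_queue(struct kobject *kobj, unsigned int flags);

The converse fix appears in fs/sysfs/dir.c below, where a comment that
was never valid kernel-doc is demoted from /** to a plain /* opener.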
+1 -2
drivers/md/dm-target.c
··· 78 78 if (--ti->use == 0) 79 79 module_put(ti->tt.module); 80 80 81 - if (ti->use < 0) 82 - BUG(); 81 + BUG_ON(ti->use < 0); 83 82 up_read(&_lock); 84 83 85 84 return;
+2 -4
drivers/md/raid1.c
··· 1558 1558 int buffs; 1559 1559 1560 1560 buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE; 1561 - if (conf->r1buf_pool) 1562 - BUG(); 1561 + BUG_ON(conf->r1buf_pool); 1563 1562 conf->r1buf_pool = mempool_create(buffs, r1buf_pool_alloc, r1buf_pool_free, 1564 1563 conf->poolinfo); 1565 1564 if (!conf->r1buf_pool) ··· 1731 1732 !conf->fullsync && 1732 1733 !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) 1733 1734 break; 1734 - if (sync_blocks < (PAGE_SIZE>>9)) 1735 - BUG(); 1735 + BUG_ON(sync_blocks < (PAGE_SIZE>>9)); 1736 1736 if (len > (sync_blocks<<9)) 1737 1737 len = sync_blocks<<9; 1738 1738 }
+2 -4
drivers/md/raid10.c
··· 1117 1117 for (i=0; i<conf->copies; i++) 1118 1118 if (r10_bio->devs[i].bio == bio) 1119 1119 break; 1120 - if (i == conf->copies) 1121 - BUG(); 1120 + BUG_ON(i == conf->copies); 1122 1121 update_head_pos(i, r10_bio); 1123 1122 d = r10_bio->devs[i].devnum; 1124 1123 ··· 1517 1518 int buffs; 1518 1519 1519 1520 buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE; 1520 - if (conf->r10buf_pool) 1521 - BUG(); 1521 + BUG_ON(conf->r10buf_pool); 1522 1522 conf->r10buf_pool = mempool_create(buffs, r10buf_pool_alloc, r10buf_pool_free, conf); 1523 1523 if (!conf->r10buf_pool) 1524 1524 return -ENOMEM;
+12 -22
drivers/md/raid5.c
··· 73 73 static void __release_stripe(raid5_conf_t *conf, struct stripe_head *sh) 74 74 { 75 75 if (atomic_dec_and_test(&sh->count)) { 76 - if (!list_empty(&sh->lru)) 77 - BUG(); 78 - if (atomic_read(&conf->active_stripes)==0) 79 - BUG(); 76 + BUG_ON(!list_empty(&sh->lru)); 77 + BUG_ON(atomic_read(&conf->active_stripes)==0); 80 78 if (test_bit(STRIPE_HANDLE, &sh->state)) { 81 79 if (test_bit(STRIPE_DELAYED, &sh->state)) 82 80 list_add_tail(&sh->lru, &conf->delayed_list); ··· 182 184 raid5_conf_t *conf = sh->raid_conf; 183 185 int i; 184 186 185 - if (atomic_read(&sh->count) != 0) 186 - BUG(); 187 - if (test_bit(STRIPE_HANDLE, &sh->state)) 188 - BUG(); 187 + BUG_ON(atomic_read(&sh->count) != 0); 188 + BUG_ON(test_bit(STRIPE_HANDLE, &sh->state)); 189 189 190 190 CHECK_DEVLOCK(); 191 191 PRINTK("init_stripe called, stripe %llu\n", ··· 265 269 init_stripe(sh, sector, pd_idx, disks); 266 270 } else { 267 271 if (atomic_read(&sh->count)) { 268 - if (!list_empty(&sh->lru)) 269 - BUG(); 272 + BUG_ON(!list_empty(&sh->lru)); 270 273 } else { 271 274 if (!test_bit(STRIPE_HANDLE, &sh->state)) 272 275 atomic_inc(&conf->active_stripes); ··· 460 465 spin_unlock_irq(&conf->device_lock); 461 466 if (!sh) 462 467 return 0; 463 - if (atomic_read(&sh->count)) 464 - BUG(); 468 + BUG_ON(atomic_read(&sh->count)); 465 469 shrink_buffers(sh, conf->pool_size); 466 470 kmem_cache_free(conf->slab_cache, sh); 467 471 atomic_dec(&conf->active_stripes); ··· 876 882 ptr[0] = page_address(sh->dev[pd_idx].page); 877 883 switch(method) { 878 884 case READ_MODIFY_WRITE: 879 - if (!test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags)) 880 - BUG(); 885 + BUG_ON(!test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags)); 881 886 for (i=disks ; i-- ;) { 882 887 if (i==pd_idx) 883 888 continue; ··· 889 896 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 890 897 wake_up(&conf->wait_for_overlap); 891 898 892 - if (sh->dev[i].written) BUG(); 899 + BUG_ON(sh->dev[i].written); 893 900 sh->dev[i].written = chosen; 894 901 check_xor(); 895 902 } ··· 905 912 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 906 913 wake_up(&conf->wait_for_overlap); 907 914 908 - if (sh->dev[i].written) BUG(); 915 + BUG_ON(sh->dev[i].written); 909 916 sh->dev[i].written = chosen; 910 917 } 911 918 break; ··· 988 995 if (*bip && (*bip)->bi_sector < bi->bi_sector + ((bi->bi_size)>>9)) 989 996 goto overlap; 990 997 991 - if (*bip && bi->bi_next && (*bip) != bi->bi_next) 992 - BUG(); 998 + BUG_ON(*bip && bi->bi_next && (*bip) != bi->bi_next); 993 999 if (*bip) 994 1000 bi->bi_next = *bip; 995 1001 *bip = bi; ··· 1422 1430 set_bit(STRIPE_HANDLE, &sh->state); 1423 1431 if (failed == 0) { 1424 1432 char *pagea; 1425 - if (uptodate != disks) 1426 - BUG(); 1433 + BUG_ON(uptodate != disks); 1427 1434 compute_parity(sh, CHECK_PARITY); 1428 1435 uptodate--; 1429 1436 pagea = page_address(sh->dev[sh->pd_idx].page); ··· 2087 2096 2088 2097 list_del_init(first); 2089 2098 atomic_inc(&sh->count); 2090 - if (atomic_read(&sh->count)!= 1) 2091 - BUG(); 2099 + BUG_ON(atomic_read(&sh->count)!= 1); 2092 2100 spin_unlock_irq(&conf->device_lock); 2093 2101 2094 2102 handled++;
+10 -19
drivers/md/raid6main.c
··· 91 91 static void __release_stripe(raid6_conf_t *conf, struct stripe_head *sh) 92 92 { 93 93 if (atomic_dec_and_test(&sh->count)) { 94 - if (!list_empty(&sh->lru)) 95 - BUG(); 96 - if (atomic_read(&conf->active_stripes)==0) 97 - BUG(); 94 + BUG_ON(!list_empty(&sh->lru)); 95 + BUG_ON(atomic_read(&conf->active_stripes)==0); 98 96 if (test_bit(STRIPE_HANDLE, &sh->state)) { 99 97 if (test_bit(STRIPE_DELAYED, &sh->state)) 100 98 list_add_tail(&sh->lru, &conf->delayed_list); ··· 200 202 raid6_conf_t *conf = sh->raid_conf; 201 203 int disks = conf->raid_disks, i; 202 204 203 - if (atomic_read(&sh->count) != 0) 204 - BUG(); 205 - if (test_bit(STRIPE_HANDLE, &sh->state)) 206 - BUG(); 205 + BUG_ON(atomic_read(&sh->count) != 0); 206 + BUG_ON(test_bit(STRIPE_HANDLE, &sh->state)); 207 207 208 208 CHECK_DEVLOCK(); 209 209 PRINTK("init_stripe called, stripe %llu\n", ··· 280 284 init_stripe(sh, sector, pd_idx); 281 285 } else { 282 286 if (atomic_read(&sh->count)) { 283 - if (!list_empty(&sh->lru)) 284 - BUG(); 287 + BUG_ON(!list_empty(&sh->lru)); 285 288 } else { 286 289 if (!test_bit(STRIPE_HANDLE, &sh->state)) 287 290 atomic_inc(&conf->active_stripes); 288 - if (list_empty(&sh->lru)) 289 - BUG(); 291 + BUG_ON(list_empty(&sh->lru)); 290 292 list_del_init(&sh->lru); 291 293 } 292 294 } ··· 347 353 spin_unlock_irq(&conf->device_lock); 348 354 if (!sh) 349 355 return 0; 350 - if (atomic_read(&sh->count)) 351 - BUG(); 356 + BUG_ON(atomic_read(&sh->count)); 352 357 shrink_buffers(sh, conf->raid_disks); 353 358 kmem_cache_free(conf->slab_cache, sh); 354 359 atomic_dec(&conf->active_stripes); ··· 773 780 if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags)) 774 781 wake_up(&conf->wait_for_overlap); 775 782 776 - if (sh->dev[i].written) BUG(); 783 + BUG_ON(sh->dev[i].written); 777 784 sh->dev[i].written = chosen; 778 785 } 779 786 break; ··· 963 970 if (*bip && (*bip)->bi_sector < bi->bi_sector + ((bi->bi_size)>>9)) 964 971 goto overlap; 965 972 966 - if (*bip && bi->bi_next && (*bip) != bi->bi_next) 967 - BUG(); 973 + BUG_ON(*bip && bi->bi_next && (*bip) != bi->bi_next); 968 974 if (*bip) 969 975 bi->bi_next = *bip; 970 976 *bip = bi; ··· 1898 1906 1899 1907 list_del_init(first); 1900 1908 atomic_inc(&sh->count); 1901 - if (atomic_read(&sh->count)!= 1) 1902 - BUG(); 1909 + BUG_ON(atomic_read(&sh->count)!= 1); 1903 1910 spin_unlock_irq(&conf->device_lock); 1904 1911 1905 1912 handled++;
-21
drivers/mtd/chips/Kconfig
··· 200 200 provides support for one of those command sets, used on chips 201 201 including the AMD Am29LV320. 202 202 203 - config MTD_CFI_AMDSTD_RETRY 204 - int "Retry failed commands (erase/program)" 205 - depends on MTD_CFI_AMDSTD 206 - default "0" 207 - help 208 - Some chips, when attached to a shared bus, don't properly filter 209 - bus traffic that is destined to other devices. This broken 210 - behavior causes erase and program sequences to be aborted when 211 - the sequences are mixed with traffic for other devices. 212 - 213 - SST49LF040 (and related) chips are know to be broken. 214 - 215 - config MTD_CFI_AMDSTD_RETRY_MAX 216 - int "Max retries of failed commands (erase/program)" 217 - depends on MTD_CFI_AMDSTD_RETRY 218 - default "0" 219 - help 220 - If you have an SST49LF040 (or related chip) then this value should 221 - be set to at least 1. This can also be adjusted at driver load 222 - time with the retry_cmd_max module parameter. 223 - 224 203 config MTD_CFI_STAA 225 204 tristate "Support for ST (Advanced Architecture) flash chips" 226 205 depends on MTD_GEN_PROBE
+4 -8
drivers/net/8139cp.c
··· 539 539 unsigned buflen; 540 540 541 541 skb = cp->rx_skb[rx_tail].skb; 542 - if (!skb) 543 - BUG(); 542 + BUG_ON(!skb); 544 543 545 544 desc = &cp->rx_ring[rx_tail]; 546 545 status = le32_to_cpu(desc->opts1); ··· 722 723 break; 723 724 724 725 skb = cp->tx_skb[tx_tail].skb; 725 - if (!skb) 726 - BUG(); 726 + BUG_ON(!skb); 727 727 728 728 pci_unmap_single(cp->pdev, cp->tx_skb[tx_tail].mapping, 729 729 cp->tx_skb[tx_tail].len, PCI_DMA_TODEVICE); ··· 1548 1550 tmp_stats[i++] = le16_to_cpu(nic_stats->tx_abort); 1549 1551 tmp_stats[i++] = le16_to_cpu(nic_stats->tx_underrun); 1550 1552 tmp_stats[i++] = cp->cp_stats.rx_frags; 1551 - if (i != CP_NUM_STATS) 1552 - BUG(); 1553 + BUG_ON(i != CP_NUM_STATS); 1553 1554 1554 1555 pci_free_consistent(cp->pdev, sizeof(*nic_stats), nic_stats, dma); 1555 1556 } ··· 1853 1856 struct net_device *dev = pci_get_drvdata(pdev); 1854 1857 struct cp_private *cp = netdev_priv(dev); 1855 1858 1856 - if (!dev) 1857 - BUG(); 1859 + BUG_ON(!dev); 1858 1860 unregister_netdev(dev); 1859 1861 iounmap(cp->regs); 1860 1862 if (cp->wol_enabled) pci_set_power_state (pdev, PCI_D0);
+1 -2
drivers/net/arcnet/arcnet.c
··· 765 765 BUGMSG(D_DURING, "in arcnet_interrupt\n"); 766 766 767 767 lp = dev->priv; 768 - if (!lp) 769 - BUG(); 768 + BUG_ON(!lp); 770 769 771 770 spin_lock(&lp->lock); 772 771
+1 -2
drivers/net/b44.c
··· 608 608 struct ring_info *rp = &bp->tx_buffers[cons]; 609 609 struct sk_buff *skb = rp->skb; 610 610 611 - if (unlikely(skb == NULL)) 612 - BUG(); 611 + BUG_ON(skb == NULL); 613 612 614 613 pci_unmap_single(bp->pdev, 615 614 pci_unmap_addr(rp, mapping),
+1 -2
drivers/net/chelsio/sge.c
··· 1093 1093 if (likely(e->DataValid)) { 1094 1094 struct freelQ *fl = &sge->freelQ[e->FreelistQid]; 1095 1095 1096 - if (unlikely(!e->Sop || !e->Eop)) 1097 - BUG(); 1096 + BUG_ON(!e->Sop || !e->Eop); 1098 1097 if (unlikely(e->Offload)) 1099 1098 unexpected_offload(adapter, fl); 1100 1099 else
+1 -2
drivers/net/e1000/e1000_main.c
··· 3308 3308 3309 3309 while (poll_dev != &adapter->polling_netdev[i]) { 3310 3310 i++; 3311 - if (unlikely(i == adapter->num_rx_queues)) 3312 - BUG(); 3311 + BUG_ON(i == adapter->num_rx_queues); 3313 3312 } 3314 3313 3315 3314 if (likely(adapter->num_tx_queues == 1)) {
+1 -2
drivers/net/eql.c
··· 203 203 printk(KERN_INFO "%s: remember to turn off Van-Jacobson compression on " 204 204 "your slave devices.\n", dev->name); 205 205 206 - if (!list_empty(&eql->queue.all_slaves)) 207 - BUG(); 206 + BUG_ON(!list_empty(&eql->queue.all_slaves)); 208 207 209 208 eql->min_slaves = 1; 210 209 eql->max_slaves = EQL_DEFAULT_MAX_SLAVES; /* 4 usually... */
+1 -2
drivers/net/irda/sa1100_ir.c
··· 695 695 /* 696 696 * We must not be transmitting... 697 697 */ 698 - if (si->txskb) 699 - BUG(); 698 + BUG_ON(si->txskb); 700 699 701 700 netif_stop_queue(dev); 702 701
+1 -3
drivers/net/ne2k-pci.c
··· 645 645 { 646 646 struct net_device *dev = pci_get_drvdata(pdev); 647 647 648 - if (!dev) 649 - BUG(); 650 - 648 + BUG_ON(!dev); 651 649 unregister_netdev(dev); 652 650 release_region(dev->base_addr, NE_IO_EXTENT); 653 651 free_netdev(dev);
+1 -2
drivers/net/ns83820.c
··· 568 568 #endif 569 569 570 570 sg = dev->rx_info.descs + (next_empty * DESC_SIZE); 571 - if (unlikely(NULL != dev->rx_info.skbs[next_empty])) 572 - BUG(); 571 + BUG_ON(NULL != dev->rx_info.skbs[next_empty]); 573 572 dev->rx_info.skbs[next_empty] = skb; 574 573 575 574 dev->rx_info.next_empty = (next_empty + 1) % NR_RX_DESC;
+1 -2
drivers/net/starfire.c
··· 2122 2122 struct net_device *dev = pci_get_drvdata(pdev); 2123 2123 struct netdev_private *np = netdev_priv(dev); 2124 2124 2125 - if (!dev) 2126 - BUG(); 2125 + BUG_ON(!dev); 2127 2126 2128 2127 unregister_netdev(dev); 2129 2128
+5 -10
drivers/net/tg3.c
··· 2959 2959 struct sk_buff *skb = ri->skb; 2960 2960 int i; 2961 2961 2962 - if (unlikely(skb == NULL)) 2963 - BUG(); 2964 - 2962 + BUG_ON(skb == NULL); 2965 2963 pci_unmap_single(tp->pdev, 2966 2964 pci_unmap_addr(ri, mapping), 2967 2965 skb_headlen(skb), ··· 2970 2972 sw_idx = NEXT_TX(sw_idx); 2971 2973 2972 2974 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { 2973 - if (unlikely(sw_idx == hw_idx)) 2974 - BUG(); 2975 + BUG_ON(sw_idx == hw_idx); 2975 2976 2976 2977 ri = &tp->tx_buffers[sw_idx]; 2977 - if (unlikely(ri->skb != NULL)) 2978 - BUG(); 2978 + BUG_ON(ri->skb != NULL); 2979 2979 2980 2980 pci_unmap_page(tp->pdev, 2981 2981 pci_unmap_addr(ri, mapping), ··· 4924 4928 { 4925 4929 int i; 4926 4930 4927 - if (offset == TX_CPU_BASE && 4928 - (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)) 4929 - BUG(); 4931 + BUG_ON(offset == TX_CPU_BASE && 4932 + (tp->tg3_flags2 & TG3_FLG2_5705_PLUS)); 4930 4933 4931 4934 if (offset == RX_CPU_BASE) { 4932 4935 for (i = 0; i < 10000; i++) {
+1 -2
drivers/net/tokenring/abyss.c
··· 438 438 { 439 439 struct net_device *dev = pci_get_drvdata(pdev); 440 440 441 - if (!dev) 442 - BUG(); 441 + BUG_ON(!dev); 443 442 unregister_netdev(dev); 444 443 release_region(dev->base_addr-0x10, ABYSS_IO_EXTENT); 445 444 free_irq(dev->irq, dev);
+1 -2
drivers/net/tokenring/madgemc.c
··· 735 735 struct net_local *tp; 736 736 struct card_info *card; 737 737 738 - if (!dev) 739 - BUG(); 738 + BUG_ON(!dev); 740 739 741 740 tp = dev->priv; 742 741 card = tp->tmspriv;
+3 -6
drivers/net/wireless/ipw2200.c
··· 5573 5573 case IEEE80211_52GHZ_BAND: 5574 5574 network->mode = IEEE_A; 5575 5575 i = ieee80211_channel_to_index(priv->ieee, priv->channel); 5576 - if (i == -1) 5577 - BUG(); 5576 + BUG_ON(i == -1); 5578 5577 if (geo->a[i].flags & IEEE80211_CH_PASSIVE_ONLY) { 5579 5578 IPW_WARNING("Overriding invalid channel\n"); 5580 5579 priv->channel = geo->a[0].channel; ··· 5586 5587 else 5587 5588 network->mode = IEEE_B; 5588 5589 i = ieee80211_channel_to_index(priv->ieee, priv->channel); 5589 - if (i == -1) 5590 - BUG(); 5590 + BUG_ON(i == -1); 5591 5591 if (geo->bg[i].flags & IEEE80211_CH_PASSIVE_ONLY) { 5592 5592 IPW_WARNING("Overriding invalid channel\n"); 5593 5593 priv->channel = geo->bg[0].channel; ··· 6713 6715 6714 6716 switch (priv->ieee->iw_mode) { 6715 6717 case IW_MODE_ADHOC: 6716 - if (!(network->capability & WLAN_CAPABILITY_IBSS)) 6717 - BUG(); 6718 + BUG_ON(!(network->capability & WLAN_CAPABILITY_IBSS)); 6718 6719 6719 6720 qos_data = &ibss_data; 6720 6721 break;
+1 -2
drivers/net/yellowfin.c
··· 1441 1441 struct net_device *dev = pci_get_drvdata(pdev); 1442 1442 struct yellowfin_private *np; 1443 1443 1444 - if (!dev) 1445 - BUG(); 1444 + BUG_ON(!dev); 1446 1445 np = netdev_priv(dev); 1447 1446 1448 1447 pci_free_consistent(pdev, STATUS_TOTAL_SIZE, np->tx_status,
+3 -5
drivers/s390/block/dasd_erp.c
··· 32 32 int size; 33 33 34 34 /* Sanity checks */ 35 - if ( magic == NULL || datasize > PAGE_SIZE || 36 - (cplength*sizeof(struct ccw1)) > PAGE_SIZE) 37 - BUG(); 35 + BUG_ON( magic == NULL || datasize > PAGE_SIZE || 36 + (cplength*sizeof(struct ccw1)) > PAGE_SIZE); 38 37 39 38 size = (sizeof(struct dasd_ccw_req) + 7L) & -8L; 40 39 if (cplength > 0) ··· 124 125 struct dasd_device *device; 125 126 int success; 126 127 127 - if (cqr->refers == NULL || cqr->function == NULL) 128 - BUG(); 128 + BUG_ON(cqr->refers == NULL || cqr->function == NULL); 129 129 130 130 device = cqr->device; 131 131 success = cqr->status == DASD_CQR_DONE;
+1 -1
drivers/s390/char/sclp_rw.c
··· 24 24 25 25 /* 26 26 * The room for the SCCB (only for writing) is not equal to a pages size 27 - * (as it is specified as the maximum size in the the SCLP ducumentation) 27 + * (as it is specified as the maximum size in the the SCLP documentation) 28 28 * because of the additional data structure described above. 29 29 */ 30 30 #define MAX_SCCB_ROOM (PAGE_SIZE - sizeof(struct sclp_buffer))
+4 -9
drivers/s390/char/tape_block.c
··· 198 198 199 199 device = (struct tape_device *) queue->queuedata; 200 200 DBF_LH(6, "tapeblock_request_fn(device=%p)\n", device); 201 - if (device == NULL) 202 - BUG(); 203 - 201 + BUG_ON(device == NULL); 204 202 tapeblock_trigger_requeue(device); 205 203 } 206 204 ··· 305 307 int rc; 306 308 307 309 device = (struct tape_device *) disk->private_data; 308 - if (!device) 309 - BUG(); 310 + BUG_ON(!device); 310 311 311 312 if (!device->blk_data.medium_changed) 312 313 return 0; ··· 437 440 438 441 rc = 0; 439 442 disk = inode->i_bdev->bd_disk; 440 - if (!disk) 441 - BUG(); 443 + BUG_ON(!disk); 442 444 device = disk->private_data; 443 - if (!device) 444 - BUG(); 445 + BUG_ON(!device); 445 446 minor = iminor(inode); 446 447 447 448 DBF_LH(6, "tapeblock_ioctl(0x%0x)\n", command);
+5 -8
drivers/s390/net/lcs.c
··· 675 675 int index, rc; 676 676 677 677 LCS_DBF_TEXT(5, trace, "rdybuff"); 678 - if (buffer->state != BUF_STATE_LOCKED && 679 - buffer->state != BUF_STATE_PROCESSED) 680 - BUG(); 678 + BUG_ON(buffer->state != BUF_STATE_LOCKED && 679 + buffer->state != BUF_STATE_PROCESSED); 681 680 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 682 681 buffer->state = BUF_STATE_READY; 683 682 index = buffer - channel->iob; ··· 700 701 int index, prev, next; 701 702 702 703 LCS_DBF_TEXT(5, trace, "prcsbuff"); 703 - if (buffer->state != BUF_STATE_READY) 704 - BUG(); 704 + BUG_ON(buffer->state != BUF_STATE_READY); 705 705 buffer->state = BUF_STATE_PROCESSED; 706 706 index = buffer - channel->iob; 707 707 prev = (index - 1) & (LCS_NUM_BUFFS - 1); ··· 732 734 unsigned long flags; 733 735 734 736 LCS_DBF_TEXT(5, trace, "relbuff"); 735 - if (buffer->state != BUF_STATE_LOCKED && 736 - buffer->state != BUF_STATE_PROCESSED) 737 - BUG(); 737 + BUG_ON(buffer->state != BUF_STATE_LOCKED && 738 + buffer->state != BUF_STATE_PROCESSED); 738 739 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 739 740 buffer->state = BUF_STATE_EMPTY; 740 741 spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
+1 -1
drivers/scsi/aic7xxx/Kconfig.aic7xxx
··· 86 86 default "0" 87 87 help 88 88 Bit mask of debug options that is only valid if the 89 - CONFIG_AIC7XXX_DEBUG_ENBLE option is enabled. The bits in this mask 89 + CONFIG_AIC7XXX_DEBUG_ENABLE option is enabled. The bits in this mask 90 90 are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the 91 91 variable ahc_debug in that file to find them. 92 92
+1 -1
drivers/serial/jsm/jsm.h
··· 20 20 * 21 21 * Contact Information: 22 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 - * Wendy Xiong <wendyx@us.ltcfwd.linux.ibm.com> 23 + * Wendy Xiong <wendyx@us.ibm.com> 24 24 * 25 25 ***********************************************************************/ 26 26
+1 -1
drivers/serial/jsm/jsm_driver.c
··· 20 20 * 21 21 * Contact Information: 22 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 - * Wendy Xiong <wendyx@us.ltcfwd.linux.ibm.com> 23 + * Wendy Xiong <wendyx@us.ibm.com> 24 24 * 25 25 * 26 26 ***********************************************************************/
+1 -1
drivers/serial/jsm/jsm_neo.c
··· 20 20 * 21 21 * Contact Information: 22 22 * Scott H Kilau <Scott_Kilau@digi.com> 23 - * Wendy Xiong <wendyx@us.ltcfwd.linux.ibm.com> 23 + * Wendy Xiong <wendyx@us.ibm.com> 24 24 * 25 25 ***********************************************************************/ 26 26 #include <linux/delay.h> /* For udelay */
+1 -2
fs/direct-io.c
··· 929 929 block_in_page += this_chunk_blocks; 930 930 dio->blocks_available -= this_chunk_blocks; 931 931 next_block: 932 - if (dio->block_in_file > dio->final_block_in_request) 933 - BUG(); 932 + BUG_ON(dio->block_in_file > dio->final_block_in_request); 934 933 if (dio->block_in_file == dio->final_block_in_request) 935 934 break; 936 935 }
+2 -4
fs/dquot.c
··· 590 590 atomic_dec(&dquot->dq_count); 591 591 #ifdef __DQUOT_PARANOIA 592 592 /* sanity check */ 593 - if (!list_empty(&dquot->dq_free)) 594 - BUG(); 593 + BUG_ON(!list_empty(&dquot->dq_free)); 595 594 #endif 596 595 put_dquot_last(dquot); 597 596 spin_unlock(&dq_list_lock); ··· 665 666 return NODQUOT; 666 667 } 667 668 #ifdef __DQUOT_PARANOIA 668 - if (!dquot->dq_sb) /* Has somebody invalidated entry under us? */ 669 - BUG(); 669 + BUG_ON(!dquot->dq_sb); /* Has somebody invalidated entry under us? */ 670 670 #endif 671 671 672 672 return dquot;
+1 -1
fs/exec.c
··· 561 561 arch_pick_mmap_layout(mm); 562 562 if (old_mm) { 563 563 up_read(&old_mm->mmap_sem); 564 - if (active_mm != old_mm) BUG(); 564 + BUG_ON(active_mm != old_mm); 565 565 mmput(old_mm); 566 566 return 0; 567 567 }
+1 -2
fs/fcntl.c
··· 453 453 /* Make sure we are called with one of the POLL_* 454 454 reasons, otherwise we could leak kernel stack into 455 455 userspace. */ 456 - if ((reason & __SI_MASK) != __SI_POLL) 457 - BUG(); 456 + BUG_ON((reason & __SI_MASK) != __SI_POLL); 458 457 if (reason - POLL_IN >= NSIGPOLL) 459 458 si.si_band = ~0L; 460 459 else
+3 -6
fs/freevxfs/vxfs_olt.c
··· 42 42 static inline void 43 43 vxfs_get_fshead(struct vxfs_oltfshead *fshp, struct vxfs_sb_info *infp) 44 44 { 45 - if (infp->vsi_fshino) 46 - BUG(); 45 + BUG_ON(infp->vsi_fshino); 47 46 infp->vsi_fshino = fshp->olt_fsino[0]; 48 47 } 49 48 50 49 static inline void 51 50 vxfs_get_ilist(struct vxfs_oltilist *ilistp, struct vxfs_sb_info *infp) 52 51 { 53 - if (infp->vsi_iext) 54 - BUG(); 52 + BUG_ON(infp->vsi_iext); 55 53 infp->vsi_iext = ilistp->olt_iext[0]; 56 54 } 57 55 58 56 static inline u_long 59 57 vxfs_oblock(struct super_block *sbp, daddr_t block, u_long bsize) 60 58 { 61 - if (sbp->s_blocksize % bsize) 62 - BUG(); 59 + BUG_ON(sbp->s_blocksize % bsize); 63 60 return (block * (sbp->s_blocksize / bsize)); 64 61 } 65 62
+2 -4
fs/hfsplus/bnode.c
··· 466 466 for (p = &node->tree->node_hash[hfs_bnode_hash(node->this)]; 467 467 *p && *p != node; p = &(*p)->next_hash) 468 468 ; 469 - if (!*p) 470 - BUG(); 469 + BUG_ON(!*p); 471 470 *p = node->next_hash; 472 471 node->tree->node_hash_cnt--; 473 472 } ··· 621 622 622 623 dprint(DBG_BNODE_REFS, "put_node(%d:%d): %d\n", 623 624 node->tree->cnid, node->this, atomic_read(&node->refcnt)); 624 - if (!atomic_read(&node->refcnt)) 625 - BUG(); 625 + BUG_ON(!atomic_read(&node->refcnt)); 626 626 if (!atomic_dec_and_lock(&node->refcnt, &tree->hash_lock)) 627 627 return; 628 628 for (i = 0; i < tree->pages_per_bnode; i++) {
+1 -2
fs/hfsplus/btree.c
··· 269 269 u8 *data, byte, m; 270 270 271 271 dprint(DBG_BNODE_MOD, "btree_free_node: %u\n", node->this); 272 - if (!node->this) 273 - BUG(); 272 + BUG_ON(!node->this); 274 273 tree = node->tree; 275 274 nidx = node->this; 276 275 node = hfs_bnode_find(tree, 0);
+5 -10
fs/inode.c
··· 172 172 173 173 void destroy_inode(struct inode *inode) 174 174 { 175 - if (inode_has_buffers(inode)) 176 - BUG(); 175 + BUG_ON(inode_has_buffers(inode)); 177 176 security_inode_free(inode); 178 177 if (inode->i_sb->s_op->destroy_inode) 179 178 inode->i_sb->s_op->destroy_inode(inode); ··· 248 249 might_sleep(); 249 250 invalidate_inode_buffers(inode); 250 251 251 - if (inode->i_data.nrpages) 252 - BUG(); 253 - if (!(inode->i_state & I_FREEING)) 254 - BUG(); 255 - if (inode->i_state & I_CLEAR) 256 - BUG(); 252 + BUG_ON(inode->i_data.nrpages); 253 + BUG_ON(!(inode->i_state & I_FREEING)); 254 + BUG_ON(inode->i_state & I_CLEAR); 257 255 wait_on_inode(inode); 258 256 DQUOT_DROP(inode); 259 257 if (inode->i_sb && inode->i_sb->s_op->clear_inode) ··· 1050 1054 hlist_del_init(&inode->i_hash); 1051 1055 spin_unlock(&inode_lock); 1052 1056 wake_up_inode(inode); 1053 - if (inode->i_state != I_CLEAR) 1054 - BUG(); 1057 + BUG_ON(inode->i_state != I_CLEAR); 1055 1058 destroy_inode(inode); 1056 1059 } 1057 1060
+1 -2
fs/jffs2/background.c
··· 35 35 pid_t pid; 36 36 int ret = 0; 37 37 38 - if (c->gc_task) 39 - BUG(); 38 + BUG_ON(c->gc_task); 40 39 41 40 init_completion(&c->gc_thread_start); 42 41 init_completion(&c->gc_thread_exit);
+2 -4
fs/smbfs/file.c
··· 178 178 unsigned offset = PAGE_CACHE_SIZE; 179 179 int err; 180 180 181 - if (!mapping) 182 - BUG(); 181 + BUG_ON(!mapping); 183 182 inode = mapping->host; 184 - if (!inode) 185 - BUG(); 183 + BUG_ON(!inode); 186 184 187 185 end_index = inode->i_size >> PAGE_CACHE_SHIFT; 188 186
+1 -1
fs/sysfs/dir.c
··· 50 50 return sd; 51 51 } 52 52 53 - /** 53 + /* 54 54 * 55 55 * Return -EEXIST if there is already a sysfs element with the same name for 56 56 * the same parent.
+1 -2
fs/sysfs/inode.c
··· 175 175 struct bin_attribute * bin_attr; 176 176 struct sysfs_symlink * sl; 177 177 178 - if (!sd || !sd->s_element) 179 - BUG(); 178 + BUG_ON(!sd || !sd->s_element); 180 179 181 180 switch (sd->s_type) { 182 181 case SYSFS_DIR:
+2 -4
fs/sysv/dir.c
··· 253 253 254 254 lock_page(page); 255 255 err = mapping->a_ops->prepare_write(NULL, page, from, to); 256 - if (err) 257 - BUG(); 256 + BUG_ON(err); 258 257 de->inode = 0; 259 258 err = dir_commit_chunk(page, from, to); 260 259 dir_put_page(page); ··· 352 353 353 354 lock_page(page); 354 355 err = page->mapping->a_ops->prepare_write(NULL, page, from, to); 355 - if (err) 356 - BUG(); 356 + BUG_ON(err); 357 357 de->inode = cpu_to_fs16(SYSV_SB(inode->i_sb), inode->i_ino); 358 358 err = dir_commit_chunk(page, from, to); 359 359 dir_put_page(page);
+2 -4
fs/udf/inode.c
··· 312 312 err = 0; 313 313 314 314 bh = inode_getblk(inode, block, &err, &phys, &new); 315 - if (bh) 316 - BUG(); 315 + BUG_ON(bh); 317 316 if (err) 318 317 goto abort; 319 - if (!phys) 320 - BUG(); 318 + BUG_ON(!phys); 321 319 322 320 if (new) 323 321 set_buffer_new(bh_result);
+1 -1
include/linux/fs.h
··· 864 864 */ 865 865 struct mutex s_vfs_rename_mutex; /* Kludge */ 866 866 867 - /* Granuality of c/m/atime in ns. 867 + /* Granularity of c/m/atime in ns. 868 868 Cannot be worse than a second */ 869 869 u32 s_time_gran; 870 870 };
+1 -1
include/linux/hrtimer.h
··· 80 80 * @first: pointer to the timer node which expires first 81 81 * @resolution: the resolution of the clock, in nanoseconds 82 82 * @get_time: function to retrieve the current time of the clock 83 - * @get_sofirq_time: function to retrieve the current time from the softirq 83 + * @get_softirq_time: function to retrieve the current time from the softirq 84 84 * @curr_timer: the timer which is executing a callback right now 85 85 * @softirq_time: the time when running the hrtimer queue in the softirq 86 86 */
+7 -8
ipc/shm.c
··· 91 91 static inline void shm_inc (int id) { 92 92 struct shmid_kernel *shp; 93 93 94 - if(!(shp = shm_lock(id))) 95 - BUG(); 94 + shp = shm_lock(id); 95 + BUG_ON(!shp); 96 96 shp->shm_atim = get_seconds(); 97 97 shp->shm_lprid = current->tgid; 98 98 shp->shm_nattch++; ··· 142 142 143 143 mutex_lock(&shm_ids.mutex); 144 144 /* remove from the list of attaches of the shm segment */ 145 - if(!(shp = shm_lock(id))) 146 - BUG(); 145 + shp = shm_lock(id); 146 + BUG_ON(!shp); 147 147 shp->shm_lprid = current->tgid; 148 148 shp->shm_dtim = get_seconds(); 149 149 shp->shm_nattch--; ··· 283 283 err = -EEXIST; 284 284 } else { 285 285 shp = shm_lock(id); 286 - if(shp==NULL) 287 - BUG(); 286 + BUG_ON(shp==NULL); 288 287 if (shp->shm_segsz < size) 289 288 err = -EINVAL; 290 289 else if (ipcperms(&shp->shm_perm, shmflg)) ··· 773 774 up_write(&current->mm->mmap_sem); 774 775 775 776 mutex_lock(&shm_ids.mutex); 776 - if(!(shp = shm_lock(shmid))) 777 - BUG(); 777 + shp = shm_lock(shmid); 778 + BUG_ON(!shp); 778 779 shp->shm_nattch--; 779 780 if(shp->shm_nattch == 0 && 780 781 shp->shm_perm.mode & SHM_DEST)
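
The ipc/shm.c hunks above show the one subtlety in these conversions:
when the old test carried a side effect, as in
"if(!(shp = shm_lock(id))) BUG();", the assignment is hoisted out and
only the pure check goes inside BUG_ON(), so nothing load-bearing sits
inside the assertion.  The pattern, reduced to its shape:

    /* before: the lock/lookup call is buried in the check */
    if (!(shp = shm_lock(id)))
            BUG();

    /* after: the call always runs; only the pure test is asserted */
    shp = shm_lock(id);
    BUG_ON(!shp);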
+2 -4
ipc/util.c
··· 266 266 { 267 267 struct kern_ipc_perm* p; 268 268 int lid = id % SEQ_MULTIPLIER; 269 - if(lid >= ids->entries->size) 270 - BUG(); 269 + BUG_ON(lid >= ids->entries->size); 271 270 272 271 /* 273 272 * do not need a rcu_dereference()() here to force ordering ··· 274 275 */ 275 276 p = ids->entries->p[lid]; 276 277 ids->entries->p[lid] = NULL; 277 - if(p==NULL) 278 - BUG(); 278 + BUG_ON(p==NULL); 279 279 ids->in_use--; 280 280 281 281 if (lid == ids->max_id) {
+1 -1
kernel/power/Kconfig
··· 41 41 depends on PM && SWAP && (X86 && (!SMP || SUSPEND_SMP)) || ((FRV || PPC32) && !SMP) 42 42 ---help--- 43 43 Enable the possibility of suspending the machine. 44 - It doesn't need APM. 44 + It doesn't need ACPI or APM. 45 45 You may suspend your machine by 'swsusp' or 'shutdown -z <time>' 46 46 (patch for sysvinit needed). 47 47
+2 -4
kernel/printk.c
··· 360 360 unsigned long cur_index, start_print; 361 361 static int msg_level = -1; 362 362 363 - if (((long)(start - end)) > 0) 364 - BUG(); 363 + BUG_ON(((long)(start - end)) > 0); 365 364 366 365 cur_index = start; 367 366 start_print = start; ··· 707 708 */ 708 709 void acquire_console_sem(void) 709 710 { 710 - if (in_interrupt()) 711 - BUG(); 711 + BUG_ON(in_interrupt()); 712 712 down(&console_sem); 713 713 console_locked = 1; 714 714 console_may_schedule = 1;
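
The first printk.c assertion above uses the usual kernel idiom for
free-running unsigned indices: "start" and "end" may wrap, so their
ordering is tested on the signed difference rather than compared
directly.  A small standalone demonstration (the values are made up to
force a wrap):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
            unsigned long start = ULONG_MAX - 1;    /* about to wrap */
            unsigned long end = start + 10;         /* wrapped past 0 */

            /* naive compare: wrongly claims start is ahead of end */
            printf("naive:  %d\n", start > end);

            /* signed difference, as in the BUG_ON() above: negative
             * means start is still at-or-behind end, wrap or no wrap */
            printf("signed: %d\n", (long)(start - end) > 0);
            return 0;
    }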
+1 -2
kernel/ptrace.c
··· 30 30 */ 31 31 void __ptrace_link(task_t *child, task_t *new_parent) 32 32 { 33 - if (!list_empty(&child->ptrace_list)) 34 - BUG(); 33 + BUG_ON(!list_empty(&child->ptrace_list)); 35 34 if (child->parent == new_parent) 36 35 return; 37 36 list_add(&child->ptrace_list, &child->parent->ptrace_children);
+2 -4
kernel/signal.c
··· 769 769 { 770 770 int ret = 0; 771 771 772 - if (!irqs_disabled()) 773 - BUG(); 772 + BUG_ON(!irqs_disabled()); 774 773 assert_spin_locked(&t->sighand->siglock); 775 774 776 775 /* Short-circuit ignored signals. */ ··· 1383 1384 * the overrun count. Other uses should not try to 1384 1385 * send the signal multiple times. 1385 1386 */ 1386 - if (q->info.si_code != SI_TIMER) 1387 - BUG(); 1387 + BUG_ON(q->info.si_code != SI_TIMER); 1388 1388 q->info.si_overrun++; 1389 1389 goto out; 1390 1390 }
+4 -4
kernel/time.c
··· 410 410 * current_fs_time - Return FS time 411 411 * @sb: Superblock. 412 412 * 413 - * Return the current time truncated to the time granuality supported by 413 + * Return the current time truncated to the time granularity supported by 414 414 * the fs. 415 415 */ 416 416 struct timespec current_fs_time(struct super_block *sb) ··· 421 421 EXPORT_SYMBOL(current_fs_time); 422 422 423 423 /** 424 - * timespec_trunc - Truncate timespec to a granuality 424 + * timespec_trunc - Truncate timespec to a granularity 425 425 * @t: Timespec 426 - * @gran: Granuality in ns. 426 + * @gran: Granularity in ns. 427 427 * 428 - * Truncate a timespec to a granuality. gran must be smaller than a second. 428 + * Truncate a timespec to a granularity. gran must be smaller than a second. 429 429 * Always rounds down. 430 430 * 431 431 * This function should be only used for timestamps returned by
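
A hedged sketch of the truncation these comments describe, using the
s_time_gran field from the include/linux/fs.h hunk above (the real
helper is timespec_trunc() in kernel/time.c; this mirrors the documented
round-down behaviour rather than quoting it):

    #include <time.h>

    /* round t down to a granularity of gran nanoseconds, gran <= 1s */
    struct timespec trunc_to_gran(struct timespec t, unsigned int gran)
    {
            if (gran == 1000000000)
                    t.tv_nsec = 0;                  /* whole seconds */
            else if (gran > 1)
                    t.tv_nsec -= t.tv_nsec % gran;  /* round down */
            /* gran <= 1: keep full nanosecond resolution */
            return t;
    }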
+1 -2
kernel/timer.c
··· 1479 1479 unsigned long flags; 1480 1480 1481 1481 /* Sanity check */ 1482 - if (ti->frequency == 0 || ti->mask == 0) 1483 - BUG(); 1482 + BUG_ON(ti->frequency == 0 || ti->mask == 0); 1484 1483 1485 1484 ti->nsec_per_cyc = ((u64)NSEC_PER_SEC << ti->shift) / ti->frequency; 1486 1485 spin_lock(&time_interpolator_lock);
+5 -10
mm/highmem.c
··· 74 74 pkmap_count[i] = 0; 75 75 76 76 /* sanity check */ 77 - if (pte_none(pkmap_page_table[i])) 78 - BUG(); 77 + BUG_ON(pte_none(pkmap_page_table[i])); 79 78 80 79 /* 81 80 * Don't need an atomic fetch-and-clear op here; ··· 157 158 if (!vaddr) 158 159 vaddr = map_new_virtual(page); 159 160 pkmap_count[PKMAP_NR(vaddr)]++; 160 - if (pkmap_count[PKMAP_NR(vaddr)] < 2) 161 - BUG(); 161 + BUG_ON(pkmap_count[PKMAP_NR(vaddr)] < 2); 162 162 spin_unlock(&kmap_lock); 163 163 return (void*) vaddr; 164 164 } ··· 172 174 173 175 spin_lock(&kmap_lock); 174 176 vaddr = (unsigned long)page_address(page); 175 - if (!vaddr) 176 - BUG(); 177 + BUG_ON(!vaddr); 177 178 nr = PKMAP_NR(vaddr); 178 179 179 180 /* ··· 217 220 return 0; 218 221 219 222 page_pool = mempool_create_page_pool(POOL_SIZE, 0); 220 - if (!page_pool) 221 - BUG(); 223 + BUG_ON(!page_pool); 222 224 printk("highmem bounce pool size: %d pages\n", POOL_SIZE); 223 225 224 226 return 0; ··· 260 264 261 265 isa_page_pool = mempool_create(ISA_POOL_SIZE, mempool_alloc_pages_isa, 262 266 mempool_free_pages, (void *) 0); 263 - if (!isa_page_pool) 264 - BUG(); 267 + BUG_ON(!isa_page_pool); 265 268 266 269 printk("isa bounce pool size: %d pages\n", ISA_POOL_SIZE); 267 270 return 0;
+3 -6
mm/mmap.c
··· 294 294 i = browse_rb(&mm->mm_rb); 295 295 if (i != mm->map_count) 296 296 printk("map_count %d rb %d\n", mm->map_count, i), bug = 1; 297 - if (bug) 298 - BUG(); 297 + BUG_ON(bug); 299 298 } 300 299 #else 301 300 #define validate_mm(mm) do { } while (0) ··· 431 432 struct rb_node ** rb_link, * rb_parent; 432 433 433 434 __vma = find_vma_prepare(mm, vma->vm_start,&prev, &rb_link, &rb_parent); 434 - if (__vma && __vma->vm_start < vma->vm_end) 435 - BUG(); 435 + BUG_ON(__vma && __vma->vm_start < vma->vm_end); 436 436 __vma_link(mm, vma, prev, rb_link, rb_parent); 437 437 mm->map_count++; 438 438 } ··· 811 813 * (e.g. stash info in next's anon_vma_node when assigning 812 814 * an anon_vma, or when trying vma_merge). Another time. 813 815 */ 814 - if (find_vma_prev(vma->vm_mm, vma->vm_start, &near) != vma) 815 - BUG(); 816 + BUG_ON(find_vma_prev(vma->vm_mm, vma->vm_start, &near) != vma); 816 817 if (!near) 817 818 goto none; 818 819
+1 -1
mm/page-writeback.c
··· 258 258 /** 259 259 * balance_dirty_pages_ratelimited_nr - balance dirty memory state 260 260 * @mapping: address_space which was dirtied 261 - * @nr_pages: number of pages which the caller has just dirtied 261 + * @nr_pages_dirtied: number of pages which the caller has just dirtied 262 262 * 263 263 * Processes which are dirtying memory should call in here once for each page 264 264 * which was newly dirtied. The function will periodically check the system's
+6 -12
mm/slab.c
··· 1297 1297 if (cache_cache.num) 1298 1298 break; 1299 1299 } 1300 - if (!cache_cache.num) 1301 - BUG(); 1300 + BUG_ON(!cache_cache.num); 1302 1301 cache_cache.gfporder = order; 1303 1302 cache_cache.colour = left_over / cache_cache.colour_off; 1304 1303 cache_cache.slab_size = ALIGN(cache_cache.num * sizeof(kmem_bufctl_t) + ··· 1973 1974 * Always checks flags, a caller might be expecting debug support which 1974 1975 * isn't available. 1975 1976 */ 1976 - if (flags & ~CREATE_MASK) 1977 - BUG(); 1977 + BUG_ON(flags & ~CREATE_MASK); 1978 1978 1979 1979 /* 1980 1980 * Check that size is in terms of words. This is needed to avoid ··· 2204 2206 2205 2207 slabp = list_entry(l3->slabs_free.prev, struct slab, list); 2206 2208 #if DEBUG 2207 - if (slabp->inuse) 2208 - BUG(); 2209 + BUG_ON(slabp->inuse); 2209 2210 #endif 2210 2211 list_del(&slabp->list); 2211 2212 ··· 2245 2248 */ 2246 2249 int kmem_cache_shrink(struct kmem_cache *cachep) 2247 2250 { 2248 - if (!cachep || in_interrupt()) 2249 - BUG(); 2251 + BUG_ON(!cachep || in_interrupt()); 2250 2252 2251 2253 return __cache_shrink(cachep); 2252 2254 } ··· 2273 2277 int i; 2274 2278 struct kmem_list3 *l3; 2275 2279 2276 - if (!cachep || in_interrupt()) 2277 - BUG(); 2280 + BUG_ON(!cachep || in_interrupt()); 2278 2281 2279 2282 /* Don't let CPUs to come and go */ 2280 2283 lock_cpu_hotplug(); ··· 2472 2477 * Be lazy and only check for valid flags here, keeping it out of the 2473 2478 * critical path in kmem_cache_alloc(). 2474 2479 */ 2475 - if (flags & ~(SLAB_DMA | SLAB_LEVEL_MASK | SLAB_NO_GROW)) 2476 - BUG(); 2480 + BUG_ON(flags & ~(SLAB_DMA | SLAB_LEVEL_MASK | SLAB_NO_GROW)); 2477 2481 if (flags & SLAB_NO_GROW) 2478 2482 return 0; 2479 2483
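
Two of the slab assertions above are the common flags-validation idiom:
reject any bit outside the permitted mask in a single test.  Reduced to
a standalone sketch with invented flag names:

    #include <assert.h>

    #define F_READ          0x1
    #define F_WRITE         0x2
    #define F_VALID_MASK    (F_READ | F_WRITE)      /* made-up flags */

    int main(void)
    {
            unsigned int flags = F_READ | F_WRITE;

            /* any bit outside the mask is a caller bug,
             * cf. BUG_ON(flags & ~CREATE_MASK) above */
            assert(!(flags & ~F_VALID_MASK));
            return 0;
    }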
+1 -2
mm/swap_state.c
··· 148 148 swp_entry_t entry; 149 149 int err; 150 150 151 - if (!PageLocked(page)) 152 - BUG(); 151 + BUG_ON(!PageLocked(page)); 153 152 154 153 for (;;) { 155 154 entry = get_swap_page();
+1 -2
mm/vmalloc.c
··· 321 321 int i; 322 322 323 323 for (i = 0; i < area->nr_pages; i++) { 324 - if (unlikely(!area->pages[i])) 325 - BUG(); 324 + BUG_ON(!area->pages[i]); 326 325 __free_page(area->pages[i]); 327 326 } 328 327