Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched: Pull up the might_sleep() check into cond_resched()

might_sleep() is called late in cond_resched(), after the
need_resched(), preempt-enabled and system-running tests have
already been performed.

It's better to check for sleeping while atomic earlier, rather
than depending on environment state that reduces the chances of
detecting a problem.

Also define the cond_resched_*() helpers as macros, so that the
__FILE__/__LINE__ reported in the sleeping-while-atomic warning
points at the real origin and not at sched.h.

Changes in v2:

- Call __might_sleep() directly instead of might_sleep(), which
may itself call cond_resched()

- Turn cond_resched() into a macro so that the file:line pair
reported refers to the caller of cond_resched() and not to
__cond_resched() itself.

Changes in v3:

- Also propagate this __might_sleep() pull up to
cond_resched_lock() and cond_resched_softirq()

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1247725694-6082-6-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Frederic Weisbecker, committed by Ingo Molnar
613afbf8 6f80bd98

+25 -17 total

+1 fs/dcache.c
@@ -32,6 +32,7 @@
 #include <linux/swap.h>
 #include <linux/bootmem.h>
 #include <linux/fs_struct.h>
+#include <linux/hardirq.h>
 #include "internal.h"
 
 int sysctl_vfs_cache_pressure __read_mostly = 100;
+19 -10 include/linux/sched.h
@@ -2286,17 +2286,26 @@
  */
 extern int _cond_resched(void);
 
-static inline int cond_resched(void)
-{
-	return _cond_resched();
-}
+#define cond_resched() ({			\
+	__might_sleep(__FILE__, __LINE__, 0);	\
+	_cond_resched();			\
+})
 
-extern int cond_resched_lock(spinlock_t * lock);
-extern int cond_resched_softirq(void);
-static inline int cond_resched_bkl(void)
-{
-	return _cond_resched();
-}
+extern int __cond_resched_lock(spinlock_t *lock);
+
+#define cond_resched_lock(lock) ({				\
+	__might_sleep(__FILE__, __LINE__, PREEMPT_OFFSET);	\
+	__cond_resched_lock(lock);				\
+})
+
+extern int __cond_resched_softirq(void);
+
+#define cond_resched_softirq() ({				\
+	__might_sleep(__FILE__, __LINE__, SOFTIRQ_OFFSET);	\
+	__cond_resched_softirq();				\
+})
+
+#define cond_resched_bkl()	cond_resched()
 
 /*
  * Does a critical section need to be broken due to another
+5 -7 kernel/sched.c

@@ -6610,8 +6610,6 @@
 
 static void __cond_resched(void)
 {
-	__might_sleep(__FILE__, __LINE__, 0);
-
 	add_preempt_count(PREEMPT_ACTIVE);
 	schedule();
 	sub_preempt_count(PREEMPT_ACTIVE);
@@ -6626,14 +6628,14 @@
 EXPORT_SYMBOL(_cond_resched);
 
 /*
- * cond_resched_lock() - if a reschedule is pending, drop the given lock,
+ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
  * call schedule, and on return reacquire the lock.
  *
  * This works OK both with and without CONFIG_PREEMPT. We do strange low-level
  * operations here to prevent schedule() from being called twice (once via
  * spin_unlock(), once by hand).
  */
-int cond_resched_lock(spinlock_t *lock)
+int __cond_resched_lock(spinlock_t *lock)
 {
 	int resched = should_resched();
 	int ret = 0;
@@ -6649,9 +6651,9 @@
 	}
 	return ret;
 }
-EXPORT_SYMBOL(cond_resched_lock);
+EXPORT_SYMBOL(__cond_resched_lock);
 
-int __sched cond_resched_softirq(void)
+int __sched __cond_resched_softirq(void)
 {
 	BUG_ON(!in_softirq());
 
@@ -6663,7 +6665,7 @@
 	}
 	return 0;
 }
-EXPORT_SYMBOL(cond_resched_softirq);
+EXPORT_SYMBOL(__cond_resched_softirq);
 
 /**
  * yield - yield the current processor to other threads.
··· 6610 6610 6611 6611 static void __cond_resched(void) 6612 6612 { 6613 - __might_sleep(__FILE__, __LINE__, 0); 6614 - 6615 6613 add_preempt_count(PREEMPT_ACTIVE); 6616 6614 schedule(); 6617 6615 sub_preempt_count(PREEMPT_ACTIVE); ··· 6626 6628 EXPORT_SYMBOL(_cond_resched); 6627 6629 6628 6630 /* 6629 - * cond_resched_lock() - if a reschedule is pending, drop the given lock, 6631 + * __cond_resched_lock() - if a reschedule is pending, drop the given lock, 6630 6632 * call schedule, and on return reacquire the lock. 6631 6633 * 6632 6634 * This works OK both with and without CONFIG_PREEMPT. We do strange low-level 6633 6635 * operations here to prevent schedule() from being called twice (once via 6634 6636 * spin_unlock(), once by hand). 6635 6637 */ 6636 - int cond_resched_lock(spinlock_t *lock) 6638 + int __cond_resched_lock(spinlock_t *lock) 6637 6639 { 6638 6640 int resched = should_resched(); 6639 6641 int ret = 0; ··· 6649 6651 } 6650 6652 return ret; 6651 6653 } 6652 - EXPORT_SYMBOL(cond_resched_lock); 6654 + EXPORT_SYMBOL(__cond_resched_lock); 6653 6655 6654 - int __sched cond_resched_softirq(void) 6656 + int __sched __cond_resched_softirq(void) 6655 6657 { 6656 6658 BUG_ON(!in_softirq()); 6657 6659 ··· 6663 6665 } 6664 6666 return 0; 6665 6667 } 6666 - EXPORT_SYMBOL(cond_resched_softirq); 6668 + EXPORT_SYMBOL(__cond_resched_softirq); 6667 6669 6668 6670 /** 6669 6671 * yield - yield the current processor to other threads.