Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

locking/mutex: Introduce ww_mutex_set_context_slowpath()

... which is equivalent to the fastpath counterpart.
This mainly allows getting some WW-specific code out
of generic mutex paths.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1420573509-24774-4-git-send-email-dave@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Davidlohr Bueso, committed by Ingo Molnar
4bd19084 e42f678a

+26 -18
kernel/locking/mutex.c
@@ -147,7 +147,7 @@
 }
 
 /*
- * after acquiring lock with fastpath or when we lost out in contested
+ * After acquiring lock with fastpath or when we lost out in contested
  * slowpath, set ctx and wake up any waiters so they can recheck.
  *
  * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set,
@@ -191,6 +191,30 @@
 	spin_unlock_mutex(&lock->base.wait_lock, flags);
 }
 
+/*
+ * After acquiring lock in the slowpath set ctx and wake up any
+ * waiters so they can recheck.
+ *
+ * Callers must hold the mutex wait_lock.
+ */
+static __always_inline void
+ww_mutex_set_context_slowpath(struct ww_mutex *lock,
+			      struct ww_acquire_ctx *ctx)
+{
+	struct mutex_waiter *cur;
+
+	ww_mutex_lock_acquired(lock, ctx);
+	lock->ctx = ctx;
+
+	/*
+	 * Give any possible sleeping processes the chance to wake up,
+	 * so they can recheck if they have to back off.
+	 */
+	list_for_each_entry(cur, &lock->base.wait_list, list) {
+		debug_mutex_wake_waiter(&lock->base, cur);
+		wake_up_process(cur->task);
+	}
+}
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
 static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
@@ -600,23 +576,7 @@
 
 	if (use_ww_ctx) {
 		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
-		struct mutex_waiter *cur;
-
-		/*
-		 * This branch gets optimized out for the common case,
-		 * and is only important for ww_mutex_lock.
-		 */
-		ww_mutex_lock_acquired(ww, ww_ctx);
-		ww->ctx = ww_ctx;
-
-		/*
-		 * Give any possible sleeping processes the chance to wake up,
-		 * so they can recheck if they have to back off.
-		 */
-		list_for_each_entry(cur, &lock->wait_list, list) {
-			debug_mutex_wake_waiter(lock, cur);
-			wake_up_process(cur->task);
-		}
+		ww_mutex_set_context_slowpath(ww, ww_ctx);
 	}
 
 	spin_unlock_mutex(&lock->wait_lock, flags);