Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

locking/mutex: Allow next waiter lockless wakeup

Make use of wake-queues and enable the wakeup to occur after releasing the
wait_lock. This is similar to what we do with the rtmutex top waiter,
slightly shortening the critical region and allowing other waiters to
acquire the wait_lock sooner. In low-contention cases it can also help
the recently woken waiter find the wait_lock available (fastpath)
when it continues execution.

Reviewed-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ding Tianhong <dingtianhong@huawei.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Waiman Long <waiman.long@hpe.com>
Cc: Will Deacon <Will.Deacon@arm.com>
Link: http://lkml.kernel.org/r/20160125022343.GA3322@linux-uzut.site
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Davidlohr Bueso, committed by Ingo Molnar
1329ce6f 32d62510

+3 -2
kernel/locking/mutex.c
@@ -716,6 +716,7 @@
 __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 {
 	unsigned long flags;
+	WAKE_Q(wake_q);
 
 	/*
 	 * As a performance measurement, release the lock before doing other
@@ -744,11 +743,11 @@
 			struct mutex_waiter, list);
 
 		debug_mutex_wake_waiter(lock, waiter);
-
-		wake_up_process(waiter->task);
+		wake_q_add(&wake_q, waiter->task);
 	}
 
 	spin_unlock_mutex(&lock->wait_lock, flags);
+	wake_up_q(&wake_q);
 }
 
 /*