Repository: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

locking/mutex: Simplify some ww_mutex code in __mutex_lock_common()

Remove some of the redundant ww_mutex code in __mutex_lock_common()
by computing the ww_mutex pointer once, up front, instead of repeating
the container_of() in each branch that needs it.

Tested-by: Jason Low <jason.low2@hpe.com>
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Ding Tianhong <dingtianhong@huawei.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Will Deacon <Will.Deacon@arm.com>
Link: http://lkml.kernel.org/r/1472254509-27508-1-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Waiman Long, committed by Ingo Molnar
Commit a40ca565 (parent 5bbd7e64)

+4 -9
kernel/locking/mutex.c
@@ -587,10 +587,11 @@
 	struct mutex_waiter waiter;
 	unsigned long flags;
 	bool first = false;
+	struct ww_mutex *ww;
 	int ret;
 
 	if (use_ww_ctx) {
-		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+		ww = container_of(lock, struct ww_mutex, base);
 		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
 			return -EALREADY;
 	}
@@ -602,12 +603,8 @@
 	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
-		if (use_ww_ctx) {
-			struct ww_mutex *ww;
-			ww = container_of(lock, struct ww_mutex, base);
-
+		if (use_ww_ctx)
 			ww_mutex_set_context_fastpath(ww, ww_ctx);
-		}
 		preempt_enable();
 		return 0;
 	}
@@ -691,10 +688,8 @@
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
 
-	if (use_ww_ctx) {
-		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+	if (use_ww_ctx)
 		ww_mutex_set_context_slowpath(ww, ww_ctx);
-	}
 
 	spin_unlock_mutex(&lock->wait_lock, flags);
 	preempt_enable();