Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

locking/rwsem: Avoid double checking before try acquiring write lock

Commit 9b0fc9c09f1b ("rwsem: skip initial trylock in rwsem_down_write_failed")
checks whether there are known active lockers in order to avoid a write
trylock using the expensive cmpxchg() when it likely wouldn't get the lock.

However, a subsequent patch added a direct check for
sem->count == RWSEM_WAITING_BIAS right before attempting
that cmpxchg().

Thus, commit 9b0fc9c09f1b now just adds overhead.

This patch modifies it so that we only check whether
count == RWSEM_WAITING_BIAS.

Also, add a comment on why we do an "extra check" of count
before the cmpxchg().
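
The pattern being kept here — do a cheap plain read first and only attempt the
expensive atomic compare-and-swap when the read says it can succeed — can be
sketched as a standalone C11 program. This is an illustrative sketch, not the
kernel's code: the bias constants and try_write_lock() are stand-in names, and
C11 <stdatomic.h> replaces the kernel's cmpxchg():

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the kernel's rwsem bias values. */
    #define WAITING_BIAS      1L
    #define ACTIVE_WRITE_BIAS 2L

    static bool try_write_lock(atomic_long *count)
    {
            long expected = WAITING_BIAS;

            /*
             * Cheap check first: if the plain read already shows the
             * lock is not acquirable, skip the cmpxchg entirely and
             * avoid a failed atomic read-modify-write on a contended
             * cache line.
             */
            if (atomic_load_explicit(count, memory_order_relaxed) !=
                WAITING_BIAS)
                    return false;

            /* The CAS can still fail if another CPU raced us here. */
            return atomic_compare_exchange_strong(count, &expected,
                                                  ACTIVE_WRITE_BIAS);
    }

    int main(void)
    {
            atomic_long count = WAITING_BIAS;

            printf("first try:  %d\n", try_write_lock(&count));
            printf("second try: %d\n", try_write_lock(&count));
            return 0;
    }

The first call succeeds and installs ACTIVE_WRITE_BIAS; the second call bails
out at the plain read without ever issuing the CAS — which is exactly the
overhead this commit is trying to minimize.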

Signed-off-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1410913017.2447.22.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>

authored by Jason Low and committed by Ingo Molnar
debfab74 db0e716a

+11 -9
kernel/locking/rwsem-xadd.c
 static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
 {
-	if (!(count & RWSEM_ACTIVE_MASK)) {
-		/* try acquiring the write lock */
-		if (sem->count == RWSEM_WAITING_BIAS &&
-		    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
-			    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
-			if (!list_is_singular(&sem->wait_list))
-				rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
-			return true;
-		}
+	/*
+	 * Try acquiring the write lock. Check count first in order
+	 * to reduce unnecessary expensive cmpxchg() operations.
+	 */
+	if (count == RWSEM_WAITING_BIAS &&
+	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
+		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
+		if (!list_is_singular(&sem->wait_list))
+			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
+		return true;
 	}
+
 	return false;
 }