Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

locking/qspinlock: Rework some comments

While working my way through the code again, I felt the comments could
use some help.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Peter Zijlstra, committed by Ingo Molnar
756b1df4 53bf57fa

+26 -10
kernel/locking/qspinlock.c
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -326,16 +326,23 @@
 	/*
 	 * trylock || pending
 	 *
-	 * 0,0,0 -> 0,0,1 ; trylock
-	 * 0,0,1 -> 0,1,1 ; pending
+	 * 0,0,* -> 0,1,* -> 0,0,1 pending, trylock
 	 */
 	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+
 	/*
-	 * If we observe any contention; undo and queue.
+	 * If we observe contention, there is a concurrent locker.
+	 *
+	 * Undo and queue; our setting of PENDING might have made the
+	 * n,0,0 -> 0,0,0 transition fail and it will now be waiting
+	 * on @next to become !NULL.
 	 */
 	if (unlikely(val & ~_Q_LOCKED_MASK)) {
+
+		/* Undo PENDING if we set it. */
 		if (!(val & _Q_PENDING_MASK))
 			clear_pending(lock);
+
 		goto queue;
 	}
 
@@ -481,16 +474,25 @@
 	 */
 
 	/*
-	 * In the PV case we might already have _Q_LOCKED_VAL set.
+	 * In the PV case we might already have _Q_LOCKED_VAL set, because
+	 * of lock stealing; therefore we must also allow:
 	 *
-	 * The atomic_cond_read_acquire() call above has provided the
-	 * necessary acquire semantics required for locking.
+	 * n,0,1 -> 0,0,1
+	 *
+	 * Note: at this point: (val & _Q_PENDING_MASK) == 0, because of the
+	 * above wait condition, therefore any concurrent setting of
+	 * PENDING will make the uncontended transition fail.
 	 */
-	if (((val & _Q_TAIL_MASK) == tail) &&
-	    atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
-		goto release; /* No contention */
+	if ((val & _Q_TAIL_MASK) == tail) {
+		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
+			goto release; /* No contention */
+	}
 
-	/* Either somebody is queued behind us or _Q_PENDING_VAL is set */
+	/*
+	 * Either somebody is queued behind us or _Q_PENDING_VAL got set
+	 * which will then detect the remaining tail and queue behind us
+	 * ensuring we'll see a @next.
+	 */
 	set_locked(lock);
 
 	/*