Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

locking: Fix typos in comments

Fix ~16 single-word typos in locking code comments.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

+16 -16
+1 -1
arch/arm/include/asm/spinlock.h
@@ -22,7 +22,7 @@
  * assembler to insert a extra (16-bit) IT instruction, depending on the
  * presence or absence of neighbouring conditional instructions.
  *
- * To avoid this unpredictableness, an approprite IT is inserted explicitly:
+ * To avoid this unpredictability, an appropriate IT is inserted explicitly:
  * the assembler won't change IT instructions which are explicitly present
  * in the input.
  */
+1 -1
include/linux/lockdep.h
@@ -155,7 +155,7 @@
 extern void lockdep_init_task(struct task_struct *task);
 
 /*
- * Split the recrursion counter in two to readily detect 'off' vs recursion.
+ * Split the recursion counter in two to readily detect 'off' vs recursion.
  */
 #define LOCKDEP_RECURSION_BITS	16
 #define LOCKDEP_OFF		(1U << LOCKDEP_RECURSION_BITS)
+1 -1
include/linux/rwsem.h
@@ -110,7 +110,7 @@
 
 /*
  * This is the same regardless of which rwsem implementation that is being used.
- * It is just a heuristic meant to be called by somebody alreadying holding the
+ * It is just a heuristic meant to be called by somebody already holding the
  * rwsem to see if somebody from an incompatible type is wanting access to the
  * lock.
  */
+2 -2
kernel/locking/lockdep.c
@@ -1747,7 +1747,7 @@
 
 /*
  * Step 4: if not match, expand the path by adding the
- * forward or backwards dependencis in the search
+ * forward or backwards dependencies in the search
  *
  */
 first = true;
@@ -1916,7 +1916,7 @@
  * -> B is -(ER)-> or -(EN)->, then we don't need to add A -> B into the
  * dependency graph, as any strong path ..-> A -> B ->.. we can get with
  * having dependency A -> B, we could already get a equivalent path ..-> A ->
- * .. -> B -> .. with A -> .. -> B. Therefore A -> B is reduntant.
+ * .. -> B -> .. with A -> .. -> B. Therefore A -> B is redundant.
  *
  * We need to make sure both the start and the end of A -> .. -> B is not
  * weaker than A -> B. For the start part, please see the comment in
+1 -1
kernel/locking/lockdep_proc.c
@@ -348,7 +348,7 @@
 	debug_locks);
 
 /*
- * Zappped classes and lockdep data buffers reuse statistics.
+ * Zapped classes and lockdep data buffers reuse statistics.
  */
 seq_puts(m, "\n");
 seq_printf(m, " zapped classes: %11lu\n",
+1 -1
kernel/locking/mcs_spinlock.h
@@ -7,7 +7,7 @@
  * The MCS lock (proposed by Mellor-Crummey and Scott) is a simple spin-lock
  * with the desirable properties of being fair, and with each cpu trying
  * to acquire the lock spinning on a local variable.
- * It avoids expensive cache bouncings that common test-and-set spin-lock
+ * It avoids expensive cache bounces that common test-and-set spin-lock
  * implementations incur.
  */
 #ifndef __LINUX_MCS_SPINLOCK_H
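The comment above describes the key MCS property: each waiter spins on its own queue node rather than on a shared lock word. A minimal userspace sketch in C11 atomics (names `mcs_node`, `mcs_lock`, `mcs_unlock` are illustrative, not the kernel's, which lives in this header):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;	/* this waiter's private spin flag */
};

/* tail of the waiter queue; NULL means the lock is free */
typedef _Atomic(struct mcs_node *) mcs_lock_t;

static void mcs_lock(mcs_lock_t *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store(&node->next, NULL);
	atomic_store(&node->locked, false);

	/* atomically enqueue ourselves at the tail */
	prev = atomic_exchange(lock, node);
	if (!prev)
		return;		/* queue was empty: lock acquired */

	/* link in behind the previous waiter, then spin locally */
	atomic_store(&prev->next, node);
	while (!atomic_load(&node->locked))
		;		/* spins on our own cache line only */
}

static void mcs_unlock(mcs_lock_t *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load(&node->next);

	if (!next) {
		/* no known successor: try to swing the tail back to NULL */
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong(lock, &expected, NULL))
			return;
		/* a waiter is mid-enqueue; wait for it to link in */
		while (!(next = atomic_load(&node->next)))
			;
	}
	atomic_store(&next->locked, true);	/* hand off to successor */
}
```

Because each waiter polls only its own `node->locked`, a release touches exactly one remote cache line (the successor's), instead of invalidating every spinner as a test-and-set lock does.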
+2 -2
kernel/locking/mutex.c
@@ -92,7 +92,7 @@
 }
 
 /*
- * Trylock variant that retuns the owning task on failure.
+ * Trylock variant that returns the owning task on failure.
  */
 static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
 {
@@ -207,7 +207,7 @@
 
 /*
  * Give up ownership to a specific task, when @task = NULL, this is equivalent
- * to a regular unlock. Sets PICKUP on a handoff, clears HANDOF, preserves
+ * to a regular unlock. Sets PICKUP on a handoff, clears HANDOFF, preserves
  * WAITERS. Provides RELEASE semantics like a regular unlock, the
  * __mutex_trylock() provides a matching ACQUIRE semantics for the handoff.
  */
+2 -2
kernel/locking/osq_lock.c
@@ -135,7 +135,7 @@
 	 */
 
 	/*
-	 * Wait to acquire the lock or cancelation. Note that need_resched()
+	 * Wait to acquire the lock or cancellation. Note that need_resched()
 	 * will come with an IPI, which will wake smp_cond_load_relaxed() if it
 	 * is implemented with a monitor-wait. vcpu_is_preempted() relies on
 	 * polling, be careful.
@@ -164,7 +164,7 @@
 
 	/*
 	 * We can only fail the cmpxchg() racing against an unlock(),
-	 * in which case we should observe @node->locked becomming
+	 * in which case we should observe @node->locked becoming
 	 * true.
 	 */
 	if (smp_load_acquire(&node->locked))
+2 -2
kernel/locking/rtmutex.c
@@ -706,7 +706,7 @@
 	} else if (prerequeue_top_waiter == waiter) {
 		/*
 		 * The waiter was the top waiter on the lock, but is
-		 * no longer the top prority waiter. Replace waiter in
+		 * no longer the top priority waiter. Replace waiter in
 		 * the owner tasks pi waiters tree with the new top
 		 * (highest priority) waiter and adjust the priority
 		 * of the owner.
@@ -1194,7 +1194,7 @@
 		return;
 
 	/*
-	 * Yell lowdly and stop the task right here.
+	 * Yell loudly and stop the task right here.
 	 */
 	rt_mutex_print_deadlock(w);
 	while (1) {
+1 -1
kernel/locking/rwsem.c
@@ -819,7 +819,7 @@
 	 * we try to get it. The new owner may be a spinnable
 	 * writer.
 	 *
-	 * To take advantage of two scenarios listed agove, the RT
+	 * To take advantage of two scenarios listed above, the RT
 	 * task is made to retry one more time to see if it can
 	 * acquire the lock or continue spinning on the new owning
 	 * writer. Of course, if the time lag is long enough or the
+2 -2
kernel/locking/spinlock.c
@@ -58,10 +58,10 @@
 /*
  * We build the __lock_function inlines here. They are too large for
  * inlining all over the place, but here is only one user per function
- * which embedds them into the calling _lock_function below.
+ * which embeds them into the calling _lock_function below.
  *
  * This could be a long-held lock. We both prepare to spin for a long
- * time (making _this_ CPU preemptable if possible), and we also signal
+ * time (making _this_ CPU preemptible if possible), and we also signal
  * towards that other CPU that it should break the lock ASAP.
  */
 #define BUILD_LOCK_OPS(op, locktype)					\