Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched/core: Fix typos in comments

Signed-off-by: Tal Zussman <tz2294@columbia.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201113005156.GA8408@charmander

authored by Tal Zussman, committed by Peter Zijlstra
b19a888c 9032dc21

+15 -15
kernel/sched/core.c
@@ -97,7 +97,7 @@
  *
  * Normal scheduling state is serialized by rq->lock. __schedule() takes the
  * local CPU's rq->lock, it optionally removes the task from the runqueue and
- * always looks at the local rq data structures to find the most elegible task
+ * always looks at the local rq data structures to find the most eligible task
  * to run next.
  *
  * Task enqueue is also under rq->lock, possibly taken from another CPU.
@@ -518,7 +518,7 @@

 	/*
 	 * Atomically grab the task, if ->wake_q is !nil already it means
-	 * its already queued (either by us or someone else) and will get the
+	 * it's already queued (either by us or someone else) and will get the
 	 * wakeup due to that.
 	 *
 	 * In order to ensure that a pending wakeup will observe our pending
@@ -769,7 +769,7 @@
 		return false;

 	/*
-	 * If there are more than one RR tasks, we need the tick to effect the
+	 * If there are more than one RR tasks, we need the tick to affect the
 	 * actual RR behaviour.
 	 */
 	if (rq->rt.rr_nr_running) {
@@ -1187,14 +1187,14 @@
 	 * accounting was performed at enqueue time and we can just return
 	 * here.
 	 *
-	 * Need to be careful of the following enqeueue/dequeue ordering
+	 * Need to be careful of the following enqueue/dequeue ordering
 	 * problem too
 	 *
 	 *	enqueue(taskA)
 	 *	// sched_uclamp_used gets enabled
 	 *	enqueue(taskB)
 	 *	dequeue(taskA)
-	 *	// Must not decrement bukcet->tasks here
+	 *	// Must not decrement bucket->tasks here
 	 *	dequeue(taskB)
 	 *
 	 * where we could end up with stale data in uc_se and
@@ -2924,7 +2924,7 @@
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken) {
 		/*
-		 * Our task @p is fully woken up and running; so its safe to
+		 * Our task @p is fully woken up and running; so it's safe to
 		 * drop the rq->lock, hereafter rq is only used for statistics.
 		 */
 		rq_unpin_lock(rq, rf);
@@ -3411,7 +3411,7 @@

 	/*
 	 * If the owning (remote) CPU is still in the middle of schedule() with
-	 * this task as prev, wait until its done referencing the task.
+	 * this task as prev, wait until it's done referencing the task.
 	 *
 	 * Pairs with the smp_store_release() in finish_task().
 	 *
@@ -3816,7 +3816,7 @@
 #ifdef CONFIG_SMP
 	if (p->sched_class->task_woken) {
 		/*
-		 * Nothing relies on rq->lock after this, so its fine to
+		 * Nothing relies on rq->lock after this, so it's fine to
 		 * drop it.
 		 */
 		rq_unpin_lock(rq, &rf);
@@ -4343,7 +4343,7 @@
 }

 /*
- * IO-wait accounting, and how its mostly bollocks (on SMP).
+ * IO-wait accounting, and how it's mostly bollocks (on SMP).
  *
  * The idea behind IO-wait account is to account the idle time that we could
  * have spend running if it were not for IO. That is, if we were to improve the
@@ -4838,7 +4838,7 @@
 	/*
 	 * Optimization: we know that if all tasks are in the fair class we can
 	 * call that function directly, but only if the @prev task wasn't of a
-	 * higher scheduling class, because otherwise those loose the
+	 * higher scheduling class, because otherwise those lose the
 	 * opportunity to pull in more work from other CPUs.
 	 */
 	if (likely(prev->sched_class <= &fair_sched_class &&
@@ -5361,7 +5361,7 @@
 	 * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
 	 * ensure a task is de-boosted (pi_task is set to NULL) before the
 	 * task is allowed to run again (and can exit). This ensures the pointer
-	 * points to a blocked task -- which guaratees the task is present.
+	 * points to a blocked task -- which guarantees the task is present.
 	 */
 	p->pi_top_task = pi_task;

@@ -5479,7 +5479,7 @@
 	/*
 	 * The RT priorities are set via sched_setscheduler(), but we still
 	 * allow the 'normal' nice value to be set - but as expected
-	 * it wont have any effect on scheduling until the task is
+	 * it won't have any effect on scheduling until the task is
 	 * SCHED_DEADLINE, SCHED_FIFO or SCHED_RR:
 	 */
 	if (task_has_dl_policy(p) || task_has_rt_policy(p)) {
@@ -6668,7 +6668,7 @@
 *
 * The scheduler is at all times free to pick the calling task as the most
 * eligible task to run, if removing the yield() call from your code breaks
- * it, its already broken.
+ * it, it's already broken.
 *
 * Typical broken usage is:
 *
@@ -7042,7 +7042,7 @@

 #ifdef CONFIG_SMP
 	/*
-	 * Its possible that init_idle() gets called multiple times on a task,
+	 * It's possible that init_idle() gets called multiple times on a task,
 	 * in that case do_set_cpus_allowed() will not do the right thing.
 	 *
 	 * And since this is boot we can forgo the serialization.
@@ -8225,7 +8225,7 @@
 		return -EINVAL;
 #endif
 	/*
-	 * Serialize against wake_up_new_task() such that if its
+	 * Serialize against wake_up_new_task() such that if it's
 	 * running, we're sure to observe its full state.
 	 */
 	raw_spin_lock_irq(&task->pi_lock);