@@ -14,7 +14,7 @@
  * @sched_clock_mask:	Bitmask for two's complement subtraction of non 64bit
  *			clocks.
  * @read_sched_clock:	Current clock source (or dummy source when suspended).
- * @mult:		Multipler for scaled math conversion.
+ * @mult:		Multiplier for scaled math conversion.
  * @shift:		Shift value for scaled math conversion.
  *
  * Care must be taken when updating this structure; it is read by
kernel/sched/core.c  (+2 -2)
@@ -5506,7 +5506,7 @@
 	}
 
 	/*
-	 * Try and select tasks for each sibling in decending sched_class
+	 * Try and select tasks for each sibling in descending sched_class
 	 * order.
 	 */
 	for_each_class(class) {
@@ -5520,7 +5520,7 @@
 
 			/*
 			 * If this sibling doesn't yet have a suitable task to
-			 * run; ask for the most elegible task, given the
+			 * run; ask for the most eligible task, given the
 			 * highest priority task already selected for this
 			 * core.
 			 */
kernel/sched/fair.c  (+3 -3)
@@ -10808,11 +10808,11 @@
 	 * sched_slice() considers only this active rq and it gets the
 	 * whole slice. But during force idle, we have siblings acting
 	 * like a single runqueue and hence we need to consider runnable
-	 * tasks on this cpu and the forced idle cpu. Ideally, we should
+	 * tasks on this CPU and the forced idle CPU. Ideally, we should
 	 * go through the forced idle rq, but that would be a perf hit.
-	 * We can assume that the forced idle cpu has atleast
+	 * We can assume that the forced idle CPU has at least
 	 * MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check
-	 * if we need to give up the cpu.
+	 * if we need to give up the CPU.
 	 */
 	if (rq->core->core_forceidle && rq->cfs.nr_running == 1 &&
 	    __entity_slice_used(&curr->se, MIN_NR_TASKS_DURING_FORCEIDLE))