Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched/fair: Fix external p->on_rq users

Sean noted that ever since commit 152e11f6df29 ("sched/fair: Implement
delayed dequeue") KVM's preemption notifiers have started
mis-classifying preemption vs blocking.

Notably, p->on_rq is no longer sufficient to determine if a task is
runnable or blocked -- the aforementioned commit introduces tasks that
remain on the runqueue even though they will not run again, and
should be considered blocked for many cases.

Add the task_is_runnable() helper to classify things and audit all
external users of the p->on_rq state. Also add a few comments.

Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue")
Reported-by: Sean Christopherson <seanjc@google.com>
Tested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20241010091843.GK33184@noisy.programming.kicks-ass.net

Authored by Peter Zijlstra, committed by Ingo Molnar
cd9626e9 c6508124

+38 -7
include/linux/sched.h  +5

@@ -2133,6 +2133,11 @@
 
 #endif /* CONFIG_SMP */
 
+static inline bool task_is_runnable(struct task_struct *p)
+{
+	return p->on_rq && !p->se.sched_delayed;
+}
+
 extern bool sched_task_on_rq(struct task_struct *p);
 extern unsigned long get_wchan(struct task_struct *p);
 extern struct task_struct *cpu_curr_snapshot(int cpu);
kernel/events/core.c  +1 -1

@@ -9251,7 +9251,7 @@
 		},
 	};
 
-	if (!sched_in && task->on_rq) {
+	if (!sched_in && task_is_runnable(task)) {
 		switch_event.event_id.header.misc |=
 			PERF_RECORD_MISC_SWITCH_OUT_PREEMPT;
 	}
kernel/freezer.c  +6 -1

@@ -109,7 +109,12 @@
 {
 	unsigned int state = READ_ONCE(p->__state);
 
-	if (p->on_rq)
+	/*
+	 * Allow freezing the sched_delayed tasks; they will not execute until
+	 * ttwu() fixes them up, so it is safe to swap their state now, instead
+	 * of waiting for them to get fully dequeued.
+	 */
+	if (task_is_runnable(p))
 		return 0;
 
 	if (p != current && task_curr(p))
kernel/rcu/tasks.h  +9

@@ -986,6 +986,15 @@
 		return false;
 
 	/*
+	 * t->on_rq && !t->se.sched_delayed *could* be considered sleeping but
+	 * since it is a spurious state (it will transition into the
+	 * traditional blocked state or get woken up without outside
+	 * dependencies), not considering it such should only affect timing.
+	 *
+	 * Be conservative for now and not include it.
+	 */
+
+	/*
 	 * Idle tasks (or idle injection) within the idle loop are RCU-tasks
 	 * quiescent states. But CPU boot code performed by the idle task
 	 * isn't a quiescent state.
kernel/sched/core.c  +9 -3

@@ -548,6 +548,11 @@
  * ON_RQ_MIGRATING state is used for migration without holding both
  * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
  *
+ * Additionally it is possible to be ->on_rq but still be considered not
+ * runnable when p->se.sched_delayed is true. These tasks are on the runqueue
+ * but will be dequeued as soon as they get picked again. See the
+ * task_is_runnable() helper.
+ *
  * p->on_cpu <- { 0, 1 }:
  *
  * is set by prepare_task() and cleared by finish_task() such that it will be

@@ -4322,9 +4317,10 @@
  * @arg: Argument to function.
  *
  * Fix the task in it's current state by avoiding wakeups and or rq operations
- * and call @func(@arg) on it. This function can use ->on_rq and task_curr()
- * to work out what the state is, if required. Given that @func can be invoked
- * with a runqueue lock held, it had better be quite lightweight.
+ * and call @func(@arg) on it. This function can use task_is_runnable() and
+ * task_curr() to work out what the state is, if required. Given that @func
+ * can be invoked with a runqueue lock held, it had better be quite
+ * lightweight.
  *
  * Returns:
  *   Whatever @func returns
kernel/time/tick-sched.c  +6

@@ -434,6 +434,12 @@
 	 *   smp_mb__after_spin_lock()
 	 *   tick_nohz_task_switch()
 	 *     LOAD p->tick_dep_mask
+	 *
+	 * XXX given a task picks up the dependency on schedule(), should we
+	 * only care about tasks that are currently on the CPU instead of all
+	 * that are on the runqueue?
+	 *
+	 * That is, does this want to be: task_on_cpu() / task_curr()?
 	 */
 	if (!sched_task_on_rq(tsk))
 		return;
kernel/trace/trace_selftest.c  +1 -1

@@ -1485,7 +1485,7 @@
 	/* reset the max latency */
 	tr->max_latency = 0;
 
-	while (p->on_rq) {
+	while (task_is_runnable(p)) {
 		/*
 		 * Sleep to make sure the -deadline thread is asleep too.
 		 * On virtual machines we can't rely on timings,
virt/kvm/kvm_main.c  +1 -1

@@ -6387,7 +6387,7 @@
 
 	WRITE_ONCE(vcpu->scheduled_out, true);
 
-	if (current->on_rq && vcpu->wants_to_run) {
+	if (task_is_runnable(current) && vcpu->wants_to_run) {
 		WRITE_ONCE(vcpu->preempted, true);
 		WRITE_ONCE(vcpu->ready, true);
 	}