sched: fix race in schedule()

Fix a hard-to-trigger crash seen in the -rt kernel that also affects
the vanilla scheduler.

There is a race condition between schedule() and some of the
dequeue/enqueue functions: rt_mutex_setprio(), __setscheduler() and
sched_move_task().

When a CPU schedules to idle, idle_balance() is called to pull tasks
from other busy processors, and it may drop the rq lock. This means
those three functions can observe on_rq=0 together with running=1:
the task has already been deactivated but is still rq->curr. The
current task must still be put (put_prev_task()) whenever running=1.

Here is a possible scenario:

    CPU0                                CPU1
                                         |  schedule()
                                         |  ->deactivate_task()
                                         |  ->idle_balance()
                                         |  -->load_balance_newidle()
    rt_mutex_setprio()                   |
     |                                   |  --->double_lock_balance()
    *get lock                           *rel lock
    * on_rq=0, running=1                 |
    * sched_class is changed             |
    *rel lock                           *get lock
     :                                   |
                                         :
                                        ->put_prev_task_rt()
                                        ->pick_next_task_fair()
                                        => panic

P1, the current process on CPU1, is in the middle of schedule(). P1
has been deactivated, and because CPU1 is about to go idle the
scheduler looks for a process to pull from another CPU's runqueue:
idle_balance(), load_balance_newidle() and double_lock_balance() are
called, and double_lock_balance() can drop the rq lock. Meanwhile,
CPU0 is boosting the priority of P1. The boost only changes P1's prio
and sched_class to RT; the sched entities of P1 and P1's group are
never put. This leaves the cfs_rq in an invalid state, with a curr
but no leaf entity, so when pick_next_task_fair() is called the
kernel panics.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Hiroshi Shimamoto, committed by Ingo Molnar · 0e1f3483 4faa8496

+16 -22
kernel/sched.c
@@ -4268,11 +4268,10 @@
 	oldprio = p->prio;
 	on_rq = p->se.on_rq;
 	running = task_current(rq, p);
-	if (on_rq) {
+	if (on_rq)
 		dequeue_task(rq, p, 0);
-		if (running)
-			p->sched_class->put_prev_task(rq, p);
-	}
+	if (running)
+		p->sched_class->put_prev_task(rq, p);
 
 	if (rt_prio(prio))
 		p->sched_class = &rt_sched_class;
@@ -4280,10 +4281,9 @@
 
 	p->prio = prio;
 
+	if (running)
+		p->sched_class->set_curr_task(rq);
 	if (on_rq) {
-		if (running)
-			p->sched_class->set_curr_task(rq);
-
 		enqueue_task(rq, p, 0);
 
 		check_class_changed(rq, p, prev_class, oldprio, running);
@@ -4579,19 +4581,17 @@
 	update_rq_clock(rq);
 	on_rq = p->se.on_rq;
 	running = task_current(rq, p);
-	if (on_rq) {
+	if (on_rq)
 		deactivate_task(rq, p, 0);
-		if (running)
-			p->sched_class->put_prev_task(rq, p);
-	}
+	if (running)
+		p->sched_class->put_prev_task(rq, p);
 
 	oldprio = p->prio;
 	__setscheduler(rq, p, policy, param->sched_priority);
 
+	if (running)
+		p->sched_class->set_curr_task(rq);
 	if (on_rq) {
-		if (running)
-			p->sched_class->set_curr_task(rq);
-
 		activate_task(rq, p, 0);
 
 		check_class_changed(rq, p, prev_class, oldprio, running);
@@ -7614,11 +7618,10 @@
 	running = task_current(rq, tsk);
 	on_rq = tsk->se.on_rq;
 
-	if (on_rq) {
+	if (on_rq)
 		dequeue_task(rq, tsk, 0);
-		if (unlikely(running))
-			tsk->sched_class->put_prev_task(rq, tsk);
-	}
+	if (unlikely(running))
+		tsk->sched_class->put_prev_task(rq, tsk);
 
 	set_task_rq(tsk, task_cpu(tsk));
 
@@ -7626,11 +7631,10 @@
 		tsk->sched_class->moved_group(tsk);
 #endif
 
-	if (on_rq) {
-		if (unlikely(running))
-			tsk->sched_class->set_curr_task(rq);
+	if (unlikely(running))
+		tsk->sched_class->set_curr_task(rq);
+	if (on_rq)
 		enqueue_task(rq, tsk, 0);
-	}
 
 	task_rq_unlock(rq, &flags);
 }