Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86: cpu-hotplug: Prevent softirq wakeup on wrong CPU

After a newly plugged cpu sets the cpu_online bit, it enables
interrupts and goes idle. The cpu which brought up the new cpu waits
for the cpu_online bit and, when it observes it, sets the cpu_active
bit for the new cpu. The cpu_active bit is the one the scheduler
consults when deciding whether a cpu is a viable wakeup target.

With forced threaded interrupt handlers which imply forced threaded
softirqs we observed the following race:

cpu 0                          cpu 1

bringup(cpu1);
                               set_cpu_online(smp_processor_id(), true);
                               local_irq_enable();
while (!cpu_online(cpu1));
                               timer_interrupt()
                                 -> wake_up(softirq_thread_cpu1);
                                      -> enqueue_on(softirq_thread_cpu1, cpu0);
                                                                         ^^^^
cpu_notify(CPU_ONLINE, cpu1);
  -> sched_cpu_active(cpu1)
     -> set_cpu_active(cpu1, true);

When an interrupt happens before the cpu which brought up the newly
onlined cpu has set the cpu_active bit, the scheduler refuses to
enqueue the woken thread, which is bound to the newly onlined cpu, on
that cpu because cpu_active is not yet set, and it selects a fallback
runqueue instead. That is neither expected nor desirable behaviour.

So far this has only been observed with forced hard/softirq threading,
but in theory this could happen without forced threaded hard/softirqs
as well. It's probably unobservable as it would take a massive
interrupt storm on the newly onlined cpu which causes the softirq loop
to wake up the softirq thread and an even longer delay of the cpu
which waits for the cpu_online bit.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Peter Zijlstra <peterz@infradead.org>
Cc: stable@kernel.org # 2.6.39

 arch/x86/kernel/smpboot.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -285,6 +285,19 @@
 	per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
 	x86_platform.nmi_init();
 
+	/*
+	 * Wait until the cpu which brought this one up marked it
+	 * online before enabling interrupts. If we don't do that then
+	 * we can end up waking up the softirq thread before this cpu
+	 * reached the active state, which makes the scheduler unhappy
+	 * and schedule the softirq thread on the wrong cpu. This is
+	 * only observable with forced threaded interrupts, but in
+	 * theory it could also happen w/o them. It's just way harder
+	 * to achieve.
+	 */
+	while (!cpumask_test_cpu(smp_processor_id(), cpu_active_mask))
+		cpu_relax();
+
 	/* enable local interrupts */
 	local_irq_enable();