[PATCH] sched: filter affine wakeups

From: Nick Piggin <nickpiggin@yahoo.com.au>

Track the last waker CPU, and only consider wakeup-balancing if there's a
match between current waker CPU and the previous waker CPU. This ensures
that there is some correlation between two subsequent wakeup events before
we move the task. Should help random-wakeup workloads on large SMP
systems, by reducing the migration attempts by a factor of nr_cpus.
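To illustrate the idea outside the kernel, here is a minimal sketch of the filter, assuming a reduced task structure. The names struct task and affine_wakeup_candidate() are made up for illustration; in the patch itself the logic is open-coded in try_to_wake_up() (see the kernel/sched.c hunks below):

	/* Hypothetical, simplified model of the affine-wakeup filter. */
	struct task {
		int last_waker_cpu;	/* CPU that performed the previous wakeup */
	};

	/* May a wakeup issued from this_cpu consider pulling the task here? */
	static int affine_wakeup_candidate(struct task *p, int this_cpu)
	{
		int correlated = (p->last_waker_cpu == this_cpu);

		/* Remember the current waker for the next wakeup of this task. */
		p->last_waker_cpu = this_cpu;

		/*
		 * Only when two consecutive wakeups come from the same CPU
		 * is the (expensive) migration attempt worth considering.
		 */
		return correlated;
	}

On an uncorrelated wakeup pattern, a given waker CPU matches the previous one roughly 1 in nr_cpus times, which is where the claimed reduction factor comes from.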

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by akpm@osdl.org, committed by Linus Torvalds (d7102e95, 198e2f18)

 include/linux/sched.h |    5 ++++-
 kernel/sched.c        |   10 +++++++++-
 2 files changed, 13 insertions(+), 2 deletions(-)

--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -696,8 +696,11 @@
 
 	int lock_depth;		/* BKL lock depth */
 
-#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
+#if defined(CONFIG_SMP)
+	int last_waker_cpu;	/* CPU that last woke this task up */
+#if defined(__ARCH_WANT_UNLOCKED_CTXSW)
 	int oncpu;
+#endif
 #endif
 	int prio, static_prio;
 	struct list_head run_list;
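For clarity, this is how the new nested block resolves under the possible configurations (a sketch of the preprocessed struct fragment, not literal source):

	/* CONFIG_SMP && __ARCH_WANT_UNLOCKED_CTXSW: both fields present */
	int last_waker_cpu;
	int oncpu;

	/* CONFIG_SMP only: */
	int last_waker_cpu;

	/* !CONFIG_SMP: neither field is present */

The point of the nesting is that last_waker_cpu now exists on all SMP builds, while oncpu keeps its stricter __ARCH_WANT_UNLOCKED_CTXSW dependency.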
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1290,6 +1290,9 @@
 		}
 	}
 
+	if (p->last_waker_cpu != this_cpu)
+		goto out_set_cpu;
+
 	if (unlikely(!cpu_isset(this_cpu, p->cpus_allowed)))
 		goto out_set_cpu;
 
@@ -1359,6 +1362,8 @@
 		this_cpu = smp_processor_id();
 		cpu = task_cpu(p);
 	}
+
+	p->last_waker_cpu = this_cpu;
 
 out_activate:
 #endif /* CONFIG_SMP */
@@ -1441,8 +1446,11 @@
 #ifdef CONFIG_SCHEDSTATS
 	memset(&p->sched_info, 0, sizeof(p->sched_info));
 #endif
-#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
+#if defined(CONFIG_SMP)
+	p->last_waker_cpu = cpu;
+#if defined(__ARCH_WANT_UNLOCKED_CTXSW)
 	p->oncpu = 0;
+#endif
 #endif
 #ifdef CONFIG_PREEMPT
 	/* Want to start with kernel preemption disabled. */
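Taken together, the kernel/sched.c hunks cover the field's whole lifecycle: the fork-time hunk initializes last_waker_cpu from the local cpu variable, the first try_to_wake_up() hunk skips straight to out_set_cpu when the current waker differs from the recorded one, and the second hunk records the current waker on every wakeup so the next decision sees fresh data.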