clockevents: fix reprogramming decision in oneshot broadcast

Resolve the following regression, which left a laptop choppy and almost
unusable:

http://lkml.org/lkml/2007/12/7/299
http://bugzilla.kernel.org/show_bug.cgi?id=9525

A previous version of the code did the reprogramming of the broadcast
device in the return from idle code. This was removed, but the logic in
tick_handle_oneshot_broadcast() was kept the same.

When a broadcast interrupt happens we signal the expiry to all CPUs
which have an expired event. If none of the CPUs has an expired event,
which can happen in dyntick mode, then we reprogram the broadcast
device. We do not reprogram otherwise, but this is only correct if all
CPUs which are in the idle broadcast state have been woken up.

The code ignores that there might be pending, not yet expired events on
other CPUs which are in the idle broadcast state, so the delivery of
those events can be delayed for quite some time.

Change the tick_handle_oneshot_broadcast() function to check for CPUs
which are in broadcast state and are not woken up by the current event,
and enforce rearming of the broadcast device for those CPUs.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Thomas Gleixner and committed by Ingo Molnar (cdc6f27d, bd87f1f0)

+21 -35
kernel/time/tick-broadcast.c
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -384,45 +384,19 @@
 }
 
 /*
- * Reprogram the broadcast device:
- *
- * Called with tick_broadcast_lock held and interrupts disabled.
- */
-static int tick_broadcast_reprogram(void)
-{
-	ktime_t expires = { .tv64 = KTIME_MAX };
-	struct tick_device *td;
-	int cpu;
-
-	/*
-	 * Find the event which expires next:
-	 */
-	for (cpu = first_cpu(tick_broadcast_oneshot_mask); cpu != NR_CPUS;
-	     cpu = next_cpu(cpu, tick_broadcast_oneshot_mask)) {
-		td = &per_cpu(tick_cpu_device, cpu);
-		if (td->evtdev->next_event.tv64 < expires.tv64)
-			expires = td->evtdev->next_event;
-	}
-
-	if (expires.tv64 == KTIME_MAX)
-		return 0;
-
-	return tick_broadcast_set_event(expires, 0);
-}
-
-/*
  * Handle oneshot mode broadcasting
  */
 static void tick_handle_oneshot_broadcast(struct clock_event_device *dev)
 {
 	struct tick_device *td;
 	cpumask_t mask;
-	ktime_t now;
+	ktime_t now, next_event;
 	int cpu;
 
 	spin_lock(&tick_broadcast_lock);
again:
 	dev->next_event.tv64 = KTIME_MAX;
+	next_event.tv64 = KTIME_MAX;
 	mask = CPU_MASK_NONE;
 	now = ktime_get();
 	/* Find all expired events */
@@ -431,19 +405,31 @@
 		td = &per_cpu(tick_cpu_device, cpu);
 		if (td->evtdev->next_event.tv64 <= now.tv64)
 			cpu_set(cpu, mask);
+		else if (td->evtdev->next_event.tv64 < next_event.tv64)
+			next_event.tv64 = td->evtdev->next_event.tv64;
 	}
 
 	/*
-	 * Wakeup the cpus which have an expired event. The broadcast
-	 * device is reprogrammed in the return from idle code.
+	 * Wakeup the cpus which have an expired event.
 	 */
-	if (!tick_do_broadcast(mask)) {
+	tick_do_broadcast(mask);
+
+	/*
+	 * Two reasons for reprogram:
+	 *
+	 * - The global event did not expire any CPU local
+	 *   events. This happens in dyntick mode, as the maximum PIT
+	 *   delta is quite small.
+	 *
+	 * - There are pending events on sleeping CPUs which were not
+	 *   in the event mask
+	 */
+	if (next_event.tv64 != KTIME_MAX) {
 		/*
-		 * The global event did not expire any CPU local
-		 * events. This happens in dyntick mode, as the
-		 * maximum PIT delta is quite small.
+		 * Rearm the broadcast device. If event expired,
+		 * repeat the above
 		 */
-		if (tick_broadcast_reprogram())
+		if (tick_broadcast_set_event(next_event, 0))
 			goto again;
 	}
 	spin_unlock(&tick_broadcast_lock);