
[PATCH] x86-64: Dynamically adjust machine check interval

Background:
We've found that MCEs (specifically DRAM SBEs) tend to come in bunches,
especially when we are trying really hard to stress the system out. The
current MCE poller uses a static interval, regardless of whether it has
found MCEs recently.

Description:
This patch makes the MCE poller adjust the polling interval dynamically.
If we find an MCE, poll 2x faster (down to 10 ms). When we stop finding
MCEs, poll 2x slower (up to check_interval seconds). The check_interval
tunable becomes the maximum polling interval. The "Machine check events
logged" printk() is rate limited to once per check_interval, which
matches the old behavior.

Result:
If you start to take a lot of correctable errors (not exceptions), you
log them faster and more accurately (less chance of overflowing the MCA
registers). If you don't take a lot of errors, you will see no change.

Alternatives:
I considered simply reducing the polling interval to 10 ms immediately
and keeping it there as long as we continue to find errors. This felt a
bit heavy handed, but does perform significantly better for the default
check_interval of 5 minutes (we're using a few seconds when testing for
DRAM errors). I could be convinced to go with this, if anyone felt it
was not too aggressive.

Testing:
I used an error-injecting DIMM to create lots of correctable DRAM errors
and verified that the polling interval accelerates. The printk() only
happens once per check_interval seconds.

Patch:
This patch is against 2.6.21-rc7.

Signed-off-by: Tim Hockin <thockin@google.com>
Signed-off-by: Andi Kleen <ak@suse.de>

Authored by Tim Hockin, committed by Andi Kleen (commit 8a336b0a, parent f82af20e).

 Documentation/x86_64/machinecheck |  7 ++++++-
 arch/x86_64/kernel/mce.c          | 32 ++++++++++++++++++++------------
 2 files changed, 30 insertions(+), 9 deletions(-)
--- a/Documentation/x86_64/machinecheck
+++ b/Documentation/x86_64/machinecheck
···
 
 check_interval
         How often to poll for corrected machine check errors, in seconds
-        (Note output is hexademical). Default 5 minutes.
+        (Note output is hexademical). Default 5 minutes. When the poller
+        finds MCEs it triggers an exponential speedup (poll more often) on
+        the polling interval. When the poller stops finding MCEs, it
+        triggers an exponential backoff (poll less often) on the polling
+        interval. The check_interval variable is both the initial and
+        maximum polling interval.
 
 tolerant
         Tolerance level. When a machine check exception occurs for a non
···
--- a/arch/x86_64/kernel/mce.c
+++ b/arch/x86_64/kernel/mce.c
···
 #endif /* CONFIG_X86_MCE_INTEL */
 
 /*
- * Periodic polling timer for "silent" machine check errors.
+ * Periodic polling timer for "silent" machine check errors. If the
+ * poller finds an MCE, poll 2x faster. When the poller finds no more
+ * errors, poll 2x slower (up to check_interval seconds).
  */
 
 static int check_interval = 5 * 60; /* 5 minutes */
+static int next_interval; /* in jiffies */
 static void mcheck_timer(struct work_struct *work);
 static DECLARE_DELAYED_WORK(mcheck_work, mcheck_timer);
···
 static void mcheck_timer(struct work_struct *work)
 {
         on_each_cpu(mcheck_check_cpu, NULL, 1, 1);
-        schedule_delayed_work(&mcheck_work, check_interval * HZ);
 
         /*
          * It's ok to read stale data here for notify_user and
···
          * writes.
          */
         if (notify_user && console_logged) {
+                static unsigned long last_print;
+                unsigned long now = jiffies;
+
+                /* if we logged an MCE, reduce the polling interval */
+                next_interval = max(next_interval/2, HZ/100);
                 notify_user = 0;
                 clear_bit(0, &console_logged);
-                printk(KERN_INFO "Machine check events logged\n");
+                if (time_after_eq(now, last_print + (check_interval*HZ))) {
+                        last_print = now;
+                        printk(KERN_INFO "Machine check events logged\n");
+                }
+        } else {
+                next_interval = min(next_interval*2, check_interval*HZ);
         }
+
+        schedule_delayed_work(&mcheck_work, next_interval);
 }
 
 
 static __init int periodic_mcheck_init(void)
 {
-        if (check_interval)
-                schedule_delayed_work(&mcheck_work, check_interval*HZ);
+        next_interval = check_interval * HZ;
+        if (next_interval)
+                schedule_delayed_work(&mcheck_work, next_interval);
         return 0;
 }
 __initcall(periodic_mcheck_init);
···
 /* Reinit MCEs after user configuration changes */
 static void mce_restart(void)
 {
-        if (check_interval)
+        if (next_interval)
                 cancel_delayed_work(&mcheck_work);
         /* Timer race is harmless here */
         on_each_cpu(mce_init, NULL, 1, 1);
-        if (check_interval)
-                schedule_delayed_work(&mcheck_work, check_interval*HZ);
+        next_interval = check_interval * HZ;
+        if (next_interval)
+                schedule_delayed_work(&mcheck_work, next_interval);
 }
 
 static struct sysdev_class mce_sysclass = {