
KVM: selftests: Reduce number of "unavailable PMU events" combos tested

Reduce the number of combinations of unavailable PMU events masks that are
tested by the PMU counters test. In reality, testing every possible
combination isn't all that interesting, and certainly not worth the tens
of seconds (or worse, minutes) of runtime. Fully testing the 2^N space
will be especially problematic in the near future, as 5 (!) new arch
events are on their way.

Use alternating bit patterns (and 0 and -1u) in the hopes that _if_ there
is ever a KVM bug, it's not something horribly convoluted that shows up
only with a super specific pattern/value.

Reported-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20250919214648.1585683-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

+23 -15
tools/testing/selftests/kvm/x86/pmu_counters_test.c
···
 };
 
 /*
+ * To keep the total runtime reasonable, test only a handful of select,
+ * semi-arbitrary values for the mask of unavailable PMU events.  Test
+ * 0 (all events available) and all ones (no events available) as well
+ * as alternating bit sequences, e.g. to detect if KVM is checking the
+ * wrong bit(s).
+ */
+const uint32_t unavailable_masks[] = {
+	0x0,
+	0xffffffffu,
+	0xaaaaaaaau,
+	0x55555555u,
+	0xf0f0f0f0u,
+	0x0f0f0f0fu,
+	0xa0a0a0a0u,
+	0x0a0a0a0au,
+	0x50505050u,
+	0x05050505u,
+};
+
+/*
  * Test up to PMU v5, which is the current maximum version defined by
  * Intel, i.e. is the last version that is guaranteed to be backwards
  * compatible with KVM's existing behavior.
···
 
 	pr_info("Testing arch events, PMU version %u, perf_caps = %lx\n",
 		v, perf_caps[i]);
-	/*
-	 * To keep the total runtime reasonable, test every
-	 * possible non-zero, non-reserved bitmap combination
-	 * only with the native PMU version and the full bit
-	 * vector length.
-	 */
-	if (v == pmu_version) {
-		for (k = 1; k < (BIT(NR_INTEL_ARCH_EVENTS) - 1); k++)
-			test_arch_events(v, perf_caps[i], NR_INTEL_ARCH_EVENTS, k);
-	}
+
 	/*
 	 * Test single bits for all PMU version and lengths up
 	 * the number of events +1 (to verify KVM doesn't do
···
 	 * ones i.e. all events being available and unavailable.
 	 */
 	for (j = 0; j <= NR_INTEL_ARCH_EVENTS + 1; j++) {
-		test_arch_events(v, perf_caps[i], j, 0);
-		test_arch_events(v, perf_caps[i], j, -1u);
-
-		for (k = 0; k < NR_INTEL_ARCH_EVENTS; k++)
-			test_arch_events(v, perf_caps[i], j, BIT(k));
+		for (k = 1; k < ARRAY_SIZE(unavailable_masks); k++)
+			test_arch_events(v, perf_caps[i], j, unavailable_masks[k]);
 	}
 
 	pr_info("Testing GP counters, PMU version %u, perf_caps = %lx\n",