perf/x86/intel: Fix segfault with PEBS-via-PT with sample_freq

Currently, using PEBS-via-PT with a sample frequency instead of a sample
period causes a segfault. For example:

BUG: kernel NULL pointer dereference, address: 0000000000000195
<NMI>
? __die_body.cold+0x19/0x27
? page_fault_oops+0xca/0x290
? exc_page_fault+0x7e/0x1b0
? asm_exc_page_fault+0x26/0x30
? intel_pmu_pebs_event_update_no_drain+0x40/0x60
? intel_pmu_pebs_event_update_no_drain+0x32/0x60
intel_pmu_drain_pebs_icl+0x333/0x350
handle_pmi_common+0x272/0x3c0
intel_pmu_handle_irq+0x10a/0x2e0
perf_event_nmi_handler+0x2a/0x50

That happens because intel_pmu_pebs_event_update_no_drain() assumes all the
pebs_enabled bits represent counter indexes, which is not always the case.
In this particular case, bits 60 and 61 are set for PEBS-via-PT purposes,
and using them to index cpuc->events[] yields a NULL pointer, which is then
dereferenced.
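
To make the failure mode concrete, here is a minimal user-space sketch of
the broken loop. The names PEBS_PMI_AFTER_EACH_RECORD (bit 60),
PEBS_OUTPUT_PT (bit 61), INTEL_PMC_IDX_FIXED and X86_PMC_IDX_MAX match the
kernel's definitions; the events[] array and the counter mask value are
illustrative stand-ins for cpuc->events[] and the real counter masks:

  #include <stdio.h>
  #include <stdint.h>

  #define X86_PMC_IDX_MAX                 64      /* size of cpuc->events[] */
  #define INTEL_PMC_IDX_FIXED             32
  #define PEBS_PMI_AFTER_EACH_RECORD      (1ULL << 60)
  #define PEBS_OUTPUT_PT                  (1ULL << 61)

  int main(void)
  {
          /* stands in for cpuc->events[]; unclaimed slots are NULL */
          const char *events[X86_PMC_IDX_MAX] = { [0] = "cycles" };

          /* counter 0 in PEBS mode, plus the two PEBS-via-PT control bits */
          uint64_t pebs_enabled = (1ULL << 0) | PEBS_PMI_AFTER_EACH_RECORD |
                                  PEBS_OUTPUT_PT;

          /* illustrative counter mask: 8 GP counters, 4 fixed counters */
          uint64_t cntr_mask = 0xffULL | (0xfULL << INTEL_PMC_IDX_FIXED);

          for (int bit = 0; bit < X86_PMC_IDX_MAX; bit++) {
                  if (!(pebs_enabled & (1ULL << bit)))
                          continue;
                  if (!(cntr_mask & (1ULL << bit))) {
                          /* the old code walked bits 60/61 too, and then
                             dereferenced the NULL events[bit] */
                          printf("bit %2d: not a counter; old code derefs NULL\n",
                                 bit);
                          continue;
                  }
                  printf("bit %2d: counter event '%s'\n", bit, events[bit]);
          }
          return 0;
  }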

The behaviour of PEBS-via-PT with a sample frequency is questionable in any
case: although a PMI is generated after each record
(PEBS_PMI_AFTER_EACH_RECORD), the period is never adjusted, so the requested
frequency is not actually honoured.
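
For reference, PEBS-via-PT is requested by grouping a PEBS-capable event
carrying the aux-output config term under an intel_pt event, so a
frequency-mode invocation of roughly this shape exercises the failing path
(illustrative; not necessarily the original reporter's exact command):

  perf record -F 10000 -e '{intel_pt//,cycles/aux-output/pp}' -- uname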

Putting that aside, fix intel_pmu_pebs_event_update_no_drain() by passing
the mask of counter bits instead of 'size'. Note that, prior to the Fixes
commit, 'size' was limited to the maximum counter index, so bits 60 and 61
were never reached and the issue was not hit.

Fixes: 722e42e45c2f1 ("perf/x86: Support counter mask")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: linux-perf-users@vger.kernel.org
Link: https://lore.kernel.org/r/20250508134452.73960-1-adrian.hunter@intel.com

---
 arch/x86/events/intel/ds.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2465,7 +2465,8 @@
 				setup_pebs_fixed_sample_data);
 }
 
-static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, int size)
+static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64 mask)
 {
+	u64 pebs_enabled = cpuc->pebs_enabled & mask;
 	struct perf_event *event;
 	int bit;
@@ -2478,7 +2477,7 @@
 	 * It needs to call intel_pmu_save_and_restart_reload() to
 	 * update the event->count for this case.
 	 */
-	for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled, size) {
+	for_each_set_bit(bit, (unsigned long *)&pebs_enabled, X86_PMC_IDX_MAX) {
 		event = cpuc->events[bit];
 		if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
 			intel_pmu_save_and_restart_reload(event, 0);
@@ -2513,6 +2512,6 @@
 	}
 
 	if (unlikely(base >= top)) {
-		intel_pmu_pebs_event_update_no_drain(cpuc, size);
+		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
 		return;
 	}
@@ -2627,7 +2626,7 @@
 		(hybrid(cpuc->pmu, fixed_cntr_mask64) << INTEL_PMC_IDX_FIXED);
 
 	if (unlikely(base >= top)) {
-		intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
+		intel_pmu_pebs_event_update_no_drain(cpuc, mask);
 		return;
 	}
 