
perf/core: Fix crash in perf_event_read()

Alexei had his box explode because doing read() on a package
(rapl/uncore) event that isn't currently scheduled in ends up doing an
out-of-bounds load.

Rework the code to more explicitly deal with event->oncpu being -1.

Reported-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Tested-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: David Carrillo-Cisneros <davidcc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Fixes: d6a2f9035bfc ("perf/core: Introduce PMU_EV_CAP_READ_ACTIVE_PKG")
Link: http://lkml.kernel.org/r/20170131102710.GL6515@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Peter Zijlstra, committed by Ingo Molnar
451d24d1 53e74a11

+15 -10
kernel/events/core.c
@@ -3487,14 +3487,15 @@
 	int ret;
 };
 
-static int find_cpu_to_read(struct perf_event *event, int local_cpu)
+static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
 {
-	int event_cpu = event->oncpu;
 	u16 local_pkg, event_pkg;
 
 	if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
-		event_pkg = topology_physical_package_id(event_cpu);
-		local_pkg = topology_physical_package_id(local_cpu);
+		int local_cpu = smp_processor_id();
+
+		event_pkg = topology_physical_package_id(event_cpu);
+		local_pkg = topology_physical_package_id(local_cpu);
 
 		if (event_pkg == local_pkg)
 			return local_cpu;
@@ -3625,7 +3624,7 @@
 
 static int perf_event_read(struct perf_event *event, bool group)
 {
-	int ret = 0, cpu_to_read, local_cpu;
+	int event_cpu, ret = 0;
 
 	/*
 	 * If event is enabled and currently active on a CPU, update the
@@ -3638,21 +3637,25 @@
 			.ret = 0,
 		};
 
-		local_cpu = get_cpu();
-		cpu_to_read = find_cpu_to_read(event, local_cpu);
-		put_cpu();
+		event_cpu = READ_ONCE(event->oncpu);
+		if ((unsigned)event_cpu >= nr_cpu_ids)
+			return 0;
+
+		preempt_disable();
+		event_cpu = __perf_event_read_cpu(event, event_cpu);
 
 		/*
 		 * Purposely ignore the smp_call_function_single() return
 		 * value.
 		 *
-		 * If event->oncpu isn't a valid CPU it means the event got
+		 * If event_cpu isn't a valid CPU it means the event got
 		 * scheduled out and that will have updated the event count.
 		 *
 		 * Therefore, either way, we'll have an up-to-date event count
 		 * after this.
 		 */
-		(void)smp_call_function_single(cpu_to_read, __perf_event_read, &data, 1);
+		(void)smp_call_function_single(event_cpu, __perf_event_read, &data, 1);
+		preempt_enable();
 		ret = data.ret;
 	} else if (event->state == PERF_EVENT_STATE_INACTIVE) {
 		struct perf_event_context *ctx = event->ctx;