Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bpf: Remove migrate_disable in kprobe_multi_link_prog_run

The graph tracer framework ensures we won't migrate: kprobe_multi_link_prog_run()
is called all the way from the graph tracer, which disables preemption in
function_graph_enter_regs(). As Jiri and Yonghong suggested, there is no need
to use migrate_disable() here, so some overhead can be avoided. Also add a
cant_sleep() check to document the invariant __this_cpu_inc_return() relies on.

Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250814121430.2347454-1-chen.dylane@linux.dev

Authored by Tao Chen, committed by Andrii Nakryiko
abdaf49b c80d7972

+7 -2
kernel/trace/bpf_trace.c
@@ -2728,20 +2728,25 @@
 	struct pt_regs *regs;
 	int err;
 
+	/*
+	 * graph tracer framework ensures we won't migrate, so there is no need
+	 * to use migrate_disable for bpf_prog_run again. The check here just for
+	 * __this_cpu_inc_return.
+	 */
+	cant_sleep();
+
 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
 		bpf_prog_inc_misses_counter(link->link.prog);
 		err = 1;
 		goto out;
 	}
 
-	migrate_disable();
 	rcu_read_lock();
 	regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
 	err = bpf_prog_run(link->link.prog, regs);
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
-	migrate_enable();
 
 out:
 	__this_cpu_dec(bpf_prog_active);