x86/kprobes: Avoid kretprobe recursion bug

Avoid a kretprobe recursion loop bug by setting a dummy
kprobe in the current_kprobe per-CPU variable.

This bug was introduced with the asm-coded trampoline code:
previously, a separate kprobe hooked the function-return
placeholder (which contained only a NOP), and the trampoline
handler was called from that kprobe.

This change revives that old, lost kprobe.

With this fix, the deadlock no longer occurs.

And you can see that all inner-called kretprobes are skipped:

  event_1      235       0
  event_2    19375   19612

The first column is the recorded count and the second is the missed
count. The above shows (event_1 rec) + (event_2 rec) ~= (event_2 missed);
the small discrepancy exists because the counters are racy.

Reported-by: Andrea Righi <righi.andrea@gmail.com>
Tested-by: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: c9becf58d935 ("[PATCH] kretprobe: kretprobe-booster")
Link: http://lkml.kernel.org/r/155094064889.6137.972160690963039.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Commit: b191fa96 fabe38ab — authored by Masami Hiramatsu, committed by Ingo Molnar

+20 -2
arch/x86/kernel/kprobes/core.c
···
 NOKPROBE_SYMBOL(kretprobe_trampoline);
 STACK_FRAME_NON_STANDARD(kretprobe_trampoline);

+static struct kprobe kretprobe_kprobe = {
+	.addr = (void *)kretprobe_trampoline,
+};
+
 /*
  * Called from kretprobe_trampoline
  */
 static __used void *trampoline_handler(struct pt_regs *regs)
 {
+	struct kprobe_ctlblk *kcb;
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head, empty_rp;
 	struct hlist_node *tmp;
···
 	kprobe_opcode_t *correct_ret_addr = NULL;
 	void *frame_pointer;
 	bool skipped = false;
+
+	preempt_disable();
+
+	/*
+	 * Set a dummy kprobe for avoiding kretprobe recursion.
+	 * Since kretprobe never run in kprobe handler, kprobe must not
+	 * be running at this point.
+	 */
+	kcb = get_kprobe_ctlblk();
+	__this_cpu_write(current_kprobe, &kretprobe_kprobe);
+	kcb->kprobe_status = KPROBE_HIT_ACTIVE;

 	INIT_HLIST_HEAD(&empty_rp);
 	kretprobe_hash_lock(current, &head, &flags);
···
 		orig_ret_address = (unsigned long)ri->ret_addr;
 		if (ri->rp && ri->rp->handler) {
 			__this_cpu_write(current_kprobe, &ri->rp->kp);
-			get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
 			ri->ret_addr = correct_ret_addr;
 			ri->rp->handler(ri, regs);
-			__this_cpu_write(current_kprobe, NULL);
+			__this_cpu_write(current_kprobe, &kretprobe_kprobe);
 		}

 		recycle_rp_inst(ri, &empty_rp);
···
 	}

 	kretprobe_hash_unlock(current, &flags);
+
+	__this_cpu_write(current_kprobe, NULL);
+	preempt_enable();

 	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
 		hlist_del(&ri->hlist);