Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/ftrace, x86/asm: Kill ftrace_caller_end label

One of ftrace_caller_end and ftrace_return is redundant, so unify them.
Rename ftrace_return to ftrace_epilogue to convey that everything after
that label represents, like an afterword, work which happens *after* the
ftrace call, e.g., the function graph tracer.

Steve wants this to rather mean "[a]n event which reflects meaningfully
on a recently ended conflict or struggle." I can imagine that ftrace can
be a struggle sometimes.

Anyway, beef up the comment about the code contents and layout before
the ftrace_epilogue label.

Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1455612202-14414-4-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Borislav Petkov, committed by Ingo Molnar
f1b92bb6 4f6c8938

+13 -12
+5 -6
arch/x86/kernel/ftrace.c
···
 #endif
 
 /* Defined as markers to the end of the ftrace default trampolines */
-extern void ftrace_caller_end(void);
 extern void ftrace_regs_caller_end(void);
-extern void ftrace_return(void);
+extern void ftrace_epilogue(void);
 extern void ftrace_caller_op_ptr(void);
 extern void ftrace_regs_caller_op_ptr(void);
···
 		op_offset = (unsigned long)ftrace_regs_caller_op_ptr;
 	} else {
 		start_offset = (unsigned long)ftrace_caller;
-		end_offset = (unsigned long)ftrace_caller_end;
+		end_offset = (unsigned long)ftrace_epilogue;
 		op_offset = (unsigned long)ftrace_caller_op_ptr;
 	}
···
 
 	/*
 	 * Allocate enough size to store the ftrace_caller code,
-	 * the jmp to ftrace_return, as well as the address of
+	 * the jmp to ftrace_epilogue, as well as the address of
 	 * the ftrace_ops this trampoline is used for.
 	 */
 	trampoline = alloc_tramp(size + MCOUNT_INSN_SIZE + sizeof(void *));
···
 
 	ip = (unsigned long)trampoline + size;
 
-	/* The trampoline ends with a jmp to ftrace_return */
-	jmp = ftrace_jmp_replace(ip, (unsigned long)ftrace_return);
+	/* The trampoline ends with a jmp to ftrace_epilogue */
+	jmp = ftrace_jmp_replace(ip, (unsigned long)ftrace_epilogue);
 	memcpy(trampoline + size, jmp, MCOUNT_INSN_SIZE);
 
 	/*
+8 -6
arch/x86/kernel/mcount_64.S
···
 	restore_mcount_regs
 
 	/*
-	 * The copied trampoline must call ftrace_return as it
+	 * The copied trampoline must call ftrace_epilogue as it
 	 * still may need to call the function graph tracer.
+	 *
+	 * The code up to this label is copied into trampolines so
+	 * think twice before adding any new code or changing the
+	 * layout here.
 	 */
-GLOBAL(ftrace_caller_end)
-
-GLOBAL(ftrace_return)
+GLOBAL(ftrace_epilogue)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 GLOBAL(ftrace_graph_call)
···
 	popfq
 
 	/*
-	 * As this jmp to ftrace_return can be a short jump
+	 * As this jmp to ftrace_epilogue can be a short jump
 	 * it must not be copied into the trampoline.
 	 * The trampoline will add the code to jump
 	 * to the return.
 	 */
 GLOBAL(ftrace_regs_caller_end)
 
-	jmp ftrace_return
+	jmp ftrace_epilogue
 
 END(ftrace_regs_caller)