Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/retpoline,kprobes: Skip optprobe check for indirect jumps with retpolines and IBT

The kprobes optimization check can_optimize() calls
insn_is_indirect_jump() to detect indirect jump instructions in
a target function. If any is found, creating an optprobe in the function
is disallowed because the jump could come from a jump table and could
potentially land in the middle of the target optprobe.

With retpolines, insn_is_indirect_jump() additionally looks for calls to
indirect thunks, which the compiler may have used to replace the original
jumps. This extra check is, however, unnecessary because jump tables are
disabled when the kernel is built with retpolines. The same currently
holds for IBT.

Based on this observation, remove the logic that looks for calls to
indirect thunks and skip the check for indirect jumps altogether when the
kernel is built with retpolines or IBT. Subsequently, remove the symbols
__indirect_thunk_start and __indirect_thunk_end, which are no longer
needed.

Dropping this logic indirectly fixes a problem where the range
[__indirect_thunk_start, __indirect_thunk_end] wrongly included the
return thunk as well. As a result, machines which used the return thunk
as a mitigation and did not have it patched by any alternative ended up
unable to use optprobes in any regular function.

Fixes: 0b53c374b9ef ("x86/retpoline: Use -mfunction-return")
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/20230711091952.27944-3-petr.pavlu@suse.com

Authored by Petr Pavlu, committed by Borislav Petkov (AMD)
833fd800 79cd2a11

+17 -32
-3
arch/x86/include/asm/nospec-branch.h
···
478 478       	SPEC_STORE_BYPASS_SECCOMP,
479 479       };
480 480       
481       -   extern char __indirect_thunk_start[];
482       -   extern char __indirect_thunk_end[];
483       -   
484 481       static __always_inline
485 482       void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
486 483       {
+16 -24
arch/x86/kernel/kprobes/opt.c
···
226 226       }
227 227       
228 228       /* Check whether insn is indirect jump */
229       -   static int __insn_is_indirect_jump(struct insn *insn)
    229   +   static int insn_is_indirect_jump(struct insn *insn)
230 230       {
231 231       	return ((insn->opcode.bytes[0] == 0xff &&
232 232       		(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
···
258 258       	target = (unsigned long)insn->next_byte + insn->immediate.value;
259 259       
260 260       	return (start <= target && target <= start + len);
261       -   }
262       -   
263       -   static int insn_is_indirect_jump(struct insn *insn)
264       -   {
265       -   	int ret = __insn_is_indirect_jump(insn);
266       -   
267       -   #ifdef CONFIG_RETPOLINE
268       -   	/*
269       -   	 * Jump to x86_indirect_thunk_* is treated as an indirect jump.
270       -   	 * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
271       -   	 * older gcc may use indirect jump. So we add this check instead of
272       -   	 * replace indirect-jump check.
273       -   	 */
274       -   	if (!ret)
275       -   		ret = insn_jump_into_range(insn,
276       -   			(unsigned long)__indirect_thunk_start,
277       -   			(unsigned long)__indirect_thunk_end -
278       -   			(unsigned long)__indirect_thunk_start);
279       -   #endif
280       -   	return ret;
281 261       }
282 262       
283 263       /* Decode whole function to ensure any instructions don't jump into target */
···
314 334       	/* Recover address */
315 335       	insn.kaddr = (void *)addr;
316 336       	insn.next_byte = (void *)(addr + insn.length);
317       -   	/* Check any instructions don't jump into target */
318       -   	if (insn_is_indirect_jump(&insn) ||
319       -   	    insn_jump_into_range(&insn, paddr + INT3_INSN_SIZE,
    337   +   	/*
    338   +   	 * Check any instructions don't jump into target, indirectly or
    339   +   	 * directly.
    340   +   	 *
    341   +   	 * The indirect case is present to handle a code with jump
    342   +   	 * tables. When the kernel uses retpolines, the check should in
    343   +   	 * theory additionally look for jumps to indirect thunks.
    344   +   	 * However, the kernel built with retpolines or IBT has jump
    345   +   	 * tables disabled so the check can be skipped altogether.
    346   +   	 */
    347   +   	if (!IS_ENABLED(CONFIG_RETPOLINE) &&
    348   +   	    !IS_ENABLED(CONFIG_X86_KERNEL_IBT) &&
    349   +   	    insn_is_indirect_jump(&insn))
    350   +   		return 0;
    351   +   	if (insn_jump_into_range(&insn, paddr + INT3_INSN_SIZE,
320 352       			DISP32_SIZE))
321 353       		return 0;
322 354       	addr += insn.length;
-2
arch/x86/kernel/vmlinux.lds.S
···
133 133       	KPROBES_TEXT
134 134       	SOFTIRQENTRY_TEXT
135 135       #ifdef CONFIG_RETPOLINE
136       -   	__indirect_thunk_start = .;
137 136       	*(.text..__x86.indirect_thunk)
138 137       	*(.text..__x86.return_thunk)
139       -   	__indirect_thunk_end = .;
140 138       #endif
141 139       	STATIC_CALL_TEXT
142 140       
+1 -3
tools/perf/util/thread-stack.c
···
1038 1038     
1039 1039     static bool is_x86_retpoline(const char *name)
1040 1040     {
1041        -  	const char *p = strstr(name, "__x86_indirect_thunk_");
1042        -  
1043        -  	return p == name || !strcmp(name, "__indirect_thunk_start");
     1041   +  	return strstr(name, "__x86_indirect_thunk_") == name;
1044 1042     }
1045 1043     
1046 1044     /*