
x86/alternative: Align insn bytes vertically

For easier inspection of which bytes have changed.

For example:

feat: 7*32+12, old: (__x86_indirect_thunk_r10+0x0/0x20 (ffffffff81c02480) len: 17), repl: (ffffffff897813aa, len: 17)
ffffffff81c02480:   old_insn: 41 ff e2 90 90 90 90 90 90 90 90 90 90 90 90 90 90
ffffffff897813aa:   rpl_insn: e8 07 00 00 00 f3 90 0f ae e8 eb f9 4c 89 14 24 c3
ffffffff81c02480: final_insn: e8 07 00 00 00 f3 90 0f ae e8 eb f9 4c 89 14 24 c3

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210601193713.16190-1-bp@alien8.de

+2 -2
arch/x86/kernel/alternative.c
@@ -273,8 +273,8 @@
 		instr, instr, a->instrlen,
 		replacement, a->replacementlen);

-	DUMP_BYTES(instr, a->instrlen, "%px: old_insn: ", instr);
-	DUMP_BYTES(replacement, a->replacementlen, "%px: rpl_insn: ", replacement);
+	DUMP_BYTES(instr, a->instrlen, "%px:   old_insn: ", instr);
+	DUMP_BYTES(replacement, a->replacementlen, "%px:   rpl_insn: ", replacement);

 	memcpy(insn_buff, replacement, a->replacementlen);
 	insn_buff_sz = a->replacementlen;