Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

powerpc/ftrace: Don't use copy_from_kernel_nofault() in module_trampoline_target()

module_trampoline_target() is quite a hot path, used when
activating/deactivating the function tracer.

Avoid the heavy copy_from_kernel_nofault() by doing four calls
to copy_inst_from_kernel_nofault().

Use __copy_inst_from_kernel_nofault() for the last three calls. The
first call is done with copy_inst_from_kernel_nofault() to check that
the address is within kernel space. There is no risk of wrapping past
the top of kernel space because the last page is never mapped, so if
the address is in the last page the first copy will fail and the other
ones will never be performed.
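As a toy model of the bounds argument above (not kernel code; the page
size, address-space top, and helper names here are invented for
illustration), a checked read of only the first instruction word is
enough to guard three unchecked follow-up reads:

```c
#include <stdint.h>

#define PAGE_SIZE 0x1000ull        /* assumed page size */
#define ADDR_TOP  0x100000000ull   /* top of a 32-bit address space */

/* Stand-in for the fault check that copy_inst_from_kernel_nofault()
 * relies on: the last page is never mapped, so addresses in it fault. */
static int word_mapped(uint64_t addr)
{
	return addr < ADDR_TOP - PAGE_SIZE;
}

/* Read four consecutive 4-byte instruction words starting at addr,
 * range-checking only the first one, as the patch does. */
static int read_trampoline(uint64_t addr)
{
	if (!word_mapped(addr))
		return -1;           /* first (checked) copy faults */
	/* Here addr < ADDR_TOP - PAGE_SIZE, so addr + 12 < ADDR_TOP:
	 * the three unchecked copies cannot wrap out of the space. */
	return 0;
}
```

A successful first read implies addr is below the unmapped last page,
so the later reads at addr + 4/8/12 stay below ADDR_TOP and cannot
wrap around to address 0.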

Also make it notrace, just like all the functions that call it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c55559103e014b7863161559d340e8e9484eaaa6.1652074503.git.christophe.leroy@csgroup.eu

Authored by Christophe Leroy, committed by Michael Ellerman
8052d043 8dfdbe43

+18 -9
arch/powerpc/kernel/module_32.c
···
 }
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-int module_trampoline_target(struct module *mod, unsigned long addr,
-			     unsigned long *target)
+notrace int module_trampoline_target(struct module *mod, unsigned long addr,
+				     unsigned long *target)
 {
-	unsigned int jmp[4];
+	ppc_inst_t jmp[4];
 
 	/* Find where the trampoline jumps to */
-	if (copy_from_kernel_nofault(jmp, (void *)addr, sizeof(jmp)))
+	if (copy_inst_from_kernel_nofault(jmp, (void *)addr))
+		return -EFAULT;
+	if (__copy_inst_from_kernel_nofault(jmp + 1, (void *)addr + 4))
+		return -EFAULT;
+	if (__copy_inst_from_kernel_nofault(jmp + 2, (void *)addr + 8))
+		return -EFAULT;
+	if (__copy_inst_from_kernel_nofault(jmp + 3, (void *)addr + 12))
 		return -EFAULT;
 
 	/* verify that this is what we expect it to be */
-	if ((jmp[0] & 0xffff0000) != PPC_RAW_LIS(_R12, 0) ||
-	    (jmp[1] & 0xffff0000) != PPC_RAW_ADDI(_R12, _R12, 0) ||
-	    jmp[2] != PPC_RAW_MTCTR(_R12) ||
-	    jmp[3] != PPC_RAW_BCTR())
+	if ((ppc_inst_val(jmp[0]) & 0xffff0000) != PPC_RAW_LIS(_R12, 0))
+		return -EINVAL;
+	if ((ppc_inst_val(jmp[1]) & 0xffff0000) != PPC_RAW_ADDI(_R12, _R12, 0))
+		return -EINVAL;
+	if (ppc_inst_val(jmp[2]) != PPC_RAW_MTCTR(_R12))
+		return -EINVAL;
+	if (ppc_inst_val(jmp[3]) != PPC_RAW_BCTR())
 		return -EINVAL;
 
-	addr = (jmp[1] & 0xffff) | ((jmp[0] & 0xffff) << 16);
+	addr = (ppc_inst_val(jmp[1]) & 0xffff) | ((ppc_inst_val(jmp[0]) & 0xffff) << 16);
 	if (addr & 0x8000)
 		addr -= 0x10000;
···
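The final hunk rebuilds the 32-bit target address from the lis/addi
immediates. A standalone sketch of that arithmetic (assuming the usual
PPC_HA/PPC_LO rounding convention for the emitted immediates; the
helper names here are invented, not kernel APIs):

```c
#include <stdint.h>

/* lis immediate: high half, rounded up when the low half's sign bit is
 * set, because addi will later add a sign-extended (negative) low half. */
static uint16_t ha(uint32_t v) { return (v + 0x8000) >> 16; }

/* addi immediate: low 16 bits, sign-extended by the instruction. */
static uint16_t lo(uint32_t v) { return v & 0xffff; }

/* Mirror of the reconstruction in module_trampoline_target(): combine
 * the halves, then undo the rounding when the low half was negative. */
static uint32_t rebuild(uint16_t hi_imm, uint16_t lo_imm)
{
	uint32_t addr = lo_imm | ((uint32_t)hi_imm << 16);
	if (addr & 0x8000)	/* low half is sign-extended by addi */
		addr -= 0x10000;
	return addr;
}
```

The `addr -= 0x10000` step cancels the `+1` that rounding put into the
high half whenever bit 15 of the low half is set, so the round trip is
exact for any 32-bit target.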