Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

LoongArch: BPF: Enhance the bpf_arch_text_poke() function

Enhance the bpf_arch_text_poke() function to enable accurate location
of BPF program entry points.

When modifying the entry point of a BPF program, skip the "move t0, ra"
instruction at the program entry so that the jump instructions are
generated for, and written to, the correct address.

Cc: stable@vger.kernel.org
Fixes: 677e6123e3d2 ("LoongArch: BPF: Disable trampoline for kernel module function trace")
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

authored by Chenghao Duan, committed by Huacai Chen
73721d86 26138762

+16 -1
arch/loongarch/net/bpf_jit.c
@@ -1309,14 +1309,29 @@
 {
 	int ret;
 	bool is_call;
+	unsigned long size = 0;
+	unsigned long offset = 0;
+	void *image = NULL;
+	char namebuf[KSYM_NAME_LEN];
 	u32 old_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
 	u32 new_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
 
 	/* Only poking bpf text is supported. Since kernel function entry
 	 * is set up by ftrace, we rely on ftrace to poke kernel functions.
 	 */
-	if (!is_bpf_text_address((unsigned long)ip))
+	if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf))
 		return -ENOTSUPP;
+
+	image = ip - offset;
+
+	/* zero offset means we're poking bpf prog entry */
+	if (offset == 0) {
+		/* skip to the nop instruction in bpf prog entry:
+		 * move t0, ra
+		 * nop
+		 */
+		ip = image + LOONGARCH_INSN_SIZE;
+	}
 
 	is_call = old_t == BPF_MOD_CALL;
 	ret = emit_jump_or_nops(old_addr, ip, old_insns, is_call);