Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bpf: Fix net.core.bpf_jit_harden race

It is the bpf_jit_harden counterpart to commit 60b58afc96c9 ("bpf: fix
net.core.bpf_jit_enable race"). If a bpf program has subprogs,
bpf_jit_harden is tested twice for each subprog, and constant blinding
may increase the length of the program, so when running
"./test_progs -t subprogs" while toggling bpf_jit_harden between 0 and 2,
jit_subprogs may fail because constant blinding increases the length
of the subprog instructions during the extra pass.

So cache the value of bpf_jit_blinding_enabled() during program
allocation, and use the cached value during constant blinding, subprog
JITing and args tracking of tail call.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220309123321.2400262-4-houtao1@huawei.com

Authored by Hou Tao, committed by Alexei Starovoitov
d2a3b7c5 73e14451

+6 -3
+1
include/linux/filter.h
@@ -566,6 +566,7 @@
 		gpl_compatible:1, /* Is filter GPL compatible? */
 		cb_access:1, /* Is control block accessed? */
 		dst_needed:1, /* Do we need dst entry? */
+		blinding_requested:1, /* needs constant blinding */
 		blinded:1, /* Was blinded */
 		is_func:1, /* program is a bpf function */
 		kprobe_override:1, /* Do we override a kprobe? */
+2 -1
kernel/bpf/core.c
@@ -105,6 +105,7 @@
 	fp->aux = aux;
 	fp->aux->prog = fp;
 	fp->jit_requested = ebpf_jit_enabled();
+	fp->blinding_requested = bpf_jit_blinding_enabled(fp);

 	INIT_LIST_HEAD_RCU(&fp->aux->ksym.lnode);
 	mutex_init(&fp->aux->used_maps_mutex);
@@ -1383,7 +1382,7 @@
 	struct bpf_insn *insn;
 	int i, rewritten;

-	if (!bpf_jit_blinding_enabled(prog) || prog->blinded)
+	if (!prog->blinding_requested || prog->blinded)
 		return prog;

 	clone = bpf_prog_clone_create(prog, GFP_USER);
+3 -2
kernel/bpf/verifier.c
@@ -13023,6 +13023,7 @@
 		func[i]->aux->name[0] = 'F';
 		func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
 		func[i]->jit_requested = 1;
+		func[i]->blinding_requested = prog->blinding_requested;
 		func[i]->aux->kfunc_tab = prog->aux->kfunc_tab;
 		func[i]->aux->kfunc_btf_tab = prog->aux->kfunc_btf_tab;
 		func[i]->aux->linfo = prog->aux->linfo;
@@ -13147,6 +13146,7 @@
 out_undo_insn:
 	/* cleanup main prog to be interpreted */
 	prog->jit_requested = 0;
+	prog->blinding_requested = 0;
 	for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) {
 		if (!bpf_pseudo_call(insn))
 			continue;
@@ -13241,7 +13239,6 @@
 {
 	struct bpf_prog *prog = env->prog;
 	enum bpf_attach_type eatype = prog->expected_attach_type;
-	bool expect_blinding = bpf_jit_blinding_enabled(prog);
 	enum bpf_prog_type prog_type = resolve_prog_type(prog);
 	struct bpf_insn *insn = prog->insnsi;
 	const struct bpf_func_proto *fn;
@@ -13404,7 +13403,7 @@
 			insn->code = BPF_JMP | BPF_TAIL_CALL;

 			aux = &env->insn_aux_data[i + delta];
-			if (env->bpf_capable && !expect_blinding &&
+			if (env->bpf_capable && !prog->blinding_requested &&
 			    prog->jit_requested &&
 			    !bpf_map_key_poisoned(aux) &&
 			    !bpf_map_ptr_poisoned(aux) &&