
bpf: Skip bounds adjustment for conditional jumps on same scalar register

When conditional jumps are performed on the same scalar register
(e.g., r0 <= r0, r0 > r0, r0 < r0), the BPF verifier incorrectly
attempts to adjust the register's min/max bounds. This leads to
invalid range bounds and triggers a BUG warning.

The problematic BPF program:
0: call bpf_get_prandom_u32
1: w8 = 0x80000000
2: r0 &= r8
3: if r0 > r0 goto <exit>

Instruction 3 triggers a kernel warning:
3: if r0 > r0 goto <exit>
true_reg1: range bounds violation u64=[0x1, 0x0] s64=[0x1, 0x0] u32=[0x1, 0x0] s32=[0x1, 0x0] var_off=(0x0, 0x0)
true_reg2: const tnum out of sync with range bounds u64=[0x0, 0xffffffffffffffff] s64=[0x8000000000000000, 0x7fffffffffffffff] var_off=(0x0, 0x0)

Comparing a register with itself should not change its bounds, and for
most comparison operations the result is known in advance (e.g., r0 == r0
is always true, r0 < r0 is always false).

Fix this by:
1. Enhancing is_scalar_branch_taken() to properly compute the branch
direction for same-register comparisons across all BPF jump operations
2. Adding an early return in reg_set_min_max() to skip bounds adjustment
when the branch direction is unknown (e.g., BPF_JSET) on the same register

The fix ensures that unnecessary bounds adjustments are skipped, preventing
the verifier bug while maintaining correct branch direction analysis.

Reported-by: Kaiyan Mei <M202472210@hust.edu.cn>
Reported-by: Yinhao Hu <dddddd@hust.edu.cn>
Closes: https://lore.kernel.org/all/1881f0f5.300df.199f2576a01.Coremail.kaiyanm@hust.edu.cn/
Signed-off-by: KaFai Wan <kafai.wan@linux.dev>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20251103063108.1111764-2-kafai.wan@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

kernel/bpf/verifier.c
···
 	s64 smin2 = is_jmp32 ? (s64)reg2->s32_min_value : reg2->smin_value;
 	s64 smax2 = is_jmp32 ? (s64)reg2->s32_max_value : reg2->smax_value;
 
+	if (reg1 == reg2) {
+		switch (opcode) {
+		case BPF_JGE:
+		case BPF_JLE:
+		case BPF_JSGE:
+		case BPF_JSLE:
+		case BPF_JEQ:
+			return 1;
+		case BPF_JGT:
+		case BPF_JLT:
+		case BPF_JSGT:
+		case BPF_JSLT:
+		case BPF_JNE:
+			return 0;
+		case BPF_JSET:
+			if (tnum_is_const(t1))
+				return t1.value != 0;
+			else
+				return (smin1 <= 0 && smax1 >= 0) ? -1 : 1;
+		default:
+			return -1;
+		}
+	}
+
 	switch (opcode) {
 	case BPF_JEQ:
 		/* constants, umin/umax and smin/smax checks would be
···
 	 * the same object, but we don't bother with that).
 	 */
 	if (false_reg1->type != SCALAR_VALUE || false_reg2->type != SCALAR_VALUE)
+		return 0;
+
+	/* We compute branch direction for same SCALAR_VALUE registers in
+	 * is_scalar_branch_taken(). For unknown branch directions (e.g., BPF_JSET)
+	 * on the same registers, we don't need to adjust the min/max values.
+	 */
+	if (false_reg1 == false_reg2)
 		return 0;
 
 	/* fallthrough (FALSE) branch */