
Merge branch 'fix-truncation-bug-in-coerce_reg_to_size_sx-and-extend-selftests'

Dimitar Kanaliev says:

====================
Fix truncation bug in coerce_reg_to_size_sx and extend selftests.

This patch series addresses a truncation bug in the eBPF verifier function
coerce_reg_to_size_sx(). The issue was caused by the incorrect ordering
of assignments between 32-bit and 64-bit min/max values, leading to
improper truncation when updating the register state. This issue was
reported previously by Zac Ecob [1], but was not followed up on.

The first patch fixes the assignment order in coerce_reg_to_size_sx()
to ensure correct truncation. The subsequent patches add selftests for
coerce_{reg,subreg}_to_size_sx.

Changelog:
v1 -> v2:
- Moved selftests inside the conditional check for cpuv4

[1] https://lore.kernel.org/bpf/h3qKLDEO6m9nhif0eAQX4fVrqdO0D_OPb0y5HfMK9jBePEKK33wQ3K-bqSVnr0hiZdFZtSJOsbNkcEQGpv_yJk61PAAiO8fUkgMRSO-lB50=@protonmail.com/
====================

Link: https://lore.kernel.org/r/20241014121155.92887-1-dimitar.kanaliev@siteground.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

 kernel/bpf/verifier.c                              |  8 ++++----
 tools/testing/selftests/bpf/progs/verifier_movsx.c | 40 ++++++++++++++++++
 2 files changed, 44 insertions(+), 4 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6339,10 +6339,10 @@
 
 	/* both of s64_max/s64_min positive or negative */
 	if ((s64_max >= 0) == (s64_min >= 0)) {
-		reg->smin_value = reg->s32_min_value = s64_min;
-		reg->smax_value = reg->s32_max_value = s64_max;
-		reg->umin_value = reg->u32_min_value = s64_min;
-		reg->umax_value = reg->u32_max_value = s64_max;
+		reg->s32_min_value = reg->smin_value = s64_min;
+		reg->s32_max_value = reg->smax_value = s64_max;
+		reg->u32_min_value = reg->umin_value = s64_min;
+		reg->u32_max_value = reg->umax_value = s64_max;
 		reg->var_off = tnum_range(s64_min, s64_max);
 		return;
 	}
--- a/tools/testing/selftests/bpf/progs/verifier_movsx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_movsx.c
@@ -287,6 +287,46 @@
 	: __clobber_all);
 }
 
+SEC("socket")
+__description("MOV64SX, S8, unsigned range_check")
+__success __retval(0)
+__naked void mov64sx_s8_range_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x1;					\
+	r0 += 0xfe;					\
+	r0 = (s8)r0;					\
+	if r0 < 0xfffffffffffffffe goto label_%=;	\
+	r0 = 0;						\
+	exit;						\
+label_%=:						\
+	exit;						\
+	"	:
+		: __imm(bpf_get_prandom_u32)
+		: __clobber_all);
+}
+
+SEC("socket")
+__description("MOV32SX, S8, unsigned range_check")
+__success __retval(0)
+__naked void mov32sx_s8_range_check(void)
+{
+	asm volatile ("					\
+	call %[bpf_get_prandom_u32];			\
+	w0 &= 0x1;					\
+	w0 += 0xfe;					\
+	w0 = (s8)w0;					\
+	if w0 < 0xfffffffe goto label_%=;		\
+	r0 = 0;						\
+	exit;						\
+label_%=:						\
+	exit;						\
+	"	:
+		: __imm(bpf_get_prandom_u32)
+		: __clobber_all);
+}
+
 #else
 
 SEC("socket")