Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bpf: Track equal scalars history on per-instruction level

Use bpf_verifier_state->jmp_history to track which registers were
updated by find_equal_scalars() (renamed to collect_linked_regs())
when a conditional jump was verified. Use the recorded information in
backtrack_insn() to propagate precision.

E.g. for the following program:

while verifying instructions
1: r1 = r0 |
2: if r1 < 8 goto ... | push r0,r1 as linked registers in jmp_history
3: if r0 > 16 goto ... | push r0,r1 as linked registers in jmp_history
4: r2 = r10 |
5: r2 += r0 v mark_chain_precision(r0)

while doing mark_chain_precision(r0)
5: r2 += r0 | mark r0 precise
4: r2 = r10 |
3: if r0 > 16 goto ... | mark r0,r1 as precise
2: if r1 < 8 goto ... | mark r0,r1 as precise
1: r1 = r0 v

Technically, do this as follows:
- Use 10 bits to identify each register that gains range because of
sync_linked_regs():
- 3 bits for frame number;
- 6 bits for register or stack slot number;
- 1 bit to indicate if register is spilled.
- Use u64 as a vector of 6 such records + 4 bits for vector length.
- Augment struct bpf_jmp_history_entry with a field 'linked_regs'
representing such vector.
- When doing check_cond_jmp_op() remember up to 6 registers that
gain range because of sync_linked_regs() in such a vector.
- Don't propagate range information and reset IDs for registers that
don't fit in 6-value vector.
- Push a pair {instruction index, linked registers vector}
to bpf_verifier_state->jmp_history.
- When doing backtrack_insn() check if any of recorded linked
registers is currently marked precise, if so mark all linked
registers as precise.

This also requires fixes for two test_verifier tests:
- precise: test 1
- precise: test 2

Both tests contain the following instruction sequence:

19: (bf) r2 = r9 ; R2=scalar(id=3) R9=scalar(id=3)
20: (a5) if r2 < 0x8 goto pc+1 ; R2=scalar(id=3,umin=8)
21: (95) exit
22: (07) r2 += 1 ; R2_w=scalar(id=3+1,...)
23: (bf) r1 = r10 ; R1_w=fp0 R10=fp0
24: (07) r1 += -8 ; R1_w=fp-8
25: (b7) r3 = 0 ; R3_w=0
26: (85) call bpf_probe_read_kernel#113

The call to bpf_probe_read_kernel() at (26) forces r2 to be precise.
Previously, this immediately forced all registers with the same id to
become precise when mark_chain_precision() was called.
After this change, precision is propagated to registers sharing the
same id only when the 'if' instruction is backtracked.
Hence the verification log for both tests changes:
regs=r2,r9 -> regs=r2 for instructions 25..20.

Fixes: 904e6ddf4133 ("bpf: Use scalar ids in mark_chain_precision()")
Reported-by: Hao Sun <sunhao.th@gmail.com>
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Closes: https://lore.kernel.org/bpf/CAEf4BzZ0xidVCqB47XnkXcNhkPWF6_nTV7yt+_Lf0kcFEut2Mg@mail.gmail.com/
Link: https://lore.kernel.org/bpf/20240718202357.1746514-2-eddyz87@gmail.com

Authored by Eduard Zingerman, committed by Andrii Nakryiko (4bf79f9b 844f7315)

+239 -32
include/linux/bpf_verifier.h  +4 -0
···
 	u32 prev_idx : 22;
 	/* special flags, e.g., whether insn is doing register stack spill/load */
 	u32 flags : 10;
+	/* additional registers that need precision tracking when this
+	 * jump is backtracked, vector of six 10-bit records
+	 */
+	u64 linked_regs;
 };

 /* Maximum number of register states that can exist at once */
kernel/bpf/verifier.c  +224 -21
···
 	return env->insn_aux_data[insn_idx].jmp_point;
 }

+#define LR_FRAMENO_BITS	3
+#define LR_SPI_BITS	6
+#define LR_ENTRY_BITS	(LR_SPI_BITS + LR_FRAMENO_BITS + 1)
+#define LR_SIZE_BITS	4
+#define LR_FRAMENO_MASK	((1ull << LR_FRAMENO_BITS) - 1)
+#define LR_SPI_MASK	((1ull << LR_SPI_BITS) - 1)
+#define LR_SIZE_MASK	((1ull << LR_SIZE_BITS) - 1)
+#define LR_SPI_OFF	LR_FRAMENO_BITS
+#define LR_IS_REG_OFF	(LR_SPI_BITS + LR_FRAMENO_BITS)
+#define LINKED_REGS_MAX	6
+
+struct linked_reg {
+	u8 frameno;
+	union {
+		u8 spi;
+		u8 regno;
+	};
+	bool is_reg;
+};
+
+struct linked_regs {
+	int cnt;
+	struct linked_reg entries[LINKED_REGS_MAX];
+};
+
+static struct linked_reg *linked_regs_push(struct linked_regs *s)
+{
+	if (s->cnt < LINKED_REGS_MAX)
+		return &s->entries[s->cnt++];
+
+	return NULL;
+}
+
+/* Use u64 as a vector of 6 10-bit values, use first 4-bits to track
+ * number of elements currently in stack.
+ * Pack one history entry for linked registers as 10 bits in the following format:
+ * - 3-bits frameno
+ * - 6-bits spi_or_reg
+ * - 1-bit  is_reg
+ */
+static u64 linked_regs_pack(struct linked_regs *s)
+{
+	u64 val = 0;
+	int i;
+
+	for (i = 0; i < s->cnt; ++i) {
+		struct linked_reg *e = &s->entries[i];
+		u64 tmp = 0;
+
+		tmp |= e->frameno;
+		tmp |= e->spi << LR_SPI_OFF;
+		tmp |= (e->is_reg ? 1 : 0) << LR_IS_REG_OFF;
+
+		val <<= LR_ENTRY_BITS;
+		val |= tmp;
+	}
+	val <<= LR_SIZE_BITS;
+	val |= s->cnt;
+	return val;
+}
+
+static void linked_regs_unpack(u64 val, struct linked_regs *s)
+{
+	int i;
+
+	s->cnt = val & LR_SIZE_MASK;
+	val >>= LR_SIZE_BITS;
+
+	for (i = 0; i < s->cnt; ++i) {
+		struct linked_reg *e = &s->entries[i];
+
+		e->frameno = val & LR_FRAMENO_MASK;
+		e->spi = (val >> LR_SPI_OFF) & LR_SPI_MASK;
+		e->is_reg = (val >> LR_IS_REG_OFF) & 0x1;
+		val >>= LR_ENTRY_BITS;
+	}
+}
+
 /* for any branch, call, exit record the history of jmps in the given state */
 static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_state *cur,
-			    int insn_flags)
+			    int insn_flags, u64 linked_regs)
 {
 	u32 cnt = cur->jmp_history_cnt;
 	struct bpf_jmp_history_entry *p;
···
 			  "verifier insn history bug: insn_idx %d cur flags %x new flags %x\n",
 			  env->insn_idx, env->cur_hist_ent->flags, insn_flags);
 		env->cur_hist_ent->flags |= insn_flags;
+		WARN_ONCE(env->cur_hist_ent->linked_regs != 0,
+			  "verifier insn history bug: insn_idx %d linked_regs != 0: %#llx\n",
+			  env->insn_idx, env->cur_hist_ent->linked_regs);
+		env->cur_hist_ent->linked_regs = linked_regs;
 		return 0;
 	}
···
 	p->idx = env->insn_idx;
 	p->prev_idx = env->prev_insn_idx;
 	p->flags = insn_flags;
+	p->linked_regs = linked_regs;
 	cur->jmp_history_cnt = cnt;
 	env->cur_hist_ent = p;
···
 	return bt->reg_masks[bt->frame] & (1 << reg);
 }

+static inline bool bt_is_frame_reg_set(struct backtrack_state *bt, u32 frame, u32 reg)
+{
+	return bt->reg_masks[frame] & (1 << reg);
+}
+
 static inline bool bt_is_frame_slot_set(struct backtrack_state *bt, u32 frame, u32 slot)
 {
 	return bt->stack_masks[frame] & (1ull << slot);
···
 	}
 }

+/* If any register R in hist->linked_regs is marked as precise in bt,
+ * do bt_set_frame_{reg,slot}(bt, R) for all registers in hist->linked_regs.
+ */
+static void bt_sync_linked_regs(struct backtrack_state *bt, struct bpf_jmp_history_entry *hist)
+{
+	struct linked_regs linked_regs;
+	bool some_precise = false;
+	int i;
+
+	if (!hist || hist->linked_regs == 0)
+		return;
+
+	linked_regs_unpack(hist->linked_regs, &linked_regs);
+	for (i = 0; i < linked_regs.cnt; ++i) {
+		struct linked_reg *e = &linked_regs.entries[i];
+
+		if ((e->is_reg && bt_is_frame_reg_set(bt, e->frameno, e->regno)) ||
+		    (!e->is_reg && bt_is_frame_slot_set(bt, e->frameno, e->spi))) {
+			some_precise = true;
+			break;
+		}
+	}
+
+	if (!some_precise)
+		return;
+
+	for (i = 0; i < linked_regs.cnt; ++i) {
+		struct linked_reg *e = &linked_regs.entries[i];
+
+		if (e->is_reg)
+			bt_set_frame_reg(bt, e->frameno, e->regno);
+		else
+			bt_set_frame_slot(bt, e->frameno, e->spi);
+	}
+}
+
 static bool calls_callback(struct bpf_verifier_env *env, int insn_idx);

 /* For given verifier state backtrack_insn() is called from the last insn to
···
 		verbose(env, "%d: ", idx);
 		print_bpf_insn(&cbs, insn, env->allow_ptr_leaks);
 	}
+
+	/* If there is a history record that some registers gained range at this insn,
+	 * propagate precision marks to those registers, so that bt_is_reg_set()
+	 * accounts for these registers.
+	 */
+	bt_sync_linked_regs(bt, hist);

 	if (class == BPF_ALU || class == BPF_ALU64) {
 		if (!bt_is_reg_set(bt, dreg))
···
 			 */
 			bt_set_reg(bt, dreg);
 			bt_set_reg(bt, sreg);
-		/* else dreg <cond> K
+		} else if (BPF_SRC(insn->code) == BPF_K) {
+			/* dreg <cond> K
 			 * Only dreg still needs precision before
 			 * this insn, so for the K-based conditional
 			 * there is nothing new to be marked.
···
 		/* to be analyzed */
 		return -ENOTSUPP;
 	}
+	/* Propagate precision marks to linked registers, to account for
+	 * registers marked as precise in this function.
+	 */
+	bt_sync_linked_regs(bt, hist);
 	return 0;
 }
···
 	if (!src_reg->id && !tnum_is_const(src_reg->var_off))
 		/* Ensure that src_reg has a valid ID that will be copied to
-		 * dst_reg and then will be used by find_equal_scalars() to
+		 * dst_reg and then will be used by sync_linked_regs() to
 		 * propagate min/max range.
 		 */
 		src_reg->id = ++env->id_gen;
···
 	}

 	if (insn_flags)
-		return push_jmp_history(env, env->cur_state, insn_flags);
+		return push_jmp_history(env, env->cur_state, insn_flags, 0);
 	return 0;
 }
···
 		insn_flags = 0; /* we are not restoring spilled register */
 	}
 	if (insn_flags)
-		return push_jmp_history(env, env->cur_state, insn_flags);
+		return push_jmp_history(env, env->cur_state, insn_flags, 0);
 	return 0;
 }
···
 		u64 val = reg_const_value(src_reg, alu32);

 		if ((dst_reg->id & BPF_ADD_CONST) ||
-		    /* prevent overflow in find_equal_scalars() later */
+		    /* prevent overflow in sync_linked_regs() later */
 		    val > (u32)S32_MAX) {
 			/*
 			 * If the register already went through rX += val
···
 	} else {
 		/*
 		 * Make sure ID is cleared otherwise dst_reg min/max could be
-		 * incorrectly propagated into other registers by find_equal_scalars()
+		 * incorrectly propagated into other registers by sync_linked_regs()
 		 */
 		dst_reg->id = 0;
 	}
···
 			copy_register_state(dst_reg, src_reg);
 			/* Make sure ID is cleared if src_reg is not in u32
 			 * range otherwise dst_reg min/max could be incorrectly
-			 * propagated into src_reg by find_equal_scalars()
+			 * propagated into src_reg by sync_linked_regs()
 			 */
 			if (!is_src_reg_u32)
 				dst_reg->id = 0;
···
 	return true;
 }

-static void find_equal_scalars(struct bpf_verifier_state *vstate,
-			       struct bpf_reg_state *known_reg)
+static void __collect_linked_regs(struct linked_regs *reg_set, struct bpf_reg_state *reg,
+				  u32 id, u32 frameno, u32 spi_or_reg, bool is_reg)
+{
+	struct linked_reg *e;
+
+	if (reg->type != SCALAR_VALUE || (reg->id & ~BPF_ADD_CONST) != id)
+		return;
+
+	e = linked_regs_push(reg_set);
+	if (e) {
+		e->frameno = frameno;
+		e->is_reg = is_reg;
+		e->regno = spi_or_reg;
+	} else {
+		reg->id = 0;
+	}
+}
+
+/* For all R being scalar registers or spilled scalar registers
+ * in verifier state, save R in linked_regs if R->id == id.
+ * If there are too many Rs sharing same id, reset id for leftover Rs.
+ */
+static void collect_linked_regs(struct bpf_verifier_state *vstate, u32 id,
+				struct linked_regs *linked_regs)
+{
+	struct bpf_func_state *func;
+	struct bpf_reg_state *reg;
+	int i, j;
+
+	id = id & ~BPF_ADD_CONST;
+	for (i = vstate->curframe; i >= 0; i--) {
+		func = vstate->frame[i];
+		for (j = 0; j < BPF_REG_FP; j++) {
+			reg = &func->regs[j];
+			__collect_linked_regs(linked_regs, reg, id, i, j, true);
+		}
+		for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
+			if (!is_spilled_reg(&func->stack[j]))
+				continue;
+			reg = &func->stack[j].spilled_ptr;
+			__collect_linked_regs(linked_regs, reg, id, i, j, false);
+		}
+	}
+}
+
+/* For all R in linked_regs, copy known_reg range into R
+ * if R->id == known_reg->id.
+ */
+static void sync_linked_regs(struct bpf_verifier_state *vstate, struct bpf_reg_state *known_reg,
+			     struct linked_regs *linked_regs)
 {
 	struct bpf_reg_state fake_reg;
-	struct bpf_func_state *state;
 	struct bpf_reg_state *reg;
+	struct linked_reg *e;
+	int i;

-	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
+	for (i = 0; i < linked_regs->cnt; ++i) {
+		e = &linked_regs->entries[i];
+		reg = e->is_reg ? &vstate->frame[e->frameno]->regs[e->regno]
+				: &vstate->frame[e->frameno]->stack[e->spi].spilled_ptr;
 		if (reg->type != SCALAR_VALUE || reg == known_reg)
 			continue;
 		if ((reg->id & ~BPF_ADD_CONST) != (known_reg->id & ~BPF_ADD_CONST))
···
 			copy_register_state(reg, known_reg);
 			/*
 			 * Must preserve off, id and add_const flag,
-			 * otherwise another find_equal_scalars() will be incorrect.
+			 * otherwise another sync_linked_regs() will be incorrect.
 			 */
 			reg->off = saved_off;

 			scalar_min_max_add(reg, &fake_reg);
 			reg->var_off = tnum_add(reg->var_off, fake_reg.var_off);
 		}
-	}));
+	}
 }

 static int check_cond_jmp_op(struct bpf_verifier_env *env,
···
 	struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs;
 	struct bpf_reg_state *dst_reg, *other_branch_regs, *src_reg = NULL;
 	struct bpf_reg_state *eq_branch_regs;
+	struct linked_regs linked_regs = {};
 	u8 opcode = BPF_OP(insn->code);
 	bool is_jmp32;
 	int pred = -1;
···
 		return 0;
 	}

+	/* Push scalar registers sharing same ID to jump history,
+	 * do this before creating 'other_branch', so that both
+	 * 'this_branch' and 'other_branch' share this history
+	 * if parent state is created.
+	 */
+	if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id)
+		collect_linked_regs(this_branch, src_reg->id, &linked_regs);
+	if (dst_reg->type == SCALAR_VALUE && dst_reg->id)
+		collect_linked_regs(this_branch, dst_reg->id, &linked_regs);
+	if (linked_regs.cnt > 1) {
+		err = push_jmp_history(env, this_branch, 0, linked_regs_pack(&linked_regs));
+		if (err)
+			return err;
+	}
+
 	other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx,
 				  false);
 	if (!other_branch)
···
 	if (BPF_SRC(insn->code) == BPF_X &&
 	    src_reg->type == SCALAR_VALUE && src_reg->id &&
 	    !WARN_ON_ONCE(src_reg->id != other_branch_regs[insn->src_reg].id)) {
-		find_equal_scalars(this_branch, src_reg);
-		find_equal_scalars(other_branch, &other_branch_regs[insn->src_reg]);
+		sync_linked_regs(this_branch, src_reg, &linked_regs);
+		sync_linked_regs(other_branch, &other_branch_regs[insn->src_reg], &linked_regs);
 	}
 	if (dst_reg->type == SCALAR_VALUE && dst_reg->id &&
 	    !WARN_ON_ONCE(dst_reg->id != other_branch_regs[insn->dst_reg].id)) {
-		find_equal_scalars(this_branch, dst_reg);
-		find_equal_scalars(other_branch, &other_branch_regs[insn->dst_reg]);
+		sync_linked_regs(this_branch, dst_reg, &linked_regs);
+		sync_linked_regs(other_branch, &other_branch_regs[insn->dst_reg], &linked_regs);
 	}

 	/* if one pointer register is compared to another pointer
···
  *
  * First verification path is [1-6]:
  * - at (4) same bpf_reg_state::id (b) would be assigned to r6 and r7;
- * - at (5) r6 would be marked <= X, find_equal_scalars() would also mark
+ * - at (5) r6 would be marked <= X, sync_linked_regs() would also mark
  *   r7 <= X, because r6 and r7 share same id.
  * Next verification path is [1-4, 6].
···
 		 * the current state.
 		 */
 		if (is_jmp_point(env, env->insn_idx))
-			err = err ? : push_jmp_history(env, cur, 0);
+			err = err ? : push_jmp_history(env, cur, 0, 0);
 		err = err ? : propagate_precision(env, &sl->state);
 		if (err)
 			return err;
···
 	}

 	if (is_jmp_point(env, env->insn_idx)) {
-		err = push_jmp_history(env, state, 0);
+		err = push_jmp_history(env, state, 0, 0);
 		if (err)
 			return err;
 	}
tools/testing/selftests/bpf/progs/verifier_subprog_precision.c  +1 -1
···
 __msg("mark_precise: frame0: regs=r6 stack= before 13: (bf) r1 = r7")
 __msg("mark_precise: frame0: regs=r6 stack= before 12: (27) r6 *= 4")
 __msg("mark_precise: frame0: regs=r6 stack= before 11: (25) if r6 > 0x3 goto pc+4")
-__msg("mark_precise: frame0: regs=r6 stack= before 10: (bf) r6 = r0")
+__msg("mark_precise: frame0: regs=r0,r6 stack= before 10: (bf) r6 = r0")
 __msg("mark_precise: frame0: regs=r0 stack= before 9: (85) call bpf_loop")
 /* State entering callback body popped from states stack */
 __msg("from 9 to 17: frame1:")
tools/testing/selftests/bpf/verifier/precise.c  +10 -10
···
 	.result = VERBOSE_ACCEPT,
 	.errstr =
 	"mark_precise: frame0: last_idx 26 first_idx 20\
-	mark_precise: frame0: regs=r2,r9 stack= before 25\
-	mark_precise: frame0: regs=r2,r9 stack= before 24\
-	mark_precise: frame0: regs=r2,r9 stack= before 23\
-	mark_precise: frame0: regs=r2,r9 stack= before 22\
-	mark_precise: frame0: regs=r2,r9 stack= before 20\
+	mark_precise: frame0: regs=r2 stack= before 25\
+	mark_precise: frame0: regs=r2 stack= before 24\
+	mark_precise: frame0: regs=r2 stack= before 23\
+	mark_precise: frame0: regs=r2 stack= before 22\
+	mark_precise: frame0: regs=r2 stack= before 20\
 	mark_precise: frame0: parent state regs=r2,r9 stack=:\
 	mark_precise: frame0: last_idx 19 first_idx 10\
 	mark_precise: frame0: regs=r2,r9 stack= before 19\
···
 	.errstr =
 	"26: (85) call bpf_probe_read_kernel#113\
 	mark_precise: frame0: last_idx 26 first_idx 22\
-	mark_precise: frame0: regs=r2,r9 stack= before 25\
-	mark_precise: frame0: regs=r2,r9 stack= before 24\
-	mark_precise: frame0: regs=r2,r9 stack= before 23\
-	mark_precise: frame0: regs=r2,r9 stack= before 22\
-	mark_precise: frame0: parent state regs=r2,r9 stack=:\
+	mark_precise: frame0: regs=r2 stack= before 25\
+	mark_precise: frame0: regs=r2 stack= before 24\
+	mark_precise: frame0: regs=r2 stack= before 23\
+	mark_precise: frame0: regs=r2 stack= before 22\
+	mark_precise: frame0: parent state regs=r2 stack=:\
 	mark_precise: frame0: last_idx 20 first_idx 20\
 	mark_precise: frame0: regs=r2,r9 stack= before 20\
 	mark_precise: frame0: parent state regs=r2,r9 stack=:\