Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bpf: disable and remove registers chain based liveness

Remove register chain based liveness tracking:
- struct bpf_reg_state->{parent,live} fields are no longer needed;
- REG_LIVE_WRITTEN marks are superseded by bpf_mark_stack_write()
calls;
- mark_reg_read() calls are superseded by bpf_mark_stack_read();
- log.c:print_liveness() is superseded by logging in liveness.c;
- propagate_liveness() is superseded by bpf_update_live_stack();
- no need to establish register chains in is_state_visited() anymore;
- fix a bunch of tests expecting "_w" suffixes in verifier log
messages.
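The chain-based scheme being removed here is summarized by the `mark_reg_read()` pseudocode in the removed documentation below. As a minimal sketch (a toy Python model, not kernel code — `RegState` and its fields merely mirror the removed `struct bpf_reg_state` members), read marks walk `->parent` links until screened off by a write mark:

```python
# Toy model of the removed chain-based liveness tracking.
# RegState mirrors the removed fields of struct bpf_reg_state:
# a ->parent link plus READ/WRITTEN mark bits.

REG_LIVE_NONE = 0
REG_LIVE_READ64 = 0x2
REG_LIVE_WRITTEN = 0x4

class RegState:
    def __init__(self, parent=None):
        self.parent = parent
        self.live = REG_LIVE_NONE

def mark_reg_read(state):
    """Propagate a read mark up the parentage chain until it is
    screened off by a write mark (REG_LIVE_WRITTEN)."""
    parent = state.parent
    while parent is not None:
        if state.live & REG_LIVE_WRITTEN:
            break          # value was (re)defined on this chain segment
        if parent.live & REG_LIVE_READ64:
            break          # already marked; no need to keep walking
        parent.live |= REG_LIVE_READ64
        state, parent = parent, parent.parent

# checkpoint #0 -> checkpoint #1 (r6 written in between) -> current state
c0 = RegState()
c1 = RegState(parent=c0)
c1.live |= REG_LIVE_WRITTEN         # models "r6 = 42" between c0 and c1
cur = RegState(parent=c1)

mark_reg_read(cur)                  # models "r1 = r6" in the current state
assert c1.live & REG_LIVE_READ64        # read reaches checkpoint #1
assert not (c0.live & REG_LIVE_READ64)  # screened off by the write mark
```

This reproduces the behavior illustrated in the removed verifier.rst diagram: the r6 read mark propagates to checkpoint #1 but not past its write mark.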

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20250918-callchain-sensitive-liveness-v3-9-c3cd27bacc60@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Authored by Eduard Zingerman, committed by Alexei Starovoitov
107e1697 ccf25a67

+226 -806

Documentation/bpf/verifier.rst (-264)
···
 verification. The goal of the liveness tracking algorithm is to spot this fact
 and figure out that both states are actually equivalent.
 
-Data structures
-~~~~~~~~~~~~~~~
-
-Liveness is tracked using the following data structures::
-
-  enum bpf_reg_liveness {
-      REG_LIVE_NONE = 0,
-      REG_LIVE_READ32 = 0x1,
-      REG_LIVE_READ64 = 0x2,
-      REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64,
-      REG_LIVE_WRITTEN = 0x4,
-      REG_LIVE_DONE = 0x8,
-  };
-
-  struct bpf_reg_state {
-      ...
-      struct bpf_reg_state *parent;
-      ...
-      enum bpf_reg_liveness live;
-      ...
-  };
-
-  struct bpf_stack_state {
-      struct bpf_reg_state spilled_ptr;
-      ...
-  };
-
-  struct bpf_func_state {
-      struct bpf_reg_state regs[MAX_BPF_REG];
-      ...
-      struct bpf_stack_state *stack;
-  }
-
-  struct bpf_verifier_state {
-      struct bpf_func_state *frame[MAX_CALL_FRAMES];
-      struct bpf_verifier_state *parent;
-      ...
-  }
-
-* ``REG_LIVE_NONE`` is an initial value assigned to ``->live`` fields upon new
-  verifier state creation;
-
-* ``REG_LIVE_WRITTEN`` means that the value of the register (or stack slot) is
-  defined by some instruction verified between this verifier state's parent and
-  verifier state itself;
-
-* ``REG_LIVE_READ{32,64}`` means that the value of the register (or stack slot)
-  is read by some child state of this verifier state;
-
-* ``REG_LIVE_DONE`` is a marker used by ``clean_verifier_state()`` to avoid
-  processing same verifier state multiple times and for some sanity checks;
-
-* ``->live`` field values are formed by combining ``enum bpf_reg_liveness``
-  values using bitwise or.
-
-Register parentage chains
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In order to propagate information between parent and child states, a *register
-parentage chain* is established. Each register or stack slot is linked to a
-corresponding register or stack slot in its parent state via a ``->parent``
-pointer. This link is established upon state creation in ``is_state_visited()``
-and might be modified by ``set_callee_state()`` called from
-``__check_func_call()``.
-
-The rules for correspondence between registers / stack slots are as follows:
-
-* For the current stack frame, registers and stack slots of the new state are
-  linked to the registers and stack slots of the parent state with the same
-  indices.
-
-* For the outer stack frames, only callee saved registers (r6-r9) and stack
-  slots are linked to the registers and stack slots of the parent state with the
-  same indices.
-
-* When function call is processed a new ``struct bpf_func_state`` instance is
-  allocated, it encapsulates a new set of registers and stack slots. For this
-  new frame, parent links for r6-r9 and stack slots are set to nil, parent links
-  for r1-r5 are set to match caller r1-r5 parent links.
-
-This could be illustrated by the following diagram (arrows stand for
-``->parent`` pointers)::
-
-      ...                    ; Frame #0, some instructions
-  --- checkpoint #0 ---
-  1 : r6 = 42                ; Frame #0
-  --- checkpoint #1 ---
-  2 : call foo()             ; Frame #0
-      ...                    ; Frame #1, instructions from foo()
-  --- checkpoint #2 ---
-      ...                    ; Frame #1, instructions from foo()
-  --- checkpoint #3 ---
-  exit                       ; Frame #1, return from foo()
-  3 : r1 = r6                ; Frame #0 <- current state
-
-             +-------------------------------+-------------------------------+
-             |            Frame #0           |            Frame #1           |
-  Checkpoint +-------------------------------+-------------------------------+
-  #0         | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+
-                ^     ^       ^       ^
-                |     |       |       |
-  Checkpoint +-------------------------------+
-  #1         | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+
-                      ^       ^       ^
-                      |_______|_______|_______________
-                              |       |               |
-             nil   nil        |       |       |      nil   nil
-              |     |         |       |       |       |     |
-  Checkpoint +-------------------------------+-------------------------------+
-  #2         | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+-------------------------------+
-                              ^       ^       ^       ^       ^
-             nil   nil        |       |       |       |       |
-              |     |         |       |       |       |       |
-  Checkpoint +-------------------------------+-------------------------------+
-  #3         | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+-------------------------------+
-                              ^       ^
-             nil   nil        |       |
-              |     |         |       |
-  Current    +-------------------------------+
-  state      | r0 | r1-r5 | r6-r9 | fp-8 ... |
-             +-------------------------------+
-                               \
-                                r6 read mark is propagated via these links
-                                all the way up to checkpoint #1.
-                                The checkpoint #1 contains a write mark for r6
-                                because of instruction (1), thus read propagation
-                                does not reach checkpoint #0 (see section below).
-
-Liveness marks tracking
-~~~~~~~~~~~~~~~~~~~~~~~
-
-For each processed instruction, the verifier tracks read and written registers
-and stack slots. The main idea of the algorithm is that read marks propagate
-back along the state parentage chain until they hit a write mark, which 'screens
-off' earlier states from the read. The information about reads is propagated by
-function ``mark_reg_read()`` which could be summarized as follows::
-
-  mark_reg_read(struct bpf_reg_state *state, ...):
-      parent = state->parent
-      while parent:
-          if state->live & REG_LIVE_WRITTEN:
-              break
-          if parent->live & REG_LIVE_READ64:
-              break
-          parent->live |= REG_LIVE_READ64
-          state = parent
-          parent = state->parent
-
-Notes:
-
-* The read marks are applied to the **parent** state while write marks are
-  applied to the **current** state. The write mark on a register or stack slot
-  means that it is updated by some instruction in the straight-line code leading
-  from the parent state to the current state.
-
-* Details about REG_LIVE_READ32 are omitted.
-
-* Function ``propagate_liveness()`` (see section :ref:`read_marks_for_cache_hits`)
-  might override the first parent link. Please refer to the comments in the
-  ``propagate_liveness()`` and ``mark_reg_read()`` source code for further
-  details.
-
-Because stack writes could have different sizes ``REG_LIVE_WRITTEN`` marks are
-applied conservatively: stack slots are marked as written only if write size
-corresponds to the size of the register, e.g. see function ``save_register_state()``.
-
-Consider the following example::
-
-  0: (*u64)(r10 - 8) = 0   ; define 8 bytes of fp-8
-  --- checkpoint #0 ---
-  1: (*u32)(r10 - 8) = 1   ; redefine lower 4 bytes
-  2: r1 = (*u32)(r10 - 8)  ; read lower 4 bytes defined at (1)
-  3: r2 = (*u32)(r10 - 4)  ; read upper 4 bytes defined at (0)
-
-As stated above, the write at (1) does not count as ``REG_LIVE_WRITTEN``. Should
-it be otherwise, the algorithm above wouldn't be able to propagate the read mark
-from (3) to checkpoint #0.
-
-Once the ``BPF_EXIT`` instruction is reached ``update_branch_counts()`` is
-called to update the ``->branches`` counter for each verifier state in a chain
-of parent verifier states. When the ``->branches`` counter reaches zero the
-verifier state becomes a valid entry in a set of cached verifier states.
-
-Each entry of the verifier states cache is post-processed by a function
-``clean_live_states()``. This function marks all registers and stack slots
-without ``REG_LIVE_READ{32,64}`` marks as ``NOT_INIT`` or ``STACK_INVALID``.
-Registers/stack slots marked in this way are ignored in function ``stacksafe()``
-called from ``states_equal()`` when a state cache entry is considered for
-equivalence with a current state.
-
-Now it is possible to explain how the example from the beginning of the section
-works::
-
-  0: call bpf_get_prandom_u32()
-  1: r1 = 0
-  2: if r0 == 0 goto +1
-  3: r0 = 1
-  --- checkpoint[0] ---
-  4: r0 = r1
-  5: exit
-
-* At instruction #2 branching point is reached and state ``{ r0 == 0, r1 == 0, pc == 4 }``
-  is pushed to states processing queue (pc stands for program counter).
-
-* At instruction #4:
-
-  * ``checkpoint[0]`` states cache entry is created: ``{ r0 == 1, r1 == 0, pc == 4 }``;
-  * ``checkpoint[0].r0`` is marked as written;
-  * ``checkpoint[0].r1`` is marked as read;
-
-* At instruction #5 exit is reached and ``checkpoint[0]`` can now be processed
-  by ``clean_live_states()``. After this processing ``checkpoint[0].r1`` has a
-  read mark and all other registers and stack slots are marked as ``NOT_INIT``
-  or ``STACK_INVALID``.
-
-* The state ``{ r0 == 0, r1 == 0, pc == 4 }`` is popped from the states queue
-  and is compared against a cached state ``{ r1 == 0, pc == 4 }``, the states
-  are considered equivalent.
-
-.. _read_marks_for_cache_hits:
-
-Read marks propagation for cache hits
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Another point is the handling of read marks when a previously verified state is
-found in the states cache. Upon cache hit verifier must behave in the same way
-as if the current state was verified to the program exit. This means that all
-read marks, present on registers and stack slots of the cached state, must be
-propagated over the parentage chain of the current state. Example below shows
-why this is important. Function ``propagate_liveness()`` handles this case.
-
-Consider the following state parentage chain (S is a starting state, A-E are
-derived states, -> arrows show which state is derived from which)::
-
-                   r1 read
-            <-------------                A[r1] == 0
-                                          C[r1] == 0
-  S ---> A ---> B ---> exit               E[r1] == 1
-  |
-  ` ---> C ---> D
-  |
-  ` ---> E      ^
-                |___  suppose all these
-           ^          states are at insn #Y
-           |
-    suppose all these
-    states are at insn #X
-
-* Chain of states ``S -> A -> B -> exit`` is verified first.
-
-* While ``B -> exit`` is verified, register ``r1`` is read and this read mark is
-  propagated up to state ``A``.
-
-* When chain of states ``C -> D`` is verified the state ``D`` turns out to be
-  equivalent to state ``B``.
-
-* The read mark for ``r1`` has to be propagated to state ``C``, otherwise state
-  ``C`` might get mistakenly marked as equivalent to state ``E`` even though
-  values for register ``r1`` differ between ``C`` and ``E``.
-
 Understanding eBPF verifier messages
 ====================================
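The pruning example at the end of the removed section can be checked with a small model (a hypothetical Python sketch, not verifier code): a cached checkpoint matches a new state when only the live, read-marked slots are compared, while a naive full comparison would reject it.

```python
# Toy model of liveness-based state pruning from the removed docs:
# a cached state matches a new state if all *live* (read-marked)
# registers agree; dead registers are ignored.

def states_equal(cached, cur, live_regs):
    """Compare only registers that carry a read mark."""
    return all(cached[r] == cur[r] for r in live_regs)

# checkpoint[0] was cached as { r0 == 1, r1 == 0 };
# only r1 was read on the path to exit, r0 was overwritten first.
cached = {"r0": 1, "r1": 0}
live = {"r1"}

# the other branch arrives with r0 == 0 but an identical r1
cur = {"r0": 0, "r1": 0}

assert states_equal(cached, cur, live)              # pruned: safe to skip
assert not states_equal(cached, cur, {"r0", "r1"})  # naive compare fails
```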
include/linux/bpf_verifier.h (-25)

···
 /* Patch buffer size */
 #define INSN_BUF_SIZE 32
 
-/* Liveness marks, used for registers and spilled-regs (in stack slots).
- * Read marks propagate upwards until they find a write mark; they record that
- * "one of this state's descendants read this reg" (and therefore the reg is
- * relevant for states_equal() checks).
- * Write marks collect downwards and do not propagate; they record that "the
- * straight-line code that reached this state (from its parent) wrote this reg"
- * (and therefore that reads propagated from this state or its descendants
- * should not propagate to its parent).
- * A state with a write mark can receive read marks; it just won't propagate
- * them to its parent, since the write mark is a property, not of the state,
- * but of the link between it and its parent. See mark_reg_read() and
- * mark_stack_slot_read() in kernel/bpf/verifier.c.
- */
-enum bpf_reg_liveness {
-	REG_LIVE_NONE = 0, /* reg hasn't been read or written this branch */
-	REG_LIVE_READ32 = 0x1, /* reg was read, so we're sensitive to initial value */
-	REG_LIVE_READ64 = 0x2, /* likewise, but full 64-bit content matters */
-	REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64,
-	REG_LIVE_WRITTEN = 0x4, /* reg was written first, screening off later reads */
-};
-
 #define ITER_PREFIX "bpf_iter_"
 
 enum bpf_iter_state {

···
 	 * allowed and has the same effect as bpf_sk_release(sk).
 	 */
 	u32 ref_obj_id;
-	/* parentage chain for liveness checking */
-	struct bpf_reg_state *parent;
 	/* Inside the callee two registers can be both PTR_TO_STACK like
 	 * R1=fp-8 and R2=fp-8, but one of them points to this function stack
 	 * while another to the caller's stack. To differentiate them 'frameno'

···
 	 * patching which only happens after main verification finished.
 	 */
 	s32 subreg_def;
-	enum bpf_reg_liveness live;
 	/* if (!precise && SCALAR_VALUE) min/max/tnum don't affect safety */
 	bool precise;
 };

···
 	/* array of pointers to bpf_scc_info indexed by SCC id */
 	struct bpf_scc_info **scc_info;
 	u32 scc_cnt;
-	bool internal_error;
 };
 
 static inline struct bpf_func_info_aux *subprog_aux(struct bpf_verifier_env *env, int subprog)
kernel/bpf/log.c (+4 -22)

···
 	[STACK_IRQ_FLAG] = 'f'
 };
 
-static void print_liveness(struct bpf_verifier_env *env,
-			   enum bpf_reg_liveness live)
-{
-	if (live & (REG_LIVE_READ | REG_LIVE_WRITTEN))
-		verbose(env, "_");
-	if (live & REG_LIVE_READ)
-		verbose(env, "r");
-	if (live & REG_LIVE_WRITTEN)
-		verbose(env, "w");
-}
-
 #define UNUM_MAX_DECIMAL U16_MAX
 #define SNUM_MAX_DECIMAL S16_MAX
 #define SNUM_MIN_DECIMAL S16_MIN

···
 		if (!print_all && !reg_scratched(env, i))
 			continue;
 		verbose(env, " R%d", i);
-		print_liveness(env, reg->live);
 		verbose(env, "=");
 		print_reg_state(env, state, reg);
 	}

···
 				break;
 			types_buf[j] = '\0';
 
-			verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
-			print_liveness(env, reg->live);
-			verbose(env, "=%s", types_buf);
+			verbose(env, " fp%d=%s", (-i - 1) * BPF_REG_SIZE, types_buf);
 			print_reg_state(env, state, reg);
 			break;
 		case STACK_DYNPTR:

···
 			reg = &state->stack[i].spilled_ptr;
 
 			verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
-			print_liveness(env, reg->live);
 			verbose(env, "=dynptr_%s(", dynptr_type_str(reg->dynptr.type));
 			if (reg->id)
 				verbose_a("id=%d", reg->id);

···
 			if (!reg->ref_obj_id)
 				continue;
 
-			verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
-			print_liveness(env, reg->live);
-			verbose(env, "=iter_%s(ref_id=%d,state=%s,depth=%u)",
+			verbose(env, " fp%d=iter_%s(ref_id=%d,state=%s,depth=%u)",
+				(-i - 1) * BPF_REG_SIZE,
 				iter_type_str(reg->iter.btf, reg->iter.btf_id),
 				reg->ref_obj_id, iter_state_str(reg->iter.state),
 				reg->iter.depth);

···
 		case STACK_MISC:
 		case STACK_ZERO:
 		default:
-			verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
-			print_liveness(env, reg->live);
-			verbose(env, "=%s", types_buf);
+			verbose(env, " fp%d=%s", (-i - 1) * BPF_REG_SIZE, types_buf);
 			break;
 		}
 	}
kernel/bpf/verifier.c (+21 -294)

···
 		state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
 	}
 
-	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
-	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
 	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
 
 	return 0;

···
 	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
 	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
 
-	/* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
-	 *
-	 * While we don't allow reading STACK_INVALID, it is still possible to
-	 * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
-	 * helpers or insns can do partial read of that part without failing,
-	 * but check_stack_range_initialized, check_stack_read_var_off, and
-	 * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
-	 * the slot conservatively. Hence we need to prevent those liveness
-	 * marking walks.
-	 *
-	 * This was not a problem before because STACK_INVALID is only set by
-	 * default (where the default reg state has its reg->parent as NULL), or
-	 * in clean_live_states after REG_LIVE_DONE (at which point
-	 * mark_reg_read won't walk reg->parent chain), but not randomly during
-	 * verifier state exploration (like we did above). Hence, for our case
-	 * parentage chain will still be live (i.e. reg->parent may be
-	 * non-NULL), while earlier reg->parent was NULL, so we need
-	 * REG_LIVE_WRITTEN to screen off read marker propagation when it is
-	 * done later on reads or by mark_dynptr_read as well to unnecessary
-	 * mark registers in verifier state.
-	 */
-	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
-	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
 	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
 }

···
 	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
 	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
 
-	/* Same reason as unmark_stack_slots_dynptr above */
-	state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
-	state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
 	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
 
 	return 0;

···
 		else
 			st->type |= PTR_UNTRUSTED;
 	}
-	st->live |= REG_LIVE_WRITTEN;
 	st->ref_obj_id = i == 0 ? id : 0;
 	st->iter.btf = btf;
 	st->iter.btf_id = btf_id;

···
 		WARN_ON_ONCE(release_reference(env, st->ref_obj_id));
 
 	__mark_reg_not_init(env, st);
-
-	/* see unmark_stack_slots_dynptr() for why we need to set REG_LIVE_WRITTEN */
-	st->live |= REG_LIVE_WRITTEN;
 
 	for (j = 0; j < BPF_REG_SIZE; j++)
 		slot->slot_type[j] = STACK_INVALID;

···
 	bpf_mark_stack_write(env, reg->frameno, BIT(spi));
 	__mark_reg_known_zero(st);
 	st->type = PTR_TO_STACK; /* we don't have dedicated reg type */
-	st->live |= REG_LIVE_WRITTEN;
 	st->ref_obj_id = id;
 	st->irq.kfunc_class = kfunc_class;

···
 	__mark_reg_not_init(env, st);
 
-	/* see unmark_stack_slots_dynptr() for why we need to set REG_LIVE_WRITTEN */
-	st->live |= REG_LIVE_WRITTEN;
 	bpf_mark_stack_write(env, reg->frameno, BIT(spi));
 
 	for (i = 0; i < BPF_REG_SIZE; i++)

···
 	for (i = 0; i < MAX_BPF_REG; i++) {
 		mark_reg_not_init(env, regs, i);
-		regs[i].live = REG_LIVE_NONE;
-		regs[i].parent = NULL;
 		regs[i].subreg_def = DEF_NOT_SUBREG;
 	}

···
 	return 0;
 }
 
-/* Parentage chain of this register (or stack slot) should take care of all
- * issues like callee-saved registers, stack slot allocation time, etc.
- */
-static int mark_reg_read(struct bpf_verifier_env *env,
-			 const struct bpf_reg_state *state,
-			 struct bpf_reg_state *parent, u8 flag)
-{
-	bool writes = parent == state->parent; /* Observe write marks */
-	int cnt = 0;
-
-	while (parent) {
-		/* if read wasn't screened by an earlier write ... */
-		if (writes && state->live & REG_LIVE_WRITTEN)
-			break;
-		/* The first condition is more likely to be true than the
-		 * second, checked it first.
-		 */
-		if ((parent->live & REG_LIVE_READ) == flag ||
-		    parent->live & REG_LIVE_READ64)
-			/* The parentage chain never changes and
-			 * this parent was already marked as LIVE_READ.
-			 * There is no need to keep walking the chain again and
-			 * keep re-marking all parents as LIVE_READ.
-			 * This case happens when the same register is read
-			 * multiple times without writes into it in-between.
-			 * Also, if parent has the stronger REG_LIVE_READ64 set,
-			 * then no need to set the weak REG_LIVE_READ32.
-			 */
-			break;
-		/* ... then we depend on parent's value */
-		parent->live |= flag;
-		/* REG_LIVE_READ64 overrides REG_LIVE_READ32. */
-		if (flag == REG_LIVE_READ64)
-			parent->live &= ~REG_LIVE_READ32;
-		state = parent;
-		parent = state->parent;
-		writes = true;
-		cnt++;
-	}
-
-	if (env->longest_mark_read_walk < cnt)
-		env->longest_mark_read_walk = cnt;
-	return 0;
-}
-
 static int mark_stack_slot_obj_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 				    int spi, int nr_slots)
 {
-	struct bpf_func_state *state = func(env, reg);
 	int err, i;
 
 	for (i = 0; i < nr_slots; i++) {
-		struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr;
-
-		err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64);
-		if (err)
-			return err;
-
 		err = bpf_mark_stack_read(env, reg->frameno, env->insn_idx, BIT(spi - i));
 		if (err)
 			return err;

···
 		if (rw64)
 			mark_insn_zext(env, reg);
 
-		return mark_reg_read(env, reg, reg->parent,
-				     rw64 ? REG_LIVE_READ64 : REG_LIVE_READ32);
+		return 0;
 	} else {
 		/* check whether register used as dest operand can be written to */
 		if (regno == BPF_REG_FP) {
 			verbose(env, "frame pointer is read only\n");
 			return -EACCES;
 		}
-		reg->live |= REG_LIVE_WRITTEN;
 		reg->subreg_def = rw64 ? DEF_NOT_SUBREG : env->insn_idx + 1;
 		if (t == DST_OP)
 			mark_reg_unknown(env, regs, regno);

···
 /* Copy src state preserving dst->parent and dst->live fields */
 static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src)
 {
-	struct bpf_reg_state *parent = dst->parent;
-	enum bpf_reg_liveness live = dst->live;
-
 	*dst = *src;
-	dst->parent = parent;
-	dst->live = live;
 }
 
 static void save_register_state(struct bpf_verifier_env *env,

···
 	int i;
 
 	copy_register_state(&state->stack[spi].spilled_ptr, reg);
-	if (size == BPF_REG_SIZE)
-		state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
 
 	for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
 		state->stack[spi].slot_type[i - 1] = STACK_SPILL;

···
 	if (is_stack_slot_special(&state->stack[spi]))
 		for (i = 0; i < BPF_REG_SIZE; i++)
 			scrub_spilled_slot(&state->stack[spi].slot_type[i]);
-
-	/* only mark the slot as written if all 8 bytes were written
-	 * otherwise read propagation may incorrectly stop too soon
-	 * when stack slots are partially written.
-	 * This heuristic means that read propagation will be
-	 * conservative, since it will add reg_live_read marks
-	 * to stack slots all the way to first state when programs
-	 * writes+reads less than 8 bytes
-	 */
-	if (size == BPF_REG_SIZE)
-		state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
 
 	/* when we zero initialize stack slots mark them as such */
 	if ((reg && register_is_null(reg)) ||

···
 		/* have read misc data from the stack */
 		mark_reg_unknown(env, state->regs, dst_regno);
 	}
-	state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
 }
 
 /* Read the stack at 'off' and put the results into the register indicated by

···
 		return -EACCES;
 	}
 
-	mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
 	if (dst_regno < 0)
 		return 0;

···
 				insn_flags = 0; /* not restoring original register state */
 			}
 		}
-		state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
 	} else if (dst_regno >= 0) {
 		/* restore register state from stack */
 		copy_register_state(&state->regs[dst_regno], reg);

···
 		 * has its liveness marks cleared by is_state_visited()
 		 * which resets stack/reg liveness for state transitions
 		 */
-		state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
 	} else if (__is_pointer_value(env->allow_ptr_leaks, reg)) {
 		/* If dst_regno==-1, the caller is asking us whether
 		 * it is acceptable to use this value as a SCALAR_VALUE

···
 				off);
 			return -EACCES;
 		}
-		mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
 	} else {
 		for (i = 0; i < size; i++) {
 			type = stype[(slot - i) % BPF_REG_SIZE];

···
 					off, i, size);
 				return -EACCES;
 			}
-			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
 			if (dst_regno >= 0)
 				mark_reg_stack_read(env, reg_state, off, off + size, dst_regno);
 			insn_flags = 0; /* we are not restoring spilled register */

···
 		/* reading any byte out of 8-byte 'spill_slot' will cause
 		 * the whole slot to be marked as 'read'
 		 */
-		mark_reg_read(env, &state->stack[spi].spilled_ptr,
-			      state->stack[spi].spilled_ptr.parent,
-			      REG_LIVE_READ64);
 		err = bpf_mark_stack_read(env, reg->frameno, env->insn_idx, BIT(spi));
 		if (err)
 			return err;
-		/* We do not set REG_LIVE_WRITTEN for stack slot, as we can not
+		/* We do not call bpf_mark_stack_write(), as we can not
 		 * be sure that whether stack slot is written to or not. Hence,
 		 * we must still conservatively propagate reads upwards even if
 		 * helper may write to the entire memory range.

···
 	}
 
 	/* we are going to rely on register's precise value */
-	err = mark_reg_read(env, r0, r0->parent, REG_LIVE_READ64);
-	err = err ?: mark_chain_precision(env, BPF_REG_0);
+	err = mark_chain_precision(env, BPF_REG_0);
 	if (err)
 		return err;

···
 	if (regno == BPF_REG_0) {
 		/* Function return value */
-		reg->live |= REG_LIVE_WRITTEN;
 		reg->subreg_def = reg_size == sizeof(u64) ?
 			DEF_NOT_SUBREG : env->insn_idx + 1;
-	} else {
+	} else if (reg_size == sizeof(u64)) {
 		/* Function argument */
-		if (reg_size == sizeof(u64)) {
-			mark_insn_zext(env, reg);
-			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
-		} else {
-			mark_reg_read(env, reg, reg->parent, REG_LIVE_READ32);
-		}
+		mark_insn_zext(env, reg);
 	}
 }

···
 				 */
 				assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
-				dst_reg->live |= REG_LIVE_WRITTEN;
 				dst_reg->subreg_def = DEF_NOT_SUBREG;
 			} else {
 				/* case: R1 = (s8, s16 s32)R2 */

···
 				if (!no_sext)
 					dst_reg->id = 0;
 				coerce_reg_to_size_sx(dst_reg, insn->off >> 3);
-				dst_reg->live |= REG_LIVE_WRITTEN;
 				dst_reg->subreg_def = DEF_NOT_SUBREG;
 			} else {
 				mark_reg_unknown(env, regs, insn->dst_reg);

···
 					 */
 					if (!is_src_reg_u32)
 						dst_reg->id = 0;
-					dst_reg->live |= REG_LIVE_WRITTEN;
 					dst_reg->subreg_def = env->insn_idx + 1;
 				} else {
 					/* case: W1 = (s8, s16)W2 */

···
 					copy_register_state(dst_reg, src_reg);
 					if (!no_sext)
 						dst_reg->id = 0;
-					dst_reg->live |= REG_LIVE_WRITTEN;
 					dst_reg->subreg_def = env->insn_idx + 1;
 					coerce_subreg_to_size_sx(dst_reg, insn->off >> 3);
 				}

···
 	for (i = 0; i < st->allocated_stack / BPF_REG_SIZE; i++) {
 		if (!bpf_stack_slot_alive(env, st->frameno, i)) {
-			if (st->stack[i].spilled_ptr.live & REG_LIVE_READ) {
-				verifier_bug(env, "incorrect live marks #1 for insn %d frameno %d spi %d\n",
-					     env->insn_idx, st->frameno, i);
-				env->internal_error = true;
-			}
 			__mark_reg_not_init(env, &st->stack[i].spilled_ptr);
 			for (j = 0; j < BPF_REG_SIZE; j++)
 				st->stack[i].slot_type[j] = STACK_INVALID;

···
  * but a lot of states will get revised from liveness point of view when
  * the verifier explores other branches.
  * Example:
- * 1: r0 = 1
+ * 1: *(u64)(r10 - 8) = 1
  * 2: if r1 == 100 goto pc+1
- * 3: r0 = 2
- * 4: exit
- * when the verifier reaches exit insn the register r0 in the state list of
- * insn 2 will be seen as !REG_LIVE_READ. Then the verifier pops the other_branch
- * of insn 2 and goes exploring further. At the insn 4 it will walk the
- * parentage chain from insn 4 into insn 2 and will mark r0 as REG_LIVE_READ.
+ * 3: *(u64)(r10 - 8) = 2
+ * 4: r0 = *(u64)(r10 - 8)
+ * 5: exit
+ * when the verifier reaches exit insn the stack slot -8 in the state list of
+ * insn 2 is not yet marked alive. Then the verifier pops the other_branch
+ * of insn 2 and goes exploring further. After the insn 4 read, liveness
+ * analysis would propagate read mark for -8 at insn 2.
  *
  * Since the verifier pushes the branch states as it sees them while exploring
  * the program the condition of walking the branch instruction for the second
  * time means that all states below this branch were already explored and
  * their final liveness marks are already propagated.
  * Hence when the verifier completes the search of state list in is_state_visited()
- * we can call this clean_live_states() function to mark all liveness states
- * as st->cleaned to indicate that 'parent' pointers of 'struct bpf_reg_state'
- * will not be used.
- * This function also clears the registers and stack for states that !READ
- * to simplify state merging.
+ * we can call this clean_live_states() function to clear the dead registers and
+ * stack slots to simplify state merging.
  *
  * Important note here that walking the same branch instruction in the callee
  * doesn't meant that the states are DONE. The verifier has to compare

···
 static __init int unbound_reg_init(void)
 {
 	__mark_reg_unknown_imprecise(&unbound_reg);
-	unbound_reg.live |= REG_LIVE_READ;
 	return 0;
 }
 late_initcall(unbound_reg_init);

···
 	return true;
 }
 
-/* Return 0 if no propagation happened. Return negative error code if error
- * happened. Otherwise, return the propagated bit.
- */
-static int propagate_liveness_reg(struct bpf_verifier_env *env,
-				  struct bpf_reg_state *reg,
-				  struct bpf_reg_state *parent_reg)
-{
-	u8 parent_flag = parent_reg->live & REG_LIVE_READ;
-	u8 flag = reg->live & REG_LIVE_READ;
-	int err;
-
-	/* When comes here, read flags of PARENT_REG or REG could be any of
-	 * REG_LIVE_READ64, REG_LIVE_READ32, REG_LIVE_NONE. There is no need
-	 * of propagation if PARENT_REG has strongest REG_LIVE_READ64.
-	 */
-	if (parent_flag == REG_LIVE_READ64 ||
-	    /* Or if there is no read flag from REG. */
-	    !flag ||
-	    /* Or if the read flag from REG is the same as PARENT_REG. */
-	    parent_flag == flag)
-		return 0;
-
-	err = mark_reg_read(env, reg, parent_reg, flag);
-	if (err)
-		return err;
-
-	return flag;
-}
-
-/* A write screens off any subsequent reads; but write marks come from the
- * straight-line code between a state and its parent. When we arrive at an
- * equivalent state (jump target or such) we didn't arrive by the straight-line
- * code, so read marks in the state must propagate to the parent regardless
- * of the state's write marks. That's what 'parent == state->parent' comparison
- * in mark_reg_read() is for.
- */
-static int propagate_liveness(struct bpf_verifier_env *env,
-			      const struct bpf_verifier_state *vstate,
-			      struct bpf_verifier_state *vparent,
-			      bool *changed)
-{
-	struct bpf_reg_state *state_reg, *parent_reg;
-	struct bpf_func_state *state, *parent;
-	int i, frame, err = 0;
-	bool tmp = false;
-
-	changed = changed ?: &tmp;
-	if (vparent->curframe != vstate->curframe) {
-		WARN(1, "propagate_live: parent frame %d current frame %d\n",
-		     vparent->curframe, vstate->curframe);
-		return -EFAULT;
-	}
-	/* Propagate read liveness of registers... */
-	BUILD_BUG_ON(BPF_REG_FP + 1 != MAX_BPF_REG);
-	for (frame = 0; frame <= vstate->curframe; frame++) {
-		parent = vparent->frame[frame];
-		state = vstate->frame[frame];
-		parent_reg = parent->regs;
-		state_reg = state->regs;
-		/* We don't need to worry about FP liveness, it's read-only */
-		for (i = frame < vstate->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++) {
-			err = propagate_liveness_reg(env, &state_reg[i],
-						     &parent_reg[i]);
-			if (err < 0)
-				return err;
-			*changed |= err > 0;
-			if (err == REG_LIVE_READ64)
-				mark_insn_zext(env, &parent_reg[i]);
-		}
-
-		/* Propagate stack slots. */
-		for (i = 0; i < state->allocated_stack / BPF_REG_SIZE &&
-			    i < parent->allocated_stack / BPF_REG_SIZE; i++) {
-			parent_reg = &parent->stack[i].spilled_ptr;
-			state_reg = &state->stack[i].spilled_ptr;
-			err = propagate_liveness_reg(env, state_reg,
-						     parent_reg);
-			*changed |= err > 0;
-			if (err < 0)
-				return err;
-		}
-	}
-	return 0;
-}
-
 /* find precise scalars in the previous equivalent state and
  * propagate them into the current state
  */

···
 	first = true;
 	for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
 		if (state_reg->type != SCALAR_VALUE ||
-		    !state_reg->precise ||
-		    !(state_reg->live & REG_LIVE_READ))
+		    !state_reg->precise)
 			continue;
 		if (env->log.level & BPF_LOG_LEVEL2) {
 			if (first)

···
 			continue;
 		state_reg = &state->stack[i].spilled_ptr;
 		if (state_reg->type != SCALAR_VALUE ||
-		    !state_reg->precise ||
-		    !(state_reg->live & REG_LIVE_READ))
+		    !state_reg->precise)
 			continue;
 		if (env->log.level & BPF_LOG_LEVEL2) {
 			if (first)

···
 	changed = false;
 	for (backedge = visit->backedges; backedge; backedge = backedge->next) {
 		st = &backedge->state;
-		err = propagate_liveness(env, st->equal_state, st, &changed);
-		if (err)
-			return err;
 		err = propagate_precision(env, st->equal_state, st, &changed);
 		if (err)
 			return err;

···
 		fcur = cur->frame[fr];
 		for (i = 0; i < MAX_BPF_REG; i++)
 			if (memcmp(&fold->regs[i], &fcur->regs[i],
-				   offsetof(struct bpf_reg_state, parent)))
+				   offsetof(struct bpf_reg_state, frameno)))
 				return false;
 	return true;
 }

···
 	struct bpf_verifier_state_list *sl;
 	struct
bpf_verifier_state *cur = env->cur_state, *new; 19169 19396 bool force_new_state, add_new_state, loop; 19170 - int i, j, n, err, states_cnt = 0; 19397 + int n, err, states_cnt = 0; 19171 19398 struct list_head *pos, *tmp, *head; 19172 19399 19173 19400 force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx) || ··· 19324 19551 loop = incomplete_read_marks(env, &sl->state); 19325 19552 if (states_equal(env, &sl->state, cur, loop ? RANGE_WITHIN : NOT_EXACT)) { 19326 19553 hit: 19327 - if (env->internal_error) 19328 - return -EFAULT; 19329 19554 sl->hit_cnt++; 19330 - /* reached equivalent register/stack state, 19331 - * prune the search. 19332 - * Registers read by the continuation are read by us. 19333 - * If we have any write marks in env->cur_state, they 19334 - * will prevent corresponding reads in the continuation 19335 - * from reaching our parent (an explored_state). Our 19336 - * own state will get the read marks recorded, but 19337 - * they'll be immediately forgotten as we're pruning 19338 - * this state and will pop a new one. 19339 - */ 19340 - err = propagate_liveness(env, &sl->state, cur, NULL); 19341 19555 19342 19556 /* if previous state reached the exit with precision and 19343 19557 * current state is equivalent to it (except precision marks) 19344 19558 * the precision needs to be propagated back in 19345 19559 * the current state. 19346 19560 */ 19561 + err = 0; 19347 19562 if (is_jmp_point(env, env->insn_idx)) 19348 - err = err ? : push_jmp_history(env, cur, 0, 0); 19563 + err = push_jmp_history(env, cur, 0, 0); 19349 19564 err = err ? : propagate_precision(env, &sl->state, cur, NULL); 19350 19565 if (err) 19351 19566 return err; ··· 19428 19667 return 1; 19429 19668 } 19430 19669 miss: 19431 - if (env->internal_error) 19432 - return -EFAULT; 19433 19670 /* when new state is not going to be added do not increase miss count. 19434 19671 * Otherwise several loop iterations will remove the state 19435 19672 * recorded earlier. 
The goal of these heuristics is to have ··· 19513 19754 cur->dfs_depth = new->dfs_depth + 1; 19514 19755 clear_jmp_history(cur); 19515 19756 list_add(&new_sl->node, head); 19516 - 19517 - /* connect new state to parentage chain. Current frame needs all 19518 - * registers connected. Only r6 - r9 of the callers are alive (pushed 19519 - * to the stack implicitly by JITs) so in callers' frames connect just 19520 - * r6 - r9 as an optimization. Callers will have r1 - r5 connected to 19521 - * the state of the call instruction (with WRITTEN set), and r0 comes 19522 - * from callee with its full parentage chain, anyway. 19523 - */ 19524 - /* clear write marks in current state: the writes we did are not writes 19525 - * our child did, so they don't screen off its reads from us. 19526 - * (There are no read marks in current state, because reads always mark 19527 - * their parent and current state never has children yet. Only 19528 - * explored_states can get read marks.) 19529 - */ 19530 - for (j = 0; j <= cur->curframe; j++) { 19531 - for (i = j < cur->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++) 19532 - cur->frame[j]->regs[i].parent = &new->frame[j]->regs[i]; 19533 - for (i = 0; i < BPF_REG_FP; i++) 19534 - cur->frame[j]->regs[i].live = REG_LIVE_NONE; 19535 - } 19536 - 19537 - /* all stack frames are accessible from callee, clear them all */ 19538 - for (j = 0; j <= cur->curframe; j++) { 19539 - struct bpf_func_state *frame = cur->frame[j]; 19540 - struct bpf_func_state *newframe = new->frame[j]; 19541 - 19542 - for (i = 0; i < frame->allocated_stack / BPF_REG_SIZE; i++) { 19543 - frame->stack[i].spilled_ptr.live = REG_LIVE_NONE; 19544 - frame->stack[i].spilled_ptr.parent = 19545 - &newframe->stack[i].spilled_ptr; 19546 - } 19547 - } 19548 19757 return 0; 19549 19758 } 19550 19759
+89 -89
tools/testing/selftests/bpf/prog_tests/align.c
··· 42 42 .matches = { 43 43 {0, "R1", "ctx()"}, 44 44 {0, "R10", "fp0"}, 45 - {0, "R3_w", "2"}, 46 - {1, "R3_w", "4"}, 47 - {2, "R3_w", "8"}, 48 - {3, "R3_w", "16"}, 49 - {4, "R3_w", "32"}, 45 + {0, "R3", "2"}, 46 + {1, "R3", "4"}, 47 + {2, "R3", "8"}, 48 + {3, "R3", "16"}, 49 + {4, "R3", "32"}, 50 50 }, 51 51 }, 52 52 { ··· 70 70 .matches = { 71 71 {0, "R1", "ctx()"}, 72 72 {0, "R10", "fp0"}, 73 - {0, "R3_w", "1"}, 74 - {1, "R3_w", "2"}, 75 - {2, "R3_w", "4"}, 76 - {3, "R3_w", "8"}, 77 - {4, "R3_w", "16"}, 78 - {5, "R3_w", "1"}, 79 - {6, "R4_w", "32"}, 80 - {7, "R4_w", "16"}, 81 - {8, "R4_w", "8"}, 82 - {9, "R4_w", "4"}, 83 - {10, "R4_w", "2"}, 73 + {0, "R3", "1"}, 74 + {1, "R3", "2"}, 75 + {2, "R3", "4"}, 76 + {3, "R3", "8"}, 77 + {4, "R3", "16"}, 78 + {5, "R3", "1"}, 79 + {6, "R4", "32"}, 80 + {7, "R4", "16"}, 81 + {8, "R4", "8"}, 82 + {9, "R4", "4"}, 83 + {10, "R4", "2"}, 84 84 }, 85 85 }, 86 86 { ··· 99 99 .matches = { 100 100 {0, "R1", "ctx()"}, 101 101 {0, "R10", "fp0"}, 102 - {0, "R3_w", "4"}, 103 - {1, "R3_w", "8"}, 104 - {2, "R3_w", "10"}, 105 - {3, "R4_w", "8"}, 106 - {4, "R4_w", "12"}, 107 - {5, "R4_w", "14"}, 102 + {0, "R3", "4"}, 103 + {1, "R3", "8"}, 104 + {2, "R3", "10"}, 105 + {3, "R4", "8"}, 106 + {4, "R4", "12"}, 107 + {5, "R4", "14"}, 108 108 }, 109 109 }, 110 110 { ··· 121 121 .matches = { 122 122 {0, "R1", "ctx()"}, 123 123 {0, "R10", "fp0"}, 124 - {0, "R3_w", "7"}, 125 - {1, "R3_w", "7"}, 126 - {2, "R3_w", "14"}, 127 - {3, "R3_w", "56"}, 124 + {0, "R3", "7"}, 125 + {1, "R3", "7"}, 126 + {2, "R3", "14"}, 127 + {3, "R3", "56"}, 128 128 }, 129 129 }, 130 130 ··· 162 162 }, 163 163 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 164 164 .matches = { 165 - {6, "R0_w", "pkt(off=8,r=8)"}, 166 - {6, "R3_w", "var_off=(0x0; 0xff)"}, 167 - {7, "R3_w", "var_off=(0x0; 0x1fe)"}, 168 - {8, "R3_w", "var_off=(0x0; 0x3fc)"}, 169 - {9, "R3_w", "var_off=(0x0; 0x7f8)"}, 170 - {10, "R3_w", "var_off=(0x0; 0xff0)"}, 171 - {12, "R3_w", "pkt_end()"}, 172 - {17, "R4_w", 
"var_off=(0x0; 0xff)"}, 173 - {18, "R4_w", "var_off=(0x0; 0x1fe0)"}, 174 - {19, "R4_w", "var_off=(0x0; 0xff0)"}, 175 - {20, "R4_w", "var_off=(0x0; 0x7f8)"}, 176 - {21, "R4_w", "var_off=(0x0; 0x3fc)"}, 177 - {22, "R4_w", "var_off=(0x0; 0x1fe)"}, 165 + {6, "R0", "pkt(off=8,r=8)"}, 166 + {6, "R3", "var_off=(0x0; 0xff)"}, 167 + {7, "R3", "var_off=(0x0; 0x1fe)"}, 168 + {8, "R3", "var_off=(0x0; 0x3fc)"}, 169 + {9, "R3", "var_off=(0x0; 0x7f8)"}, 170 + {10, "R3", "var_off=(0x0; 0xff0)"}, 171 + {12, "R3", "pkt_end()"}, 172 + {17, "R4", "var_off=(0x0; 0xff)"}, 173 + {18, "R4", "var_off=(0x0; 0x1fe0)"}, 174 + {19, "R4", "var_off=(0x0; 0xff0)"}, 175 + {20, "R4", "var_off=(0x0; 0x7f8)"}, 176 + {21, "R4", "var_off=(0x0; 0x3fc)"}, 177 + {22, "R4", "var_off=(0x0; 0x1fe)"}, 178 178 }, 179 179 }, 180 180 { ··· 195 195 }, 196 196 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 197 197 .matches = { 198 - {6, "R3_w", "var_off=(0x0; 0xff)"}, 199 - {7, "R4_w", "var_off=(0x0; 0xff)"}, 200 - {8, "R4_w", "var_off=(0x0; 0xff)"}, 201 - {9, "R4_w", "var_off=(0x0; 0xff)"}, 202 - {10, "R4_w", "var_off=(0x0; 0x1fe)"}, 203 - {11, "R4_w", "var_off=(0x0; 0xff)"}, 204 - {12, "R4_w", "var_off=(0x0; 0x3fc)"}, 205 - {13, "R4_w", "var_off=(0x0; 0xff)"}, 206 - {14, "R4_w", "var_off=(0x0; 0x7f8)"}, 207 - {15, "R4_w", "var_off=(0x0; 0xff0)"}, 198 + {6, "R3", "var_off=(0x0; 0xff)"}, 199 + {7, "R4", "var_off=(0x0; 0xff)"}, 200 + {8, "R4", "var_off=(0x0; 0xff)"}, 201 + {9, "R4", "var_off=(0x0; 0xff)"}, 202 + {10, "R4", "var_off=(0x0; 0x1fe)"}, 203 + {11, "R4", "var_off=(0x0; 0xff)"}, 204 + {12, "R4", "var_off=(0x0; 0x3fc)"}, 205 + {13, "R4", "var_off=(0x0; 0xff)"}, 206 + {14, "R4", "var_off=(0x0; 0x7f8)"}, 207 + {15, "R4", "var_off=(0x0; 0xff0)"}, 208 208 }, 209 209 }, 210 210 { ··· 235 235 }, 236 236 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 237 237 .matches = { 238 - {2, "R5_w", "pkt(r=0)"}, 239 - {4, "R5_w", "pkt(off=14,r=0)"}, 240 - {5, "R4_w", "pkt(off=14,r=0)"}, 238 + {2, "R5", "pkt(r=0)"}, 239 + {4, "R5", 
"pkt(off=14,r=0)"}, 240 + {5, "R4", "pkt(off=14,r=0)"}, 241 241 {9, "R2", "pkt(r=18)"}, 242 242 {10, "R5", "pkt(off=14,r=18)"}, 243 - {10, "R4_w", "var_off=(0x0; 0xff)"}, 244 - {13, "R4_w", "var_off=(0x0; 0xffff)"}, 245 - {14, "R4_w", "var_off=(0x0; 0xffff)"}, 243 + {10, "R4", "var_off=(0x0; 0xff)"}, 244 + {13, "R4", "var_off=(0x0; 0xffff)"}, 245 + {14, "R4", "var_off=(0x0; 0xffff)"}, 246 246 }, 247 247 }, 248 248 { ··· 299 299 /* Calculated offset in R6 has unknown value, but known 300 300 * alignment of 4. 301 301 */ 302 - {6, "R2_w", "pkt(r=8)"}, 303 - {7, "R6_w", "var_off=(0x0; 0x3fc)"}, 302 + {6, "R2", "pkt(r=8)"}, 303 + {7, "R6", "var_off=(0x0; 0x3fc)"}, 304 304 /* Offset is added to packet pointer R5, resulting in 305 305 * known fixed offset, and variable offset from R6. 306 306 */ 307 - {11, "R5_w", "pkt(id=1,off=14,"}, 307 + {11, "R5", "pkt(id=1,off=14,"}, 308 308 /* At the time the word size load is performed from R5, 309 309 * it's total offset is NET_IP_ALIGN + reg->off (0) + 310 310 * reg->aux_off (14) which is 16. Then the variable ··· 320 320 * instruction to validate R5 state. We also check 321 321 * that R4 is what it should be in such case. 322 322 */ 323 - {18, "R4_w", "var_off=(0x0; 0x3fc)"}, 324 - {18, "R5_w", "var_off=(0x0; 0x3fc)"}, 323 + {18, "R4", "var_off=(0x0; 0x3fc)"}, 324 + {18, "R5", "var_off=(0x0; 0x3fc)"}, 325 325 /* Constant offset is added to R5, resulting in 326 326 * reg->off of 14. 327 327 */ 328 - {19, "R5_w", "pkt(id=2,off=14,"}, 328 + {19, "R5", "pkt(id=2,off=14,"}, 329 329 /* At the time the word size load is performed from R5, 330 330 * its total fixed offset is NET_IP_ALIGN + reg->off 331 331 * (14) which is 16. Then the variable offset is 4-byte ··· 337 337 /* Constant offset is added to R5 packet pointer, 338 338 * resulting in reg->off value of 14. 
339 339 */ 340 - {26, "R5_w", "pkt(off=14,r=8)"}, 340 + {26, "R5", "pkt(off=14,r=8)"}, 341 341 /* Variable offset is added to R5, resulting in a 342 342 * variable offset of (4n). See comment for insn #18 343 343 * for R4 = R5 trick. 344 344 */ 345 - {28, "R4_w", "var_off=(0x0; 0x3fc)"}, 346 - {28, "R5_w", "var_off=(0x0; 0x3fc)"}, 345 + {28, "R4", "var_off=(0x0; 0x3fc)"}, 346 + {28, "R5", "var_off=(0x0; 0x3fc)"}, 347 347 /* Constant is added to R5 again, setting reg->off to 18. */ 348 - {29, "R5_w", "pkt(id=3,off=18,"}, 348 + {29, "R5", "pkt(id=3,off=18,"}, 349 349 /* And once more we add a variable; resulting var_off 350 350 * is still (4n), fixed offset is not changed. 351 351 * Also, we create a new reg->id. 352 352 */ 353 - {31, "R4_w", "var_off=(0x0; 0x7fc)"}, 354 - {31, "R5_w", "var_off=(0x0; 0x7fc)"}, 353 + {31, "R4", "var_off=(0x0; 0x7fc)"}, 354 + {31, "R5", "var_off=(0x0; 0x7fc)"}, 355 355 /* At the time the word size load is performed from R5, 356 356 * its total fixed offset is NET_IP_ALIGN + reg->off (18) 357 357 * which is 20. Then the variable offset is (4n), so ··· 397 397 /* Calculated offset in R6 has unknown value, but known 398 398 * alignment of 4. 399 399 */ 400 - {6, "R2_w", "pkt(r=8)"}, 401 - {7, "R6_w", "var_off=(0x0; 0x3fc)"}, 400 + {6, "R2", "pkt(r=8)"}, 401 + {7, "R6", "var_off=(0x0; 0x3fc)"}, 402 402 /* Adding 14 makes R6 be (4n+2) */ 403 - {8, "R6_w", "var_off=(0x2; 0x7fc)"}, 403 + {8, "R6", "var_off=(0x2; 0x7fc)"}, 404 404 /* Packet pointer has (4n+2) offset */ 405 - {11, "R5_w", "var_off=(0x2; 0x7fc)"}, 405 + {11, "R5", "var_off=(0x2; 0x7fc)"}, 406 406 {12, "R4", "var_off=(0x2; 0x7fc)"}, 407 407 /* At the time the word size load is performed from R5, 408 408 * its total fixed offset is NET_IP_ALIGN + reg->off (0) ··· 414 414 /* Newly read value in R6 was shifted left by 2, so has 415 415 * known alignment of 4. 
416 416 */ 417 - {17, "R6_w", "var_off=(0x0; 0x3fc)"}, 417 + {17, "R6", "var_off=(0x0; 0x3fc)"}, 418 418 /* Added (4n) to packet pointer's (4n+2) var_off, giving 419 419 * another (4n+2). 420 420 */ 421 - {19, "R5_w", "var_off=(0x2; 0xffc)"}, 421 + {19, "R5", "var_off=(0x2; 0xffc)"}, 422 422 {20, "R4", "var_off=(0x2; 0xffc)"}, 423 423 /* At the time the word size load is performed from R5, 424 424 * its total fixed offset is NET_IP_ALIGN + reg->off (0) ··· 459 459 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 460 460 .result = REJECT, 461 461 .matches = { 462 - {3, "R5_w", "pkt_end()"}, 462 + {3, "R5", "pkt_end()"}, 463 463 /* (ptr - ptr) << 2 == unknown, (4n) */ 464 - {5, "R5_w", "var_off=(0x0; 0xfffffffffffffffc)"}, 464 + {5, "R5", "var_off=(0x0; 0xfffffffffffffffc)"}, 465 465 /* (4n) + 14 == (4n+2). We blow our bounds, because 466 466 * the add could overflow. 467 467 */ 468 - {6, "R5_w", "var_off=(0x2; 0xfffffffffffffffc)"}, 468 + {6, "R5", "var_off=(0x2; 0xfffffffffffffffc)"}, 469 469 /* Checked s>=0 */ 470 470 {9, "R5", "var_off=(0x2; 0x7ffffffffffffffc)"}, 471 471 /* packet pointer + nonnegative (4n+2) */ 472 - {11, "R6_w", "var_off=(0x2; 0x7ffffffffffffffc)"}, 473 - {12, "R4_w", "var_off=(0x2; 0x7ffffffffffffffc)"}, 472 + {11, "R6", "var_off=(0x2; 0x7ffffffffffffffc)"}, 473 + {12, "R4", "var_off=(0x2; 0x7ffffffffffffffc)"}, 474 474 /* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine. 475 475 * We checked the bounds, but it might have been able 476 476 * to overflow if the packet pointer started in the ··· 478 478 * So we did not get a 'range' on R6, and the access 479 479 * attempt will fail. 480 480 */ 481 - {15, "R6_w", "var_off=(0x2; 0x7ffffffffffffffc)"}, 481 + {15, "R6", "var_off=(0x2; 0x7ffffffffffffffc)"}, 482 482 } 483 483 }, 484 484 { ··· 513 513 /* Calculated offset in R6 has unknown value, but known 514 514 * alignment of 4. 
515 515 */ 516 - {6, "R2_w", "pkt(r=8)"}, 517 - {8, "R6_w", "var_off=(0x0; 0x3fc)"}, 516 + {6, "R2", "pkt(r=8)"}, 517 + {8, "R6", "var_off=(0x0; 0x3fc)"}, 518 518 /* Adding 14 makes R6 be (4n+2) */ 519 - {9, "R6_w", "var_off=(0x2; 0x7fc)"}, 519 + {9, "R6", "var_off=(0x2; 0x7fc)"}, 520 520 /* New unknown value in R7 is (4n) */ 521 - {10, "R7_w", "var_off=(0x0; 0x3fc)"}, 521 + {10, "R7", "var_off=(0x0; 0x3fc)"}, 522 522 /* Subtracting it from R6 blows our unsigned bounds */ 523 523 {11, "R6", "var_off=(0x2; 0xfffffffffffffffc)"}, 524 524 /* Checked s>= 0 */ ··· 566 566 /* Calculated offset in R6 has unknown value, but known 567 567 * alignment of 4. 568 568 */ 569 - {6, "R2_w", "pkt(r=8)"}, 570 - {9, "R6_w", "var_off=(0x0; 0x3c)"}, 569 + {6, "R2", "pkt(r=8)"}, 570 + {9, "R6", "var_off=(0x0; 0x3c)"}, 571 571 /* Adding 14 makes R6 be (4n+2) */ 572 - {10, "R6_w", "var_off=(0x2; 0x7c)"}, 572 + {10, "R6", "var_off=(0x2; 0x7c)"}, 573 573 /* Subtracting from packet pointer overflows ubounds */ 574 - {13, "R5_w", "var_off=(0xffffffffffffff82; 0x7c)"}, 574 + {13, "R5", "var_off=(0xffffffffffffff82; 0x7c)"}, 575 575 /* New unknown value in R7 is (4n), >= 76 */ 576 - {14, "R7_w", "var_off=(0x0; 0x7fc)"}, 576 + {14, "R7", "var_off=(0x0; 0x7fc)"}, 577 577 /* Adding it to packet pointer gives nice bounds again */ 578 - {16, "R5_w", "var_off=(0x2; 0x7fc)"}, 578 + {16, "R5", "var_off=(0x2; 0x7fc)"}, 579 579 /* At the time the word size load is performed from R5, 580 580 * its total fixed offset is NET_IP_ALIGN + reg->off (0) 581 581 * which is 2. Then the variable offset is (4n+2), so
+6 -6
tools/testing/selftests/bpf/prog_tests/spin_lock.c
··· 13 13 const char *err_msg; 14 14 } spin_lock_fail_tests[] = { 15 15 { "lock_id_kptr_preserve", 16 - "5: (bf) r1 = r0 ; R0_w=ptr_foo(id=2,ref_obj_id=2) " 17 - "R1_w=ptr_foo(id=2,ref_obj_id=2) refs=2\n6: (85) call bpf_this_cpu_ptr#154\n" 16 + "5: (bf) r1 = r0 ; R0=ptr_foo(id=2,ref_obj_id=2) " 17 + "R1=ptr_foo(id=2,ref_obj_id=2) refs=2\n6: (85) call bpf_this_cpu_ptr#154\n" 18 18 "R1 type=ptr_ expected=percpu_ptr_" }, 19 19 { "lock_id_global_zero", 20 - "; R1_w=map_value(map=.data.A,ks=4,vs=4)\n2: (85) call bpf_this_cpu_ptr#154\n" 20 + "; R1=map_value(map=.data.A,ks=4,vs=4)\n2: (85) call bpf_this_cpu_ptr#154\n" 21 21 "R1 type=map_value expected=percpu_ptr_" }, 22 22 { "lock_id_mapval_preserve", 23 23 "[0-9]\\+: (bf) r1 = r0 ;" 24 - " R0_w=map_value(id=1,map=array_map,ks=4,vs=8)" 25 - " R1_w=map_value(id=1,map=array_map,ks=4,vs=8)\n" 24 + " R0=map_value(id=1,map=array_map,ks=4,vs=8)" 25 + " R1=map_value(id=1,map=array_map,ks=4,vs=8)\n" 26 26 "[0-9]\\+: (85) call bpf_this_cpu_ptr#154\n" 27 27 "R1 type=map_value expected=percpu_ptr_" }, 28 28 { "lock_id_innermapval_preserve", 29 29 "[0-9]\\+: (bf) r1 = r0 ;" 30 30 " R0=map_value(id=2,ks=4,vs=8)" 31 - " R1_w=map_value(id=2,ks=4,vs=8)\n" 31 + " R1=map_value(id=2,ks=4,vs=8)\n" 32 32 "[0-9]\\+: (85) call bpf_this_cpu_ptr#154\n" 33 33 "R1 type=map_value expected=percpu_ptr_" }, 34 34 { "lock_id_mismatch_kptr_kptr", "bpf_spin_unlock of different lock" },
+22 -22
tools/testing/selftests/bpf/prog_tests/test_veristat.c
··· 75 75 " -vl2 > %s", fix->veristat, fix->tmpfile); 76 76 77 77 read(fix->fd, fix->output, fix->sz); 78 - __CHECK_STR("_w=0xf000000000000001 ", "var_s64 = 0xf000000000000001"); 79 - __CHECK_STR("_w=0xfedcba9876543210 ", "var_u64 = 0xfedcba9876543210"); 80 - __CHECK_STR("_w=0x80000000 ", "var_s32 = -0x80000000"); 81 - __CHECK_STR("_w=0x76543210 ", "var_u32 = 0x76543210"); 82 - __CHECK_STR("_w=0x8000 ", "var_s16 = -32768"); 83 - __CHECK_STR("_w=0xecec ", "var_u16 = 60652"); 84 - __CHECK_STR("_w=128 ", "var_s8 = -128"); 85 - __CHECK_STR("_w=255 ", "var_u8 = 255"); 86 - __CHECK_STR("_w=11 ", "var_ea = EA2"); 87 - __CHECK_STR("_w=12 ", "var_eb = EB2"); 88 - __CHECK_STR("_w=13 ", "var_ec = EC2"); 89 - __CHECK_STR("_w=1 ", "var_b = 1"); 90 - __CHECK_STR("_w=170 ", "struct1[2].struct2[1][2].u.var_u8[2]=170"); 91 - __CHECK_STR("_w=0xaaaa ", "union1.var_u16 = 0xaaaa"); 92 - __CHECK_STR("_w=171 ", "arr[3]= 171"); 93 - __CHECK_STR("_w=172 ", "arr[EA2] =172"); 94 - __CHECK_STR("_w=10 ", "enum_arr[EC2]=EA3"); 95 - __CHECK_STR("_w=173 ", "matrix[31][7][11]=173"); 96 - __CHECK_STR("_w=174 ", "struct1[2].struct2[1][2].u.mat[5][3]=174"); 97 - __CHECK_STR("_w=175 ", "struct11[7][5].struct2[0][1].u.mat[3][0]=175"); 78 + __CHECK_STR("=0xf000000000000001 ", "var_s64 = 0xf000000000000001"); 79 + __CHECK_STR("=0xfedcba9876543210 ", "var_u64 = 0xfedcba9876543210"); 80 + __CHECK_STR("=0x80000000 ", "var_s32 = -0x80000000"); 81 + __CHECK_STR("=0x76543210 ", "var_u32 = 0x76543210"); 82 + __CHECK_STR("=0x8000 ", "var_s16 = -32768"); 83 + __CHECK_STR("=0xecec ", "var_u16 = 60652"); 84 + __CHECK_STR("=128 ", "var_s8 = -128"); 85 + __CHECK_STR("=255 ", "var_u8 = 255"); 86 + __CHECK_STR("=11 ", "var_ea = EA2"); 87 + __CHECK_STR("=12 ", "var_eb = EB2"); 88 + __CHECK_STR("=13 ", "var_ec = EC2"); 89 + __CHECK_STR("=1 ", "var_b = 1"); 90 + __CHECK_STR("=170 ", "struct1[2].struct2[1][2].u.var_u8[2]=170"); 91 + __CHECK_STR("=0xaaaa ", "union1.var_u16 = 0xaaaa"); 92 + __CHECK_STR("=171 ", "arr[3]= 
171"); 93 + __CHECK_STR("=172 ", "arr[EA2] =172"); 94 + __CHECK_STR("=10 ", "enum_arr[EC2]=EA3"); 95 + __CHECK_STR("=173 ", "matrix[31][7][11]=173"); 96 + __CHECK_STR("=174 ", "struct1[2].struct2[1][2].u.mat[5][3]=174"); 97 + __CHECK_STR("=175 ", "struct11[7][5].struct2[0][1].u.mat[3][0]=175"); 98 98 99 99 out: 100 100 teardown_fixture(fix); ··· 117 117 SYS(out, "%s set_global_vars.bpf.o -G \"@%s\" -vl2 > %s", 118 118 fix->veristat, input_file, fix->tmpfile); 119 119 read(fix->fd, fix->output, fix->sz); 120 - __CHECK_STR("_w=0x8000 ", "var_s16 = -32768"); 121 - __CHECK_STR("_w=0xecec ", "var_u16 = 60652"); 120 + __CHECK_STR("=0x8000 ", "var_s16 = -32768"); 121 + __CHECK_STR("=0xecec ", "var_u16 = 60652"); 122 122 123 123 out: 124 124 close(fd);
+17 -17
tools/testing/selftests/bpf/progs/exceptions_assert.c
··· 18 18 return *(u64 *)num; \ 19 19 } 20 20 21 - __msg(": R0_w=0xffffffff80000000") 21 + __msg(": R0=0xffffffff80000000") 22 22 check_assert(s64, ==, eq_int_min, INT_MIN); 23 - __msg(": R0_w=0x7fffffff") 23 + __msg(": R0=0x7fffffff") 24 24 check_assert(s64, ==, eq_int_max, INT_MAX); 25 - __msg(": R0_w=0") 25 + __msg(": R0=0") 26 26 check_assert(s64, ==, eq_zero, 0); 27 - __msg(": R0_w=0x8000000000000000 R1_w=0x8000000000000000") 27 + __msg(": R0=0x8000000000000000 R1=0x8000000000000000") 28 28 check_assert(s64, ==, eq_llong_min, LLONG_MIN); 29 - __msg(": R0_w=0x7fffffffffffffff R1_w=0x7fffffffffffffff") 29 + __msg(": R0=0x7fffffffffffffff R1=0x7fffffffffffffff") 30 30 check_assert(s64, ==, eq_llong_max, LLONG_MAX); 31 31 32 - __msg(": R0_w=scalar(id=1,smax=0x7ffffffe)") 32 + __msg(": R0=scalar(id=1,smax=0x7ffffffe)") 33 33 check_assert(s64, <, lt_pos, INT_MAX); 34 - __msg(": R0_w=scalar(id=1,smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff))") 34 + __msg(": R0=scalar(id=1,smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff))") 35 35 check_assert(s64, <, lt_zero, 0); 36 - __msg(": R0_w=scalar(id=1,smax=0xffffffff7fffffff") 36 + __msg(": R0=scalar(id=1,smax=0xffffffff7fffffff") 37 37 check_assert(s64, <, lt_neg, INT_MIN); 38 38 39 - __msg(": R0_w=scalar(id=1,smax=0x7fffffff)") 39 + __msg(": R0=scalar(id=1,smax=0x7fffffff)") 40 40 check_assert(s64, <=, le_pos, INT_MAX); 41 - __msg(": R0_w=scalar(id=1,smax=0)") 41 + __msg(": R0=scalar(id=1,smax=0)") 42 42 check_assert(s64, <=, le_zero, 0); 43 - __msg(": R0_w=scalar(id=1,smax=0xffffffff80000000") 43 + __msg(": R0=scalar(id=1,smax=0xffffffff80000000") 44 44 check_assert(s64, <=, le_neg, INT_MIN); 45 45 46 - __msg(": R0_w=scalar(id=1,smin=umin=0x80000000,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 46 + __msg(": R0=scalar(id=1,smin=umin=0x80000000,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 47 47 check_assert(s64, >, gt_pos, 
INT_MAX); 48 - __msg(": R0_w=scalar(id=1,smin=umin=1,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 48 + __msg(": R0=scalar(id=1,smin=umin=1,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 49 49 check_assert(s64, >, gt_zero, 0); 50 - __msg(": R0_w=scalar(id=1,smin=0xffffffff80000001") 50 + __msg(": R0=scalar(id=1,smin=0xffffffff80000001") 51 51 check_assert(s64, >, gt_neg, INT_MIN); 52 52 53 - __msg(": R0_w=scalar(id=1,smin=umin=0x7fffffff,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 53 + __msg(": R0=scalar(id=1,smin=umin=0x7fffffff,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 54 54 check_assert(s64, >=, ge_pos, INT_MAX); 55 - __msg(": R0_w=scalar(id=1,smin=0,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 55 + __msg(": R0=scalar(id=1,smin=0,umax=0x7fffffffffffffff,var_off=(0x0; 0x7fffffffffffffff))") 56 56 check_assert(s64, >=, ge_zero, 0); 57 - __msg(": R0_w=scalar(id=1,smin=0xffffffff80000000") 57 + __msg(": R0=scalar(id=1,smin=0xffffffff80000000") 58 58 check_assert(s64, >=, ge_neg, INT_MIN); 59 59 60 60 SEC("?tc")
+2 -2
tools/testing/selftests/bpf/progs/iters_state_safety.c
··· 30 30 31 31 SEC("?raw_tp") 32 32 __success __log_level(2) 33 - __msg("fp-8_w=iter_num(ref_id=1,state=active,depth=0)") 33 + __msg("fp-8=iter_num(ref_id=1,state=active,depth=0)") 34 34 int create_and_destroy(void *ctx) 35 35 { 36 36 struct bpf_iter_num iter; ··· 196 196 197 197 SEC("?raw_tp") 198 198 __success __log_level(2) 199 - __msg("fp-8_w=iter_num(ref_id=1,state=active,depth=0)") 199 + __msg("fp-8=iter_num(ref_id=1,state=active,depth=0)") 200 200 int valid_stack_reuse(void *ctx) 201 201 { 202 202 struct bpf_iter_num iter;
+3 -3
tools/testing/selftests/bpf/progs/iters_testmod_seq.c
··· 20 20 21 21 SEC("raw_tp/sys_enter") 22 22 __success __log_level(2) 23 - __msg("fp-16_w=iter_testmod_seq(ref_id=1,state=active,depth=0)") 23 + __msg("fp-16=iter_testmod_seq(ref_id=1,state=active,depth=0)") 24 24 __msg("fp-16=iter_testmod_seq(ref_id=1,state=drained,depth=0)") 25 25 __msg("call bpf_iter_testmod_seq_destroy") 26 26 int testmod_seq_empty(const void *ctx) ··· 38 38 39 39 SEC("raw_tp/sys_enter") 40 40 __success __log_level(2) 41 - __msg("fp-16_w=iter_testmod_seq(ref_id=1,state=active,depth=0)") 41 + __msg("fp-16=iter_testmod_seq(ref_id=1,state=active,depth=0)") 42 42 __msg("fp-16=iter_testmod_seq(ref_id=1,state=drained,depth=0)") 43 43 __msg("call bpf_iter_testmod_seq_destroy") 44 44 int testmod_seq_full(const void *ctx) ··· 58 58 59 59 SEC("raw_tp/sys_enter") 60 60 __success __log_level(2) 61 - __msg("fp-16_w=iter_testmod_seq(ref_id=1,state=active,depth=0)") 61 + __msg("fp-16=iter_testmod_seq(ref_id=1,state=active,depth=0)") 62 62 __msg("fp-16=iter_testmod_seq(ref_id=1,state=drained,depth=0)") 63 63 __msg("call bpf_iter_testmod_seq_destroy") 64 64 int testmod_seq_truncated(const void *ctx)
+2 -2
tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
··· 8 8 SEC("tp_btf/sys_enter") 9 9 __success 10 10 __log_level(2) 11 - __msg("r8 = *(u64 *)(r7 +0) ; R7_w=ptr_nameidata(off={{[0-9]+}}) R8_w=rdonly_untrusted_mem(sz=0)") 12 - __msg("r9 = *(u8 *)(r8 +0) ; R8_w=rdonly_untrusted_mem(sz=0) R9_w=scalar") 11 + __msg("r8 = *(u64 *)(r7 +0) ; R7=ptr_nameidata(off={{[0-9]+}}) R8=rdonly_untrusted_mem(sz=0)") 12 + __msg("r9 = *(u8 *)(r8 +0) ; R8=rdonly_untrusted_mem(sz=0) R9=scalar") 13 13 int btf_id_to_ptr_mem(void *ctx) 14 14 { 15 15 struct task_struct *task;
+19 -19
tools/testing/selftests/bpf/progs/verifier_bounds.c
··· 926 926 SEC("socket") 927 927 __description("bounds check for non const xor src dst") 928 928 __success __log_level(2) 929 - __msg("5: (af) r0 ^= r6 ; R0_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=431,var_off=(0x0; 0x1af))") 929 + __msg("5: (af) r0 ^= r6 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=431,var_off=(0x0; 0x1af))") 930 930 __naked void non_const_xor_src_dst(void) 931 931 { 932 932 asm volatile (" \
··· 947 947 SEC("socket") 948 948 __description("bounds check for non const or src dst") 949 949 __success __log_level(2) 950 - __msg("5: (4f) r0 |= r6 ; R0_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=431,var_off=(0x0; 0x1af))") 950 + __msg("5: (4f) r0 |= r6 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=431,var_off=(0x0; 0x1af))") 951 951 __naked void non_const_or_src_dst(void) 952 952 { 953 953 asm volatile (" \
··· 968 968 SEC("socket") 969 969 __description("bounds check for non const mul regs") 970 970 __success __log_level(2) 971 - __msg("5: (2f) r0 *= r6 ; R0_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=3825,var_off=(0x0; 0xfff))") 971 + __msg("5: (2f) r0 *= r6 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=3825,var_off=(0x0; 0xfff))") 972 972 __naked void non_const_mul_regs(void) 973 973 { 974 974 asm volatile (" \
··· 1241 1241 SEC("tc") 1242 1242 __description("multiply mixed sign bounds. test 1") 1243 1243 __success __log_level(2) 1244 - __msg("r6 *= r7 {{.*}}; R6_w=scalar(smin=umin=0x1bc16d5cd4927ee1,smax=umax=0x1bc16d674ec80000,smax32=0x7ffffeff,umax32=0xfffffeff,var_off=(0x1bc16d4000000000; 0x3ffffffeff))") 1244 + __msg("r6 *= r7 {{.*}}; R6=scalar(smin=umin=0x1bc16d5cd4927ee1,smax=umax=0x1bc16d674ec80000,smax32=0x7ffffeff,umax32=0xfffffeff,var_off=(0x1bc16d4000000000; 0x3ffffffeff))") 1245 1245 __naked void mult_mixed0_sign(void) 1246 1246 { 1247 1247 asm volatile (
··· 1264 1264 SEC("tc") 1265 1265 __description("multiply mixed sign bounds. test 2") 1266 1266 __success __log_level(2) 1267 - __msg("r6 *= r7 {{.*}}; R6_w=scalar(smin=smin32=-100,smax=smax32=200)") 1267 + __msg("r6 *= r7 {{.*}}; R6=scalar(smin=smin32=-100,smax=smax32=200)") 1268 1268 __naked void mult_mixed1_sign(void) 1269 1269 { 1270 1270 asm volatile (
··· 1287 1287 SEC("tc") 1288 1288 __description("multiply negative bounds") 1289 1289 __success __log_level(2) 1290 - __msg("r6 *= r7 {{.*}}; R6_w=scalar(smin=umin=smin32=umin32=0x3ff280b0,smax=umax=smax32=umax32=0x3fff0001,var_off=(0x3ff00000; 0xf81ff))") 1290 + __msg("r6 *= r7 {{.*}}; R6=scalar(smin=umin=smin32=umin32=0x3ff280b0,smax=umax=smax32=umax32=0x3fff0001,var_off=(0x3ff00000; 0xf81ff))") 1291 1291 __naked void mult_sign_bounds(void) 1292 1292 { 1293 1293 asm volatile (
··· 1311 1311 SEC("tc") 1312 1312 __description("multiply bounds that don't cross signed boundary") 1313 1313 __success __log_level(2) 1314 - __msg("r8 *= r6 {{.*}}; R6_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=11,var_off=(0x0; 0xb)) R8_w=scalar(smin=0,smax=umax=0x7b96bb0a94a3a7cd,var_off=(0x0; 0x7fffffffffffffff))") 1314 + __msg("r8 *= r6 {{.*}}; R6=scalar(smin=smin32=0,smax=umax=smax32=umax32=11,var_off=(0x0; 0xb)) R8=scalar(smin=0,smax=umax=0x7b96bb0a94a3a7cd,var_off=(0x0; 0x7fffffffffffffff))") 1315 1315 __naked void mult_no_sign_crossing(void) 1316 1316 { 1317 1317 asm volatile (
··· 1331 1331 SEC("tc") 1332 1332 __description("multiplication overflow, result in unbounded reg. test 1") 1333 1333 __success __log_level(2) 1334 - __msg("r6 *= r7 {{.*}}; R6_w=scalar()") 1334 + __msg("r6 *= r7 {{.*}}; R6=scalar()") 1335 1335 __naked void mult_unsign_ovf(void) 1336 1336 { 1337 1337 asm volatile (
··· 1353 1353 SEC("tc") 1354 1354 __description("multiplication overflow, result in unbounded reg. test 2") 1355 1355 __success __log_level(2) 1356 - __msg("r6 *= r7 {{.*}}; R6_w=scalar()") 1356 + __msg("r6 *= r7 {{.*}}; R6=scalar()") 1357 1357 __naked void mult_sign_ovf(void) 1358 1358 { 1359 1359 asm volatile (
··· 1376 1376 SEC("socket") 1377 1377 __description("64-bit addition, all outcomes overflow") 1378 1378 __success __log_level(2) 1379 - __msg("5: (0f) r3 += r3 {{.*}} R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)") 1379 + __msg("5: (0f) r3 += r3 {{.*}} R3=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)") 1380 1380 __retval(0) 1381 1381 __naked void add64_full_overflow(void) 1382 1382 {
··· 1396 1396 SEC("socket") 1397 1397 __description("64-bit addition, partial overflow, result in unbounded reg") 1398 1398 __success __log_level(2) 1399 - __msg("4: (0f) r3 += r3 {{.*}} R3_w=scalar()") 1399 + __msg("4: (0f) r3 += r3 {{.*}} R3=scalar()") 1400 1400 __retval(0) 1401 1401 __naked void add64_partial_overflow(void) 1402 1402 {
··· 1416 1416 SEC("socket") 1417 1417 __description("32-bit addition overflow, all outcomes overflow") 1418 1418 __success __log_level(2) 1419 - __msg("4: (0c) w3 += w3 {{.*}} R3_w=scalar(smin=umin=umin32=0x40000000,smax=umax=umax32=0xfffffffe,var_off=(0x0; 0xffffffff))") 1419 + __msg("4: (0c) w3 += w3 {{.*}} R3=scalar(smin=umin=umin32=0x40000000,smax=umax=umax32=0xfffffffe,var_off=(0x0; 0xffffffff))") 1420 1420 __retval(0) 1421 1421 __naked void add32_full_overflow(void) 1422 1422 {
··· 1436 1436 SEC("socket") 1437 1437 __description("32-bit addition, partial overflow, result in unbounded u32 bounds") 1438 1438 __success __log_level(2) 1439 - __msg("4: (0c) w3 += w3 {{.*}} R3_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))") 1439 + __msg("4: (0c) w3 += w3 {{.*}} R3=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))") 1440 1440 __retval(0) 1441 1441 __naked void add32_partial_overflow(void) 1442 1442 {
··· 1456 1456 SEC("socket") 1457 1457 __description("64-bit subtraction, all outcomes underflow") 1458 1458 __success __log_level(2) 1459 - __msg("6: (1f) r3 -= r1 {{.*}} R3_w=scalar(umin=1,umax=0x8000000000000000)") 1459 + __msg("6: (1f) r3 -= r1 {{.*}} R3=scalar(umin=1,umax=0x8000000000000000)") 1460 1460 __retval(0) 1461 1461 __naked void sub64_full_overflow(void) 1462 1462 {
··· 1477 1477 SEC("socket") 1478 1478 __description("64-bit subtraction, partial overflow, result in unbounded reg") 1479 1479 __success __log_level(2) 1480 - __msg("3: (1f) r3 -= r2 {{.*}} R3_w=scalar()") 1480 + __msg("3: (1f) r3 -= r2 {{.*}} R3=scalar()") 1481 1481 __retval(0) 1482 1482 __naked void sub64_partial_overflow(void) 1483 1483 {
··· 1496 1496 SEC("socket") 1497 1497 __description("32-bit subtraction overflow, all outcomes underflow") 1498 1498 __success __log_level(2) 1499 - __msg("5: (1c) w3 -= w1 {{.*}} R3_w=scalar(smin=umin=umin32=1,smax=umax=umax32=0x80000000,var_off=(0x0; 0xffffffff))") 1499 + __msg("5: (1c) w3 -= w1 {{.*}} R3=scalar(smin=umin=umin32=1,smax=umax=umax32=0x80000000,var_off=(0x0; 0xffffffff))") 1500 1500 __retval(0) 1501 1501 __naked void sub32_full_overflow(void) 1502 1502 {
··· 1517 1517 SEC("socket") 1518 1518 __description("32-bit subtraction, partial overflow, result in unbounded u32 bounds") 1519 1519 __success __log_level(2) 1520 - __msg("3: (1c) w3 -= w2 {{.*}} R3_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))") 1520 + __msg("3: (1c) w3 -= w2 {{.*}} R3=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))") 1521 1521 __retval(0) 1522 1522 __naked void sub32_partial_overflow(void) 1523 1523 {
··· 1617 1617 SEC("socket") 1618 1618 __description("bounds deduction cross sign boundary, positive overlap") 1619 1619 __success __log_level(2) __flag(BPF_F_TEST_REG_INVARIANTS) 1620 - __msg("3: (2d) if r0 > r1 {{.*}} R0_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=127,var_off=(0x0; 0x7f))") 1620 + __msg("3: (2d) if r0 > r1 {{.*}} R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=127,var_off=(0x0; 0x7f))") 1621 1621 __retval(0) 1622 1622 __naked void bounds_deduct_positive_overlap(void) 1623 1623 {
··· 1650 1650 SEC("socket") 1651 1651 __description("bounds deduction cross sign boundary, two overlaps") 1652 1652 __failure __flag(BPF_F_TEST_REG_INVARIANTS) 1653 - __msg("3: (2d) if r0 > r1 {{.*}} R0_w=scalar(smin=smin32=-128,smax=smax32=127,umax=0xffffffffffffff80)") 1653 + __msg("3: (2d) if r0 > r1 {{.*}} R0=scalar(smin=smin32=-128,smax=smax32=127,umax=0xffffffffffffff80)") 1654 1654 __msg("frame pointer is read only") 1655 1655 __naked void bounds_deduct_two_overlaps(void) 1656 1656 {
+2 -2
tools/testing/selftests/bpf/progs/verifier_global_ptr_args.c
··· 215 215 SEC("tp_btf/sys_enter") 216 216 __success 217 217 __log_level(2) 218 - __msg("r1 = {{.*}}; {{.*}}R1_w=trusted_ptr_task_struct()") 218 + __msg("r1 = {{.*}}; {{.*}}R1=trusted_ptr_task_struct()") 219 219 __msg("Func#1 ('subprog_untrusted') is global and assumed valid.") 220 220 __msg("Validating subprog_untrusted() func#1...") 221 221 __msg(": R1=untrusted_ptr_task_struct") ··· 278 278 SEC("tp_btf/sys_enter") 279 279 __success 280 280 __log_level(2) 281 - __msg("r1 = {{.*}}; {{.*}}R1_w=trusted_ptr_task_struct()") 281 + __msg("r1 = {{.*}}; {{.*}}R1=trusted_ptr_task_struct()") 282 282 __msg("Func#1 ('subprog_void_untrusted') is global and assumed valid.") 283 283 __msg("Validating subprog_void_untrusted() func#1...") 284 284 __msg(": R1=rdonly_untrusted_mem(sz=0)")
+1 -1
tools/testing/selftests/bpf/progs/verifier_ldsx.c
··· 65 65 SEC("socket") 66 66 __description("LDSX, S8 range checking, privileged") 67 67 __log_level(2) __success __retval(1) 68 - __msg("R1_w=scalar(smin=smin32=-128,smax=smax32=127)") 68 + __msg("R1=scalar(smin=smin32=-128,smax=smax32=127)") 69 69 __naked void ldsx_s8_range_priv(void) 70 70 { 71 71 asm volatile (
+8 -8
tools/testing/selftests/bpf/progs/verifier_precision.c
··· 144 144 __success __log_level(2) 145 145 /* 146 146 * Without the bug fix there will be no history between "last_idx 3 first_idx 3" 147 - * and "parent state regs=" lines. "R0_w=6" parts are here to help anchor 147 + * and "parent state regs=" lines. "R0=6" parts are here to help anchor 148 148 * expected log messages to the one specific mark_chain_precision operation. 149 149 * 150 150 * This is quite fragile: if verifier checkpointing heuristic changes, this 151 151 * might need adjusting. 152 152 */ 153 - __msg("2: (07) r0 += 1 ; R0_w=6") 153 + __msg("2: (07) r0 += 1 ; R0=6") 154 154 __msg("3: (35) if r0 >= 0xa goto pc+1") 155 155 __msg("mark_precise: frame0: last_idx 3 first_idx 3 subseq_idx -1") 156 156 __msg("mark_precise: frame0: regs=r0 stack= before 2: (07) r0 += 1") 157 157 __msg("mark_precise: frame0: regs=r0 stack= before 1: (07) r0 += 1") 158 158 __msg("mark_precise: frame0: regs=r0 stack= before 4: (05) goto pc-4") 159 159 __msg("mark_precise: frame0: regs=r0 stack= before 3: (35) if r0 >= 0xa goto pc+1") 160 - __msg("mark_precise: frame0: parent state regs= stack=: R0_rw=P4") 161 - __msg("3: R0_w=6") 160 + __msg("mark_precise: frame0: parent state regs= stack=: R0=P4") 161 + __msg("3: R0=6") 162 162 __naked int state_loop_first_last_equal(void) 163 163 { 164 164 asm volatile (
··· 233 233 234 234 SEC("lsm.s/socket_connect") 235 235 __success __log_level(2) 236 - __msg("0: (b7) r0 = 1 ; R0_w=1") 237 - __msg("1: (84) w0 = -w0 ; R0_w=0xffffffff") 236 + __msg("0: (b7) r0 = 1 ; R0=1") 237 + __msg("1: (84) w0 = -w0 ; R0=0xffffffff") 238 238 __msg("mark_precise: frame0: last_idx 2 first_idx 0 subseq_idx -1") 239 239 __msg("mark_precise: frame0: regs=r0 stack= before 1: (84) w0 = -w0") 240 240 __msg("mark_precise: frame0: regs=r0 stack= before 0: (b7) r0 = 1")
··· 268 268 269 269 SEC("lsm.s/socket_connect") 270 270 __success __log_level(2) 271 - __msg("0: (b7) r0 = 1 ; R0_w=1") 272 - __msg("1: (87) r0 = -r0 ; R0_w=-1") 271 + __msg("0: (b7) r0 = 1 ; R0=1") 272 + __msg("1: (87) r0 = -r0 ; R0=-1") 273 273 __msg("mark_precise: frame0: last_idx 2 first_idx 0 subseq_idx -1") 274 274 __msg("mark_precise: frame0: regs=r0 stack= before 1: (87) r0 = -r0") 275 275 __msg("mark_precise: frame0: regs=r0 stack= before 0: (b7) r0 = 1")
+5 -5
tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
··· 353 353 * collect_linked_regs() can't tie more than 6 registers for a single insn. 354 354 */ 355 355 __msg("8: (25) if r0 > 0x7 goto pc+0 ; R0=scalar(id=1") 356 - __msg("9: (bf) r6 = r6 ; R6_w=scalar(id=2") 356 + __msg("9: (bf) r6 = r6 ; R6=scalar(id=2") 357 357 /* check that r{0-5} are marked precise after 'if' */ 358 358 __msg("frame0: regs=r0 stack= before 8: (25) if r0 > 0x7 goto pc+0") 359 359 __msg("frame0: parent state regs=r0,r1,r2,r3,r4,r5 stack=:") ··· 779 779 __retval(0) 780 780 /* Check that verifier believes r1/r0 are zero at exit */ 781 781 __log_level(2) 782 - __msg("4: (77) r1 >>= 32 ; R1_w=0") 783 - __msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0") 782 + __msg("4: (77) r1 >>= 32 ; R1=0") 783 + __msg("5: (bf) r0 = r1 ; R0=0 R1=0") 784 784 __msg("6: (95) exit") 785 785 __msg("from 3 to 4") 786 - __msg("4: (77) r1 >>= 32 ; R1_w=0") 787 - __msg("5: (bf) r0 = r1 ; R0_w=0 R1_w=0") 786 + __msg("4: (77) r1 >>= 32 ; R1=0") 787 + __msg("5: (bf) r0 = r1 ; R0=0 R1=0") 788 788 __msg("6: (95) exit") 789 789 /* Verify that statements to randomize upper half of r1 had not been 790 790 * generated.
+20 -20
tools/testing/selftests/bpf/progs/verifier_spill_fill.c
··· 506 506 __log_level(2) 507 507 __success 508 508 /* fp-8 is spilled IMPRECISE value zero (represented by a zero value fake reg) */ 509 - __msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8_w=0") 509 + __msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8=0") 510 510 /* but fp-16 is spilled IMPRECISE zero const reg */ 511 - __msg("4: (7b) *(u64 *)(r10 -16) = r0 ; R0_w=0 R10=fp0 fp-16_w=0") 511 + __msg("4: (7b) *(u64 *)(r10 -16) = r0 ; R0=0 R10=fp0 fp-16=0") 512 512 /* validate that assigning R2 from STACK_SPILL with zero value doesn't mark register 513 513 * precise immediately; if necessary, it will be marked precise later 514 514 */ 515 - __msg("6: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8_w=0") 515 + __msg("6: (71) r2 = *(u8 *)(r10 -1) ; R2=0 R10=fp0 fp-8=0") 516 516 /* similarly, when R2 is assigned from spilled register, it is initially 517 517 * imprecise, but will be marked precise later once it is used in precise context 518 518 */ 519 - __msg("10: (71) r2 = *(u8 *)(r10 -9) ; R2_w=0 R10=fp0 fp-16_w=0") 519 + __msg("10: (71) r2 = *(u8 *)(r10 -9) ; R2=0 R10=fp0 fp-16=0") 520 520 __msg("11: (0f) r1 += r2") 521 521 __msg("mark_precise: frame0: last_idx 11 first_idx 0 subseq_idx -1") 522 522 __msg("mark_precise: frame0: regs=r2 stack= before 10: (71) r2 = *(u8 *)(r10 -9)")
··· 598 598 __success 599 599 /* fp-4 is STACK_ZERO */ 600 600 __msg("2: (62) *(u32 *)(r10 -4) = 0 ; R10=fp0 fp-8=0000????") 601 - __msg("4: (71) r2 = *(u8 *)(r10 -1) ; R2_w=0 R10=fp0 fp-8=0000????") 601 + __msg("4: (71) r2 = *(u8 *)(r10 -1) ; R2=0 R10=fp0 fp-8=0000????") 602 602 __msg("5: (0f) r1 += r2") 603 603 __msg("mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1") 604 604 __msg("mark_precise: frame0: regs=r2 stack= before 4: (71) r2 = *(u8 *)(r10 -1)")
··· 640 640 __log_level(2) __flag(BPF_F_TEST_STATE_FREQ) 641 641 __success 642 642 /* make sure fp-8 is IMPRECISE fake register spill */ 643 - __msg("3: (7a) *(u64 *)(r10 -8) = 1 ; R10=fp0 fp-8_w=1") 643 + __msg("3: (7a) *(u64 *)(r10 -8) = 1 ; R10=fp0 fp-8=1") 644 644 /* and fp-16 is spilled IMPRECISE const reg */ 645 - __msg("5: (7b) *(u64 *)(r10 -16) = r0 ; R0_w=1 R10=fp0 fp-16_w=1") 645 + __msg("5: (7b) *(u64 *)(r10 -16) = r0 ; R0=1 R10=fp0 fp-16=1") 646 646 /* validate load from fp-8, which was initialized using BPF_ST_MEM */ 647 - __msg("8: (79) r2 = *(u64 *)(r10 -8) ; R2_w=1 R10=fp0 fp-8=1") 647 + __msg("8: (79) r2 = *(u64 *)(r10 -8) ; R2=1 R10=fp0 fp-8=1") 648 648 __msg("9: (0f) r1 += r2") 649 649 __msg("mark_precise: frame0: last_idx 9 first_idx 7 subseq_idx -1") 650 650 __msg("mark_precise: frame0: regs=r2 stack= before 8: (79) r2 = *(u64 *)(r10 -8)") 651 651 __msg("mark_precise: frame0: regs= stack=-8 before 7: (bf) r1 = r6") 652 652 /* note, fp-8 is precise, fp-16 is not yet precise, we'll get there */ 653 - __msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_rw=P1 fp-16_w=1") 653 + __msg("mark_precise: frame0: parent state regs= stack=-8: R0=1 R1=ctx() R6=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8=P1 fp-16=1") 654 654 __msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7") 655 655 __msg("mark_precise: frame0: regs= stack=-8 before 6: (05) goto pc+0") 656 656 __msg("mark_precise: frame0: regs= stack=-8 before 5: (7b) *(u64 *)(r10 -16) = r0") 657 657 __msg("mark_precise: frame0: regs= stack=-8 before 4: (b7) r0 = 1") 658 658 __msg("mark_precise: frame0: regs= stack=-8 before 3: (7a) *(u64 *)(r10 -8) = 1") 659 - __msg("10: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1") 659 + __msg("10: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1") 660 660 /* validate load from fp-16, which was initialized using BPF_STX_MEM */ 661 - __msg("12: (79) r2 = *(u64 *)(r10 -16) ; R2_w=1 R10=fp0 fp-16=1") 661 + __msg("12: (79) r2 = *(u64 *)(r10 -16) ; R2=1 R10=fp0 fp-16=1") 662 662 __msg("13: (0f) r1 += r2") 663 663 __msg("mark_precise: frame0: last_idx 13 first_idx 7 subseq_idx -1") 664 664 __msg("mark_precise: frame0: regs=r2 stack= before 12: (79) r2 = *(u64 *)(r10 -16)")
··· 668 668 __msg("mark_precise: frame0: regs= stack=-16 before 8: (79) r2 = *(u64 *)(r10 -8)") 669 669 __msg("mark_precise: frame0: regs= stack=-16 before 7: (bf) r1 = r6") 670 670 /* now both fp-8 and fp-16 are precise, very good */ 671 - __msg("mark_precise: frame0: parent state regs= stack=-16: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_rw=P1 fp-16_rw=P1") 671 + __msg("mark_precise: frame0: parent state regs= stack=-16: R0=1 R1=ctx() R6=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8=P1 fp-16=P1") 672 672 __msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7") 673 673 __msg("mark_precise: frame0: regs= stack=-16 before 6: (05) goto pc+0") 674 674 __msg("mark_precise: frame0: regs= stack=-16 before 5: (7b) *(u64 *)(r10 -16) = r0") 675 675 __msg("mark_precise: frame0: regs=r0 stack= before 4: (b7) r0 = 1") 676 - __msg("14: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1") 676 + __msg("14: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1") 677 677 __naked void stack_load_preserves_const_precision(void) 678 678 { 679 679 asm volatile (
··· 719 719 /* make sure fp-8 is 32-bit FAKE subregister spill */ 720 720 __msg("3: (62) *(u32 *)(r10 -8) = 1 ; R10=fp0 fp-8=????1") 721 721 /* but fp-16 is spilled IMPRECISE zero const reg */ 722 - __msg("5: (63) *(u32 *)(r10 -16) = r0 ; R0_w=1 R10=fp0 fp-16=????1") 722 + __msg("5: (63) *(u32 *)(r10 -16) = r0 ; R0=1 R10=fp0 fp-16=????1") 723 723 /* validate load from fp-8, which was initialized using BPF_ST_MEM */ 724 - __msg("8: (61) r2 = *(u32 *)(r10 -8) ; R2_w=1 R10=fp0 fp-8=????1") 724 + __msg("8: (61) r2 = *(u32 *)(r10 -8) ; R2=1 R10=fp0 fp-8=????1") 725 725 __msg("9: (0f) r1 += r2") 726 726 __msg("mark_precise: frame0: last_idx 9 first_idx 7 subseq_idx -1") 727 727 __msg("mark_precise: frame0: regs=r2 stack= before 8: (61) r2 = *(u32 *)(r10 -8)") 728 728 __msg("mark_precise: frame0: regs= stack=-8 before 7: (bf) r1 = r6") 729 - __msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_r=????P1 fp-16=????1") 729 + __msg("mark_precise: frame0: parent state regs= stack=-8: R0=1 R1=ctx() R6=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8=????P1 fp-16=????1") 730 730 __msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7") 731 731 __msg("mark_precise: frame0: regs= stack=-8 before 6: (05) goto pc+0") 732 732 __msg("mark_precise: frame0: regs= stack=-8 before 5: (63) *(u32 *)(r10 -16) = r0") 733 733 __msg("mark_precise: frame0: regs= stack=-8 before 4: (b7) r0 = 1") 734 734 __msg("mark_precise: frame0: regs= stack=-8 before 3: (62) *(u32 *)(r10 -8) = 1") 735 - __msg("10: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1") 735 + __msg("10: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1") 736 736 /* validate load from fp-16, which was initialized using BPF_STX_MEM */ 737 - __msg("12: (61) r2 = *(u32 *)(r10 -16) ; R2_w=1 R10=fp0 fp-16=????1") 737 + __msg("12: (61) r2 = *(u32 *)(r10 -16) ; R2=1 R10=fp0 fp-16=????1") 738 738 __msg("13: (0f) r1 += r2") 739 739 __msg("mark_precise: frame0: last_idx 13 first_idx 7 subseq_idx -1") 740 740 __msg("mark_precise: frame0: regs=r2 stack= before 12: (61) r2 = *(u32 *)(r10 -16)")
··· 743 743 __msg("mark_precise: frame0: regs= stack=-16 before 9: (0f) r1 += r2") 744 744 __msg("mark_precise: frame0: regs= stack=-16 before 8: (61) r2 = *(u32 *)(r10 -8)") 745 745 __msg("mark_precise: frame0: regs= stack=-16 before 7: (bf) r1 = r6") 746 - __msg("mark_precise: frame0: parent state regs= stack=-16: R0_w=1 R1=ctx() R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_r=????P1 fp-16_r=????P1") 746 + __msg("mark_precise: frame0: parent state regs= stack=-16: R0=1 R1=ctx() R6=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8=????P1 fp-16=????P1") 747 747 __msg("mark_precise: frame0: last_idx 6 first_idx 3 subseq_idx 7") 748 748 __msg("mark_precise: frame0: regs= stack=-16 before 6: (05) goto pc+0") 749 749 __msg("mark_precise: frame0: regs= stack=-16 before 5: (63) *(u32 *)(r10 -16) = r0") 750 750 __msg("mark_precise: frame0: regs=r0 stack= before 4: (b7) r0 = 1") 751 - __msg("14: R1_w=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2_w=1") 751 + __msg("14: R1=map_value(map=.data.two_byte_,ks=4,vs=2,off=1) R2=1") 752 752 __naked void stack_load_preserves_const_precision_subreg(void) 753 753 { 754 754 asm volatile (
+3 -3
tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
··· 105 105 __msg("mark_precise: frame0: regs=r0 stack= before 3: (57) r0 &= 3") 106 106 __msg("mark_precise: frame0: regs=r0 stack= before 10: (95) exit") 107 107 __msg("mark_precise: frame1: regs=r0 stack= before 9: (bf) r0 = (s8)r10") 108 - __msg("7: R0_w=scalar") 108 + __msg("7: R0=scalar") 109 109 __naked int fp_precise_subprog_result(void) 110 110 { 111 111 asm volatile ( ··· 141 141 * anyways, at which point we'll break precision chain 142 142 */ 143 143 __msg("mark_precise: frame1: regs=r1 stack= before 9: (bf) r1 = r10") 144 - __msg("7: R0_w=scalar") 144 + __msg("7: R0=scalar") 145 145 __naked int sneaky_fp_precise_subprog_result(void) 146 146 { 147 147 asm volatile ( ··· 681 681 __msg("mark_precise: frame0: regs=r7 stack= before 9: (bf) r1 = r8") 682 682 __msg("mark_precise: frame0: regs=r7 stack= before 8: (27) r7 *= 4") 683 683 __msg("mark_precise: frame0: regs=r7 stack= before 7: (79) r7 = *(u64 *)(r10 -8)") 684 - __msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=2 R6_w=1 R8_rw=map_value(map=.data.vals,ks=4,vs=16) R10=fp0 fp-8_rw=P1") 684 + __msg("mark_precise: frame0: parent state regs= stack=-8: R0=2 R6=1 R8=map_value(map=.data.vals,ks=4,vs=16) R10=fp0 fp-8=P1") 685 685 __msg("mark_precise: frame0: last_idx 18 first_idx 0 subseq_idx 7") 686 686 __msg("mark_precise: frame0: regs= stack=-8 before 18: (95) exit") 687 687 __msg("mark_precise: frame1: regs= stack= before 17: (0f) r0 += r2")
+2 -2
tools/testing/selftests/bpf/verifier/bpf_st_mem.c
··· 93 93 .expected_attach_type = BPF_SK_LOOKUP, 94 94 .result = VERBOSE_ACCEPT, 95 95 .runs = -1, 96 - .errstr = "0: (7a) *(u64 *)(r10 -8) = -44 ; R10=fp0 fp-8_w=-44\ 96 + .errstr = "0: (7a) *(u64 *)(r10 -8) = -44 ; R10=fp0 fp-8=-44\ 97 97 2: (c5) if r0 s< 0x0 goto pc+2\ 98 - R0_w=-44", 98 + R0=-44", 99 99 },
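The selftest churn above is almost entirely mechanical: now that `print_liveness()` is gone, expected verifier log messages no longer carry the `_w`/`_r`/`_rw` liveness suffixes after register and stack-slot names. The rewrite can be sketched as a one-line `sed` over the `__msg()` strings; this is an illustration assembled for this note, not a script from the commit, and it assumes the only change needed is dropping those suffixes:

```shell
# Drop the _w/_r/_rw liveness suffixes after "R<n>" and "fp-<off>" names in
# expected verifier log messages ("rw" before "r"/"w" so the longest suffix wins).
sed -E 's/(R[0-9]+|fp-[0-9]+)_(rw|r|w)=/\1=/g' <<'EOF'
__msg("5: (af) r0 ^= r6 ; R0_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=431,var_off=(0x0; 0x1af))")
__msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=1 R6_r=map_value(map=.data.two_byte_,ks=4,vs=2) R10=fp0 fp-8_rw=P1")
EOF
```

On the sample lines above this prints the post-commit expectations (`R0=scalar(...)`, `R6=map_value(...)`, `fp-8=P1`); tests that assert on the suffixes themselves, such as the `state_loop_first_last_equal` precision test, still needed manual adjustment.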