
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2022-03-04

We've added 32 non-merge commits during the last 14 day(s) which contain
a total of 59 files changed, 1038 insertions(+), 473 deletions(-).

The main changes are:

1) Optimize BPF stackmap's build_id retrieval by caching last valid build_id,
as consecutive stack frames are likely to be in the same VMA and therefore
have the same build id, from Hao Luo.

2) Several improvements to the arm64 BPF JIT, namely, support for JITing
the atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor} and
atomic[64]_{xchg,cmpxchg} operations. Also fix the BTF line info dump for
JITed programs, from Hou Tao.

3) Optimize generic BPF map batch deletion by only enforcing synchronize_rcu()
barrier once upon return to user space, from Eric Dumazet.

4) For the kernel build, parse DWARF and generate BTF through pahole with
multithreading enabled, from Kui-Feng Lee.

5) BPF verifier usability improvements by making log info more concise and
replacing inv with scalar type name, from Mykola Lysenko.

6) Two follow-up fixes for BPF prog JIT pack allocator, from Song Liu.

7) Add a new Kconfig to allow for loading kernel modules with non-matching
BTF type info; their BTF info is then removed on load, from Connor O'Brien.

8) Remove reallocarray() usage from bpftool and switch to libbpf_reallocarray()
in order to fix compilation errors for older glibc, from Mauricio Vásquez.

9) Fix libbpf to error on conflicting name in BTF when type declaration
appears before the definition, from Xu Kuohai.

10) Fix issue in BPF preload for in-kernel light skeleton where loaded BPF
program fds prevent init process from setting up fd 0-2, from Yucong Sun.

11) Fix libbpf reuse of pinned perf RB map when max_entries is auto-determined
by libbpf, from Stijn Tintel.

12) Several cleanups for libbpf and a fix to enforce perf RB map #pages to be
non-zero, from Yuntao Wang.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (32 commits)
bpf: Small BPF verifier log improvements
libbpf: Add a check to ensure that page_cnt is non-zero
bpf, x86: Set header->size properly before freeing it
x86: Disable HAVE_ARCH_HUGE_VMALLOC on 32-bit x86
bpf, test_run: Fix overflow in XDP frags bpf_test_finish
selftests/bpf: Update btf_dump case for conflicting names
libbpf: Skip forward declaration when counting duplicated type names
bpf: Add some description about BPF_JIT_ALWAYS_ON in Kconfig
bpf, docs: Add a missing colon in verifier.rst
bpf: Cache the last valid build_id
libbpf: Fix BPF_MAP_TYPE_PERF_EVENT_ARRAY auto-pinning
bpf, selftests: Use raw_tp program for atomic test
bpf, arm64: Support more atomic operations
bpftool: Remove redundant slashes
bpf: Add config to allow loading modules with BTF mismatches
bpf, arm64: Feed byte-offset into bpf line info
bpf, arm64: Call build_prologue() first in first JIT pass
bpf: Fix issue with bpf preload module taking over stdout/stdin of kernel.
bpftool: Bpf skeletons assert type sizes
bpf: Cleanup comments
...
====================

Link: https://lore.kernel.org/r/20220304164313.31675-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+1039 -474
+1 -1
Documentation/bpf/verifier.rst
···
 	BPF_EXIT_INSN(),
 	};
 
-Error:
+Error::
 
 	unreachable insn 1
 
-12
arch/arm64/include/asm/debug-monitors.h
···
  */
 #define BREAK_INSTR_SIZE	AARCH64_INSN_SIZE
 
-/*
- * BRK instruction encoding
- * The #imm16 value should be placed at bits[20:5] within BRK instruction
- */
-#define AARCH64_BREAK_MON	0xd4200000
-
-/*
- * BRK instruction for provoking a fault on purpose
- * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
- */
-#define AARCH64_BREAK_FAULT	(AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))
-
 #define AARCH64_BREAK_KGDB_DYN_DBG	\
 	(AARCH64_BREAK_MON | (KGDB_DYN_DBG_BRK_IMM << 5))
 
+14
arch/arm64/include/asm/insn-def.h
···
 #ifndef __ASM_INSN_DEF_H
 #define __ASM_INSN_DEF_H
 
+#include <asm/brk-imm.h>
+
 /* A64 instructions are always 32 bits. */
 #define AARCH64_INSN_SIZE	4
+
+/*
+ * BRK instruction encoding
+ * The #imm16 value should be placed at bits[20:5] within BRK instruction
+ */
+#define AARCH64_BREAK_MON	0xd4200000
+
+/*
+ * BRK instruction for provoking a fault on purpose
+ * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
+ */
+#define AARCH64_BREAK_FAULT	(AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))
 
 #endif /* __ASM_INSN_DEF_H */
+73 -7
arch/arm64/include/asm/insn.h
···
 	AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_LOAD_EX,
+	AARCH64_INSN_LDST_LOAD_ACQ_EX,
 	AARCH64_INSN_LDST_STORE_EX,
+	AARCH64_INSN_LDST_STORE_REL_EX,
 };
 
 enum aarch64_insn_adsb_type {
···
 	AARCH64_INSN_ADR_TYPE_ADR,
 };
 
+enum aarch64_insn_mem_atomic_op {
+	AARCH64_INSN_MEM_ATOMIC_ADD,
+	AARCH64_INSN_MEM_ATOMIC_CLR,
+	AARCH64_INSN_MEM_ATOMIC_EOR,
+	AARCH64_INSN_MEM_ATOMIC_SET,
+	AARCH64_INSN_MEM_ATOMIC_SWP,
+};
+
+enum aarch64_insn_mem_order_type {
+	AARCH64_INSN_MEM_ORDER_NONE,
+	AARCH64_INSN_MEM_ORDER_ACQ,
+	AARCH64_INSN_MEM_ORDER_REL,
+	AARCH64_INSN_MEM_ORDER_ACQREL,
+};
+
+enum aarch64_insn_mb_type {
+	AARCH64_INSN_MB_SY,
+	AARCH64_INSN_MB_ST,
+	AARCH64_INSN_MB_LD,
+	AARCH64_INSN_MB_ISH,
+	AARCH64_INSN_MB_ISHST,
+	AARCH64_INSN_MB_ISHLD,
+	AARCH64_INSN_MB_NSH,
+	AARCH64_INSN_MB_NSHST,
+	AARCH64_INSN_MB_NSHLD,
+	AARCH64_INSN_MB_OSH,
+	AARCH64_INSN_MB_OSHST,
+	AARCH64_INSN_MB_OSHLD,
+};
+
 #define __AARCH64_INSN_FUNCS(abbr, mask, val)	\
 static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
 { \
···
 __AARCH64_INSN_FUNCS(load_post,	0x3FE00C00, 0x38400400)
 __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
+__AARCH64_INSN_FUNCS(ldclr,	0x3F20FC00, 0x38201000)
+__AARCH64_INSN_FUNCS(ldeor,	0x3F20FC00, 0x38202000)
+__AARCH64_INSN_FUNCS(ldset,	0x3F20FC00, 0x38203000)
+__AARCH64_INSN_FUNCS(swp,	0x3F20FC00, 0x38208000)
+__AARCH64_INSN_FUNCS(cas,	0x3FA07C00, 0x08A07C00)
 __AARCH64_INSN_FUNCS(ldr_reg,	0x3FE0EC00, 0x38606800)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
···
 				   enum aarch64_insn_register state,
 				   enum aarch64_insn_size_type size,
 				   enum aarch64_insn_ldst_type type);
-u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
-			   enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size);
-u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size);
 u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
 				 enum aarch64_insn_register src,
 				 int imm, enum aarch64_insn_variant variant,
···
 			      enum aarch64_insn_prfm_type type,
 			      enum aarch64_insn_prfm_target target,
 			      enum aarch64_insn_prfm_policy policy);
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
+				  enum aarch64_insn_register address,
+				  enum aarch64_insn_register value,
+				  enum aarch64_insn_size_type size,
+				  enum aarch64_insn_mem_atomic_op op,
+				  enum aarch64_insn_mem_order_type order);
+u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+			 enum aarch64_insn_register address,
+			 enum aarch64_insn_register value,
+			 enum aarch64_insn_size_type size,
+			 enum aarch64_insn_mem_order_type order);
+#else
+static inline
+u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
+				  enum aarch64_insn_register address,
+				  enum aarch64_insn_register value,
+				  enum aarch64_insn_size_type size,
+				  enum aarch64_insn_mem_atomic_op op,
+				  enum aarch64_insn_mem_order_type order)
+{
+	return AARCH64_BREAK_FAULT;
+}
+
+static inline
+u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+			 enum aarch64_insn_register address,
+			 enum aarch64_insn_register value,
+			 enum aarch64_insn_size_type size,
+			 enum aarch64_insn_mem_order_type order)
+{
+	return AARCH64_BREAK_FAULT;
+}
+#endif
+u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
+
 s32 aarch64_get_branch_offset(u32 insn);
 u32 aarch64_set_branch_offset(u32 insn, s32 offset);
 
+172 -15
arch/arm64/lib/insn.c
···
 
 	switch (type) {
 	case AARCH64_INSN_LDST_LOAD_EX:
+	case AARCH64_INSN_LDST_LOAD_ACQ_EX:
 		insn = aarch64_insn_get_load_ex_value();
+		if (type == AARCH64_INSN_LDST_LOAD_ACQ_EX)
+			insn |= BIT(15);
 		break;
 	case AARCH64_INSN_LDST_STORE_EX:
+	case AARCH64_INSN_LDST_STORE_REL_EX:
 		insn = aarch64_insn_get_store_ex_value();
+		if (type == AARCH64_INSN_LDST_STORE_REL_EX)
+			insn |= BIT(15);
 		break;
 	default:
 		pr_err("%s: unknown load/store exclusive encoding %d\n", __func__, type);
···
 					    state);
 }
 
-u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
-			   enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size)
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+static u32 aarch64_insn_encode_ldst_order(enum aarch64_insn_mem_order_type type,
+					  u32 insn)
 {
-	u32 insn = aarch64_insn_get_ldadd_value();
+	u32 order;
+
+	switch (type) {
+	case AARCH64_INSN_MEM_ORDER_NONE:
+		order = 0;
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQ:
+		order = 2;
+		break;
+	case AARCH64_INSN_MEM_ORDER_REL:
+		order = 1;
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQREL:
+		order = 3;
+		break;
+	default:
+		pr_err("%s: unknown mem order %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn &= ~GENMASK(23, 22);
+	insn |= order << 22;
+
+	return insn;
+}
+
+u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
+				  enum aarch64_insn_register address,
+				  enum aarch64_insn_register value,
+				  enum aarch64_insn_size_type size,
+				  enum aarch64_insn_mem_atomic_op op,
+				  enum aarch64_insn_mem_order_type order)
+{
+	u32 insn;
+
+	switch (op) {
+	case AARCH64_INSN_MEM_ATOMIC_ADD:
+		insn = aarch64_insn_get_ldadd_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_CLR:
+		insn = aarch64_insn_get_ldclr_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_EOR:
+		insn = aarch64_insn_get_ldeor_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_SET:
+		insn = aarch64_insn_get_ldset_value();
+		break;
+	case AARCH64_INSN_MEM_ATOMIC_SWP:
+		insn = aarch64_insn_get_swp_value();
+		break;
+	default:
+		pr_err("%s: unimplemented mem atomic op %d\n", __func__, op);
+		return AARCH64_BREAK_FAULT;
+	}
 
 	switch (size) {
 	case AARCH64_INSN_SIZE_32:
···
 
 	insn = aarch64_insn_encode_ldst_size(size, insn);
 
+	insn = aarch64_insn_encode_ldst_order(order, insn);
+
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
 					    result);
 
···
 					    value);
 }
 
-u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
-			   enum aarch64_insn_register value,
-			   enum aarch64_insn_size_type size)
+static u32 aarch64_insn_encode_cas_order(enum aarch64_insn_mem_order_type type,
+					 u32 insn)
 {
-	/*
-	 * STADD is simply encoded as an alias for LDADD with XZR as
-	 * the destination register.
-	 */
-	return aarch64_insn_gen_ldadd(AARCH64_INSN_REG_ZR, address,
-				      value, size);
+	u32 order;
+
+	switch (type) {
+	case AARCH64_INSN_MEM_ORDER_NONE:
+		order = 0;
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQ:
+		order = BIT(22);
+		break;
+	case AARCH64_INSN_MEM_ORDER_REL:
+		order = BIT(15);
+		break;
+	case AARCH64_INSN_MEM_ORDER_ACQREL:
+		order = BIT(15) | BIT(22);
+		break;
+	default:
+		pr_err("%s: unknown mem order %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn &= ~(BIT(15) | BIT(22));
+	insn |= order;
+
+	return insn;
 }
+
+u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+			 enum aarch64_insn_register address,
+			 enum aarch64_insn_register value,
+			 enum aarch64_insn_size_type size,
+			 enum aarch64_insn_mem_order_type order)
+{
+	u32 insn;
+
+	switch (size) {
+	case AARCH64_INSN_SIZE_32:
+	case AARCH64_INSN_SIZE_64:
+		break;
+	default:
+		pr_err("%s: unimplemented size encoding %d\n", __func__, size);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_get_cas_value();
+
+	insn = aarch64_insn_encode_ldst_size(size, insn);
+
+	insn = aarch64_insn_encode_cas_order(order, insn);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
+					    result);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
+					    address);
+
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
+					    value);
+}
+#endif
 
 static u32 aarch64_insn_encode_prfm_imm(enum aarch64_insn_prfm_type type,
 					enum aarch64_insn_prfm_target target,
···
 		 * Compute the rotation to get a continuous set of
 		 * ones, with the first bit set at position 0
 		 */
-		ror = fls(~imm);
+		ror = fls64(~imm);
 	}
 
 	/*
···
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, Rd);
 	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
 	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
+}
+
+u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
+{
+	u32 opt;
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_MB_SY:
+		opt = 0xf;
+		break;
+	case AARCH64_INSN_MB_ST:
+		opt = 0xe;
+		break;
+	case AARCH64_INSN_MB_LD:
+		opt = 0xd;
+		break;
+	case AARCH64_INSN_MB_ISH:
+		opt = 0xb;
+		break;
+	case AARCH64_INSN_MB_ISHST:
+		opt = 0xa;
+		break;
+	case AARCH64_INSN_MB_ISHLD:
+		opt = 0x9;
+		break;
+	case AARCH64_INSN_MB_NSH:
+		opt = 0x7;
+		break;
+	case AARCH64_INSN_MB_NSHST:
+		opt = 0x6;
+		break;
+	case AARCH64_INSN_MB_NSHLD:
+		opt = 0x5;
+		break;
+	default:
+		pr_err("%s: unknown dmb type %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_get_dmb_value();
+	insn &= ~GENMASK(11, 8);
+	insn |= (opt << 8);
+
+	return insn;
 }
+41 -3
arch/arm64/net/bpf_jit.h
···
 /* [Rn] = Rt; (atomic) Rs = [state] */
 #define A64_STXR(sf, Rt, Rn, Rs) \
 	A64_LSX(sf, Rt, Rn, Rs, STORE_EX)
+/* [Rn] = Rt (store release); (atomic) Rs = [state] */
+#define A64_STLXR(sf, Rt, Rn, Rs) \
+	aarch64_insn_gen_load_store_ex(Rt, Rn, Rs, A64_SIZE(sf), \
+				       AARCH64_INSN_LDST_STORE_REL_EX)
 
-/* LSE atomics */
-#define A64_STADD(sf, Rn, Rs) \
-	aarch64_insn_gen_stadd(Rn, Rs, A64_SIZE(sf))
+/*
+ * LSE atomics
+ *
+ * ST{ADD,CLR,SET,EOR} is simply encoded as an alias for
+ * LD{ADD,CLR,SET,EOR} with XZR as the destination register.
+ */
+#define A64_ST_OP(sf, Rn, Rs, op) \
+	aarch64_insn_gen_atomic_ld_op(A64_ZR, Rn, Rs, \
+		A64_SIZE(sf), AARCH64_INSN_MEM_ATOMIC_##op, \
+		AARCH64_INSN_MEM_ORDER_NONE)
+/* [Rn] <op>= Rs */
+#define A64_STADD(sf, Rn, Rs) A64_ST_OP(sf, Rn, Rs, ADD)
+#define A64_STCLR(sf, Rn, Rs) A64_ST_OP(sf, Rn, Rs, CLR)
+#define A64_STEOR(sf, Rn, Rs) A64_ST_OP(sf, Rn, Rs, EOR)
+#define A64_STSET(sf, Rn, Rs) A64_ST_OP(sf, Rn, Rs, SET)
+
+#define A64_LD_OP_AL(sf, Rt, Rn, Rs, op) \
+	aarch64_insn_gen_atomic_ld_op(Rt, Rn, Rs, \
+		A64_SIZE(sf), AARCH64_INSN_MEM_ATOMIC_##op, \
+		AARCH64_INSN_MEM_ORDER_ACQREL)
+/* Rt = [Rn] (load acquire); [Rn] <op>= Rs (store release) */
+#define A64_LDADDAL(sf, Rt, Rn, Rs) A64_LD_OP_AL(sf, Rt, Rn, Rs, ADD)
+#define A64_LDCLRAL(sf, Rt, Rn, Rs) A64_LD_OP_AL(sf, Rt, Rn, Rs, CLR)
+#define A64_LDEORAL(sf, Rt, Rn, Rs) A64_LD_OP_AL(sf, Rt, Rn, Rs, EOR)
+#define A64_LDSETAL(sf, Rt, Rn, Rs) A64_LD_OP_AL(sf, Rt, Rn, Rs, SET)
+/* Rt = [Rn] (load acquire); [Rn] = Rs (store release) */
+#define A64_SWPAL(sf, Rt, Rn, Rs) A64_LD_OP_AL(sf, Rt, Rn, Rs, SWP)
+/* Rs = CAS(Rn, Rs, Rt) (load acquire & store release) */
+#define A64_CASAL(sf, Rt, Rn, Rs) \
+	aarch64_insn_gen_cas(Rt, Rn, Rs, A64_SIZE(sf), \
+		AARCH64_INSN_MEM_ORDER_ACQREL)
 
 /* Add/subtract (immediate) */
 #define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \
···
 #define A64_ANDS(sf, Rd, Rn, Rm) A64_LOGIC_SREG(sf, Rd, Rn, Rm, AND_SETFLAGS)
 /* Rn & Rm; set condition flags */
 #define A64_TST(sf, Rn, Rm) A64_ANDS(sf, A64_ZR, Rn, Rm)
+/* Rd = ~Rm (alias of ORN with A64_ZR as Rn) */
+#define A64_MVN(sf, Rd, Rm)  \
+	A64_LOGIC_SREG(sf, Rd, A64_ZR, Rm, ORN)
 
 /* Logical (immediate) */
 #define A64_LOGIC_IMM(sf, Rd, Rn, imm, type) ({ \
···
 #define A64_BTI_C  A64_HINT(AARCH64_INSN_HINT_BTIC)
 #define A64_BTI_J  A64_HINT(AARCH64_INSN_HINT_BTIJ)
 #define A64_BTI_JC A64_HINT(AARCH64_INSN_HINT_BTIJC)
+
+/* DMB */
+#define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH)
 
 #endif /* _BPF_JIT_H */
+195 -46
arch/arm64/net/bpf_jit_comp.c
···
 #define TCALL_CNT (MAX_BPF_JIT_REG + 2)
 #define TMP_REG_3 (MAX_BPF_JIT_REG + 3)
 
+#define check_imm(bits, imm) do {				\
+	if ((((imm) > 0) && ((imm) >> (bits))) ||		\
+	    (((imm) < 0) && (~(imm) >> (bits)))) {		\
+		pr_info("[%2d] imm=%d(0x%x) out of range\n",	\
+			i, imm, imm);				\
+		return -EINVAL;					\
+	}							\
+} while (0)
+#define check_imm19(imm) check_imm(19, imm)
+#define check_imm26(imm) check_imm(26, imm)
+
 /* Map BPF registers to A64 registers */
 static const int bpf2a64[] = {
 	/* return value from in-kernel function, and exit value from eBPF */
···
 #undef jmp_offset
 }
 
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
+{
+	const u8 code = insn->code;
+	const u8 dst = bpf2a64[insn->dst_reg];
+	const u8 src = bpf2a64[insn->src_reg];
+	const u8 tmp = bpf2a64[TMP_REG_1];
+	const u8 tmp2 = bpf2a64[TMP_REG_2];
+	const bool isdw = BPF_SIZE(code) == BPF_DW;
+	const s16 off = insn->off;
+	u8 reg;
+
+	if (!off) {
+		reg = dst;
+	} else {
+		emit_a64_mov_i(1, tmp, off, ctx);
+		emit(A64_ADD(1, tmp, tmp, dst), ctx);
+		reg = tmp;
+	}
+
+	switch (insn->imm) {
+	/* lock *(u32/u64 *)(dst_reg + off) <op>= src_reg */
+	case BPF_ADD:
+		emit(A64_STADD(isdw, reg, src), ctx);
+		break;
+	case BPF_AND:
+		emit(A64_MVN(isdw, tmp2, src), ctx);
+		emit(A64_STCLR(isdw, reg, tmp2), ctx);
+		break;
+	case BPF_OR:
+		emit(A64_STSET(isdw, reg, src), ctx);
+		break;
+	case BPF_XOR:
+		emit(A64_STEOR(isdw, reg, src), ctx);
+		break;
+	/* src_reg = atomic_fetch_<op>(dst_reg + off, src_reg) */
+	case BPF_ADD | BPF_FETCH:
+		emit(A64_LDADDAL(isdw, src, reg, src), ctx);
+		break;
+	case BPF_AND | BPF_FETCH:
+		emit(A64_MVN(isdw, tmp2, src), ctx);
+		emit(A64_LDCLRAL(isdw, src, reg, tmp2), ctx);
+		break;
+	case BPF_OR | BPF_FETCH:
+		emit(A64_LDSETAL(isdw, src, reg, src), ctx);
+		break;
+	case BPF_XOR | BPF_FETCH:
+		emit(A64_LDEORAL(isdw, src, reg, src), ctx);
+		break;
+	/* src_reg = atomic_xchg(dst_reg + off, src_reg); */
+	case BPF_XCHG:
+		emit(A64_SWPAL(isdw, src, reg, src), ctx);
+		break;
+	/* r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg); */
+	case BPF_CMPXCHG:
+		emit(A64_CASAL(isdw, src, reg, bpf2a64[BPF_REG_0]), ctx);
+		break;
+	default:
+		pr_err_once("unknown atomic op code %02x\n", insn->imm);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+#else
+static inline int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
+{
+	return -EINVAL;
+}
+#endif
+
+static int emit_ll_sc_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
+{
+	const u8 code = insn->code;
+	const u8 dst = bpf2a64[insn->dst_reg];
+	const u8 src = bpf2a64[insn->src_reg];
+	const u8 tmp = bpf2a64[TMP_REG_1];
+	const u8 tmp2 = bpf2a64[TMP_REG_2];
+	const u8 tmp3 = bpf2a64[TMP_REG_3];
+	const int i = insn - ctx->prog->insnsi;
+	const s32 imm = insn->imm;
+	const s16 off = insn->off;
+	const bool isdw = BPF_SIZE(code) == BPF_DW;
+	u8 reg;
+	s32 jmp_offset;
+
+	if (!off) {
+		reg = dst;
+	} else {
+		emit_a64_mov_i(1, tmp, off, ctx);
+		emit(A64_ADD(1, tmp, tmp, dst), ctx);
+		reg = tmp;
+	}
+
+	if (imm == BPF_ADD || imm == BPF_AND ||
+	    imm == BPF_OR || imm == BPF_XOR) {
+		/* lock *(u32/u64 *)(dst_reg + off) <op>= src_reg */
+		emit(A64_LDXR(isdw, tmp2, reg), ctx);
+		if (imm == BPF_ADD)
+			emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
+		else if (imm == BPF_AND)
+			emit(A64_AND(isdw, tmp2, tmp2, src), ctx);
+		else if (imm == BPF_OR)
+			emit(A64_ORR(isdw, tmp2, tmp2, src), ctx);
+		else
+			emit(A64_EOR(isdw, tmp2, tmp2, src), ctx);
+		emit(A64_STXR(isdw, tmp2, reg, tmp3), ctx);
+		jmp_offset = -3;
+		check_imm19(jmp_offset);
+		emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
+	} else if (imm == (BPF_ADD | BPF_FETCH) ||
+		   imm == (BPF_AND | BPF_FETCH) ||
+		   imm == (BPF_OR | BPF_FETCH) ||
+		   imm == (BPF_XOR | BPF_FETCH)) {
+		/* src_reg = atomic_fetch_<op>(dst_reg + off, src_reg) */
+		const u8 ax = bpf2a64[BPF_REG_AX];
+
+		emit(A64_MOV(isdw, ax, src), ctx);
+		emit(A64_LDXR(isdw, src, reg), ctx);
+		if (imm == (BPF_ADD | BPF_FETCH))
+			emit(A64_ADD(isdw, tmp2, src, ax), ctx);
+		else if (imm == (BPF_AND | BPF_FETCH))
+			emit(A64_AND(isdw, tmp2, src, ax), ctx);
+		else if (imm == (BPF_OR | BPF_FETCH))
+			emit(A64_ORR(isdw, tmp2, src, ax), ctx);
+		else
+			emit(A64_EOR(isdw, tmp2, src, ax), ctx);
+		emit(A64_STLXR(isdw, tmp2, reg, tmp3), ctx);
+		jmp_offset = -3;
+		check_imm19(jmp_offset);
+		emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
+		emit(A64_DMB_ISH, ctx);
+	} else if (imm == BPF_XCHG) {
+		/* src_reg = atomic_xchg(dst_reg + off, src_reg); */
+		emit(A64_MOV(isdw, tmp2, src), ctx);
+		emit(A64_LDXR(isdw, src, reg), ctx);
+		emit(A64_STLXR(isdw, tmp2, reg, tmp3), ctx);
+		jmp_offset = -2;
+		check_imm19(jmp_offset);
+		emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
+		emit(A64_DMB_ISH, ctx);
+	} else if (imm == BPF_CMPXCHG) {
+		/* r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg); */
+		const u8 r0 = bpf2a64[BPF_REG_0];
+
+		emit(A64_MOV(isdw, tmp2, r0), ctx);
+		emit(A64_LDXR(isdw, r0, reg), ctx);
+		emit(A64_EOR(isdw, tmp3, r0, tmp2), ctx);
+		jmp_offset = 4;
+		check_imm19(jmp_offset);
+		emit(A64_CBNZ(isdw, tmp3, jmp_offset), ctx);
+		emit(A64_STLXR(isdw, src, reg, tmp3), ctx);
+		jmp_offset = -4;
+		check_imm19(jmp_offset);
+		emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
+		emit(A64_DMB_ISH, ctx);
+	} else {
+		pr_err_once("unknown atomic op code %02x\n", imm);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static void build_epilogue(struct jit_ctx *ctx)
 {
 	const u8 r0 = bpf2a64[BPF_REG_0];
···
 	const u8 src = bpf2a64[insn->src_reg];
 	const u8 tmp = bpf2a64[TMP_REG_1];
 	const u8 tmp2 = bpf2a64[TMP_REG_2];
-	const u8 tmp3 = bpf2a64[TMP_REG_3];
 	const s16 off = insn->off;
 	const s32 imm = insn->imm;
 	const int i = insn - ctx->prog->insnsi;
 	const bool is64 = BPF_CLASS(code) == BPF_ALU64 ||
 			  BPF_CLASS(code) == BPF_JMP;
-	const bool isdw = BPF_SIZE(code) == BPF_DW;
-	u8 jmp_cond, reg;
+	u8 jmp_cond;
 	s32 jmp_offset;
 	u32 a64_insn;
 	int ret;
-
-#define check_imm(bits, imm) do {				\
-	if ((((imm) > 0) && ((imm) >> (bits))) ||		\
-	    (((imm) < 0) && (~(imm) >> (bits)))) {		\
-		pr_info("[%2d] imm=%d(0x%x) out of range\n",	\
-			i, imm, imm);				\
-		return -EINVAL;					\
-	}							\
-} while (0)
-#define check_imm19(imm) check_imm(19, imm)
-#define check_imm26(imm) check_imm(26, imm)
 
 	switch (code) {
 	/* dst = src */
···
 
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (insn->imm != BPF_ADD) {
-			pr_err_once("unknown atomic op code %02x\n", insn->imm);
-			return -EINVAL;
-		}
-
-		/* STX XADD: lock *(u32 *)(dst + off) += src
-		 * and
-		 * STX XADD: lock *(u64 *)(dst + off) += src
-		 */
-
-		if (!off) {
-			reg = dst;
-		} else {
-			emit_a64_mov_i(1, tmp, off, ctx);
-			emit(A64_ADD(1, tmp, tmp, dst), ctx);
-			reg = tmp;
-		}
-		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS)) {
-			emit(A64_STADD(isdw, reg, src), ctx);
-		} else {
-			emit(A64_LDXR(isdw, tmp2, reg), ctx);
-			emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
-			emit(A64_STXR(isdw, tmp2, reg, tmp3), ctx);
-			jmp_offset = -3;
-			check_imm19(jmp_offset);
-			emit(A64_CBNZ(0, tmp3, jmp_offset), ctx);
-		}
+		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+			ret = emit_lse_atomic(insn, ctx);
+		else
+			ret = emit_ll_sc_atomic(insn, ctx);
+		if (ret)
+			return ret;
 		break;
 
 	default:
···
 		goto out_off;
 	}
 
-	/* 1. Initial fake pass to compute ctx->idx. */
-
-	/* Fake pass to fill in ctx->offset. */
-	if (build_body(&ctx, extra_pass)) {
+	/*
+	 * 1. Initial fake pass to compute ctx->idx and ctx->offset.
+	 *
+	 * BPF line info needs ctx->offset[i] to be the offset of
+	 * instruction[i] in jited image, so build prologue first.
+	 */
+	if (build_prologue(&ctx, was_classic)) {
 		prog = orig_prog;
 		goto out_off;
 	}
 
-	if (build_prologue(&ctx, was_classic)) {
+	if (build_body(&ctx, extra_pass)) {
 		prog = orig_prog;
 		goto out_off;
 	}
···
 	prog->jited_len = prog_size;
 
 	if (!prog->is_func || extra_pass) {
+		int i;
+
+		/* offset[prog->len] is the size of program */
+		for (i = 0; i <= prog->len; i++)
+			ctx.offset[i] *= AARCH64_INSN_SIZE;
 		bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
 out_off:
 		kfree(ctx.offset);
+1 -1
arch/x86/Kconfig
···
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
-	select HAVE_ARCH_HUGE_VMALLOC		if HAVE_ARCH_HUGE_VMAP
+	select HAVE_ARCH_HUGE_VMALLOC		if X86_64
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
+4 -1
arch/x86/net/bpf_jit_comp.c
···
 	if (proglen <= 0) {
 out_image:
 		image = NULL;
-		if (header)
+		if (header) {
+			bpf_arch_text_copy(&header->size, &rw_header->size,
+					   sizeof(rw_header->size));
 			bpf_jit_binary_pack_free(header, rw_header);
+		}
 		prog = orig_prog;
 		goto out_addrs;
 	}
+4
kernel/bpf/Kconfig
···
 	  Enables BPF JIT and removes BPF interpreter to avoid speculative
 	  execution of BPF instructions by the interpreter.
 
+	  When CONFIG_BPF_JIT_ALWAYS_ON is enabled, /proc/sys/net/core/bpf_jit_enable
+	  is permanently set to 1 and setting any other value than that will
+	  return failure.
+
 config BPF_JIT_DEFAULT_ON
 	def_bool ARCH_WANT_DEFAULT_BPF_JIT || BPF_JIT_ALWAYS_ON
 	depends on HAVE_EBPF_JIT && BPF_JIT
+1 -1
kernel/bpf/bpf_local_storage.c
···
 	 * will be done by the caller.
 	 *
 	 * Although the unlock will be done under
-	 * rcu_read_lock(), it is more intutivie to
+	 * rcu_read_lock(), it is more intuitive to
 	 * read if the freeing of the storage is done
 	 * after the raw_spin_unlock_bh(&local_storage->lock).
 	 *
+6 -5
kernel/bpf/btf.c
···
-/* SPDX-License-Identifier: GPL-2.0 */
+// SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2018 Facebook */
 
 #include <uapi/linux/btf.h>
···
 	 *
 	 * We now need to continue from the last-resolved-ptr to
 	 * ensure the last-resolved-ptr will not referring back to
-	 * the currenct ptr (t).
+	 * the current ptr (t).
 	 */
 	if (btf_type_is_modifier(next_type)) {
 		const struct btf_type *resolved_type;
···
 
 	btf_type_show(btf, type_id, obj, (struct btf_show *)&ssnprintf);
 
-	/* If we encontered an error, return it. */
+	/* If we encountered an error, return it. */
 	if (ssnprintf.show.state.status)
 		return ssnprintf.show.state.status;
 
···
 		pr_warn("failed to validate module [%s] BTF: %ld\n",
 			mod->name, PTR_ERR(btf));
 		kfree(btf_mod);
-		err = PTR_ERR(btf);
+		if (!IS_ENABLED(CONFIG_MODULE_ALLOW_BTF_MISMATCH))
+			err = PTR_ERR(btf);
 		goto out;
 	}
 	err = btf_alloc_id(btf);
···
 			      const struct btf_kfunc_id_set *kset)
 {
 	bool vmlinux_set = !btf_is_module(btf);
-	int type, ret;
+	int type, ret = 0;
 
 	for (type = 0; type < ARRAY_SIZE(kset->sets); type++) {
 		if (!kset->sets[type])
+4 -4
kernel/bpf/cgroup.c
···
  * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
  * @sk: The socket sending or receiving traffic
  * @skb: The skb that is being sent or received
- * @type: The type of program to be exectuted
+ * @type: The type of program to be executed
  *
  * If no socket is passed, or the socket is not of type INET or INET6,
  * this function does nothing and returns 0.
···
 /**
  * __cgroup_bpf_run_filter_sk() - Run a program on a sock
  * @sk: sock structure to manipulate
- * @type: The type of program to be exectuted
+ * @type: The type of program to be executed
  *
  * socket is passed is expected to be of type INET or INET6.
  *
···
  *                                       provided by user sockaddr
  * @sk: sock struct that will use sockaddr
  * @uaddr: sockaddr struct provided by user
- * @type: The type of program to be exectuted
+ * @type: The type of program to be executed
  * @t_ctx: Pointer to attach type specific context
  * @flags: Pointer to u32 which contains higher bits of BPF program
  *         return value (OR'ed together).
···
  * @sock_ops: bpf_sock_ops_kern struct to pass to program. Contains
  *            sk with connection information (IP addresses, etc.) May not contain
  *            cgroup info if it is a req sock.
- * @type: The type of program to be exectuted
+ * @type: The type of program to be executed
  *
  * socket passed is expected to be of type INET or INET6.
  *
+6 -3
kernel/bpf/core.c
··· 1112 1112 * 1) when the program is freed after; 1113 1113 * 2) when the JIT engine fails (before bpf_jit_binary_pack_finalize). 1114 1114 * For case 2), we need to free both the RO memory and the RW buffer. 1115 - * Also, ro_header->size in 2) is not properly set yet, so rw_header->size 1116 - * is used for uncharge. 1115 + * 1116 + * bpf_jit_binary_pack_free requires proper ro_header->size. However, 1117 + * bpf_jit_binary_pack_alloc does not set it. Therefore, ro_header->size 1118 + * must be set with either bpf_jit_binary_pack_finalize (normal path) or 1119 + * bpf_arch_text_copy (when jit fails). 1117 1120 */ 1118 1121 void bpf_jit_binary_pack_free(struct bpf_binary_header *ro_header, 1119 1122 struct bpf_binary_header *rw_header) 1120 1123 { 1121 - u32 size = rw_header ? rw_header->size : ro_header->size; 1124 + u32 size = ro_header->size; 1122 1125 1123 1126 bpf_prog_pack_free(ro_header); 1124 1127 kvfree(rw_header);
+1 -1
kernel/bpf/hashtab.c
··· 1636 1636 value_size = size * num_possible_cpus(); 1637 1637 total = 0; 1638 1638 /* while experimenting with hash tables with sizes ranging from 10 to 1639 - * 1000, it was observed that a bucket can have upto 5 entries. 1639 + * 1000, it was observed that a bucket can have up to 5 entries. 1640 1640 */ 1641 1641 bucket_size = 5; 1642 1642
+1 -1
kernel/bpf/helpers.c
··· 1093 1093 struct bpf_timer_kern { 1094 1094 struct bpf_hrtimer *timer; 1095 1095 /* bpf_spin_lock is used here instead of spinlock_t to make 1096 - * sure that it always fits into space resereved by struct bpf_timer 1096 + * sure that it always fits into space reserved by struct bpf_timer 1097 1097 * regardless of LOCKDEP and spinlock debug flags. 1098 1098 */ 1099 1099 struct bpf_spin_lock lock;
+1 -1
kernel/bpf/local_storage.c
··· 1 - //SPDX-License-Identifier: GPL-2.0 1 + // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/bpf-cgroup.h> 3 3 #include <linux/bpf.h> 4 4 #include <linux/bpf_local_storage.h>
+7
kernel/bpf/preload/bpf_preload_kern.c
··· 54 54 err = PTR_ERR(progs_link); 55 55 goto out; 56 56 } 57 + /* Avoid taking over stdin/stdout/stderr of init process. Zeroing out 58 + * makes skel_closenz() a no-op later in iterators_bpf__destroy(). 59 + */ 60 + close_fd(skel->links.dump_bpf_map_fd); 61 + skel->links.dump_bpf_map_fd = 0; 62 + close_fd(skel->links.dump_bpf_prog_fd); 63 + skel->links.dump_bpf_prog_fd = 0; 57 64 return 0; 58 65 out: 59 66 free_links_and_skel();
+1 -1
kernel/bpf/reuseport_array.c
··· 143 143 144 144 /* 145 145 * Once reaching here, all sk->sk_user_data is not 146 - * referenceing this "array". "array" can be freed now. 146 + * referencing this "array". "array" can be freed now. 147 147 */ 148 148 bpf_map_area_free(array); 149 149 }
+11 -1
kernel/bpf/stackmap.c
··· 132 132 int i; 133 133 struct mmap_unlock_irq_work *work = NULL; 134 134 bool irq_work_busy = bpf_mmap_unlock_get_irq_work(&work); 135 - struct vm_area_struct *vma; 135 + struct vm_area_struct *vma, *prev_vma = NULL; 136 + const char *prev_build_id; 136 137 137 138 /* If the irq_work is in use, fall back to report ips. Same 138 139 * fallback is used for kernel stack (!user) on a stackmap with ··· 151 150 } 152 151 153 152 for (i = 0; i < trace_nr; i++) { 153 + if (range_in_vma(prev_vma, ips[i], ips[i])) { 154 + vma = prev_vma; 155 + memcpy(id_offs[i].build_id, prev_build_id, 156 + BUILD_ID_SIZE_MAX); 157 + goto build_id_valid; 158 + } 154 159 vma = find_vma(current->mm, ips[i]); 155 160 if (!vma || build_id_parse(vma, id_offs[i].build_id, NULL)) { 156 161 /* per entry fall back to ips */ ··· 165 158 memset(id_offs[i].build_id, 0, BUILD_ID_SIZE_MAX); 166 159 continue; 167 160 } 161 + build_id_valid: 168 162 id_offs[i].offset = (vma->vm_pgoff << PAGE_SHIFT) + ips[i] 169 163 - vma->vm_start; 170 164 id_offs[i].status = BPF_STACK_BUILD_ID_VALID; 165 + prev_vma = vma; 166 + prev_build_id = id_offs[i].build_id; 171 167 } 172 168 bpf_mmap_unlock_mm(work, current->mm); 173 169 }
+3 -2
kernel/bpf/syscall.c
··· 1352 1352 err = map->ops->map_delete_elem(map, key); 1353 1353 rcu_read_unlock(); 1354 1354 bpf_enable_instrumentation(); 1355 - maybe_wait_bpf_programs(map); 1356 1355 if (err) 1357 1356 break; 1358 1357 cond_resched(); ··· 1360 1361 err = -EFAULT; 1361 1362 1362 1363 kvfree(key); 1364 + 1365 + maybe_wait_bpf_programs(map); 1363 1366 return err; 1364 1367 } 1365 1368 ··· 2566 2565 * pre-allocated resources are to be freed with bpf_cleanup() call. All the 2567 2566 * transient state is passed around in struct bpf_link_primer. 2568 2567 * This is preferred way to create and initialize bpf_link, especially when 2569 - * there are complicated and expensive operations inbetween creating bpf_link 2568 + * there are complicated and expensive operations in between creating bpf_link 2570 2569 * itself and attaching it to BPF hook. By using bpf_link_prime() and 2571 2570 * bpf_link_settle() kernel code using bpf_link doesn't have to perform 2572 2571 * expensive (and potentially failing) roll back operations in a rare case
+1 -1
kernel/bpf/trampoline.c
··· 45 45 46 46 set_vm_flush_reset_perms(image); 47 47 /* Keep image as writeable. The alternative is to keep flipping ro/rw 48 - * everytime new program is attached or detached. 48 + * every time new program is attached or detached. 49 49 */ 50 50 set_memory_x((long)image, 1); 51 51 return image;
+35 -29
kernel/bpf/verifier.c
··· 539 539 char postfix[16] = {0}, prefix[32] = {0}; 540 540 static const char * const str[] = { 541 541 [NOT_INIT] = "?", 542 - [SCALAR_VALUE] = "inv", 542 + [SCALAR_VALUE] = "scalar", 543 543 [PTR_TO_CTX] = "ctx", 544 544 [CONST_PTR_TO_MAP] = "map_ptr", 545 545 [PTR_TO_MAP_VALUE] = "map_value", ··· 685 685 continue; 686 686 verbose(env, " R%d", i); 687 687 print_liveness(env, reg->live); 688 - verbose(env, "=%s", reg_type_str(env, t)); 688 + verbose(env, "="); 689 689 if (t == SCALAR_VALUE && reg->precise) 690 690 verbose(env, "P"); 691 691 if ((t == SCALAR_VALUE || t == PTR_TO_STACK) && 692 692 tnum_is_const(reg->var_off)) { 693 693 /* reg->off should be 0 for SCALAR_VALUE */ 694 + verbose(env, "%s", t == SCALAR_VALUE ? "" : reg_type_str(env, t)); 694 695 verbose(env, "%lld", reg->var_off.value + reg->off); 695 696 } else { 697 + const char *sep = ""; 698 + 699 + verbose(env, "%s", reg_type_str(env, t)); 696 700 if (base_type(t) == PTR_TO_BTF_ID || 697 701 base_type(t) == PTR_TO_PERCPU_BTF_ID) 698 702 verbose(env, "%s", kernel_type_name(reg->btf, reg->btf_id)); 699 - verbose(env, "(id=%d", reg->id); 700 - if (reg_type_may_be_refcounted_or_null(t)) 701 - verbose(env, ",ref_obj_id=%d", reg->ref_obj_id); 703 + verbose(env, "("); 704 + /* 705 + * _a stands for append, was shortened to avoid multiline statements below. 706 + * This macro is used to output a comma separated list of attributes. 707 + */ 708 + #define verbose_a(fmt, ...) 
({ verbose(env, "%s" fmt, sep, __VA_ARGS__); sep = ","; }) 709 + 710 + if (reg->id) 711 + verbose_a("id=%d", reg->id); 712 + if (reg_type_may_be_refcounted_or_null(t) && reg->ref_obj_id) 713 + verbose_a("ref_obj_id=%d", reg->ref_obj_id); 702 714 if (t != SCALAR_VALUE) 703 - verbose(env, ",off=%d", reg->off); 715 + verbose_a("off=%d", reg->off); 704 716 if (type_is_pkt_pointer(t)) 705 - verbose(env, ",r=%d", reg->range); 717 + verbose_a("r=%d", reg->range); 706 718 else if (base_type(t) == CONST_PTR_TO_MAP || 707 719 base_type(t) == PTR_TO_MAP_KEY || 708 720 base_type(t) == PTR_TO_MAP_VALUE) 709 - verbose(env, ",ks=%d,vs=%d", 710 - reg->map_ptr->key_size, 711 - reg->map_ptr->value_size); 721 + verbose_a("ks=%d,vs=%d", 722 + reg->map_ptr->key_size, 723 + reg->map_ptr->value_size); 712 724 if (tnum_is_const(reg->var_off)) { 713 725 /* Typically an immediate SCALAR_VALUE, but 714 726 * could be a pointer whose offset is too big 715 727 * for reg->off 716 728 */ 717 - verbose(env, ",imm=%llx", reg->var_off.value); 729 + verbose_a("imm=%llx", reg->var_off.value); 718 730 } else { 719 731 if (reg->smin_value != reg->umin_value && 720 732 reg->smin_value != S64_MIN) 721 - verbose(env, ",smin_value=%lld", 722 - (long long)reg->smin_value); 733 + verbose_a("smin=%lld", (long long)reg->smin_value); 723 734 if (reg->smax_value != reg->umax_value && 724 735 reg->smax_value != S64_MAX) 725 - verbose(env, ",smax_value=%lld", 726 - (long long)reg->smax_value); 736 + verbose_a("smax=%lld", (long long)reg->smax_value); 727 737 if (reg->umin_value != 0) 728 - verbose(env, ",umin_value=%llu", 729 - (unsigned long long)reg->umin_value); 738 + verbose_a("umin=%llu", (unsigned long long)reg->umin_value); 730 739 if (reg->umax_value != U64_MAX) 731 - verbose(env, ",umax_value=%llu", 732 - (unsigned long long)reg->umax_value); 740 + verbose_a("umax=%llu", (unsigned long long)reg->umax_value); 733 741 if (!tnum_is_unknown(reg->var_off)) { 734 742 char tn_buf[48]; 735 743 736 744 
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); 737 - verbose(env, ",var_off=%s", tn_buf); 745 + verbose_a("var_off=%s", tn_buf); 738 746 } 739 747 if (reg->s32_min_value != reg->smin_value && 740 748 reg->s32_min_value != S32_MIN) 741 - verbose(env, ",s32_min_value=%d", 742 - (int)(reg->s32_min_value)); 749 + verbose_a("s32_min=%d", (int)(reg->s32_min_value)); 743 750 if (reg->s32_max_value != reg->smax_value && 744 751 reg->s32_max_value != S32_MAX) 745 - verbose(env, ",s32_max_value=%d", 746 - (int)(reg->s32_max_value)); 752 + verbose_a("s32_max=%d", (int)(reg->s32_max_value)); 747 753 if (reg->u32_min_value != reg->umin_value && 748 754 reg->u32_min_value != U32_MIN) 749 - verbose(env, ",u32_min_value=%d", 750 - (int)(reg->u32_min_value)); 755 + verbose_a("u32_min=%d", (int)(reg->u32_min_value)); 751 756 if (reg->u32_max_value != reg->umax_value && 752 757 reg->u32_max_value != U32_MAX) 753 - verbose(env, ",u32_max_value=%d", 754 - (int)(reg->u32_max_value)); 758 + verbose_a("u32_max=%d", (int)(reg->u32_max_value)); 755 759 } 760 + #undef verbose_a 761 + 756 762 verbose(env, ")"); 757 763 } 758 764 } ··· 783 777 if (is_spilled_reg(&state->stack[i])) { 784 778 reg = &state->stack[i].spilled_ptr; 785 779 t = reg->type; 786 - verbose(env, "=%s", reg_type_str(env, t)); 780 + verbose(env, "=%s", t == SCALAR_VALUE ? "" : reg_type_str(env, t)); 787 781 if (t == SCALAR_VALUE && reg->precise) 788 782 verbose(env, "P"); 789 783 if (t == SCALAR_VALUE && tnum_is_const(reg->var_off))
+10
lib/Kconfig.debug
··· 339 339 help 340 340 Generate compact split BTF type information for kernel modules. 341 341 342 + config MODULE_ALLOW_BTF_MISMATCH 343 + bool "Allow loading modules with non-matching BTF type info" 344 + depends on DEBUG_INFO_BTF_MODULES 345 + help 346 + For modules whose split BTF does not match vmlinux, load without 347 + BTF rather than refusing to load. The default behavior with 348 + module BTF enabled is to reject modules with such mismatches; 349 + this option will still load module BTF where possible but ignore 350 + it when a mismatch is found. 351 + 342 352 config GDB_SCRIPTS 343 353 bool "Provide GDB scripts for kernel debugging" 344 354 help
+5
net/bpf/test_run.c
··· 150 150 if (data_out) { 151 151 int len = sinfo ? copy_size - sinfo->xdp_frags_size : copy_size; 152 152 153 + if (len < 0) { 154 + err = -ENOSPC; 155 + goto out; 156 + } 157 + 153 158 if (copy_to_user(data_out, data, len)) 154 159 goto out; 155 160
+3
scripts/pahole-flags.sh
··· 16 16 if [ "${pahole_ver}" -ge "121" ]; then 17 17 extra_paholeopt="${extra_paholeopt} --btf_gen_floats" 18 18 fi 19 + if [ "${pahole_ver}" -ge "122" ]; then 20 + extra_paholeopt="${extra_paholeopt} -j" 21 + fi 19 22 20 23 echo ${extra_paholeopt}
+8 -12
tools/bpf/bpftool/Makefile
··· 18 18 ifneq ($(OUTPUT),) 19 19 _OUTPUT := $(OUTPUT) 20 20 else 21 - _OUTPUT := $(CURDIR) 21 + _OUTPUT := $(CURDIR)/ 22 22 endif 23 - BOOTSTRAP_OUTPUT := $(_OUTPUT)/bootstrap/ 23 + BOOTSTRAP_OUTPUT := $(_OUTPUT)bootstrap/ 24 24 25 - LIBBPF_OUTPUT := $(_OUTPUT)/libbpf/ 25 + LIBBPF_OUTPUT := $(_OUTPUT)libbpf/ 26 26 LIBBPF_DESTDIR := $(LIBBPF_OUTPUT) 27 - LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)/include 27 + LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)include 28 28 LIBBPF_HDRS_DIR := $(LIBBPF_INCLUDE)/bpf 29 29 LIBBPF := $(LIBBPF_OUTPUT)libbpf.a 30 30 31 31 LIBBPF_BOOTSTRAP_OUTPUT := $(BOOTSTRAP_OUTPUT)libbpf/ 32 32 LIBBPF_BOOTSTRAP_DESTDIR := $(LIBBPF_BOOTSTRAP_OUTPUT) 33 - LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)/include 33 + LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)include 34 34 LIBBPF_BOOTSTRAP_HDRS_DIR := $(LIBBPF_BOOTSTRAP_INCLUDE)/bpf 35 35 LIBBPF_BOOTSTRAP := $(LIBBPF_BOOTSTRAP_OUTPUT)libbpf.a 36 36 ··· 44 44 45 45 $(LIBBPF): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_OUTPUT) 46 46 $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_OUTPUT) \ 47 - DESTDIR=$(LIBBPF_DESTDIR) prefix= $(LIBBPF) install_headers 47 + DESTDIR=$(LIBBPF_DESTDIR:/=) prefix= $(LIBBPF) install_headers 48 48 49 49 $(LIBBPF_INTERNAL_HDRS): $(LIBBPF_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_HDRS_DIR) 50 50 $(call QUIET_INSTALL, $@) ··· 52 52 53 53 $(LIBBPF_BOOTSTRAP): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_BOOTSTRAP_OUTPUT) 54 54 $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_BOOTSTRAP_OUTPUT) \ 55 - DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR) prefix= \ 55 + DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR:/=) prefix= \ 56 56 ARCH= CROSS_COMPILE= CC=$(HOSTCC) LD=$(HOSTLD) $@ install_headers 57 57 58 58 $(LIBBPF_BOOTSTRAP_INTERNAL_HDRS): $(LIBBPF_BOOTSTRAP_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_BOOTSTRAP_HDRS_DIR) ··· 93 93 RM ?= rm -f 94 94 95 95 FEATURE_USER = .bpftool 96 - FEATURE_TESTS = libbfd disassembler-four-args reallocarray zlib libcap \ 96 + FEATURE_TESTS = 
libbfd disassembler-four-args zlib libcap \ 97 97 clang-bpf-co-re 98 98 FEATURE_DISPLAY = libbfd disassembler-four-args zlib libcap \ 99 99 clang-bpf-co-re ··· 116 116 117 117 ifeq ($(feature-disassembler-four-args), 1) 118 118 CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE 119 - endif 120 - 121 - ifeq ($(feature-reallocarray), 0) 122 - CFLAGS += -DCOMPAT_NEED_REALLOCARRAY 123 119 endif 124 120 125 121 LIBS = $(LIBBPF) -lelf -lz
+105 -22
tools/bpf/bpftool/gen.c
··· 209 209 return 0; 210 210 } 211 211 212 + static const struct btf_type *find_type_for_map(struct btf *btf, const char *map_ident) 213 + { 214 + int n = btf__type_cnt(btf), i; 215 + char sec_ident[256]; 216 + 217 + for (i = 1; i < n; i++) { 218 + const struct btf_type *t = btf__type_by_id(btf, i); 219 + const char *name; 220 + 221 + if (!btf_is_datasec(t)) 222 + continue; 223 + 224 + name = btf__str_by_offset(btf, t->name_off); 225 + if (!get_datasec_ident(name, sec_ident, sizeof(sec_ident))) 226 + continue; 227 + 228 + if (strcmp(sec_ident, map_ident) == 0) 229 + return t; 230 + } 231 + return NULL; 232 + } 233 + 212 234 static int codegen_datasecs(struct bpf_object *obj, const char *obj_name) 213 235 { 214 236 struct btf *btf = bpf_object__btf(obj); 215 - int n = btf__type_cnt(btf); 216 237 struct btf_dump *d; 217 238 struct bpf_map *map; 218 239 const struct btf_type *sec; 219 - char sec_ident[256], map_ident[256]; 220 - int i, err = 0; 240 + char map_ident[256]; 241 + int err = 0; 221 242 222 243 d = btf_dump__new(btf, codegen_btf_dump_printf, NULL, NULL); 223 244 err = libbpf_get_error(d); ··· 255 234 if (!get_map_ident(map, map_ident, sizeof(map_ident))) 256 235 continue; 257 236 258 - sec = NULL; 259 - for (i = 1; i < n; i++) { 260 - const struct btf_type *t = btf__type_by_id(btf, i); 261 - const char *name; 262 - 263 - if (!btf_is_datasec(t)) 264 - continue; 265 - 266 - name = btf__str_by_offset(btf, t->name_off); 267 - if (!get_datasec_ident(name, sec_ident, sizeof(sec_ident))) 268 - continue; 269 - 270 - if (strcmp(sec_ident, map_ident) == 0) { 271 - sec = t; 272 - break; 273 - } 274 - } 237 + sec = find_type_for_map(btf, map_ident); 275 238 276 239 /* In some cases (e.g., sections like .rodata.cst16 containing 277 240 * compiler allocated string constants only) there will be ··· 366 361 map_sz = (size_t)roundup(bpf_map__value_size(map), 8) * bpf_map__max_entries(map); 367 362 map_sz = roundup(map_sz, page_sz); 368 363 return map_sz; 364 + } 365 + 366 
+ /* Emit type size asserts for all top-level fields in memory-mapped internal maps. */ 367 + static void codegen_asserts(struct bpf_object *obj, const char *obj_name) 368 + { 369 + struct btf *btf = bpf_object__btf(obj); 370 + struct bpf_map *map; 371 + struct btf_var_secinfo *sec_var; 372 + int i, vlen; 373 + const struct btf_type *sec; 374 + char map_ident[256], var_ident[256]; 375 + 376 + codegen("\ 377 + \n\ 378 + __attribute__((unused)) static void \n\ 379 + %1$s__assert(struct %1$s *s) \n\ 380 + { \n\ 381 + #ifdef __cplusplus \n\ 382 + #define _Static_assert static_assert \n\ 383 + #endif \n\ 384 + ", obj_name); 385 + 386 + bpf_object__for_each_map(map, obj) { 387 + if (!bpf_map__is_internal(map)) 388 + continue; 389 + if (!(bpf_map__map_flags(map) & BPF_F_MMAPABLE)) 390 + continue; 391 + if (!get_map_ident(map, map_ident, sizeof(map_ident))) 392 + continue; 393 + 394 + sec = find_type_for_map(btf, map_ident); 395 + if (!sec) { 396 + /* best effort, couldn't find the type for this map */ 397 + continue; 398 + } 399 + 400 + sec_var = btf_var_secinfos(sec); 401 + vlen = btf_vlen(sec); 402 + 403 + for (i = 0; i < vlen; i++, sec_var++) { 404 + const struct btf_type *var = btf__type_by_id(btf, sec_var->type); 405 + const char *var_name = btf__name_by_offset(btf, var->name_off); 406 + long var_size; 407 + 408 + /* static variables are not exposed through BPF skeleton */ 409 + if (btf_var(var)->linkage == BTF_VAR_STATIC) 410 + continue; 411 + 412 + var_size = btf__resolve_size(btf, var->type); 413 + if (var_size < 0) 414 + continue; 415 + 416 + var_ident[0] = '\0'; 417 + strncat(var_ident, var_name, sizeof(var_ident) - 1); 418 + sanitize_identifier(var_ident); 419 + 420 + printf("\t_Static_assert(sizeof(s->%s->%s) == %ld, \"unexpected size of '%s'\");\n", 421 + map_ident, var_ident, var_size, var_ident); 422 + } 423 + } 424 + codegen("\ 425 + \n\ 426 + #ifdef __cplusplus \n\ 427 + #undef _Static_assert \n\ 428 + #endif \n\ 429 + } \n\ 430 + "); 369 431 } 370 432 
371 433 static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name) ··· 711 639 } \n\ 712 640 return skel; \n\ 713 641 } \n\ 642 + \n\ 714 643 ", obj_name); 644 + 645 + codegen_asserts(obj, obj_name); 715 646 716 647 codegen("\ 717 648 \n\ ··· 1121 1046 const void *%1$s::elf_bytes(size_t *sz) { return %1$s__elf_bytes(sz); } \n\ 1122 1047 #endif /* __cplusplus */ \n\ 1123 1048 \n\ 1124 - #endif /* %2$s */ \n\ 1125 1049 ", 1126 - obj_name, header_guard); 1050 + obj_name); 1051 + 1052 + codegen_asserts(obj, obj_name); 1053 + 1054 + codegen("\ 1055 + \n\ 1056 + \n\ 1057 + #endif /* %1$s */ \n\ 1058 + ", 1059 + header_guard); 1127 1060 err = 0; 1128 1061 out: 1129 1062 bpf_object__close(obj);
+1 -1
tools/bpf/bpftool/main.h
··· 8 8 #undef GCC_VERSION 9 9 #include <stdbool.h> 10 10 #include <stdio.h> 11 + #include <stdlib.h> 11 12 #include <linux/bpf.h> 12 13 #include <linux/compiler.h> 13 14 #include <linux/kernel.h> 14 - #include <tools/libc_compat.h> 15 15 16 16 #include <bpf/hashmap.h> 17 17 #include <bpf/libbpf.h>
+4 -3
tools/bpf/bpftool/prog.c
··· 26 26 #include <bpf/btf.h> 27 27 #include <bpf/hashmap.h> 28 28 #include <bpf/libbpf.h> 29 + #include <bpf/libbpf_internal.h> 29 30 #include <bpf/skel_internal.h> 30 31 31 32 #include "cfg.h" ··· 1559 1558 if (fd < 0) 1560 1559 goto err_free_reuse_maps; 1561 1560 1562 - new_map_replace = reallocarray(map_replace, 1563 - old_map_fds + 1, 1564 - sizeof(*map_replace)); 1561 + new_map_replace = libbpf_reallocarray(map_replace, 1562 + old_map_fds + 1, 1563 + sizeof(*map_replace)); 1565 1564 if (!new_map_replace) { 1566 1565 p_err("mem alloc failed"); 1567 1566 goto err_free_reuse_maps;
+3 -2
tools/bpf/bpftool/xlated_dumper.c
··· 8 8 #include <string.h> 9 9 #include <sys/types.h> 10 10 #include <bpf/libbpf.h> 11 + #include <bpf/libbpf_internal.h> 11 12 12 13 #include "disasm.h" 13 14 #include "json_writer.h" ··· 33 32 return; 34 33 35 34 while (fgets(buff, sizeof(buff), fp)) { 36 - tmp = reallocarray(dd->sym_mapping, dd->sym_count + 1, 37 - sizeof(*dd->sym_mapping)); 35 + tmp = libbpf_reallocarray(dd->sym_mapping, dd->sym_count + 1, 36 + sizeof(*dd->sym_mapping)); 38 37 if (!tmp) { 39 38 out: 40 39 free(dd->sym_mapping);
+5
tools/lib/bpf/btf_dump.c
··· 1505 1505 if (s->name_resolved) 1506 1506 return *cached_name ? *cached_name : orig_name; 1507 1507 1508 + if (btf_is_fwd(t) || (btf_is_enum(t) && btf_vlen(t) == 0)) { 1509 + s->name_resolved = 1; 1510 + return orig_name; 1511 + } 1512 + 1508 1513 dup_cnt = btf_dump_name_dups(d, name_map, orig_name); 1509 1514 if (dup_cnt > 1) { 1510 1515 const size_t max_len = 256;
+30 -26
tools/lib/bpf/libbpf.c
··· 1374 1374 1375 1375 static int find_elf_sec_sz(const struct bpf_object *obj, const char *name, __u32 *size) 1376 1376 { 1377 - int ret = -ENOENT; 1378 1377 Elf_Data *data; 1379 1378 Elf_Scn *scn; 1380 1379 1381 - *size = 0; 1382 1380 if (!name) 1383 1381 return -EINVAL; 1384 1382 1385 1383 scn = elf_sec_by_name(obj, name); 1386 1384 data = elf_sec_data(obj, scn); 1387 1385 if (data) { 1388 - ret = 0; /* found it */ 1389 1386 *size = data->d_size; 1387 + return 0; /* found it */ 1390 1388 } 1391 1389 1392 - return *size ? 0 : ret; 1390 + return -ENOENT; 1393 1391 } 1394 1392 1395 1393 static int find_elf_var_offset(const struct bpf_object *obj, const char *name, __u32 *off) ··· 2793 2795 goto sort_vars; 2794 2796 2795 2797 ret = find_elf_sec_sz(obj, name, &size); 2796 - if (ret || !size || (t->size && t->size != size)) { 2798 + if (ret || !size) { 2797 2799 pr_debug("Invalid size for section %s: %u bytes\n", name, size); 2798 2800 return -ENOENT; 2799 2801 } ··· 4859 4861 LIBBPF_OPTS(bpf_map_create_opts, create_attr); 4860 4862 struct bpf_map_def *def = &map->def; 4861 4863 const char *map_name = NULL; 4862 - __u32 max_entries; 4863 4864 int err = 0; 4864 4865 4865 4866 if (kernel_supports(obj, FEAT_PROG_NAME)) ··· 4867 4870 create_attr.map_flags = def->map_flags; 4868 4871 create_attr.numa_node = map->numa_node; 4869 4872 create_attr.map_extra = map->map_extra; 4870 - 4871 - if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !def->max_entries) { 4872 - int nr_cpus; 4873 - 4874 - nr_cpus = libbpf_num_possible_cpus(); 4875 - if (nr_cpus < 0) { 4876 - pr_warn("map '%s': failed to determine number of system CPUs: %d\n", 4877 - map->name, nr_cpus); 4878 - return nr_cpus; 4879 - } 4880 - pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus); 4881 - max_entries = nr_cpus; 4882 - } else { 4883 - max_entries = def->max_entries; 4884 - } 4885 4873 4886 4874 if (bpf_map__is_struct_ops(map)) 4887 4875 create_attr.btf_vmlinux_value_type_id = 
map->btf_vmlinux_value_type_id; ··· 4917 4935 4918 4936 if (obj->gen_loader) { 4919 4937 bpf_gen__map_create(obj->gen_loader, def->type, map_name, 4920 - def->key_size, def->value_size, max_entries, 4938 + def->key_size, def->value_size, def->max_entries, 4921 4939 &create_attr, is_inner ? -1 : map - obj->maps); 4922 4940 /* Pretend to have valid FD to pass various fd >= 0 checks. 4923 4941 * This fd == 0 will not be used with any syscall and will be reset to -1 eventually. ··· 4926 4944 } else { 4927 4945 map->fd = bpf_map_create(def->type, map_name, 4928 4946 def->key_size, def->value_size, 4929 - max_entries, &create_attr); 4947 + def->max_entries, &create_attr); 4930 4948 } 4931 4949 if (map->fd < 0 && (create_attr.btf_key_type_id || 4932 4950 create_attr.btf_value_type_id)) { ··· 4943 4961 map->btf_value_type_id = 0; 4944 4962 map->fd = bpf_map_create(def->type, map_name, 4945 4963 def->key_size, def->value_size, 4946 - max_entries, &create_attr); 4964 + def->max_entries, &create_attr); 4947 4965 } 4948 4966 4949 4967 err = map->fd < 0 ? 
-errno : 0; ··· 5047 5065 return 0; 5048 5066 } 5049 5067 5068 + static int map_set_def_max_entries(struct bpf_map *map) 5069 + { 5070 + if (map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !map->def.max_entries) { 5071 + int nr_cpus; 5072 + 5073 + nr_cpus = libbpf_num_possible_cpus(); 5074 + if (nr_cpus < 0) { 5075 + pr_warn("map '%s': failed to determine number of system CPUs: %d\n", 5076 + map->name, nr_cpus); 5077 + return nr_cpus; 5078 + } 5079 + pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus); 5080 + map->def.max_entries = nr_cpus; 5081 + } 5082 + 5083 + return 0; 5084 + } 5085 + 5050 5086 static int 5051 5087 bpf_object__create_maps(struct bpf_object *obj) 5052 5088 { ··· 5096 5096 map->skipped = true; 5097 5097 continue; 5098 5098 } 5099 + 5100 + err = map_set_def_max_entries(map); 5101 + if (err) 5102 + goto err_out; 5099 5103 5100 5104 retried = false; 5101 5105 retry: ··· 10951 10947 { 10952 10948 struct perf_buffer_params p = {}; 10953 10949 10954 - if (page_cnt == 0 || !attr) 10950 + if (!attr) 10955 10951 return libbpf_err_ptr(-EINVAL); 10956 10952 10957 10953 if (!OPTS_VALID(opts, perf_buffer_raw_opts)) ··· 10992 10988 __u32 map_info_len; 10993 10989 int err, i, j, n; 10994 10990 10995 - if (page_cnt & (page_cnt - 1)) { 10991 + if (page_cnt == 0 || (page_cnt & (page_cnt - 1))) { 10996 10992 pr_warn("page count should be power of two, but is %zu\n", 10997 10993 page_cnt); 10998 10994 return ERR_PTR(-EINVAL);
+1
tools/testing/selftests/bpf/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 + bpftool 2 3 bpf-helpers* 3 4 bpf-syscall* 4 5 test_verifier
+2 -1
tools/testing/selftests/bpf/Makefile
··· 470 470 $$(call msg,BINARY,,$$@) 471 471 $(Q)$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@ 472 472 $(Q)$(RESOLVE_BTFIDS) --btf $(TRUNNER_OUTPUT)/btf_data.o $$@ 473 + $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool $(if $2,$2/)bpftool 473 474 474 475 endef 475 476 ··· 556 555 557 556 EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \ 558 557 prog_tests/tests.h map_tests/tests.h verifier/tests.h \ 559 - feature \ 558 + feature bpftool \ 560 559 $(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h no_alu32 bpf_gcc bpf_testmod.ko) 561 560 562 561 .PHONY: docs docs-clean
+109 -109
tools/testing/selftests/bpf/prog_tests/align.c
··· 39 39 }, 40 40 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 41 41 .matches = { 42 - {0, "R1=ctx(id=0,off=0,imm=0)"}, 42 + {0, "R1=ctx(off=0,imm=0)"}, 43 43 {0, "R10=fp0"}, 44 - {0, "R3_w=inv2"}, 45 - {1, "R3_w=inv4"}, 46 - {2, "R3_w=inv8"}, 47 - {3, "R3_w=inv16"}, 48 - {4, "R3_w=inv32"}, 44 + {0, "R3_w=2"}, 45 + {1, "R3_w=4"}, 46 + {2, "R3_w=8"}, 47 + {3, "R3_w=16"}, 48 + {4, "R3_w=32"}, 49 49 }, 50 50 }, 51 51 { ··· 67 67 }, 68 68 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 69 69 .matches = { 70 - {0, "R1=ctx(id=0,off=0,imm=0)"}, 70 + {0, "R1=ctx(off=0,imm=0)"}, 71 71 {0, "R10=fp0"}, 72 - {0, "R3_w=inv1"}, 73 - {1, "R3_w=inv2"}, 74 - {2, "R3_w=inv4"}, 75 - {3, "R3_w=inv8"}, 76 - {4, "R3_w=inv16"}, 77 - {5, "R3_w=inv1"}, 78 - {6, "R4_w=inv32"}, 79 - {7, "R4_w=inv16"}, 80 - {8, "R4_w=inv8"}, 81 - {9, "R4_w=inv4"}, 82 - {10, "R4_w=inv2"}, 72 + {0, "R3_w=1"}, 73 + {1, "R3_w=2"}, 74 + {2, "R3_w=4"}, 75 + {3, "R3_w=8"}, 76 + {4, "R3_w=16"}, 77 + {5, "R3_w=1"}, 78 + {6, "R4_w=32"}, 79 + {7, "R4_w=16"}, 80 + {8, "R4_w=8"}, 81 + {9, "R4_w=4"}, 82 + {10, "R4_w=2"}, 83 83 }, 84 84 }, 85 85 { ··· 96 96 }, 97 97 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 98 98 .matches = { 99 - {0, "R1=ctx(id=0,off=0,imm=0)"}, 99 + {0, "R1=ctx(off=0,imm=0)"}, 100 100 {0, "R10=fp0"}, 101 - {0, "R3_w=inv4"}, 102 - {1, "R3_w=inv8"}, 103 - {2, "R3_w=inv10"}, 104 - {3, "R4_w=inv8"}, 105 - {4, "R4_w=inv12"}, 106 - {5, "R4_w=inv14"}, 101 + {0, "R3_w=4"}, 102 + {1, "R3_w=8"}, 103 + {2, "R3_w=10"}, 104 + {3, "R4_w=8"}, 105 + {4, "R4_w=12"}, 106 + {5, "R4_w=14"}, 107 107 }, 108 108 }, 109 109 { ··· 118 118 }, 119 119 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 120 120 .matches = { 121 - {0, "R1=ctx(id=0,off=0,imm=0)"}, 121 + {0, "R1=ctx(off=0,imm=0)"}, 122 122 {0, "R10=fp0"}, 123 - {0, "R3_w=inv7"}, 124 - {1, "R3_w=inv7"}, 125 - {2, "R3_w=inv14"}, 126 - {3, "R3_w=inv56"}, 123 + {0, "R3_w=7"}, 124 + {1, "R3_w=7"}, 125 + {2, "R3_w=14"}, 126 + {3, "R3_w=56"}, 127 127 }, 128 128 }, 129 129 ··· 161 161 }, 162 162 .prog_type = 
 BPF_PROG_TYPE_SCHED_CLS,
 	.matches = {
-		{6, "R0_w=pkt(id=0,off=8,r=8,imm=0)"},
-		{6, "R3_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
-		{7, "R3_w=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
-		{8, "R3_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
-		{9, "R3_w=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
-		{10, "R3_w=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
-		{12, "R3_w=pkt_end(id=0,off=0,imm=0)"},
-		{17, "R4_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
-		{18, "R4_w=inv(id=0,umax_value=8160,var_off=(0x0; 0x1fe0))"},
-		{19, "R4_w=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
-		{20, "R4_w=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
-		{21, "R4_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
-		{22, "R4_w=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
+		{6, "R0_w=pkt(off=8,r=8,imm=0)"},
+		{6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+		{7, "R3_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
+		{8, "R3_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+		{9, "R3_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
+		{10, "R3_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
+		{12, "R3_w=pkt_end(off=0,imm=0)"},
+		{17, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+		{18, "R4_w=scalar(umax=8160,var_off=(0x0; 0x1fe0))"},
+		{19, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
+		{20, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
+		{21, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+		{22, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
 	},
 },
 {
···
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.matches = {
-		{6, "R3_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
-		{7, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
-		{8, "R4_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
-		{9, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
-		{10, "R4_w=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
-		{11, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
-		{12, "R4_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
-		{13, "R4_w=inv(id=1,umax_value=255,var_off=(0x0; 0xff))"},
-		{14, "R4_w=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
-		{15, "R4_w=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
+		{6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+		{7, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+		{8, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+		{9, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+		{10, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
+		{11, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+		{12, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+		{13, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
+		{14, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
+		{15, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
 	},
 },
 {
···
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.matches = {
-		{2, "R5_w=pkt(id=0,off=0,r=0,imm=0)"},
-		{4, "R5_w=pkt(id=0,off=14,r=0,imm=0)"},
-		{5, "R4_w=pkt(id=0,off=14,r=0,imm=0)"},
-		{9, "R2=pkt(id=0,off=0,r=18,imm=0)"},
-		{10, "R5=pkt(id=0,off=14,r=18,imm=0)"},
-		{10, "R4_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
-		{13, "R4_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff))"},
-		{14, "R4_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff))"},
+		{2, "R5_w=pkt(off=0,r=0,imm=0)"},
+		{4, "R5_w=pkt(off=14,r=0,imm=0)"},
+		{5, "R4_w=pkt(off=14,r=0,imm=0)"},
+		{9, "R2=pkt(off=0,r=18,imm=0)"},
+		{10, "R5=pkt(off=14,r=18,imm=0)"},
+		{10, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
+		{13, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
+		{14, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
 	},
 },
 {
···
 		/* Calculated offset in R6 has unknown value, but known
 		 * alignment of 4.
 		 */
-		{6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
-		{7, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{6, "R2_w=pkt(off=0,r=8,imm=0)"},
+		{7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Offset is added to packet pointer R5, resulting in
 		 * known fixed offset, and variable offset from R6.
 		 */
-		{11, "R5_w=pkt(id=1,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{11, "R5_w=pkt(id=1,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* At the time the word size load is performed from R5,
 		 * it's total offset is NET_IP_ALIGN + reg->off (0) +
 		 * reg->aux_off (14) which is 16. Then the variable
 		 * offset is considered using reg->aux_off_align which
 		 * is 4 and meets the load's requirements.
 		 */
-		{15, "R4=pkt(id=1,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
-		{15, "R5=pkt(id=1,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{15, "R4=pkt(id=1,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+		{15, "R5=pkt(id=1,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Variable offset is added to R5 packet pointer,
 		 * resulting in auxiliary alignment of 4.
 		 */
-		{17, "R5_w=pkt(id=2,off=0,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{17, "R5_w=pkt(id=2,off=0,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Constant offset is added to R5, resulting in
 		 * reg->off of 14.
 		 */
-		{18, "R5_w=pkt(id=2,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{18, "R5_w=pkt(id=2,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* At the time the word size load is performed from R5,
 		 * its total fixed offset is NET_IP_ALIGN + reg->off
 		 * (14) which is 16. Then the variable offset is 4-byte
 		 * aligned, so the total offset is 4-byte aligned and
 		 * meets the load's requirements.
 		 */
-		{23, "R4=pkt(id=2,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
-		{23, "R5=pkt(id=2,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{23, "R4=pkt(id=2,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+		{23, "R5=pkt(id=2,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Constant offset is added to R5 packet pointer,
 		 * resulting in reg->off value of 14.
 		 */
-		{25, "R5_w=pkt(id=0,off=14,r=8"},
+		{25, "R5_w=pkt(off=14,r=8"},
 		/* Variable offset is added to R5, resulting in a
 		 * variable offset of (4n).
 		 */
-		{26, "R5_w=pkt(id=3,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{26, "R5_w=pkt(id=3,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Constant is added to R5 again, setting reg->off to 18. */
-		{27, "R5_w=pkt(id=3,off=18,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{27, "R5_w=pkt(id=3,off=18,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
 		/* And once more we add a variable; resulting var_off
 		 * is still (4n), fixed offset is not changed.
 		 * Also, we create a new reg->id.
 		 */
-		{28, "R5_w=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc)"},
+		{28, "R5_w=pkt(id=4,off=18,r=0,umax=2040,var_off=(0x0; 0x7fc)"},
 		/* At the time the word size load is performed from R5,
 		 * its total fixed offset is NET_IP_ALIGN + reg->off (18)
 		 * which is 20. Then the variable offset is (4n), so
 		 * the total offset is 4-byte aligned and meets the
 		 * load's requirements.
 		 */
-		{33, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
-		{33, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc)"},
+		{33, "R4=pkt(id=4,off=22,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
+		{33, "R5=pkt(id=4,off=18,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
 	},
 },
 {
···
 		/* Calculated offset in R6 has unknown value, but known
 		 * alignment of 4.
 		 */
-		{6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
-		{7, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{6, "R2_w=pkt(off=0,r=8,imm=0)"},
+		{7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Adding 14 makes R6 be (4n+2) */
-		{8, "R6_w=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
+		{8, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
 		/* Packet pointer has (4n+2) offset */
-		{11, "R5_w=pkt(id=1,off=0,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
-		{12, "R4=pkt(id=1,off=4,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
+		{11, "R5_w=pkt(id=1,off=0,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
+		{12, "R4=pkt(id=1,off=4,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
 		/* At the time the word size load is performed from R5,
 		 * its total fixed offset is NET_IP_ALIGN + reg->off (0)
 		 * which is 2. Then the variable offset is (4n+2), so
 		 * the total offset is 4-byte aligned and meets the
 		 * load's requirements.
 		 */
-		{15, "R5=pkt(id=1,off=0,r=4,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc)"},
+		{15, "R5=pkt(id=1,off=0,r=4,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
 		/* Newly read value in R6 was shifted left by 2, so has
 		 * known alignment of 4.
 		 */
-		{17, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{17, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Added (4n) to packet pointer's (4n+2) var_off, giving
 		 * another (4n+2).
 		 */
-		{19, "R5_w=pkt(id=2,off=0,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
-		{20, "R4=pkt(id=2,off=4,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
+		{19, "R5_w=pkt(id=2,off=0,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
+		{20, "R4=pkt(id=2,off=4,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
 		/* At the time the word size load is performed from R5,
 		 * its total fixed offset is NET_IP_ALIGN + reg->off (0)
 		 * which is 2. Then the variable offset is (4n+2), so
 		 * the total offset is 4-byte aligned and meets the
 		 * load's requirements.
 		 */
-		{23, "R5=pkt(id=2,off=0,r=4,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc)"},
+		{23, "R5=pkt(id=2,off=0,r=4,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
 	},
 },
 {
···
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.result = REJECT,
 	.matches = {
-		{3, "R5_w=pkt_end(id=0,off=0,imm=0)"},
+		{3, "R5_w=pkt_end(off=0,imm=0)"},
 		/* (ptr - ptr) << 2 == unknown, (4n) */
-		{5, "R5_w=inv(id=0,smax_value=9223372036854775804,umax_value=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
+		{5, "R5_w=scalar(smax=9223372036854775804,umax=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
 		/* (4n) + 14 == (4n+2). We blow our bounds, because
 		 * the add could overflow.
 		 */
-		{6, "R5_w=inv(id=0,smin_value=-9223372036854775806,smax_value=9223372036854775806,umin_value=2,umax_value=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+		{6, "R5_w=scalar(smin=-9223372036854775806,smax=9223372036854775806,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
 		/* Checked s>=0 */
-		{9, "R5=inv(id=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+		{9, "R5=scalar(umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
 		/* packet pointer + nonnegative (4n+2) */
-		{11, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
-		{12, "R4_w=pkt(id=1,off=4,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+		{11, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+		{12, "R4_w=pkt(id=1,off=4,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
 		/* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
 		 * We checked the bounds, but it might have been able
 		 * to overflow if the packet pointer started in the
···
 		 * So we did not get a 'range' on R6, and the access
 		 * attempt will fail.
 		 */
-		{15, "R6_w=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+		{15, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
 	}
 },
 {
···
 		/* Calculated offset in R6 has unknown value, but known
 		 * alignment of 4.
 		 */
-		{6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
-		{8, "R6_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{6, "R2_w=pkt(off=0,r=8,imm=0)"},
+		{8, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Adding 14 makes R6 be (4n+2) */
-		{9, "R6_w=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
+		{9, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
 		/* New unknown value in R7 is (4n) */
-		{10, "R7_w=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
+		{10, "R7_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
 		/* Subtracting it from R6 blows our unsigned bounds */
-		{11, "R6=inv(id=0,smin_value=-1006,smax_value=1034,umin_value=2,umax_value=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+		{11, "R6=scalar(smin=-1006,smax=1034,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
 		/* Checked s>= 0 */
-		{14, "R6=inv(id=0,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc))"},
+		{14, "R6=scalar(umin=2,umax=1034,var_off=(0x2; 0x7fc))"},
 		/* At the time the word size load is performed from R5,
 		 * its total fixed offset is NET_IP_ALIGN + reg->off (0)
 		 * which is 2. Then the variable offset is (4n+2), so
 		 * the total offset is 4-byte aligned and meets the
 		 * load's requirements.
 		 */
-		{20, "R5=pkt(id=2,off=0,r=4,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc)"},
+		{20, "R5=pkt(id=2,off=0,r=4,umin=2,umax=1034,var_off=(0x2; 0x7fc)"},

 	},
 },
···
 		/* Calculated offset in R6 has unknown value, but known
 		 * alignment of 4.
 		 */
-		{6, "R2_w=pkt(id=0,off=0,r=8,imm=0)"},
-		{9, "R6_w=inv(id=0,umax_value=60,var_off=(0x0; 0x3c))"},
+		{6, "R2_w=pkt(off=0,r=8,imm=0)"},
+		{9, "R6_w=scalar(umax=60,var_off=(0x0; 0x3c))"},
 		/* Adding 14 makes R6 be (4n+2) */
-		{10, "R6_w=inv(id=0,umin_value=14,umax_value=74,var_off=(0x2; 0x7c))"},
+		{10, "R6_w=scalar(umin=14,umax=74,var_off=(0x2; 0x7c))"},
 		/* Subtracting from packet pointer overflows ubounds */
-		{13, "R5_w=pkt(id=2,off=0,r=8,umin_value=18446744073709551542,umax_value=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
+		{13, "R5_w=pkt(id=2,off=0,r=8,umin=18446744073709551542,umax=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
 		/* New unknown value in R7 is (4n), >= 76 */
-		{14, "R7_w=inv(id=0,umin_value=76,umax_value=1096,var_off=(0x0; 0x7fc))"},
+		{14, "R7_w=scalar(umin=76,umax=1096,var_off=(0x0; 0x7fc))"},
 		/* Adding it to packet pointer gives nice bounds again */
-		{16, "R5_w=pkt(id=3,off=0,r=0,umin_value=2,umax_value=1082,var_off=(0x2; 0xfffffffc)"},
+		{16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"},
 		/* At the time the word size load is performed from R5,
 		 * its total fixed offset is NET_IP_ALIGN + reg->off (0)
 		 * which is 2. Then the variable offset is (4n+2), so
 		 * the total offset is 4-byte aligned and meets the
 		 * load's requirements.
 		 */
-		{20, "R5=pkt(id=3,off=0,r=4,umin_value=2,umax_value=1082,var_off=(0x2; 0xfffffffc)"},
+		{20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"},
 	},
 },
};
···
 	/* Check the next line as well in case the previous line
 	 * did not have a corresponding bpf insn. Example:
 	 * func#0 @0
-	 * 0: R1=ctx(id=0,off=0,imm=0) R10=fp0
-	 * 0: (b7) r3 = 2                 ; R3_w=inv2
+	 * 0: R1=ctx(off=0,imm=0) R10=fp0
+	 * 0: (b7) r3 = 2                 ; R3_w=2
 	 */
 	if (!strstr(line_ptr, m.match)) {
 		cur_line = -1;
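The `umax`/`var_off` pairs matched above all follow the verifier's tristate-number ("tnum") arithmetic, where `var_off` is printed as `(value; mask)`: bits set in `mask` are unknown, bits in `value` are known to be set. A minimal Python sketch (the class and names here are illustrative, not the kernel's API; the real implementation lives in `kernel/bpf/tnum.c`) reproduces the transitions these tests match — shifting a byte left by 2 yields known 4-byte alignment `(0x0; 0x3fc)`, and adding the constant 14 turns that `(4n)` value into the `(4n+2)` tnum `(0x2; 0x7fc)` seen at insn 8 above:

```python
M64 = (1 << 64) - 1  # model 64-bit BPF registers


class Tnum:
    """Tristate number: 'value' holds known-set bits, 'mask' marks unknown bits."""

    def __init__(self, value, mask):
        self.value = value & ~mask & M64
        self.mask = mask & M64

    def __add__(self, other):
        # Mirrors the kernel's tnum_add(): carries that may originate from
        # unknown bits make the bits they propagate into unknown as well.
        sm = (self.mask + other.mask) & M64
        sv = (self.value + other.value) & M64
        sigma = (sm + sv) & M64
        chi = sigma ^ sv
        mu = chi | self.mask | other.mask
        return Tnum(sv & ~mu, mu)

    def __lshift__(self, shift):
        # Mirrors tnum_lshift(): known and unknown bits shift together.
        return Tnum((self.value << shift) & M64, (self.mask << shift) & M64)

    def __repr__(self):
        return f"(0x{self.value:x}; 0x{self.mask:x})"


# An unknown byte (0x0; 0xff) shifted left by 2 has known 4-byte alignment.
r6 = Tnum(0, 0xff) << 2           # -> (0x0; 0x3fc), the "(4n)" shape
# Adding the constant 14 produces the "(4n+2)" shape matched by the tests.
r6_plus_14 = r6 + Tnum(14, 0)     # -> (0x2; 0x7fc)
print(r6, r6_plus_14)
```

This is why the alignment checks in the comments can reason "the variable offset is (4n+2), so NET_IP_ALIGN (2) plus it is 4-byte aligned": the low two bits of the tnum are fully known.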
+22 -69
tools/testing/selftests/bpf/prog_tests/atomics.c
···
 static void test_add(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__add__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(add)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.add.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->add64_value, 3, "add64_value");
 	ASSERT_EQ(skel->bss->add64_result, 1, "add64_result");
···
 	ASSERT_EQ(skel->bss->add_stack_result, 1, "add_stack_result");

 	ASSERT_EQ(skel->data->add_noreturn_value, 3, "add_noreturn_value");
-
-cleanup:
-	close(link_fd);
 }

 static void test_sub(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__sub__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(sub)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.sub.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->sub64_value, -1, "sub64_value");
 	ASSERT_EQ(skel->bss->sub64_result, 1, "sub64_result");
···
 	ASSERT_EQ(skel->bss->sub_stack_result, 1, "sub_stack_result");

 	ASSERT_EQ(skel->data->sub_noreturn_value, -1, "sub_noreturn_value");
-
-cleanup:
-	close(link_fd);
 }

 static void test_and(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__and__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(and)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.and.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->and64_value, 0x010ull << 32, "and64_value");
 	ASSERT_EQ(skel->bss->and64_result, 0x110ull << 32, "and64_result");
···
 	ASSERT_EQ(skel->bss->and32_result, 0x110, "and32_result");

 	ASSERT_EQ(skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value");
-cleanup:
-	close(link_fd);
 }

 static void test_or(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__or__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(or)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.or.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->or64_value, 0x111ull << 32, "or64_value");
 	ASSERT_EQ(skel->bss->or64_result, 0x110ull << 32, "or64_result");
···
 	ASSERT_EQ(skel->bss->or32_result, 0x110, "or32_result");

 	ASSERT_EQ(skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value");
-cleanup:
-	close(link_fd);
 }

 static void test_xor(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__xor__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(xor)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.xor.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->xor64_value, 0x101ull << 32, "xor64_value");
 	ASSERT_EQ(skel->bss->xor64_result, 0x110ull << 32, "xor64_result");
···
 	ASSERT_EQ(skel->bss->xor32_result, 0x110, "xor32_result");

 	ASSERT_EQ(skel->data->xor_noreturn_value, 0x101ull << 32, "xor_nxoreturn_value");
-cleanup:
-	close(link_fd);
 }

 static void test_cmpxchg(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__cmpxchg__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(cmpxchg)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.cmpxchg.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->cmpxchg64_value, 2, "cmpxchg64_value");
 	ASSERT_EQ(skel->bss->cmpxchg64_result_fail, 1, "cmpxchg_result_fail");
···
 	ASSERT_EQ(skel->data->cmpxchg32_value, 2, "lcmpxchg32_value");
 	ASSERT_EQ(skel->bss->cmpxchg32_result_fail, 1, "cmpxchg_result_fail");
 	ASSERT_EQ(skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed");
-
-cleanup:
-	close(link_fd);
 }

 static void test_xchg(struct atomics_lskel *skel)
 {
 	int err, prog_fd;
-	int link_fd;
 	LIBBPF_OPTS(bpf_test_run_opts, topts);

-	link_fd = atomics_lskel__xchg__attach(skel);
-	if (!ASSERT_GT(link_fd, 0, "attach(xchg)"))
-		return;
-
+	/* No need to attach it, just run it directly */
 	prog_fd = skel->progs.xchg.prog_fd;
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	if (!ASSERT_OK(err, "test_run_opts err"))
-		goto cleanup;
+		return;
 	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
-		goto cleanup;
+		return;

 	ASSERT_EQ(skel->data->xchg64_value, 2, "xchg64_value");
 	ASSERT_EQ(skel->bss->xchg64_result, 1, "xchg64_result");

 	ASSERT_EQ(skel->data->xchg32_value, 2, "xchg32_value");
 	ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result");
-
-cleanup:
-	close(link_fd);
 }

 void test_atomics(void)
 {
 	struct atomics_lskel *skel;
-	__u32 duration = 0;

 	skel = atomics_lskel__open_and_load();
-	if (CHECK(!skel, "skel_load", "atomics skeleton failed\n"))
+	if (!ASSERT_OK_PTR(skel, "atomics skeleton load"))
 		return;

 	if (skel->data->skip_tests) {
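The expected values asserted above follow directly from fetch-style atomic semantics: each fetch op returns the old value and stores the combined one. A quick model of that arithmetic (plain Python, no real atomicity; the `0x011 << 32` operand is inferred from the asserted before/after values, not quoted from the BPF program):

```python
M64 = (1 << 64) - 1  # model 64-bit memory cells


def fetch_op(mem, key, op, operand):
    """Return the old value and store op(old, operand), like BPF_FETCH ops."""
    old = mem[key]
    mem[key] = op(old, operand) & M64
    return old


mem = {"and64": 0x110 << 32, "or64": 0x110 << 32, "xor64": 0x110 << 32}

and_old = fetch_op(mem, "and64", lambda a, b: a & b, 0x011 << 32)
or_old = fetch_op(mem, "or64", lambda a, b: a | b, 0x011 << 32)
xor_old = fetch_op(mem, "xor64", lambda a, b: a ^ b, 0x011 << 32)

# Matches the selftest's expectations above:
#   and64_value == 0x010 << 32, and64_result (the old value) == 0x110 << 32
#   or64_value  == 0x111 << 32, xor64_value == 0x101 << 32
print(hex(mem["and64"]), hex(mem["or64"]), hex(mem["xor64"]), hex(and_old))
```

Nothing in the verification of these values depends on attachment, which is why the test can drop the `attach`/`link_fd` plumbing and drive the programs through `bpf_prog_test_run_opts()` alone.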
+42 -14
tools/testing/selftests/bpf/prog_tests/btf_dump.c
···

 	/* First, generate BTF corresponding to the following C code:
 	 *
-	 * enum { VAL = 1 };
+	 * enum x;
+	 *
+	 * enum x { X = 1 };
+	 *
+	 * enum { Y = 1 };
+	 *
+	 * struct s;
 	 *
 	 * struct s { int x; };
 	 *
 	 */
+	id = btf__add_enum(btf, "x", 4);
+	ASSERT_EQ(id, 1, "enum_declaration_id");
+	id = btf__add_enum(btf, "x", 4);
+	ASSERT_EQ(id, 2, "named_enum_id");
+	err = btf__add_enum_value(btf, "X", 1);
+	ASSERT_OK(err, "named_enum_val_ok");
+
 	id = btf__add_enum(btf, NULL, 4);
-	ASSERT_EQ(id, 1, "enum_id");
-	err = btf__add_enum_value(btf, "VAL", 1);
-	ASSERT_OK(err, "enum_val_ok");
+	ASSERT_EQ(id, 3, "anon_enum_id");
+	err = btf__add_enum_value(btf, "Y", 1);
+	ASSERT_OK(err, "anon_enum_val_ok");

 	id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED);
-	ASSERT_EQ(id, 2, "int_id");
+	ASSERT_EQ(id, 4, "int_id");
+
+	id = btf__add_fwd(btf, "s", BTF_FWD_STRUCT);
+	ASSERT_EQ(id, 5, "fwd_id");

 	id = btf__add_struct(btf, "s", 4);
-	ASSERT_EQ(id, 3, "struct_id");
-	err = btf__add_field(btf, "x", 2, 0, 0);
+	ASSERT_EQ(id, 6, "struct_id");
+	err = btf__add_field(btf, "x", 4, 0, 0);
 	ASSERT_OK(err, "field_ok");

 	for (i = 1; i < btf__type_cnt(btf); i++) {
···

 	fflush(dump_buf_file);
 	dump_buf[dump_buf_sz] = 0; /* some libc implementations don't do this */
+
 	ASSERT_STREQ(dump_buf,
-		     "enum {\n"
-		     "	VAL = 1,\n"
+		     "enum x;\n"
+		     "\n"
+		     "enum x {\n"
+		     "	X = 1,\n"
 		     "};\n"
+		     "\n"
+		     "enum {\n"
+		     "	Y = 1,\n"
+		     "};\n"
+		     "\n"
+		     "struct s;\n"
 		     "\n"
 		     "struct s {\n"
 		     "	int x;\n"
···
 	fseek(dump_buf_file, 0, SEEK_SET);

 	id = btf__add_struct(btf, "s", 4);
-	ASSERT_EQ(id, 4, "struct_id");
-	err = btf__add_field(btf, "x", 1, 0, 0);
+	ASSERT_EQ(id, 7, "struct_id");
+	err = btf__add_field(btf, "x", 2, 0, 0);
 	ASSERT_OK(err, "field_ok");
-	err = btf__add_field(btf, "s", 3, 32, 0);
+	err = btf__add_field(btf, "y", 3, 32, 0);
+	ASSERT_OK(err, "field_ok");
+	err = btf__add_field(btf, "s", 6, 64, 0);
 	ASSERT_OK(err, "field_ok");

 	for (i = 1; i < btf__type_cnt(btf); i++) {
···
 	dump_buf[dump_buf_sz] = 0; /* some libc implementations don't do this */
 	ASSERT_STREQ(dump_buf,
 		     "struct s___2 {\n"
+		     "	enum x x;\n"
 		     "	enum {\n"
-		     "	VAL___2 = 1,\n"
-		     "	} x;\n"
+		     "	Y___2 = 1,\n"
+		     "	} y;\n"
 		     "	struct s s;\n"
 		     "};\n\n" , "c_dump1");
+11 -6
tools/testing/selftests/bpf/prog_tests/core_reloc.c
···
 }


-static struct core_reloc_test_case test_cases[] = {
+static const struct core_reloc_test_case test_cases[] = {
 	/* validate we can find kernel image and use its BTF for relocs */
 	{
 		.case_name = "kernel",
···
 	int n;

 	n = snprintf(command, sizeof(command),
-		     "./tools/build/bpftool/bpftool gen min_core_btf %s %s %s",
+		     "./bpftool gen min_core_btf %s %s %s",
 		     src_btf, dst_btf, objpath);
 	if (n < 0 || n >= sizeof(command))
 		return -1;
···
 {
 	const size_t mmap_sz = roundup_page(sizeof(struct data));
 	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts);
-	struct core_reloc_test_case *test_case;
+	struct core_reloc_test_case *test_case, test_case_copy;
 	const char *tp_name, *probe_name;
 	int err, i, equal, fd;
 	struct bpf_link *link = NULL;
···

 	for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
 		char btf_file[] = "/tmp/core_reloc.btf.XXXXXX";
-		test_case = &test_cases[i];
+
+		test_case_copy = test_cases[i];
+		test_case = &test_case_copy;
+
 		if (!test__start_subtest(test_case->case_name))
 			continue;
···

 		/* generate a "minimal" BTF file and use it as source */
 		if (use_btfgen) {
+
 			if (!test_case->btf_src_file || test_case->fails) {
 				test__skip();
 				continue;
···
 			CHECK_FAIL(munmap(mmap_data, mmap_sz));
 			mmap_data = NULL;
 		}
-		remove(btf_file);
+		if (use_btfgen)
+			remove(test_case->btf_src_file);
 		bpf_link__destroy(link);
 		link = NULL;
 		bpf_object__close(obj);
···
 	run_core_reloc_tests(false);
 }

-void test_core_btfgen(void)
+void test_core_reloc_btfgen(void)
 {
 	run_core_reloc_tests(true);
 }
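The `test_cases` change above is a fixture-isolation fix: the shared array becomes `const`, and each subtest works on a stack copy (`test_case_copy`), so a btfgen run that rewrites `btf_src_file` can no longer leak that mutation into later subtests. The same pattern, sketched in Python with hypothetical fixture names (not the selftest's actual types):

```python
import copy

# Shared, read-only fixtures, analogous to the const test_cases[] array.
TEST_CASES = (
    {"case_name": "kernel", "btf_src_file": None},
    {"case_name": "module", "btf_src_file": "btf__core_reloc_module.o"},
)


def run_subtest(case, use_btfgen):
    # Work on a per-run copy, so rewriting btf_src_file for the
    # minimized-BTF variant cannot contaminate the shared fixture.
    case = copy.deepcopy(case)
    if use_btfgen and case["btf_src_file"]:
        case["btf_src_file"] = "/tmp/core_reloc.btf.minimized"
    return case


out = run_subtest(TEST_CASES[1], use_btfgen=True)
# The shared fixture is untouched, so a later run without btfgen
# still sees the original source BTF path.
print(out["btf_src_file"], TEST_CASES[1]["btf_src_file"])
```

The design choice is the same in both versions: mutation happens on data whose lifetime is one subtest, never on data shared across subtests.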
+2 -2
tools/testing/selftests/bpf/prog_tests/log_buf.c
···
 	ASSERT_OK_PTR(strstr(libbpf_log_buf, "prog 'bad_prog': BPF program load failed"),
 		      "libbpf_log_not_empty");
 	ASSERT_OK_PTR(strstr(obj_log_buf, "DATASEC license"), "obj_log_not_empty");
-	ASSERT_OK_PTR(strstr(good_log_buf, "0: R1=ctx(id=0,off=0,imm=0) R10=fp0"),
+	ASSERT_OK_PTR(strstr(good_log_buf, "0: R1=ctx(off=0,imm=0) R10=fp0"),
 		      "good_log_verbose");
 	ASSERT_OK_PTR(strstr(bad_log_buf, "invalid access to map value, value_size=16 off=16000 size=4"),
 		      "bad_log_not_empty");
···
 	opts.log_level = 2;
 	fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "good_prog", "GPL",
 			   good_prog_insns, good_prog_insn_cnt, &opts);
-	ASSERT_OK_PTR(strstr(log_buf, "0: R1=ctx(id=0,off=0,imm=0) R10=fp0"), "good_log_2");
+	ASSERT_OK_PTR(strstr(log_buf, "0: R1=ctx(off=0,imm=0) R10=fp0"), "good_log_2");
 	ASSERT_GE(fd, 0, "good_fd2");
 	if (fd >= 0)
 		close(fd);
+14 -14
tools/testing/selftests/bpf/progs/atomics.c
···
 __u64 add_stack_result = 0;
 __u64 add_noreturn_value = 1;

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(add, int a)
+SEC("raw_tp/sys_enter")
+int add(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
···
 __s64 sub_stack_result = 0;
 __s64 sub_noreturn_value = 1;

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(sub, int a)
+SEC("raw_tp/sys_enter")
+int sub(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
···
 __u32 and32_result = 0;
 __u64 and_noreturn_value = (0x110ull << 32);

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(and, int a)
+SEC("raw_tp/sys_enter")
+int and(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
···
 __u32 or32_result = 0;
 __u64 or_noreturn_value = (0x110ull << 32);

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(or, int a)
+SEC("raw_tp/sys_enter")
+int or(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
···
 __u32 xor32_result = 0;
 __u64 xor_noreturn_value = (0x110ull << 32);

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(xor, int a)
+SEC("raw_tp/sys_enter")
+int xor(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
···
 __u32 cmpxchg32_result_fail = 0;
 __u32 cmpxchg32_result_succeed = 0;

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(cmpxchg, int a)
+SEC("raw_tp/sys_enter")
+int cmpxchg(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
···
 __u32 xchg32_value = 1;
 __u32 xchg32_result = 0;

-SEC("fentry/bpf_fentry_test1")
-int BPF_PROG(xchg, int a)
+SEC("raw_tp/sys_enter")
+int xchg(const void *ctx)
 {
 	if (pid != (bpf_get_current_pid_tgid() >> 32))
 		return 0;
+3
tools/testing/selftests/bpf/test_cpp.cpp
···
 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
 #include <iostream>
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
 #include <bpf/libbpf.h>
+#pragma GCC diagnostic pop
 #include <bpf/bpf.h>
 #include <bpf/btf.h>
 #include "test_core_extern.skel.h"
+3 -3
tools/testing/selftests/bpf/verifier/atomic_invalid.c
···
-#define __INVALID_ATOMIC_ACCESS_TEST(op)			\
+#define __INVALID_ATOMIC_ACCESS_TEST(op)				\
 	{							\
-		"atomic " #op " access through non-pointer ",	\
+		"atomic " #op " access through non-pointer ",		\
 		.insns = {					\
 			BPF_MOV64_IMM(BPF_REG_0, 1),		\
 			BPF_MOV64_IMM(BPF_REG_1, 0),		\
···
 			BPF_EXIT_INSN(),			\
 		},						\
 		.result = REJECT,				\
-		.errstr = "R1 invalid mem access 'inv'"		\
+		.errstr = "R1 invalid mem access 'scalar'"	\
 	}
 __INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
+2 -2
tools/testing/selftests/bpf/verifier/bounds.c
···
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
 		BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'inv'",
+	.errstr_unpriv = "R0 invalid mem access 'scalar'",
 	.result_unpriv = REJECT,
 	.result = ACCEPT
 },
···
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
 		BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'inv'",
+	.errstr_unpriv = "R0 invalid mem access 'scalar'",
 	.result_unpriv = REJECT,
 	.result = ACCEPT
 },
+22 -3
tools/testing/selftests/bpf/verifier/calls.c
···
 	},
 },
 {
+	"calls: trigger reg2btf_ids[reg->type] for reg->type > __BPF_REG_TYPE_MAX",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
+	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = REJECT,
+	.errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point",
+	.fixup_kfunc_btf_id = {
+		{ "bpf_kfunc_call_test_acquire", 3 },
+		{ "bpf_kfunc_call_test_release", 5 },
+	},
+},
+{
 	"calls: basic sanity",
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
···
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.result = REJECT,
-	.errstr = "R0 invalid mem access 'inv'",
+	.errstr = "R0 invalid mem access 'scalar'",
 },
 {
 	"calls: multiple ret types in subprog 2",
···
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
-	.errstr = "R6 invalid mem access 'inv'",
+	.errstr = "R6 invalid mem access 'scalar'",
 	.prog_type = BPF_PROG_TYPE_XDP,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
···
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_hash_8b = { 12, 22 },
 	.result = REJECT,
-	.errstr = "R0 invalid mem access 'inv'",
+	.errstr = "R0 invalid mem access 'scalar'",
 },
 {
 	"calls: pkt_ptr spill into caller stack",
+2 -2
tools/testing/selftests/bpf/verifier/ctx.c
···
 	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
 	.expected_attach_type = BPF_CGROUP_UDP6_SENDMSG,
 	.result = REJECT,
-	.errstr = "R1 type=inv expected=ctx",
+	.errstr = "R1 type=scalar expected=ctx",
 },
 {
 	"pass ctx or null check, 4: ctx - const",
···
 	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
 	.expected_attach_type = BPF_CGROUP_INET4_POST_BIND,
 	.result = REJECT,
-	.errstr = "R1 type=inv expected=ctx",
+	.errstr = "R1 type=scalar expected=ctx",
 },
+1 -1
tools/testing/selftests/bpf/verifier/direct_packet_access.c
···
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "R2 invalid mem access 'inv'",
+	.errstr = "R2 invalid mem access 'scalar'",
 	.result = REJECT,
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+3 -3
tools/testing/selftests/bpf/verifier/helper_access_var_len.c
···
 	BPF_EMIT_CALL(BPF_FUNC_csum_diff),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "R1 type=inv expected=fp",
+	.errstr = "R1 type=scalar expected=fp",
 	.result = REJECT,
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 },
···
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "R1 type=inv expected=fp",
+	.errstr = "R1 type=scalar expected=fp",
 	.result = REJECT,
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
 },
···
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "R1 type=inv expected=fp",
+	.errstr = "R1 type=scalar expected=fp",
 	.result = REJECT,
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
 },
+8 -8
tools/testing/selftests/bpf/verifier/jmp32.c
··· 286 286 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 287 287 BPF_EXIT_INSN(), 288 288 }, 289 - .errstr_unpriv = "R0 invalid mem access 'inv'", 289 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 290 290 .result_unpriv = REJECT, 291 291 .result = ACCEPT, 292 292 .retval = 2, ··· 356 356 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 357 357 BPF_EXIT_INSN(), 358 358 }, 359 - .errstr_unpriv = "R0 invalid mem access 'inv'", 359 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 360 360 .result_unpriv = REJECT, 361 361 .result = ACCEPT, 362 362 .retval = 2, ··· 426 426 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 427 427 BPF_EXIT_INSN(), 428 428 }, 429 - .errstr_unpriv = "R0 invalid mem access 'inv'", 429 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 430 430 .result_unpriv = REJECT, 431 431 .result = ACCEPT, 432 432 .retval = 2, ··· 496 496 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 497 497 BPF_EXIT_INSN(), 498 498 }, 499 - .errstr_unpriv = "R0 invalid mem access 'inv'", 499 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 500 500 .result_unpriv = REJECT, 501 501 .result = ACCEPT, 502 502 .retval = 2, ··· 566 566 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 567 567 BPF_EXIT_INSN(), 568 568 }, 569 - .errstr_unpriv = "R0 invalid mem access 'inv'", 569 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 570 570 .result_unpriv = REJECT, 571 571 .result = ACCEPT, 572 572 .retval = 2, ··· 636 636 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 637 637 BPF_EXIT_INSN(), 638 638 }, 639 - .errstr_unpriv = "R0 invalid mem access 'inv'", 639 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 640 640 .result_unpriv = REJECT, 641 641 .result = ACCEPT, 642 642 .retval = 2, ··· 706 706 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 707 707 BPF_EXIT_INSN(), 708 708 }, 709 - .errstr_unpriv = "R0 invalid mem access 'inv'", 709 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 710 710 .result_unpriv = REJECT, 711 711 .result = ACCEPT, 712 712 .retval = 2, ··· 
776 776 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), 777 777 BPF_EXIT_INSN(), 778 778 }, 779 - .errstr_unpriv = "R0 invalid mem access 'inv'", 779 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 780 780 .result_unpriv = REJECT, 781 781 .result = ACCEPT, 782 782 .retval = 2,
+2 -2
tools/testing/selftests/bpf/verifier/precise.c
··· 27 27 BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1), 28 28 BPF_EXIT_INSN(), 29 29 30 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=inv(umin=1, umax=8) */ 30 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */ 31 31 BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP), 32 32 BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), 33 33 BPF_MOV64_IMM(BPF_REG_3, 0), ··· 87 87 BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1), 88 88 BPF_EXIT_INSN(), 89 89 90 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=inv(umin=1, umax=8) */ 90 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */ 91 91 BPF_MOV64_REG(BPF_REG_1, BPF_REG_FP), 92 92 BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), 93 93 BPF_MOV64_IMM(BPF_REG_3, 0),
+2 -2
tools/testing/selftests/bpf/verifier/raw_stack.c
··· 132 132 BPF_EXIT_INSN(), 133 133 }, 134 134 .result = REJECT, 135 - .errstr = "R0 invalid mem access 'inv'", 135 + .errstr = "R0 invalid mem access 'scalar'", 136 136 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 137 137 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 138 138 }, ··· 162 162 BPF_EXIT_INSN(), 163 163 }, 164 164 .result = REJECT, 165 - .errstr = "R3 invalid mem access 'inv'", 165 + .errstr = "R3 invalid mem access 'scalar'", 166 166 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 167 167 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 168 168 },
+3 -3
tools/testing/selftests/bpf/verifier/ref_tracking.c
··· 162 162 BPF_EXIT_INSN(), 163 163 }, 164 164 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 165 - .errstr = "type=inv expected=sock", 165 + .errstr = "type=scalar expected=sock", 166 166 .result = REJECT, 167 167 }, 168 168 { ··· 178 178 BPF_EXIT_INSN(), 179 179 }, 180 180 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 181 - .errstr = "type=inv expected=sock", 181 + .errstr = "type=scalar expected=sock", 182 182 .result = REJECT, 183 183 }, 184 184 { ··· 274 274 BPF_EXIT_INSN(), 275 275 }, 276 276 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 277 - .errstr = "type=inv expected=sock", 277 + .errstr = "type=scalar expected=sock", 278 278 .result = REJECT, 279 279 }, 280 280 {
+1 -1
tools/testing/selftests/bpf/verifier/search_pruning.c
··· 104 104 BPF_EXIT_INSN(), 105 105 }, 106 106 .fixup_map_hash_8b = { 3 }, 107 - .errstr = "R6 invalid mem access 'inv'", 107 + .errstr = "R6 invalid mem access 'scalar'", 108 108 .result = REJECT, 109 109 .prog_type = BPF_PROG_TYPE_TRACEPOINT, 110 110 },
+1 -1
tools/testing/selftests/bpf/verifier/sock.c
··· 502 502 .fixup_sk_storage_map = { 11 }, 503 503 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 504 504 .result = REJECT, 505 - .errstr = "R3 type=inv expected=fp", 505 + .errstr = "R3 type=scalar expected=fp", 506 506 }, 507 507 { 508 508 "sk_storage_get(map, skb->sk, &stack_value, 1): stack_value",
+19 -19
tools/testing/selftests/bpf/verifier/spill_fill.c
··· 102 102 BPF_EXIT_INSN(), 103 103 }, 104 104 .errstr_unpriv = "attempt to corrupt spilled", 105 - .errstr = "R0 invalid mem access 'inv", 105 + .errstr = "R0 invalid mem access 'scalar'", 106 106 .result = REJECT, 107 107 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 108 108 }, ··· 147 147 BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8), 148 148 /* r0 = r2 */ 149 149 BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 150 - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv20 */ 150 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */ 151 151 BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 152 - /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=inv20 */ 152 + /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */ 153 153 BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 154 - /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=inv20 */ 154 + /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=20 */ 155 155 BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 156 156 BPF_MOV64_IMM(BPF_REG_0, 0), 157 157 BPF_EXIT_INSN(), ··· 190 190 BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), 191 191 /* r0 = r2 */ 192 192 BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 193 - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 193 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */ 194 194 BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 195 - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 195 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */ 196 196 BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 197 - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 197 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */ 198 198 BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 199 199 BPF_MOV64_IMM(BPF_REG_0, 0), 200 200 BPF_EXIT_INSN(), ··· 222 222 BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), 223 223 /* r0 = r2 */ 224 224 BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 225 - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 225 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */ 226 226 BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 227 - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 227 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */ 228 228 BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 229 - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 229 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */ 230 230 BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 231 231 BPF_MOV64_IMM(BPF_REG_0, 0), 232 232 BPF_EXIT_INSN(), ··· 250 250 BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -6), 251 251 /* r0 = r2 */ 252 252 BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 253 - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 253 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */ 254 254 BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 255 - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 255 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */ 256 256 BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 257 - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 257 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */ 258 258 BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 259 259 BPF_MOV64_IMM(BPF_REG_0, 0), 260 260 BPF_EXIT_INSN(), ··· 280 280 BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4), 281 281 /* r0 = r2 */ 282 282 BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 283 - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=U32_MAX */ 283 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=U32_MAX */ 284 284 BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 285 - /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */ 285 + /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */ 286 286 BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 287 - /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */ 287 + /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */ 288 288 BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 289 289 BPF_MOV64_IMM(BPF_REG_0, 0), 290 290 BPF_EXIT_INSN(), ··· 305 305 BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2), 306 306 BPF_MOV64_IMM(BPF_REG_0, 0), 307 307 BPF_EXIT_INSN(), 308 - /* *(u32 *)(r10 -8) = r4 R4=inv,umax=40 */ 308 + /* *(u32 *)(r10 -8) = r4 R4=umax=40 */ 309 309 BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), 310 310 /* r4 = (*u32 *)(r10 - 8) */ 311 311 BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8), 312 - /* r2 += r4 R2=pkt R4=inv,umax=40 */ 312 + /* r2 += r4 R2=pkt R4=umax=40 */ 313 313 BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_4), 314 - /* r0 = r2 R2=pkt,umax=40 R4=inv,umax=40 */ 314 + /* r0 = r2 R2=pkt,umax=40 R4=umax=40 */ 315 315 BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 316 316 /* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */ 317 317 BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20),
+2 -2
tools/testing/selftests/bpf/verifier/unpriv.c
··· 214 214 BPF_EXIT_INSN(), 215 215 }, 216 216 .result = REJECT, 217 - .errstr = "R1 type=inv expected=ctx", 217 + .errstr = "R1 type=scalar expected=ctx", 218 218 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 219 219 }, 220 220 { ··· 420 420 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0), 421 421 BPF_EXIT_INSN(), 422 422 }, 423 - .errstr_unpriv = "R7 invalid mem access 'inv'", 423 + .errstr_unpriv = "R7 invalid mem access 'scalar'", 424 424 .result_unpriv = REJECT, 425 425 .result = ACCEPT, 426 426 .retval = 0,
+2 -2
tools/testing/selftests/bpf/verifier/value_illegal_alu.c
··· 64 64 }, 65 65 .fixup_map_hash_48b = { 3 }, 66 66 .errstr_unpriv = "R0 pointer arithmetic prohibited", 67 - .errstr = "invalid mem access 'inv'", 67 + .errstr = "invalid mem access 'scalar'", 68 68 .result = REJECT, 69 69 .result_unpriv = REJECT, 70 70 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, ··· 89 89 }, 90 90 .fixup_map_hash_48b = { 3 }, 91 91 .errstr_unpriv = "leaking pointer from stack off -8", 92 - .errstr = "R0 invalid mem access 'inv'", 92 + .errstr = "R0 invalid mem access 'scalar'", 93 93 .result = REJECT, 94 94 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 95 95 },
+2 -2
tools/testing/selftests/bpf/verifier/value_ptr_arith.c
··· 397 397 .fixup_map_array_48b = { 1 }, 398 398 .result = ACCEPT, 399 399 .result_unpriv = REJECT, 400 - .errstr_unpriv = "R0 invalid mem access 'inv'", 400 + .errstr_unpriv = "R0 invalid mem access 'scalar'", 401 401 .retval = 0, 402 402 }, 403 403 { ··· 1074 1074 }, 1075 1075 .fixup_map_array_48b = { 3 }, 1076 1076 .result = REJECT, 1077 - .errstr = "R0 invalid mem access 'inv'", 1077 + .errstr = "R0 invalid mem access 'scalar'", 1078 1078 .errstr_unpriv = "R0 pointer -= pointer prohibited", 1079 1079 }, 1080 1080 {
+1 -1
tools/testing/selftests/bpf/verifier/var_off.c
··· 131 131 * write might have overwritten the spilled pointer (i.e. we lose track 132 132 * of the spilled register when we analyze the write). 133 133 */ 134 - .errstr = "R2 invalid mem access 'inv'", 134 + .errstr = "R2 invalid mem access 'scalar'", 135 135 .result = REJECT, 136 136 }, 137 137 {